From report at bugs.python.org Sat Feb 1 05:48:01 2020 From: report at bugs.python.org (Marco Sulla) Date: Sat, 01 Feb 2020 10:48:01 +0000 Subject: [New-bugs-announce] [issue39516] ++ does not throw a SyntaxError Message-ID: <1580554081.01.0.712736172146.issue39516@roundup.psfhosted.org> New submission from Marco Sulla :

Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) [GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 ++ 2
3

This is probably because the interpreter reads it as: 1 + +2

1. ++ could be an operator in the future. Probably not. Probably never. But you never know.
2. A space between a unary operator and its operand should not be allowed.
3. The first expression is clearly unreadable and hard to understand, so completely unpythonic.

---------- components: Interpreter Core messages: 361159 nosy: Marco Sulla priority: normal severity: normal status: open title: ++ does not throw a SyntaxError type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 12:02:12 2020 From: report at bugs.python.org (Tomas Ravinskas) Date: Sat, 01 Feb 2020 17:02:12 +0000 Subject: [New-bugs-announce] [issue39517] runpy calls open_code with Path object Message-ID: <1580576532.94.0.922991847859.issue39517@roundup.psfhosted.org> New submission from Tomas Ravinskas :

runpy accepts Path-like objects, but open_code seems to only accept strings, so calling open_code with a Path object throws a TypeError. I think runpy should call str() on all paths passed to open_code. The relevant line is 232 in runpy.py, in the function _get_code_from_file.
---------- components: Library (Lib) messages: 361176 nosy: Tomas Ravinskas priority: normal severity: normal status: open title: runpy calls open_code with Path object type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 12:26:42 2020 From: report at bugs.python.org (Vitaly Zdanevich) Date: Sat, 01 Feb 2020 17:26:42 +0000 Subject: [New-bugs-announce] [issue39518] Dark theme Message-ID: <1580578002.89.0.0285133557366.issue39518@roundup.psfhosted.org> New submission from Vitaly Zdanevich : Please save our eyes. And batteries. Do not ignore this property of useragent https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme ---------- assignee: docs at python components: Documentation messages: 361177 nosy: Vitaly Zdanevich, docs at python priority: normal severity: normal status: open title: Dark theme _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 14:53:10 2020 From: report at bugs.python.org (brasko) Date: Sat, 01 Feb 2020 19:53:10 +0000 Subject: [New-bugs-announce] [issue39519] Can't upgrade pip version 19.3.1 to 20.0.2 on Python 3.7.4 Message-ID: <1580586790.78.0.0495228970738.issue39519@roundup.psfhosted.org> New submission from brasko : Hi when I want to upgrade pip version I get: Usage: C:\Users\User\AppData\Local\Programs\Python\Python37\python.exe -m pip install [options] [package-index-options] ... C:\Users\User\AppData\Local\Programs\Python\Python37\python.exe -m pip install [options] -r [package-index-options] ... C:\Users\User\AppData\Local\Programs\Python\Python37\python.exe -m pip install [options] [-e] ... C:\Users\User\AppData\Local\Programs\Python\Python37\python.exe -m pip install [options] [-e] ... C:\Users\User\AppData\Local\Programs\Python\Python37\python.exe -m pip install [options] ... 
no such option: -u ---------- components: Windows messages: 361189 nosy: brasko, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Can't upgrade pip version 19.3.1 to 20.0.2 on Python 3.7.4 type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 16:10:07 2020 From: report at bugs.python.org (Batuhan) Date: Sat, 01 Feb 2020 21:10:07 +0000 Subject: [New-bugs-announce] [issue39520] AST Unparser can't unparse ext slices correctly Message-ID: <1580591407.68.0.673076668701.issue39520@roundup.psfhosted.org> New submission from Batuhan : (this issue has already a PR for ast.unparse) >>> from __future__ import annotations >>> import ast >>> x: Tuple[1:2,] = 3 >>> __annotations__["x"] 'Tuple[1:2]' >>> ast.dump(ast.parse("Tuple[1:2,]")) "Module(body=[Expr(value=Subscript(value=Name(id='Tuple', ctx=Load()), slice=ExtSlice(dims=[Slice(lower=Constant(value=1, kind=None), upper=Constant(value=2, kind=None), step=None)]), ctx=Load()))], type_ignores=[])" >>> ast.dump(ast.parse("Tuple[1:2]")) "Module(body=[Expr(value=Subscript(value=Name(id='Tuple', ctx=Load()), slice=Slice(lower=Constant(value=1, kind=None), upper=Constant(value=2, kind=None), step=None), ctx=Load()))], type_ignores=[])" ---------- components: Interpreter Core messages: 361193 nosy: BTaskaya priority: normal severity: normal status: open title: AST Unparser can't unparse ext slices correctly type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 16:42:09 2020 From: report at bugs.python.org (Stefan Pochmann) Date: Sat, 01 Feb 2020 21:42:09 +0000 Subject: [New-bugs-announce] [issue39521] reversed(mylist) much slower on Python 3.8.1 32-bit for Windows Message-ID: 
<1580593329.06.0.491648028148.issue39521@roundup.psfhosted.org> New submission from Stefan Pochmann : Somehow `reversed` became much slower than `iter` on lists: List with 1,000 elements: > python -m timeit -s "a = list(range(1000))" "list(iter(a))" 50000 loops, best of 5: 5.73 usec per loop > python -m timeit -s "a = list(range(1000))" "list(reversed(a))" 20000 loops, best of 5: 14.2 usec per loop List with 1,000,000 elements: > python -m timeit -s "a = list(range(250)) * 4000" "list(iter(a))" 50 loops, best of 5: 7.08 msec per loop > python -m timeit -s "a = list(range(250)) * 4000" "list(reversed(a))" 20 loops, best of 5: 15.5 msec per loop On another machine I tried ten different Python versions and found out it's only version 3.8.1 and only the 32-bit version: 32-bit 64-bit CPython iter reversed iter reversed 3.5.4 19.8 19.9 22.4 22.7 3.6.8 19.8 19.9 22.3 22.6 3.7.6 19.9 19.9 22.3 22.5 3.8.1 19.8 24.9 22.4 22.6 Another time with 3.8.0 instead of 3.8.1: 32-bit 64-bit CPython iter reversed iter reversed 3.5.4 19.5 19.6 21.9 22.2 3.6.8 19.5 19.7 21.8 22.1 3.7.6 19.5 19.6 21.7 22.0 3.8.0 19.4 24.5 21.7 22.1 I used the "Stable Releases" "executable installer"s from here: https://www.python.org/downloads/windows/ More details here: https://stackoverflow.com/q/60005302/12671057 ---------- components: Build messages: 361195 nosy: Stefan Pochmann priority: normal severity: normal status: open title: reversed(mylist) much slower on Python 3.8.1 32-bit for Windows type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 18:35:59 2020 From: report at bugs.python.org (Batuhan) Date: Sat, 01 Feb 2020 23:35:59 +0000 Subject: [New-bugs-announce] [issue39522] AST Unparser with unicode kinded constants Message-ID: <1580600159.52.0.798208361602.issue39522@roundup.psfhosted.org> New submission from Batuhan : >>> from __future__ import annotations >>> import ast 
>>> x: u"a" = 3 >>> __annotations__["x"] "'a'" >>> ast.dump(ast.parse(__annotations__["x"])) == ast.dump(ast.parse('u"a"')) False I guess before touching constant part, we should wait for GH-17426 (afterward I can prepare a patch) ---------- components: Interpreter Core messages: 361199 nosy: BTaskaya priority: normal severity: normal status: open title: AST Unparser with unicode kinded constants type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 21:05:21 2020 From: report at bugs.python.org (Alex Henrie) Date: Sun, 02 Feb 2020 02:05:21 +0000 Subject: [New-bugs-announce] [issue39523] Unnecessary variable assignment and initial loop check in pysqlite_cursor_executescript Message-ID: <1580609121.79.0.36186029109.issue39523@roundup.psfhosted.org> New submission from Alex Henrie : pysqlite_cursor_executescript currently has the following while loop: /* execute statement, and ignore results of SELECT statements */ rc = SQLITE_ROW; while (rc == SQLITE_ROW) { rc = pysqlite_step(statement, self->connection); if (PyErr_Occurred()) { (void)sqlite3_finalize(statement); goto error; } } This can and should be rewritten as a do-while loop to avoid having to initialize rc to SQLITE_ROW and then check its value knowing that the value check will succeed. 
---------- components: Library (Lib) messages: 361200 nosy: alex.henrie priority: normal severity: normal status: open title: Unnecessary variable assignment and initial loop check in pysqlite_cursor_executescript type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 1 23:09:48 2020 From: report at bugs.python.org (mpheath) Date: Sun, 02 Feb 2020 04:09:48 +0000 Subject: [New-bugs-announce] [issue39524] Escape sequences in doc string of ast._pad_whitespace Message-ID: <1580616588.9.0.5083341091.issue39524@roundup.psfhosted.org> New submission from mpheath : In the ast module, a function named _pad_whitespace has a doc string with escape sequences of \f and \t. The current doc string from Lib/ast.py:305 is: """Replace all chars except '\f\t' in a line with spaces.""" Example of doc string output in a REPL: Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import ast >>> import inspect >>> inspect.getdoc(ast._pad_whitespace) "Replace all chars except '\x0c ' in a line with spaces." >>> The \x0c is the formfeed and the ' ' (5 spaces) was the tab. It is my understanding that the output should be: "Replace all chars except '\f\t' in a line with spaces." I would expect the source to be: """Replace all chars except '\\f\\t' in a line with spaces.""" or perhaps a raw string: r"""Replace all chars except '\f\t' in a line with spaces.""" The current Lib/ast.py:305 is Python 3.9.0 alpha 3 though the issue is also in Python 3.8.0 and 3.8.1 with 3.8/Lib/ast.py:227 . Python 3.7.4 3.7/Lib/ast.py does not have the function _pad_whitespace as it appears major code changes occurred in the ast module with Python 3.8.0. 
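The effect the reporter describes is easy to demonstrate with two toy functions; this is a minimal sketch, not the actual ast.py code:

```python
def pad_plain():
    """Replace all chars except '\f\t' in a line with spaces."""

def pad_raw():
    r"""Replace all chars except '\f\t' in a line with spaces."""

# In the plain docstring, \f and \t were interpreted as control characters:
assert '\x0c\t' in pad_plain.__doc__
# The raw docstring keeps the two-character escape sequences literally:
assert r'\f\t' in pad_raw.__doc__
```

Either the raw-string form or doubled backslashes would make the docstring display as intended.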
---------- messages: 361203 nosy: mpheath priority: normal severity: normal status: open title: Escape sequences in doc string of ast._pad_whitespace type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 00:10:48 2020 From: report at bugs.python.org (David Hwang) Date: Sun, 02 Feb 2020 05:10:48 +0000 Subject: [New-bugs-announce] [issue39525] math.remainder() give wrong answer on large integer Message-ID: <1580620248.23.0.0499654222962.issue39525@roundup.psfhosted.org> New submission from David Hwang :

These two numbers are off by 1, and so should give different answers:

>>> math.remainder(12345678901234567890,3)
1.0
>>> math.remainder(12345678901234567891,3)
1.0

---------- components: Library (Lib) messages: 361211 nosy: David Hwang priority: normal severity: normal status: open title: math.remainder() give wrong answer on large integer type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 00:22:38 2020 From: report at bugs.python.org (mlwtc) Date: Sun, 02 Feb 2020 05:22:38 +0000 Subject: [New-bugs-announce] [issue39526] print(text1.get(1.2,1.5)) Message-ID: <1580620958.57.0.556548292097.issue39526@roundup.psfhosted.org> New submission from mlwtc :

>>> from tkinter import *
>>> root = Tk()
>>> text1 = Text(root,width=30,height=3)
>>> text1.insert(INSERT,'abcdefghijklmnopqrstuvwxyz123456789123456789')
>>> print(text1.get(1.0,1.30))
abc
>>> print(text1.get(1.0,1.31))
abcdefghijklmnopqrstuvwxyz12345
>>> print(text1.get(1.0,1.20))
ab
>>> print(text1.get(1.0,1.21))
abcdefghijklmnopqrstu
>>> print(text1.get(1.0,1.10))
a
>>> print(text1.get(1.0,1.11))
abcdefghijk
>>> print(text1.get(1.0,1.9))
abcdefghi

Is there a bug here?
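This looks like float truncation rather than a Text widget bug: Tk indices have the form "line.column", and when they are passed as float literals the trailing zero is lost before Tk ever sees the index. A sketch of the likely explanation (no display needed):

```python
# Tk text indices are "line.column" strings. Passing them as floats
# drops trailing zeros, so the widget sees a different column:
assert str(1.30) == '1.3'    # get(1.0, 1.30) really asks for column 3
assert str(1.20) == '1.2'    # likewise, column 2 instead of 20
assert str(1.21) == '1.21'   # which is why 1.21 "works" but 1.20 does not

# The fix is to pass string indices, e.g.: text1.get('1.0', '1.30')
```

So `get('1.0', '1.30')` should return the first thirty characters as expected.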
---------- components: Build messages: 361212 nosy: mlwtc priority: normal severity: normal status: open title: print(text1.get(1.2,1.5)) type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 00:29:55 2020 From: report at bugs.python.org (hai shi) Date: Sun, 02 Feb 2020 05:29:55 +0000 Subject: [New-bugs-announce] [issue39527] Update doc of argparse.rst Message-ID: <1580621395.73.0.298845669728.issue39527@roundup.psfhosted.org> New submission from hai shi : 1. examples don't need import argparse much times(IMHO, it should be a default behavior); 2. argparse have no doctest, it's not a good behavior; ---------- assignee: docs at python components: Documentation messages: 361213 nosy: docs at python, mdk, rhettinger, shihai1991 priority: normal severity: normal status: open title: Update doc of argparse.rst type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 00:56:30 2020 From: report at bugs.python.org (Dan Snider) Date: Sun, 02 Feb 2020 05:56:30 +0000 Subject: [New-bugs-announce] [issue39528] add " Message-ID: <1580622990.14.0.618721528246.issue39528@roundup.psfhosted.org> Change by Dan Snider : ---------- nosy: bup priority: normal severity: normal status: open title: add " _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 06:11:37 2020 From: report at bugs.python.org (Andrew Svetlov) Date: Sun, 02 Feb 2020 11:11:37 +0000 Subject: [New-bugs-announce] [issue39529] Deprecate get_event_loop() Message-ID: <1580641897.21.0.210075972969.issue39529@roundup.psfhosted.org> New submission from Andrew Svetlov : Yuri proposed it for Python 3.8 but at that time the change was premature. 
Now we can reconsider it for 3.9. The problem is that asyncio.get_event_loop() not only returns a loop but also creates it on demand if the thread is the main thread and the loop doesn't exist. It leads to weird errors when get_event_loop() is called at import time and asyncio.run() is used for asyncio code execution. get_running_loop() is a much better alternative when used *inside* a running loop; run() should be preferred for calling async code at the top level. The low-level new_event_loop()/loop.run_until_complete() are still present to run async code if the top-level run() is not suitable for any reason. asyncio.run() was introduced in 3.7; a deprecation of get_event_loop() in 3.8 would have complicated support of 3.5/3.6 by third-party libraries. 3.5 has now reached EOL, and 3.6 is in security-fix mode and approaching EOL. Most people have migrated to newer versions already if they care. The maintenance burden of the introduced deprecation should be pretty low. ---------- messages: 361229 nosy: asvetlov priority: normal severity: normal status: open title: Deprecate get_event_loop() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 09:17:34 2020 From: report at bugs.python.org (Mark Dickinson) Date: Sun, 02 Feb 2020 14:17:34 +0000 Subject: [New-bugs-announce] [issue39530] Documentation about comparisons between numeric types is misleading Message-ID: <1580653054.78.0.127921454413.issue39530@roundup.psfhosted.org> New submission from Mark Dickinson :

The documentation[1] for comparisons between mixed types says:

> [...] when a binary arithmetic operator has operands of different
> numeric types, the operand with the "narrower" type is widened to
> that of the other, where integer is narrower than floating point,
> which is narrower than complex. Comparisons between numbers of
> mixed type use the same rule.
That "use the same rule" part of the last sentence is misleading: it suggests that (for example) when an int is compared with a float, the int is first converted to a float, and then the two floats are compared. But that's not what actually happens: instead, the exact values of the int and float are compared. (And it's essential that equality comparisons happen that way, else equality becomes intransitive and dictionaries with numeric keys get very confused as a result.) I suggest dropping the last sentence and adding a new paragraph about comparisons between numbers of mixed type. [1] https://github.com/python/cpython/blob/master/Doc/library/stdtypes.rst#numeric-types-----classint-classfloat-classcomplex ---------- assignee: docs at python components: Documentation messages: 361234 nosy: docs at python, mark.dickinson priority: normal severity: normal status: open title: Documentation about comparisons between numeric types is misleading versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 14:19:26 2020 From: report at bugs.python.org (EMO) Date: Sun, 02 Feb 2020 19:19:26 +0000 Subject: [New-bugs-announce] [issue39531] Memory Leak in multiprocessing.Pool() Message-ID: <1580671166.84.0.839644381298.issue39531@roundup.psfhosted.org> New submission from EMO : After even deleting all variables it still reserves memory of around a GB. 
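Without the attached script the leak is hard to reproduce, but the usual remedy is to make sure the worker processes actually exit, since memory held inside long-lived workers stays reserved (that cause is an assumption here; `square` and the parameter values are illustrative):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == '__main__':
    # The context manager terminates the workers on exit, and
    # maxtasksperchild periodically replaces workers, returning their
    # memory to the OS instead of keeping it reserved in the pool.
    with mp.Pool(processes=2, maxtasksperchild=50) as pool:
        results = pool.map(square, range(100))
```

If memory is still retained after the pool exits, the remaining usage belongs to the parent process itself, not the pool.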
---------- components: Library (Lib) files: User's.py messages: 361253 nosy: EMO priority: normal severity: normal status: open title: Memory Leak in multiprocessing.Pool() type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48877/User's.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 18:45:17 2020 From: report at bugs.python.org (Isaac Muse) Date: Sun, 02 Feb 2020 23:45:17 +0000 Subject: [New-bugs-announce] [issue39532] Pathlib: handling of `.` in paths and patterns creates unmatchable paths Message-ID: <1580687117.16.0.54415875836.issue39532@roundup.psfhosted.org> New submission from Isaac Muse : It appears that the pathlib library strips out `.` in glob paths when they represent a directory. This is kind of a naive approach in my opinion, but I understand what was trying to be achieved. When a path is given to pathlib, it normalizes it by stripping out non-essential things like `.` that represent directories, and strips out trailing `/` to give a path without unnecessary parts (the stripping of trailing `/` is another discussion). But there is a small twist, when given an empty string or just a dot, you need to have something as the directory, so it allows a `.`. So, it appears the idea was since this normalization is applied to paths, why not apply it to the glob patterns as well, so it does. But the special logic that ensures you don't have an empty string to match does not get applied to the glob patterns. This creates unmatchable paths: >>> import pathlib >>> str(pathlib.Path('.')) '.' >>> pathlib.Path('.').match('.') Traceback (most recent call last): File "", line 1, in File "C:\Python36\lib\pathlib.py", line 939, in match raise ValueError("empty pattern") ValueError: empty pattern I wonder if it is appropriate to apply this `.` stripping to glob patterns. 
Personally, I think the glob pattern, except for slash normalization, should remain unchanged, but if it is to be normalized above and beyond this, at the very least should use the exact same logic that is applied to the paths. ---------- components: Library (Lib) messages: 361259 nosy: Isaac Muse priority: normal severity: normal status: open title: Pathlib: handling of `.` in paths and patterns creates unmatchable paths type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 2 23:26:02 2020 From: report at bugs.python.org (ntninja) Date: Mon, 03 Feb 2020 04:26:02 +0000 Subject: [New-bugs-announce] [issue39533] Use `statx(2)` system call on Linux for extended `os.stat` information Message-ID: <1580703962.85.0.294314145935.issue39533@roundup.psfhosted.org> New submission from ntninja : Background: For a long time several Linux filesystems have been tracking two extra bits of information. The file attributes bits[1] and the file creation time (aka crtime aka btime aka birthtime)[2]. Before Linux 4.11 accessing these required secret knowledge (ioctl numbers) or access to unstable interfaces (debugfs). However since that version the statx(2) system call[3] has finally been added (it has a long history), which exposes these two fields adds (struct) space for potentially more. Since CPython already exposes `st_birthtime` on FreeBSD and friends, I think it would be fair to also expose this field on Linux. As the timestamp value is only available on some file systems and configurations it is not guaranteed that the system call will return a value for btime at all. I suppose the field should be set to `None` in that case. 
In my opinion it should also become a regular field (available on all platforms) since, with this addition, we now have a suitable value to return on every major platform CPython targets: `stx_btime` on Linux, `st_birthtime` on macOS/FreeBSD and `st_ctime` on Windows. `stx_attributes` could be exposed as a new `st_attributes` flag specific to Linux as there is no equivalent on other platforms to my knowledge (Windows' `st_file_attributes` is similar in some aspects but has a completely different format and content). There is a Python script I created that calls statx(2) using ctypes here: https://github.com/ipfs/py-datastore/blob/e566d40a8ca81d8628147e255fe7830b5f928a43/datastore/filesystem/util/statx.py It may be useful as a reference when implementing this in C. [1]: https://man.cx/chattr(1) [2]: https://unix.stackexchange.com/a/50184/47938 [3]: https://man.cx/statx(2) ---------- messages: 361265 nosy: ntninja priority: normal severity: normal status: open title: Use `statx(2)` system call on Linux for extended `os.stat` information type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 03:21:12 2020 From: report at bugs.python.org (Julien Palard) Date: Mon, 03 Feb 2020 08:21:12 +0000 Subject: [New-bugs-announce] [issue39534] Clarify tutorial on return statement in finally clause. Message-ID: <1580718072.8.0.280721133758.issue39534@roundup.psfhosted.org> New submission from Julien Palard :

According to [1][2] the documentation about the return statement in a finally clause is misleading in [3]. It currently states:

> If a finally clause includes a return statement, the finally clause's
> return statement will execute before, and instead of, the return
> statement in a try clause.

I would prefer speaking about returned values instead of statements executed; I think it would clarify the point.
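The behaviour being documented is easy to show in terms of values: the value returned from the finally clause is the one the caller receives, and it even replaces an in-flight exception:

```python
def which_return():
    try:
        return 'try'
    finally:
        return 'finally'    # evaluated last; its value replaces the try's

assert which_return() == 'finally'

def swallows_exception():
    try:
        raise ValueError('lost')
    finally:
        return 'finally'    # also discards the in-flight exception

assert swallows_exception() == 'finally'
```

Phrasing the tutorial around "which value the caller receives" covers both cases more clearly than talking about which statement executes.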
[1]: https://mail.python.org/archives/list/docs at python.org/message/LBMO47JSDPKFKLYR25HAKD7A76D5IHWI/ [2]: https://stackoverflow.com/questions/59639733/python-docs-have-misleading-explanation-of-return-in-finally [3]: https://docs.python.org/3.7/tutorial/errors.html#defining-clean-up-actions ---------- assignee: mdk components: Documentation messages: 361269 nosy: mdk priority: normal severity: normal status: open title: Clarify tutorial on return statement in finally clause. type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 04:21:12 2020 From: report at bugs.python.org (Robert Pierce) Date: Mon, 03 Feb 2020 09:21:12 +0000 Subject: [New-bugs-announce] [issue39535] multiprocessing.Process file descriptor resource leak Message-ID: <1580721672.06.0.925204130251.issue39535@roundup.psfhosted.org> New submission from Robert Pierce : multiprocessing.Process opens a FIFO to the child. This FIFO is not documented the the Process class API and it's purpose is not clear from the documentation. It is a minor documentation bug that the class creates non-transparent resource utilization. The primary behavioral bug is that incorrect handling of this FIFO creates a resource leak, since the file descriptor is not closed on join(), or even when the parent Process object goes out of scope. The effect of this bug is that programs generating large numbers of Process objects will hit system resource limits of open file descriptors. 
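Since Python 3.7 the Process API has an explicit close() for releasing these resources once the child is done; whether that addresses the reporter's 3.6 case is a separate question, as close() does not exist there. A minimal sketch:

```python
import multiprocessing as mp

def work():
    pass

if __name__ == '__main__':
    p = mp.Process(target=work)
    p.start()
    p.join()
    code = p.exitcode   # read anything you need before closing
    p.close()           # releases the Process object's resources,
                        # including its sentinel file descriptor
```

After close(), attributes that need the underlying handle (such as p.sentinel) raise ValueError, which makes accidental reuse easy to spot.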
---------- assignee: docs at python components: Documentation, Library (Lib) files: proc_test.py messages: 361273 nosy: Robert Pierce, docs at python priority: normal severity: normal status: open title: multiprocessing.Process file descriptor resource leak type: resource usage versions: Python 3.6 Added file: https://bugs.python.org/file48878/proc_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 04:46:56 2020 From: report at bugs.python.org (Jairo Vadillo) Date: Mon, 03 Feb 2020 09:46:56 +0000 Subject: [New-bugs-announce] [issue39536] Datetime strftime: %Y exports years < 1000 with 3 digits instead of 4 on Linux Message-ID: <1580723216.73.0.692049460733.issue39536@roundup.psfhosted.org> New submission from Jairo Vadillo :

These two examples are pretty simple. On macOS, strftime %Y works as expected, producing 4 digits:

Python 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)] on darwin
> datetime.strftime(datetime.now().replace(year=100), "%Y-%m-%d")
'0100-02-03'

But on Linux...:

Python 3.7.6 (default, Jan 3 2020, 23:35:31) [GCC 8.3.0] on linux
> datetime.strftime(datetime.now().replace(year=100), "%Y-%m-%d")
'100-02-03'

This causes a lot of trouble when storing and then retrieving string dates from any string-based storage.
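The difference most likely comes from the platform strftime(3): datetime delegates to the C library, and glibc does not zero-pad %Y. A workaround that behaves the same on every platform:

```python
from datetime import date

d = date(100, 2, 3)
# isoformat() always zero-pads the year to four digits:
assert d.isoformat() == '0100-02-03'
# Or format the fields explicitly instead of relying on %Y:
assert f"{d.year:04d}-{d.month:02d}-{d.day:02d}" == '0100-02-03'
```

Both forms round-trip cleanly through string-based storage regardless of the OS.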
---------- components: Library (Lib) messages: 361274 nosy: Jairo Vadillo priority: normal severity: normal status: open title: Datetime strftime: %Y exports years < 1000 with 3 digits instead of 4 on Linux type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 05:06:34 2020 From: report at bugs.python.org (Mark Shannon) Date: Mon, 03 Feb 2020 10:06:34 +0000 Subject: [New-bugs-announce] [issue39537] Change line number table format Message-ID: <1580724394.63.0.487397350613.issue39537@roundup.psfhosted.org> New submission from Mark Shannon :

The current line number table format has two issues that need to be addressed.

1. There is no way to express that a bytecode does not have a line number. The `END_ASYNC_FOR` bytecode, bytecodes for cleaning up the variable used to store exceptions in exception handlers, and a few other cases, are all artificial and should have no line number.

2. It is inefficient to find a line number when tracing. Currently, whenever the line number changes, the line number table must be re-scanned from the start.

I propose to fix this by implementing a new line number table. Each instruction (currently a pair of bytes) would have a one-byte line-offset value. An offset of 0 indicates that the instruction has no line number. In addition to the offset table there would be a table of bytecode-offset, base-line pairs. Following the pairs is the instruction count. Adding the instruction count at the end means that the table is not just a table of start, line pairs, but also a table of (inclusive) start, line, (exclusive) end triples. This format makes it very easy to scan forwards and backwards. Because each entry covers up to 255 lines, the table is very small.
The line of the bytecode at `n*2` (instruction `n`) is calculated as:

    offset = lnotab[n]
    if offset == 0:
        line = -1  # artificial
    else:
        line_base = scan_table_to_find(n)
        line = offset + line_base

The new format fixes the two issues listed above. 1. Having no line number is expressed by a 0 in the offset table. 2. Since the offset-base table is made up of absolute values, not relative ones, it can be reliably scanned backwards. It is even possible to use a binary search, although a linear scan will be faster in almost all cases. The new format would be larger than the old one. However, the code object is composed not only of code, but of several tuples of names and constants as well, so increasing the size of the line number table has a small effect overall. ---------- components: Interpreter Core messages: 361277 nosy: Mark.Shannon priority: normal severity: normal status: open title: Change line number table format type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 05:19:15 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 03 Feb 2020 10:19:15 +0000 Subject: [New-bugs-announce] [issue39538] SystemError when set Element.attrib to non-dict Message-ID: <1580725155.12.0.53190069949.issue39538@roundup.psfhosted.org> New submission from Serhiy Storchaka :

The C implementation raises a SystemError after setting Element.attrib to a non-dict.
>>> from xml.etree import ElementTree as ET >>> e = ET.Element('a') >>> e.attrib = 1 >>> e.get('x') Traceback (most recent call last): File "", line 1, in SystemError: Objects/dictobject.c:1438: bad argument to internal function >>> e.items() Traceback (most recent call last): File "", line 1, in SystemError: Objects/dictobject.c:2732: bad argument to internal function >>> e.keys() Traceback (most recent call last): File "", line 1, in SystemError: Objects/dictobject.c:2712: bad argument to internal function The only valid non-dict value is None (although it is an implementation detail). >>> e.attrib = None >>> e.get('x') >>> e.items() [] >>> e.keys() [] The Python implementation raises an AttributeError (even for None). >>> import sys >>> sys.modules['_elementtree'] = None >>> from xml.etree import ElementTree as ET >>> e = ET.Element('a') >>> e.attrib = 1 >>> e.get('x') Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython3.8/Lib/xml/etree/ElementTree.py", line 358, in get return self.attrib.get(key, default) AttributeError: 'int' object has no attribute 'get' >>> e.items() Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython3.8/Lib/xml/etree/ElementTree.py", line 388, in items return self.attrib.items() AttributeError: 'int' object has no attribute 'items' >>> e.keys() Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython3.8/Lib/xml/etree/ElementTree.py", line 377, in keys return self.attrib.keys() AttributeError: 'int' object has no attribute 'keys' Other way to trigger an error is via __setstate__(). 
---------- components: Extension Modules, XML messages: 361279 nosy: eli.bendersky, scoder, serhiy.storchaka priority: normal severity: normal status: open title: SystemError when set Element.attrib to non-dict type: behavior versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 05:25:05 2020 From: report at bugs.python.org (Gilles Van Assche) Date: Mon, 03 Feb 2020 10:25:05 +0000 Subject: [New-bugs-announce] [issue39539] Improve Keccak support in hashlib including KangarooTwelve Message-ID: <1580725505.22.0.0334645206917.issue39539@roundup.psfhosted.org> New submission from Gilles Van Assche : Dear all, I think it would be nice if hashlib would include the support of Keccak with a chosen suffix, as well as the fast instance KangarooTwelve (K12). 1) Currently, hashlib's interface for Keccak only supports the 6 instances of FIPS 202 (SHA3-* and SHAKE*). However, the instances in NIST SP 800-185 (cSHAKE, KMAC, ?) use a different suffix and therefore cannot be instantiated on top of the aforementioned 6 instances. Instead, simply adding the suffix as an argument to the constructor would enable a user to instantiate plain Keccak (as in Ethereum) or the SP 800-185 instances. 2) K12 is an alternative hash function (and XOF) in the Keccak family. It is fast, parallelizable and it benefits directly from the cryptanalysis on the (unchanged) underlying permutation since 2008. This would be IMHO a valuable addition to hashlib. Among others, implementations of K12 can be found in the XKCP on GitHub. 
Kind regards, Gilles (co-designer of Keccak and K12) ---------- components: Library (Lib) messages: 361280 nosy: gvanas priority: normal severity: normal status: open title: Improve Keccak support in hashlib including KangarooTwelve type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 07:29:54 2020 From: report at bugs.python.org (Alexander McFarlane) Date: Mon, 03 Feb 2020 12:29:54 +0000 Subject: [New-bugs-announce] [issue39540] Logging docs don't address the creation of multiple loggers when a hierarchy is provided Message-ID: <1580732994.06.0.683672257184.issue39540@roundup.psfhosted.org> New submission from Alexander McFarlane : If `logger_name` is a hierarchy format (e.g. `logger_name = 'parent.child'`) and the logger name `'parent'` has not been created, the function call `logging.getLogger(logger_name)` will create all loggers in the hierarchy (in this instance two loggers, `'parent'` and `'parent.child'` will be created) This is not documented anywhere in the logging documentation. Suggest that this is detailed under `logging.getLogger` More info... 
https://stackoverflow.com/q/59990300/4013571 ---------- assignee: docs at python components: Documentation messages: 361287 nosy: docs at python, flipdazed priority: normal severity: normal status: open title: Logging docs don't address the creation of multiple loggers when a hierarchy is provided type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 08:19:57 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Feb 2020 13:19:57 +0000 Subject: [New-bugs-announce] [issue39541] distutils: Remove bdist_wininst (Windows .exe installers) in favor of bdist_wheel (.whl) Message-ID: <1580735997.04.0.997914254478.issue39541@roundup.psfhosted.org> New submission from STINNER Victor : The distutils bdist_wininst has been deprecated in Python 3.8 by bpo-37481 in favor of bdist_wheel. See the "Deprecate bdist_wininst" discussion: https://discuss.python.org/t/deprecate-bdist-wininst/1929 I now propose to remove it from the Python code base to ease Python maintenance. One of the projects which used .exe Windows installers was Pillow, but it hasn't published .exe installers since Pillow 6.2.0 (October 2019): * "No more deprecated bdist_wininst .exe installers #4029 [hugovk]" * https://github.com/python-pillow/Pillow/pull/4029 The attached PR removes bdist_wininst: use bdist_wheel instead.
---------- components: Library (Lib) messages: 361290 nosy: vstinner priority: normal severity: normal status: open title: distutils: Remove bdist_wininst (Windows .exe installers) in favor of bdist_wheel (.whl) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 10:42:01 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Feb 2020 15:42:01 +0000 Subject: [New-bugs-announce] [issue39542] Cleanup object.h header Message-ID: <1580744521.06.0.137468585162.issue39542@roundup.psfhosted.org> New submission from STINNER Victor : In bpo-39489, I removed the COUNT_ALLOCS special build. The object.h header can now be cleaned up to simplify the code. ---------- components: C API messages: 361305 nosy: vstinner priority: normal severity: normal status: open title: Cleanup object.h header versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 11:53:48 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Feb 2020 16:53:48 +0000 Subject: [New-bugs-announce] [issue39543] Py_DECREF(): use inlined _Py_Dealloc() Message-ID: <1580748828.27.0.856458139281.issue39543@roundup.psfhosted.org> New submission from STINNER Victor : In bpo-35059, I converted Py_DECREF() macro to a static inline function (commit 2aaf0c12041bcaadd7f2cc5a54450eefd7a6ff12). Then in bpo-35134, I moved _Py_Dealloc() macro to the newly created Include/cpython/object.h header file (commit 6eb996685e25c09499858bee4be258776e603c6f). The problem is that when Py_DECREF() was converted to a static inline function, it stopped to use the *redefine* _Py_Dealloc() fast macro, but instead use the slow regular function call: PyAPI_FUNC(void) _Py_Dealloc(PyObject *); Py_DECREF() performance is critical for overall Python performance. I will work on a PR to fix this issue. 
See also bpo-39542 which updates object.h and cpython/object.h. ---------- components: C API messages: 361310 nosy: vstinner priority: normal severity: normal status: open title: Py_DECREF(): use inlined _Py_Dealloc() versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 12:53:35 2020 From: report at bugs.python.org (tegavu) Date: Mon, 03 Feb 2020 17:53:35 +0000 Subject: [New-bugs-announce] [issue39544] Pathlib PureWindowsPath sorting incorrect (is not natural sort) Message-ID: <1580752415.37.0.934686672002.issue39544@roundup.psfhosted.org> New submission from tegavu : Wrong behavior in pathlib.PureWindowsPath - sorting does not use natural sort. Everything below was written based on W7x64 & Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32. The documentation (https://docs.python.org/3/library/pathlib.html#general-properties) states: "Paths of a same flavour are comparable and orderable." This can be done like this:

from pathlib import *
print( PureWindowsPath('C:\\1') < PureWindowsPath('C:\\a') )

This returns True. This is expected because 1 is sorted before a on Windows. This sorting also works well for harder cases where other sorting functions fail: !1 should be before 1 and !a should be before a. But it fails with natural sorting:

from pathlib import *
print( PureWindowsPath('C:\\15') < PureWindowsPath('C:\\100') )

This returns False. This is a bug in my opinion, since PureWindowsPath should sort like Windows (Explorer) would sort. Right now PureWindowsPath probably does something like NTFS ordering, but NTFS is not Windows, and from a class called 'PureWindowsPath' I expect the ordering that Windows Explorer would give. Even if a simple `dir` on Windows sorts by NTFS names (I am not sure!), PureWindowsPath still fails, since (for example) "[" < "a" should be False.
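Until (and unless) pathlib grows such behaviour, a hedged sketch of a workaround sort key that compares digit runs numerically; this approximates, but does not fully replicate, Explorer ordering:

```python
import re
from pathlib import PureWindowsPath

def natural_key(path):
    # hypothetical helper: split into digit/non-digit runs, compare digit
    # runs as ints and the rest case-insensitively
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', str(path))]

paths = [PureWindowsPath('C:/100'), PureWindowsPath('C:/15')]
print(sorted(paths, key=natural_key))  # C:/15 now sorts before C:/100
```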
See this image for comparison: https://i.imgur.com/GjBhWsS.png Here is a list that can be used directly to check sorting: test_list = ['15', '100', '11111', '!', '#', '$', '%', '&', "'", '(', ')', '+', '+11111', '+aaaaa', ',', '-', ';', '=', '@', '[', ']', '^', '_', '`', 'aaaaa', 'foo0', 'foo_0', '{', '}', '~', '?', '?', '?', '?', '?'] ---------- messages: 361315 nosy: tegavu priority: normal severity: normal status: open title: Pathlib PureWindowsPath sorting incorrect (is not natural sort) type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 13:26:38 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 03 Feb 2020 18:26:38 +0000 Subject: [New-bugs-announce] [issue39545] await is not supported in f-string in 3.6 Message-ID: <1580754398.55.0.721406999094.issue39545@roundup.psfhosted.org> New submission from Serhiy Storchaka : The following code compiles in 3.7, but is a syntax error in 3.6.

async def f(x): f"{await x}"

I have not found this change mentioned in What's New, and the code looks grammatically correct. It looks like a bug in 3.6. It may be too late to fix it in 3.6, but at least it should be documented.
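A quick check of the discrepancy; this snippet compiles cleanly on 3.7+ and raises SyntaxError when run under a 3.6 interpreter:

```python
# compile() is enough to observe the parser behaviour; no event loop needed
src = 'async def f(x):\n    return f"{await x}"\n'
code = compile(src, "<test>", "exec")
print("compiles on this interpreter")
```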
---------- assignee: docs at python components: Documentation, Interpreter Core messages: 361317 nosy: docs at python, eric.smith, serhiy.storchaka priority: normal severity: normal status: open title: await is not supported in f-string in 3.6 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 15:11:28 2020 From: report at bugs.python.org (Kyle Meyer) Date: Mon, 03 Feb 2020 20:11:28 +0000 Subject: [New-bugs-announce] [issue39546] argparse: allow_abbrev=False is ignored for alternative prefix characters Message-ID: <1580760688.73.0.980793532046.issue39546@roundup.psfhosted.org> New submission from Kyle Meyer : As of Python v3.8.0 (specifically commit b1e4d1b603), specifying `allow_abbrev=False` does not disable abbreviation for prefix characters other than '-'. --8<---------------cut here---------------start------------->8--- import argparse parser = argparse.ArgumentParser(prefix_chars='+', allow_abbrev=False) parser.add_argument('++long') print(parser.parse_args(['++lo=val'])) --8<---------------cut here---------------end--------------->8--- Observed output (with b1e4d1b603 and current master): Namespace(long='val') Expected (and observed with b1e4d1b603^ and 3.7.3): usage: scratch.py [+h] [++long LONG] scratch.py: error: unrecognized arguments: ++lo=val I will follow up with a PR to propose a fix. 
---------- components: Library (Lib) messages: 361326 nosy: kyleam priority: normal severity: normal status: open title: argparse: allow_abbrev=False is ignored for alternative prefix characters type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 17:59:34 2020 From: report at bugs.python.org (Will Bond) Date: Mon, 03 Feb 2020 22:59:34 +0000 Subject: [New-bugs-announce] [issue39547] hmac.new() default parameter change not mentioned in changelog Message-ID: <1580770774.29.0.234296121178.issue39547@roundup.psfhosted.org> New submission from Will Bond : When running code on Python 3.8 that previously ran on 3.3, I ran into the issue that the default value for the digestmod parameter of hmac.new() has been changed in a backwards-incompatible way. I generally would have expected such a break to show up in https://docs.python.org/3/whatsnew/3.8.html#api-and-feature-removals. If not there, somewhere on the 3.8 changelog page.
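For anyone hitting the same break: since Python 3.8, hmac.new() raises TypeError when digestmod is omitted (the implicit MD5 default was deprecated in 3.4 and then removed), so passing it explicitly works on every version:

```python
import hashlib
import hmac

# explicit digestmod is accepted on both old and new Python versions
mac = hmac.new(b"secret-key", b"message", digestmod=hashlib.sha256)
print(mac.hexdigest())
```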
---------- assignee: docs at python components: Documentation messages: 361329 nosy: docs at python, wbond priority: normal severity: normal status: open title: hmac.new() default parameter change not mentioned in changelog versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 18:31:29 2020 From: report at bugs.python.org (Stephen Balousek) Date: Mon, 03 Feb 2020 23:31:29 +0000 Subject: [New-bugs-announce] [issue39548] Request fails when 'WWW-Authenticate' header for Digest Authentication does not contain 'qop' Message-ID: <1580772689.06.0.438668045886.issue39548@roundup.psfhosted.org> New submission from Stephen Balousek : When making an HTTP request using an opener with an attached HTTPDigestAuthHandler, the request crashes when the returned 'WWW-Authenticate' header for the 'Digest' scheme does not contain the optional 'qop' value.

Response headers:
=================
Content-Type: application/json
Content-Security-Policy: default-src 'self' 'unsafe-eval' 'unsafe-inline';img-src 'self' data:
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Content-Length: 600
WWW-Authenticate: Digest realm="ServiceManager", nonce="1580815098100956"
WWW-Authenticate: Basic realm="ServiceManager", charset="UTF-8"
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Expires: 0
Pragma: no-cache

Crash:
======
Error: Exception: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
...
File "/home/sbalousek/bin/restap.py", line 1317, in RunTest status, payload, contentType = ExecuteRequest(baseUrl, test, tap); File "/home/sbalousek/bin/restap.py", line 1398, in ExecuteRequest response = opener.open(request, payload, timeout); File "/usr/lib/python3.8/urllib/request.py", line 523, in open response = meth(req, response) File "/home/sbalousek/bin/restap.py", line 1065, in http_response return self.process_response(request, response, HTTPErrorProcessor.http_response); File "/home/sbalousek/bin/restap.py", line 1056, in process_response return handler(self, request, response); File "/usr/lib/python3.8/urllib/request.py", line 632, in http_response response = self.parent.error( File "/usr/lib/python3.8/urllib/request.py", line 555, in error result = self._call_chain(*args) File "/usr/lib/python3.8/urllib/request.py", line 494, in _call_chain result = func(*args) File "/usr/lib/python3.8/urllib/request.py", line 1203, in http_error_401 retry = self.http_error_auth_reqed('www-authenticate', File "/usr/lib/python3.8/urllib/request.py", line 1082, in http_error_auth_reqed return self.retry_http_digest_auth(req, authreq) File "/usr/lib/python3.8/urllib/request.py", line 1090, in retry_http_digest_auth auth = self.get_authorization(req, chal) File "/usr/lib/python3.8/urllib/request.py", line 1143, in get_authorization if 'auth' in qop.split(','): AttributeError: 'NoneType' object has no attribute 'split' Diagnosis: ========== The crash is a result of an optional 'qop' value missing from the 'WWW-Authenticate' header. This bug was introduced in changes for issue 38686. 
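A sketch of the guard that would avoid the crash (a hypothetical helper mirroring the failing check in get_authorization): treat a missing qop (None) like a challenge without qop instead of calling .split() on it.

```python
def qop_offers_auth(qop):
    # hypothetical helper: qop is None when the server omits it entirely
    return qop is not None and 'auth' in qop.split(',')

assert qop_offers_auth('auth')
assert qop_offers_auth('auth,auth-int')
assert not qop_offers_auth(None)   # previously: AttributeError
```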
---------- components: Library (Lib) messages: 361330 nosy: Stephen Balousek priority: normal severity: normal status: open title: Request fails when 'WWW-Authenticate' header for Digest Authentication does not contain 'qop' type: crash versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 3 20:07:30 2020 From: report at bugs.python.org (=?utf-8?q?Alexander_B=C3=B6hn?=) Date: Tue, 04 Feb 2020 01:07:30 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue39549=5D_The_reprlib=2ERep?= =?utf-8?q?r_type_should_permit_the_=E2=80=9Cfillvalue=E2=80=9D_to_be_set_?= =?utf-8?q?by_the_user?= Message-ID: <1580778450.13.0.746492669497.issue39549@roundup.psfhosted.org> New submission from Alexander Böhn : Currently, the `reprlib.recursive_repr(...)` decorator allows a "fillvalue" parameter to be specified by the user. This is a string value that is used as a placeholder when calculating an object's repr; in the case of `recursive_repr(...)` the "fillvalue" defaults to '...' and may be set by the user to a string of any length. There is no such user-defined "fillvalue" on the `reprlib.Repr` type, although the '...' string is hardcoded in its implementation and used throughout. I propose that the hardcoded use of the '...' string in the `reprlib.Repr` implementation should be replaced by a "fillvalue" attribute, set on the class in its `__init__(...)` method, and therefore overridable in subclasses, like the existing myriad "max*" instance attributes. PR to follow in short order. ---------- components: Library (Lib) messages: 361334 nosy: fish2000 priority: normal severity: normal status: open title: The reprlib.Repr type should permit the "fillvalue"
to be set by the user type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 13:30:00 2020 From: report at bugs.python.org (Joachim Jablon) Date: Tue, 04 Feb 2020 18:30:00 +0000 Subject: [New-bugs-announce] [issue39550] isinstance accepts subtypes of tuples as second argument Message-ID: <1580841000.84.0.0928756449377.issue39550@roundup.psfhosted.org> New submission from Joachim Jablon : (Not really sure it is a bug, but better informed people might find it worthy still) isinstance can accept, as second argument, a type or a potentially nested tuple of types. Only tuples are accepted, as opposed to generic iterables. The reasoning behind using a tuple was recently added through a small refactoring from Victor Stinner: https://github.com/python/cpython/commit/850a4bd839ca11b59439e21dda2a3ebe917a9a16 The idea being that it's impossible to make a self-referencing tuple nest, and thus the function, which is recursive, doesn't have to deal with infinite recursion. It's possible to use a tuple subclass, though, and while it doesn't break the function, because the items are read directly, the tuple is not explored through the __iter__ interface:

>>> class T(tuple):
...     def __iter__(self):
...         yield self
...
>>> isinstance(3, T())
False

This is the expected result if checking what the tuple contains, but not if iterating the tuple. For me, there's nothing absolutely wrong with the current behaviour, but it feels like we're walking on a fine line, and if for any reason the isinstance tuple iteration were to start using __iter__ in the future, this example may crash. Solutions could be handling any iterable but explicitly checking for recursion or, as suggested by Victor Stinner, forbidding subclasses of tuple. Guido van Rossum suggested opening an issue, so here it is.
A link to the discussion that prompted this: https://twitter.com/VictorStinner/status/1224744606421655554 ---------- messages: 361362 nosy: ewjoachim priority: normal severity: normal status: open title: isinstance accepts subtypes of tuples as second argument type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 15:27:49 2020 From: report at bugs.python.org (Dino Viehland) Date: Tue, 04 Feb 2020 20:27:49 +0000 Subject: [New-bugs-announce] [issue39551] mock patch should match behavior of import from when module isn't present in sys.modules Message-ID: <1580848069.21.0.869527016064.issue39551@roundup.psfhosted.org> New submission from Dino Viehland : The fix for bpo-17636 added support for falling back to sys.modules when a module isn't directly present on the module. But mock doesn't have the same behavior - it'll try the import, and then try to get the value off the object. If it's not there it just errors out. Instead it should also consult sys.modules to be consistent with import semantics. 
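The import-system fallback being referenced can be demonstrated directly; this sketch registers two synthetic modules (hypothetical names) and relies on the Python 3.7+ semantics where `from pkg import name` falls back to sys.modules when the attribute is missing on the package object:

```python
import sys
import types

# two synthetic modules, registered only in sys.modules
pkg = types.ModuleType("demo_pkg")
sub = types.ModuleType("demo_pkg.sub")
sys.modules["demo_pkg"] = pkg
sys.modules["demo_pkg.sub"] = sub

# 'demo_pkg' has no attribute 'sub', yet the import succeeds because the
# import system falls back to the sys.modules entry:
from demo_pkg import sub as imported
assert imported is sub

# plain attribute access still fails, which is the gap mock.patch runs into
assert not hasattr(pkg, "sub")
```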
---------- assignee: dino.viehland components: Tests messages: 361366 nosy: dino.viehland priority: normal severity: normal stage: needs patch status: open title: mock patch should match behavior of import from when module isn't present in sys.modules type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 17:42:23 2020 From: report at bugs.python.org (Frazer Clews) Date: Tue, 04 Feb 2020 22:42:23 +0000 Subject: [New-bugs-announce] [issue39552] shell scripts use legacy Message-ID: <1580856143.95.0.631656009872.issue39552@roundup.psfhosted.org> Change by Frazer Clews : ---------- components: Installation nosy: frazerclews priority: normal severity: normal status: open title: shell scripts use legacy type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 19:45:13 2020 From: report at bugs.python.org (Steve Dower) Date: Wed, 05 Feb 2020 00:45:13 +0000 Subject: [New-bugs-announce] [issue39553] Delete HAVE_SXS protected code Message-ID: <1580863513.73.0.140072621675.issue39553@roundup.psfhosted.org> New submission from Steve Dower : We no longer support SXS manifests, so rather than fixing issue37025 in master, let's just delete the code covered by HAVE_SXS completely. 
---------- components: Windows keywords: easy (C) messages: 361393 nosy: ZackerySpytz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Delete HAVE_SXS protected code type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 22:47:25 2020 From: report at bugs.python.org (Benoit B) Date: Wed, 05 Feb 2020 03:47:25 +0000 Subject: [New-bugs-announce] [issue39554] @functools.lru_cache() not respecting typed=False Message-ID: <1580874445.88.0.596584453725.issue39554@roundup.psfhosted.org> New submission from Benoit B : I don't know if I'm missing something, but there's a behavior of functools.lru_cache() that I currently don't understand. As the documentation states: "If typed is set to true, function arguments of different types will be cached separately. For example, f(3) and f(3.0) will be treated as distinct calls with distinct results." For a function accepting only positional arguments, using typed=False doesn't seem to be working in all cases. >>> import functools >>> >>> @functools.lru_cache() # Implicitly uses typed=False >>> def func(a): ... return a >>> >>> func(1) >>> func(1.0) >>> >>> print(func.cache_info()) CacheInfo(hits=0, misses=2, maxsize=128, currsize=2) Instead, I would have expected: CacheInfo(hits=1, misses=1, maxsize=128, currsize=2) So it looks like 1 and 1.0 were stored as different values even though typed=False was used. 
After analyzing the source code of _functoolsmodule.c::lru_cache_make_key(), I found what follows:

if (!typed && !kwds_size) {
    if (PyTuple_GET_SIZE(args) == 1) {
        key = PyTuple_GET_ITEM(args, 0);
        if (PyUnicode_CheckExact(key) || PyLong_CheckExact(key)) {
            /* For common scalar keys, save space by
               dropping the enclosing args tuple */
            Py_INCREF(key);
            return key;
        }
    }
    Py_INCREF(args);
    return args;
}

<<< it appears that a 'float' would cause 'args' (a tuple) to be returned as the key, whereas an 'int' would cause 'key' (an int) to be returned as the key. So 1 and 1.0 generate different hashes and are stored as different items.

At some point in the past, the above code section looked like this:

if (!typed && !kwds) {
    Py_INCREF(args);
    return args;
}

So no matter what the type of the argument was, it was working. Am I somehow mistaken in my analysis or is this a bug? ---------- components: Library (Lib) messages: 361404 nosy: bbernard priority: normal severity: normal status: open title: @functools.lru_cache() not respecting typed=False type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 4 23:59:17 2020 From: report at bugs.python.org (Steve Dower) Date: Wed, 05 Feb 2020 04:59:17 +0000 Subject: [New-bugs-announce] [issue39555] test_distutils fails for Windows debug build Message-ID: <1580878757.67.0.288943612022.issue39555@roundup.psfhosted.org> New submission from Steve Dower : From https://buildbot.python.org/all/#/builders/129/builds/306 ====================================================================== FAIL: test_unicode_module_names (distutils.tests.test_build_ext.BuildExtTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.bolen-windows10\build\lib\distutils\tests\test_build_ext.py", line 315, in test_unicode_module_names
self.assertRegex(cmd.get_ext_filename(modules[0].name), r'foo\..*') AssertionError: Regex didn't match: 'foo\\..*' not found in 'foo_d.cp39-win_amd64.pyd' ====================================================================== FAIL: test_unicode_module_names (distutils.tests.test_build_ext.ParallelBuildExtTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.bolen-windows10\build\lib\distutils\tests\test_build_ext.py", line 315, in test_unicode_module_names self.assertRegex(cmd.get_ext_filename(modules[0].name), r'foo\..*') AssertionError: Regex didn't match: 'foo\\..*' not found in 'foo_d.cp39-win_amd64.pyd' ---------- components: Distutils, Windows messages: 361406 nosy: dstufft, eric.araujo, pablogsal, paul.moore, scoder, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: test_distutils fails for Windows debug build type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 01:35:54 2020 From: report at bugs.python.org (Kevin Young) Date: Wed, 05 Feb 2020 06:35:54 +0000 Subject: [New-bugs-announce] [issue39556] Different objects of the same class references the same dictionary Message-ID: <1580884554.4.0.0450040370405.issue39556@roundup.psfhosted.org> New submission from Kevin Young : Test code:

class Test(object):
    def __init__(self, a={}):
        self._a = a

    def put(self, k, v):
        self._a[k] = v

if __name__ == '__main__':
    t1 = Test()
    t1.put('aa', '11')
    t1.put('bb', '22')
    t2 = Test()
    t2.put('cc', '33')
    for k, v in t2._a.items():
        print(k, '=', v)

Output:
aa = 11
bb = 22
cc = 33

The expected output should be:
cc = 33

My workaround: self._a = dict(a) I have tested on both Python 3.7.3 and 3.8.1, and both show the same result. I'm not sure if this is a bug or an intentional feature of Python. Could someone provide some guidance for me? Thank you.
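For reference, this is the documented behaviour of mutable default arguments (the default dict is created once, at function definition, and shared by every call), not a bug; the usual idiom is a None sentinel:

```python
class Test:
    def __init__(self, a=None):
        # a fresh dict per instance; the default is no longer shared
        self._a = {} if a is None else a

    def put(self, k, v):
        self._a[k] = v

t1 = Test()
t1.put('aa', '11')
t2 = Test()
t2.put('cc', '33')
print(t2._a)  # {'cc': '33'}
```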
---------- components: Interpreter Core messages: 361407 nosy: Kevin Young priority: normal severity: normal status: open title: Different objects of the same class references the same dictionary versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 02:38:19 2020 From: report at bugs.python.org (Avraham Mahfuda) Date: Wed, 05 Feb 2020 07:38:19 +0000 Subject: [New-bugs-announce] [issue39557] ThreadPoolExecutor is busy-waiting when idle. Message-ID: <1580888299.0.0.965711454165.issue39557@roundup.psfhosted.org> New submission from Avraham Mahfuda : In concurrent.futures.thread, line 78 busy-waits if the queue is empty, which may cause the CPU to spin to 100% when idle. ---------- messages: 361410 nosy: Avraham Mahfuda priority: normal severity: normal status: open title: ThreadPoolExecutor is busy-waiting when idle. type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 06:20:39 2020 From: report at bugs.python.org (=?utf-8?b?0JHQvtGA0LjRgSDQktC10YDRhdC+0LLRgdC60LjQuQ==?=) Date: Wed, 05 Feb 2020 11:20:39 +0000 Subject: [New-bugs-announce] [issue39558] Implement __len__() for itertools.combinations Message-ID: <1580901639.1.0.0844849880483.issue39558@roundup.psfhosted.org> New submission from Борис Верховский : and the other objects that have a straightforward formula for the number of elements they will generate.
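For combinations, at least, the straightforward formula is the binomial coefficient; a sketch of what such a __len__() could return (math.comb needs Python 3.8+):

```python
import itertools
import math

n, r = 10, 3
combos = itertools.combinations(range(n), r)

# what a hypothetical len(combos) would return: C(n, r)
expected = math.comb(n, r)
print(expected)  # 120
assert sum(1 for _ in combos) == expected
```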
---------- components: Library (Lib) messages: 361421 nosy: boris priority: normal severity: normal status: open title: Implement __len__() for itertools.combinations type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 07:02:55 2020 From: report at bugs.python.org (Sebastian Rittau) Date: Wed, 05 Feb 2020 12:02:55 +0000 Subject: [New-bugs-announce] [issue39559] uuid.getnode() has unused argument Message-ID: <1580904175.73.0.59628160229.issue39559@roundup.psfhosted.org> New submission from Sebastian Rittau : uuid.getnode() has an undocumented, keyword-only "getters" argument that gets discarded immediately. This is confusing when using code inspection tools and can give the wrong impression that you can somehow override the node getters when you can't. I recommend removing this argument. ---------- components: Library (Lib) messages: 361423 nosy: srittau priority: normal severity: normal status: open title: uuid.getnode() has unused argument versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 07:52:13 2020 From: report at bugs.python.org (Santiago M. Mola) Date: Wed, 05 Feb 2020 12:52:13 +0000 Subject: [New-bugs-announce] [issue39560] PyUnicode_FromKindAndData kind transformation is not documented Message-ID: <1580907133.21.0.114993814029.issue39560@roundup.psfhosted.org> New submission from Santiago M. Mola : PyUnicode_FromKindAndData copies input data and transforms it to the most compact representation. This behavior is not documented. Proposed wording: > The input buffer is copied and transformed into the canonical representation, if necessary. For example, if the buffer is a UCS4 string (PyUnicode_4BYTE_KIND) and it consists only of codepoints in the UCS1 range, it will be transformed into UCS1 (PyUnicode_1BYTE_KIND). 
---------- assignee: docs at python components: Documentation messages: 361426 nosy: docs at python, smola priority: normal severity: normal status: open title: PyUnicode_FromKindAndData kind transformation is not documented versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 12:03:41 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 05 Feb 2020 17:03:41 +0000 Subject: [New-bugs-announce] [issue39561] AMD64 Fedora Rawhide LTO + PGO 3.x: "checking for getaddrinfo... no" Message-ID: <1580922221.51.0.460186713117.issue39561@roundup.psfhosted.org> New submission from STINNER Victor : On AMD64 Fedora Rawhide LTO + PGO 3.x buildbot, "checking for getaddrinfo... no" failed: https://buildbot.python.org/all/#/builders/154/builds/243 checking for getaddrinfo... no Fatal: You must get working getaddrinfo() function. or you can specify "--disable-ipv6". It sounds like a regression in Fedora Rawhide libc. ---------- components: Build messages: 361438 nosy: vstinner priority: normal severity: normal status: open title: AMD64 Fedora Rawhide LTO + PGO 3.x: "checking for getaddrinfo... no" versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 12:28:25 2020 From: report at bugs.python.org (jack1142) Date: Wed, 05 Feb 2020 17:28:25 +0000 Subject: [New-bugs-announce] [issue39562] Asynchronous comprehensions don't work in asyncio REPL Message-ID: <1580923705.71.0.29233909242.issue39562@roundup.psfhosted.org> New submission from jack1142 : asyncio REPL doesn't allow using asynchronous comprehensions outside of async func. Same behavior can also be observed when using `ast.PyCF_ALLOW_TOP_LEVEL_AWAIT` flag in `compile()` Example with `async for`: >>> async def async_gen(): ... for x in range(5): ... 
yield await asyncio.sleep(1, x) ... >>> [x async for x in async_gen()] File "", line 0 SyntaxError: asynchronous comprehension outside of an asynchronous function Example with `await`: >>> [await asyncio.sleep(1, x) for x in range(5)] File "", line 0 SyntaxError: asynchronous comprehension outside of an asynchronous function ---------- components: asyncio messages: 361443 nosy: asvetlov, jack1142, yselivanov priority: normal severity: normal status: open title: Asynchronous comprehensions don't work in asyncio REPL type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 12:31:11 2020 From: report at bugs.python.org (Hector E. Socarras) Date: Wed, 05 Feb 2020 17:31:11 +0000 Subject: [New-bugs-announce] [issue39563] asyncio.Protocol on windows 10 x64 Message-ID: <1580923871.37.0.213869641033.issue39563@roundup.psfhosted.org> New submission from Hector E. Socarras : I tested the sample TCP echo server from https://docs.python.org/3.8/library/asyncio-protocol.html. When I ran the server I got the following error from the Python interpreter: Traceback (most recent call last): File "server_test.py", line 1, in import asyncio File "C:\Users\Hector\AppData\Local\Programs\Python\Python37\lib\asyncio\__init__.py", line 8, in from .base_events import * File "C:\Users\Hector\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 23, in import socket File "C:\Users\Hector\pynode\net\socket.py", line 7, in class NetBaseProtocol(asyncio.Protocol, EventEmitter): AttributeError: module 'asyncio' has no attribute 'Protocol' The problem was that I had a module named socket in the same folder. But I think that base_events.py should import the appropriate socket module using a relative import or by supplying the full path for its socket import.
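A quick way to diagnose this kind of stdlib shadowing is to check which file a module was actually loaded from; a path inside the project directory means a local file is shadowing the standard library:

```python
import socket

# the stdlib module lives inside the Python installation; a path inside
# your own project directory means a local socket.py won the import
print(socket.__file__)
```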
---------- components: asyncio messages: 361444 nosy: asvetlov, hsocarras, yselivanov priority: normal severity: normal status: open title: asyncio.Protocol on windows 10 x64 type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 16:21:22 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Wed, 05 Feb 2020 21:21:22 +0000 Subject: [New-bugs-announce] [issue39564] Parsed expression has wrong line/col info when concatenating f-strings Message-ID: <1580937682.24.0.516632817266.issue39564@roundup.psfhosted.org> New submission from Lysandros Nikolaou : When concatenating f-strings, if there is an expression in any STRING node other than the first, col_offset of the parsed expression has a wrong value. For example, parsing f"hello" f"{world}" outputs the following AST: Module( body=[ Expr( value=JoinedStr( values=[ Constant( value="hello", kind=None, lineno=1, col_offset=0, end_lineno=1, end_col_offset=19, ), FormattedValue( value=Name( id="world", ctx=Load(), lineno=1, *col_offset=1,* end_lineno=1, *end_col_offset=6,* ), conversion=-1, format_spec=None, lineno=1, col_offset=0, end_lineno=1, end_col_offset=19, ), ], lineno=1, col_offset=0, end_lineno=1, end_col_offset=19, ), lineno=1, col_offset=0, end_lineno=1, end_col_offset=19, ) ], type_ignores=[], ) Here, col_offset and end_col_offset are wrong in the parsed NAME 'world'. 
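The offsets in question can be pulled out programmatically with the ast module; a minimal sketch of how to reproduce the report (the concrete numbers you see depend on the interpreter version, since this was later fixed):

```python
import ast

# Parse the concatenated f-strings from the report and locate the Name
# node for 'world' inside the second STRING; its col_offset and
# end_col_offset are the values the report flags as wrong.
tree = ast.parse('f"hello" f"{world}"')
joined = tree.body[0].value          # the JoinedStr covering both pieces
formatted = joined.values[1]         # the FormattedValue for {world}
name = formatted.value               # the Name node
print(name.id, name.col_offset, name.end_col_offset)
```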
---------- components: Interpreter Core messages: 361456 nosy: gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Parsed expression has wrong line/col info when concatenating f-strings versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 17:15:07 2020 From: report at bugs.python.org (Enji Cooper) Date: Wed, 05 Feb 2020 22:15:07 +0000 Subject: [New-bugs-announce] [issue39565] Modules/signalmodule.c only works with `NSIG` signals; requires fudging to support realtime signals, etc Message-ID: <1580940907.68.0.904788237522.issue39565@roundup.psfhosted.org> New submission from Enji Cooper : The code in Modules/signalmodule.c makes a number of assumptions of what signals are considered valid, as well as what handlers need to be setup as part of the core interpreter. For example: much of the initialization of signal handlers, etc, is actually keyed off of NSIG, as defined (and guessed on) here: https://github.com/python/cpython/blob/master/Modules/signalmodule.c#L50 . The problem with this is that it makes it impossible for end-users to use `signal.signal`, et al with signal numbers outside of `NSIG`, which includes realtime signals. Furthermore, if one is to extend the size of `NSIG`, it results in an increased O(n) iteration over all of the signals if/when a handler needs to be handled (set or cleared). Proposal: The best way to handle this, in my opinion, is to use a dict-like container to iterate over all of the handlers and rely on the OS to trickle up errors in the signal(3) libcall, as opposed to thinking that the definitions/assumptions in signalmodule.c are absolutely correct. This may or may not be possible, however, depending on code needing to be reentrant, but it would be nice to leverage a middle ground solution of some kind *shrug*. 
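The NSIG-based validation described above is easy to observe from Python: a signal number at or beyond the module's assumed limit is rejected inside CPython before the OS is ever consulted. A sketch (the exact exception type and message can vary by platform, hence the broad except clause):

```python
import signal

# signal.signal() checks the signal number against NSIG-style bounds in
# the module itself, so an out-of-range number (e.g. a realtime signal on
# a build whose NSIG guess is too small) never reaches the OS.
caught = None
try:
    signal.signal(signal.NSIG + 10, signal.SIG_IGN)
except (ValueError, OSError) as exc:
    caught = type(exc).__name__
print(caught)
```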
---------- components: Interpreter Core messages: 361458 nosy: ngie priority: normal severity: normal status: open title: Modules/signalmodule.c only works with `NSIG` signals; requires fudging to support realtime signals, etc versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 5 18:37:24 2020 From: report at bugs.python.org (Nicholas Matthews) Date: Wed, 05 Feb 2020 23:37:24 +0000 Subject: [New-bugs-announce] [issue39566] inspect.Signature.__init__ asks for parameters as dict but treats as list Message-ID: <1580945844.01.0.0291811112362.issue39566@roundup.psfhosted.org> New submission from Nicholas Matthews : The class inspect.Signature asks for parameters of type dict in python 3.8+ (and OrderedDict in earlier versions); however the __init__ function iterates over parameters as if it were a list, specifically: for param in parameters: name = param.name kind = param.kind ... Either the docstring should be changed to specify Sequence / List, or the implementation should be changed to iterate over the values of parameters: for param in parameters.values(): ... (https://github.com/python/cpython/blob/2cca8efe46935c39c445f585bce54954fad2485b/Lib/inspect.py#L2734) ---------- messages: 361461 nosy: Nicholas Matthews priority: normal severity: normal status: open title: inspect.Signature.__init__ asks for parameters as dict but treats as list type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 03:20:48 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 06 Feb 2020 08:20:48 +0000 Subject: [New-bugs-announce] [issue39567] Add audit for os.walk, os.fwalk, Path.glob() and Path.rglob() Message-ID: <1580977248.64.0.174366488821.issue39567@roundup.psfhosted.org> New submission from Serhiy Storchaka : See issue38149. 
There is an audit for os.scandir(), but it would be useful to have information about higher-level operations. ---------- components: Library (Lib) messages: 361472 nosy: serhiy.storchaka, steve.dower priority: normal severity: normal status: open title: Add audit for os.walk, os.fwalk, Path.glob() and Path.rglob() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 07:56:25 2020 From: report at bugs.python.org (Another One) Date: Thu, 06 Feb 2020 12:56:25 +0000 Subject: [New-bugs-announce] [issue39568] FORMATTING grouping_option ValueError: Cannot specify ', ' with ... Message-ID: <1580993785.28.0.694930470564.issue39568@roundup.psfhosted.org> New submission from Another One : Example for binary representation: >>> x = 123456 >>> print("{:,b}".format(x)) Traceback (most recent call last): File "", line 1, in print("{:,b}".format(x)) ValueError: Cannot specify ',' with 'b'. Why? Does the comma work only with decimals? But the '_' group delimiter works properly with any integer representation, including decimals, hexadecimals, binaries, octals, etc. ---------- messages: 361480 nosy: Another One priority: normal severity: normal status: open title: FORMATTING grouping_option ValueError: Cannot specify ',' with ... versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 11:00:25 2020 From: report at bugs.python.org (=?utf-8?q?Bj=C3=B6rn_Lindqvist?=) Date: Thu, 06 Feb 2020 16:00:25 +0000 Subject: [New-bugs-announce] [issue39569] Is the return value of pathlib.Path.glob() sorted? Message-ID: <1581004825.17.0.345840814568.issue39569@roundup.psfhosted.org> New submission from Björn Lindqvist : It would be great if the docs were clearer about what you can assume about the ordering of pathlib.Path.glob() calls. Is it sorted? Is it the same in consecutive calls?
I'm guessing you can't assume anything at all, which I think should be clarified in the docs. ---------- assignee: docs at python components: Documentation messages: 361494 nosy: Björn.Lindqvist, docs at python priority: normal severity: normal status: open title: Is the return value of pathlib.Path.glob() sorted? type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 11:26:37 2020 From: report at bugs.python.org (UltraLutra) Date: Thu, 06 Feb 2020 16:26:37 +0000 Subject: [New-bugs-announce] [issue39570] Python 3.7.3 Crash on msilib actions Message-ID: <1581006397.33.0.114674633699.issue39570@roundup.psfhosted.org> New submission from UltraLutra : Hello, I'm trying to read MSI files using msilib. Some files make Python crash on Record.GetString of a specific cell. Attached is one of the files that causes the crash, and this is the code that is causing it to crash: db = msilib.OpenDatabase('6bcd682374529631be60819d20a71d9d40c67bf0b1909faa459298eda998f833', msilib.MSIDBOPEN_READONLY) query = db.OpenView(f'SELECT * FROM Registry') query.Execute(None) for i in range(6): record = query.Fetch() record.GetString(5) The crash seems to be caused by trying to read the Value column of the last row in the Registry table.
file: https://github.com/AsafEitani/msilib_crash ---------- components: Windows messages: 361495 nosy: UltraLutra, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python 3.7.3 Crash on msilib actions type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 12:09:39 2020 From: report at bugs.python.org (Sam Gross) Date: Thu, 06 Feb 2020 17:09:39 +0000 Subject: [New-bugs-announce] [issue39571] clang warns "warning: redefinition of typedef 'PyTypeObject' is a C11 feature [-Wtypedef-redefinition]" Message-ID: <1581008979.02.0.137849512464.issue39571@roundup.psfhosted.org> New submission from Sam Gross : A recent commit added a typedef for PyTypeObject in Include/object.h https://github.com/python/cpython/commit/0e4e735d06967145b49fd00693627f3624991dbc This duplicates the typedef in Include/cpython/object.h. Building with clang now issues a warning: ./Include/cpython/object.h:274:3: warning: redefinition of typedef 'PyTypeObject' is a C11 feature [-Wtypedef-redefinition] This is due to the combination of `-Wall` and `-std=c99`. GCC will only warn if the `-pedantic` option is specified. 
---------- components: C API messages: 361497 nosy: colesbury, vstinner priority: normal severity: normal status: open title: clang warns "warning: redefinition of typedef 'PyTypeObject' is a C11 feature [-Wtypedef-redefinition]" versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 15:08:00 2020 From: report at bugs.python.org (Brett Cannon) Date: Thu, 06 Feb 2020 20:08:00 +0000 Subject: [New-bugs-announce] [issue39572] [typing] TypedDict's 'total' argument is undocumented Message-ID: <1581019680.46.0.628292741765.issue39572@roundup.psfhosted.org> New submission from Brett Cannon : The docs mention __total__, but there's no mention of how to actually set that attribute, nor what it actually represents. P.S. https://github.com/python/cpython/blob/master/Lib/typing.py#L16 says TypedDict "may be added soon"; I think that's outdated. ;) ---------- assignee: docs at python components: Documentation messages: 361503 nosy: brett.cannon, docs at python, gvanrossum, levkivskyi priority: normal severity: normal stage: needs patch status: open title: [typing] TypedDict's 'total' argument is undocumented versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 18:07:12 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 06 Feb 2020 23:07:12 +0000 Subject: [New-bugs-announce] [issue39573] Make PyObject an opaque structure in the limited C API Message-ID: <1581030432.16.0.48160379721.issue39573@roundup.psfhosted.org> New submission from STINNER Victor : Today, CPython is leaking too many implementation through its public C API. We cannot easily change the "default" C API, but we can enhance the "limited" C API (when Py_LIMITED_API macro is defined). Example of leaking implementation details: memory allocator, garbage collector, structure layouts, etc. 
Making PyObject an opaque structure would allow in the long term of modify structures to implement more efficient types (ex: list specialized for small integers), and it can prepare CPython to experiment tagged pointers. Longer rationale: * https://pythoncapi.readthedocs.io/ * https://pythoncapi.readthedocs.io/bad_api.html * https://pythoncapi.readthedocs.io/optimization_ideas.html I propose to incremental evolve the existing limited C API towards opaque PyObject, by trying to reduce the risk of breakage. We may test changes on PyQt which uses the limited C API. Another idea would be to convert some C extensions of the standard library to the limited C API. It would ensure that the limited C API contains enough functions to be useful, but would also notify us directly if the API is broken. ---------- components: C API messages: 361513 nosy: vstinner priority: normal severity: normal status: open title: Make PyObject an opaque structure in the limited C API versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 20:01:54 2020 From: report at bugs.python.org (Zachary Westrick) Date: Fri, 07 Feb 2020 01:01:54 +0000 Subject: [New-bugs-announce] [issue39574] str.__doc__ is misleading Message-ID: <1581037314.11.0.0860041079795.issue39574@roundup.psfhosted.org> Change by Zachary Westrick : ---------- assignee: docs at python components: Documentation nosy: docs at python, kcirtsew priority: normal severity: normal status: open title: str.__doc__ is misleading type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 20:57:18 2020 From: report at bugs.python.org (MaskRay) Date: Fri, 07 Feb 2020 01:57:18 +0000 Subject: [New-bugs-announce] [issue39575] `coverage` build target should use --coverage instead of -lgcov Message-ID: 
<1581040638.77.0.976018979532.issue39575@roundup.psfhosted.org> New submission from MaskRay : This allows clang to get rid of the dependency on libgcov. When linking, GCC passes -lgcov while clang passes the path to libclang_rt.profile-$arch.a ---------- components: Build messages: 361528 nosy: MaskRay priority: normal severity: normal status: open title: `coverage` build target should use --coverage instead of -lgcov type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 6 22:39:20 2020 From: report at bugs.python.org (Tim Peters) Date: Fri, 07 Feb 2020 03:39:20 +0000 Subject: [New-bugs-announce] [issue39576] Surprising MemoryError in `decimal` with MAX_PREC Message-ID: <1581046760.48.0.443155322154.issue39576@roundup.psfhosted.org> New submission from Tim Peters : Here under Python 3.8.1 on 64-bit Windows: >>> import decimal >>> c = decimal.getcontext() >>> c.prec = decimal.MAX_PREC >>> i = decimal.Decimal(4) >>> i / 2 Traceback (most recent call last): File "", line 1, in MemoryError Of course the result is exactly 2. 
Which I have enough RAM to hold ;-) The implicit conversion is irrelevant: >>> i / decimal.Decimal(2) Traceback (most recent call last): File "", line 1, in MemoryError Floor division instead works fine: >>> i // 2 Decimal('2') ---------- components: Library (Lib) messages: 361536 nosy: tim.peters priority: normal severity: normal status: open title: Surprising MemoryError in `decimal` with MAX_PREC versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 7 04:08:45 2020 From: report at bugs.python.org (Andrea) Date: Fri, 07 Feb 2020 09:08:45 +0000 Subject: [New-bugs-announce] [issue39577] venv --prompt argument is ignored Message-ID: <1581066525.57.0.495811141664.issue39577@roundup.psfhosted.org> New submission from Andrea : In creating a new virtual environment, the help suggest a --prompt argument to specify a different name by the time the environment is active. https://docs.python.org/3/library/venv.html The argument is apparently ignored as the folder name always appears instead to whatever is specified in the prompt. Checking at the config file content there nothing written inside, thought I'm not sure this should be the case. 
---------- messages: 361548 nosy: andream priority: normal severity: normal status: open title: venv --prompt argument is ignored versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 7 04:38:41 2020 From: report at bugs.python.org (Frank Harrison) Date: Fri, 07 Feb 2020 09:38:41 +0000 Subject: [New-bugs-announce] [issue39578] MagicMock specialisation instance can no longer be passed to new MagicMock instance Message-ID: <1581068321.62.0.885951490226.issue39578@roundup.psfhosted.org> New submission from Frank Harrison : This is my first bug logged here, I've tried to follow the guideline and search for this issue; please let me know if I missed anything. Summary: unittest.mock.MagicMock has a regression starting in 3.8. The regression was only tested on latest non-prerelease versions of python 3.5, 3.6, 3.7, 3.8 and 3.9. Tested on OSX and Fedora 31. Repro: ------ If you create an instance of a MagicMock specialisation with parameters to __init__(), you can no longer pass that instance to the __init__() function of another MagicMock object e.g. a base-class is replaced with MagicMock. See the unittests bellow for more details, use-cases and fail situations. What happens: ------------- Here's a python3.9 example traceback. It may be worth noting that there is a difference in the tracebacks between 3.8 and 3.9. 
Traceback (most recent call last): File "<...>", line <..>, in test_raw_magic_moc_passing_thru_single_pos mock_object = mock.MagicMock(mock_param) # error is here, instantiating another object File "/usr/lib64/python3.9/unittest/mock.py", line 408, in __new__ if spec_arg and _is_async_obj(spec_arg): File "/usr/lib64/python3.9/unittest/mock.py", line 2119, in __get__ return self.create_mock() File "/usr/lib64/python3.9/unittest/mock.py", line 2112, in create_mock m = parent._get_child_mock(name=entry, _new_name=entry, File "/usr/lib64/python3.9/unittest/mock.py", line 1014, in _get_child_mock return klass(**kw) TypeError: __init__() got an unexpected keyword argument 'name' Code demonstrating the problem: ------------------------------- import unittest from unittest import mock class TestMockMagicAssociativeHierarchies(unittest.TestCase): """ Mimicing real-world testing where we mock a base-class The intent here is to demonstrate some of the requirements of associative- hierarchies e.g. where a class may have its associative-parent set at run-time, rather that defining it via a class-hierarchy. Obviously this also needs to work with class-hierarchies, that is an associative-parent is likely to be a specialisation of some interface, usually one that is being mocked. For example tkinter and Qt have both a class-hierarchy and a layout- hierarchy; the latter is modifyable at runtime. Most of the tests here mimic a specialisation of an upstream object (say a tk.Frame class), instantiating that specialisation and then passing it to another object. The reason behind this is an observed regression in Python 3.8. """ def test_raw_magic_moc_passing_thru_no_params(self): """ REGRESSION: Python3.8 (inc Python3.9) Create a mocked specialisation passing it to another mock. One real-world use-case for this is simple cases where we simply want to define a new convenience type that forces a default configuration of the inherited type (calls super().__init__()). 
""" class MockSubCallsParentInit(mock.MagicMock): def __init__(self): super().__init__() # intentionally empty mock_param = MockSubCallsParentInit() mock_object = mock.MagicMock(mock_param) # error is here, instantiating another object self.assertIsInstance(mock_object, mock.MagicMock) def test_raw_magic_moc_passing_thru_single_pos(self): """ REGRESSION: Python3.8 (inc Python3.9) Same as test_raw_magic_moc_no_init_params() but we want to specialise with positional arguments. """ class MockSubCallsParentInitWithPositionalParam(mock.MagicMock): def __init__(self): super().__init__("specialise init calls") mock_param = MockSubCallsParentInitWithPositionalParam() mock_object = mock.MagicMock(mock_param) # error is here, instantiating another object self.assertIsInstance(mock_object, mock.MagicMock) def test_raw_magic_moc_passing_thru_single_kwarg(self): """ REGRESSION: Python3.8 (inc Python3.9) Same as test_raw_magic_moc_passing_thru_single_pos() but we want to specialise with a key-word argument. """ class MockSubCallsParentInitWithPositionalParam(mock.MagicMock): def __init__(self): super().__init__(__some_key_word__="some data") mock_param = MockSubCallsParentInitWithPositionalParam() mock_object = mock.MagicMock(mock_param) # error is here, instantiating another object self.assertIsInstance(mock_object, mock.MagicMock) def test_mock_as_param_no_inheritance(self): """ PASSES Mimic mocking out types, without type specialisation. for example in pseudo code tk.Frame = mock.MagicMock; tk.Frame(t.Frame) """ mock_param = mock.MagicMock() mock_object = mock.MagicMock(mock_param) self.assertIsInstance(mock_object, mock.MagicMock) def test_mock_as_param_no_init_override(self): """ PASSES Leaves the __init__() function behaviour as default; should always work. Note that we do not specialise member functions. 
Although the intent here is similar to the one captured by test_raw_magic_moc_passing_thru_no_params(), this is a less likely usecase, although it does happen, but is here for completeness """ class MockSub(mock.MagicMock): pass mock_param = MockSub() mock_object = mock.MagicMock(mock_param) self.assertIsInstance(mock_object, mock.MagicMock) def test_init_with_args_n_kwargs_passthru(self): """ PASSES Intended to be the same as test_mock_as_param_no_init_override as well as a base-test for ithe usecases where a user will define more complex behaviours such as key-word modification, member-variable definitions and so on. """ class MockSubInitPassThruArgsNKwargs(mock.MagicMock): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # intentionally redundant mock_param = MockSubInitPassThruArgsNKwargs() mock_object = mock.MagicMock(mock_param) self.assertIsInstance(mock_object, mock.MagicMock) def test_init_with_args_n_kwargs_modify_kwargs(self): """ PASSES Same as test_init_with_args_n_kwargs_passthru() but modifies the kwargs dict on the way through the __init__() function. """ class MockSubModifyKwargs(mock.MagicMock): def __init__(self, *args, **kwargs): kwargs["__kw args added__"] = "test value" super().__init__(*args, **kwargs) mock_param = MockSubModifyKwargs() mock_object = mock.MagicMock(mock_param) self.assertIsInstance(mock_object, mock.MagicMock) def test_init_with_args_n_kwargs_modify_args(self): """ PASSES Same as test_init_with_args_n_kwargs_passthru() but modifies the args on their way through the __init__() function. 
""" class MockSubModifyArgs(mock.MagicMock): def __init__(self, *args, **kwargs): super().__init__("test value", *args, **kwargs) mock_param = MockSubModifyArgs() mock_object = mock.MagicMock(mock_param) self.assertIsInstance(mock_object, mock.MagicMock) ---------- components: Library (Lib) messages: 361552 nosy: Frank Harrison priority: normal severity: normal status: open title: MagicMock specialisation instance can no longer be passed to new MagicMock instance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 7 09:44:54 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Fri, 07 Feb 2020 14:44:54 +0000 Subject: [New-bugs-announce] [issue39579] Attribute node in a decorator has wrong end_col_offset Message-ID: <1581086694.35.0.395827343786.issue39579@roundup.psfhosted.org> New submission from Lysandros Nikolaou : There is a problem with the end_col_offset of nested Attribute nodes in decorators. For example, parsing @a.b.c def f(): pass produces the following AST tree (part): decorator_list=[ Attribute( value=Attribute( value=Name( id="a", ctx=Load(), lineno=1, col_offset=1, end_lineno=1, end_col_offset=2, ), attr="b", ctx=Load(), lineno=1, col_offset=1, end_lineno=1, *end_col_offset=6*, ), attr="c", ctx=Load(), lineno=1, col_offset=1, end_lineno=1, end_col_offset=6, ) ], Note that the Attribute node with attr="b" has end_col_offset=6, while it should actually be 4. 
---------- components: Interpreter Core messages: 361595 nosy: gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Attribute node in a decorator has wrong end_col_offset versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 7 16:44:10 2020 From: report at bugs.python.org (Mike Solin) Date: Fri, 07 Feb 2020 21:44:10 +0000 Subject: [New-bugs-announce] [issue39580] Check for COMMAND_LINE_INSTALL variable in Python_Documentation.pkg Message-ID: <1581111850.29.0.0540786191291.issue39580@roundup.psfhosted.org> New submission from Mike Solin : Hello Python developers! I'm looking to deploy Python 3 silently to the Macs that I manage, so I can use Python for various scripts. I'm using Munki to accomplish this. However, the Python_Documentation.pkg subpackage includes this code in the postinstall script: ``` # make link in /Applications/Python m.n/ for Finder users if [ -d "${APPDIR}" ]; then ln -fhs "${FWK_DOCDIR}/index.html" "${APPDIR}/Python Documentation.html" open "${APPDIR}" || true # open the applications folder fi ``` Would it be possible to test for the $COMMAND_LINE_INSTALL variable before opening a Finder window? If the $COMMAND_LINE_INSTALL exists, it'd be really great if it didn't open the Finder. This would allow me to silently deploy Python 3 without disrupting my users. Thanks! 
Mike ---------- components: macOS messages: 361610 nosy: flammable, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Check for COMMAND_LINE_INSTALL variable in Python_Documentation.pkg versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 04:55:36 2020 From: report at bugs.python.org (=?utf-8?b?7J6E7IiY7KeE7ZWZ67aA7IOd?=) Date: Sat, 08 Feb 2020 09:55:36 +0000 Subject: [New-bugs-announce] [issue39581] Python Interpreter Doesn't Work Well In Thread Class Message-ID: <1581155736.57.0.784199472147.issue39581@roundup.psfhosted.org> New submission from ?????? <21600590 at handong.edu>: ================================================================ import threading import time def threadFunc(): while True: print('new thread') time.sleep(2) def main(): th = threading.Thread(target=threadFunc()) th.start() while True: print('main Thread') time.sleep(1) th.join() if __name__ == '__main__': main() ============================================================== When I run the above code in python 3.7, it works in unexpected way. I expected this code causes an syntax error for giving an improper argument to parameter because I gave "threaFunc()" not "threaFun" as an argument of target in Thread class. However, this code executes a function "threadFunc()" as a general function not thread. ---------- components: Windows messages: 361622 nosy: paul.moore, steve.dower, tim.golden, zach.ware, ?????? 
priority: normal severity: normal status: open title: Python Interpreter Doesn't Work Well In Thread Class type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 06:36:07 2020 From: report at bugs.python.org (David CARLIER) Date: Sat, 08 Feb 2020 11:36:07 +0000 Subject: [New-bugs-announce] [issue39582] ossaudiodev update helpers signature Message-ID: <1581161767.2.0.142949694785.issue39582@roundup.psfhosted.org> Change by David CARLIER : ---------- nosy: devnexen priority: normal pull_requests: 17786 severity: normal status: open title: ossaudiodev update helpers signature versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 08:50:05 2020 From: report at bugs.python.org (Skip Montanaro) Date: Sat, 08 Feb 2020 13:50:05 +0000 Subject: [New-bugs-announce] [issue39583] Remove superfluous "extern C" bits from Include/cpython/*.h Message-ID: <1581169805.61.0.60673163843.issue39583@roundup.psfhosted.org> New submission from Skip Montanaro : I noticed that the files in Include/cpython also have extern C declarations, despite the fact that the only files which #include them do as well. Seems like a small bit of cleanup. PR incoming... 
---------- components: C API messages: 361628 nosy: skip.montanaro priority: normal severity: normal status: open title: Remove superfluous "extern C" bits from Include/cpython/*.h type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 09:18:24 2020 From: report at bugs.python.org (Vinay Sharma) Date: Sat, 08 Feb 2020 14:18:24 +0000 Subject: [New-bugs-announce] [issue39584] MacOS crashes by running attached Python code Message-ID: <1581171504.91.0.0818206259105.issue39584@roundup.psfhosted.org> New submission from Vinay Sharma : Consider the following python Code. ``` from multiprocessing.shared_memory import SharedMemory shm = SharedMemory(name='test-crash', create=True, size=1000000000000000000) ``` This causes macOS Catalina, Mojave to freeze and then crash. Although, this works fine on ubuntu. After, debugging I realised that this is due to the ftruncate call. I could replicate the same by calling os.ftruncate and also using ftruncate in C code. Following C++ code also crashes, which confirms that ftruncate in macOS is broken: ``` #include #include #include #include #include #include #include #include #include int main() { int shm_fd = shm_open("/test-shm2", O_CREAT | O_RDWR, 0666); if (shm_fd == -1) { throw "Shared Memory Object couldn't be created or opened"; } int rv = ftruncate(shm_fd, (long long)1000000000000000000); } ``` Should python, in any way handle this, so as to prevent any crashes using python code. 
---------- components: C API messages: 361629 nosy: vinay0410 priority: normal severity: normal status: open title: MacOS crashes by running attached Python code versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 09:27:40 2020 From: report at bugs.python.org (hai shi) Date: Sat, 08 Feb 2020 14:27:40 +0000 Subject: [New-bugs-announce] [issue39585] Delete a pending item in _warning.c Message-ID: <1581172060.12.0.48939566603.issue39585@roundup.psfhosted.org> New submission from hai shi : a pend item could be removed (https://github.com/python/cpython/blob/master/Python/_warnings.c#L493). two reasons: 1) every warning have `__name__` and it must not NULL(`The tp_name slot must be set;` from pep0253) 2) the `__name__` of Warning class(including children class) can not be removed. ``` >>> del UserWarning.__name__ Traceback (most recent call last): File "", line 1, in TypeError: can't set attributes of built-in/extension type 'UserWarning' ``` ---------- components: Interpreter Core messages: 361630 nosy: shihai1991 priority: normal severity: normal status: open title: Delete a pending item in _warning.c versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 10:07:52 2020 From: report at bugs.python.org (Hugo van Kemenade) Date: Sat, 08 Feb 2020 15:07:52 +0000 Subject: [New-bugs-announce] [issue39586] Deprecate bdist_msi: use bdist_wheel instead Message-ID: <1581174472.55.0.332465806553.issue39586@roundup.psfhosted.org> New submission from Hugo van Kemenade : According to the "Deprecate bdist_wininst" discussion (July 2019), bdist_msi can be deprecated: https://discuss.python.org/t/deprecate-bdist-wininst/1929 Victor Stinner wrote: "Now the question is if someone here wants to go further is deprecate all distutils commands except sdist and bdist_wheel? 
Steve Dower wants to deprecate bdist_msi as well: "I'm not sure who use bdist_msi. It's another form of GUI installer, so it's similar to bdist_wininst. I would also strongly encourage to use bdist_wheel rather than bdist_msi." Brett Cannon wrote: "Probably a good idea, but I personally don't have the time." Steve Dower wrote: "I think the others are fine to leave (though if people who work more closely with those tools want to say drop them then I'm fine with that too). "bdist_msi and bdist_exe don't integrate with any other package managers, can't be integrated with any other installer besides our own Python installer, and in any case are worse than simply copying the files. (If we had a bdist_msm I'd be slightly more sympathetic, but we don't and probably should not :) ) "I also don't necessarily think that wheels are always the alternative, particularly for embedded scenarios, but I do think that the fewer options we provide by default will help people find the option they actually need, rather than assuming that because it's 'blessed' it must be correct." And in "Remove distutils bdist_wininst command" (February 2020): https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115 Victor Stinner wrote: "I don't plan to remove bdist_msi, even if wheel packages are now recommended." Steve Dower wrote: "We should, though. Installing a package using an MSI is worse than an EXE, as it leaves far more cruft behind if you don't uninstall it before changing/removing the Python install. "Standalone apps should bundle everything, like pynsist or briefcase. GPO deployment should create their own MSI with everything they want in the bundle and deploy that. Perhaps someone can make an 'installer' based on the py.exe launcher (which I believe supports an attached zip file) that will use pip and a local/embedded wheel. "But we should really discourage package installs that don't support venv and/or leave cruft behind."
Victor Stinner wrote: "If you want to see it disappear, you should start by deprecating it in Python 3.9. It would be a first step." PR to follow. See also: * https://bugs.python.org/issue37481 "Deprecate bdist_wininst: use bdist_wheel instead" * https://bugs.python.org/issue39541 "distutils: Remove bdist_wininst (Windows .exe installers) in favor of bdist_wheel (.whl)" ---------- components: Distutils, Windows messages: 361633 nosy: dstufft, eric.araujo, hugovk, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Deprecate bdist_msi: use bdist_wheel instead versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 16:01:31 2020 From: report at bugs.python.org (Ryan McCampbell) Date: Sat, 08 Feb 2020 21:01:31 +0000 Subject: [New-bugs-announce] [issue39587] Mixin repr overrides Enum repr in some cases Message-ID: <1581195691.69.0.82674637062.issue39587@roundup.psfhosted.org> New submission from Ryan McCampbell : In Python 3.6 the following works: class HexInt(int): def __repr__(self): return hex(self) class MyEnum(HexInt, enum.Enum): A = 1 B = 2 C = 3 >>> MyEnum.A However in Python 3.7/8 it instead prints >>> MyEnum.A 0x1 It uses HexInt's repr instead of Enum's. Looking at the enum.py module it seems that this occurs for mixin classes that don't define __new__ due to a change in the _get_mixins_ method. If I define a __new__ method on the HexInt class then the expected behavior occurs. 
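The report condenses to a short runnable snippet; the reporter's workaround (defining __new__ on the mixin) is shown inline, hedged as one way to restore the old member-type lookup:

```python
import enum

class HexInt(int):
    # Workaround from the report: giving the mixin an explicit __new__
    # makes enum's mixin detection treat it as the member type again.
    def __new__(cls, value):
        return super().__new__(cls, value)

    def __repr__(self):
        return hex(self)

class MyEnum(HexInt, enum.Enum):
    A = 1
    B = 2

# Members still behave as ints regardless of which repr wins.
print(MyEnum.A.value, isinstance(MyEnum.A, int))
```

Note that the exact repr of members varies across 3.6/3.7/3.8 as described above; the snippet only checks the parts that are stable.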
---------- components: Library (Lib) messages: 361635 nosy: rmccampbell7 priority: normal severity: normal status: open title: Mixin repr overrides Enum repr in some cases type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 16:45:29 2020 From: report at bugs.python.org (Andy Lester) Date: Sat, 08 Feb 2020 21:45:29 +0000 Subject: [New-bugs-announce] [issue39588] Use memcpy() instead of for() loops in _PyUnicode_To* Message-ID: <1581198329.98.0.186422149655.issue39588@roundup.psfhosted.org> New submission from Andy Lester : Four functions in Objects/unicodectype.c copy values out of lookup tables with a for loop int i; for (i = 0; i < n; i++) res[i] = _PyUnicode_ExtendedCase[index + i]; instead of a memcpy memcpy(res, &_PyUnicode_ExtendedCase[index], n * sizeof(Py_UCS4)); My Apple clang version 11.0.0 on my Mac optimizes away the for loop and generates equivalent code to the memcpy. gcc 4.8.5 on my Linux box (the newest GCC I have) does not optimize away the loop. The four functions are: _PyUnicode_ToLowerFull _PyUnicode_ToTitleFull _PyUnicode_ToUpperFull _PyUnicode_ToFoldedFull ---------- components: Unicode messages: 361636 nosy: ezio.melotti, petdance, vstinner priority: normal severity: normal status: open title: Use memcpy() instead of for() loops in _PyUnicode_To* type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 8 22:31:28 2020 From: report at bugs.python.org (Simon) Date: Sun, 09 Feb 2020 03:31:28 +0000 Subject: [New-bugs-announce] [issue39589] Logging QueueListener should support context manager Message-ID: <1581219088.53.0.399327051816.issue39589@roundup.psfhosted.org> New submission from Simon : The QueueListener could be extended to support the context manager. 
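A sketch of the requested behaviour (the subclass name is invented; this is not an existing stdlib API): start the listener on __enter__ and flush/stop it on __exit__:

```python
import logging
import logging.handlers
import queue

# Hypothetical wrapper showing the requested context-manager support.
class ManagedQueueListener(logging.handlers.QueueListener):
    def __enter__(self):
        self.start()
        return self

    def __exit__(self, *exc_info):
        self.stop()  # enqueues the sentinel and joins the worker thread
        return False

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

q = queue.Queue()
handler = logging.handlers.QueueHandler(q)
logger = logging.getLogger("demo")
logger.addHandler(handler)

with ManagedQueueListener(q, ListHandler()):
    logger.warning("hello")

logger.removeHandler(handler)
print(records)  # ['hello']
```

Because stop() drains the queue before joining, all records logged inside the with-block are guaranteed to be handled by the time the block exits.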
---------- components: Library (Lib) messages: 361641 nosy: sbrugman priority: normal severity: normal status: open title: Logging QueueListener should support context manager versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 00:02:53 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Sun, 09 Feb 2020 05:02:53 +0000 Subject: [New-bugs-announce] [issue39590] collections.deque.__contains__ and .count should hold strong references. Message-ID: <1581224573.69.0.537740260718.issue39590@roundup.psfhosted.org> New submission from Dennis Sweeney : Similar to https://bugs.python.org/issue39453, but with deques: Python 3.9.0a3+: >>> from collections import deque >>> class A: ... def __eq__(self, other): ... L.clear() ... return NotImplemented ... >>> L = [A(), A(), A()] >>> 17 in L False >>> L = deque([A(), A(), A()]) >>> 17 in L (Crashes with "Unhandled exception thrown: read access violation.") ---------- components: Library (Lib) messages: 361642 nosy: Dennis Sweeney priority: normal severity: normal status: open title: collections.deque.__contains__ and .count should hold strong references. type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 00:44:35 2020 From: report at bugs.python.org (Andy Lester) Date: Sun, 09 Feb 2020 05:44:35 +0000 Subject: [New-bugs-announce] [issue39591] Functions in Python/traceback.c can take const pointer arguments Message-ID: <1581227075.1.0.0304587287578.issue39591@roundup.psfhosted.org> New submission from Andy Lester : The functions tb_displayline and tb_printinternal can take const pointers on some of their arguments. 
tb_displayline(PyObject *f, PyObject *filename, int lineno, const PyObject *name) tb_printinternal(const PyTracebackObject *tb, PyObject *f, long limit) ---------- components: Interpreter Core messages: 361643 nosy: petdance priority: normal severity: normal status: open title: Functions in Python/traceback.c can take const pointer arguments type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 02:08:17 2020 From: report at bugs.python.org (F4zi) Date: Sun, 09 Feb 2020 07:08:17 +0000 Subject: [New-bugs-announce] [issue39592] Year not updated at python.org Message-ID: <1581232097.07.0.139601971963.issue39592@roundup.psfhosted.org> New submission from F4zi : The year in https://www.python.org/psf/donations/python-dev/ should be changed from 2019 to 2020. ---------- assignee: docs at python components: Documentation messages: 361645 nosy: F4zi, docs at python, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal status: open title: Year not updated at python.org _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 11:17:25 2020 From: report at bugs.python.org (hai shi) Date: Sun, 09 Feb 2020 16:17:25 +0000 Subject: [New-bugs-announce] [issue39593] Adding a unit test of ctypes Message-ID: <1581265045.65.0.359928840546.issue39593@roundup.psfhosted.org> New submission from hai shi : strlen(data) cannot be replaced by Py_SIZE(value) in https://github.com/python/cpython/blob/master/Modules/_ctypes/cfield.c#L1297. Victor has given a great example of this in https://github.com/python/cpython/pull/18419 I created this bpo for two reasons: 1. This question info could be removed (such question info will catch my attention). 2. The current tests should be enhanced (they could not help me find this backward incompatibility :().
---------- components: Tests messages: 361656 nosy: shihai1991 priority: normal severity: normal status: open title: Adding a unit test of ctypes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 12:30:30 2020 From: report at bugs.python.org (Chenyoo Hao) Date: Sun, 09 Feb 2020 17:30:30 +0000 Subject: [New-bugs-announce] [issue39594] Typo in documentation for os.times Message-ID: <1581269430.53.0.74728045521.issue39594@roundup.psfhosted.org> New submission from Chenyoo Hao : 1. Formatting error due to an extra space (after the MSDN link). 2. There are extra words. Original: See the Unix manual page :manpage:`times(2)` and :manpage:`times(3)` manual page on Unix or `the GetProcessTimes MSDN ` _ on Windows. On Windows, only :attr:`user` and :attr:`system` are known; the other attributes are zero. After modification: See the manual page :manpage:`times(2)` and :manpage:`times(3)` on Unix or the `GetProcessTimes MSDN `_ on Windows. On Windows, only :attr:`user` and :attr:`system` are known; the other attributes are zero. ---------- assignee: docs at python components: Documentation messages: 361659 nosy: Chenyoo Hao, docs at python priority: normal severity: normal status: open title: Typo in documentation for os.times type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 9 16:01:21 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Sun, 09 Feb 2020 21:01:21 +0000 Subject: [New-bugs-announce] [issue39595] Improve performance of importlib.metadata and zipfile.Path Message-ID: <1581282081.96.0.175299571443.issue39595@roundup.psfhosted.org> New submission from Jason R. Coombs : As reported in [jaraco/zipp#32](https://github.com/jaraco/zipp/issues/32), performance of zipfile.Path is inadequate.
This bug tracks the incorporation of those improvements as well as those in [importlib_metadata 1.5](https://importlib-metadata.readthedocs.io/en/latest/changelog%20(links).html#v1-5-0). ---------- messages: 361663 nosy: jaraco priority: normal severity: normal status: open title: Improve performance of importlib.metadata and zipfile.Path versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 02:35:43 2020 From: report at bugs.python.org (wyz23x2) Date: Mon, 10 Feb 2020 07:35:43 +0000 Subject: [New-bugs-announce] [issue39596] reverse parameter for enumerate() Message-ID: <1581320143.14.0.190118566768.issue39596@roundup.psfhosted.org> New submission from wyz23x2 : Starting from Python 2.3, the handy enumerate() was introduced. However, I suggest adding a "reverse" parameter: >>> lis = ['a', 'b', 'c', 'd'] >>> list(enumerate(lis)) [(0,'a'),(1,'b'),(2,'c'),(3,'d')] >>> list(enumerate(lis, reverse=True)) [('a',0),('b',1),('c',2),('d',3)] >>> ---------- components: Build messages: 361670 nosy: wyz23x2 priority: normal severity: normal status: open title: reverse parameter for enumerate() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 03:41:14 2020 From: report at bugs.python.org (Shani M) Date: Mon, 10 Feb 2020 08:41:14 +0000 Subject: [New-bugs-announce] [issue39597] sorting the String Message-ID: <1581324074.07.0.0879941203832.issue39597@roundup.psfhosted.org> New submission from Shani M : It is showing the wrong string order. 'sss' is supposed to appear in 3rd place but it comes last. 'qwe' is supposed to appear last but it comes in 3rd place.
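For context, this is the documented behaviour rather than a bug: str sorting is lexicographic by code point, not by length or insertion order. A minimal illustration with made-up stand-in data (the reporter's actual list is only in the attached screenshot):

```python
# Hypothetical stand-in data; 'q' sorts before 's' by code point,
# so 'qwe' comes before 'sss' regardless of where they were inserted.
words = ["sss", "qwe", "abc"]
print(sorted(words))
```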
---------- files: Screenshot from 2020-02-10 14-08-52.png messages: 361675 nosy: Shani M priority: normal severity: normal status: open title: sorting the String versions: Python 2.7 Added file: https://bugs.python.org/file48886/Screenshot from 2020-02-10 14-08-52.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 07:23:58 2020 From: report at bugs.python.org (Thomas Reed) Date: Mon, 10 Feb 2020 12:23:58 +0000 Subject: [New-bugs-announce] [issue39598] ERR_CACHE_MISS Message-ID: <1581337438.53.0.0366477237465.issue39598@roundup.psfhosted.org> New submission from Thomas Reed : Hi, I have a problem with caching. If someone stays on a product detail page for longer than 5 minutes and then clicks the "back" button, it raises the error "ERR_CACHE_MISS". Do you know how I can solve it? Thank you :) ---------- components: Windows messages: 361683 nosy: judiction, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ERR_CACHE_MISS _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 07:54:16 2020 From: report at bugs.python.org (Julien Danjou) Date: Mon, 10 Feb 2020 12:54:16 +0000 Subject: [New-bugs-announce] [issue39599] ABI breakage between 3.7.4 and 3.7.5 Message-ID: <1581339256.79.0.552474006467.issue39599@roundup.psfhosted.org> New submission from Julien Danjou : As I've reported originally on the python-dev list, there seems to be an ABI breakage between 3.7.4 and 3.7.5. https://mail.python.org/archives/list/python-dev at python.org/message/J2FGZPS5PS7473TONJTPAVSNXRGV3TFL/ The culprit commit is https://github.com/python/cpython/commit/8766cb74e186d3820db0a855ccd780d6d84461f7 This happens on a custom C module (built via Cython) when including with -DPy_BUILD_CORE. I'm not sure it'd happen otherwise.
I've tried to provide a minimal use case, but since it seems to be a memory overflow, the backtrace does not make any sense and it's hard to reproduce without the original code. ---------- components: C API messages: 361689 nosy: jd priority: normal severity: normal status: open title: ABI breakage between 3.7.4 and 3.7.5 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 10:44:03 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 10 Feb 2020 15:44:03 +0000 Subject: [New-bugs-announce] [issue39600] idle_test: test_fontlist_key() fails if two fonts have the same name Message-ID: <1581349443.21.0.804865306435.issue39600@roundup.psfhosted.org> New submission from STINNER Victor : ====================================================================== FAIL: test_fontlist_key (idlelib.idle_test.test_configdialog.FontPageTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python3.7/idlelib/idle_test/test_configdialog.py", line 104, in test_fontlist_key self.assertNotEqual(down_font, font) AssertionError: 'Cantarell' == 'Cantarell' Example: --- import tkinter.ttk import tkinter.font frame = tkinter.ttk.Frame() families=sorted(tkinter.font.families(frame)) for family in families: print(family) --- Truncated output on my Fedora 31: --- Abyssinica SIL Android Emoji AnjaliOldLipi Bitstream Vera Sans C059 Caladea Cantarell Cantarell Cantarell Cantarell (...) Comfortaa Comfortaa (...) DejaVu Sans DejaVu Sans DejaVu Sans (...) Source Han Serif CN Source Han Serif CN Source Han Serif CN Source Han Serif CN Source Han Serif CN Source Han Serif CN Source Han Serif TW Source Han Serif TW Source Han Serif TW Source Han Serif TW Source Han Serif TW Source Han Serif TW (...) --- I'm not sure if it's an issue in the unit test or an issue with the widget itself.
Does it make sense to display the same font family name twice? ---------- assignee: terry.reedy components: IDLE, Tests messages: 361700 nosy: cheryl.sabella, taleinat, terry.reedy, vstinner priority: normal severity: normal status: open title: idle_test: test_fontlist_key() fails if two fonts have the same name versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 11:48:04 2020 From: report at bugs.python.org (JitterMan) Date: Mon, 10 Feb 2020 16:48:04 +0000 Subject: [New-bugs-announce] [issue39601] brace escapes are not working in formatted string literal format specifications Message-ID: <1581353284.54.0.569958060726.issue39601@roundup.psfhosted.org> New submission from JitterMan : It appears as if escaping the braces by doubling them up is not working properly if the braces are in a format specification within a f-string. >>> print(f'Email:\n {C:{{v.name}} {{v.email}}|\n }') Traceback (most recent call last): File "bugreport.py", line 95, in print(f'Email:\n {C:{{v.name}} {{v.email}}|\n }') NameError: name 'v' is not defined The escaping works as expected when the string's format method is used. 
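For context, PEP 498 treats doubled braces as escapes only *outside* replacement fields; inside a format spec, braces open a nested replacement field, which is why {{v.name}} in the report's spec is evaluated and raises NameError for v. A minimal illustration (v_name is an invented value):

```python
v_name = "Alice"  # hypothetical value

# Outside a replacement field, doubled braces are literal braces:
print(f"{{v.name}}")

# Inside a format spec, braces start a *nested* replacement field;
# here the nested field supplies the alignment width:
print(f"{'x':>{len(v_name)}}")
```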
---------- components: 2to3 (2.x to 3.x conversion tool) files: bugreport.py messages: 361702 nosy: jitterman priority: normal severity: normal status: open title: brace escapes are not working in formatted string literal format specifications type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48887/bugreport.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 13:19:50 2020 From: report at bugs.python.org (Pox TheGreat) Date: Mon, 10 Feb 2020 18:19:50 +0000 Subject: [New-bugs-announce] [issue39602] importlib: lazy loading can result in reimporting a submodule Message-ID: <1581358790.35.0.810808142023.issue39602@roundup.psfhosted.org> New submission from Pox TheGreat : Using the LazyLoader class one can modify the sys.meta_path finders so that every import mechanism becomes lazy. This method has been used in Mercurial and by Facebook. My problem is that if I have a package (pa) which imports a submodule (a) in the __init__.py and accesses its attributes (or uses a from list) then that submodule is imported (executed) twice without any warning. I traced back the problem to importlib._bootstrap.py / _find_and_load_unlocked. There is a check there if the submodule has already been imported by the parent package, but the submodule will be imported just after that check because of the _LazyModule and the __path__ attribute access of the submodule. # Crazy side-effects! if name in sys.modules: return sys.modules[name] parent_module = sys.modules[parent] try: path = parent_module.__path__ Maybe we could check if name in sys.modules after the __path__ attribute access? 
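The lazy-loading mechanism under discussion can be exercised standalone with importlib.util.LazyLoader; this sketch (module name and contents invented) shows the deferred execution that interacts badly with the sys.modules check quoted above:

```python
import importlib.util
import os
import sys
import tempfile

# Standalone demo of importlib.util.LazyLoader: the module body runs on
# first attribute access, not when exec_module() is called.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_mod.py")
    with open(path, "w") as f:
        f.write("VALUE = 42\n")

    spec = importlib.util.spec_from_file_location("demo_mod", path)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules["demo_mod"] = module
    spec.loader.exec_module(module)  # body has NOT executed yet
    value = module.VALUE             # attribute access triggers the real load
    print(value)

sys.modules.pop("demo_mod", None)
```

With a package whose __init__ touches a submodule attribute, that deferred load is what fires between the sys.modules check and the __path__ access in _find_and_load_unlocked.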
---------- components: Library (Lib) files: LazyImport.zip messages: 361705 nosy: Pox TheGreat priority: normal severity: normal status: open title: importlib: lazy loading can result in reimporting a submodule type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48889/LazyImport.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 14:29:35 2020 From: report at bugs.python.org (Max) Date: Mon, 10 Feb 2020 19:29:35 +0000 Subject: [New-bugs-announce] [issue39603] Injection in http.client Message-ID: <1581362975.61.0.33794022777.issue39603@roundup.psfhosted.org> New submission from Max : I recently came across a bug during a pentest that's allowed me to perform some really interesting attacks on a target. While originally discovered in requests, I had been forwarded to one of the urllib3 developers after agreeing that fixing it at it's lowest level would be preferable. I was informed that the vulnerability is also present in http.client and that I should report it here as well. The 'method' parameter is not filtered to prevent the injection from altering the entire request. For example: >>> conn = http.client.HTTPConnection("localhost", 80) >>> conn.request(method="GET / HTTP/1.1\r\nHost: abc\r\nRemainder:", url="/index.html") This will result in the following request being generated: GET / HTTP/1.1 Host: abc Remainder: /index.html HTTP/1.1 Host: localhost Accept-Encoding: identity This was originally found in an HTTP proxy that was utilising Requests. It allowed me to manipulate the original path to access different files from an internal server since the developers had assumed that the method would filter out non-standard HTTP methods. The recommended solution is to only allow the standard HTTP methods of GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH. 
An alternate solution that would allow programmers to use non-standard methods would be to only support characters [a-z] and stop reading at any special characters (especially newlines and spaces). ---------- components: Library (Lib) messages: 361710 nosy: maxpl0it priority: normal severity: normal status: open title: Injection in http.client type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 17:21:41 2020 From: report at bugs.python.org (Paul Ganssle) Date: Mon, 10 Feb 2020 22:21:41 +0000 Subject: [New-bugs-announce] [issue39604] Document PyDateTimeAPI / PyDateTime_CAPI struct Message-ID: <1581373301.32.0.397691619207.issue39604@roundup.psfhosted.org> New submission from Paul Ganssle : The entire public interface documented for the datetime C API is various C macros (see: https://docs.python.org/3/c-api/datetime.html) which are wrappers around function calls to the PyDateTimeAPI / PyDateTime_CAPI struct, but the struct itself is undocumented. Unfortunately (or fortunately, depending on how you think the C API should look), pretty much everyone has to know the implementation details of the C API struct anyway. Bindings in other languages usually can't use the C preprocessor macros and have to directly use the C API struct, so projects like PyPy, PyO3 and Cython are using it. The struct also can do things that the macros can't do: consider bug #30155 which is looking for a way to create a datetime object with a tzinfo (which is possible using the C struct). I think we should go ahead and make the `PyDateTimeAPI` struct "public" and document the functions on it.
This may be a bit tougher than one would hope because the overlap between the macros and the struct functions isn't 100%, but it's pretty close, so I would think we'd want to document the two ways to do things rather close to one another. nosy-ing Victor on here in case he has any strong opinions about whether these kinds of struct should be exposed as part of the official public interface. ---------- assignee: docs at python components: C API, Documentation messages: 361733 nosy: belopolsky, docs at python, lemburg, p-ganssle, vstinner priority: normal severity: normal status: open title: Document PyDateTimeAPI / PyDateTime_CAPI struct versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 10 23:45:37 2020 From: report at bugs.python.org (Andy Lester) Date: Tue, 11 Feb 2020 04:45:37 +0000 Subject: [New-bugs-announce] [issue39605] Fix some casts to not cast away const Message-ID: <1581396337.96.0.129945209612.issue39605@roundup.psfhosted.org> New submission from Andy Lester : gcc -Wcast-qual turns up a number of instances of casting away constness of pointers. Some of these can be safely modified, by either: * Adding the const to the type cast, as in: - return _PyUnicode_FromUCS1((unsigned char*)s, size); + return _PyUnicode_FromUCS1((const unsigned char*)s, size); * Removing the cast entirely, because it's not necessary (but probably was at one time), as in: - PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno); + PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno); These changes will not change code, but they will make it much easier to check for errors in consts. 
---------- components: Interpreter Core messages: 361780 nosy: petdance priority: normal severity: normal status: open title: Fix some casts to not cast away const type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 01:35:23 2020 From: report at bugs.python.org (Nathaniel Smith) Date: Tue, 11 Feb 2020 06:35:23 +0000 Subject: [New-bugs-announce] [issue39606] Regression: it should be possible to close async iterators multiple times Message-ID: <1581402923.1.0.805434272092.issue39606@roundup.psfhosted.org> New submission from Nathaniel Smith : In bpo-39386, the 'aclose' method for async generators was fixed so that the following broken code would correctly raise an error: # -- bad code -- async def agen_fn(): yield async def do_bad_thing(): agen = agen_fn() aclose_coro = agen.aclose() await aclose_coro # Should raise RuntimeError: await aclose_coro asyncio.run(do_bad_thing()) # ----------------- However, the merged fix also broke the following correct code, which should *not* raise an error: # -- good code -- async def agen_fn(): yield async def do_good_thing(): agen = agen_fn() await agen.aclose() # Should *not* raise an error, but currently does in Python dev branches: await agen.aclose() asyncio.run(do_good_thing()) # ---------------- It's not supported to iterate the same coroutine object twice. However, making two *independent* calls to aclose should return two independent coroutine objects, and it should be legal to iterate over each object once. This can also occur even if there's only a single call to 'aclose'. 
For example, this is the recommended idiom for making sure that an async generator correctly closes all its resources: # -- good code -- async def agen_fn(): yield async def careful_loop(): agen = agen_fn() try: async for _ in agen: pass finally: await agen.aclose() asyncio.run(careful_loop()) # ------------------- On released Python, the code above runs correctly. On the latest Python dev branches, it raises a RuntimeError. So basically the problem is that the fix for bpo-39386 is storing the "was aclose iterated?" state on the async generator object, but in fact it needs to be stored on the aclose coroutine object. ---------- keywords: 3.7regression, 3.8regression, 3.9regression messages: 361783 nosy: asvetlov, lukasz.langa, ned.deily, njs, yselivanov priority: release blocker severity: normal status: open title: Regression: it should be possible to close async iterators multiple times versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 04:40:20 2020 From: report at bugs.python.org (Chris Rogers) Date: Tue, 11 Feb 2020 09:40:20 +0000 Subject: [New-bugs-announce] [issue39607] Add a parameter to strip, lstrip, and rstrip that treats the first parameter as a full string Message-ID: <1581414020.64.0.469287582867.issue39607@roundup.psfhosted.org> New submission from Chris Rogers : Consider this string: 'mailto:mailto:mailto:main at example.com' If you try to remove the mailtos with lstrip('mailto:'), you'll be left with this: 'n at example.com' That's because the three strip functions look at each character separately rather than as a whole string. Currently, as a workaround, you have to either use regex or a loop. This can take several lines of code. It would be great if the strip functions had a second parameter that lets you keep the first parameter intact. 
You could then use this code to get the desired result: 'mailto:mailto:mailto:main at example.com'.lstrip('mailto:', True) >>main at example.com ---------- messages: 361791 nosy: Chris Rogers priority: normal severity: normal status: open title: Add a parameter to strip, lstrip, and rstrip that treats the first parameter as a full string type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 06:04:06 2020 From: report at bugs.python.org (wyz23x2) Date: Tue, 11 Feb 2020 11:04:06 +0000 Subject: [New-bugs-announce] [issue39608] Bug in 00000000000000000 Message-ID: <1581419046.4.0.944643641611.issue39608@roundup.psfhosted.org> New submission from wyz23x2 : Why is this? >>> 0000000000000000000000000000000 # No error 0 >>> 0000000000000000000000000000002 SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers ---------- components: Build messages: 361796 nosy: wyz23x2 priority: normal severity: normal status: open title: Bug in 00000000000000000 type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 06:35:18 2020 From: report at bugs.python.org (Markus Mohrhard) Date: Tue, 11 Feb 2020 11:35:18 +0000 Subject: [New-bugs-announce] [issue39609] Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor Message-ID: <1581420918.89.0.0710307394.issue39609@roundup.psfhosted.org> New submission from Markus Mohrhard : The ThreadPoolExecutor in BaseEventLoop.run_in_executor should set a thread_name_prefix to simplify debugging. It might also be worth limiting the maximum number of threads. On our 256 core machines we sometimes get 1000+ threads due to the cpu_count() * 5 default limit.
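Until asyncio names its default-executor threads itself, applications can install their own named (and bounded) executor; a sketch of the workaround (the "asyncio-default" prefix and the cap of 32 are arbitrary choices, not asyncio defaults):

```python
import asyncio
import concurrent.futures
import threading

async def main():
    loop = asyncio.get_running_loop()
    # Hypothetical setup: bound the pool and name its threads so they
    # are recognisable in debuggers and thread dumps.
    executor = concurrent.futures.ThreadPoolExecutor(
        max_workers=32, thread_name_prefix="asyncio-default")
    loop.set_default_executor(executor)
    # Passing None uses the default executor we just installed.
    name = await loop.run_in_executor(
        None, lambda: threading.current_thread().name)
    executor.shutdown(wait=True)
    return name

result = asyncio.run(main())
print(result)  # e.g. asyncio-default_0
```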
---------- components: asyncio messages: 361799 nosy: Markus Mohrhard, asvetlov, yselivanov priority: normal severity: normal status: open title: Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 10:42:52 2020 From: report at bugs.python.org (Eric Wieser) Date: Tue, 11 Feb 2020 15:42:52 +0000 Subject: [New-bugs-announce] [issue39610] memoryview.__len__ should raise an exception for 0d buffers Message-ID: <1581435772.39.0.91175671687.issue39610@roundup.psfhosted.org> New submission from Eric Wieser : Right now, the behavior is: >>> import numpy as np >>> arr_0d = np.array(42) >>> mem_0d = memoryview(arr_0d) >>> len(mem_0d) 1 >>> mem_0d[0] TypeError: invalid indexing of 0-dim memory It seems bizarre to have this object pretend to be a sequence when you ask for its length, yet not behave like one when you actually try to use this length. I'd suggest cpython should behave like numpy here, and fail: >>> len(arr_0d) TypeError: len() of unsized object Perhaps `TypeError: cannot get length of 0-dim memory` would be more appropriate as a message. 
--- Wasn't sure how to classify this, feel free to reclassify ---------- components: Interpreter Core messages: 361821 nosy: Eric Wieser, skrah priority: normal severity: normal status: open title: memoryview.__len__ should raise an exception for 0d buffers type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 11:50:02 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 11 Feb 2020 16:50:02 +0000 Subject: [New-bugs-announce] [issue39611] PyVectorcall_NARGS(): change return type to Py_ssize_t Message-ID: <1581439802.08.0.269605929245.issue39611@roundup.psfhosted.org> New submission from STINNER Victor : I propose to change PyVectorcall_NARGS() return type from unsigned size_t to signed Py_ssize_t. Currently, the function is defined as: static inline Py_ssize_t PyVectorcall_NARGS(size_t n) { return n & ~PY_VECTORCALL_ARGUMENTS_OFFSET; } But in CPython code base, the result is always stored in a *signed* Py_ssize_t: Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); Sometimes, this nargs is passed to _PyObject_MakeTpCall() which expects nargs to be Py_ssize_t, so it's consistent. In general in CPython, a size uses type Py_ssize_t, not size_t. Example: PyVarObject.ob_size type is Py_ssize_t. 
---------- components: C API messages: 361824 nosy: jdemeyer, petr.viktorin, vstinner priority: normal severity: normal status: open title: PyVectorcall_NARGS(): change return type to Py_ssize_t versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 14:35:51 2020 From: report at bugs.python.org (ArtOfCode Studio) Date: Tue, 11 Feb 2020 19:35:51 +0000 Subject: [New-bugs-announce] [issue39612] Python UnicodeDecodeError while using modulefinder Message-ID: <1581449751.58.0.336622448133.issue39612@roundup.psfhosted.org> New submission from ArtOfCode Studio : I want to find all imported modules in a Python program. I am using the ``modulefinder`` standard module for the job. I am trying to follow [this example](https://docs.python.org/3.8/library/modulefinder.html#example-usage-of-modulefinder) in the docs, but I get this error even though I use the same code as the documentation: ````python Traceback (most recent call last): File "C:\Users\Dinçel\Desktop\Deploy\zipperutils\find modules.py", line 4, in <module> finder.run_script('bacon.py') File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 165, in run_script self.load_module('__main__', fp, pathname, stuff) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 360, in load_module self.scan_code(co, m) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 433, in scan_code self._safe_import_hook(name, m, fromlist, level=0) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 378, in _safe_import_hook self.import_hook(name, caller, level=level) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 177, in import_hook q, tail = self.find_head_package(parent, name) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 233, in
find_head_package q = self.import_module(head, qname, parent) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 326, in import_module m = self.load_module(fqname, fp, pathname, stuff) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 360, in load_module self.scan_code(co, m) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 433, in scan_code self._safe_import_hook(name, m, fromlist, level=0) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 378, in _safe_import_hook self.import_hook(name, caller, level=level) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 177, in import_hook q, tail = self.find_head_package(parent, name) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 233, in find_head_package q = self.import_module(head, qname, parent) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 326, in import_module m = self.load_module(fqname, fp, pathname, stuff) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 360, in load_module self.scan_code(co, m) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 433, in scan_code self._safe_import_hook(name, m, fromlist, level=0) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 378, in _safe_import_hook self.import_hook(name, caller, level=level) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 177, in import_hook q, tail = self.find_head_package(parent, name) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 233, in find_head_package q = self.import_module(head, qname, parent) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 326, in
import_module m = self.load_module(fqname, fp, pathname, stuff) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 360, in load_module self.scan_code(co, m) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 466, in scan_code self.scan_code(c, m) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 433, in scan_code self._safe_import_hook(name, m, fromlist, level=0) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 378, in _safe_import_hook self.import_hook(name, caller, level=level) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 177, in import_hook q, tail = self.find_head_package(parent, name) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 233, in find_head_package q = self.import_module(head, qname, parent) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 326, in import_module m = self.load_module(fqname, fp, pathname, stuff) File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\modulefinder.py", line 343, in load_module co = compile(fp.read()+'\n', pathname, 'exec') File "C:\Users\Dinçel\AppData\Local\Programs\Python\Python38\lib\encodings\cp1254.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 308: character maps to <undefined> ```` My operating system is Windows 10. I am using Python 3.8.1. If there is a better way of finding imported modules, please let me know. Also, is there any way to tell Python to use Unicode for directory names?
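For reference, a self-contained sketch of the documented ModuleFinder usage (the file name `bacon.py` follows the docs example; the script body here is plain ASCII, which sidesteps the locale-codec decoding failure in the traceback above):

```python
import os
import tempfile
from modulefinder import ModuleFinder

with tempfile.TemporaryDirectory() as tmp:
    script = os.path.join(tmp, "bacon.py")
    with open(script, "w", encoding="ascii") as f:
        f.write("import os\nimport collections\n")

    finder = ModuleFinder()
    # run_script() scans the script and everything it imports; the report's
    # UnicodeDecodeError fires when a scanned file can't be read in the
    # locale codec (cp1254 in the traceback above).
    finder.run_script(script)
    found = sorted(finder.modules)

print("os" in found, "collections" in found)
```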
---------- components: Unicode messages: 361829 nosy: ArtOfCode Studio, ezio.melotti, vstinner priority: normal severity: normal status: open title: Python UnicodeDecodeError while using modulefinder versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 14:51:37 2020 From: report at bugs.python.org (Artur Rodrigues) Date: Tue, 11 Feb 2020 19:51:37 +0000 Subject: [New-bugs-announce] [issue39613] IsolatedAsyncioTestCase closes default event loop Message-ID: <1581450697.14.0.885131821456.issue39613@roundup.psfhosted.org> New submission from Artur Rodrigues : IsolatedAsyncioTestCase closes the default event loop. This means that subsequent test cases executed within the same application that don't create their own event loop will fail. This seems like a behaviour change that wasn't raised on the original PR. $ cat test.py from unittest import IsolatedAsyncioTestCase, TestCase, main import asyncio class Test1(IsolatedAsyncioTestCase): async def test_one(self): self.assertTrue(True) class Test2(TestCase): def test_two(self): loop = asyncio.get_event_loop() self.assertTrue(True) if __name__ == "__main__": main() $ /usr/local/opt/python@3.8/bin/python3 test.py .E ====================================================================== ERROR: test_two (__main__.Test2) ---------------------------------------------------------------------- Traceback (most recent call last): File "test.py", line 13, in test_two loop = asyncio.get_event_loop() File "/usr/local/Cellar/python@3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/events.py", line 639, in get_event_loop raise RuntimeError('There is no current event loop in thread %r.' RuntimeError: There is no current event loop in thread 'MainThread'.
---------------------------------------------------------------------- Ran 2 tests in 0.006s FAILED (errors=1) $ uname -a Darwin arturhoo-mbp 18.7.0 Darwin Kernel Version 18.7.0: Sun Dec 1 18:59:03 PST 2019; root:xnu-4903.278.19~1/RELEASE_X86_64 x86_64 ---------- components: asyncio messages: 361831 nosy: arturhoo, asvetlov, yselivanov priority: normal severity: normal status: open title: IsolatedAsyncioTestCase closes default event loop type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 11 18:17:29 2020 From: report at bugs.python.org (Dirk Herrmann) Date: Tue, 11 Feb 2020 23:17:29 +0000 Subject: [New-bugs-announce] [issue39614] Documentation of attribute reference is unclear Message-ID: <1581463049.18.0.764231801329.issue39614@roundup.psfhosted.org> New submission from Dirk Herrmann : Trying to understand attribute reference in Python, I was lost: * In the "Python Language Reference" (I will refer to this as PLR, sorry if that is uncommon, did not find an abbreviation in the glossary): Chapter 6.3.1 is about attribute reference. How the attribute reference actually works is not explained in detail, only with the sentence "This object is then asked to produce the attribute whose name is the identifier." which I find vague. Moreover, in PLR 6.3.1 it is said that it can be customized overriding "__getattr__()", but again, details are unclear. And, when following the link to "__getattr__()" it turns out that "__getattr__()" is not called for attribute access, but only in certain circumstances: * PLR 3.3.1 section "object.__getattr__(self, name)" explains that this is only called when "default attribute access" fails. There is nowhere an explanation of "default attribute access", it is also not mentioned in the index. 
There is some explanation in parentheses what it means if "default attribute access" fails, but the actual procedure of the "default attribute access" is still not clear. A bit further down in this section it is also mentioned that if an attribute is found using the "normal mechanism" then "__getattr__()" is not called - again not explaining what the "normal mechanism" is. There is some reference to "__getattribute__()" here, saying that with "__getattribute__()" there would be "total control over attribute access", but this leads again to confusion: * PLR 3.3.1 section "object.__getattribute__(self, name)" indicates that this "may still be bypassed" in certain circumstances, referring to PLR 3.3.10, special method lookup, which refers to the "conventional lookup process", to which this is an exception. The basis why this is an exception remains unclear - is it that certain method names are detected during attribute reference? Summary: There is not (or I was too stupid to find) a concise description of how attribute reference works. There are several terms used to refer to certain aspects of it: "default attribute access", "normal mechanism [of attribute access]", "conventional lookup process", which may or may not refer to the same thing, which seems not to be documented anyway. 
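The lookup order the report asks the docs to spell out can be demonstrated in a few lines: __getattribute__ runs for every attribute access, and __getattr__ is only the fallback invoked when that default lookup raises AttributeError.

```python
# Sketch of the attribute lookup order discussed above.
class Demo:
    existing = "found normally"

    def __getattribute__(self, name):
        # Runs for *every* attribute access.
        print("__getattribute__:", name)
        return object.__getattribute__(self, name)

    def __getattr__(self, name):
        # Fallback: only runs when the default lookup raised AttributeError.
        print("__getattr__ fallback:", name)
        return f"synthesized {name}"

d = Demo()
print(d.existing)   # only __getattribute__ fires
print(d.missing)    # __getattribute__ fails, then __getattr__ fires
```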
---------- assignee: docs at python components: Documentation messages: 361837 nosy: Dirk Herrmann, docs at python priority: normal severity: normal status: open title: Documentation of attribute reference is unclear type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 06:55:07 2020 From: report at bugs.python.org (Peter Eisentraut) Date: Wed, 12 Feb 2020 11:55:07 +0000 Subject: [New-bugs-announce] [issue39615] cpython/abstract.h not compatible with C90 Message-ID: <1581508507.12.0.731337298421.issue39615@roundup.psfhosted.org> New submission from Peter Eisentraut : Some inline functions use mixed declarations and code. These end up visible in third-party code that includes Python.h, which might not be using a C99 compiler. Example: In file included from /Users/peter/python-builds/3.9/include/python3.9/abstract.h:843, from /Users/peter/python-builds/3.9/include/python3.9/Python.h:147, from plpython.h:59, from plpy_typeio.h:10, from plpy_cursorobject.h:8, from plpy_cursorobject.c:14: /Users/peter/python-builds/3.9/include/python3.9/cpython/abstract.h:74:5: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement] 74 | Py_ssize_t offset = tp->tp_vectorcall_offset; | ^~~~~~~~~~ ---------- components: Interpreter Core messages: 361880 nosy: petere priority: normal severity: normal status: open title: cpython/abstract.h not compatible with C90 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 07:37:07 2020 From: report at bugs.python.org (=?utf-8?q?Ville_Skytt=C3=A4?=) Date: Wed, 12 Feb 2020 12:37:07 +0000 Subject: [New-bugs-announce] [issue39616] SSLContext.check_hostname description is inaccurate wrt match_hostname Message-ID: <1581511027.29.0.201714121734.issue39616@roundup.psfhosted.org> New submission from Ville 
Skyttä : Doc says "Whether to match the peer cert's hostname with match_hostname() in SSLSocket.do_handshake()." but match_hostname() is no longer used to do that in the first place; OpenSSL is used for that. check_hostname affects the OpenSSL usage, too. ---------- assignee: docs at python components: Documentation messages: 361892 nosy: docs at python, scop priority: normal severity: normal status: open title: SSLContext.check_hostname description is inaccurate wrt match_hostname type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 12:21:04 2020 From: report at bugs.python.org (sds) Date: Wed, 12 Feb 2020 17:21:04 +0000 Subject: [New-bugs-announce] [issue39617] max_workers argument to concurrent.futures.ProcessPoolExecutor is not flexible enough Message-ID: <1581528064.43.0.461449541499.issue39617@roundup.psfhosted.org> New submission from sds : The number of workers (max_workers) I want to use often depends on the server load. Imagine this scenario: I have 64 CPUs and I need to run 200 processes. However, others are using the server too, so currently loadavg is 50, thus I will set `max_workers` to (say) 20. But 2 hours later when those 20 processes are done, loadavg is now 0 (because the 50 processes run by my colleagues are done too), so I want to increase the pool size max_workers to 70. It would be nice if it were possible to adjust the pool size depending on the server loadavg when a worker is started. Basically, the intent is maintaining a stable load average and full resource utilization.
---------- components: Library (Lib) messages: 361905 nosy: sam-s priority: normal severity: normal status: open title: max_workers argument to concurrent.futures.ProcessPoolExecutor is not flexible enough type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 12:56:31 2020 From: report at bugs.python.org (Denis Vieira) Date: Wed, 12 Feb 2020 17:56:31 +0000 Subject: [New-bugs-announce] [issue39618] logger.exception with default message Message-ID: <1581530191.64.0.316604727727.issue39618@roundup.psfhosted.org> New submission from Denis Vieira : In my Python projects I like to use the logger.exception() method without any other message. I'm forced to send an empty string on every call. logger.exception('') It would be nice if the exception method had its "msg" parameter with a default value of ''. ---------- messages: 361908 nosy: Denis Vieira priority: normal severity: normal status: open title: logger.exception with default message type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 17:33:53 2020 From: report at bugs.python.org (Ian Norton) Date: Wed, 12 Feb 2020 22:33:53 +0000 Subject: [New-bugs-announce] [issue39619] os.chroot is not enabled on HP-UX builds Message-ID: <1581546833.04.0.518216269375.issue39619@roundup.psfhosted.org> New submission from Ian Norton : When building on HP-UX using: The configure stage fails to detect chroot(). This is due to setting _XOPEN_SOURCE to a value higher than 500.
The fix for this is to not set _XOPEN_SOURCE when configuring for HP-UX ---------- components: Interpreter Core messages: 361921 nosy: Ian Norton priority: normal severity: normal status: open title: os.chroot is not enabled on HP-UX builds type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 22:23:38 2020 From: report at bugs.python.org (Andy Lester) Date: Thu, 13 Feb 2020 03:23:38 +0000 Subject: [New-bugs-announce] [issue39620] PyObject_GetAttrString and tp_getattr do not agree Message-ID: <1581564218.88.0.011311131475.issue39620@roundup.psfhosted.org> New submission from Andy Lester : PyObject_GetAttrString(PyObject *v, const char *name) typedef PyObject *(*getattrfunc)(PyObject *, char *) The outer PyObject_GetAttrString takes a const char *name, but then casts away the const when calling the underlying tp_getattr. This means that an underlying function would be free to modify or free() the char* passed in to it, which might be, for example, a string literal, which would be a Bad Thing. The setattr function pair has the same problem. The API doc at https://docs.python.org/3/c-api/typeobj.html says that the tp_getattr and tp_setattr slots are deprecated. If they're not going away soon, I would think this should be addressed. Fixing this in the cPython code by making tp_getattr and tp_setattr take const char * pointers would be simple. I don't have any idea how much outside code it would affect. 
---------- components: C API messages: 361929 nosy: petdance priority: normal severity: normal status: open title: PyObject_GetAttrString and tp_getattr do not agree type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 12 23:05:21 2020 From: report at bugs.python.org (Andy Lester) Date: Thu, 13 Feb 2020 04:05:21 +0000 Subject: [New-bugs-announce] [issue39621] md5_compress() in Modules/md5module.c can take a const buf Message-ID: <1581566721.26.0.324789449334.issue39621@roundup.psfhosted.org> New submission from Andy Lester : The function md5_compress does not modify its buffer argument. static void md5_compress(struct md5_state *md5, unsigned char *buf) buf should be const. ---------- components: Extension Modules messages: 361932 nosy: petdance priority: normal severity: normal status: open title: md5_compress() in Modules/md5module.c can take a const buf type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 02:09:49 2020 From: report at bugs.python.org (Zhibin Dong) Date: Thu, 13 Feb 2020 07:09:49 +0000 Subject: [New-bugs-announce] [issue39622] KeyboardInterrupt is ignored when await asyncio.sleep(0) Message-ID: <1581577789.38.0.529951784345.issue39622@roundup.psfhosted.org> New submission from Zhibin Dong : As shown in the code, when 0 is passed to the asyncio.sleep function, sometimes the program does not exit when I press <Ctrl-C>. I must press <Ctrl-C> again to close the program. However, when a number, such as 0.01, which is bigger than 0, is passed to the sleep function, the program will exit as expected when I press <Ctrl-C> just once. Is this a bug, or am I just using it the wrong way? Thanks.
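For context, a runnable sketch of what asyncio.sleep(0) does: it yields control to the event loop for exactly one iteration without scheduling a timer, so ready tasks interleave. (The Ctrl-C behaviour reported above is timing-dependent and is not reproduced here.)

```python
import asyncio

async def worker(name, out, steps=3):
    for i in range(steps):
        out.append((name, i))
        await asyncio.sleep(0)   # cooperative yield, no delay scheduled

async def main():
    out = []
    await asyncio.gather(worker("a", out), worker("b", out))
    return out

order = asyncio.run(main())
print(order)  # the two tasks alternate: [('a', 0), ('b', 0), ('a', 1), ...]
```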
---------- components: asyncio files: SleepTest.py messages: 361939 nosy: Zhibin Dong, asvetlov, yselivanov priority: normal severity: normal status: open title: KeyboardInterrupt is ignored when await asyncio.sleep(0) type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48893/SleepTest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 03:19:35 2020 From: report at bugs.python.org (Stuart Ball) Date: Thu, 13 Feb 2020 08:19:35 +0000 Subject: [New-bugs-announce] [issue39623] __str__ and __repr__ for asyncio.Task still omit arg values Message-ID: <1581581975.12.0.0633437109598.issue39623@roundup.psfhosted.org> New submission from Stuart Ball : This is not very helpful if your gather or wait contains multiple versions of foo with different argument values: `` Should just be: `` Would probably take all of 5 minutes to implement and make a lot of people's lives easier. ---------- messages: 361944 nosy: stuball123 priority: normal severity: normal status: open title: __str__ and __repr__ for asyncio.Task still omit arg values _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 03:53:15 2020 From: report at bugs.python.org (Ben Boeckel) Date: Thu, 13 Feb 2020 08:53:15 +0000 Subject: [New-bugs-announce] [issue39624] Trace greedy replaces $prefix and $exec_prefix Message-ID: <1581583995.62.0.313310245437.issue39624@roundup.psfhosted.org> New submission from Ben Boeckel : Previously reported as a sidenote in Issue21016. The `--ignore-dir` option in trace.py replaces `$prefix` and `$exec_prefix` *anywhere* in the path when it really should just replace it at the start of the path, and only if it is followed by nothing or a path separator (that is, it is a path component). I suspect that there's always a separator though.
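A sketch of the suggested fix (hypothetical helper, not trace.py's actual code): expand the variable only when it is a leading path component.

```python
import os
import sys

def expand_dir(pattern, variables=None):
    """Expand $prefix/$exec_prefix only as a leading path component."""
    if variables is None:
        variables = {"prefix": sys.prefix, "exec_prefix": sys.exec_prefix}
    for name, value in variables.items():
        token = "$" + name
        if pattern == token:                      # "$prefix" alone
            return value
        if pattern.startswith(token + os.sep):    # "$prefix/..."
            return value + pattern[len(token):]
    return pattern                                # no leading variable: unchanged

print(expand_dir("$prefix/lib"))        # expands to <sys.prefix>/lib
print(expand_dir("/data/$prefix/lib"))  # left alone: not a leading component
```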
---------- components: Library (Lib) messages: 361949 nosy: mathstuf, vstinner priority: normal severity: normal status: open title: Trace greedy replaces $prefix and $exec_prefix _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 07:10:14 2020 From: report at bugs.python.org (Andrew Wall) Date: Thu, 13 Feb 2020 12:10:14 +0000 Subject: [New-bugs-announce] [issue39625] Traceback needs more details Message-ID: <1581595814.35.0.681093135233.issue39625@roundup.psfhosted.org> New submission from Andrew Wall : I encountered a question on Stack Overflow where, unusually, a Traceback was given in full, but I couldn't diagnose the problem. It was like this: Traceback (most recent call last): File "soFailedTraceback.py", line 15, in <module> c = C(C1("C1"), C2("C2")) TypeError: __init__() missing 1 required positional argument: 'p' What I am claiming is missing is info about class C1: File "soFailedTraceback.py", line 8, in def __init__(self, s1, p): Here is the file soFailedTraceback.py: #soFailedTraceback class C: def __init__(self, c1, p): pass class C1: def __init__(self, s1, p): pass class C2: def __init__(self, s1): pass c = C(C1("C1"), C2("C2")) I find the Traceback confusing, because it so happens there are two classes which have a required positional argument "p", but Python does not directly show which class it is.
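Until tracebacks carry that extra detail, the ambiguity can be resolved by hand with inspect.signature, which shows exactly which __init__ takes 'p' (classes copied from the report):

```python
import inspect

class C:
    def __init__(self, c1, p): pass

class C1:
    def __init__(self, s1, p): pass

class C2:
    def __init__(self, s1): pass

# Print each constructor's signature to see where 'p' is required.
for cls in (C, C1, C2):
    print(cls.__name__, inspect.signature(cls.__init__))
# C (self, c1, p)
# C1 (self, s1, p)
# C2 (self, s1)
```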
---------- messages: 361953 nosy: Andrew Wall priority: normal severity: normal status: open title: Traceback needs more details type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 07:59:45 2020 From: report at bugs.python.org (Ilya Kamenshchikov) Date: Thu, 13 Feb 2020 12:59:45 +0000 Subject: [New-bugs-announce] [issue39626] random choice to delegate to sample on sets Message-ID: <1581598785.79.0.74304854999.issue39626@roundup.psfhosted.org> New submission from Ilya Kamenshchikov : In a few of my projects I had this (minor) pain of having to remember which collections of elements are sets and which are [list, tuple]. It causes me to double check and have random.sample(my_set, 1)[0] in many places. This is not how I think about the problem, and it causes friction: conceptually, I know something is a collection and I want one random choice from it. Having to differentiate on sequences vs sets makes my code uglier :( This issue is similar to https://bugs.python.org/issue37708. ---------- components: Library (Lib) messages: 361954 nosy: Ilya Kamenshchikov priority: normal severity: normal status: open title: random choice to delegate to sample on sets _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 09:31:28 2020 From: report at bugs.python.org (Vlad Emelianov) Date: Thu, 13 Feb 2020 14:31:28 +0000 Subject: [New-bugs-announce] [issue39627] Fix TypedDict totality check for inherited keys Message-ID: <1581604288.74.0.402732748269.issue39627@roundup.psfhosted.org> New submission from Vlad Emelianov : Add the changes made in https://github.com/python/typing/pull/700 to upstream.
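For reference, a sketch of the behaviour that change concerns (assumption: Python 3.9+, where typing.TypedDict exposes __required_keys__ and __optional_keys__): with total=False a TypedDict's own keys are optional, while keys inherited from a total base remain required.

```python
from typing import TypedDict

class Base(TypedDict):           # total=True by default
    id: int

class Extra(Base, total=False):  # only its *own* keys become optional
    note: str

print(sorted(Extra.__required_keys__))  # ['id']   -- inherited, still required
print(sorted(Extra.__optional_keys__))  # ['note'] -- declared under total=False
```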
---------- components: Library (Lib) messages: 361957 nosy: Vlad Emelianov priority: normal severity: normal status: open title: Fix TypedDict totality check for inherited keys type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 10:05:28 2020 From: report at bugs.python.org (Marco) Date: Thu, 13 Feb 2020 15:05:28 +0000 Subject: [New-bugs-announce] [issue39628] msg.walk memory leak? Message-ID: <1581606328.48.0.533988298686.issue39628@roundup.psfhosted.org> New submission from Marco : Hello, if I write ``` msg = email.message_from_bytes(...) for part in msg.walk(): content_type = part.get_content_type() if not part.get_content_maintype() == 'multipart': filename = part.get_filename(None) attachment = part.get_payload(decode=True) ``` then, if there is more than one MIME part, the memory usage increases at each iteration and is never released. ---------- components: email messages: 361959 nosy: barry, falon, r.david.murray priority: normal severity: normal status: open title: msg.walk memory leak?
type: resource usage versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 13:16:37 2020 From: report at bugs.python.org (Eric Fahlgren) Date: Thu, 13 Feb 2020 18:16:37 +0000 Subject: [New-bugs-announce] [issue39629] inspect.signature fails on math.hypot Message-ID: <1581617797.68.0.659166194612.issue39629@roundup.psfhosted.org> New submission from Eric Fahlgren : Python 3.8's new math.hypot function also appears to suffer from the same issue as math.log: >>> import math, inspect >>> inspect.signature(math.hypot) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\Python38\lib\inspect.py", line 3093, in signature return Signature.from_callable(obj, follow_wrapped=follow_wrapped) File "C:\Program Files\Python38\lib\inspect.py", line 2842, in from_callable return _signature_from_callable(obj, sigcls=cls, File "C:\Program Files\Python38\lib\inspect.py", line 2296, in _signature_from_callable return _signature_from_builtin(sigcls, obj, File "C:\Program Files\Python38\lib\inspect.py", line 2107, in _signature_from_builtin raise ValueError("no signature found for builtin {!r}".format(func)) ValueError: no signature found for builtin <built-in function hypot> Possibly related to issue29299?
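A defensive pattern for tooling, given that some builtins (math.hypot here, math.log previously) may not expose a signature on every Python version (hypothetical helper name):

```python
import inspect
import math

def safe_signature(func):
    """Return an inspect.Signature, or None when introspection fails."""
    try:
        return inspect.signature(func)
    except (ValueError, TypeError):   # ValueError: "no signature found ..."
        return None

sig = safe_signature(math.hypot)   # Signature or None, depending on version
print(safe_signature(len))         # builtins with text signatures still work
```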
---------- components: Library (Lib) messages: 361966 nosy: eric.fahlgren priority: normal severity: normal status: open title: inspect.signature fails on math.hypot type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 13 22:25:12 2020 From: report at bugs.python.org (Andy Lester) Date: Fri, 14 Feb 2020 03:25:12 +0000 Subject: [New-bugs-announce] [issue39630] Const some pointers to string literals Message-ID: <1581650712.98.0.799868474663.issue39630@roundup.psfhosted.org> New submission from Andy Lester : Here are some fixes of char * pointers to literals that should be const char * in these four files. +++ Objects/frameobject.c +++ Objects/genobject.c +++ Python/codecs.c +++ Python/errors.c ---------- components: Interpreter Core messages: 361982 nosy: petdance priority: normal severity: normal status: open title: Const some pointers to string literals type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 14 03:49:42 2020 From: report at bugs.python.org (Steve Dower) Date: Fri, 14 Feb 2020 08:49:42 +0000 Subject: [New-bugs-announce] [issue39631] Fix file association MIME type on Windows Message-ID: <1581670182.52.0.370257427786.issue39631@roundup.psfhosted.org> New submission from Steve Dower : The installer for Windows creates file associations in Tools/msi/launcher/launcher_reg.wxs that identify ".py[w]" files as text/plain. This is inconsistent with the mimetypes module, which uses text/x-python, and may cause some applications to assume that calling ShellExecute on a .py file will open a text editor rather than executing the script. We should update the MIME type to text/x-python. This can be backported, as the change is in the launcher and isn't tied to the usual upgrade paths anyway. 
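The stdlib side of the inconsistency is easy to check: the mimetypes module already maps .py to text/x-python, which is what the installer's file association is being changed to match.

```python
import mimetypes

# guess_type() consults the module's built-in table (plus any system files).
mime, encoding = mimetypes.guess_type("script.py")
print(mime)  # text/x-python
```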
---------- components: Windows messages: 361989 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Fix file association MIME type on Windows type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 14 05:03:28 2020 From: report at bugs.python.org (Nicolas Dessart) Date: Fri, 14 Feb 2020 10:03:28 +0000 Subject: [New-bugs-announce] [issue39632] variadic function call broken on armhf when passing a float argument Message-ID: <1581674608.19.0.403543141988.issue39632@roundup.psfhosted.org> New submission from Nicolas Dessart : On armhf and for variadic functions (and contrary to non-variadic functions), the VFP co-processor registers are not used for float argument parameter passing. This distinction is apparently completely disregarded by ctypes, which always uses `ffi_prep_cif` to prepare the parameter passing of a function, while it should most probably use `ffi_prep_cif_var` for variadic functions. As such, variadic function calls with float arguments through ctypes are currently broken on armhf targets. I think that the ctypes API should be updated to let the user specify whether a function is variadic. I've attached a patch to the ctypes unit tests that I'm using to reproduce this bug.
pi at raspberrypi:~/code/cpython $ ./python -m test test_ctypes 0:00:00 load avg: 0.00 Run tests sequentially 0:00:00 load avg: 0.00 [1/1] test_ctypes _testfunc_d_bhilfd_var got 2 3 4 -1242230680 -0.000000 -0.000000 test test_ctypes failed -- Traceback (most recent call last): File "/home/pi/code/cpython/Lib/ctypes/test/test_functions.py", line 146, in test_doubleresult_var self.assertEqual(result, 21) AssertionError: -7.086855952261741e-44 != 21 test_ctypes failed == Tests result: FAILURE == 1 test failed: test_ctypes Total duration: 3.8 sec Tests result: FAILURE ---------- components: ctypes files: ctypes_variadic_function_tests.diff keywords: patch messages: 361992 nosy: Nicolas Dessart priority: normal severity: normal status: open title: variadic function call broken on armhf when passing a float argument versions: Python 3.9 Added file: https://bugs.python.org/file48895/ctypes_variadic_function_tests.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 14 09:06:58 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Fri, 14 Feb 2020 14:06:58 +0000 Subject: [New-bugs-announce] [issue39633] venv does not include python. symlink by default Message-ID: <1581689218.89.0.729047022428.issue39633@roundup.psfhosted.org> New submission from Lysandros Nikolaou : At the moment running python -m venv venv or python3 -m venv venv creates a virtual environment that does not contain a python. symlink, which results in executing whatever the default python is when running i.e. python. within an activated virtual env. OTOH if one runs python. -m venv venv, then everything is OK. Would it be possible to include a python. symlink in all cases? If not, then I think we should update the docs to mention that somewhere, since it took me quite a while to figure this out. 
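A stdlib sketch of the machinery being discussed: venv.EnvBuilder controls whether the environment is built with symlinks. (Assumption: exactly which pythonX.Y names land in the environment's bin/ directory varies by platform and version, so this only checks that some python* launcher exists.)

```python
import os
import tempfile
import venv

with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, "venv")
    # symlinks=True is the usual POSIX default; Windows copies instead.
    builder = venv.EnvBuilder(symlinks=(os.name != "nt"), with_pip=False)
    builder.create(env_dir)

    bindir = "Scripts" if os.name == "nt" else "bin"
    names = sorted(os.listdir(os.path.join(env_dir, bindir)))
    has_python = any(n.startswith("python") for n in names)
    print(has_python)
```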
---------- components: Library (Lib) messages: 361993 nosy: lys.nikolaou priority: normal severity: normal status: open title: venv does not include python. symlink by default versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 14 15:29:40 2020 From: report at bugs.python.org (Harsh Patel) Date: Fri, 14 Feb 2020 20:29:40 +0000 Subject: [New-bugs-announce] [issue39634] Incorrect heapq heapify naming Message-ID: <1581712180.36.0.700524012127.issue39634@roundup.psfhosted.org> New submission from Harsh Patel : The heapify method is a misnomer in that it is actually the make-heap or build-heap procedure from textbooks ---------- components: Library (Lib) messages: 361996 nosy: hp685 priority: normal pull_requests: 17888 severity: normal status: open title: Incorrect heapq heapify naming _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 05:57:43 2020 From: report at bugs.python.org (=?utf-8?q?Fr=C3=A9d=C3=A9ric_Danna?=) Date: Sat, 15 Feb 2020 10:57:43 +0000 Subject: [New-bugs-announce] [issue39635] One paragraph of the doc is not translated in French Message-ID: <1581764263.43.0.781386094953.issue39635@roundup.psfhosted.org> New submission from Frédéric Danna : In the French doc of the 3.8 version, https://docs.python.org/fr/3/tutorial/interpreter.html, there is an entire paragraph which is still written in English (not translated into French): > On Windows machines where you have installed Python from the > Microsoft Store, the python3.8 command will be available. If you have > the py.exe launcher installed, you can use the py command. See > Digression : Définition des variables d'environnement for other ways > to launch Python. (See the screenshot attached.)
---------- assignee: docs at python components: Documentation files: ksnip_20200215-115221.png messages: 362009 nosy: Frédéric Danna, docs at python priority: normal severity: normal status: open title: One paragraph of the doc is not translated in French versions: Python 3.8 Added file: https://bugs.python.org/file48897/ksnip_20200215-115221.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 06:01:47 2020 From: report at bugs.python.org (=?utf-8?b?0J7Qu9C10LMg0J/QsNGA0LjQtdCy?=) Date: Sat, 15 Feb 2020 11:01:47 +0000 Subject: [New-bugs-announce] [issue39636] Can't change Treeview row color in Tkinter Message-ID: <1581764507.66.0.693125881903.issue39636@roundup.psfhosted.org> New submission from Олег Париев : Good afternoon! I cannot change the color of a Treeview row in Python 3.8.x. The same problem existed in Python 3.7.3, but the solution for it does not help. You can see the question at this link: https://bugs.python.org/issue36468 PS Sorry for the clumsy English ---------- components: Tkinter messages: 362010 nosy: gpolo, serhiy.storchaka, Олег Париев
priority: normal severity: normal status: open title: Can't change Treeview row color in Tkinter type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 06:23:56 2020 From: report at bugs.python.org (Rick van Rein) Date: Sat, 15 Feb 2020 11:23:56 +0000 Subject: [New-bugs-announce] [issue39637] Probably incorrect message after failed import Message-ID: <1581765836.84.0.753268318543.issue39637@roundup.psfhosted.org> New submission from Rick van Rein : The following error message surprises me: >>> import os.environ Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'os.environ'; 'os' is not a package Shouldn't that say that "'environ' is not a package" instead? After all, 'os' will support >>> import os.path >>> This is confusing :) ---------- components: Interpreter Core messages: 362011 nosy: vanrein priority: normal severity: normal status: open title: Probably incorrect message after failed import versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 06:32:26 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 15 Feb 2020 11:32:26 +0000 Subject: [New-bugs-announce] [issue39638] Keep ASDL signatures for AST nodes Message-ID: <1581766346.86.0.0377348739096.issue39638@roundup.psfhosted.org> New submission from Batuhan Taskaya : It would be super convenient to keep ASDL declarations in AST nodes. There are multiple benefits to it; 1 -> When debugging or playing with the AST, from time to time you need to know what kind of values a field accepts, or whether a field is optional. 2 -> The AST nodes are pretty limited in what they can do by default. To extend their scope, 3rd party tools often copy Python's ASDL to their source and build custom AST nodes from that.
Knowing what every field accepts, they can automatically generate transformer code from that ASDL spec which takes Python's standard AST and converts it to their own AST nodes. We can either create a new attribute or keep this in the docstring. I think keeping this in the docstring can at least give some info about the node rather than None, so it makes more sense to me. If the feature is wanted, I can propose a PR. ---------- components: Library (Lib) messages: 362012 nosy: BTaskaya, benjamin.peterson, pablogsal priority: low severity: normal status: open title: Keep ASDL signatures for AST nodes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 06:46:49 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 15 Feb 2020 11:46:49 +0000 Subject: [New-bugs-announce] [issue39639] Remove Suite node from AST Message-ID: <1581767209.41.0.653411841191.issue39639@roundup.psfhosted.org> New submission from Batuhan Taskaya : The AST contains a node from the past that is described as "not really an actual node but useful in Jython's typesystem.". There is no usage of it anywhere in the CPython repo, just some code in the AST optimizer, symbol table and compiler to forbid it from running. If there is not any specific reason to keep it, we can just remove it and clean up some code.
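For context on the runtime introspection gap issue39638 describes: AST node classes do expose their field *names* today via `_fields`, but not the ASDL field types or optionality the submitter wants preserved. A small sketch:

```python
import ast

# parse() gives a Module; its first statement for "x = 1" is an Assign.
# _fields lists the child field names in declaration order, but says
# nothing about whether a field is optional or what type it accepts --
# that is the ASDL information issue39638 proposes keeping around.
node = ast.parse("x = 1").body[0]
print(type(node).__name__, node._fields)
```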
---------- components: Library (Lib) messages: 362014 nosy: BTaskaya, benjamin.peterson, brett.cannon, pablogsal, yselivanov priority: normal severity: normal status: open title: Remove Suite node from AST versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 14:06:08 2020 From: report at bugs.python.org (George Melikov) Date: Sat, 15 Feb 2020 19:06:08 +0000 Subject: [New-bugs-announce] [issue39640] fall back os.fdatasync() to fsync() on POSIX systems without fdatasync() support Message-ID: <1581793568.31.0.586113837137.issue39640@roundup.psfhosted.org> New submission from George Melikov : POSIX fdatasync() is similar to fsync(), but it tries not to sync non-needed metadata. If a POSIX OS doesn't have it, it's safe to use fsync() (if we need to sync data to disk, we have to use one of these functions). This change will help to run code with fdatasync() on MacOS without fallbacks in Python code. I'll propose a PR soon. ---------- components: IO messages: 362025 nosy: gmelikov priority: normal severity: normal status: open title: fall back os.fdatasync() to fsync() on POSIX systems without fdatasync() support _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 18:24:11 2020 From: report at bugs.python.org (bruce blosser) Date: Sat, 15 Feb 2020 23:24:11 +0000 Subject: [New-bugs-announce] [issue39641] concatenation of Tuples Message-ID: <1581809051.98.0.243817777705.issue39641@roundup.psfhosted.org> New submission from bruce blosser : The concatenation of two tuples into a third tuple, using the + operator, causes an error if every member of each of the two tuples is NOT a string! This does not appear to be documented ANYWHERE, and really causes a whole lot of head scratching and more than enough foul language!
:) So how does one "add" two tuples together, to create a third tuple, if the members are not all strings? ---------- messages: 362036 nosy: bruceblosser priority: normal severity: normal status: open title: concatenation of Tuples type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 15 22:11:33 2020 From: report at bugs.python.org (Clinton Hunter) Date: Sun, 16 Feb 2020 03:11:33 +0000 Subject: [New-bugs-announce] [issue39642] Behaviour of disabled widgets: widget.bind(func) -vs- w = widget(command=func) Message-ID: <1581822693.94.0.0781458615006.issue39642@roundup.psfhosted.org> New submission from Clinton Hunter : Using the bind method, the event will still trigger when the widget is disabled. However, if using "command=" it doesn't. Wondering whether the two ways of setting up event handling should behave the same? Not a major issue, easy enough to work around using an if. Example: Clicking the printBtn will still work despite being disabled.
self.printBtn = tkinter.Button(self.frame, text='Print') self.printBtn['state'] = tkinter.DISABLED self.printBtn.bind(sequence='<Button-1>', func=self.printBtn_onclick) self.printBtn.pack() Clicking on the save button, the event will not trigger (i.e. the disabled state attribute is honored) self.saveBtn = tkinter.Button(self.frame, text='Save', command=self.saveBtn_onclick) self.saveBtn['state'] = tkinter.DISABLED self.saveBtn.pack() ---------- components: Tkinter messages: 362043 nosy: mrshr3d priority: normal severity: normal status: open title: Behaviour of disabled widgets: widget.bind(func) -vs- w = widget(command=func) type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 00:41:42 2020 From: report at bugs.python.org (Ivan Babrou) Date: Sun, 16 Feb 2020 05:41:42 +0000 Subject: [New-bugs-announce] [issue39643] Python calls newfstatat for "" in inspect Message-ID: <1581831702.4.0.180608885783.issue39643@roundup.psfhosted.org> New submission from Ivan Babrou : I noticed that a program (SaltStack) is a lot slower under Python 3. After some stracing I was able to find that the inspect module is to blame.
In strace output I can see a ton of calls like this: 05:31:56.698829 newfstatat(AT_FDCWD, "", 0x7ffff6bc4cf0, 0) = -1 ENOENT (No such file or directory) <0.000033> 05:31:56.699743 newfstatat(AT_FDCWD, "", 0x7ffff6bc4b70, 0) = -1 ENOENT (No such file or directory) <0.000061> 05:31:56.701328 newfstatat(AT_FDCWD, "", 0x7ffff6bc4cf0, 0) = -1 ENOENT (No such file or directory) <0.000037> 05:31:56.702171 newfstatat(AT_FDCWD, "", 0x7ffff6bc4b70, 0) = -1 ENOENT (No such file or directory) <0.000031> 05:31:56.703614 newfstatat(AT_FDCWD, "", 0x7ffff6bc4cf0, 0) = -1 ENOENT (No such file or directory) <0.000031> 05:31:56.704421 newfstatat(AT_FDCWD, "", 0x7ffff6bc4b70, 0) = -1 ENOENT (No such file or directory) <0.000028> 05:31:56.705751 newfstatat(AT_FDCWD, "", 0x7ffff6bc4cf0, 0) = -1 ENOENT (No such file or directory) <0.000039> 05:31:56.706691 newfstatat(AT_FDCWD, "", 0x7ffff6bc4b70, 0) = -1 ENOENT (No such file or directory) <0.000028> 05:31:56.708148 newfstatat(AT_FDCWD, "", 0x7ffff6bc4cf0, 0) = -1 ENOENT (No such file or directory) <0.000032> This is the entrypoint from Salt: * https://github.com/saltstack/salt/blob/9adc2214c3bb/salt/utils/decorators/__init__.py#L102 Execution with stock code: $ time sudo salt-call --local test.ping local: True real 0m23.481s user 0m22.845s sys 0m0.649s Speedup after not calling into inspect.stack(): $ time sudo salt-call --local test.ping local: True real 0m3.661s user 0m3.253s sys 0m0.423s Stackoverflow suggests that frames with virtual importlib should be skipped: * https://stackoverflow.com/questions/40945752/inspect-who-imported-me ---------- messages: 362044 nosy: Ivan Babrou priority: normal severity: normal status: open title: Python calls newfstatat for "" in inspect type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 01:46:13 2020 From: report at bugs.python.org (Ananthakrishnan) Date: Sun, 16 Feb 2020 
06:46:13 +0000 Subject: [New-bugs-announce] [issue39644] Add Binary module. Message-ID: <1581835573.18.0.933405082879.issue39644@roundup.psfhosted.org> New submission from Ananthakrishnan : Add a binary module that has binary operations like: binary addition. binary subtraction. binary multiplication. binary division. complement. 1's complement. 2's complement. converting to various number systems. converting to BCD. converting to Gray code. K-Map function and so on.. ---------- components: C API messages: 362045 nosy: Ananthakrishnan priority: normal severity: normal status: open title: Add Binary module. type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 02:38:05 2020 From: report at bugs.python.org (Kyle Stanley) Date: Sun, 16 Feb 2020 07:38:05 +0000 Subject: [New-bugs-announce] [issue39645] Expand concurrent.futures.Future's public API Message-ID: <1581838685.03.0.542500965222.issue39645@roundup.psfhosted.org> New submission from Kyle Stanley : Based on the following python-ideas thread: https://mail.python.org/archives/list/python-ideas at python.org/thread/LMTQ2AI6A7UXEFVHRGHKWD33H24FGM6G/#ICJKHZ4BPIUMOPIT2TDTBIW2EH4CPNCP. In the above ML thread, the author proposed adding a new cf.SerialExecutor class, which seems to be not a great fit for the standard library (based on the current state of the discussion, as of writing this). But, Guido mentioned the following: > IOW I'm rather lukewarm about this -- even if you (Jonathan) have found use for it, I'm not sure how many other people would use it, so I doubt it's worth adding it to the stdlib. (The only thing the stdlib might grow could be a public API that makes implementing this feasible without overriding private methods.)
Specifically, the OPs proposal should be reasonably possible to implement (either just locally for themselves or a potential PyPI package) with a few minor additions to cf.Future's public API: 1) Add a means of *publicly* accessing the future's state (future._state) without going through the internal condition's RLock. This would allow the developer to implement their own condition or other synchronization primitive to access the state of the future. IMO, this would best be implemented as a separate ``future.state()`` and ``future.set_state()``. 2) Add a means of *publicly* accessing the future's result (future._result) without going through the internal condition's RLock. This would be similar to the above, but since there's already a ``future.result()`` and ``future.set_result()``, I think it would be best implemented as an optional *sync* parameter that defaults to True. When set to False, it directly accesses future._result without the condition; when set to True, it has the current behavior. 3) Add public global constants for the different possible future states: PENDING, RUNNING, CANCELLED, CANCELLED_AND_NOTIFIED, and FINISHED. This would be useful to serve as a template of possible future states for custom implementations. I also find that ``fut.set_state(cf.RUNNING)`` looks better than ``fut.state("running")`` from an API design perspective. Optional addition: To make ``fut.state()`` and ``fut.set_state()`` more useful for general purposes, it could have a single *sync* boolean parameter (feel free to bikeshed over the name), which changes whether it directly accesses future._state or does so safely through the condition. Presumably, the documentation would explicitly state that with sync=False, the future's state will not be synchronized across separate threads or processes. This would also allow it to have the same API as ``future.result()`` and ``future.set_result()``. 
Also, as for my own personal motivation in expanding upon the public API for cf.Future, I've found that directly accessing the state of the future can be incredibly useful for debugging purposes. I made significant use of it while implementing the new *cancel_futures* parameter for executor.shutdown(). But, since future._state is a private member, there's no guarantee that it will continue to behave the same or that it can be relied upon in the long-term. This may not be a huge concern for quick debugging sessions, but could easily result in breakage when used in logging or unit tests. ---------- assignee: aeros components: Library (Lib) messages: 362047 nosy: aeros, bquinlan, gvanrossum, pitrou priority: normal severity: normal status: open title: Expand concurrent.futures.Future's public API type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 02:51:44 2020 From: report at bugs.python.org (hai shi) Date: Sun, 16 Feb 2020 07:51:44 +0000 Subject: [New-bugs-announce] [issue39646] compile warning in unicodeobject.c Message-ID: <1581839504.07.0.984464153298.issue39646@roundup.psfhosted.org> New submission from hai shi : Objects/unicodeobject.c: In function ‘PyUnicode_IsIdentifier’: ./Include/cpython/unicodeobject.h:396:38: warning: ‘data’ may be used uninitialized in this function [-Wmaybe-uninitialized] ((const Py_UCS4 *)(data))[(index)] \ ^ Objects/unicodeobject.c:12211:11: note: ‘data’ was declared here void *data; ^ In file included from ./Include/unicodeobject.h:1026:0, from ./Include/Python.h:97, from Objects/unicodeobject.c:42: ./Include/cpython/unicodeobject.h:391:6: warning: ‘kind’ may be used uninitialized in this function [-Wmaybe-uninitialized] ((Py_UCS4) \ ^ Objects/unicodeobject.c:12210:9: note: ‘kind’
was declared here int kind; ^ ---------- components: Interpreter Core messages: 362048 nosy: shihai1991 priority: normal severity: normal status: open title: compile warning in unicodeobject.c type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 05:52:39 2020 From: report at bugs.python.org (hai shi) Date: Sun, 16 Feb 2020 10:52:39 +0000 Subject: [New-bugs-announce] [issue39647] Update doc of init_config.rst Message-ID: <1581850359.34.0.448930293223.issue39647@roundup.psfhosted.org> New submission from hai shi : Due to issue36465, the description of `dump_refs` in init_config.rst should be updated. ---------- assignee: docs at python components: Documentation messages: 362066 nosy: docs at python, shihai1991, vstinner priority: normal severity: normal status: open title: Update doc of init_config.rst versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 08:38:42 2020 From: report at bugs.python.org (Ananthakrishnan) Date: Sun, 16 Feb 2020 13:38:42 +0000 Subject: [New-bugs-announce] [issue39648] Update math.gcd() to accept "n" arguments. Message-ID: <1581860322.5.0.900527144573.issue39648@roundup.psfhosted.org> New submission from Ananthakrishnan : If we have to find the gcd of three or more numbers, now we should use gcd(a, gcd(b, gcd(c, gcd(d, e)))), which will create a lot of problems. math.gcd should take "n" number of arguments, like: gcd(a,b,c,....) gcd(4,6,8) //returns 2 gcd(2,5,8,6) //returns 1 gcd(6,30,40,60,20,40) //returns 2 ---------- components: Library (Lib) messages: 362069 nosy: Ananthakrishnan, mark.dickinson, serhiy.storchaka, steven.daprano priority: normal severity: normal status: open title: Update math.gcd() to accept "n" arguments.
versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 08:41:44 2020 From: report at bugs.python.org (daniel hahler) Date: Sun, 16 Feb 2020 13:41:44 +0000 Subject: [New-bugs-announce] [issue39649] bdb.Bdb.format_stack_entry: checks for obsolete __args__ Message-ID: <1581860504.66.0.826450545942.issue39649@roundup.psfhosted.org> New submission from daniel hahler : It does: ``` if '__args__' in frame.f_locals: args = frame.f_locals['__args__'] else: args = None if args: s += reprlib.repr(args) else: s += '()' ``` However, that appears to be wrong/unnecessary, likely since the following commit (but maybe also others): commit 75bb54c3d8 Author: Guido van Rossum Date: Mon Sep 28 15:33:38 1998 +0000 Don't set a local variable named __args__; this feature no longer works and Greg Ward just reported a problem it caused... diff --git a/Lib/bdb.py b/Lib/bdb.py index 3ca25adbbf..f2cf4caa36 100644 --- a/Lib/bdb.py +++ b/Lib/bdb.py @@ -46,7 +46,7 @@ def dispatch_line(self, frame): return self.trace_dispatch def dispatch_call(self, frame, arg): - frame.f_locals['__args__'] = arg + # XXX 'arg' is no longer used if self.botframe is None: # First call of dispatch since reset() self.botframe = frame Code ref: https://github.com/python/cpython/blob/1ed61617a4a6632905ad6a0b440cd2cafb8b6414/Lib/bdb.py#L551-L558. So it should either get removed, or likely be replaced with actually displaying the args. For this, the part could be factored out of `do_args`, adjusting it to handle non-current frames. Of course somebody might still inject/set `__args__` (I've thought about doing that initially for pdb++, but will rather re-implement/override `format_stack_entry` instead), so support for this could be kept additionally.
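Until math.gcd() grows support for more arguments (as issue39648 above requests), the same behavior can be had by folding the two-argument gcd over a sequence; a sketch (the helper name gcd_n is hypothetical):

```python
from functools import reduce
from math import gcd

def gcd_n(*integers):
    # Fold the two-argument gcd across all arguments; the initial value 0
    # works because gcd(0, x) == abs(x).
    return reduce(gcd, integers, 0)

print(gcd_n(4, 6, 8))                # 2
print(gcd_n(2, 5, 8, 6))             # 1
print(gcd_n(6, 30, 40, 60, 20, 40))  # 2
```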
---------- components: Library (Lib) messages: 362070 nosy: blueyed priority: normal severity: normal status: open title: bdb.Bdb.format_stack_entry: checks for obsolete __args__ type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 08:52:17 2020 From: report at bugs.python.org (Paul Marquess) Date: Sun, 16 Feb 2020 13:52:17 +0000 Subject: [New-bugs-announce] [issue39650] Creating zip file where names in local header don't match with central header Message-ID: <1581861137.0.0.439620748688.issue39650@roundup.psfhosted.org> New submission from Paul Marquess : Consider this code (based on code from an issue on StackOverflow) import zipfile import os allFilesToZip = ["/tmp/tom"] with zipfile.ZipFile(allZipPath, 'w') as allZip: for f in allFilesToZip: allZip.write(f, compress_type=zipfile.ZIP_DEFLATED) for zip_info in allZip.infolist(): if zip_info.filename[-1] == '/': continue zip_info.filename = os.path.basename(zip_info.filename) The intention of the code is to add a number of files without the path component. The problem is with the use of infolist. (Forget for now that there is an easier way to achieve the expected result.) The code works in two steps. First it uses the zipfile.write method, which immediately writes the local file header data and the compressed payload to the zipfile on disk. Next the zipinfo entry is used to update the filename. That data gets written only to the central directory in the zip file. The end result is a badly-formed zip file. Here is what I see when I run the code above with both Python 2.7 & 3.7.
First create the zip file echo abcd >/tmp/tom python zip.py Unzip sees there is a problem $ unzip -t abc.zip Archive: abc.zip tom: mismatching "local" filename (tmp/tom), continuing with "central" filename version testing: tom OK At least one warning-error was detected in abc.zip. Next dump the internal structure of the zip file - Note the different filename fields output $ zipdetails abc.zip 0000 LOCAL HEADER #1 04034B50 0004 Extract Zip Spec 14 '2.0' 0005 Extract OS 00 'MS-DOS' 0006 General Purpose Flag 0000 [Bits 1-2] 0 'Normal Compression' 0008 Compression Method 0008 'Deflated' 000A Last Mod Time 50487109 'Sat Feb 8 14:08:18 2020' 000E CRC 2CA20FEB 0012 Compressed Length 000000D8 0016 Uncompressed Length 00000180 001A Filename Length 0007 001C Extra Length 0000 001E Filename 'tmp/tom' 0025 PAYLOAD eP...0.....,m.F\?. . 888)RbM.b..$R.......YB./...Y...Nc...m{D. ....pyi.I<......J..G......{:o..'?3.#E.u. .).O.%d}V..0p....z......Z......r]Bc;.U.u |:U.k.}.Zov..zU....h.....tm1...&P.N..... i.8CUA6.&cBcMD.P#...?.A8z.......S.. 00FD CENTRAL HEADER #1 02014B50 0101 Created Zip Spec 14 '2.0' 0102 Created OS 03 'Unix' 0103 Extract Zip Spec 14 '2.0' 0104 Extract OS 00 'MS-DOS' 0105 General Purpose Flag 0000 [Bits 1-2] 0 'Normal Compression' 0107 Compression Method 0008 'Deflated' 0109 Last Mod Time 50487109 'Sat Feb 8 14:08:18 2020' 010D CRC 00001234 0111 Compressed Length 000000D8 0115 Uncompressed Length 00000180 0119 Filename Length 0003 011B Extra Length 0000 011D Comment Length 0000 011F Disk Start 0000 0121 Int File Attributes 0000 [Bit 0] 0 'Binary Data' 0123 Ext File Attributes 81B40000 0127 Local Header Offset 00000000 012B Filename 'tom' 012E END CENTRAL HEADER 06054B50 0132 Number of this disk 0000 0134 Central Dir Disk no 0000 0136 Entries in this disk 0001 0138 Total Entries 0001 013A Size of Central Dir 00000031 013E Offset to Central Dir 000000FD 0142 Comment Length 0000 Should zipfile allow the user to do this? 
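For comparison with the report above, the conventional way to strip path components is to pass `arcname` to `ZipFile.write`, so the entry is renamed before either header is written; a self-contained sketch (temporary paths and file contents are illustrative):

```python
import os
import tempfile
import zipfile

# Create a scratch input file analogous to /tmp/tom.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "tom")
with open(src, "w") as fh:
    fh.write("abcd\n")

# Passing arcname renames the entry *before* anything is written, so the
# local and central headers stay in agreement.
zpath = os.path.join(tmpdir, "abc.zip")
with zipfile.ZipFile(zpath, "w") as allzip:
    allzip.write(src, arcname=os.path.basename(src),
                 compress_type=zipfile.ZIP_DEFLATED)

with zipfile.ZipFile(zpath) as allzip:
    print(allzip.namelist())  # ['tom']
```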
---------- components: Library (Lib) messages: 362072 nosy: pmqs priority: normal severity: normal status: open title: Creating zip file where names in local header don't match with central header type: behavior versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 11:41:58 2020 From: report at bugs.python.org (Ben Darnell) Date: Sun, 16 Feb 2020 16:41:58 +0000 Subject: [New-bugs-announce] [issue39651] Exceptions raised by EventLoop.call_soon_threadsafe Message-ID: <1581871318.96.0.350376923794.issue39651@roundup.psfhosted.org> New submission from Ben Darnell : Proactor and selector event loops behave differently when call_soon_threadsafe races with a concurrent call to loop.close(). In a selector event loop, call_soon_threadsafe will either succeed or raise a RuntimeError("Event loop is closed"). In a proactor event loop, it could raise this RuntimeError, but it can also raise an AttributeError due to an unguarded access to self._csock. https://github.com/python/cpython/blob/1ed61617a4a6632905ad6a0b440cd2cafb8b6414/Lib/asyncio/proactor_events.py#L785-L787 Comments in BaseSelectorEventLoop._write_to_self indicate that this is deliberate, so the `csock is not None` check here should probably be copied to the proactor event loop version. https://github.com/python/cpython/blob/1ed61617a4a6632905ad6a0b440cd2cafb8b6414/Lib/asyncio/selector_events.py#L129-L136 I'd also accept an answer that the exact behavior of this race is undefined and it's up to the application to arrange for all calls to call_soon_threadsafe to stop before closing the loop. However, I've had users of Tornado argue that they use the equivalent of call_soon_threadsafe in contexts where this coordination would be difficult, and I've decided that tornado's version of this method would never raise, even if there is a concurrent close.
So if asyncio declines to specify which exceptions are allowed in this case, tornado will need to add a blanket `except Exception:` around calls to call_soon_threadsafe. ---------- components: asyncio messages: 362078 nosy: Ben.Darnell, asvetlov, yselivanov priority: normal severity: normal status: open title: Exceptions raised by EventLoop.call_soon_threadsafe versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 12:10:07 2020 From: report at bugs.python.org (Simon Willison) Date: Sun, 16 Feb 2020 17:10:07 +0000 Subject: [New-bugs-announce] [issue39652] sqlite3 bug handling column names that contain square braces Message-ID: <1581873007.59.0.213823942208.issue39652@roundup.psfhosted.org> New submission from Simon Willison : Bit of an obscure bug this one. SQLite allows column names to contain [ and ] characters, even though those are often used as delimiters in SQLite. Here's how to create such a database with bash: ``` sqlite3 /tmp/demo.db <<EOF CREATE TABLE "data" ("MTU (CET)" TEXT, "Day-ahead Price [EUR/MWh]" TEXT); INSERT INTO data VALUES ('01.01.2016 00:00 - 01.01.2016 01:00', '23.86'); EOF ``` Then, querying it from Python: ``` In [5]: cursor.fetchall() Out[5]: [('01.01.2016 00:00 - 01.01.2016 01:00', '23.86')] In [6]: cursor.description Out[6]: (('MTU (CET)', None, None, None, None, None, None), ('Day-ahead Price', None, None, None, None, None, None)) In [7]: conn.row_factory = sqlite3.Row In [8]: cursor = conn.cursor() In [9]: cursor.execute("select * from data") Out[9]: In [10]: row = cursor.fetchall() In [12]: row Out[12]: In [15]: row.keys() Out[15]: ['MTU (CET)', 'Day-ahead Price'] ``` As you can see, the `[EUR/MWh]` part of the second column name is missing from both `cursor.description` and from `row.keys()` here. But... if you query that database using SQLite directly (with `.headers on` so you can see the name of the columns) it works as expected: ``` $ sqlite3 /tmp/demo.db SQLite version 3.24.0 2018-06-04 14:10:15 Enter ".help" for usage hints.
sqlite> .schema CREATE TABLE IF NOT EXISTS "data" ( "MTU (CET)" TEXT, "Day-ahead Price [EUR/MWh]" TEXT ); sqlite> .headers on sqlite> select * from data; MTU (CET)|Day-ahead Price [EUR/MWh] 01.01.2016 00:00 - 01.01.2016 01:00|23.86 sqlite> ``` It looks to me like this is a bug in Python's SQLite3 module. This was first reported here: https://github.com/simonw/sqlite-utils/issues/86 ---------- components: Extension Modules messages: 362081 nosy: simonw priority: normal severity: normal status: open title: sqlite3 bug handling column names that contain square braces versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 12:15:32 2020 From: report at bugs.python.org (Zachary) Date: Sun, 16 Feb 2020 17:15:32 +0000 Subject: [New-bugs-announce] [issue39653] test_posix fails during make test Message-ID: <1581873332.9.0.500912340754.issue39653@roundup.psfhosted.org> New submission from Zachary : Forgive me, for I am a newb and this is the first Python issue I have ever created. My system: Linux debian-thinkpad 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux While attempting to run "make test" test_posix failed. As you can see in the output apparently test_posix.py can't find a directory. 
I'm not sure if I can paste output here without it being illegible so I'm putting it in a paste: https://pastebin.com/xfqEzKiw ---------- components: Installation messages: 362084 nosy: jaguardown priority: normal severity: normal status: open title: test_posix fails during make test type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 13:51:00 2020 From: report at bugs.python.org (Hakan) Date: Sun, 16 Feb 2020 18:51:00 +0000 Subject: [New-bugs-announce] [issue39654] pyclbr: remove old references to class browser & add explain readmodule Message-ID: <1581879060.98.0.6999975444.issue39654@roundup.psfhosted.org> Change by Hakan : ---------- assignee: docs at python components: Documentation nosy: docs at python, hakancelik priority: normal severity: normal status: open title: pyclbr: remove old references to class browser & add explain readmodule type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 14:13:46 2020 From: report at bugs.python.org (Orwell) Date: Sun, 16 Feb 2020 19:13:46 +0000 Subject: [New-bugs-announce] [issue39655] Shared_Memory attaching to incorrect Address in Windows Message-ID: <1581880426.57.0.187010353913.issue39655@roundup.psfhosted.org> New submission from Orwell : Shared memory is attaching to an incorrect memory location. For example, I retried the documentation example.
>>> import numpy as np >>> a = np.array([1, 1, 2, 3, 5, 8]) >>> from multiprocessing import shared_memory >>> shm = shared_memory.SharedMemory(create=True, size=a.nbytes) >>> b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf) >>> b[:] = a[:] >>> b array([1, 1, 2, 3, 5, 8]) >>> type(b) >>> type(a) >>> shm.name 'wnsm_62040dca' >>> shm.buf # In either the same shell or a new Python shell on the same machine >>> import numpy as np >>> from multiprocessing import shared_memory >>> existing_shm = shared_memory.SharedMemory(name='wnsm_62040dca') >>> c = np.ndarray((6,), dtype=np.int64, buffer=existing_shm.buf) >>> c array([ 4294967297, 12884901890, 34359738373, 0, 0, 0], dtype=int64) >>> c[-1] 0 >>> existing_shm.buf ---------- components: Windows messages: 362093 nosy: OH, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Shared_Memory attaching to incorrect Address in Windows type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 14:21:51 2020 From: report at bugs.python.org (Anthony Sottile) Date: Sun, 16 Feb 2020 19:21:51 +0000 Subject: [New-bugs-announce] [issue39656] shebanged scripts can escape from `venv` depending on how it was created Message-ID: <1581880911.6.0.392245585345.issue39656@roundup.psfhosted.org> New submission from Anthony Sottile : This is distilled from a larger example to be small/silly, however this caused real problems A script which was intended for python3.6 exactly was written as follows: ``` #!/usr/bin/env python3.6 ... 
``` when creating a virtualenv with `python3.6 -m venv venv36` you end up with `python` / `python3` / `python3.6` executables in the venv however, when creating a virtualenv with `python3 -m venv venv36` you only end up with `python` / `python3` executables ___ using `-mvirtualenv` (pypa/virtualenv) instead of venv, all three are reliably created ___ the fix is fairly straightforward, adding `f'python3.{sys.version_info[1]}'` to this tuple: https://github.com/python/cpython/blob/c33bdbb20cf55b3a2aa7a91bd3d91fcb59796fad/Lib/venv/__init__.py#L246 ---------- components: Library (Lib) messages: 362095 nosy: Anthony Sottile priority: normal severity: normal status: open title: shebanged scripts can escape from `venv` depending on how it was created versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 16:40:39 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Sun, 16 Feb 2020 21:40:39 +0000 Subject: [New-bugs-announce] [issue39657] Bezout and Chinese Remainder Theorem in the Standard Library? Message-ID: <1581889239.13.0.672135436609.issue39657@roundup.psfhosted.org> New submission from Dennis Sweeney : Should something like the following go in the standard library, most likely in the math module? I know I had to use such a thing before pow(a, -1, b) worked, but Bezout is more general. And many of the easy stackoverflow implementations of CRT congruence-combining neglect the case where the divisors are not coprime, so that's an easy thing to miss. def bezout(a, b): """ Given integers a and b, return a tuple (x, y, g), where x*a + y*b == g == gcd(a, b). """ # Apply the Extended Euclidean Algorithm: # use the normal Euclidean Algorithm on the RHS # of the equations # u1*a + v1*b == r1 # u2*a + v2*b == r2 # But carry the LHS along for the ride. 
u1, v1, r1 = 1, 0, a u2, v2, r2 = 0, 1, b while r2: q = r1 // r2 u1, u2 = u2, u1-q*u2 v1, v2 = v2, v1-q*v2 r1, r2 = r2, r1-q*r2 assert u1*a + v1*b == r1 assert u2*a + v2*b == r2 if r1 < 0: u1, v1, r1 = -u1, -v1, -r1 # a_coefficient, b_coefficient, gcd return (u1, v1, r1) def crt(cong1, cong2): """ Apply the Chinese Remainder Theorem: If there are any integers x such that x == a1 (mod n1) and x == a2 (mod n2), then there are integers a and n such that the above congruences both hold iff x == a (mod n) Given two compatible congruences (a1, n1), (a2, n2), return a single congruence (a, n) that is equivalent to both of the given congruences at the same time. Not all congruences are compatible. For example, there are no solutions to x == 1 (mod 2) and x == 2 (mod 4). For congruences (a1, n1), (a2, n2) to be compatible, it is sufficient, but not necessary, that gcd(n1, n2) == 1. """ a1, n1 = cong1 a2, n2 = cong2 c1, c2, g = bezout(n1, n2) assert n1*c1 + n2*c2 == g if (a1 - a2) % g != 0: raise ValueError(f"Incompatible congruences {cong1} and {cong2}.") lcm = n1 // g * n2 rem = (a1*c2*n2 + a2*c1*n1)//g return rem % lcm, lcm assert crt((1,4),(2,3)) == (5, 12) assert crt((1,6),(7,4)) == (7, 12) ---------- components: Library (Lib) messages: 362106 nosy: Dennis Sweeney priority: normal severity: normal status: open title: Bezout and Chinese Remainder Theorem in the Standard Library? 
type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 17:12:28 2020 From: report at bugs.python.org (Luca) Date: Sun, 16 Feb 2020 22:12:28 +0000 Subject: [New-bugs-announce] [issue39658] Include user scripts folder to PATH on Windows Message-ID: <1581891148.96.0.449800548141.issue39658@roundup.psfhosted.org> New submission from Luca : When installing Python on Windows, and selecting the option "Add Python to PATH", the following folders are added to the "PATH" environment variable: - C:\Users\[username]\AppData\Local\Programs\Python\Python38\Scripts\ - C:\Users\[username]\AppData\Local\Programs\Python\Python38\ However, the following folder should also be added, _before_ the other two: - C:\Users\[username]\AppData\Roaming\Python\Python38\Scripts\ This is needed to correctly expose scripts of packages installed with `pip install --user` (`pip` emits a warning when installing a script with the `--user` flag if that folder is not in "PATH"). ---------- components: Installation messages: 362108 nosy: lucatrv priority: normal severity: normal status: open title: Include user scripts folder to PATH on Windows _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 16 19:57:42 2020 From: report at bugs.python.org (Barney Gale) Date: Mon, 17 Feb 2020 00:57:42 +0000 Subject: [New-bugs-announce] [issue39659] pathlib calls `os.getcwd()` without using accessor Message-ID: <1581901062.0.0.795874471606.issue39659@roundup.psfhosted.org> New submission from Barney Gale : Whereas most calls to `os` functions from `pathlib.Path` methods happen via `pathlib._Accessor` methods, retrieving the current working directory does not. This problem occurs when calling the `pathlib.Path.cwd()`, `~resolve()` and `~absolute()` methods. 
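As a point of reference, the documented equivalence that this internal call implements can be checked directly (a minimal sketch; it does not touch the private accessor machinery the report is about):

```python
import os
import pathlib

# Path.cwd() is documented as returning a path object for the directory
# reported by os.getcwd(); the report concerns the internal route taken
# (a direct os.getcwd() call rather than the pathlib._Accessor layer).
assert pathlib.Path.cwd() == pathlib.Path(os.getcwd())
```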
---------- components: Library (Lib) messages: 362114 nosy: barneygale priority: normal severity: normal status: open title: pathlib calls `os.getcwd()` without using accessor versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 00:44:47 2020 From: report at bugs.python.org (Leonard Lausen) Date: Mon, 17 Feb 2020 05:44:47 +0000 Subject: [New-bugs-announce] [issue39660] Contextvars: Optional callbacks on state change Message-ID: <1581918287.85.0.898283372496.issue39660@roundup.psfhosted.org> New submission from Leonard Lausen : contextvars provide APIs to manage, store, and access context-local state. Unfortunately, if Python is used as a frontend for a native library (e.g. accessed via ctypes), and in case that the state of interest is managed in the native library, the contextvar API is insufficient. To support native libraries, instead of simply exposing the current state via `contextvar.get()`, the contextvar API could allow specification of callbacks to update the state in the native library. ---------- messages: 362118 nosy: leezu priority: normal severity: normal status: open title: Contextvars: Optional callbacks on state change versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 08:26:48 2020 From: report at bugs.python.org (Joe Cool) Date: Mon, 17 Feb 2020 13:26:48 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue39661=5D_TimedRotatingFile?= =?utf-8?q?Handler_doesn=E2=80=99t_handle_DST_switch_with_daily_rollover?= Message-ID: <1581946008.38.0.701805565144.issue39661@roundup.psfhosted.org> New submission from Joe Cool : TimedRotatingFileHandler doesn't handle the switch to/from DST when using daily/midnight rotation. It does not adjust the rollover time, so the rollover will be off by an hour. 
Parameters: when='midnight', utc=False ---------- components: Library (Lib) messages: 362140 nosy: snoopyjc priority: normal severity: normal status: open title: TimedRotatingFileHandler doesn't handle DST switch with daily rollover type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 08:51:38 2020 From: report at bugs.python.org (=?utf-8?b?56aP5rC46Zm95bmz?=) Date: Mon, 17 Feb 2020 13:51:38 +0000 Subject: [New-bugs-announce] [issue39662] Characters are garbled when displaying Byte data Message-ID: <1581947498.56.0.86314760045.issue39662@roundup.psfhosted.org> New submission from 福永陽平 : Hex data is garbled when displaying received data from serial. --- code --- recvMessage = serialPort.readline() print(recvMessage, end="\r\n") ------------ --- result --- b'ERXUDP FE80:0000:0000:0000:0280:8700:3015:64F5 FE80:0000:0000:0000:021D:1290:0003:8331 0E1A 0E1A 00808700301564F5 1 0012 \x10\x81\x00\x01\x02\x88\x01\x05\xff\x01r\x01\xe7\x04\x00\x00\x02\x04\r\n' -------------- Mysterious value of 0x01r. When the corresponding value is judged, it becomes 0x72. The correct behavior is... --- correct result --- b'ERXUDP FE80:0000:0000:0000:0280:8700:3015:64F5 FE80:0000:0000:0000:021D:1290:0003:8331 0E1A 0E1A 00808700301564F5 1 0012 \x10\x81\x00\x01\x02\x88\x01\x05\xff\x72\x01\xe7\x04\x00\x00\x02\x04\r\n' -------------- ---------- components: IO messages: 362145 nosy: 福永陽平 
priority: normal severity: normal status: open title: Characters are garbled when displaying Byte data type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 09:23:16 2020 From: report at bugs.python.org (Cheryl Sabella) Date: Mon, 17 Feb 2020 14:23:16 +0000 Subject: [New-bugs-announce] [issue39663] IDLE: Add additional tests for pyparse Message-ID: <1581949396.83.0.321867900398.issue39663@roundup.psfhosted.org> New submission from Cheryl Sabella : Per msg313179, Terry asked to see tests for when the find_good_parse_start() call returns 0 instead of None. There are two cases when a 0 might be returned: 1. If the code is on the first line in the editor beginning with one of the matching keywords and ending in ":\n", such as "def spam():\n". 2. If the code on the first line is entered as "def spam(", then the hyperparser adds the " \n" in its call to set_code and find_good_parse_start returns a 0. ---------- assignee: terry.reedy components: IDLE messages: 362149 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Add additional tests for pyparse versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 11:23:29 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 17 Feb 2020 16:23:29 +0000 Subject: [New-bugs-announce] [issue39664] Improve test coverage for operator module Message-ID: <1581956609.58.0.543593561813.issue39664@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : This ticket adds tests for operator module where some of the parts where not tested. Current coverage stands at 96.23% [0]. The added tests will get it closer to 100% and will help in testing the Python implementation of operator module. 
[0] https://codecov.io/gh/python/cpython/branch/master/history/Lib/operator.py ---------- components: Tests messages: 362150 nosy: xtreak priority: normal severity: normal status: open title: Improve test coverage for operator module type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 12:00:26 2020 From: report at bugs.python.org (ppperry) Date: Mon, 17 Feb 2020 17:00:26 +0000 Subject: [New-bugs-announce] [issue39665] Cryptic error message when creating types that don't include themselves in their MRO Message-ID: <1581958826.66.0.0275141327571.issue39665@roundup.psfhosted.org> New submission from ppperry : I was trying to create a class that didn't have any references to itself to test issue39382 and ran the following code: class Meta(type): def mro(cls): return type.mro(cls)[1:] class X(metaclass=Meta): pass This produced an extremely cryptic error message: Traceback (most recent call last): File "", line 1, in class X(metaclass=Meta): TypeError: super(type, obj): obj must be an instance or subtype of type While what I am trying to do may well not be supported, the error message referencing the `super` function, which I didn't use, is not helpful. 
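For contrast, here is a sketch (not taken from the report) showing that a custom mro() override works normally as long as the class itself stays in the result; the cryptic error only appears when the class is stripped out:

```python
class Meta(type):
    def mro(cls):
        # Returning the full result of type.mro() keeps cls first in the MRO.
        return type.mro(cls)

class X(metaclass=Meta):
    pass

assert X.__mro__ == (X, object)
```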
---------- components: Build, Interpreter Core messages: 362152 nosy: ppperry priority: normal severity: normal status: open title: Cryptic error message when creating types that don't include themselves in their MRO type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 13:12:00 2020 From: report at bugs.python.org (Cheryl Sabella) Date: Mon, 17 Feb 2020 18:12:00 +0000 Subject: [New-bugs-announce] [issue39666] IDLE: Factor out similar code in editor and hyperparser Message-ID: <1581963120.77.0.85896962977.issue39666@roundup.psfhosted.org> New submission from Cheryl Sabella : Under issue32989, there was discussion about refactoring duplicate code between hyperparser and editor. > Perhaps separate issue: the 'if use_ps1' statements in editor and hyperparser, and a couple of lines before, is nearly identical, and could be factored into a separate editor method that returns a parser instance ready for analysis. It could then be tested in isolation. The method should return a parser instance ready for analysis. ---------- assignee: terry.reedy components: IDLE messages: 362153 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Factor out similar code in editor and hyperparser type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 17:34:41 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Mon, 17 Feb 2020 22:34:41 +0000 Subject: [New-bugs-announce] [issue39667] Update zipfile.Path with zipfile 3.0 Message-ID: <1581978881.84.0.42755558271.issue39667@roundup.psfhosted.org> New submission from Jason R. Coombs : zipp 3.0 includes enhanced support for the .open() method as well as performance improvements in 2.2.1 (https://zipp.readthedocs.io/en/latest/history.html). 
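For reference, a minimal sketch of the zipfile.Path traversal API that these zipp releases track; read_text() is used here because the .open() semantics are exactly what changed between versions:

```python
import io
import zipfile

# Build a small in-memory archive, then read a member back via zipfile.Path.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")

root = zipfile.Path(zipfile.ZipFile(buf))
assert (root / "a.txt").read_text() == "hello"
```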
---------- components: Library (Lib) messages: 362158 nosy: jaraco priority: normal severity: normal status: open title: Update zipfile.Path with zipfile 3.0 versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 18:33:17 2020 From: report at bugs.python.org (=?utf-8?q?Grzegorz_Kraso=C5=84?=) Date: Mon, 17 Feb 2020 23:33:17 +0000 Subject: [New-bugs-announce] [issue39668] segmentation fault on calling __reversed__() Message-ID: <1581982397.85.0.278174574567.issue39668@roundup.psfhosted.org> New submission from Grzegorz Krasoń : This causes segmentation fault: list((lambda: None).__annotations__.__reversed__()) ---------- components: Interpreter Core messages: 362164 nosy: Grzegorz Krasoń priority: normal severity: normal status: open title: segmentation fault on calling __reversed__() type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 17 22:27:33 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 18 Feb 2020 03:27:33 +0000 Subject: [New-bugs-announce] [issue39669] macOS test failures Message-ID: <1581996453.92.0.653511763632.issue39669@roundup.psfhosted.org> New submission from Terry J. Reedy : macOS test failed twice for PR-18536, for reasons unrelated to the IDLE test additions. Two pages gave completely different reasons. https://github.com/python/cpython/pull/18536/checks?check_run_id=451798955 clang: warning: -framework Tk: 'linker' input unused [-Wunused-command-line-argument] In file included from /Users/runner/runners/2.164.0/work/cpython/cpython/Modules/_tkinter.c:48: /Applications/Xcode_11.3.1.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/tk.h:86:11: fatal error: 'X11/Xlib.h' file not found # include <X11/Xlib.h> ^~~~~~~~~~~~ 1 error generated. Python build finished successfully! 
But no tests are listed. >From clicking '...' on above page, View raw logs, https://pipelines.actions.githubusercontent.com/E9sxbx8BNoRYbzXilV3t7ZRT2AjSeiVsTIIDUiDv0jTXfwuZPt/_apis/pipelines/1/runs/4122/signedlogcontent/6?urlExpires=2020-02-18T02%3A39%3A17.4773408Z&urlSigningMethod=HMACV1&urlSignature=ZpdM7bjMgqeUyUCyD4TLVZRYpMxqvYw%2BA9bEs0qCKfE%3D 2020-02-18T02:24:55.9857810Z ====================================================================== 2020-02-18T02:24:55.9858110Z FAIL: test_case_insensitivity (test.test_importlib.extension.test_case_sensitivity.Source_ExtensionModuleCaseSensitivityTest) 2020-02-18T02:24:55.9858780Z ---------------------------------------------------------------------- 2020-02-18T02:24:55.9858930Z Traceback (most recent call last): 2020-02-18T02:24:55.9859090Z File "/Users/runner/runners/2.164.0/work/cpython/cpython/Lib/test/test_importlib/extension/test_case_sensitivity.py", line 36, in test_case_insensitivity 2020-02-18T02:24:55.9859680Z self.assertTrue(hasattr(loader, 'load_module')) 2020-02-18T02:24:55.9859810Z AssertionError: False is not true This happened with 2 subtests. ====================================================================== 2020-02-18T02:24:55.9860160Z FAIL: test_insensitive (test.test_importlib.source.test_case_sensitivity.Frozen_CaseSensitivityTestPEP302) 2020-02-18T02:24:55.9860780Z ---------------------------------------------------------------------- 2020-02-18T02:24:55.9860940Z Traceback (most recent call last): 2020-02-18T02:24:55.9861090Z File "/Users/runner/runners/2.164.0/work/cpython/cpython/Lib/test/test_importlib/source/test_case_sensitivity.py", line 57, in test_insensitive 2020-02-18T02:24:55.9861400Z self.assertIsNotNone(insensitive) 2020-02-18T02:24:55.9861530Z AssertionError: unexpectedly None 4 subtests failed. 
---------- components: Tests, macOS messages: 362170 nosy: lukasz.langa, ned.deily, ronaldoussoren, terry.reedy priority: normal severity: normal status: open title: macOS test failures type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 04:10:39 2020 From: report at bugs.python.org (ilya) Date: Tue, 18 Feb 2020 09:10:39 +0000 Subject: [New-bugs-announce] [issue39670] 2to3 fix_apply tries to fix user-defined apply function calls Message-ID: <1582017039.66.0.352808121621.issue39670@roundup.psfhosted.org> New submission from ilya : Consider the following code: def apply(a, b): print(a) print(b) apply(1, 1) 2to3 suggests to fix it as follows: --- a.py (original) +++ a.py (refactored) @@ -2,4 +2,4 @@ print(a) print(b) -apply(1, 1) +(1)(*1) ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 362178 nosy: ilya priority: normal severity: normal status: open title: 2to3 fix_apply tries to fix user-defined apply function calls type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 04:33:32 2020 From: report at bugs.python.org (Tom Pohl) Date: Tue, 18 Feb 2020 09:33:32 +0000 Subject: [New-bugs-announce] [issue39671] Mention in docs that asyncio.FIRST_COMPLETED does not guarantee the completion of no more than one task Message-ID: <1582018412.2.0.458600281682.issue39671@roundup.psfhosted.org> New submission from Tom Pohl : Currently, the documentation of asyncio.wait gives the impression that using FIRST_COMPLETED guarantees the completion of no more than one task. In reality, the number of completed task after asyncio.wait can be larger than one. While this behavior (exactly one complete task if no error or cancellation occurred) would be ultimately desirable, a sentence describing the current behavior would be helpful for new users of asyncio. 
---------- assignee: docs at python components: Documentation messages: 362181 nosy: docs at python, tom.pohl priority: normal severity: normal status: open title: Mention in docs that asyncio.FIRST_COMPLETED does not guarantee the completion of no more than one task type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 04:51:02 2020 From: report at bugs.python.org (zd nex) Date: Tue, 18 Feb 2020 09:51:02 +0000 Subject: [New-bugs-announce] [issue39672] SIGSEGV crash on shutdown with shelve & c pickle Message-ID: <1582019462.33.0.255360625193.issue39672@roundup.psfhosted.org> New submission from zd nex : Hello, so I was transferring some of our old code from Python 2.7 and found that the new version seems to crash quite a lot. After some digging (good thing faulthandler) I think I tracked it down to the Shelve.__del__ method going into the C pickle module (not the Python one). Here is the crash itself. The attached zip has 3 files. When shelve.close is used it does not seem to crash every time. $python3.8 -X faulthandler ce_test_2.py start end Fatal Python error: Segmentation fault Current thread 0x00007fb22e299740 (most recent call first): File "/usr/lib/python3.8/shelve.py", line 124 in __setitem__ File "/usr/lib/python3.8/shelve.py", line 168 in sync File "/usr/lib/python3.8/shelve.py", line 144 in close File "/usr/lib/python3.8/shelve.py", line 162 in __del__ Neoprávněný přístup do paměti (SIGSEGV) Code for crash is here: import shelve import material data = shelve.open("test3", flag="c",writeback=True) def test_shelve(data): for k,v in data.items(): pass print("start") test_shelve(data) #data.close() #fixes SIGSEGV at shutdown #actually problem is in c pickle module; when Python pickle module is used it works print("end") #after this it is crash The code just loads a module and a shelve and opens a file. 
Then in another function it cycles through the data, and that creates a crash in the C pickle module at shutdown. The weird thing is that when the cycle through the data is not in a function, it does not crash. The crash can also be avoided when the C pickle is traded for the Python pickle. In the REPL it is quite similar: just list() on shelve.items() and exit() makes Python crash. Python 3.8.1 (default, Dec 22 2019, 08:15:39) [GCC 7.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import shelve >>> import material >>> data = shelve.open("test3", flag="c",writeback=True) >>> list(data.items()) [('H1615', Material(name='Třešeň Romana', code='H1615', vars=0))] >>> exit() Fatal Python error: Segmentation fault Current thread 0x00007f14a2546740 (most recent call first): File "/usr/lib/python3.8/shelve.py", line 124 in __setitem__ File "/usr/lib/python3.8/shelve.py", line 168 in sync File "/usr/lib/python3.8/shelve.py", line 144 in close File "/usr/lib/python3.8/shelve.py", line 162 in __del__ Neoprávněný přístup do paměti (SIGSEGV) Hopefully you can fix this. 
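A workaround consistent with the report's own observation (uncommenting data.close() avoids the crash) is to close the shelf deterministically instead of relying on __del__ at interpreter shutdown; shelve.Shelf has supported the context-manager protocol since Python 3.4:

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo")

# The with-block closes (and, for writeback=True, syncs) the shelf while the
# interpreter is still fully alive, so __del__ has nothing left to do.
with shelve.open(path, flag="c", writeback=True) as data:
    data["k"] = [1, 2, 3]

with shelve.open(path) as data:
    assert data["k"] == [1, 2, 3]
```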
---------- components: Library (Lib) files: test_crash_shelve.zip messages: 362186 nosy: zd nex priority: normal severity: normal status: open title: SIGSEGV crash on shutdown with shelve & c pickle type: crash versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48899/test_crash_shelve.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 04:51:59 2020 From: report at bugs.python.org (YoSTEALTH) Date: Tue, 18 Feb 2020 09:51:59 +0000 Subject: [New-bugs-announce] [issue39673] TimeoutError Message-ID: <1582019519.42.0.474248510986.issue39673@roundup.psfhosted.org> New submission from YoSTEALTH : import os try: no = -62 raise OSError(-no, os.strerror(-no)) except TimeoutError: print('Success') except OSError as e: print('Failed:', e) # Failed: [Errno 62] Timer expired Shouldn't `TimeoutError` catch this error? ---------- messages: 362187 nosy: YoSTEALTH priority: normal severity: normal status: open title: TimeoutError type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 06:26:29 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 18 Feb 2020 11:26:29 +0000 Subject: [New-bugs-announce] [issue39674] Keep deprecated features in Python 3.9 to ease migration from Python 2.7, but remove in Python 3.10 Message-ID: <1582025189.05.0.731615044433.issue39674@roundup.psfhosted.org> New submission from STINNER Victor : Following discussion on python-dev, I propose to revert the removal of a few deprecated functions to keep them in Python 3.9, and only remove them in Python 3.10. 
Please see the following email for the longer rationale, and the discussion for further details: https://mail.python.org/archives/list/python-dev at python.org/thread/EYLXCGGJOUMZSE5X35ILW3UNTJM3MCRE/ With Python 3.8, it was possible to have a single code base working on Python 2.7 and 3.8. Some functions emits DeprecationWarning, but these warnings are ignored (silent) by default. With removed deprecated functions in Python 3.9, *new* code is required to support Python 2.7. The problem is that Python 2.7 is no longer supported. Adding new code to support Python 2.7 sounds painful. Dropping Python 2.7 support isn't free. Projects have to drop Python 2 code, drop CI tests on Python 2, warn users, etc. The idea is to give maintainers one more year (until Python 3.10) to organize their project to schedule properly the removal of Python 2 support. The first motivation is to ease adoption of Python 3.9. -- I propose to start with reverting the removal of collections aliases to Abstract Base Classes (ABC) like collections.Mapping alias to collections.abc.Mapping. Removing these aliases is the change which caused most issues when testing Python projects on Python 3.9. I also propose to modify the What's New In Python 3.9 document to strongly suggest to test your applications with -W default or even -W error to see DeprecationWarning and PendingDeprecationWarning. 
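A sketch of the kind of check this advice implies, assuming only the documented warnings machinery (on 3.10 and later, where the aliases were ultimately removed, the import fails instead of warning):

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        from collections import Mapping  # alias of collections.abc.Mapping
        status = "alias importable without warning"
    except DeprecationWarning:
        status = "alias emits DeprecationWarning"   # Python 3.3-3.9
    except ImportError:
        status = "alias removed"                    # Python 3.10+

print(status)
```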
---------- components: Library (Lib) messages: 362196 nosy: vstinner priority: normal severity: normal status: open title: Keep deprecated features in Python 3.9 to ease migration from Python 2.7, but remove in Python 3.10 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 06:35:55 2020 From: report at bugs.python.org (gaborbernat) Date: Tue, 18 Feb 2020 11:35:55 +0000 Subject: [New-bugs-announce] [issue39675] forked process in multiprocessing does not honour atexit Message-ID: <1582025755.98.0.0767083175106.issue39675@roundup.psfhosted.org> New submission from gaborbernat : I've talked with Pablo about this in person, and as advised opening the issue here now. I've discovered that forked processes do not honour atexit registrations. See the following example code: from multiprocessing import Process, set_start_method import time import os import atexit def cleanup(): print(f"cleanup {os.getpid()}") atexit.register(cleanup) def run(): time.sleep(0.1) print(f"process done {os.getpid()}") # atexit._run_exitfuncs() if __name__ == "__main__": set_start_method("fork") process = Process(target=run) process.start() process.join() print("app finished") In case of a forked process childs the atexit is never executed (note it works if I ran them manually at the end of the child process; so they're registered correctly). Switching to spawn method makes it work as expected. The behaviour is the same even if you call register within the child process (as opposed to being inherited during forking). Also found this StackOverflow question that mentions this https://stackoverflow.com/a/26476585. At the very least the documentation should explain this; though I'd expect atexit to be called before finalization of the fork processes (assuming the child process exits with 0 exit code). 
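One plausible mechanism, sketched here with a plain subprocess rather than the multiprocessing internals: on POSIX, forked children exit via os._exit(), which skips atexit processing, whereas sys.exit() runs it:

```python
import subprocess
import sys
import textwrap

def run(exit_call):
    # `exit_call` is interpolated into a tiny child program that registers
    # an atexit handler and then terminates the requested way.
    prog = textwrap.dedent(f"""\
        import atexit, os, sys
        atexit.register(lambda: print("atexit ran"))
        {exit_call}
    """)
    result = subprocess.run([sys.executable, "-c", prog],
                            capture_output=True, text=True)
    return result.stdout

print(repr(run("sys.exit(0)")))  # 'atexit ran\n' -> handlers run
print(repr(run("os._exit(0)")))  # ''             -> handlers skipped
```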
---------- messages: 362197 nosy: davin, gaborbernat, pablogsal, pitrou priority: normal severity: normal status: open title: forked process in multiprocessing does not honour atexit _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 08:01:52 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 18 Feb 2020 13:01:52 +0000 Subject: [New-bugs-announce] [issue39676] test_shutil fails with OSError: [Errno 28] No space left on device on "PPC64LE Fedora Stable LTO + PGO 3.x" buildbot Message-ID: <1582030912.19.0.281317890533.issue39676@roundup.psfhosted.org> New submission from STINNER Victor : PPC64LE Fedora Stable LTO + PGO 3.x: https://buildbot.python.org/all/#/builders/449/builds/31 Example: ====================================================================== ERROR: test_big_chunk (test.test_shutil.TestZeroCopySendfile) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.lto-pgo/build/Lib/test/test_shutil.py", line 2405, in test_big_chunk shutil._fastcopy_sendfile(src, dst) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.lto-pgo/build/Lib/shutil.py", line 163, in _fastcopy_sendfile raise err from None File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-ppc64le.lto-pgo/build/Lib/shutil.py", line 149, in _fastcopy_sendfile sent = os.sendfile(outfd, infd, offset, blocksize) OSError: [Errno 28] No space left on device: '@test_3252264_tmp' -> '@test_3252264_tmp2' ---------- components: Tests messages: 362203 nosy: vstinner priority: normal severity: normal status: open title: test_shutil fails with OSError: [Errno 28] No space left on device on "PPC64LE Fedora Stable LTO + PGO 3.x" buildbot versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at 
bugs.python.org Tue Feb 18 08:20:52 2020 From: report at bugs.python.org (thautwarm) Date: Tue, 18 Feb 2020 13:20:52 +0000 Subject: [New-bugs-announce] [issue39677] 3.6+ documentation for MAKE_FUNCTION Message-ID: <1582032052.37.0.86110712101.issue39677@roundup.psfhosted.org> New submission from thautwarm : LINK: https://docs.python.org/3.6/library/dis.html?highlight=bytecode#opcode-MAKE_FUNCTION To avoid confusion, MAKE_FUNCTION(argc) should be MAKE_FUNCTION(flag): since 3.6 the operand of MAKE_FUNCTION never means `argcount`. ---------- assignee: docs at python components: Documentation messages: 362208 nosy: docs at python, thautwarm priority: normal severity: normal status: open title: 3.6+ documentation for MAKE_FUNCTION type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 10:13:01 2020 From: report at bugs.python.org (Thomas Moreau) Date: Tue, 18 Feb 2020 15:13:01 +0000 Subject: [New-bugs-announce] [issue39678] RFC improve readability of _queue_management_worker for ProcessPoolExecutor Message-ID: <1582038781.26.0.540574827732.issue39678@roundup.psfhosted.org> New submission from Thomas Moreau : As discussed in GH#17670, the `_queue_management_worker` function has grown quite long and complicated. It could be turned into an object with a bunch of short and readable helper methods. 
---------- components: Library (Lib) messages: 362218 nosy: pitrou, tomMoral priority: normal severity: normal status: open title: RFC improve readability of _queue_management_worker for ProcessPoolExecutor versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 14:16:15 2020 From: report at bugs.python.org (Viktor Roytman) Date: Tue, 18 Feb 2020 19:16:15 +0000 Subject: [New-bugs-announce] [issue39679] functools: singledispatchmethod doesn't work with classmethod Message-ID: <1582053375.25.0.178240077179.issue39679@roundup.psfhosted.org> New submission from Viktor Roytman : I couldn't get the example given for the interaction between @singledispatchmethod and @classmethod to work https://docs.python.org/3/library/functools.html?highlight=singledispatch#functools.singledispatchmethod from functools import singledispatchmethod class Negator: @singledispatchmethod @classmethod def neg(cls, arg): raise NotImplementedError("Cannot negate a") @neg.register @classmethod def _(cls, arg: int): return -arg @neg.register @classmethod def _(cls, arg: bool): return not arg if __name__ == "__main__": print(Negator.neg(0)) print(Negator.neg(False)) Leads to $ python -m bad_classmethod_as_documented Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/viktor/scratch/bad_classmethod_as_documented.py", line 4, in <module> class Negator: File "/home/viktor/scratch/bad_classmethod_as_documented.py", line 12, in Negator def _(cls, arg: int): File "/usr/lib/python3.8/functools.py", line 906, in register return self.dispatcher.register(cls, func=method) File "/usr/lib/python3.8/functools.py", line 848, in register raise TypeError( TypeError: Invalid first argument to `register()`: .
Use either `@register(some_class)` or plain `@register` on an annotated function. Curiously, @staticmethod does work, but not as documented (don't decorate the actual implementations): from functools import singledispatchmethod class Negator: @singledispatchmethod @staticmethod def neg(arg): raise NotImplementedError("Cannot negate a") @neg.register def _(arg: int): return -arg @neg.register def _(arg: bool): return not arg if __name__ == "__main__": print(Negator.neg(0)) print(Negator.neg(False)) Leads to $ python -m good_staticmethod 0 True Removing @classmethod from the implementation methods doesn't work, though Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/viktor/scratch/bad_classmethod_alternative.py", line 20, in <module> print(Negator.neg(0)) File "/usr/lib/python3.8/functools.py", line 911, in _method return method.__get__(obj, cls)(*args, **kwargs) TypeError: _() missing 1 required positional argument: 'arg' ---------- components: Library (Lib) messages: 362233 nosy: Viktor Roytman priority: normal severity: normal status: open title: functools: singledispatchmethod doesn't work with classmethod type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 19:56:00 2020 From: report at bugs.python.org (Alexander Belopolsky) Date: Wed, 19 Feb 2020 00:56:00 +0000 Subject: [New-bugs-announce] [issue39680] datetime.astimezone() method does not handle invalid local times as required by PEP 495 Message-ID: <1582073760.64.0.932670669115.issue39680@roundup.psfhosted.org> New submission from Alexander Belopolsky : Let g be an invalid time in the New York spring-forward gap: >>> g = datetime(2020, 3, 8, 2, 30) According to PEP 495, conversion of such an instance to
UTC should return a value that corresponds to a valid local time greater than g, but >>> print(g.astimezone(timezone.utc).astimezone()) 2020-03-08 01:30:00-05:00 Also, conversion of the same instance with fold=1 to UTC and back should produce a lesser time, but >>> print(g.replace(fold=1).astimezone(timezone.utc).astimezone()) 2020-03-08 03:30:00-04:00 Note that conversion to and from timestamp works correctly: >>> print(datetime.fromtimestamp(g.timestamp())) 2020-03-08 03:30:00 >>> print(datetime.fromtimestamp(g.replace(fold=1).timestamp())) 2020-03-08 01:30:00 ---------- assignee: belopolsky messages: 362241 nosy: belopolsky, p-ganssle priority: normal severity: normal status: open title: datetime.astimezone() method does not handle invalid local times as required by PEP 495 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 20:04:49 2020 From: report at bugs.python.org (Nathan Goldbaum) Date: Wed, 19 Feb 2020 01:04:49 +0000 Subject: [New-bugs-announce] [issue39681] pickle.load expects an object that implements readinto Message-ID: <1582074289.01.0.450273183434.issue39681@roundup.psfhosted.org> New submission from Nathan Goldbaum : As of https://github.com/python/cpython/pull/7076, it looks like at least the C implementation of pickle.load expects the file argument to implement readinto: https://github.com/python/cpython/blob/ffd9753a944916ced659b2c77aebe66a6c9fbab5/Modules/_pickle.c#L1617-L1622 This is a change in behavior relative to previous versions of Python, and I don't see it mentioned in PEP 574 or in the pull request, so I'm not sure why it was changed. This change breaks some PyTorch tests (see https://github.com/pytorch/pytorch/issues/32289) and affects at least one PyTorch user, although I don't have full details there.
I can try to fix this on the PyTorch side but I first want to check that this was an intentional change on the Python side of things. ---------- components: Library (Lib) messages: 362242 nosy: Nathan.Goldbaum priority: normal severity: normal status: open title: pickle.load expects an object that implements readinto versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 21:10:35 2020 From: report at bugs.python.org (Barney Gale) Date: Wed, 19 Feb 2020 02:10:35 +0000 Subject: [New-bugs-announce] [issue39682] pathlib.Path objects can be used as context managers Message-ID: <1582078235.85.0.126538415089.issue39682@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path` objects can be used as context managers, but this functionality is undocumented and makes little sense. Example: >>> import pathlib >>> root = pathlib.Path("/") >>> with root: ... print(1) ... 1 >>> with root: ... print(2) ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/barney/.pyenv/versions/3.7.3/lib/python3.7/pathlib.py", line 1028, in __enter__ self._raise_closed() File "/home/barney/.pyenv/versions/3.7.3/lib/python3.7/pathlib.py", line 1035, in _raise_closed raise ValueError("I/O operation on closed path") ValueError: I/O operation on closed path `Path` objects don't acquire any resources on __new__/__init__/__enter__, nor do they release any resources on __exit__. The whole notion of the path being `_closed` seems to exist purely to make impure `Path` methods unusable after exiting from the context manager. I can't personally think of a compelling use case for this, and suggest that it be removed.
---------- components: Library (Lib) messages: 362244 nosy: barneygale priority: normal severity: normal status: open title: pathlib.Path objects can be used as context managers versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 18 21:18:46 2020 From: report at bugs.python.org (ilya) Date: Wed, 19 Feb 2020 02:18:46 +0000 Subject: [New-bugs-announce] [issue39683] 2to3 fix_exitfunc suggests duplicated import of atexit module Message-ID: <1582078726.23.0.378327205623.issue39683@roundup.psfhosted.org> New submission from ilya : Consider the following code: import sys def foo(): print(1) def bar(): print(2) if input("case: ") == 1: sys.exitfunc = foo else: sys.exitfunc = bar 2to3 -f exitfunc suggests fixing it as follows: --- a.py (original) +++ a.py (refactored) @@ -1,4 +1,6 @@ import sys +import atexit +import atexit def foo(): print(1) @@ -7,6 +9,6 @@ print(2) if input("case: ") == 1: - sys.exitfunc = foo + atexit.register(foo) else: - sys.exitfunc = bar + atexit.register(bar) So it seems that it produces one import of the atexit module for each use of sys.exitfunc.
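[Editor's note] The fix the report implies is to add the `import atexit` line only when one is not already present, which is what 2to3's own `touch_import` helper in `lib2to3.fixer_util` is intended to do. A standalone sketch of that idea, operating on plain source lines rather than the 2to3 parse tree (function name and placement heuristic are mine):

```python
def add_import_once(lines, module):
    """Insert an import for `module`, unless one is already present."""
    stmt = f"import {module}"
    if any(line.strip() == stmt for line in lines):
        return lines  # already imported; do nothing
    # Place the new import just after the last top-level import line.
    last = max((i for i, line in enumerate(lines)
                if line.startswith(("import ", "from "))),
               default=-1)
    lines.insert(last + 1, stmt)
    return lines

src = ["import sys", "", "sys.exitfunc = foo", "sys.exitfunc = bar"]
# Two uses of sys.exitfunc, but only one "import atexit" is ever added.
for _ in range(2):
    src = add_import_once(src, "atexit")
print(src.count("import atexit"))  # 1
```

The idempotence check (the early `return`) is the part the fixer is missing.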
---------- components: 2to3 (2.x to 3.x conversion tool) messages: 362245 nosy: ilya priority: normal severity: normal status: open title: 2to3 fix_exitfunc suggests duplicated import of atexit module type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 00:14:59 2020 From: report at bugs.python.org (Andy Lester) Date: Wed, 19 Feb 2020 05:14:59 +0000 Subject: [New-bugs-announce] [issue39684] PyUnicode_IsIdentifier has two if/thens that can be combined Message-ID: <1582089299.55.0.574843604033.issue39684@roundup.psfhosted.org> New submission from Andy Lester : These two if/thens can be combined if (ready) { kind = PyUnicode_KIND(self); data = PyUnicode_DATA(self); } else { wstr = _PyUnicode_WSTR(self); } Py_UCS4 ch; if (ready) { ch = PyUnicode_READ(kind, data, 0); } else { ch = wstr[0]; } ---------- components: Interpreter Core messages: 362250 nosy: petdance priority: normal severity: normal status: open title: PyUnicode_IsIdentifier has two if/thens that can be combined _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 01:07:40 2020 From: report at bugs.python.org (Brian May) Date: Wed, 19 Feb 2020 06:07:40 +0000 Subject: [New-bugs-announce] [issue39685] Python 3.8 regression Socket operation on non-socket Message-ID: <1582092460.6.0.713532643736.issue39685@roundup.psfhosted.org> New submission from Brian May : After upgrading to Python 3.8, users of sshuttle report seeing this error: Traceback (most recent call last): File "", line 1, in <module> File "assembler.py", line 38, in <module> File "sshuttle.server", line 298, in main File "/usr/lib/python3.8/socket.py", line 544, in fromfd return socket(family, type, proto, nfd) File "/usr/lib/python3.8/socket.py", line 231, in __init__ _socket.socket.__init__(self, family, type, proto, fileno) OSError: [Errno 88] Socket operation on non-socket
https://github.com/sshuttle/sshuttle/issues/381 The cause of the error is this line: https://github.com/sshuttle/sshuttle/blob/6ad4473c87511bcafaec3d8d0c69dfcb166b48ed/sshuttle/server.py#L297 which does: socket.fromfd(sys.stdin.fileno(), socket.AF_INET, socket.SOCK_STREAM) socket.fromfd(sys.stdout.fileno(), socket.AF_INET, socket.SOCK_STREAM) Where sys.stdin and sys.stdout are stdin/stdout provided by the ssh server when it ran our remote ssh process. I believe this change in behavior is as a result of a fix for the following bug: https://bugs.python.org/issue35415 I am wondering if this is a bug in Python for causing such a regression, or a bug in sshuttle. Possibly sshuttle is using socket.fromfd in a way that was never intended? Would appreciate an authoritative answer on this. Thanks ---------- components: IO messages: 362255 nosy: brian priority: normal severity: normal status: open title: Python 3.8 regression Socket operation on non-socket versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 01:23:22 2020 From: report at bugs.python.org (Richard K) Date: Wed, 19 Feb 2020 06:23:22 +0000 Subject: [New-bugs-announce] [issue39686] add dump_json to ast module Message-ID: <1582093402.26.0.349380477656.issue39686@roundup.psfhosted.org> New submission from Richard K : Currently within the ast module, `dump` generates a string representation of the AST for example, >>> ast.dump(node) 'Module(body=[], type_ignores=[])' The proposed enhancement would provide a complementary function, `dump_json` as in a json representation of the ast. This would be useful for those who would like to benefit from the utilities of the json module for formatting, pretty-printing, and the like. It would also be useful for those who want to serialize the AST or export it in a form that can be consumed in an other programming language. 
A simplified example, >>> import ast >>> node = ast.parse('') >>> ast.dump_json(node) {'Module': {'body': [], 'type_ignores': []}} A simplified example of using `ast.dump_json` with the json module, >>> import json >>> json.dumps(ast.dump_json(node)) '{"Module": {"body": [], "type_ignores": []}}' ---------- components: Library (Lib) messages: 362256 nosy: sparverius priority: normal severity: normal status: open title: add dump_json to ast module type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 06:40:24 2020 From: report at bugs.python.org (Yves) Date: Wed, 19 Feb 2020 11:40:24 +0000 Subject: [New-bugs-announce] [issue39687] re.sub behaves inconsistent between versions with * repetition qualifier Message-ID: <1582112424.4.0.996005066696.issue39687@roundup.psfhosted.org> New submission from Yves : On different platforms and versions the following expression has different results: python -c 'import re; print(re.compile("(.*)", 0).sub("a\\1", "bc"))' As far as I observed: Linux/Python 3.6.9 => abc MacOS/Python 3.7.1 => abca Repl.it/Python 3.8.1 => abca MacOS/Python 2.7.17 => abc Linux/Python 2.7.17 => abc According to the documentation I would guess that "abc" is the correct return value.
The issue also occurs without compiling or a capture group: re.sub(".*", "a", "cb") a vs aa ---------- components: Regular Expressions messages: 362264 nosy: ezio.melotti, mrabarnett, slomo priority: normal severity: normal status: open title: re.sub behaves inconsistent between versions with * repetition qualifier versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 09:34:50 2020 From: report at bugs.python.org (Valentin Samir) Date: Wed, 19 Feb 2020 14:34:50 +0000 Subject: [New-bugs-announce] [issue39688] tarfile: GNU sparse 1.0 pax tar header offset not properly computed Message-ID: <1582122890.87.0.720726833574.issue39688@roundup.psfhosted.org> New submission from Valentin Samir : When tarfile opens a tar containing a sparse file whose actual data is bigger than 0o77777777777 bytes (~8GB), it fails to list members after this file. Because of this, it is impossible to access/extract files located after such a file inside the archive. A tar file presenting the issue is available at https://genua.fr/sample.tar.xz Uncompressed, the file is ~16GB. It contains two files: * disk.img a 50GB sparse file containing ~16GB of data * README.txt a simple text file containing "This last file is not properly listed" disk.img was generated using the following Python script: GB = 1024**3 buf = b"\xFF" * 1024**2 with open('disk.img', 'wb') as f: f.seek(10 * GB) wrotten = 0 while wrotten < 0o77777777777: wrotten += f.write(buf) f.flush() print(wrotten/0o77777777777 * 100, '%') f.seek(50 * GB - 1) f.write(b'\0') sample.tar was generated using GNU tar 1.30 on Debian 10 with the following command: tar --format pax -cvSf sample.tar disk.img README.txt The following script exposes the issue: import tarfile t = tarfile.open('sample.tar') print('members', t.getmembers()) print('offset', t.offset) Its output is: members [] offset 17179806208 members should also list README.txt.
I think I have found the root cause of the bug: Because the file is bigger than 0o77777777777, its size cannot be specified inside the tar ustar header, so a "size" pax extended header is generated. This header contains the full size of the file block in the tar. Since the file is sparse and uses sparse format 1.0, the file block contains first a sparse mapping, then the file data. So this block size is the size of the mapping added to the size of the data. Because the file is sparse, a GNU.sparse.realsize header is also added containing the full expanded file size (here 50GB). Here https://github.com/python/cpython/blob/4dee92b0ad9f4e3ea2fbbbb5253340801bb92dc7/Lib/tarfile.py#L1350 tarfile sets the tarinfo size to GNU.sparse.realsize (50GB), then, in this block https://github.com/python/cpython/blob/4dee92b0ad9f4e3ea2fbbbb5253340801bb92dc7/Lib/tarfile.py#L1297 the file offset is moved forward from GNU.sparse.realsize (50GB) instead of pax_headers["size"]. Moreover, the move is done from next.offset_data which is set at https://github.com/python/cpython/blob/master/Lib/tarfile.py#L1338 to after the sparse mapping. The move forward in the sparse file should be made from next.offset + BLOCKSIZE.
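[Editor's note] The overshoot described in this report can be illustrated in isolation. The sizes below are made-up stand-ins for the report's 16 GB of stored data vs. 50 GB of expanded data, and `blocks()` mirrors the 512-byte rounding tarfile performs internally; none of this is the actual tarfile code.

```python
BLOCKSIZE = 512  # tar archives are addressed in 512-byte blocks

def blocks(nbytes):
    # Round a byte count up to a whole number of 512-byte tar blocks.
    q, r = divmod(nbytes, BLOCKSIZE)
    return (q + bool(r)) * BLOCKSIZE

offset_data = 1536            # hypothetical start of the member's payload
pax_size = 16 * 1024**3       # pax "size" header: bytes actually stored in the tar
realsize = 50 * 1024**3       # GNU.sparse.realsize: size once the holes are restored

next_header_ok = offset_data + blocks(pax_size)    # where the next header really is
next_header_bug = offset_data + blocks(realsize)   # where a reader using realsize looks
print(next_header_bug - next_header_ok)  # 36507222016 (~34 GiB too far)
```

Advancing by the expanded size instead of the stored size skips far past the next member's header, which is why README.txt never shows up in getmembers().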
---------- components: Library (Lib) messages: 362275 nosy: Nit priority: normal severity: normal status: open title: tarfile: GNU sparse 1.0 pax tar header offset not properly computed type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 10:01:49 2020 From: report at bugs.python.org (Charalampos Stratakis) Date: Wed, 19 Feb 2020 15:01:49 +0000 Subject: [New-bugs-announce] [issue39689] test_struct failure on s390x Fedora Clang buildbot Message-ID: <1582124509.96.0.385275594117.issue39689@roundup.psfhosted.org> New submission from Charalampos Stratakis : The clang build was recently added for that buildbot and it seems on that particular architecture, test_struct fails with: ====================================================================== FAIL: test_bool (test.test_struct.StructTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.x.edelsohn-fedora-rawhide-z.clang-ubsan/build/Lib/test/test_struct.py", line 520, in test_bool self.assertTrue(struct.unpack('>?', c)[0]) AssertionError: False is not true https://buildbot.python.org/all/#/builders/488/builds/6 Fedora rawhide recently upgraded Clang to version 10. The rest of the architectures seem fine. 
---------- components: Tests messages: 362277 nosy: cstratak, vstinner priority: normal severity: normal status: open title: test_struct failure on s390x Fedora Clang buildbot versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 14:27:32 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 19 Feb 2020 19:27:32 +0000 Subject: [New-bugs-announce] [issue39690] Compiler warnings in unicodeobject.c Message-ID: <1582140452.34.0.85531597667.issue39690@roundup.psfhosted.org> New submission from Serhiy Storchaka : Objects/clinic/unicodeobject.c.h: In function ‘unicode_isidentifier’: Objects/unicodeobject.c:12245:22: warning: ‘wstr’ may be used uninitialized in this function [-Wmaybe-uninitialized] ch = wstr[i]; ~~~~^~~ Objects/unicodeobject.c:12212:14: note: ‘wstr’ was declared here wchar_t *wstr; ^~~~ Objects/unicodeobject.c: In function ‘PyUnicode_IsIdentifier’: Objects/unicodeobject.c:12245:22: warning: ‘wstr’ may be used uninitialized in this function [-Wmaybe-uninitialized] ch = wstr[i]; ~~~~^~~ ---------- components: Interpreter Core messages: 362288 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Compiler warnings in unicodeobject.c type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 18:02:30 2020 From: report at bugs.python.org (Maor Kleinberger) Date: Wed, 19 Feb 2020 23:02:30 +0000 Subject: [New-bugs-announce] [issue39691] Allow passing Pathlike objects to io.open_code Message-ID: <1582153350.45.0.111444771925.issue39691@roundup.psfhosted.org> New submission from Maor Kleinberger : As in many functions in Python 3, io.open_code should probably accept pathlike objects and not just path strings. Below is open_code's docstring: > Opens the provided file with the intent to import the contents.
> This may perform extra validation beyond open(), but is otherwise interchangeable with calling open(path, 'rb'). The second bit is not entirely true, as open accepts pathlike objects and open_code doesn't. Fixing this will help solve future bugs and existing bugs like https://bugs.python.org/issue39517 I'd be happy to open a pull request if it is agreed that this should be changed. ---------- components: Library (Lib) messages: 362292 nosy: kmaork priority: normal severity: normal status: open title: Allow passing Pathlike objects to io.open_code type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 19 19:11:43 2020 From: report at bugs.python.org (Niklas Smedemark-Margulies) Date: Thu, 20 Feb 2020 00:11:43 +0000 Subject: [New-bugs-announce] [issue39692] Subprocess using list vs string Message-ID: <1582157503.62.0.284670711586.issue39692@roundup.psfhosted.org> New submission from Niklas Smedemark-Margulies : Most (all?) of the functions in subprocess (run, Popen, etc) are supposed to accept either list or string, but the behavior when passing a list differs (and appears to be wrong). For example, see below - invoking the command "exit 1" should give a return code of 1, but when using a list, the return code is 0. 
``` >>> import subprocess >>> # Example using run >>> res1 = subprocess.run('exit 1', shell=True) >>> res1.returncode 1 >>> res2 = subprocess.run('exit 1'.split(), shell=True) >>> res2.returncode 0 >>> # Example using Popen >>> p1 = subprocess.Popen('exit 1', shell=True) >>> p1.communicate() (None, None) >>> p1.returncode 1 >>> p2 = subprocess.Popen('exit 1'.split(), shell=True) >>> p2.communicate() (None, None) >>> p2.returncode 0 ``` ---------- messages: 362294 nosy: nik-sm priority: normal severity: normal status: open title: Subprocess using list vs string type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 02:12:11 2020 From: report at bugs.python.org (Josh Rosenberg) Date: Thu, 20 Feb 2020 07:12:11 +0000 Subject: [New-bugs-announce] [issue39693] tarfile's extractfile documentation is misleading Message-ID: <1582182731.51.0.0387908483587.issue39693@roundup.psfhosted.org> New submission from Josh Rosenberg : The documentation for extractfile ( https://docs.python.org/3/library/tarfile.html#tarfile.TarFile.extractfile ) says: "Extract a member from the archive as a file object. member may be a filename or a TarInfo object. If member is a regular file or a link, an io.BufferedReader object is returned. Otherwise, None is returned." Before reading further, answer for yourself: What do you think happens when a provided filename doesn't exist, based on that documentation? In teaching a Python class that uses tarfile in the final project, and expects students to catch predictable errors (e.g. a random tarball being provided, rather than one produced by a different mode of the program with specific expected files) and convert them to user-friendly error messages, I've found this documentation to confuse students repeatedly (if they actually read it, rather than just guessing and checking interactively). 
Specifically, the documentation: 1. Says nothing about what happens if member doesn't exist (TarFile.getmember does mention KeyError, but extractfile doesn't describe itself in terms of getmember) 2. Loosely implies that it should return None in such a scenario "If member is a regular file or a link, an io.BufferedReader object is returned. Otherwise, None is returned." The intent is likely to mean "all other member types are None, and we're saying nothing about non-existent members", but everyone I've taught who has read the docs came away with a different impression until they tested it. Perhaps just reword from: "If member is a regular file or a link, an io.BufferedReader object is returned. Otherwise, None is returned." to: "If member is a regular file or a link, an io.BufferedReader object is returned. For all other existing members, None is returned. If member does not appear in the archive, KeyError is raised." Similar adjustments may be needed for extract, and/or both of them could be adjusted to explicitly refer to getmember by stating that filenames are converted to TarInfo objects via getmember. ---------- assignee: docs at python components: Documentation, Library (Lib) keywords: easy, newcomer friendly messages: 362298 nosy: docs at python, josh.r priority: normal severity: normal status: open title: tarfile's extractfile documentation is misleading versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 05:03:23 2020 From: report at bugs.python.org (Akos Kiss) Date: Thu, 20 Feb 2020 10:03:23 +0000 Subject: [New-bugs-announce] [issue39694] Incorrect dictionary unpacking when calling str.format Message-ID: <1582193003.32.0.513659901276.issue39694@roundup.psfhosted.org> New submission from Akos Kiss : My understanding was that in function calls, the keys in an **expression had to be strings. 
However, str.format seems to deviate from that and allows non-string keys in the mapping (and silently ignores them). Please, see the transcript below: >>> def f(): pass ... >>> def g(): pass ... >>> x = {None: ''} >>> y = {1: ''} >>> f(**x) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: f() keywords must be strings >>> f(**y) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: f() keywords must be strings >>> g(**x) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: g() keywords must be strings >>> g(**y) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: g() keywords must be strings >>> ''.format(**x) '' >>> ''.format(**y) '' I could reproduce this (incorrect?) behavior on macOS with python 3.4-3.7 and on Ubuntu 18.04 with python 3.6. ---------- messages: 362304 nosy: Akos Kiss priority: normal severity: normal status: open title: Incorrect dictionary unpacking when calling str.format type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 06:19:12 2020 From: report at bugs.python.org (Marco Sulla) Date: Thu, 20 Feb 2020 11:19:12 +0000 Subject: [New-bugs-announce] [issue39695] Failed to build _uuid module, but libraries was installed Message-ID: <1582197552.83.0.93580426376.issue39695@roundup.psfhosted.org> New submission from Marco Sulla : When I first ran `make` to compile Python 3.9, I had not installed some Debian development packages, like `uuid-dev`. So the `_uuid` module was not built. After installing the Debian package I re-ran `make`, but it failed to build the `_uuid` module. I had to manually edit `Modules/_uuidmodule.c` and remove all the `#ifdef` directives and leave only `#include <uuid/uuid.h>`. Maybe `HAVE_UUID_UUID_H` and `HAVE_UUID_H` are created at the `configure` phase only?
---------- components: Build messages: 362309 nosy: Marco Sulla priority: normal severity: normal status: open title: Failed to build _uuid module, but libraries was installed type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 06:25:21 2020 From: report at bugs.python.org (Marco Sulla) Date: Thu, 20 Feb 2020 11:25:21 +0000 Subject: [New-bugs-announce] [issue39696] Failed to build _ssl module, but libraries was installed Message-ID: <1582197921.98.0.104383393376.issue39696@roundup.psfhosted.org> New submission from Marco Sulla : Similarly to enhancement request #39695, I had failed to install the Debian package with the include files for SSL before compiling Python 3.9. After installing it, `make` still did not find the libraries and skipped building the _ssl module. Searching the internet, I found that doing: make clean ./configure etc make works. Maybe the SSL library check is done only at the `configure` phase?
---------- components: Build messages: 362311 nosy: Marco Sulla priority: normal severity: normal status: open title: Failed to build _ssl module, but libraries was installed type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 06:37:58 2020 From: report at bugs.python.org (Marco Sulla) Date: Thu, 20 Feb 2020 11:37:58 +0000 Subject: [New-bugs-announce] [issue39697] Failed to build with --with-cxx-main=g++-9.2.0 Message-ID: <1582198678.47.0.589589124523.issue39697@roundup.psfhosted.org> New submission from Marco Sulla : I tried to compile Python 3.9 with: CC=gcc-9.2.0 ./configure --enable-optimizations --with-lto --with-cxx-main=g++-9.2.0 make -j 2 I got this error: g++-9.2.0 -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-generate -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Programs/_testembed.o ./Programs/_testembed.c cc1plus: warning: ‘-Werror=’ argument ‘-Werror=implicit-function-declaration’ is not valid for C++ cc1plus: warning: command line option ‘-std=c99’ is valid for C/ObjC but not for C++ sed -e "s,@EXENAME@,/usr/local/bin/python3.9," < ./Misc/python-config.in >python-config.py LC_ALL=C sed -e 's,\$(\([A-Za-z0-9_]*\)),\$\{\1\},g' < Misc/python-config.sh >python-config gcc-9.2.0 -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-generate -I./Include/internal -I.
-I./Include -DPy_BUILD_CORE \ -DGITVERSION="\"`LC_ALL=C git --git-dir ./.git rev-parse --short HEAD`\"" \ -DGITTAG="\"`LC_ALL=C git --git-dir ./.git describe --all --always --dirty`\"" \ -DGITBRANCH="\"`LC_ALL=C git --git-dir ./.git name-rev --name-only HEAD`\"" \ -o Modules/getbuildinfo.o ./Modules/getbuildinfo.c In file included from ./Include/internal/pycore_atomic.h:15, from ./Include/internal/pycore_gil.h:11, from ./Include/internal/pycore_pystate.h:11, from ./Programs/_testembed.c:10: /usr/local/lib/gcc/x86_64-pc-linux-gnu/9.2.0/include/stdatomic.h:40:9: error: ‘_Atomic’ does not name a type I suppose `Programs/_testembed.c` is simply a C source file and must not be compiled with g++. PS: as a workaround, `--with-cxx-main=gcc-9.2.0` works, but probably it's not optimal. ---------- components: Build messages: 362313 nosy: Marco Sulla priority: normal severity: normal status: open title: Failed to build with --with-cxx-main=g++-9.2.0 type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 06:44:23 2020 From: report at bugs.python.org (Marco Sulla) Date: Thu, 20 Feb 2020 11:44:23 +0000 Subject: [New-bugs-announce] [issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers Message-ID: <1582199063.86.0.853270870549.issue39698@roundup.psfhosted.org> New submission from Marco Sulla : Python 3.9.0a3+ (heads/master-dirty:f2ee21d858, Feb 19 2020, 23:19:22) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import time >>> time.sleep(-1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: sleep length must be non-negative >>> import asyncio >>> async def f(): ... await asyncio.sleep(-1) ... print("no exception") ... >>> asyncio.run(f()) no exception I think `asyncio.sleep()` should also raise `ValueError` if the argument is less than zero.
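[Editor's note] A minimal sketch of the requested behavior, written as a user-level wrapper. The name `checked_sleep` is hypothetical; where such a check would actually live inside asyncio is a separate design question.

```python
import asyncio

async def checked_sleep(delay):
    # Mirror time.sleep()'s validation before delegating to asyncio.sleep().
    if delay < 0:
        raise ValueError("sleep length must be non-negative")
    await asyncio.sleep(delay)

try:
    asyncio.run(checked_sleep(-1))
except ValueError as exc:
    print(exc)  # sleep length must be non-negative
```

With the check in place, the negative-delay case fails loudly, matching time.sleep(), instead of returning immediately.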
---------- components: asyncio messages: 362314 nosy: Marco Sulla, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 07:23:26 2020 From: report at bugs.python.org (Ammar Askar) Date: Thu, 20 Feb 2020 12:23:26 +0000 Subject: [New-bugs-announce] [issue39699] Ubuntu Github action not fully running build process Message-ID: <1582201406.96.0.183994759448.issue39699@roundup.psfhosted.org> New submission from Ammar Askar : I think the Github action for building CPython on Ubuntu is accidentally caching the built Python files. If we take a look at: https://github.com/python/cpython/runs/455936632#step:7:1 and https://github.com/python/cpython/pull/18567/checks?check_run_id=457662461#step:8:1 It seems like it's running way too fast (and producing too little output) to actually be building all of CPython. Adding Steve who originally authored the action to the nosy list to see if they might have any insight. 
---------- components: Build messages: 362316 nosy: ammar2, steve.dower priority: normal severity: normal status: open title: Ubuntu Github action not fully running build process type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 08:00:09 2020 From: report at bugs.python.org (David) Date: Thu, 20 Feb 2020 13:00:09 +0000 Subject: [New-bugs-announce] [issue39700] asyncio.selector_events._SelectorTransport: Add logging when sock.getpeername() fails Message-ID: <1582203609.18.0.700176939953.issue39700@roundup.psfhosted.org> New submission from David : `sock.getpeername` can fail for multiple reasons (see https://pubs.opengroup.org/onlinepubs/7908799/xns/getpeername.html) but in `asyncio.selector_events._SelectorTransport` it's try/excepted without any logging of the error: ``` if 'peername' not in self._extra: try: self._extra['peername'] = sock.getpeername() except socket.error: self._extra['peername'] = None ``` This makes it very difficult to debug. Would it be OK if I added here a log with information on the error? Thanks! 
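One way to realize the suggestion, sketched outside asyncio — the `fill_peername` helper and its logger are illustrative, not the actual `_SelectorTransport` code — is to keep the fallback to None while recording why `getpeername()` failed:

```python
import logging
import socket

logger = logging.getLogger(__name__)

def fill_peername(sock, extra):
    # Illustrative stand-in for the try/except shown above: same
    # fallback to None, plus a log line explaining the failure.
    if 'peername' not in extra:
        try:
            extra['peername'] = sock.getpeername()
        except socket.error as exc:
            logger.warning("getpeername() failed: %r", exc)
            extra['peername'] = None
    return extra['peername']

# A fresh, unconnected socket has no peer, so getpeername() raises
# OSError; the helper logs the reason and stores None as before.
s = socket.socket()
print(fill_peername(s, {}))
s.close()
```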
---------- components: asyncio messages: 362317 nosy: asvetlov, dsternlicht, yselivanov priority: normal severity: normal status: open title: asyncio.selector_events._SelectorTransport: Add logging when sock.getpeername() fails versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 10:54:32 2020 From: report at bugs.python.org (Stefan Krah) Date: Thu, 20 Feb 2020 15:54:32 +0000 Subject: [New-bugs-announce] [issue39701] Azure Pipelines PR broken Message-ID: <1582214072.19.0.00954727477072.issue39701@roundup.psfhosted.org> Change by Stefan Krah : ---------- nosy: skrah priority: normal severity: normal status: open title: Azure Pipelines PR broken _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 11:12:25 2020 From: report at bugs.python.org (Brandt Bucher) Date: Thu, 20 Feb 2020 16:12:25 +0000 Subject: [New-bugs-announce] [issue39702] PEP 614: Relaxing Grammar Restrictions On Decorators Message-ID: <1582215145.16.0.34815642768.issue39702@roundup.psfhosted.org> New submission from Brandt Bucher : The attached PR implements PEP 614's revised grammar for decorators, with tests. In short: decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE becomes decorator: '@' namedexpr_test NEWLINE I'm marking it as DO-NOT-MERGE until the PEP is accepted, but code review is still appreciated. 
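As an illustration (assuming a Python 3.9+ interpreter with the new grammar; the names below are made up for the example), the change legalizes decorators the old `dotted_name` rule rejected, such as subscript expressions:

```python
# Stand-ins for, e.g., a list of GUI button signals.  Under PEP 614 any
# single expression may follow "@", including an item lookup like this,
# which was a SyntaxError under the old grammar.
handlers = [lambda func: func]

@handlers[0]
def on_click():
    return "clicked"

print(on_click())
```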
Discussion of the PEP itself should go to the Python-Dev thread: https://mail.python.org/archives/list/python-dev@python.org/thread/SLKFAR56RA6A533O5ZOZ7XTJ764EMB7I ---------- assignee: brandtbucher components: Interpreter Core messages: 362328 nosy: brandtbucher, gvanrossum priority: normal severity: normal status: open title: PEP 614: Relaxing Grammar Restrictions On Decorators type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 11:25:56 2020 From: report at bugs.python.org (Kostis Gourgoulias) Date: Thu, 20 Feb 2020 16:25:56 +0000 Subject: [New-bugs-announce] [issue39703] Floor division operator and floats Message-ID: <1582215956.02.0.936078092318.issue39703@roundup.psfhosted.org> New submission from Kostis Gourgoulias : This was brought to my attention by a colleague, Albert B. When considering the floor division // operator, 1//0.01 should return 100.0, but instead returns 99.0. My understanding is that this is because 0.01 is represented by Decimal('0.01000000000000000020816681711721685132943093776702880859375') which is greater than 0.01. math.floor(1/0.01) correctly outputs 100. Shouldn't the two approaches provide the same answer?
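The discrepancy can be reproduced directly, and `decimal.Decimal` shows the exact binary value behind it (a quick demonstration, not a fix):

```python
import math
from decimal import Decimal

# The double closest to 0.01 is slightly *larger* than 1/100:
print(Decimal(0.01))

# Hence the exact quotient 1 / float(0.01) is just under 100.  Floor
# division floors that exact quotient, giving 99.0; true division first
# rounds it to the nearest representable double, which is exactly 100.0.
print(1 // 0.01)             # 99.0
print(math.floor(1 / 0.01))  # 100
```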
---------- components: macOS messages: 362330 nosy: Kostis Gourgoulias, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Floor division operator and floats type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 20 18:38:42 2020 From: report at bugs.python.org (Stefan Krah) Date: Thu, 20 Feb 2020 23:38:42 +0000 Subject: [New-bugs-announce] [issue39704] Disable code coverage Message-ID: <1582241922.94.0.0257768022577.issue39704@roundup.psfhosted.org> New submission from Stefan Krah : The automated code coverage on GitHub is quite inaccurate and needlessly flags PRs as red. I'd prefer to make this opt-in. ---------- messages: 362367 nosy: skrah priority: normal severity: normal status: open title: Disable code coverage _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 04:24:31 2020 From: report at bugs.python.org (Mirwi) Date: Fri, 21 Feb 2020 09:24:31 +0000 Subject: [New-bugs-announce] [issue39705] Tutorial, 5.6 Looping Techniques, sorted() example Message-ID: <1582277071.44.0.766763125418.issue39705@roundup.psfhosted.org> New submission from Mirwi : >>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana'] >>> for f in sorted(set(basket)): ... print(f) ... apple banana orange pear Shouldn't 'apple' appear two times as basket is a list that allows duplicates, not a set? I'm just doing my first steps into Python and may be misled. In that case, sorry for the fuss.
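For reference, the deduplication in that tutorial example comes from `set()`, not from `sorted()` — a minimal comparison:

```python
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']

# sorted() alone keeps duplicates; wrapping the list in set() removes
# them before sorting.
print(sorted(basket))       # 'apple' and 'orange' appear twice
print(sorted(set(basket)))  # each fruit appears once
```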
---------- assignee: docs at python components: Documentation messages: 362395 nosy: docs at python, mirwi priority: normal severity: normal status: open title: Tutorial, 5.6 Looping Techniques, sorted() example type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 05:12:48 2020 From: report at bugs.python.org (Andrey Moiseev) Date: Fri, 21 Feb 2020 10:12:48 +0000 Subject: [New-bugs-announce] [issue39706] unittest.IsolatedAsyncioTestCase hangs on asyncio.CancelledError Message-ID: <1582279968.65.0.00583544355873.issue39706@roundup.psfhosted.org> New submission from Andrey Moiseev : The following code hangs: import asyncio import unittest class TestCancellation(unittest.IsolatedAsyncioTestCase): async def test_works(self): raise asyncio.CancelledError() if __name__ == '__main__': unittest.main() ---------- components: asyncio messages: 362402 nosy: Andrey Moiseev, asvetlov, yselivanov priority: normal severity: normal status: open title: unittest.IsolatedAsyncioTestCase hangs on asyncio.CancelledError type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 05:59:10 2020 From: report at bugs.python.org (Arn Vollebregt (KPN)) Date: Fri, 21 Feb 2020 10:59:10 +0000 Subject: [New-bugs-announce] [issue39707] Abstract property setter/deleter implementation not enforced. Message-ID: <1582282750.75.0.718234610951.issue39707@roundup.psfhosted.org> New submission from Arn Vollebregt (KPN) : When concretely implementing an abstract ABC class with an abstract property getter, setter and deleter it is not enforced that the setter and deleter are implemented.
Instead, the property is treated as a read-only property (as would normally be the case without a setter/deleter definition for a property) and the setter/deleter code from the abstract class is not present in the child class. I would expect a TypeError exception when an abstract property is defined with a getter, setter and deleter but only the getter is implemented in a subclass (as is the case when not implementing the property getter). As a fallback, I would find it acceptable for the code from the abstract class to be present in the child class, so at least the code that is defined there (in this case raising a NotImplementedError exception) would be executed. An interactive interpreter session to replicate this behavior: arn at hacktop:~$ python3 Python 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import abc >>> >>> # Define the (abstract) interface. ... class MyInterface(abc.ABC): ... ... # Property getter. ... @property ... @abc.abstractmethod ... def myProperty(self) -> str: ... raise NotImplementedError ... ... # Property setter. ... @myProperty.setter ... @abc.abstractmethod ... def myProperty(self, value: str) -> None: ... raise NotImplementedError ... ... # Property deleter. ... @myProperty.deleter ... @abc.abstractmethod ... def myProperty(self) -> None: ... raise NotImplementedError ... >>> # Implemented the interface. ... class MyImplementation(MyInterface): ... ... # No abstract method implementation(s). ... pass ... >>> # Creation of MyImplementation object raises TypeError as expected. ... obj = MyImplementation() Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: Can't instantiate abstract class MyImplementation with abstract methods myProperty >>> import dis >>> # The property getter code would raise an exception as defined in MyInterface. ... 
dis.dis(MyImplementation.myProperty.fget.__code__.co_code) 0 LOAD_GLOBAL 0 (0) 2 RAISE_VARARGS 1 4 LOAD_CONST 0 (0) 6 RETURN_VALUE >>> # The property setter code would raise an exception as defined in MyInterface. ... dis.dis(MyImplementation.myProperty.fset.__code__.co_code) 0 LOAD_GLOBAL 0 (0) 2 RAISE_VARARGS 1 4 LOAD_CONST 0 (0) 6 RETURN_VALUE >>> # The property deleter code would raise an exception as defined in MyInterface. ... dis.dis(MyImplementation.myProperty.fdel.__code__.co_code) 0 LOAD_GLOBAL 0 (0) 2 RAISE_VARARGS 1 4 LOAD_CONST 0 (0) 6 RETURN_VALUE >>> # Let's reimplement with only the property getter. ... class MyImplementation(MyInterface): ... ... # Only implement abstract property getter. ... @property ... def myProperty(self) -> str: ... return "foobar" ... >>> # Object can be created (against expectations). ... obj = MyImplementation() >>> # The property getter works as defined. ... obj.myProperty 'foobar' >>> # The property cannot be set (read-only). ... obj.myProperty = "barfoo" Traceback (most recent call last): File "<stdin>", line 2, in <module> AttributeError: can't set attribute >>> # The property cannot be deleted (read-only). ... del obj.myProperty Traceback (most recent call last): File "<stdin>", line 2, in <module> AttributeError: can't delete attribute >>> # The property getter code returns a string as defined in MyImplementation. ... type(MyImplementation.myProperty.fget) <class 'function'> >>> dis.dis(MyImplementation.myProperty.fget.__code__.co_code) 0 LOAD_CONST 1 (1) 2 RETURN_VALUE >>> # The property setter code however does not exist, although defined in MyInterface. ... type(MyImplementation.myProperty.fset) <class 'NoneType'> >>> # Nor does the property deleter code, although defined in MyInterface. ... type(MyImplementation.myProperty.fdel) <class 'NoneType'> ---------- messages: 362403 nosy: arn.vollebregt.kpn priority: normal severity: normal status: open title: Abstract property setter/deleter implementation not enforced. 
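For context, `ABCMeta` tracks abstractness per attribute *name*: overriding `myProperty` with any non-abstract descriptor — even a getter-only property — clears it from `__abstractmethods__`, which is why the getter-only subclass instantiates. A sketch of the usual workaround (reusing the names from the session above, reduced to getter and setter) is to redefine the complete property in the subclass:

```python
import abc

class MyInterface(abc.ABC):
    @property
    @abc.abstractmethod
    def myProperty(self) -> str:
        raise NotImplementedError

    @myProperty.setter
    @abc.abstractmethod
    def myProperty(self, value: str) -> None:
        raise NotImplementedError

class MyImplementation(MyInterface):
    # Redefining the whole property (getter *and* setter) keeps the
    # attribute writable; a getter-only override would satisfy ABCMeta
    # too, but silently produce a read-only property.
    @property
    def myProperty(self) -> str:
        return self._value

    @myProperty.setter
    def myProperty(self, value: str) -> None:
        self._value = value

obj = MyImplementation()
obj.myProperty = "foobar"
print(obj.myProperty)
```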
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 09:09:02 2020 From: report at bugs.python.org (Dennis Clarke) Date: Fri, 21 Feb 2020 14:09:02 +0000 Subject: [New-bugs-announce] [issue39708] final link stage in compile fails for 3.8.1 with missing CFLAGS Message-ID: <1582294142.91.0.373502818362.issue39708@roundup.psfhosted.org> New submission from Dennis Clarke : During compile after a successful configure the final link stage fails : /opt/developerstudio12.6/bin/cc -R/usr/local/lib -L/usr/local/lib -R/usr/local/lib -L/usr/local/lib -o python Programs/python.o -Wl,-R,/usr/local/lib -L. -lpython3.8d -lsocket -lnsl -lintl -lrt -ldl -lsendfile -lm -lm ld: fatal: file /opt/developerstudio12.6/lib/compilers/crti.o: wrong ELF class: ELFCLASS32 ld: fatal: file processing errors. No output written to python gmake: *** [Makefile:578: python] Error 2 real 107.96 user 100.96 sys 21.96 alpha$ Easily done manually : alpha$ $CC $CFLAGS -R/usr/local/lib -L/usr/local/lib \ > -o python Programs/python.o \ > -Wl,-R,/usr/local/lib -L.
-lpython3.8d -lsocket -lnsl -lintl -lrt -ldl -lsendfile -lm alpha$ alpha$ file python python: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped alpha$ ---------- components: Build messages: 362405 nosy: blastwave priority: normal severity: normal status: open title: final link stage in compile fails for 3.8.1 with missing CFLAGS type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 09:17:18 2020 From: report at bugs.python.org (Dennis Clarke) Date: Fri, 21 Feb 2020 14:17:18 +0000 Subject: [New-bugs-announce] [issue39709] missing CFLAGS during make tests results in test and compile failure Message-ID: <1582294638.45.0.102142275523.issue39709@roundup.psfhosted.org> New submission from Dennis Clarke : Seems to be an error in the Makefile(s) in that the "make test" can not compile some code for the correct architecture : The process seems to begin well and fine : alpha$ LD_LIBRARY_PATH=`pwd` /usr/local/bin/gmake test 2>&1 | tee ../python_3.8.1_SunOS5.10_sparc64vii+.003.test.log LD_LIBRARY_PATH=/usr/local/build/python_3.8.1_SunOS5.10_sparc64vii+.003 ./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? 
-ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi /opt/developerstudio12.6/bin/cc -c -xcode=pic32 -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -D_REENTRANT -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -I../Python-3.8.1/Include/internal -IObjects -IInclude -IPython -I. -I../Python-3.8.1/Include -I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -D_XOPEN_SOURCE=600 -I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -D_XOPEN_SOURCE=600 -xcode=pic32 -DPy_BUILD_CORE -o Modules/_math.o ../Python-3.8.1/Modules/_math.c cc: Warning: multiple use of -Q option, previous one discarded. 
LD_LIBRARY_PATH=/usr/local/build/python_3.8.1_SunOS5.10_sparc64vii+.003 CC='/opt/developerstudio12.6/bin/cc' LDSHARED='/opt/developerstudio12.6/bin/cc -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -G -R/usr/local/lib -L/usr/local/lib -R/usr/local/lib -L/usr/local/lib ' OPT='' _TCLTK_INCLUDES='' _TCLTK_LIBS='' ./python -E ../Python-3.8.1/setup.py build running build running build_ext building '_struct' extension creating build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr creating build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr/local creating build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr/local/build creating build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr/local/build/Python-3.8.1 creating build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr/local/build/Python-3.8.1/Modules . . . /opt/developerstudio12.6/bin/cc -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -G -R/usr/local/lib -L/usr/local/lib -R/usr/local/lib -L/usr/local/lib -R/usr/local/lib -L/usr/local/lib -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -D_XOPEN_SOURCE=600 build/temp.solaris-2.10-sun4u.64bit-3.8-pydebug/usr/local/build/Python-3.8.1/Modules/_uuidmodule.o -L. 
-L/usr/local/lib -R/usr/local/lib -luuid -o build/lib.solaris-2.10-sun4u.64bit-3.8-pydebug/_uuid.so cc: Warning: multiple use of -Q option, previous one discarded. *** WARNING: renaming "_curses" since importing it failed: ld.so.1: python: fatal: relocation error: file build/lib.solaris-2.10-sun4u.64bit-3.8-pydebug/_curses.so: symbol acs32map: referenced symbol not found *** WARNING: renaming "_curses_panel" since importing it failed: No module named '_curses' INFO: Could not locate ffi libs and/or headers Python build finished successfully! The necessary bits to build these optional modules were not found: _gdbm ossaudiodev To find the necessary bits, look in setup.py in detect_modules() for the module's name. The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time Failed to build these modules: _ctypes Following modules built successfully but were removed because they could not be imported: _curses _curses_panel running build_scripts creating build/scripts-3.8 copying and adjusting /usr/local/build/Python-3.8.1/Tools/scripts/pydoc3 -> build/scripts-3.8 copying and adjusting /usr/local/build/Python-3.8.1/Tools/scripts/idle3 -> build/scripts-3.8 copying and adjusting /usr/local/build/Python-3.8.1/Tools/scripts/2to3 -> build/scripts-3.8 changing mode of build/scripts-3.8/pydoc3 from 644 to 755 changing mode of build/scripts-3.8/idle3 from 644 to 755 changing mode of build/scripts-3.8/2to3 from 644 to 755 renaming build/scripts-3.8/pydoc3 to build/scripts-3.8/pydoc3.8 renaming build/scripts-3.8/idle3 to build/scripts-3.8/idle3.8 renaming build/scripts-3.8/2to3 to build/scripts-3.8/2to3-3.8 ../Python-3.8.1/install-sh -c -m 644 ../Python-3.8.1/Tools/gdb/libpython.py python-gdb.py /opt/developerstudio12.6/bin/cc -c -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g 
-xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -D_REENTRANT -std=iso9899:2011 -errfmt=error -erroff=%none -errshort=full -xstrconst -xildoff -m64 -xmemalign=8s -xnolibmil -xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy -xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -xarch=sparc -L/usr/local/lib -R/usr/local/lib -I../Python-3.8.1/Include/internal -IObjects -IInclude -IPython -I. -I../Python-3.8.1/Include -I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -D_XOPEN_SOURCE=600 -I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -D_XOPEN_SOURCE=600 -xcode=pic32 -DPy_BUILD_CORE -o Programs/_testembed.o ../Python-3.8.1/Programs/_testembed.c cc: Warning: multiple use of -Q option, previous one discarded. /opt/developerstudio12.6/bin/cc -R/usr/local/lib -L/usr/local/lib -R/usr/local/lib -L/usr/local/lib -o Programs/_testembed Programs/_testembed.o -Wl,-R,/usr/local/lib -L. -lpython3.8d -lsocket -lnsl -lintl -lrt -ldl -lsendfile -lm -lm ld: fatal: file /opt/developerstudio12.6/lib/compilers/crti.o: wrong ELF class: ELFCLASS32 ld: fatal: file processing errors. No output written to Programs/_testembed gmake: *** [Makefile:709: Programs/_testembed] Error 2 alpha$ Manual intervention required : alpha$ alpha$ $CC $CFLAGS -R/usr/local/lib -L/usr/local/lib -o Programs/_testembed \ > Programs/_testembed.o -Wl,-R,/usr/local/lib -L. -lpython3.8d \ > -lsocket -lnsl -lintl -lrt -ldl -lsendfile -lm alpha$ Then one may continue and the tests begin to run. 
---------- components: Tests messages: 362406 nosy: blastwave priority: normal severity: normal status: open title: missing CFLAGS during make tests results in test and compile failure versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 10:58:28 2020 From: report at bugs.python.org (Julien Palard) Date: Fri, 21 Feb 2020 15:58:28 +0000 Subject: [New-bugs-announce] [issue39710] "will be returned as unicode" reminiscent from Python 2 Message-ID: <1582300708.21.0.335550003251.issue39710@roundup.psfhosted.org> New submission from Julien Palard : In https://docs.python.org/3/library/calendar.html#calendar.LocaleTextCalendar I read "If this locale includes an encoding all strings containing month and weekday names will be returned as unicode." `unicode` here is a mention of the `unicode` type from Python 2 which no longer exists, so the whole sentence can just be removed. It also happens in the next paragraph, and twice in Lib/calendar.py. 
In Python 2: >>> print type(calendar.LocaleTextCalendar(locale="C").formatmonth(2020, 1)) <type 'str'> >>> print type(calendar.LocaleTextCalendar(locale="en_US.UTF8").formatmonth(2020, 1)) <type 'unicode'> In Python 3: >>> print(type(calendar.LocaleTextCalendar(locale="C").formatmonth(2020, 1))) <class 'str'> >>> print(type(calendar.LocaleTextCalendar(locale="en_US.UTF8").formatmonth(2020, 1))) <class 'str'> ---------- assignee: docs at python components: Documentation keywords: easy messages: 362410 nosy: docs at python, mdk priority: normal severity: normal status: open title: "will be returned as unicode" reminiscent from Python 2 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 11:11:52 2020 From: report at bugs.python.org (Dennis Clarke) Date: Fri, 21 Feb 2020 16:11:52 +0000 Subject: [New-bugs-announce] [issue39711] SIGBUS and core dumped during tests of 3.8.1 Message-ID: <1582301512.94.0.0314211324749.issue39711@roundup.psfhosted.org> New submission from Dennis Clarke : The testsuite fails badly with a SIGBUS thus : . . . 
0:01:37 load avg: 2.81 [ 26/423/1] test_frozen passed 0:01:40 load avg: 2.77 [ 27/423/1] test_eof passed -- running: test_importlib (31.7 sec), test_socket (31.2 sec) 0:01:41 load avg: 2.75 [ 28/423/1] test_poplib passed -- running: test_importlib (32.8 sec), test_socket (32.2 sec) 0:01:45 load avg: 2.71 [ 29/423/1] test_aifc passed -- running: test_importlib (37.2 sec), test_socket (36.7 sec) 0:01:46 load avg: 2.71 [ 30/423/1] test_unicode_file_functions passed -- running: test_importlib (37.8 sec), test_socket (37.2 sec) 0:01:51 load avg: 2.71 [ 31/423/1] test_listcomps passed -- running: test_importlib (42.9 sec), test_socket (42.4 sec) 0:01:54 load avg: 2.75 [ 32/423/1] test_asdl_parser passed -- running: test_importlib (46.0 sec), test_socket (45.4 sec) 0:01:57 load avg: 2.77 [ 33/423/1] test_richcmp passed -- running: test_importlib (49.1 sec), test_socket (48.6 sec) 0:02:01 load avg: 2.78 [ 34/423/1] test_importlib passed (48.3 sec) -- running: test_socket (51.8 sec) 0:02:04 load avg: 2.76 [ 35/423/1] test_rlcompleter passed -- running: test_socket (55.6 sec) 0:02:08 load avg: 2.75 [ 36/423/1] test_sys_setprofile passed -- running: test_socket (59.0 sec) 0:02:16 load avg: 2.72 [ 37/423/1] test_tuple passed -- running: test_socket (1 min 7 sec), test_capi (31.0 sec) 0:02:39 load avg: 2.73 [ 38/423/1] test_c_locale_coercion passed -- running: test_ast (31.4 sec), test_socket (1 min 30 sec), test_capi (53.8 sec) 0:02:44 load avg: 2.76 [ 39/423/1] test_symtable passed -- running: test_ast (36.6 sec), test_socket (1 min 35 sec), test_capi (58.9 sec) 0:03:01 load avg: 2.88 [ 40/423/1] test_ast passed (51.2 sec) -- running: test_socket (1 min 52 sec), test_capi (1 min 15 sec) 0:03:04 load avg: 2.89 [ 41/423/1] test_pow passed -- running: test_socket (1 min 55 sec), test_capi (1 min 18 sec) 0:03:19 load avg: 2.90 [ 42/423/1] test_http_cookiejar passed -- running: test_socket (2 min 10 sec), test_capi (1 min 33 sec) 0:03:21 load avg: 2.88 [ 43/423/1] test_defaultdict 
passed -- running: test_socket (2 min 12 sec), test_capi (1 min 36 sec) 0:03:24 load avg: 2.86 [ 44/423/1] test_winconsoleio skipped -- running: test_socket (2 min 15 sec), test_capi (1 min 38 sec) test_winconsoleio skipped -- test only relevant on win32 0:03:34 load avg: 2.77 [ 45/423/1] test_cprofile passed -- running: test_socket (2 min 25 sec), test_capi (1 min 48 sec) 0:03:37 load avg: 2.75 [ 46/423/1] test_nntplib passed -- running: test_socket (2 min 28 sec), test_capi (1 min 51 sec) 0:03:42 load avg: 2.74 [ 47/423/1] test_xml_dom_minicompat passed -- running: test_socket (2 min 33 sec), test_capi (1 min 56 sec) 0:03:46 load avg: 2.77 [ 48/423/1] test_pkgimport passed -- running: test_socket (2 min 37 sec), test_capi (2 min) 0:04:00 load avg: 2.80 [ 49/423/1] test_timeout passed -- running: test_socket (2 min 51 sec), test_capi (2 min 14 sec) 0:04:04 load avg: 2.78 [ 50/423/1] test_pkg passed -- running: test_socket (2 min 54 sec), test_capi (2 min 18 sec) 0:04:09 load avg: 2.79 [ 51/423/1] test_mimetypes passed -- running: test_support (34.9 sec), test_socket (2 min 59 sec), test_capi (2 min 23 sec) 0:04:18 load avg: 2.85 [ 52/423/1] test_base64 passed -- running: test_support (44.1 sec), test_socket (3 min 9 sec), test_capi (2 min 32 sec) 0:04:23 load avg: 2.88 [ 53/423/1] test_metaclass passed -- running: test_support (48.9 sec), test_socket (3 min 13 sec), test_capi (2 min 37 sec) 0:04:24 load avg: 2.89 [ 54/423/2] test_timeit crashed (Exit code 1) -- running: test_support (50.4 sec), test_socket (3 min 15 sec), test_capi (2 min 38 sec) Traceback (most recent call last): File "/usr/local/build/Python-3.8.1/Lib/runpy.py", line 193, in _run_module_as_main File "/usr/local/build/Python-3.8.1/Lib/runpy.py", line 86, in _run_code File "/usr/local/build/Python-3.8.1/Lib/test/regrtest.py", line 14, in File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/__init__.py", line 1, in File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/cmdline.py", line 4, in 
File "/usr/local/build/Python-3.8.1/Lib/test/support/__init__.py", line 6, in File "/usr/local/build/Python-3.8.1/Lib/asyncio/__init__.py", line 8, in File "/usr/local/build/Python-3.8.1/Lib/asyncio/base_events.py", line 45, in File "/usr/local/build/Python-3.8.1/Lib/asyncio/staggered.py", line 10, in File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 671, in _load_unlocked File "", line 779, in exec_module File "", line 874, in get_code File "", line 972, in get_data MemoryError Warning -- regrtest worker thread failed: Traceback (most recent call last): File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest_mp.py", line 264, in run mp_result = self._runtest(test_name) File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest_mp.py", line 229, in _runtest retcode, stdout, stderr = self._run_process(test_name) File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest_mp.py", line 174, in _run_process popen = run_test_in_subprocess(test_name, self.ns) File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest_mp.py", line 62, in run_test_in_subprocess return subprocess.Popen(cmd, File "/usr/local/build/Python-3.8.1/Lib/subprocess.py", line 854, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/local/build/Python-3.8.1/Lib/subprocess.py", line 1637, in _execute_child self.pid = _posixsubprocess.fork_exec( OSError: [Errno 12] Not enough space Kill Kill Kill == Tests result: FAILURE == 369 tests omitted: test___all__ test___future__ test__locale test__opcode test__osx_support test__xxsubinterpreters test_abc test_abstract_numbers test_argparse test_array test_asyncgen test_asynchat test_asyncio test_asyncore test_atexit test_audioop test_audit test_augassign test_baseexception test_bdb test_bigaddrspace test_binascii test_binhex test_binop test_bisect test_bool test_buffer test_bufio test_builtin test_bytes test_bz2 test_calendar test_call test_capi test_cgi 
test_charmapcodec test_class test_clinic test_cmath test_cmd test_cmd_line test_cmd_line_script test_code_module test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_iso2022 test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_codecs test_codeop test_collections test_colorsys test_compare test_compile test_compileall test_concurrent_futures test_configparser test_contains test_context test_contextlib test_contextlib_async test_copy test_copyreg test_coroutines test_crashers test_crypt test_csv test_ctypes test_curses test_dataclasses test_datetime test_dbm test_dbm_dumb test_dbm_gnu test_dbm_ndbm test_decimal test_decorators test_deque test_descr test_devpoll test_dict_version test_dictcomps test_dictviews test_difflib test_distutils test_doctest test_doctest2 test_docxmlrpc test_dtrace test_dummy_thread test_dummy_threading test_dynamic test_dynamicclassattribute test_eintr test_email test_embed test_enum test_enumerate test_epoll test_exception_hierarchy test_exception_variations test_exceptions test_extcall test_faulthandler test_fcntl test_file test_file_eintr test_filecmp test_fileinput test_fileio test_finalization test_float test_flufl test_fnmatch test_fork1 test_format test_fractions test_frame test_fstring test_ftplib test_functools test_future test_future3 test_future4 test_future5 test_gc test_gdb test_generator_stop test_generators test_genericclass test_genericpath test_genexps test_getargs2 test_getopt test_gettext test_glob test_global test_grammar test_grp test_gzip test_hash test_heapq test_hmac test_html test_htmlparser test_http_cookies test_httplib test_httpservers test_idle test_imaplib test_imghdr test_imp test_import test_index test_inspect test_int test_int_literal test_io test_ioctl test_ipaddress test_isinstance test_iter test_iterlen test_itertools test_json test_keyword test_keywordonlyarg 
test_kqueue test_largefile test_lib2to3 test_linecache test_list test_lltrace test_locale test_logging test_long test_lzma test_mailbox test_mailcap test_marshal test_math test_memoryio test_memoryview test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multiprocessing_fork test_multiprocessing_forkserver test_multiprocessing_main_handling test_multiprocessing_spawn test_named_expressions test_netrc test_nis test_normalization test_ntpath test_numeric_tower test_opcodes test_openpty test_ordered_dict test_os test_ossaudiodev test_osx_env test_parser test_pathlib test_pdb test_peepholer test_pickle test_picklebuffer test_pickletools test_pipes test_pkgutil test_platform test_plistlib test_poll test_popen test_positional_only_arg test_posix test_posixpath test_pprint test_print test_profile test_property test_pstats test_pty test_pulldom test_pwd test_py_compile test_pyclbr test_pydoc test_pyexpat test_queue test_quopri test_raise test_random test_range test_re test_readline test_regrtest test_repl test_reprlib test_resource test_robotparser test_sax test_sched test_scope test_secrets test_select test_selectors test_setcomps test_shelve test_shutil test_signal test_site test_slice test_smtpd test_smtplib test_smtpnet test_sndhdr test_socket test_socketserver test_sort test_source_encoding test_sqlite test_ssl test_startfile test_statistics test_strftime test_string test_string_literals test_stringprep test_strptime test_strtod test_struct test_structmembers test_structseq test_subclassinit test_subprocess test_sunau test_sundry test_super test_support test_symbol test_syntax test_sys test_sys_settrace test_sysconfig test_syslog test_tabnanny test_tarfile test_tcl test_telnetlib test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading test_threading_local test_threadsignals test_time test_tix test_tk test_tokenize test_tools test_trace test_traceback test_tracemalloc test_ttk_guionly test_ttk_textonly test_turtle 
test_type_comments test_typechecks test_typing test_ucn test_unary test_unicode test_unicode_file test_unicode_identifiers test_unicodedata test_unittest test_univnewlines test_unpack test_unpack_ex test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib_response test_urllibnet test_urlparse test_userdict test_userlist test_userstring test_utf8_mode test_utf8source test_uu test_venv test_wait3 test_wait4 test_warnings test_wave test_weakref test_weakset test_webbrowser test_winreg test_winsound test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmlrpc test_xmlrpc_net test_xxtestfuzz test_yield_from test_zipapp test_zipfile test_zipfile64 test_zipimport test_zipimport_support test_zlib 50 tests OK. 2 tests failed: test_hashlib test_timeit 2 tests skipped: test_msilib test_winconsoleio 0:04:25 load avg: 2.89 0:04:25 load avg: 2.89 Re-running failed tests in verbose mode 0:04:25 load avg: 2.89 Re-running test_hashlib in verbose mode test_algorithms_available (test.test_hashlib.HashLibTestCase) ... ok test_algorithms_guaranteed (test.test_hashlib.HashLibTestCase) ... ok test_blake2b (test.test_hashlib.HashLibTestCase) ... 
Fatal Python error: Bus error Current thread 0x0000000000000001 (most recent call first): File "/usr/local/build/Python-3.8.1/Lib/test/test_hashlib.py", line 570 in check_blake2 File "/usr/local/build/Python-3.8.1/Lib/test/test_hashlib.py", line 647 in test_blake2b File "/usr/local/build/Python-3.8.1/Lib/unittest/case.py", line 633 in _callTestMethod File "/usr/local/build/Python-3.8.1/Lib/unittest/case.py", line 676 in run File "/usr/local/build/Python-3.8.1/Lib/unittest/case.py", line 736 in __call__ File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 122 in run File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 84 in __call__ File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 122 in run File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 84 in __call__ File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 122 in run File "/usr/local/build/Python-3.8.1/Lib/unittest/suite.py", line 84 in __call__ File "/usr/local/build/Python-3.8.1/Lib/unittest/runner.py", line 176 in run File "/usr/local/build/Python-3.8.1/Lib/test/support/__init__.py", line 2030 in _run_suite File "/usr/local/build/Python-3.8.1/Lib/test/support/__init__.py", line 2126 in run_unittest File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest.py", line 209 in _test_module File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest.py", line 153 in _runtest File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/runtest.py", line 193 in runtest File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/main.py", line 318 in rerun_failed_tests File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/main.py", line 691 in _main File "/usr/local/build/Python-3.8.1/Lib/test/libregrtest/main.py", line 634 in main File 
"/usr/local/build/Python-3.8.1/Lib/test/libregrtest/main.py", line 712 in main File "/usr/local/build/Python-3.8.1/Lib/test/__main__.py", line 2 in File "/usr/local/build/Python-3.8.1/Lib/runpy.py", line 86 in _run_code File "/usr/local/build/Python-3.8.1/Lib/runpy.py", line 193 in _run_module_as_main Bus Error - core dumped gmake: *** [Makefile:1130: test] Error 138 alpha$ alpha$ LD_LIBRARY_PATH=`pwd` dbx ./python time_1582294871-pid_2858-uid_16411-gid_20002-fid_python.per-process-core Reading python core file header read successfully Reading ld.so.1 Reading libpython3.8d.so.1.0 Reading libsocket.so.1 Reading libnsl.so.1 Reading libintl.so.8.1.6 Reading librt.so.1 Reading libdl.so.1 Reading libsendfile.so.1 Reading libm.so.2 Reading libc.so.1 Reading libiconv.so.2.6.1 Reading libaio.so.1 Reading libmd.so.1 Reading libc_psr.so.1 Reading en_US.UTF-8.so.3 Reading methods_unicode.so.3 Reading _heapq.so Reading zlib.so Reading libz.so.1.2.11 Reading libmp.so.2 Reading libscf.so.1 Reading libdoor.so.1 Reading libuutil.so.1 Reading libgen.so.1 Reading _bz2.so Reading _lzma.so Reading liblzma.so.5.2.4 Reading libpthread.so.1 Reading grp.so Reading _socket.so Reading math.so Reading select.so Reading _posixsubprocess.so Reading _ssl.so Reading libssl.so.1.1 Reading libcrypto.so.1.1 Reading _struct.so Reading binascii.so Reading _opcode.so Reading _contextvars.so Reading _asyncio.so Reading _hashlib.so Reading _blake2.so Reading _sha3.so Reading _pickle.so Reading _queue.so Reading _datetime.so Reading _bisect.so Reading _sha512.so Reading _random.so Reading _elementtree.so Reading pyexpat.so Reading array.so Reading resource.so Reading _multiprocessing.so Reading _json.so Reading _md5.so Reading _sha1.so Reading _sha256.so t at 1 (l at 1) program terminated by signal BUS (Bus Error) 0xffffffff7d3dccbc: __lwp_kill+0x0008: bcc,a,pt %icc,__lwp_kill+0x18 ! 
0xffffffff7d3dcccc Current function is faulthandler_fatal_error 361 raise(signum); (dbx) where current thread: t at 1 [1] __lwp_kill(0x0, 0xa, 0x10010bc20, 0xffffffff7e9c2190, 0xffffffff7d700200, 0x0), at 0xffffffff7d3dccbc [2] raise(0xa, 0xffffffff7d54f7b4, 0x0, 0xffffffff7ea8f985, 0xa, 0x80808080), at 0xffffffff7d3744d4 =>[3] faulthandler_fatal_error(signum = 10), line 361 in "faulthandler.c" [4] __sighndlr(0xa, 0x0, 0x10015a020, 0xffffffff7e9c1f30, 0x0, 0x0), at 0xffffffff7d3d8d6c ---- called from signal handler with signal 10 (SIGBUS) ------ [5] blake2b_increment_counter(S = 0xffffffff7ffc304a, inc = 0), line 77 in "blake2b-ref.c" [6] PyBlake2_blake2b_final(S = 0xffffffff7ffc304a, out = 0xffffffff7ffc31b0 "????w?? ????w??^P????wq\x9b?????|P", outlen = 64U), line 339 in "blake2b-ref.c" [7] _blake2_blake2b_hexdigest_impl(self = 0xffffffff71535220), line 332 in "blake2b_impl.c" [8] _blake2_blake2b_hexdigest(self = 0xffffffff71535220, _unused_ignored = (nil)), line 262 in "blake2b_impl.c.h" [9] method_vectorcall_NOARGS(func = 0xffffffff77a1e830, args = 0x10046a118, nargsf = 9223372036854775809U, kwnames = (nil)), line 393 in "descrobject.c" [10] _PyObject_Vectorcall(callable = 0xffffffff77a1e830, args = 0x10046a118, nargsf = 9223372036854775809U, kwnames = (nil)), line 127 in "abstract.h" [11] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffc5000, oparg = 1, kwnames = (nil)), line 4987 in "ceval.c" [12] _PyEval_EvalFrameDefault(f = 0x100469f40, throwflag = 0), line 3486 in "ceval.c" [13] PyEval_EvalFrameEx(f = 0x100469f40, throwflag = 0), line 741 in "ceval.c" [14] function_code_fastcall(co = 0xffffffff72d19ee0, args = 0xffffffff715327e0, nargs = 7, globals = 0xffffffff72d03bf0), line 283 in "call.c" [15] _PyFunction_Vectorcall(func = 0xffffffff72b12190, stack = 0xffffffff715327a8, nargsf = 9223372036854775815U, kwnames = (nil)), line 410 in "call.c" [16] _PyObject_Vectorcall(callable = 0xffffffff72b12190, args = 0xffffffff715327a8, nargsf = 
9223372036854775815U, kwnames = (nil)), line 127 in "abstract.h" [17] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffc7130, oparg = 7, kwnames = (nil)), line 4987 in "ceval.c" [18] _PyEval_EvalFrameDefault(f = 0xffffffff71532620, throwflag = 0), line 3486 in "ceval.c" [19] PyEval_EvalFrameEx(f = 0xffffffff71532620, throwflag = 0), line 741 in "ceval.c" [20] function_code_fastcall(co = 0xffffffff72d0b1e0, args = 0xffffffff7152fbf0, nargs = 1, globals = 0xffffffff72d03bf0), line 283 in "call.c" [21] _PyFunction_Vectorcall(func = 0xffffffff72b122d0, stack = 0xffffffff7152fbe8, nargsf = 1U, kwnames = (nil)), line 410 in "call.c" [22] _PyObject_Vectorcall(callable = 0xffffffff72b122d0, args = 0xffffffff7152fbe8, nargsf = 1U, kwnames = (nil)), line 127 in "abstract.h" [23] method_vectorcall(method = 0xffffffff72805650, args = 0xffffffff7152fbf0, nargsf = 9223372036854775808U, kwnames = (nil)), line 60 in "classobject.c" [24] _PyObject_Vectorcall(callable = 0xffffffff72805650, args = 0xffffffff7152fbf0, nargsf = 9223372036854775808U, kwnames = (nil)), line 127 in "abstract.h" [25] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffc94e0, oparg = 0, kwnames = (nil)), line 4987 in "ceval.c" [26] _PyEval_EvalFrameDefault(f = 0xffffffff7152fa70, throwflag = 0), line 3500 in "ceval.c" [27] PyEval_EvalFrameEx(f = 0xffffffff7152fa70, throwflag = 0), line 741 in "ceval.c" [28] function_code_fastcall(co = 0xffffffff73915520, args = 0x10048e990, nargs = 2, globals = 0xffffffff73902290), line 283 in "call.c" [29] _PyFunction_Vectorcall(func = 0xffffffff73816730, stack = 0x10048e980, nargsf = 9223372036854775810U, kwnames = (nil)), line 410 in "call.c" [30] _PyObject_Vectorcall(callable = 0xffffffff73816730, args = 0x10048e980, nargsf = 9223372036854775810U, kwnames = (nil)), line 127 in "abstract.h" [31] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffcb610, oparg = 2, kwnames = (nil)), line 4987 in "ceval.c" [32] _PyEval_EvalFrameDefault(f = 
0x10048e7a0, throwflag = 0), line 3486 in "ceval.c" [33] PyEval_EvalFrameEx(f = 0x10048e7a0, throwflag = 0), line 741 in "ceval.c" [34] _PyEval_EvalCodeWithName(_co = 0xffffffff73915790, globals = 0xffffffff73902290, locals = (nil), args = 0xffffffff7ffcbf30, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffcbf40, kwcount = 0, kwstep = 1, defs = 0xffffffff7391e838, defcount = 1, kwdefs = (nil), closure = (nil), name = 0xffffffff7c412450, qualname = 0xffffffff73914d60), line 4298 in "ceval.c" [35] _PyFunction_Vectorcall(func = 0xffffffff73816910, stack = 0xffffffff7ffcbf30, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [36] _PyObject_Vectorcall(callable = 0xffffffff73816910, args = 0xffffffff7ffcbf30, nargsf = 2U, kwnames = (nil)), line 127 in "abstract.h" [37] method_vectorcall(method = 0xffffffff7b5162f0, args = 0xffffffff72b0a068, nargsf = 1U, kwnames = (nil)), line 89 in "classobject.c" [38] PyVectorcall_Call(callable = 0xffffffff7b5162f0, tuple = 0xffffffff72b0a050, kwargs = 0xffffffff72b16050), line 199 in "call.c" [39] PyObject_Call(callable = 0xffffffff7b5162f0, args = 0xffffffff72b0a050, kwargs = 0xffffffff72b16050), line 227 in "call.c" [40] do_call_core(tstate = 0x10010bc20, func = 0xffffffff7b5162f0, callargs = 0xffffffff72b0a050, kwdict = 0xffffffff72b16050), line 5034 in "ceval.c" [41] _PyEval_EvalFrameDefault(f = 0xffffffff71531220, throwflag = 0), line 3559 in "ceval.c" [42] PyEval_EvalFrameEx(f = 0xffffffff71531220, throwflag = 0), line 741 in "ceval.c" [43] _PyEval_EvalCodeWithName(_co = 0xffffffff73915a00, globals = 0xffffffff73902290, locals = (nil), args = 0xffffffff7ffce700, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffce710, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0xffffffff7c501340, qualname = 0xffffffff73914f40), line 4298 in "ceval.c" [44] _PyFunction_Vectorcall(func = 0xffffffff73816af0, stack = 0xffffffff7ffce700, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" 
[45] _PyObject_FastCallDict(callable = 0xffffffff73816af0, args = 0xffffffff7ffce700, nargsf = 2U, kwargs = (nil)), line 96 in "call.c" [46] _PyObject_Call_Prepend(callable = 0xffffffff73816af0, obj = 0xffffffff72b175a0, args = 0xffffffff72b0a4b0, kwargs = (nil)), line 889 in "call.c" [47] slot_tp_call(self = 0xffffffff72b175a0, args = 0xffffffff72b0a4b0, kwds = (nil)), line 6521 in "typeobject.c" [48] _PyObject_MakeTpCall(callable = 0xffffffff72b175a0, args = 0xffffffff7152ed98, nargs = 1, keywords = (nil)), line 159 in "call.c" [49] _PyObject_Vectorcall(callable = 0xffffffff72b175a0, args = 0xffffffff7152ed98, nargsf = 9223372036854775809U, kwnames = (nil)), line 125 in "abstract.h" [50] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffd05a0, oparg = 1, kwnames = (nil)), line 4987 in "ceval.c" [51] _PyEval_EvalFrameDefault(f = 0xffffffff7152ebf0, throwflag = 0), line 3500 in "ceval.c" [52] PyEval_EvalFrameEx(f = 0xffffffff7152ebf0, throwflag = 0), line 741 in "ceval.c" [53] _PyEval_EvalCodeWithName(_co = 0xffffffff73926930, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffd0ec0, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffd0ed0, kwcount = 0, kwstep = 1, defs = 0xffffffff738198d8, defcount = 1, kwdefs = (nil), closure = (nil), name = 0xffffffff7c412450, qualname = 0xffffffff7381adc0), line 4298 in "ceval.c" [54] _PyFunction_Vectorcall(func = 0xffffffff7381cb90, stack = 0xffffffff7ffd0ec0, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [55] _PyObject_Vectorcall(callable = 0xffffffff7381cb90, args = 0xffffffff7ffd0ec0, nargsf = 2U, kwnames = (nil)), line 127 in "abstract.h" [56] method_vectorcall(method = 0xffffffff79c0d6b0, args = 0xffffffff72616108, nargsf = 1U, kwnames = (nil)), line 89 in "classobject.c" [57] PyVectorcall_Call(callable = 0xffffffff79c0d6b0, tuple = 0xffffffff726160f0, kwargs = 0xffffffff7152acb0), line 199 in "call.c" [58] PyObject_Call(callable = 0xffffffff79c0d6b0, args = 0xffffffff726160f0, kwargs = 
0xffffffff7152acb0), line 227 in "call.c" [59] do_call_core(tstate = 0x10010bc20, func = 0xffffffff79c0d6b0, callargs = 0xffffffff726160f0, kwdict = 0xffffffff7152acb0), line 5034 in "ceval.c" [60] _PyEval_EvalFrameDefault(f = 0xffffffff72822d00, throwflag = 0), line 3559 in "ceval.c" [61] PyEval_EvalFrameEx(f = 0xffffffff72822d00, throwflag = 0), line 741 in "ceval.c" [62] _PyEval_EvalCodeWithName(_co = 0xffffffff7390ce10, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffd3690, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffd36a0, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0xffffffff7c501340, qualname = 0xffffffff7381aa60), line 4298 in "ceval.c" [63] _PyFunction_Vectorcall(func = 0xffffffff7381ca50, stack = 0xffffffff7ffd3690, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [64] _PyObject_FastCallDict(callable = 0xffffffff7381ca50, args = 0xffffffff7ffd3690, nargsf = 2U, kwargs = (nil)), line 96 in "call.c" [65] _PyObject_Call_Prepend(callable = 0xffffffff7381ca50, obj = 0xffffffff72b0aaa0, args = 0xffffffff76e21dc0, kwargs = (nil)), line 889 in "call.c" [66] slot_tp_call(self = 0xffffffff72b0aaa0, args = 0xffffffff76e21dc0, kwds = (nil)), line 6521 in "typeobject.c" [67] _PyObject_MakeTpCall(callable = 0xffffffff72b0aaa0, args = 0xffffffff7152eba8, nargs = 1, keywords = (nil)), line 159 in "call.c" [68] _PyObject_Vectorcall(callable = 0xffffffff72b0aaa0, args = 0xffffffff7152eba8, nargsf = 9223372036854775809U, kwnames = (nil)), line 125 in "abstract.h" [69] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffd5530, oparg = 1, kwnames = (nil)), line 4987 in "ceval.c" [70] _PyEval_EvalFrameDefault(f = 0xffffffff7152ea00, throwflag = 0), line 3500 in "ceval.c" [71] PyEval_EvalFrameEx(f = 0xffffffff7152ea00, throwflag = 0), line 741 in "ceval.c" [72] _PyEval_EvalCodeWithName(_co = 0xffffffff73926930, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffd5e50, argcount 
= 2, kwnames = (nil), kwargs = 0xffffffff7ffd5e60, kwcount = 0, kwstep = 1, defs = 0xffffffff738198d8, defcount = 1, kwdefs = (nil), closure = (nil), name = 0xffffffff7c412450, qualname = 0xffffffff7381adc0), line 4298 in "ceval.c" [73] _PyFunction_Vectorcall(func = 0xffffffff7381cb90, stack = 0xffffffff7ffd5e50, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [74] _PyObject_Vectorcall(callable = 0xffffffff7381cb90, args = 0xffffffff7ffd5e50, nargsf = 2U, kwnames = (nil)), line 127 in "abstract.h" [75] method_vectorcall(method = 0xffffffff72c18410, args = 0xffffffff77c16978, nargsf = 1U, kwnames = (nil)), line 89 in "classobject.c" [76] PyVectorcall_Call(callable = 0xffffffff72c18410, tuple = 0xffffffff77c16960, kwargs = 0xffffffff7152abf0), line 199 in "call.c" [77] PyObject_Call(callable = 0xffffffff72c18410, args = 0xffffffff77c16960, kwargs = 0xffffffff7152abf0), line 227 in "call.c" [78] do_call_core(tstate = 0x10010bc20, func = 0xffffffff72c18410, callargs = 0xffffffff77c16960, kwdict = 0xffffffff7152abf0), line 5034 in "ceval.c" [79] _PyEval_EvalFrameDefault(f = 0xffffffff72822b30, throwflag = 0), line 3559 in "ceval.c" [80] PyEval_EvalFrameEx(f = 0xffffffff72822b30, throwflag = 0), line 741 in "ceval.c" [81] _PyEval_EvalCodeWithName(_co = 0xffffffff7390ce10, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffd8620, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffd8630, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0xffffffff7c501340, qualname = 0xffffffff7381aa60), line 4298 in "ceval.c" [82] _PyFunction_Vectorcall(func = 0xffffffff7381ca50, stack = 0xffffffff7ffd8620, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [83] _PyObject_FastCallDict(callable = 0xffffffff7381ca50, args = 0xffffffff7ffd8620, nargsf = 2U, kwargs = (nil)), line 96 in "call.c" [84] _PyObject_Call_Prepend(callable = 0xffffffff7381ca50, obj = 0xffffffff78e17280, args = 0xffffffff77d30230, kwargs = (nil)), line 
889 in "call.c" [85] slot_tp_call(self = 0xffffffff78e17280, args = 0xffffffff77d30230, kwds = (nil)), line 6521 in "typeobject.c" [86] _PyObject_MakeTpCall(callable = 0xffffffff78e17280, args = 0xffffffff7152e9b8, nargs = 1, keywords = (nil)), line 159 in "call.c" [87] _PyObject_Vectorcall(callable = 0xffffffff78e17280, args = 0xffffffff7152e9b8, nargsf = 9223372036854775809U, kwnames = (nil)), line 125 in "abstract.h" [88] call_function(tstate = 0x10010bc20, pp_stack = 0xffffffff7ffda4c0, oparg = 1, kwnames = (nil)), line 4987 in "ceval.c" [89] _PyEval_EvalFrameDefault(f = 0xffffffff7152e810, throwflag = 0), line 3500 in "ceval.c" [90] PyEval_EvalFrameEx(f = 0xffffffff7152e810, throwflag = 0), line 741 in "ceval.c" [91] _PyEval_EvalCodeWithName(_co = 0xffffffff73926930, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffdade0, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffdadf0, kwcount = 0, kwstep = 1, defs = 0xffffffff738198d8, defcount = 1, kwdefs = (nil), closure = (nil), name = 0xffffffff7c412450, qualname = 0xffffffff7381adc0), line 4298 in "ceval.c" [92] _PyFunction_Vectorcall(func = 0xffffffff7381cb90, stack = 0xffffffff7ffdade0, nargsf = 2U, kwnames = (nil)), line 441 in "call.c" [93] _PyObject_Vectorcall(callable = 0xffffffff7381cb90, args = 0xffffffff7ffdade0, nargsf = 2U, kwnames = (nil)), line 127 in "abstract.h" [94] method_vectorcall(method = 0xffffffff76d07c50, args = 0xffffffff76e27ec8, nargsf = 1U, kwnames = (nil)), line 89 in "classobject.c" [95] PyVectorcall_Call(callable = 0xffffffff76d07c50, tuple = 0xffffffff76e27eb0, kwargs = 0xffffffff7152ab30), line 199 in "call.c" [96] PyObject_Call(callable = 0xffffffff76d07c50, args = 0xffffffff76e27eb0, kwargs = 0xffffffff7152ab30), line 227 in "call.c" [97] do_call_core(tstate = 0x10010bc20, func = 0xffffffff76d07c50, callargs = 0xffffffff76e27eb0, kwdict = 0xffffffff7152ab30), line 5034 in "ceval.c" [98] _PyEval_EvalFrameDefault(f = 0xffffffff72822960, throwflag = 0), line 
3559 in "ceval.c" [99] PyEval_EvalFrameEx(f = 0xffffffff72822960, throwflag = 0), line 741 in "ceval.c" [100] _PyEval_EvalCodeWithName(_co = 0xffffffff7390ce10, globals = 0xffffffff739020b0, locals = (nil), args = 0xffffffff7ffdd5b0, argcount = 2, kwnames = (nil), kwargs = 0xffffffff7ffdd5c0, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0xffffffff7c501340, qualname = 0xffffffff7381aa60), line 4298 in "ceval.c" (dbx) regs current thread: t at 1 current frame: [3] g0-g1 0x0000000000000000 0x00000000000000a3 g2-g3 0x0000000000000000 0x0000000000000000 g4-g5 0xffffffff7e9c1f30 0xffffffffff7fffff g6-g7 0x0000000000000000 0xffffffff7d700200 o0-o1 0x000000000000000a 0xffffffff7d54f7b4 o2-o3 0x0000000000000000 0xffffffff7ea8f985 o4-o5 0x000000000000000a 0x0000000080808080 o6-o7 0x00000001001593d1 0xffffffff7e9c2190 l0-l1 0xffffffff7eb915d8 0x0000000000000000 l2-l3 0x0000000000000000 0x0000000000000000 l4-l5 0x0000000000000000 0x0000000000000000 l6-l7 0x0000000000000000 0x0000000000000000 i0-i1 0x000000000000000a 0x0000000000000000 i2-i3 0x000000010015a020 0x0000000040000000 i4-i5 0xcdcdcdcdcdcdcdcd 0xcdcdcdcdcdcdcdcd i6-i7 0x00000001001594d1 0xffffffff7d3d8d6c y 0x0000000000000000 ccr 0x0000000000000098 pc 0xffffffff7e9c2190:faulthandler_fatal_error+0x260 call _PROCEDURE_LINKAGE_TABLE_+0xc440 [PLT] ! 0xffffffff7eba5440 npc 0xffffffff7d3dccc0:__lwp_kill+0xc clr %o0 (dbx) quit alpha$ At this time version 3.8.1 looks to not be portable to big endian risc architectures such as Solaris 10 on Fujitsu SPARC-VII+ and perhaps other systems. I can try on IBM 64-bit Power with FreeBSD and see what happens. 
---------- components: Tests messages: 362411 nosy: blastwave priority: normal severity: normal status: open title: SIGBUS and core dumped during tests of 3.8.1 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 11:57:39 2020 From: report at bugs.python.org (Antoine Pitrou) Date: Fri, 21 Feb 2020 16:57:39 +0000 Subject: [New-bugs-announce] [issue39712] Doc for `-X dev` option should mention PYTHONDEVMODE Message-ID: <1582304259.52.0.96230176724.issue39712@roundup.psfhosted.org> New submission from Antoine Pitrou : In the doc for `-X` options (https://docs.python.org/3/using/cmdline.html#id5), when an option can be triggered through an equivalent environment variable, that variable is mentioned. An exception to that is `-X dev`, which can also be triggered by the PYTHONDEVMODE variable, but that isn't mentioned in the CLI doc. Other missing environment variables in the CLI doc are PYTHONFAULTHANDLER and PYTHONTRACEMALLOC. ---------- assignee: docs at python components: Documentation keywords: easy, newcomer friendly messages: 362414 nosy: docs at python, eric.araujo, ezio.melotti, mdk, pitrou, vstinner, willingc priority: normal severity: normal stage: needs patch status: open title: Doc for `-X dev` option should mention PYTHONDEVMODE type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 12:43:03 2020 From: report at bugs.python.org (Ananth Vijalapuram) Date: Fri, 21 Feb 2020 17:43:03 +0000 Subject: [New-bugs-announce] [issue39713] ElementTree limitation Message-ID: <1582306983.7.0.277512488596.issue39713@roundup.psfhosted.org> New submission from Ananth Vijalapuram : I am trying to parse a very large XML file. 
Here is the output:

/usr/intel/bin/python3.7.4 crif_parser.py
Retrieved 3593891712 characters <- this is printed from my script
Traceback (most recent call last):
  File "crif_parser.py", line 9, in
    tree = ET.fromstring(data)
  File "/usr/intel/pkgs/python3/3.7.4/lib/python3.7/xml/etree/ElementTree.py", line 1315, in XML
    parser.feed(text)
OverflowError: size does not fit in an int

---------- components: XML messages: 362416 nosy: Ananth Vijalapuram priority: normal severity: normal status: open title: ElementTree limitation type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 12:59:58 2020 From: report at bugs.python.org (Ananth Vijalapuram) Date: Fri, 21 Feb 2020 17:59:58 +0000 Subject: [New-bugs-announce] [issue39714] ElementTree limitation Message-ID: <1582307998.92.0.808661201788.issue39714@roundup.psfhosted.org> New submission from Ananth Vijalapuram : I am trying to parse a very large XML file. 
Here is the output:

python3.7.4 crif_parser.py
Retrieved 3593891712 characters <- this is printed from my script
Traceback (most recent call last):
  File "crif_parser.py", line 9, in
    tree = ET.fromstring(data)
  File "python3/3.7.4/lib/python3.7/xml/etree/ElementTree.py", line 1315, in XML
    parser.feed(text)
OverflowError: size does not fit in an int

---------- components: XML messages: 362418 nosy: Ananth Vijalapuram priority: normal severity: normal status: open title: ElementTree limitation type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 13:05:03 2020 From: report at bugs.python.org (Ram Rachum) Date: Fri, 21 Feb 2020 18:05:03 +0000 Subject: [New-bugs-announce] [issue39715] Implement __repr__ methods for AST classes Message-ID: <1582308303.42.0.673674481243.issue39715@roundup.psfhosted.org> New submission from Ram Rachum : I was playing with the `ast` library today, and it's frustrating to see objects like these:

[<_ast.Import object at 0x00000000033FB048>,
 <_ast.Import object at 0x00000000033FB0F0>,
 <_ast.ImportFrom object at 0x00000000033FB160>,
 <_ast.Import object at 0x00000000033FB1D0>,
 <_ast.Assign object at 0x00000000033FB240>,
 <_ast.If object at 0x00000000033FB630>]

A little bit more information about each object in the `__repr__` would make this module much easier to work with. 
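For readers hitting the same opaque reprs today: `ast.dump()` already renders the full node structure and can serve as a readable substitute while the default `__repr__` remains uninformative.

```python
import ast

tree = ast.parse("import os\nx = 1")
for node in tree.body:
    # ast.dump() shows the node type and all of its fields,
    # unlike the default "<_ast.Import object at 0x...>" repr.
    print(ast.dump(node))
```

The exact field rendering varies slightly between Python versions, but each line names the node class and its children.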
---------- components: Library (Lib) messages: 362419 nosy: cool-RR priority: normal severity: normal status: open title: Implement __repr__ methods for AST classes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 13:20:54 2020 From: report at bugs.python.org (Antony Lee) Date: Fri, 21 Feb 2020 18:20:54 +0000 Subject: [New-bugs-announce] [issue39716] argparse.ArgumentParser does not raise on duplicated subparsers, even though it does on duplicated flags Message-ID: <1582309254.51.0.588412851994.issue39716@roundup.psfhosted.org> New submission from Antony Lee : If one tries to add the same flag to an ArgumentParser twice, one gets a helpful exception:

from argparse import ArgumentParser
p = ArgumentParser()
p.add_argument("--foo")
p.add_argument("--foo")

results in

argparse.ArgumentError: argument --foo: conflicting option string: --foo

However, adding the same subparser twice raises no exception:

from argparse import ArgumentParser
p = ArgumentParser()
sp = p.add_subparsers()
sp.add_parser("foo")
sp.add_parser("foo")

even though the two subparsers shadow one another in the same way as two identical flags. 
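Until argparse enforces this itself, a caller can guard against duplicate subcommand names. A minimal sketch (the helper name below is invented for illustration), relying on the `choices` mapping that the subparsers action maintains:

```python
from argparse import ArgumentParser

def add_parser_strict(subparsers, name, **kwargs):
    # Hypothetical helper: refuse to register a subcommand name twice,
    # mirroring the ArgumentError raised for duplicate option strings.
    if name in subparsers.choices:
        raise ValueError(f"conflicting subparser: {name}")
    return subparsers.add_parser(name, **kwargs)

p = ArgumentParser()
sp = p.add_subparsers()
add_parser_strict(sp, "foo")
try:
    add_parser_strict(sp, "foo")
except ValueError as exc:
    print(exc)  # prints: conflicting subparser: foo
```

`choices` is the standard attribute argparse actions use to track valid values, and for a subparsers action it maps registered subcommand names (and aliases) to their parsers.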
---------- components: Library (Lib) messages: 362421 nosy: Antony.Lee priority: normal severity: normal status: open title: argparse.ArgumentParser does not raise on duplicated subparsers, even though it does on duplicated flags versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 14:20:01 2020 From: report at bugs.python.org (Ram Rachum) Date: Fri, 21 Feb 2020 19:20:01 +0000 Subject: [New-bugs-announce] [issue39717] Fix exception causes in tarfile module Message-ID: <1582312801.89.0.445859390391.issue39717@roundup.psfhosted.org> Change by Ram Rachum : ---------- components: Library (Lib) nosy: cool-RR priority: normal pull_requests: 17962 severity: normal status: open title: Fix exception causes in tarfile module type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 17:01:40 2020 From: report at bugs.python.org (Shantanu) Date: Fri, 21 Feb 2020 22:01:40 +0000 Subject: [New-bugs-announce] [issue39718] TYPE_IGNORE, COLONEQUAL missing from py38 changes in token docs Message-ID: <1582322500.81.0.452498925494.issue39718@roundup.psfhosted.org> New submission from Shantanu : Changed in version 3.8 section of https://docs.python.org/3/library/token.html should mention the addition of TYPE_IGNORE and COLONEQUAL ---------- assignee: docs at python components: Documentation messages: 362436 nosy: docs at python, hauntsaninja priority: normal severity: normal status: open title: TYPE_IGNORE, COLONEQUAL missing from py38 changes in token docs _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 17:32:08 2020 From: report at bugs.python.org (Shantanu) Date: Fri, 21 Feb 2020 22:32:08 +0000 Subject: [New-bugs-announce] [issue39719] tempfile.SpooledTemporaryFile still has 
softspace property Message-ID: <1582324328.38.0.787053917409.issue39719@roundup.psfhosted.org> New submission from Shantanu : The softspace attribute of files was removed in Python 3 (mentioned in https://raw.githubusercontent.com/python/cpython/master/Misc/HISTORY). However, tempfile.SpooledTemporaryFile still has a softspace property that attempts to read the softspace attribute of the underlying file.

```
In [23]: t = tempfile.SpooledTemporaryFile()

In [24]: t.softspace
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in
----> 1 t.softspace

/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/tempfile.py in softspace(self)
    749     @property
    750     def softspace(self):
--> 751         return self._file.softspace
    752
    753     def tell(self):

AttributeError: '_io.BytesIO' object has no attribute 'softspace'
```

---------- messages: 362437 nosy: hauntsaninja priority: normal severity: normal status: open title: tempfile.SpooledTemporaryFile still has softspace property _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 17:41:33 2020 From: report at bugs.python.org (Frazer McLean) Date: Fri, 21 Feb 2020 22:41:33 +0000 Subject: [New-bugs-announce] [issue39720] Signature.bind TypeErrors could be more helpful Message-ID: <1582324893.03.0.384494522706.issue39720@roundup.psfhosted.org> New submission from Frazer McLean : Signature.bind does not tell you if a missing argument is keyword-only, for example. I created the following snippet to examine the differences:

import inspect

def run(f):
    try:
        f()
    except TypeError as exc:
        print(exc.args[0])
    else:
        raise RuntimeError('Expected to raise!')
    sig = inspect.signature(f)
    try:
        sig.bind()
    except TypeError as exc:
        print(exc.args[0])
    else:
        raise RuntimeError('Expected to raise!')
    print()

@run
def f1(pos_only, /): ...

@run
def f2(pos_or_kw): ...

@run
def f3(*, kw_only): ... 
Output on current 3.9 master:

f1() missing 1 required positional argument: 'pos_only'
missing a required argument: 'pos_only'

f2() missing 1 required positional argument: 'pos_or_kw'
missing a required argument: 'pos_or_kw'

f3() missing 1 required keyword-only argument: 'kw_only'
missing a required argument: 'kw_only'

I am willing to create a PR so that the TypeError for f3 says "missing a required keyword-only argument: 'kw_only'", if this would be accepted. ---------- components: Library (Lib) messages: 362439 nosy: RazerM priority: normal severity: normal status: open title: Signature.bind TypeErrors could be more helpful type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 19:25:34 2020 From: report at bugs.python.org (Andy Lester) Date: Sat, 22 Feb 2020 00:25:34 +0000 Subject: [New-bugs-announce] [issue39721] Fix constness of members of tok_state struct. Message-ID: <1582331134.81.0.179748951937.issue39721@roundup.psfhosted.org> New submission from Andy Lester : The function PyTokenizer_FromUTF8 from Parser/tokenizer.c had a comment:

/* XXX: constify members. */

This patch addresses that. In the tok_state struct:

* end and start were non-const but could be made const
* str and input were const but should have been non-const

Changes to support this include:

* decode_str() now returns a char * since it is allocated.
* PyTokenizer_FromString() and PyTokenizer_FromUTF8() each create a new char * for an allocated string instead of reusing the input const char *.
* PyTokenizer_Get() and tok_get() now take const char ** arguments.
* Various local vars are const or non-const accordingly.

I was able to remove five casts that cast away constness. ---------- components: Interpreter Core messages: 362441 nosy: petdance priority: normal severity: normal status: open title: Fix constness of members of tok_state struct. 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 21 20:18:00 2020 From: report at bugs.python.org (Shantanu) Date: Sat, 22 Feb 2020 01:18:00 +0000 Subject: [New-bugs-announce] [issue39722] decimal differs between pure Python and C implementations Message-ID: <1582334280.84.0.579040712908.issue39722@roundup.psfhosted.org> New submission from Shantanu : The dunder methods on decimal.Decimal accept an extra context argument in the pure Python version which the C version does not (violating PEP 399). This came up in https://github.com/python/typeshed/pull/3633, where Sebastian provided the following summary of the issue: ``` Python 3.8.1 (default, Jan 14 2020, 19:41:43) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from _decimal import Decimal as CDecimal >>> from _pydecimal import Decimal as PyDecimal >>> PyDecimal(1).__abs__(None) Decimal('1') >>> CDecimal(1).__abs__(None) Traceback (most recent call last): File "", line 1, in TypeError: expected 0 arguments, got 1 ``` ---------- components: Library (Lib) messages: 362443 nosy: hauntsaninja priority: normal severity: normal status: open title: decimal differs between pure Python and C implementations _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 22 00:43:48 2020 From: report at bugs.python.org (Shantanu) Date: Sat, 22 Feb 2020 05:43:48 +0000 Subject: [New-bugs-announce] [issue39723] io.open_code should accept PathLike objects Message-ID: <1582350228.37.0.922225151662.issue39723@roundup.psfhosted.org> New submission from Shantanu : Currently io.open_code (added in Python 3.8) only accepts str arguments. As per PEP 519, it should probably also accept PathLike. 
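A small shim illustrates the behavior the report asks for. The helper name `open_code_compat` is hypothetical (not part of the stdlib); it simply coerces any os.PathLike to str with os.fspath() before delegating to io.open_code():

```python
import io
import os
import tempfile
from pathlib import Path

def open_code_compat(path):
    # Hypothetical wrapper: io.open_code() currently takes only str,
    # so apply os.fspath() first to accept os.PathLike objects too.
    return io.open_code(os.fspath(path))

with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "mod.py"
    script.write_text("X = 1\n")
    with open_code_compat(script) as f:
        data = f.read()

print(data)  # b'X = 1\n'
```

The same os.fspath() call is what built-in open() effectively performs, which is why plain Path objects already work there.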
It might be worth extending it to accept bytes as well, both for convenience and because documentation claims it should be interchangeable with ``open(path, 'rb')``. https://github.com/python/cpython/blob/3.8/Modules/_io/_iomodule.c#L510 ---------- components: Library (Lib) messages: 362446 nosy: hauntsaninja priority: normal severity: normal status: open title: io.open_code should accept PathLike objects type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 22 05:29:37 2020 From: report at bugs.python.org (John Smith) Date: Sat, 22 Feb 2020 10:29:37 +0000 Subject: [New-bugs-announce] [issue39724] IDLE threading + stdout/stdin observed blocking behavior Message-ID: <1582367377.76.0.509554808526.issue39724@roundup.psfhosted.org> New submission from John Smith : preamble: I am aware that I am not the first to encounter this issue, but neither could I identify a preexisting ticket which fully matches, nor is the commonly recommended "solution" (stay away from IDLE) satisfying. environment: win10, python 3.7 (tested with 32 and 64 bit versions) description: If the attached script is started from IDLE, the "alive" only shows up once for every input, while the script outputs "alive" frequently if run from the terminal with python. So there is a discrepancy between the behavior of IDLE and "plain" python, which can lead to serious "irritations". If the print is replaced with logging.info and the logging is set up to write into a file, everything works as expected and equally in both environments. thoughts: the input call seems to block access to stdout(?) in "IDLE mode". I noticed that there are several topics/posts regarding IDLE's stdout/stdin behavior but I was unable to find a (convenient) solution besides "just quit using IDLE". It feels strange that the editor bundled with python has such a reputation and features such a deviation in behavior from "plain" python.
---------- assignee: terry.reedy components: IDLE messages: 362456 nosy: John Smith, terry.reedy priority: normal severity: normal status: open title: IDLE threading + stdout/stdin observed blocking behavior type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 22 16:05:03 2020 From: report at bugs.python.org (Ethan Furman) Date: Sat, 22 Feb 2020 21:05:03 +0000 Subject: [New-bugs-announce] [issue39725] unrelated `from None` exceptions lose prior exception information Message-ID: <1582405503.93.0.737085396902.issue39725@roundup.psfhosted.org> New submission from Ethan Furman : Using the example from https://bugs.python.org/msg293185: ----------------------------------------------------------------------- --> import os --> try: ... os.environ["NEW_VARIABLE"] = bug # bug is not a str ... finally: ... del os.environ["NEW_VARIABLE"] # KeyError ... Traceback (most recent call last): ... 
KeyError: 'NEW_VARIABLE' ----------------------------------------------------------------------- We lost the original exception, `TypeError: str expected, not object`, because in os.py we have: def __delitem__(self, key): encodedkey = self.encodekey(key) unsetenv(encodedkey) try: del self._data[encodedkey] except KeyError: # raise KeyError with the original key value raise KeyError(key) from None If we remove the `from None` the result of the above code is: ----------------------------------------------------------------------- Traceback (most recent call last): TypeError: str expected, not type During handling of the above exception, another exception occurred: Traceback (most recent call last): KeyError: b'NEW_VARIABLE' During handling of the above exception, another exception occurred: Traceback (most recent call last): KeyError: 'NEW_VARIABLE' ----------------------------------------------------------------------- There are various tricks we can do to fix this isolated issue (and others like it), but the real problem is that one exception handler's work was destroyed by an unrelated exception handler. The intent of `from None` is to get rid of any exception details in the try/except block it is contained within, not to lose details from exceptions that were already in play when its try/except block was entered. Any ideas on how to correct this? 
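For reference, the mechanics can be observed directly on the exception attributes: `raise ... from None` sets `__suppress_context__`, which hides everything reachable through `__context__` from the traceback display, including an outer exception that was already in flight. A minimal sketch mirroring the os.environ example above:

```python
try:
    try:
        raise TypeError("original error")      # outer exception in flight
    finally:
        try:
            raise KeyError("inner")            # chains onto the TypeError
        except KeyError:
            raise KeyError("re-raised") from None
except KeyError as exc:
    caught = exc

# The TypeError is still reachable through nested __context__ links,
# but __suppress_context__ makes the display drop the whole chain.
print(caught.__suppress_context__)                    # True
print(type(caught.__context__.__context__).__name__)  # TypeError
```

So the information is not destroyed, only suppressed at display time, which is why proposals in this issue focused on limiting how far `from None` reaches.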
---------- messages: 362478 nosy: ethan.furman, ncoghlan, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: unrelated `from None` exceptions lose prior exception information type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 22 23:38:19 2020 From: report at bugs.python.org (David Harding) Date: Sun, 23 Feb 2020 04:38:19 +0000 Subject: [New-bugs-announce] [issue39726] ctypes on pypi has fallen behind Message-ID: <1582432699.85.0.731912909682.issue39726@roundup.psfhosted.org> New submission from David Harding : I wasn't sure where to report this. ctypes currently bundled with Ubuntu 16.04 and 18.04 is version 1.1.0. ctypes available through pypi is 1.0.2. https://pypi.org/project/ctypes/ This makes maintaining a reproducible environment with venv kind of tricky. It would be desirable to catch the pypi version up to 1.1.0. I don't really know who to bother about this, so I'm starting here. Thanks! ---------- components: ctypes messages: 362488 nosy: David Harding priority: normal severity: normal status: open title: ctypes on pypi has fallen behind type: enhancement versions: Python 3.5, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 00:34:16 2020 From: report at bugs.python.org (James Edington) Date: Sun, 23 Feb 2020 05:34:16 +0000 Subject: [New-bugs-announce] [issue39727] cgi.parse() fatally attempts str.decode when handling multipart/form-data Message-ID: <1582436056.52.0.172079709412.issue39727@roundup.psfhosted.org> New submission from James Edington : It appears that cgi.parse() in Python 3.7.6 [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] fatally chokes on POST requests with multipart/form-data due to some internal processing still relying on assumptions from when str and bytes were the same object. 
I'll attach as the first comment the "try-it-at-home" file to demonstrate this error. ---------- components: Library (Lib) files: curlLogs.txt messages: 362490 nosy: James Edington priority: normal severity: normal status: open title: cgi.parse() fatally attempts str.decode when handling multipart/form-data type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48902/curlLogs.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 04:07:04 2020 From: report at bugs.python.org (Jonas Malaco) Date: Sun, 23 Feb 2020 09:07:04 +0000 Subject: [New-bugs-announce] [issue39728] Instantiating enum with invalid value results in ValueError twice Message-ID: <1582448824.79.0.766917894274.issue39728@roundup.psfhosted.org> New submission from Jonas Malaco : Trying to instantiate an enum with an invalid value results in "During handling of the above exception, another exception occurred:". $ cat > test.py << EOF from enum import Enum class Color(Enum): RED = 1 GREEN = 2 BLUE = 3 Color(0) EOF $ python --version Python 3.8.1 $ python test.py ValueError: 0 is not a valid Color During handling of the above exception, another exception occurred: Traceback (most recent call last): File "test.py", line 8, in Color(0) File "/usr/lib/python3.8/enum.py", line 304, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.8/enum.py", line 595, in __new__ raise exc File "/usr/lib/python3.8/enum.py", line 579, in __new__ result = cls._missing_(value) File "/usr/lib/python3.8/enum.py", line 608, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 0 is not a valid Color I think this might be related to 019f0a0cb85e ("bpo-34536: raise error for invalid _missing_ results (GH-9147)"), but I haven't been able to confirm. 
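The doubled traceback is ordinary implicit exception chaining: `_missing_` raises its ValueError while the first ValueError from `__new__` is still being handled. A generic sketch of the pattern — `lookup` is a hypothetical stand-in, not the enum internals:

```python
def lookup(value):
    try:
        # First attempt fails...
        raise ValueError(f"{value!r} is not a valid Color")
    except ValueError:
        # ...and the fallback hook raises again while the first
        # ValueError is being handled, so Python chains them implicitly.
        raise ValueError(f"{value!r} is not a valid Color")

try:
    lookup(0)
except ValueError as exc:
    caught = exc

print(caught)                          # 0 is not a valid Color
print(caught.__context__ is not None)  # True: implicitly chained
```

Raising `from None` at the second site is one way to suppress the "During handling of the above exception" notice.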
---------- components: Library (Lib) messages: 362496 nosy: jonasmalaco priority: normal severity: normal status: open title: Instantiating enum with invalid value results in ValueError twice type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 05:08:44 2020 From: report at bugs.python.org (Arnon Yaari) Date: Sun, 23 Feb 2020 10:08:44 +0000 Subject: [New-bugs-announce] [issue39729] stat.S_ISXXX can raise OverflowError for remote file modes Message-ID: <1582452524.82.0.552566008779.issue39729@roundup.psfhosted.org> New submission from Arnon Yaari : The C implementation of the "stat" module on Python 3 (_stat) is using the type "mode_t" for file modes, which differs between operating systems. This type can be defined as either "unsigned short" (for example, in macOS, or the definition added specifically for Windows in _stat.c) or "unsigned long" (Linux and other Unix systems such as AIX). This means that the "stat" module may only work with file modes that come from the same system that Python was compiled for. It is sometimes desirable to work with file modes on remote systems (for example, when using the "fabric" module to handle remote files - https://github.com/fabric/fabric/blob/1.10/fabric/sftp.py#L42). With the pure-python "stat" module on Python 2.7, using macros such as "stat.S_ISDIR" with any value used to work (even values that exceed "unsigned short" on macOS, for example) but with the C implementation this can result in an exception on systems with an "unsigned short" mode_t: >>> stat.S_ISDIR(0o240755) OverflowError: mode out of range I encountered this exception when trying to "put" files from a macOS system to an AIX system with "fabric" (0o240755 is the st_mode found for "/" on AIX). For uniform handling of file modes, modes should be handled as unsigned long instead of the system-defined "mode_t". 
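As a width-independent workaround sketch for callers, the file-type test can be done with plain integer bit masking, which never overflows no matter how the local build defines mode_t (the helper name is illustrative):

```python
import stat

S_IFMT_MASK = 0o170000  # POSIX file-type bit field of st_mode

def is_dir(mode):
    # Pure Python integer arithmetic: works for st_mode values taken
    # from any remote system, regardless of the local mode_t width.
    return (mode & S_IFMT_MASK) == stat.S_IFDIR

print(is_dir(0o240755))  # True: the st_mode reported for "/" on AIX
print(is_dir(0o100644))  # False: a regular file
```

This is essentially what the pure-Python stat module on Python 2.7 did, which is why it accepted such modes.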
---------- components: Library (Lib) messages: 362499 nosy: wiggin15 priority: normal severity: normal status: open title: stat.S_ISXXX can raise OverflowError for remote file modes type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 06:08:24 2020 From: report at bugs.python.org (Mark Fernandes) Date: Sun, 23 Feb 2020 11:08:24 +0000 Subject: [New-bugs-announce] [issue39730] Licence (license) for Python 3.8.1 is missing Message-ID: <1582456104.31.0.935574059782.issue39730@roundup.psfhosted.org> New submission from Mark Fernandes : At https://docs.python.org/3/license.html , the licence agreement for Python 3.8.2rc2 appears, however, I can't see the agreement for 3.8.1. It appears to be missing. Likewise, other licences for various version numbers appear to be missing. Also, it would be really helpful if you said one or more of the following: - that the licence only concerns: + copying, distribution, etc. + granting you extra copyright freedoms to those you already have under applicable law. - that there are no restrictions imposed on you by the licence, for the simple ordinary use of the software (that is other than copying, etc.) for x, y, z purposes. - that if you comply with the GPLv3 licence with respect to the software, you also meet the compliance requirements under the Python licence, and so you can treat the software as though it is GPLv3 software however, if you so treat the software, the software is still nonetheless licensed to you under the Python licence and NOT the GPL licence. 
---------- assignee: docs at python components: Documentation messages: 362500 nosy: Mark Fernandes, docs at python priority: normal severity: normal status: open title: Licence (license) for Python 3.8.1 is missing versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 08:55:42 2020 From: report at bugs.python.org (Md. Al-Helal) Date: Sun, 23 Feb 2020 13:55:42 +0000 Subject: [New-bugs-announce] [issue39731] ModuleNotFoundError: No module named '_ctypes' Message-ID: <1582466142.44.0.612475216639.issue39731@roundup.psfhosted.org> New submission from Md. Al-Helal : ERROR: Command errored out with exit status 1: command: /usr/local/bin/python3.8 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-68ypuk30/django-tables2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-68ypuk30/django-tables2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-68ypuk30/django-tables2/pip-egg-info cwd: /tmp/pip-install-68ypuk30/django-tables2/ Complete output (11 lines): Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 20, in from setuptools.dist import Distribution, Feature File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 35, in from setuptools import windows_support File "/usr/local/lib/python3.8/site-packages/setuptools/windows_support.py", line 2, in import ctypes File "/usr/local/lib/python3.8/ctypes/__init__.py", line 7, in from _ctypes import Union, Structure, Array ModuleNotFoundError: No module named '_ctypes' ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
---------- components: ctypes messages: 362511 nosy: alhelal priority: normal severity: normal status: open title: ModuleNotFoundError: No module named '_ctypes' type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 09:23:39 2020 From: report at bugs.python.org (Mingye Wang) Date: Sun, 23 Feb 2020 14:23:39 +0000 Subject: [New-bugs-announce] [issue39732] plistlib should export UIDs in XML like Apple does Message-ID: <1582467819.43.0.259670073534.issue39732@roundup.psfhosted.org> New submission from Mingye Wang : Although there is no native UID type in Apple's XML format, Apple's NSKeyedArchiver still works with it because it converts the UID to a dict of {"CF$UID": int(some_uint64_val)}. Plistlib should do the same. For a sample, see https://github.com/apple/swift-corelibs-foundation/blob/2a5bc4d8a0b073532e60410682f5eb8f00144870/Tests/Foundation/Resources/NSKeyedUnarchiver-ArrayTest.plist. ---------- components: Library (Lib) messages: 362513 nosy: Artoria2e5 priority: normal severity: normal status: open title: plistlib should export UIDs in XML like Apple does type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 15:55:30 2020 From: report at bugs.python.org (12345NotFromHere54321) Date: Sun, 23 Feb 2020 20:55:30 +0000 Subject: [New-bugs-announce] [issue39733] Bug in hypergeometric function Message-ID: <1582491330.33.0.385073270653.issue39733@roundup.psfhosted.org> New submission from 12345NotFromHere54321 : I want to evaluate Kummer's hypergeometric function. 
Code: import scipy.special as sc import numpy as np #Parameters etc: p=2 s = -4.559190954155 -51.659216953928*1j Evaluation: s = -4.559190954155 -51.659216953928*1j sc.hyp1f1(1/p, 1/p + 1, -s) Output: (0.999999999999721-2.57668886227691e-13j) This is close to 1 and agrees with Mathematica (see below) Because the parameters 1/p and 1/p+1 are real, we know that if we replace s by its conjugate, the output should be the conjugate of the first output. This turns out not to be the case: Evaluation: s = -4.559190954155 -51.659216953928*1j s = np.conj(s) sc.hyp1f1(1/p, 1/p + 1, -s) Output: (0.8337882727951572+0.1815268182862942j) This is very far from 1. There seems to be a bug. Mathematica: s = (-4.559190954155+51.659216953928I) sconj=Conjugate[s] Hypergeometric1F1[1/2,3/2,-s] Hypergeometric1F1[1/2,3/2,-sconj] Out[9]= 1.+1.99922*^-11 \[ImaginaryI] Out[10]= 1.-1.99922*^-11 \[ImaginaryI] ---------- messages: 362539 nosy: 12345NotFromHere54321 priority: normal severity: normal status: open title: Bug in hypergeometric function type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 17:47:08 2020 From: report at bugs.python.org (Antoine Pitrou) Date: Sun, 23 Feb 2020 22:47:08 +0000 Subject: [New-bugs-announce] [issue39734] Deprecate readinto() fallback path in _pickle.c Message-ID: <1582498028.39.0.0123392152586.issue39734@roundup.psfhosted.org> New submission from Antoine Pitrou : In issue39681 we reestablished the fallback to read() when a file-like object doesn't provide readinto() in _pickle.c. However, doing so leads to lower performance and all file-like object should nowadays provide readinto() (simply by deriving from the right base class - e.g. io.BufferedIOBase). I propose to issue a DeprecationWarning when the fallback behaviour is selected, so that one day we can finally remove it. 
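To illustrate the "right base class" point: a file-like object that only implements read() gains a working readinto() for free by deriving from io.BufferedIOBase, so unpickling can use the fast path without the fallback. A minimal sketch — the wrapper class is hypothetical:

```python
import io
import pickle

class ReadOnlyWrapper(io.BufferedIOBase):
    """Only read()/readline() are written here; io.BufferedIOBase
    supplies a default readinto() built on top of read()."""

    def __init__(self, data):
        self._buf = io.BytesIO(data)

    def readable(self):
        return True

    def read(self, size=-1):
        return self._buf.read(size)

    def readline(self, size=-1):
        return self._buf.readline(size)

payload = pickle.dumps({"answer": 42})
obj = pickle.load(ReadOnlyWrapper(payload))
print(obj)  # {'answer': 42}
```

Under the proposal, only hand-rolled file-likes that bypass the io base classes would ever see the DeprecationWarning.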
---------- components: Library (Lib) messages: 362547 nosy: ogrisel, pierreglaser, pitrou priority: low severity: normal status: open title: Deprecate readinto() fallback path in _pickle.c type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 22:22:43 2020 From: report at bugs.python.org (Masahiro Sakai) Date: Mon, 24 Feb 2020 03:22:43 +0000 Subject: [New-bugs-announce] [issue39735] Signal handler is invoked recursively Message-ID: <1582514563.33.0.732332773883.issue39735@roundup.psfhosted.org> New submission from Masahiro Sakai : If I run following program, the parent process fails with "RecursionError: maximum recursion depth exceeded", because the signal handler is invoked during the execution of the same handler. ---- import os import signal import time def f(signum, frame): time.sleep(1) signal.signal(signal.SIGUSR1, f) parent_pid = os.getpid() child_pid = os.fork() if child_pid == 0: for i in range(100000): os.kill(parent_pid, signal.SIGUSR1) time.sleep(0.01) else: os.waitpid(child_pid, 0) ---- This behavior is in contrast to other languages such as C or Ruby. In C, when a handler function is invoked on a signal, that signal is automatically blocked during the time the handler is running, unless SA_NODEFER is specified. In Ruby, signal handler is handled in a way similar to Python (i.e. flag is set by C-level signal handler and Ruby/Python-level signal handler is executed later point which is safe for VM), but it prevents recursive signal handler invocation. (Related issue and commit: https://bugs.ruby-lang.org/issues/6009 https://github.com/ruby/ruby/commit/6190bb4d8ad7a07ddb1da8fc687b20612743a34a ) I believe that behavior of C and Ruby is desirable, because writing reentrant signal handler is sometimes error prone. 
---------- components: Extension Modules messages: 362562 nosy: msakai priority: normal severity: normal status: open title: Signal handler is invoked recursively type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 23:24:30 2020 From: report at bugs.python.org (Andy Lester) Date: Mon, 24 Feb 2020 04:24:30 +0000 Subject: [New-bugs-announce] [issue39736] const strings in Modules/_datetimemodule.c and Modules/_testbuffer.c Message-ID: <1582518270.89.0.669104927357.issue39736@roundup.psfhosted.org> New submission from Andy Lester : In Modules/_datetimemodule.c, the char *timespec and char *specs[] can be made const. Their contents are never modified. In ndarray_get_format in Modules/_testbuffer.c, char *fmt can be made const. ---------- components: Interpreter Core messages: 362565 nosy: petdance priority: normal severity: normal status: open title: const strings in Modules/_datetimemodule.c and Modules/_testbuffer.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 23 23:56:41 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Mon, 24 Feb 2020 04:56:41 +0000 Subject: [New-bugs-announce] [issue39737] Speed up list.__eq__ by about 6% Message-ID: <1582520201.72.0.954951125447.issue39737@roundup.psfhosted.org> New submission from Dennis Sweeney : The following tiny change: diff --git a/Objects/listobject.c b/Objects/listobject.c index 3c39c6444b..3ac03b71d0 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -2643,8 +2643,7 @@ list_richcompare(PyObject *v, PyObject *w, int op) Py_INCREF(vitem); Py_INCREF(witem); - int k = PyObject_RichCompareBool(vl->ob_item[i], - wl->ob_item[i], Py_EQ); + int k = PyObject_RichCompareBool(vitem, witem, Py_EQ); Py_DECREF(vitem); Py_DECREF(witem); if (k < 0) Creates the following performance improvement: Before: > .\python.bat -m timeit -s 
"A = list(range(10**7)); B = list(range(10**7))" "A==B" 2 loops, best of 5: 134 msec per loop > .\python.bat -m timeit -s "A = list(range(10**7)); B = list(range(10**7))" "A==B" 2 loops, best of 5: 134 msec per loop After: > .\python.bat -m timeit -s "A = list(range(10**7)); B = list(range(10**7))" "A==B" 2 loops, best of 5: 126 msec per loop > .\python.bat -m timeit -s "A = list(range(10**7)); B = list(range(10**7))" "A==B" 2 loops, best of 5: 126 msec per loop ---------- components: Interpreter Core messages: 362566 nosy: Dennis Sweeney priority: normal severity: normal status: open title: Speed up list.__eq__ by about 6% type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 04:28:03 2020 From: report at bugs.python.org (wjzbf) Date: Mon, 24 Feb 2020 09:28:03 +0000 Subject: [New-bugs-announce] [issue39738] mod operation with large number is not correct. Message-ID: <1582536483.25.0.598086329397.issue39738@roundup.psfhosted.org> New submission from wjzbf : hello python, when i calculate: 151476660579404160000-151476660579404160000//1000000007 * (1e9+7) it returns 67534848.0 when i calculate 151476660579404160000 % (1e9+7) it returns 67536199.0 the two values are not equal. how to explain it? thanks zbf ---------- components: Windows messages: 362574 nosy: paul.moore, steve.dower, tim.golden, wjzbf, zach.ware priority: normal severity: normal status: open title: mod operation with large number is not correct. 
type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 11:25:28 2020 From: report at bugs.python.org (nono) Date: Mon, 24 Feb 2020 16:25:28 +0000 Subject: [New-bugs-announce] [issue39739] Python crash every time opening pycharm, seems related to tensorflow Message-ID: <1582561528.75.0.376924358185.issue39739@roundup.psfhosted.org> Change by nono : ---------- components: macOS nosy: leo212121, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Python crash every time opening pycharm, seems related to tensorflow type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 12:16:05 2020 From: report at bugs.python.org (RobHammann) Date: Mon, 24 Feb 2020 17:16:05 +0000 Subject: [New-bugs-announce] [issue39740] Select module fails to build on Solaris 11.4 Message-ID: <1582564565.67.0.773509979349.issue39740@roundup.psfhosted.org> New submission from RobHammann : On an x86 Solaris 11.4 system, the standard procedure build of python from source fails to build the 'select' module. Make install then fails upon trying to import 'select'. In the make output, while gcc is building 'select', these errors are printed: building 'select' extension gcc -fPIC -Wno-unused-result -Wsign-compare -g -Og -Wall -D_REENTRANT -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include -I. 
-I/usr/local/include -I/root/cpython-solaris/Include -I/root/cpython-solaris -c /root/cpython-solaris/Modules/selectmodule.c -o build/temp.solaris-2.11-i86pc.64bit-3.9-pydebug/root/cpython-solaris/Modules/selectmodule.o /root/cpython-solaris/Modules/selectmodule.c:1147:21: error: 'devpoll_methods' undeclared here (not in a function); did you mean 'devpoll_dealloc'? {Py_tp_methods, devpoll_methods}, ^~~~~~~~~~~~~~~ devpoll_dealloc /root/cpython-solaris/Modules/selectmodule.c:2299:20: warning: 'devpoll_methods' defined but not used [-Wunused-variable] static PyMethodDef devpoll_methods[] = { ^~~~~~~~~~~~~~~ Attached is the combined outputs of ./configure, make, and make install ---------- components: Build files: build-output.txt messages: 362598 nosy: RobHammann priority: normal severity: normal status: open title: Select module fails to build on Solaris 11.4 type: compile error versions: Python 3.9 Added file: https://bugs.python.org/file48908/build-output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 12:30:10 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Mon, 24 Feb 2020 17:30:10 +0000 Subject: [New-bugs-announce] [issue39741] Argument Clinic name conflict Message-ID: <1582565410.47.0.513154068013.issue39741@roundup.psfhosted.org> New submission from Batuhan Taskaya : Argument clinic uses some extra variables (like args, or noptargs, nargs etc.) for parsing. But there is a catch about these names, the generated code becomes wrong if there are any usages of them inside the signature. Encountered with this problem while working on *args support (in issue 20291). The possible solution is prefixing every argument in the parser with __clinic_ (__clinic_{var}) for preventing any kind of conflict. I'll draft a PR for this issue. 
---------- components: Argument Clinic messages: 362599 nosy: BTaskaya, larry, pablogsal priority: low severity: normal status: open title: Argument Clinic name conflict type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 15:36:13 2020 From: report at bugs.python.org (Enji Cooper) Date: Mon, 24 Feb 2020 20:36:13 +0000 Subject: [New-bugs-announce] [issue39742] Enhancement: add `os.getdtablesize(..)` to `os` (`posix`) module Message-ID: <1582576573.04.0.0159075532911.issue39742@roundup.psfhosted.org> New submission from Enji Cooper : getdtablesize({2,3}) is a wonderful library call/system call to have access to because it allows one to determine the descriptor limits at runtime in an easier manner than having to do the equivalent with os.sysconf(..): >>> os.sysconf(os.sysconf_names["SC_OPEN_MAX"]) This has been present in *BSD since time immemorial [1] and in Linux since 2010 [2], so I think it would be a good addition to the `os` (`posix`) module. I will submit a diff for this in a few days, if it's deemed acceptable to have in the `posix` module. 1. https://www.freebsd.org/cgi/man.cgi?query=getdtablesize&sektion=2&manpath=freebsd-release-ports 2. http://man7.org/linux/man-pages/man2/getdtablesize.2.html ---------- components: Library (Lib) messages: 362603 nosy: ngie priority: normal severity: normal status: open title: Enhancement: add `os.getdtablesize(..)` to `os` (`posix`) module type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 16:04:59 2020 From: report at bugs.python.org (Syed Habeeb Ullah Quadri) Date: Mon, 24 Feb 2020 21:04:59 +0000 Subject: [New-bugs-announce] [issue39743] variable quiet is not defined in function main.
Message-ID: <1582578299.58.0.858783047501.issue39743@roundup.psfhosted.org> New submission from Syed Habeeb Ullah Quadri : 'quiet' is an argument of function "compile" in "pycompile.py". I do not understand why 'quiet' is used in function "main" when function main has no arguments. The undefined 'quiet' variable is used in the "main" function of "pycompile.py" in lines 200, 204, 214. This is giving an error in the dist-upgrade of Ubuntu 19.1. Attaching the pycompile.tar.xz for review. Please fix it at the earliest. ---------- components: Cross-Build files: pycompile.tar.xz messages: 362604 nosy: Alex.Willmer, syed007 priority: normal severity: normal status: open title: variable quiet is not defined in function main. versions: Python 3.8 Added file: https://bugs.python.org/file48909/pycompile.tar.xz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 17:09:41 2020 From: report at bugs.python.org (=?utf-8?q?Marek_Marczykowski-G=C3=B3recki?=) Date: Mon, 24 Feb 2020 22:09:41 +0000 Subject: [New-bugs-announce] [issue39744] asyncio.subprocess's communicate(None) does not close stdin Message-ID: <1582582181.12.0.732686127935.issue39744@roundup.psfhosted.org> New submission from Marek Marczykowski-Górecki : Standard subprocess's communicate() called with None input (or no argument at all) closes the process's stdin. The asyncio variant does not. This leads to issues with various processes that wait for EOF on stdin before terminating. Test script attached.
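The attached script is not included in the archive; a self-contained sketch of the symptom and a workaround — closing stdin explicitly before communicate() so the child sees EOF — might look like this. The child program and names are illustrative assumptions:

```python
import asyncio
import sys

# Child blocks reading stdin until EOF, then reports how much it got.
CHILD = "import sys; print(len(sys.stdin.read()))"

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", CHILD,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    # Workaround sketch: close stdin ourselves. With the behavior
    # described in the report, communicate(None) alone leaves the
    # pipe open and the child would block on read() forever.
    proc.stdin.close()
    out, _ = await proc.communicate()
    return out.decode().strip()

result = asyncio.run(main())
print(result)  # 0: the child saw immediate EOF on stdin
```

The explicit close is harmless even on versions where communicate(None) does close stdin, so the sketch is safe either way.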
---------- components: asyncio files: commmunicate-test.py messages: 362605 nosy: asvetlov, marmarek, yselivanov priority: normal severity: normal status: open title: asyncio.subprocess's communicate(None) does not close stdin versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48910/commmunicate-test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 21:47:59 2020 From: report at bugs.python.org (Masahiro Sakai) Date: Tue, 25 Feb 2020 02:47:59 +0000 Subject: [New-bugs-announce] [issue39745] BlockingIOError.characters_written represents number of bytes not characters Message-ID: <1582598879.29.0.157478927981.issue39745@roundup.psfhosted.org> New submission from Masahiro Sakai : According to https://docs.python.org/3/library/exceptions.html#BlockingIOError , 'characters_written' is "An integer containing the number of characters written to the stream before it blocked". But I observed that it represents number of *bytes* not *characters* in the following program. Program: ---- import os import threading import time r, w = os.pipe() os.set_blocking(w, False) f_r = os.fdopen(r, mode="rb") f_w = os.fdopen(w, mode="w", encoding="utf-8") msg = "\u03b1\u03b2\u03b3\u3042\u3044\u3046\u3048\u304a" * (1024 * 16) try: print(msg, file=f_w, flush=True) except BlockingIOError as e: print(f"BlockingIOError.characters_written == {e.characters_written}") written = e.characters_written def close(): os.set_blocking(w, True) f_w.close() threading.Thread(target=close).start() b = f_r.read() f_r.close() print(f"{written} characters correspond to {len(msg[:written].encode('utf-8'))} bytes in UTF-8") print(f"{len(b)} bytes read") ---- Output: ---- BlockingIOError.characters_written == 81920 81920 characters correspond to 215040 bytes in UTF-8 81920 bytes read ---- I think it is confusing behavior. 
If this is intended behavior, then it should be documented as such, and I think 'bytes_written' is a more appropriate name. ---------- components: IO messages: 362611 nosy: msakai priority: normal severity: normal status: open title: BlockingIOError.characters_written represents number of bytes not characters _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 24 22:08:43 2020 From: report at bugs.python.org (Orion Fisher) Date: Tue, 25 Feb 2020 03:08:43 +0000 Subject: [New-bugs-announce] [issue39746] Inappropriate short circuit relating to inequality comparison and membership test Message-ID: <1582600123.76.0.783865258946.issue39746@roundup.psfhosted.org> New submission from Orion Fisher : I found a strange issue with how the interpreter produces bytecode for an expression like True != True in [False, False]. Reading it, one would expect the value of the expression to be True, whether the inequality comparison is evaluated first, or the membership test is evaluated first. Indeed, when parentheses are used to control the order of execution, these results do occur. However, without any parentheses, the result is False. The underlying cause seems to be a short circuit which is dependent on the inequality comparison, seen in the JUMP_IF_FALSE_OR_POP instruction. This expression is very synthetic, but I am submitting this bug under the worry that it speaks to a more significant error in the bytecode produced for inequality tests (or membership tests).
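The unparenthesized result is consistent with Python's comparison chaining: `!=` and `in` are both comparison operators, so `a != b in c` evaluates as `(a != b) and (b in c)` with `b` evaluated once. A short sketch of the three readings:

```python
# `True != True in [False, False]` chains as
# `(True != True) and (True in [False, False])`; the left comparison is
# False, so the chain short-circuits (the JUMP_IF_FALSE_OR_POP seen in
# the bytecode) and the whole expression yields False.
chained = True != True in [False, False]
left_first = (True != True) in [False, False]
right_first = True != (True in [False, False])
print(chained, left_first, right_first)  # False True True
```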
---------- components: Interpreter Core files: bug_occurence.py messages: 362612 nosy: Orion Fisher priority: normal severity: normal status: open title: Inappropriate short circuit relating to inequality comparison and membership test type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48911/bug_occurence.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 00:36:29 2020 From: report at bugs.python.org (Ethan Smith) Date: Tue, 25 Feb 2020 05:36:29 +0000 Subject: [New-bugs-announce] [issue39747] test_os debug assertion failure Message-ID: <1582608989.67.0.495966477974.issue39747@roundup.psfhosted.org> New submission from Ethan Smith : With CPython master branch and build.bat -e -p x64, if I run test_os I get the following (in a messagebox, transcribed here for ease of consumption). Sorry if I am missing something. This means I am unable to run test_os to completion. I am on Windows 10.0.19559.1000 x64 with CL 19.24.28315/Visual Studio 16.4.3

For test_bad_fd:
-----
Debug Assertion Failed!

Program: C:\Users\ethanhs\cpython\PCbuild\amd64\python_d.exe
File: minkernel\crts\ucrt\src\appcrt\lowio\isatty.cpp
Line: 17

Expression: (fh >= 0 && (unsigned)fh < (unsigned)_nhandle)

For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts.
(Press Retry to debug the application) --- ---------- components: Windows messages: 362624 nosy: Ethan Smith, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_os debug assertion failure type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 06:30:30 2020 From: report at bugs.python.org (Noel del rosario) Date: Tue, 25 Feb 2020 11:30:30 +0000 Subject: [New-bugs-announce] [issue39748] PyScripter could not find Python 3.8 64 bits Message-ID: <1582630230.75.0.555477437991.issue39748@roundup.psfhosted.org> New submission from Noel del rosario : I installed Python 2.7, 3.7 and 3.8, all in 64 bits. Then I installed PyScripter 3.6 64 bits. PyScripter can easily set up Python 2.7 and 3.7. But it cannot set up Python 3.8. It cannot find it. ---------- components: Installation messages: 362631 nosy: rosarion priority: normal severity: normal status: open title: PyScripter could not find Python 3.8 64 bits type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 08:24:53 2020 From: report at bugs.python.org (=?utf-8?b?6r+I64+M?=) Date: Tue, 25 Feb 2020 13:24:53 +0000 Subject: [New-bugs-announce] [issue39749] python 3.8.1 (3.14 * 10 = 31.400000002 bug) Message-ID: <1582637093.58.0.373430719424.issue39749@roundup.psfhosted.org> New submission from 꿈돌 :

>>> 10 * 3.14
31.400000000000002

There is a bug in 3.8.1 python: 10 * 3.14 is 31.4, but in python it is 31.400000000000002.... ---------- messages: 362641 nosy: 꿈돌
priority: normal severity: normal status: open title: python 3.8.1 (3.14 * 10 = 31.400000002 bug) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 11:15:36 2020 From: report at bugs.python.org (=?utf-8?q?Jo=C3=A3o_Eiras?=) Date: Tue, 25 Feb 2020 16:15:36 +0000 Subject: [New-bugs-announce] [issue39750] UnicodeError becomes unpicklable if data is appended to args Message-ID: <1582647336.17.0.517223219132.issue39750@roundup.psfhosted.org> New submission from João Eiras : Given some exception `ex`, you can append data like ex.args += (value1, value2, ...) and then re-raise. This is something I do in my projects to propagate context when errors are raised, e.g., stacktraces across process boundaries or blobs of text with pickling or unicode errors. When this is done with UnicodeError, the exception becomes non-unpicklable: TypeError: function takes exactly 5 arguments (6 given)

Example:

import pickle

def test_unicode_error_unpickle():
    ex0 = UnicodeEncodeError('ascii', 'message', 1, 2, 'e')
    ex0.args += ("extra context",)
    ex1 = pickle.loads(pickle.dumps(ex0))
    assert type(ex0).args == type(ex1).args
    assert ex0.args == ex1.args

The issue seems to be UnicodeEncodeError_init() at https://github.com/python/cpython/blob/v3.8.1/Objects/exceptions.c#L1895 and also UnicodeDecodeError_init(). The BaseException is initialized, but then Unicode*Error_init() tries to reparse the arguments and does not tolerate extra values. This is because BaseException.__reduce__ returns a tuple (class, args).
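A condensed sketch of the round-trip (on versions with the reported behavior, it raises the TypeError quoted above; the `outcome` variable is just for illustration):

```python
import pickle

# Append extra context to a UnicodeEncodeError and try a pickle
# round-trip; on affected versions UnicodeEncodeError_init re-parses
# the args tuple and rejects the sixth element.
ex = UnicodeEncodeError('ascii', 'message', 1, 2, 'e')
ex.args += ('extra context',)
try:
    pickle.loads(pickle.dumps(ex))
    outcome = 'round-trip ok'
except TypeError as err:
    outcome = f'TypeError: {err}'
print(outcome)
```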
---------- components: Interpreter Core files: test_unicode_error_unpickle.py messages: 362648 nosy: João Eiras priority: normal severity: normal status: open title: UnicodeError becomes unpicklable if data is appended to args type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48914/test_unicode_error_unpickle.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 11:37:00 2020 From: report at bugs.python.org (=?utf-8?q?Jo=C3=A3o_Eiras?=) Date: Tue, 25 Feb 2020 16:37:00 +0000 Subject: [New-bugs-announce] [issue39751] multiprocessing breaks when payload fails to unpickle Message-ID: <1582648620.1.0.266788942102.issue39751@roundup.psfhosted.org> New submission from João Eiras : The multiprocessing module uses pickles to send data between processes. If a blob fails to unpickle (bad implementation of __setstate__, invalid payload from __reduce__, random crash in __init__), the multiprocessing module will crash inside the _handle_results worker, e.g.:

File "lib\threading.py", line 932, in _bootstrap_inner
    self.run()
File "lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
File "lib\multiprocessing\pool.py", line 576, in _handle_results
    task = get()
File "lib\multiprocessing\connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
TypeError: __init__() takes 1 positional argument but 4 were given

After this the worker has crashed, and every task waiting for results from the pool will wait forever. There are 2 things that I think should be fixed:

1. In _handle_results, capture all unrecognized errors and propagate them in the main thread. At this point at least one of the jobs' replies is lost forever, so there is little point in trying to log and resume.

2. Separate the result payload from the payload that contains the job index/id, so they are unpickled in two steps.
The first step unpickles the data internal to multiprocessing to know which task the result refers to. The second step unpickles the return value or exception from the function that was called, and if this object fails to unpickle, propagates that error to the main thread through the proper ApplyResult or IMapIterator instances. ---------- components: email files: test_multiproc_error_unpickle.py messages: 362649 nosy: João Eiras, barry, r.david.murray priority: normal severity: normal status: open title: multiprocessing breaks when payload fails to unpickle versions: Python 3.8 Added file: https://bugs.python.org/file48915/test_multiproc_error_unpickle.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 12:24:16 2020 From: report at bugs.python.org (=?utf-8?q?Jo=C3=A3o_Eiras?=) Date: Tue, 25 Feb 2020 17:24:16 +0000 Subject: [New-bugs-announce] [issue39752] multiprocessing halts when child process crashes/quits Message-ID: <1582651456.06.0.685882029108.issue39752@roundup.psfhosted.org> New submission from João Eiras : Hi. When one of the processes in a multiprocessing.Pool picks up a task and then somehow crashes (and by crash I mean crashing the python process with something like a SEGV) or is killed, the pool in the main process will notice that one of the workers died and will repopulate the pool, but it does not keep track of which task was being handled by the process that died. As a consequence, a caller waiting for a result will get stuck forever. Example:

with multiprocessing.Pool(1) as pool:
    result = pool.map_async(os._exit, [1]).get(timeout=2)

I found this because I was trying to use a lock with a spawned process on linux and that caused a crash and my program froze, but that is another issue.
---------- components: Extension Modules messages: 362651 nosy: João Eiras priority: normal severity: normal status: open title: multiprocessing halts when child process crashes/quits type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 13:42:14 2020 From: report at bugs.python.org (Chris Withers) Date: Tue, 25 Feb 2020 18:42:14 +0000 Subject: [New-bugs-announce] [issue39753] inspecting a partial with bound keywords gives incorrect signature Message-ID: <1582656134.57.0.362705493242.issue39753@roundup.psfhosted.org> New submission from Chris Withers :

$ python
Python 3.8.1 (v3.8.1:1b293b6006, Dec 18 2019, 14:08:53) [Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from functools import partial
>>> def foo(x, y, z, a=None): pass
...
>>> p = partial(foo, 1, y=2)
>>> from inspect import signature
>>> signature(p).parameters.values()
odict_values([, , ])

That shouldn't be in there:

>>> p(2, y=3)
Traceback (most recent call last):
  File "", line 1, in
TypeError: foo() got multiple values for argument 'y'

---------- messages: 362656 nosy: cjw296 priority: normal severity: normal status: open title: inspecting a partial with bound keywords gives incorrect signature _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 15:17:09 2020 From: report at bugs.python.org (Marco Sulla) Date: Tue, 25 Feb 2020 20:17:09 +0000 Subject: [New-bugs-announce] [issue39754] update_one_slot() does not inherit sq_contains and mp_subscript if they are explicitly declared Message-ID: <1582661829.56.0.319745659558.issue39754@roundup.psfhosted.org> New submission from Marco Sulla : I noticed that `__contains__()` and `__getitem__()` of subclasses of `dict` are much slower.
I asked why on StackOverflow, and a user seemed to find the reason. The problem for him/her is that `dict` directly implements `__contains__()` and `__getitem__()`. Usually, `sq_contains` and `mp_subscript` are wrapped to implement `__contains__()` and `__getitem__()`, but this way `dict` is a little faster, I suppose. The problem is that `update_one_slot()` searches for the wrappers. If it does not find them, it does not inherit the `__contains__()` and `__getitem__()` of the class, but creates `__contains__()` and `__getitem__()` functions that do an MRO search and call the superclass method. This is why `__contains__()` and `__getitem__()` of `dict` subclasses are slower. Is it possible to modify `update_one_slot()` so that, if no wrapper is found, the explicit implementation is inherited? SO answer: https://stackoverflow.com/a/59914459/1763602 ---------- components: C API messages: 362662 nosy: Marco Sulla priority: normal severity: normal status: open title: update_one_slot() does not inherit sq_contains and mp_subscript if they are explicitly declared type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 25 18:53:56 2020 From: report at bugs.python.org (Mark Bell) Date: Tue, 25 Feb 2020 23:53:56 +0000 Subject: [New-bugs-announce] [issue39755] Change example of itertools.product Message-ID: <1582674836.69.0.647873846967.issue39755@roundup.psfhosted.org> New submission from Mark Bell : The documentation for itertools.product at: https://docs.python.org/3/library/itertools.html#itertools.product currently says that: For example, product(A, B) returns the same as ((x,y) for x in A for y in B) While this is broadly correct, since product first converts its arguments to tuples, this is not true if A or B are infinite iterables.
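A short sketch of the laziness difference the report describes:

```python
import itertools

# The generator-expression form stays lazy even when A is infinite:
lazy = ((x, y) for x in itertools.count() for y in range(2))
print(next(lazy))  # (0, 0)
# itertools.product(itertools.count(), range(2)) would instead hang,
# because product() first converts each argument to a tuple.
```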
For example, when A = itertools.count() and B = range(2), the former runs forever using infinite memory, whereas the latter returns the lazy generator immediately for use. Would it be clearer / more correct to instead say: For example, product(A, B) returns the same as ((x,y) for x in tuple(A) for y in tuple(B)) ---------- assignee: docs at python components: Documentation messages: 362672 nosy: Mark.Bell, docs at python priority: normal severity: normal status: open title: Change example of itertools.product versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 01:40:12 2020 From: report at bugs.python.org (Devin Morgan) Date: Wed, 26 Feb 2020 06:40:12 +0000 Subject: [New-bugs-announce] [issue39756] Event sequence "KeyRelease-Shift_R" not being fired Message-ID: <1582699212.15.0.411870257552.issue39756@roundup.psfhosted.org> New submission from Devin Morgan : I am trying to create a remake of Pong, using Right Control and Right Shift to move the right paddle up and down. Moving the paddle down works fine, but the paddle gets stuck in the move-up state, despite my efforts to root out logic errors by reviewing and debugging my code; the logic is correct.
---------- components: Tkinter files: Py-Pong.py messages: 362682 nosy: Devin Morgan priority: normal severity: normal status: open title: Event sequence "KeyRelease-Shift_R" not being fired type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48916/Py-Pong.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 04:36:15 2020 From: report at bugs.python.org (Julien Castiaux) Date: Wed, 26 Feb 2020 09:36:15 +0000 Subject: [New-bugs-announce] [issue39757] EmailMessage wrong encoding for international domain Message-ID: <1582709775.3.0.571538664603.issue39757@roundup.psfhosted.org> New submission from Julien Castiaux : Affected python versions: 3.5 and above (tested them all except 3.9)

Steps to reproduce:

from email.message import EmailMessage
from email.policy import SMTP

msg = EmailMessage(policy=SMTP)
msg['To'] = 'Joe '  # notice the ? in the domain
print(msg.as_string())

It prints To: "Joe " But it should be To: "Joe " While b64/qp can be used to encode most non-ascii headers, the domain part of an email address is an exception. According to IDNA2008 (rfc5890, rfc5891), a non-ascii domain should be encoded using the punycode algorithm and the ACE prefix. ---------- components: email messages: 362687 nosy: Julien Castiaux, barry, r.david.murray priority: normal severity: normal status: open title: EmailMessage wrong encoding for international domain type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 05:36:06 2020 From: report at bugs.python.org (Tom Christie) Date: Wed, 26 Feb 2020 10:36:06 +0000 Subject: [New-bugs-announce] [issue39758] StreamWriter.wait_closed() can hang indefinitely.
Message-ID: <1582713366.75.0.743984348325.issue39758@roundup.psfhosted.org> New submission from Tom Christie : Raising an issue that's impacting us on `httpx`. It appears that in some cases SSL unwrapping can cause `.wait_closed()` to hang indefinitely. Trio are particularly careful to work around this case, and have an extensive comment on it: https://github.com/python-trio/trio/blob/31e2ae866ad549f1927d45ce073d4f0ea9f12419/trio/_ssl.py#L779-L829 Originally raised via https://github.com/encode/httpx/issues/634 Tested on: * Python 3.7.6 * Python 3.8.1

```
import asyncio
import ssl

import certifi

hostname = 'login.microsoftonline.com'
context = ssl.create_default_context()
context.load_verify_locations(cafile=certifi.where())

async def main():
    reader, writer = await asyncio.open_connection(hostname, 443, ssl=context)
    print('opened')
    writer.close()
    print('close started')
    await writer.wait_closed()
    print('close completed')

asyncio.run(main())
```

---------- components: asyncio messages: 362688 nosy: asvetlov, tomchristie, yselivanov priority: normal severity: normal status: open title: StreamWriter.wait_closed() can hang indefinitely. type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 06:44:26 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Wed, 26 Feb 2020 11:44:26 +0000 Subject: [New-bugs-announce] [issue39759] os.getenv documentation is misleading Message-ID: <1582717466.93.0.921378424861.issue39759@roundup.psfhosted.org> New submission from Rémi Lapeyre : The documentation states that "*key*, *default* and the result are str."
at https://github.com/python/cpython/blame/3.8/Doc/library/os.rst#L224 but either I'm missing something or it's not actually true:

$ python -c 'import os; print(type(os.getenv("FOO")))'
$ python -c 'import os; print(type(os.getenv("FOO", default=1)))'

Only *key* needs to be a string, as it is used to look up the value in os.environ. I think this can be fixed by a new contributor. ---------- assignee: docs at python components: Documentation messages: 362689 nosy: docs at python, remi.lapeyre priority: normal severity: normal status: open title: os.getenv documentation is misleading type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 07:43:02 2020 From: report at bugs.python.org (Ilya Kamenshchikov) Date: Wed, 26 Feb 2020 12:43:02 +0000 Subject: [New-bugs-announce] [issue39760] ast.FormattedValue.format_spec unnecessarily wrapped in JoinedStr Message-ID: <1582720982.96.0.367799860574.issue39760@roundup.psfhosted.org> New submission from Ilya Kamenshchikov : The most common use case for format_spec is to specify it as a constant, which would be logical to represent as ast.Constant. However, ast.parse wraps the value of ast.FormattedValue.format_spec in a JoinedStr with a single constant value, as can be seen from the example below:

import ast

code = '''f"is {x:d}"'''
tree = ast.parse(code)
for n in ast.walk(tree):
    if isinstance(n, ast.FormattedValue):
        print(
            type(n.format_spec),
            len(n.format_spec.values),
            set(type(v) for v in n.format_spec.values),
        )

This is confusing for programmatically analyzing the ast, and likely creates some overhead in any modules using ast and FormattedValue. Proposal: represent ast.FormattedValue.format_spec as ast.Constant in most cases.
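A minimal check of the wrapping described in the report:

```python
import ast

# The format spec "d" in f"is {x:d}" arrives wrapped in a JoinedStr
# whose single element is the Constant the report would prefer to see
# directly on format_spec.
tree = ast.parse('f"is {x:d}"')
fv = next(n for n in ast.walk(tree) if isinstance(n, ast.FormattedValue))
print(type(fv.format_spec).__name__)      # JoinedStr
inner = fv.format_spec.values[0]
print(type(inner).__name__, inner.value)  # Constant d
```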
---------- components: Library (Lib) messages: 362691 nosy: Ilya Kamenshchikov priority: normal severity: normal status: open title: ast.FormattedValue.format_spec unnecessarily wrapped in JoinedStr type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 11:56:28 2020 From: report at bugs.python.org (Marcel Plch) Date: Wed, 26 Feb 2020 16:56:28 +0000 Subject: [New-bugs-announce] [issue39761] Python 3.9.0a4 fails to build when configured with --with-dtrace Message-ID: <1582736188.76.0.993881693157.issue39761@roundup.psfhosted.org> New submission from Marcel Plch : Steps to reproduce:

$ wget https://www.python.org/ftp/python/3.9.0/Python-3.9.0a4.tar.xz
$ tar xvf Python-3.9.0a4.tar.xz
$ cd Python-3.9.0a4
$ ./configure --with-dtrace
$ make -j12
/usr/bin/ld: libpython3.9.a(ceval.o): in function `_PyEval_EvalFrameDefault':
/home/mplch/Work/fedpkg/Python-3.9.0a4/Python/ceval.c:1117: undefined reference to `python_function__entry_semaphore'
/usr/bin/ld: /home/mplch/Work/fedpkg/Python-3.9.0a4/Python/ceval.c:1254: undefined reference to `python_line_semaphore'
/usr/bin/ld: /home/mplch/Work/fedpkg/Python-3.9.0a4/Python/ceval.c:3697: undefined reference to `python_function__return_semaphore'
/usr/bin/ld: /home/mplch/Work/fedpkg/Python-3.9.0a4/Python/ceval.c:1445: undefined reference to `python_line_semaphore'
...
/usr/bin/ld: libpython3.9.a(gcmodule.o):(.note.stapsdt+0x70): undefined reference to `python_gc__done_semaphore'
collect2: error: ld returned 1 exit status
make: *** [Makefile:709: Programs/_testembed] Error 1

Additional info:

$ gcc --version
gcc (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
---------- components: Build messages: 362700 nosy: Dormouse759 priority: normal severity: normal status: open title: Python 3.9.0a4 fails to build when configured with --with-dtrace type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 13:19:28 2020 From: report at bugs.python.org (Enji Cooper) Date: Wed, 26 Feb 2020 18:19:28 +0000 Subject: [New-bugs-announce] [issue39762] PyLong_AS_LONG missing from longobject.h Message-ID: <1582741168.61.0.124981854369.issue39762@roundup.psfhosted.org> New submission from Enji Cooper : While trying to port python 2 C extension code forward to python 3, I noticed that the python 2.6 PyInt -> PyLong unification lacks a forward-compatible API for PyLong_AS_LONG. I'm not sure if this was intentional, but it is a slightly annoying wicket to deal with when porting forward C extension code that needs to straddle 2 and 3 for a period of time. ---------- components: C API messages: 362705 nosy: ngie priority: normal severity: normal status: open title: PyLong_AS_LONG missing from longobject.h versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 14:18:35 2020 From: report at bugs.python.org (Elad Lahav) Date: Wed, 26 Feb 2020 19:18:35 +0000 Subject: [New-bugs-announce] [issue39763] Hang after fork due to logging trying to reacquire the module lock in an atfork() handler Message-ID: <1582744715.59.0.836949225961.issue39763@roundup.psfhosted.org> New submission from Elad Lahav : The attached code causes the child processes to hang on QNX. The hang is caused by the logging module trying to acquire the module lock while in an atfork() handler.
In a system where semaphore state is kept in user mode, and is thus inherited from the parent on fork(), the semaphore may appear to have a value of 0, and thus will never be posted to in the single-threaded child. I don't know how it works on other systems - may be pure chance. ---------- components: Library (Lib) files: fork_mt.py messages: 362717 nosy: Elad Lahav priority: normal severity: normal status: open title: Hang after fork due to logging trying to reacquire the module lock in an atfork() handler versions: Python 3.8 Added file: https://bugs.python.org/file48917/fork_mt.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 14:25:53 2020 From: report at bugs.python.org (Lidi Zheng) Date: Wed, 26 Feb 2020 19:25:53 +0000 Subject: [New-bugs-announce] [issue39764] PyAsyncGenObject causes task.get_stack() raising AttributeError Message-ID: <1582745153.24.0.421575819454.issue39764@roundup.psfhosted.org> New submission from Lidi Zheng : This issue has existed since 3.6. The implementation of get_stack() on Task looks for two attributes [1]: "cr_frame" for coroutines and "gi_frame" for generators (legacy coroutines). However, PyAsyncGenObject provides neither of them, only "ag_frame" [2]. Fix PR: https://github.com/python/cpython/pull/18669 A simple reproduction:

def test_async_gen_aclose_compatible_with_get_stack(self):
    async def async_generator():
        yield object()

    async def run():
        ag = async_generator()
        asyncio.create_task(ag.aclose())
        tasks = asyncio.all_tasks()
        for task in tasks:
            # No AttributeError raised
            task.get_stack()

    self.loop.run_until_complete

I found this in my project, where I want to see who created the tasks.
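The attribute mismatch can be seen without an event loop, since the frame attributes live on the async generator object itself:

```python
# Async generators expose ag_frame; they have neither the cr_frame a
# coroutine has nor the gi_frame a plain generator has, which is what
# Task.get_stack() goes looking for.
async def agen():
    yield 1

ag = agen()
print(hasattr(ag, 'cr_frame'), hasattr(ag, 'gi_frame'), hasattr(ag, 'ag_frame'))
# False False True
```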
[1] https://github.com/python/cpython/blob/21da76d1f1b527d62b2e9ef79dd9aa514d996341/Lib/asyncio/base_tasks.py#L27 [2] https://github.com/python/cpython/blob/21da76d1f1b527d62b2e9ef79dd9aa514d996341/Objects/genobject.c#L1329 ---------- components: asyncio messages: 362722 nosy: asvetlov, lidiz, yselivanov priority: normal pull_requests: 18025 severity: normal status: open title: PyAsyncGenObject causes task.get_stack() raising AttributeError type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 15:46:52 2020 From: report at bugs.python.org (Roger Dahl) Date: Wed, 26 Feb 2020 20:46:52 +0000 Subject: [New-bugs-announce] [issue39765] asyncio loop.set_signal_handler() may not behave as expected Message-ID: <1582750012.4.0.489053907607.issue39765@roundup.psfhosted.org> New submission from Roger Dahl : This is a ticket to document two ways in which the behavior of loop.set_signal_handler() may not match what the user expects. First, callbacks to handlers registered with loop.set_signal_handler() may be significantly delayed. I have a program where I've encountered delays of up to maybe a minute or so between hitting Ctrl-C and getting the callback for the SIGINT handler. During this time, the program works through queued aiohttp tasks. Though it's possible to have delays in callbacks for events set with signal.signal(), I haven't personally seen that, and I think that's the case for most users. So I think this point should be included in the docs. Second, set_signal_handler() silently and implicitly removes corresponding handlers set with signal.signal(). Though perhaps logical, this potentially removes a "fast" handler and replaces it with a "slow" one. I think this should be documented as well. 
---------- components: asyncio messages: 362734 nosy: asvetlov, rogerdahl, yselivanov priority: normal severity: normal status: open title: asyncio loop.set_signal_handler() may not behave as expected type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 17:01:11 2020 From: report at bugs.python.org (daniel hahler) Date: Wed, 26 Feb 2020 22:01:11 +0000 Subject: [New-bugs-announce] [issue39766] unittest's assertRaises removes locals from tracebacks Message-ID: <1582754471.69.0.783470624559.issue39766@roundup.psfhosted.org> New submission from daniel hahler : I was a bit surprised to find that unittest's assertRaises clears the locals on the traceback, which e.g. prevents pytest from displaying them in case of failures. This was done via https://bugs.python.org/issue9815 (https://github.com/python/cpython/commit/9681022f1ee5c6c9160c515b24d2a3d1efe8b90d). Maybe this should only be done for expected failures, so that unexpected exceptions can be inspected better?
---------- components: Library (Lib) messages: 362744 nosy: blueyed priority: normal severity: normal status: open title: unittest's assertRaises removes locals from tracebacks versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 18:46:31 2020 From: report at bugs.python.org (Dariusz Trawinski) Date: Wed, 26 Feb 2020 23:46:31 +0000 Subject: [New-bugs-announce] [issue39767] create multiprocessing.SharedMemory by pointing to existing memoryview Message-ID: <1582760791.31.0.349985357098.issue39767@roundup.psfhosted.org> New submission from Dariusz Trawinski : Currently, in order to share a numpy array between processes via a multiprocessing.SharedMemory object, it is required to copy the memory content with:

input = np.ones((1, 10, 10, 10))
shm = shared_memory.SharedMemory(create=True, size=input.nbytes)
write_array = np.ndarray(input.shape, dtype=input.dtype, buffer=shm.buf)
write_array[:] = input[:]

As a result, the original numpy array is duplicated in RAM. It also adds extra cpu cycles to copy the content. I would like to recommend adding an option to create a shared memory object by pointing it to an existing memoryview object, besides the current method of using a shared memory name. Is that doable?
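The copy requirement can be sketched with the stdlib alone, without numpy:

```python
from multiprocessing import shared_memory

# Today's API: allocate a fresh block, then copy the payload into it.
# There is no way to wrap an existing buffer without this copy, which
# is what the report proposes adding.
data = bytes(range(16))
shm = shared_memory.SharedMemory(create=True, size=len(data))
shm.buf[:len(data)] = data  # the copy the report wants to avoid
ok = bytes(shm.buf[:len(data)]) == data
print(ok)  # True
shm.close()
shm.unlink()
```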
---------- components: C API messages: 362754 nosy: Dariusz Trawinski priority: normal severity: normal status: open title: create multiprocessing.SharedMemory by pointing to existing memoryview type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 26 21:57:01 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 27 Feb 2020 02:57:01 +0000 Subject: [New-bugs-announce] [issue39768] remove tempfile.mktemp() Message-ID: <1582772221.62.0.0102658338874.issue39768@roundup.psfhosted.org> New submission from wyz23x2 : The tempfile.mktemp() function has been deprecated since version 2.3, nearly 17 years ago! It should be removed, since it causes security holes, as stated in the tempfile doc (https://docs.python.org/3/library/tempfile.html#tempfile.mktemp). ---------- components: IO messages: 362762 nosy: wyz23x2 priority: normal severity: normal status: open title: remove tempfile.mktemp() type: security versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 00:18:05 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Thu, 27 Feb 2020 05:18:05 +0000 Subject: [New-bugs-announce] [issue39769] compileall.compile_dir(..., ddir="") omits the intermediate package paths when prepending the prefix Message-ID: <1582780685.98.0.127631905646.issue39769@roundup.psfhosted.org> New submission from Gregory P.
Smith : Easiest to demonstrate as such:

```shell
#!/bin/bash
mkdir bug
touch bug/__init__.py
mkdir bug/foo
touch bug/foo/__init__.py
touch bug/foo/bar.py
python3 -m compileall -d "" bug
python2 -m compileall -d "" bug
echo "prefix embedded in PY3 pyc code object for lib.foo.bar:"
strings bug/foo/__pycache__/bar.cpython-3*.pyc | grep prefix
echo "prefix embedded in PY2 pyc code object for lib.foo.bar:"
strings bug/foo/bar.pyc | grep prefix
```

Run that script and you'll see:

Listing 'bug'...
Compiling 'bug/__init__.py'...
Listing 'bug/foo'...
Compiling 'bug/foo/__init__.py'...
Compiling 'bug/foo/bar.py'...
Listing bug ...
Compiling bug/__init__.py ...
Listing bug/__pycache__ ...
Listing bug/foo ...
Compiling bug/foo/__init__.py ...
Listing bug/foo/__pycache__ ...
Compiling bug/foo/bar.py ...
prefix embedded in PY3 pyc code object for lib.foo.bar:
/bar.py
prefix embedded in PY2 pyc code object for lib.foo.bar:
/foo/bar.pyt

Notice that the Python 3 pyc file contains a code.co_filename of "/bar.py" instead of the correct value (that Python 2 inserts) of "/foo/bar.py". ---------- messages: 362767 nosy: gregory.p.smith priority: normal severity: normal status: open title: compileall.compile_dir(..., ddir="") omits the intermediate package paths when prepending the prefix versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 00:56:10 2020 From: report at bugs.python.org (Andy Lester) Date: Thu, 27 Feb 2020 05:56:10 +0000 Subject: [New-bugs-announce] [issue39770] Remove unnecessary size calculation in array_modexec in Modules/arraymodule.c Message-ID: <1582782970.67.0.345827847271.issue39770@roundup.psfhosted.org> New submission from Andy Lester : The array_modexec function in Modules/arraymodule.c has a loop that calculates the number of elements in the descriptors array. This size was used at one point, but is no longer.
The loop can be removed. ---------- components: Interpreter Core messages: 362772 nosy: petdance priority: normal severity: normal status: open title: Remove unnecessary size calculation in array_modexec in Modules/arraymodule.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 02:12:32 2020 From: report at bugs.python.org (hwgdb Smith) Date: Thu, 27 Feb 2020 07:12:32 +0000 Subject: [New-bugs-announce] [issue39771] EmailMessage.add_header doesn't work Message-ID: <1582787552.48.0.364956702827.issue39771@roundup.psfhosted.org> New submission from hwgdb Smith : Here is the partial code:

msg = EmailMessage()
file_name = "?e?3000P.csv"
ctype, encoding = mimetypes.guess_type(file_name)
if ctype is None or encoding is not None:
    ctype = "application/octet-stream"
maintype, subtype = ctype.split("/", 1)
with open(file_name, "rb") as f:
    msg.add_attachment(f.read(), maintype=maintype, subtype=subtype,
                       filename=("GBK", "", f"{file_name}"))

The file has a non-ASCII name, so I use the three-tuple filename with the GBK encoding, but msg.as_string() doesn't change. print(msg.as_string()) I find the filename is 'filename*=utf-8\'\'%E8%B6 ......'. The encoding is not correct. And of course, after sending the message, I saw the attached file's filename displayed incorrectly in my mail client and webmail. But when I use the legacy API, using the Header class to generate the filename, it works.
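For contrast, a small sketch of how the legacy (compat32) Message API handles a three-tuple filename parameter, which is the workaround the reporter describes; the filename '文件.csv' is an illustrative stand-in for the reporter's GBK-named file:

```python
from email.message import Message

# Legacy compat32 API: a (charset, language, value) tuple is encoded
# per RFC 2231 with the charset actually requested, here GBK.
m = Message()
m.add_header('Content-Disposition', 'attachment',
             filename=('GBK', '', '文件.csv'))

# The parameter comes out RFC 2231-encoded with the GBK label,
# e.g. attachment; filename*=GBK''%CE%C4%BC%FE.csv
print(m['Content-Disposition'])
```

The complaint above is that EmailMessage (with the new policies) instead re-encodes the value as UTF-8, ignoring the charset given in the tuple.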
---------- components: email messages: 362780 nosy: barry, hwgdb Smith, r.david.murray priority: normal severity: normal status: open title: EmailMessage.add_header doesn't work type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 02:57:19 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 27 Feb 2020 07:57:19 +0000 Subject: [New-bugs-announce] [issue39772] Python 2 FAQ shown in help@python.org auto reply Message-ID: <1582790239.29.0.848001765747.issue39772@roundup.psfhosted.org> New submission from wyz23x2 : The auto-reply from help at python contains this: The Python FAQ is available at http://docs.python.org/2/faq/index.html Why is it .org/2/faq, not .org/3/faq? ---------- components: email files: email.png messages: 362784 nosy: barry, r.david.murray, wyz23x2 priority: normal severity: normal status: open title: Python 2 FAQ shown in help at python.org auto reply Added file: https://bugs.python.org/file48919/email.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 03:08:41 2020 From: report at bugs.python.org (David Hewitt) Date: Thu, 27 Feb 2020 08:08:41 +0000 Subject: [New-bugs-announce] [issue39773] Export symbols for vectorcall Message-ID: <1582790921.68.0.338373290894.issue39773@roundup.psfhosted.org> New submission from David Hewitt : I have been looking into using vectorcall in [pyo3](https://github.com/PyO3/pyo3) (Rust bindings to Python) against python3.8. It looks like the _PyObject_Vectorcall symbols are not included in the shared library. I've checked both Windows and Linux. I think the `static inline` definition of `PyObject_Vectorcall` and related functions in `abstract.h` means that they won't be exported as symbols in the final library? 
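One way to probe what the report describes from Python itself is ctypes.pythonapi, which resolves names against the interpreter's exported C symbols; the helper name has_symbol is illustrative:

```python
import ctypes

def has_symbol(name):
    # ctypes raises AttributeError when the name cannot be resolved
    # in the interpreter's exported symbol table.
    try:
        getattr(ctypes.pythonapi, name)
        return True
    except AttributeError:
        return False

print(has_symbol('PyObject_Call'))         # a long-exported API symbol
print(has_symbol('_PyObject_Vectorcall'))  # reported absent from the 3.8 library
```

Whether the second probe succeeds depends on the Python version and build, which is exactly the point of the report.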
---------- messages: 362789 nosy: David Hewitt priority: normal severity: normal status: open title: Export symbols for vectorcall versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 04:53:08 2020 From: report at bugs.python.org (igo95862) Date: Thu, 27 Feb 2020 09:53:08 +0000 Subject: [New-bugs-announce] [issue39774] Missing documentation on how to make package executable as script Message-ID: <1582797188.41.0.932040725136.issue39774@roundup.psfhosted.org> New submission from igo95862 : This is package documentation: https://docs.python.org/3/tutorial/modules.html#packages To make package executable (python -m package) you need to create a file __main__.py in the package directory. This is pretty much not documented anyone aside of trying to run a package missing __main__.py This page already contains information on how to make module executable. (https://docs.python.org/3/tutorial/modules.html#executing-modules-as-scripts) ---------- assignee: docs at python components: Documentation messages: 362790 nosy: docs at python, igo95862 priority: normal severity: normal status: open title: Missing documentation on how to make package executable as script type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 08:47:39 2020 From: report at bugs.python.org (Antony Lee) Date: Thu, 27 Feb 2020 13:47:39 +0000 Subject: [New-bugs-announce] [issue39775] inspect.Signature.parameters should be an OrderedDict, not a plain dict Message-ID: <1582811259.66.0.363702048971.issue39775@roundup.psfhosted.org> New submission from Antony Lee : https://bugs.python.org/issue36350 / https://github.com/python/cpython/pull/12412 changed Signature.parameters and BoundArguments.arguments to be plain dicts, not OrderedDicts (for Py3.9a4). 
Even though I agree for BoundArguments.arguments (in fact I argued for this behavior in https://bugs.python.org/issue23080), I think Signature.parameters should remain OrderedDicts. Otherwise, one would get

>>> inspect.signature(lambda x, y: None).parameters == inspect.signature(lambda y, x: None).parameters
True

which seems plain wrong (comparing the signature objects themselves still correctly returns False because __eq__ explicitly considers parameter order, but one may e.g. want to compare parameters for equality while ignoring the return annotation). ---------- components: Library (Lib) messages: 362800 nosy: Antony.Lee priority: normal severity: normal status: open title: inspect.Signature.parameters should be an OrderedDict, not a plain dict versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 10:25:13 2020 From: report at bugs.python.org (Evgeny Boytsov) Date: Thu, 27 Feb 2020 15:25:13 +0000 Subject: [New-bugs-announce] [issue39776] Crash in decimal module in heavy-multithreaded scenario Message-ID: <1582817113.42.0.194070983198.issue39776@roundup.psfhosted.org> New submission from Evgeny Boytsov : Hello everybody! We are using Python 3.7 running on CentOS 7 x64. Python is used as a library to create dynamic extensions for our app server. Some time ago we began to experience crashes in the decimal module in some heavily multithreaded scenarios. After some testing and debugging I was able to reproduce it without our own code, using only the pybind11 library to simplify embedding (in the real app we are using boost.python). I've built Python 3.8 with clang 7 and AddressSanitizer enabled and got a "use-after-free" error with some additional data. Please find attached the C++ source file, Python module and ASAN output. Is it really a bug (most probably a data race), or is there something wrong with such an embedding scenario?
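For reference, a minimal sketch of the kind of heavily multithreaded decimal workload involved; this pure-Python version runs fine and does not by itself reproduce the embedding crash (thread count and precision are arbitrary):

```python
import decimal
import threading

def worker(results, i):
    # Each thread manipulates its own context; the reported crash
    # involves this per-thread context machinery when the
    # interpreter is embedded.
    with decimal.localcontext() as ctx:
        ctx.prec = 10
        results[i] = decimal.Decimal(1) / decimal.Decimal(7)

results = [None] * 8
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread computed 1/7 with 10 significant digits.
print(results[0])  # 0.1428571429
```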
---------- components: Interpreter Core files: decimal_crash.zip messages: 362807 nosy: boytsovea priority: normal severity: normal status: open title: Crash in decimal module in heavy-multithreaded scenario versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48923/decimal_crash.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 16:09:05 2020 From: report at bugs.python.org (Brett Cannon) Date: Thu, 27 Feb 2020 21:09:05 +0000 Subject: [New-bugs-announce] [issue39777] Use the codecov GH Action Message-ID: <1582837745.01.0.0567552110573.issue39777@roundup.psfhosted.org> New submission from Brett Cannon : Codecov provides a GH Action for uploading coverage reports which we might as well use instead of their bash uploader: https://github.com/marketplace/actions/codecov. ---------- messages: 362839 nosy: brett.cannon, steve.dower priority: normal severity: normal status: open title: Use the codecov GH Action _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 16:52:09 2020 From: report at bugs.python.org (Leonard Lausen) Date: Thu, 27 Feb 2020 21:52:09 +0000 Subject: [New-bugs-announce] [issue39778] collections.OrderedDict and weakref.ref raises "refcount is too small" assertion Message-ID: <1582840329.06.0.498356539569.issue39778@roundup.psfhosted.org> New submission from Leonard Lausen : The sample program below raises "Modules/gcmodule.c:110: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small" on a Python 3.8.2 debug build. On a 3.7.6 debug build, "Modules/gcmodule.c:277: visit_decref: Assertion `_PyGCHead_REFS(gc) != 0' failed." is raised.
```
import collections
import gc
import weakref

hooks_dict = collections.OrderedDict()
hooks_dict_ref = weakref.ref(hooks_dict)
gc.collect()
print('Hello world')
```

The complete error message on the 3.8.2 debug build is

```
Modules/gcmodule.c:110: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Memory block allocated at (most recent call first):
  File "/home/$USER/test.py", line 6
object  :
type    : weakref
refcount: 1
address : 0x7ff788208a70
Fatal Python error: _PyObject_AssertFailed
Python runtime state: initialized

Current thread 0x00007ff789f9c080 (most recent call first):
  File "/home/$USER/test.py", line 7 in <module>
zsh: abort  PYTHONTRACEMALLOC=1 python ~/test.py
```

---------- components: C API messages: 362846 nosy: leezu priority: normal severity: normal status: open title: collections.OrderedDict and weakref.ref raises "refcount is too small" assertion versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 18:40:07 2020 From: report at bugs.python.org (brian.gallagher) Date: Thu, 27 Feb 2020 23:40:07 +0000 Subject: [New-bugs-announce] [issue39779] [argparse] Add parameter to sort help output arguments Message-ID: <1582846807.44.0.751374358876.issue39779@roundup.psfhosted.org> New submission from brian.gallagher :
Currently we output the following, when the above program is ran: positional arguments: c token c b token b d token d optional arguments: -h, --help show this help message and exit -a A token a -z Z token z -f F token f I'm proposing that we provide a mechanism to allow alphabetical ordering of both sections, like so: positional arguments: b token b c token c d token d optional arguments: -h, --help show this help message and exit -a A token a -f F token f -z Z token z I've chosen to leave -h as an exception, as it will always be there as an optional argument, but it could easily be treated no different. We could provide an optional argument to print_help(sort=False) as a potential approach. If this is something that the maintainer's would be willing to accept, I'd love to take it on and prepare a patch. ---------- components: Library (Lib) messages: 362849 nosy: brian.gallagher priority: normal severity: normal status: open title: [argparse] Add parameter to sort help output arguments type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 19:42:01 2020 From: report at bugs.python.org (Ali McMaster) Date: Fri, 28 Feb 2020 00:42:01 +0000 Subject: [New-bugs-announce] [issue39780] Add HTTP Response code 451 Message-ID: <1582850521.34.0.826318713504.issue39780@roundup.psfhosted.org> Change by Ali McMaster : ---------- components: Library (Lib) nosy: Ali McMaster priority: normal severity: normal status: open title: Add HTTP Response code 451 type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 27 22:15:19 2020 From: report at bugs.python.org (Terry J. 
Reedy) Date: Fri, 28 Feb 2020 03:15:19 +0000 Subject: [New-bugs-announce] [issue39781] IDLE: Do not jump when select in codecontext Message-ID: <1582859719.48.0.980553540558.issue39781@roundup.psfhosted.org> New submission from Terry J. Reedy : Tweak the code context widget so people can select context lines and copy to clipboard (perhaps to document nested code) without having to do the copy while still holding down the left mouse button. Button-down, move, and release (to select) is not really a 'click', so I don't think a doc change is needed. ---------- assignee: terry.reedy components: IDLE messages: 362861 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal stage: commit review status: open title: IDLE: Do not jump when select in codecontext type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 06:18:30 2020 From: report at bugs.python.org (Wang Jie) Date: Fri, 28 Feb 2020 11:18:30 +0000 Subject: [New-bugs-announce] [issue39782] local varible referenced a Exception won't be collected in function Message-ID: <1582888710.58.0.0521565801778.issue39782@roundup.psfhosted.org> New submission from Wang Jie : I referenced an Exception object in a function and found, by accident, that memory usage increases constantly. I think it may be a bug. I wrote a minimal program to reproduce it.
```py
from threading import local, Thread
from time import sleep

l = {}

def t0():
    b = l.get('e')  # memory usage won't increase if I remove this line
    try:
        raise Exception('1')
    except Exception as e:
        l['e'] = e

def target():
    while True:
        sleep(0.0001)
        t0()

target()
# t = Thread(target=target)
# t.daemon = True
# t.start()
```

I tried to execute it in IPython and got the following output:

```
In [1]: run py/ref_exception_causes_oom.py

In [2]: import objgraph

In [3]: objgraph.show_growth(limit=3)
frame        78792    +78792
Exception    78779    +78779
traceback    78779    +78779

In [4]: objgraph.show_growth(limit=3)
Exception    100862    +22083
traceback    100862    +22083
frame        100875    +22083

In [5]: objgraph.show_growth(limit=3)
Exception    115963    +15101
traceback    115963    +15101
frame        115976    +15101
```

And I tried to execute this code in Python 2.7 and PyPy. The problem does not occur in either of them. ---------- components: Interpreter Core messages: 362873 nosy: wangjie priority: normal severity: normal status: open title: local varible referenced a Exception won't be collected in function type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 06:36:37 2020 From: report at bugs.python.org (Antony Lee) Date: Fri, 28 Feb 2020 11:36:37 +0000 Subject: [New-bugs-announce] [issue39783] Optimize construction of Path from other Paths by just returning the same object? Message-ID: <1582889797.54.0.821069987387.issue39783@roundup.psfhosted.org> New submission from Antony Lee : Many functions which take a path-like object typically also accept strings (sorry, no hard numbers here). This means that if the function plans to call Path methods on the object, it needs to first call Path() on the arguments to convert them, well, to Paths.
This adds an unnecessary cost in the case where the argument is *already* a Path object (which should become more and more common as the use of pathlib spreads), as Path instantiation is not exactly cheap (it's on the order of microseconds). Instead, given that Paths are immutable, `Path(path)` could just return the exact same path instance, completely bypassing instance creation (after checking that the argument's type exactly matches whatever we need and is not, say, PureWindowsPath when we want to instantiate a PosixPath, etc.). Note that there is prior art for doing so in CPython: creating a frozenset from another frozenset just returns the same instance: ``` In [1]: s = frozenset({1}); id(s) == id(frozenset(s)) == id(s.copy()) Out[1]: True ``` ---------- components: Library (Lib) messages: 362874 nosy: Antony.Lee priority: normal severity: normal status: open title: Optimize construction of Path from other Paths by just returning the same object? versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 12:44:14 2020 From: report at bugs.python.org (Marco Sulla) Date: Fri, 28 Feb 2020 17:44:14 +0000 Subject: [New-bugs-announce] [issue39784] Tuple comprehension Message-ID: <1582911854.61.0.127481778131.issue39784@roundup.psfhosted.org> New submission from Marco Sulla : I think a tuple comprehension could be very useful. Currently, the only way to efficiently create a tuple from a comprehension is to create a list comprehension (generator comprehensions are more slow) and convert it with `tuple()`. A tuple comprehension will do exactly the same thing, but without the creation of the intermediate list. IMHO a tuple comprehension can be very useful, because: 1. there are many cases in which you create a list with a comprehension, but you'll never change it later. You could simply convert it with `tuple()`, but it will require more time 2. 
tuples use less memory than lists 3. tuples can be interned As syntax, I propose (* expr for x in iterable *) with absolutely no blank character between the character ( and the *, and the same for ). Well, I know, it's a bit strange syntax... but () are already taken by generator comprehensions. Furthermore, the * resembles a snowflake, and tuples are a sort of "frozenlists". ---------- components: Interpreter Core messages: 362888 nosy: Marco Sulla priority: normal severity: normal status: open title: Tuple comprehension versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 13:15:33 2020 From: report at bugs.python.org (fireattack) Date: Fri, 28 Feb 2020 18:15:33 +0000 Subject: [New-bugs-announce] [issue39785] usr/bin/python doesn't use default python (3) on Windows Message-ID: <1582913733.02.0.573169535769.issue39785@roundup.psfhosted.org> New submission from fireattack : STR 1. Install both Py2 and 3. 2. Make sure Py3 is the default. 3. (Optional) Make sure only Python3 is in path, not Python2. Run the following script from CMD:

```
#!/usr/bin/python
import platform
print(platform.python_version())
```

What was expected: 3.8.1 What happened: 2.8.5 According to https://docs.python.org/3/using/windows.html#shebang-lines, `#!/usr/bin/python` should use the default python. My environment is set to default to py3, and I don't even have python2 in my PATH.
C:\Users\ikena\Desktop>py --version Python 3.8.1 C:\Users\ikena\Desktop>python --version Python 3.8.1 C:\Users\ikena\Desktop>where python C:\Users\ikena\AppData\Local\Programs\Python\Python38\python.exe C:\Users\ikena\AppData\Local\Microsoft\WindowsApps\python.exe ---------- components: Windows messages: 362892 nosy: fireattack, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: usr/bin/python doesn't use default python (3) on Windows versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 14:44:20 2020 From: report at bugs.python.org (signing_agreement) Date: Fri, 28 Feb 2020 19:44:20 +0000 Subject: [New-bugs-announce] [issue39786] Have the heaps library support max heap Message-ID: <1582919060.63.0.948721548128.issue39786@roundup.psfhosted.org> New submission from signing_agreement : For numeric types, I can negate the numeric argument for max heaps, but if I have strings, I cannot go about negating them. ---------- components: Library (Lib) messages: 362909 nosy: signing_agreement priority: normal severity: normal status: open title: Have the heaps library support max heap type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 15:02:38 2020 From: report at bugs.python.org (Matheus Castanho) Date: Fri, 28 Feb 2020 20:02:38 +0000 Subject: [New-bugs-announce] [issue39787] test_ssl and test_urllib2_localnet failing with new OpenSSL Message-ID: <1582920158.93.0.343168507997.issue39787@roundup.psfhosted.org> New submission from Matheus Castanho : test_ssl and test_urllib2_localnet are failing when Python is built against top-of-tree OpenSSL. 
I'm attaching the output of: `regrtest.py test_ssl test_urllib2_localnet -W` The output is from a powerpc64le machine with Python 3.8.2+ (1bbb81b251bc) and OpenSSL master (db943f43a60d1b). A git bisect showed the problems started with the following OpenSSL commit: commit db943f43a60d1b5b1277e4b5317e8f288e7a0a3a Author: Matt Caswell Date: Fri Jan 17 17:39:19 2020 +0000 Detect EOF while reading in libssl If we hit an EOF while reading in libssl then we will report an error back to the application (SSL_ERROR_SYSCALL) but errno will be 0. We add an error to the stack (which means we instead return SSL_ERROR_SSL) and therefore give a hint as to what went wrong. Contains a partial fix for #10880 Reviewed-by: Tomas Mraz Reviewed-by: Dmitry Belyavskiy (Merged from https://github.com/openssl/openssl/pull/10882) This also looks similar to: https://bugs.python.org/issue28689 ---------- assignee: christian.heimes components: SSL, Tests files: test-output.txt messages: 362915 nosy: christian.heimes, mscastanho priority: normal severity: normal status: open title: test_ssl and test_urllib2_localnet failing with new OpenSSL type: behavior versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48931/test-output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 16:36:04 2020 From: report at bugs.python.org (Marco Sulla) Date: Fri, 28 Feb 2020 21:36:04 +0000 Subject: [New-bugs-announce] [issue39788] Exponential notation should return an int if it can Message-ID: <1582925764.06.0.47669700025.issue39788@roundup.psfhosted.org> New submission from Marco Sulla : (venv_3_9) marco at buzz:~/sources/python-frozendict$ python Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) [GCC 9.2.1 20190909] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> a = 1E9
>>> type(a)
<class 'float'>

IMHO if the exponent is positive, and the "base number" (1 in the example) is an integer, the result should be an integer. Optionally, also if the "base number" has a number of decimal places <= the exponent, the result should be an integer. Example: 1.25E2 == 125 If the user wants a float, they can write 1.2500E2 == 125.0 ---------- components: Interpreter Core messages: 362918 nosy: Marco Sulla priority: normal severity: normal status: open title: Exponential notation should return an int if it can type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 17:44:47 2020 From: report at bugs.python.org (Steve Dower) Date: Fri, 28 Feb 2020 22:44:47 +0000 Subject: [New-bugs-announce] [issue39789] Update Windows release build machines to latest versions Message-ID: <1582929887.69.0.782945048331.issue39789@roundup.psfhosted.org> New submission from Steve Dower : Shouldn't have any impact at all, but I'm going to mention it here so it gets in the NEWS file. Just in case someone hits an obscure edge case and is trying to find out what changed.
---------- assignee: steve.dower components: Windows messages: 362931 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Update Windows release build machines to latest versions versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 18:15:35 2020 From: report at bugs.python.org (Steve Dower) Date: Fri, 28 Feb 2020 23:15:35 +0000 Subject: [New-bugs-announce] [issue39790] LICENSE.TXT file does not contain all incorporated software Message-ID: <1582931735.09.0.762109151271.issue39790@roundup.psfhosted.org> New submission from Steve Dower : Looking at https://docs.python.org/3/license.html there's a list of "incorporated software" licenses, most of which say you need to distribute the license. Right now, the LICENSE.txt file we distribute on Windows (made from the /LICENSE and PC/crtlicense.txt files in the repo, plus those in the source dependencies) does not include any of these licenses that come from C source files. Arguably, we should just include all of them in a central file somewhere in the repo, even though there'd be some duplication. (I have no idea whether other distros correctly gather it all together.) ---------- components: Windows messages: 362937 nosy: brett.cannon, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: LICENSE.TXT file does not contain all incorporated software versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 28 23:11:30 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 29 Feb 2020 04:11:30 +0000 Subject: [New-bugs-announce] [issue39791] New `files()` api from importlib_resources. 
Message-ID: <1582949490.73.0.868974160192.issue39791@roundup.psfhosted.org> New submission from Jason R. Coombs : In the [importlib_resources backport](https://gitlab.com/python-devs/importlib_resources/)... in particular in [issue 58](https://gitlab.com/python-devs/importlib_resources/issues/58) and [merge request 76](https://gitlab.com/python-devs/importlib_resources/-/merge_requests/76), the backport now has a new feature, a "files()" function. Let's incorporate that functionality into importlib.resources. ---------- messages: 362962 nosy: jaraco priority: normal severity: normal status: open title: New `files()` api from importlib_resources. versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 01:04:20 2020 From: report at bugs.python.org (Masahiro Sakai) Date: Sat, 29 Feb 2020 06:04:20 +0000 Subject: [New-bugs-announce] [issue39792] Two Ctrl+C is required to terminate when a pipe is blocking Message-ID: <1582956260.49.0.773143606041.issue39792@roundup.psfhosted.org> New submission from Masahiro Sakai : I noticed that two Ctrl+C instead of one are required to terminate following program on macOS and Linux. I guess that the first Ctrl+C is ignored inside one of the finalizers. 
---- import os def main(): r, w = os.pipe() f_w = os.fdopen(w, "w") f_w.buffer.write(b"a" * 65536) f_w.buffer.write(b"b") main() ---- ---------- components: IO messages: 362964 nosy: msakai priority: normal severity: normal status: open title: Two Ctrl+C is required to terminate when a pipe is blocking type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 04:56:17 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 29 Feb 2020 09:56:17 +0000 Subject: [New-bugs-announce] [issue39793] make_msgid fail on FreeBSD 12.1-RELEASE-p1 with different domains Message-ID: <1582970177.09.0.683840972158.issue39793@roundup.psfhosted.org> New submission from Batuhan Taskaya : $ ./python -m test test_email 0:00:00 load avg: 0.25 Run tests sequentially 0:00:00 load avg: 0.25 [1/1] test_email test test_email failed -- Traceback (most recent call last): File "/usr/home/isidentical/cpython/Lib/test/test_email/test_email.py", line 3345, in test_make_msgid_default_domain self.assertTrue( AssertionError: False is not true test_email failed == Tests result: FAILURE == 1 test failed: test_email Total duration: 9.5 sec Tests result: FAILURE >>> socket.getfqdn() 'xxx.com' >>> socket.getfqdn() 'yyy.org' >>> socket.getfqdn() 'xxx.com' >>> socket.getfqdn() 'xxx.yyy.com' ---------- components: email messages: 362969 nosy: BTaskaya, barry, r.david.murray priority: normal severity: normal status: open title: make_msgid fail on FreeBSD 12.1-RELEASE-p1 with different domains versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 06:24:13 2020 From: report at bugs.python.org (Stefan Krah) Date: Sat, 29 Feb 2020 11:24:13 +0000 Subject: [New-bugs-announce] [issue39794] Add --without-decimal-contextvar option to use just threads in 
decimal Message-ID: <1582975453.71.0.297927511147.issue39794@roundup.psfhosted.org> New submission from Stefan Krah : #39776 has shown that it is hard to understand the interaction between ContextVars and threading in embedded scenarios. I want to understand the code again, so I'm adding back a compile time option to enable the thread local context that was present prior to f13f12d8d. ---------- messages: 362971 nosy: mark.dickinson, rhettinger, skrah priority: normal severity: normal status: open title: Add --without-decimal-contextvar option to use just threads in decimal _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 07:05:59 2020 From: report at bugs.python.org (Ivan Bykov) Date: Sat, 29 Feb 2020 12:05:59 +0000 Subject: [New-bugs-announce] [issue39795] multiprocessing creates duplicates of .pyc files Message-ID: <1582977959.48.0.59229562314.issue39795@roundup.psfhosted.org> New submission from Ivan Bykov : multiprocessing module creates duplicates of .pyc files in __pycache__ dirs because subprocess._args_from_interpreter_flags() ignores "-X pycache_prefix=PATH" option. ---------- components: Library (Lib) messages: 362974 nosy: ivb priority: normal severity: normal status: open title: multiprocessing creates duplicates of .pyc files type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 08:03:37 2020 From: report at bugs.python.org (hai shi) Date: Sat, 29 Feb 2020 13:03:37 +0000 Subject: [New-bugs-announce] [issue39796] warning extension module inited twice in python3.9 Message-ID: <1582981417.61.0.704723505761.issue39796@roundup.psfhosted.org> New submission from hai shi : In master branch, `_PyWarnings_Init()` have been called twice. 
``` (gdb) bt #0 _PyWarnings_Init () at Python/_warnings.c:1338 #1 0x0000000000525df3 in pycore_init_import_warnings (tstate=tstate at entry=0x9a19c0, sysmod=0x7ffff7f7e5f0) at Python/pylifecycle.c:680 ... Breakpoint 1, _PyWarnings_Init () at Python/_warnings.c:1338 1338 { (gdb) bt #0 _PyWarnings_Init () at Python/_warnings.c:1338 #1 0x0000000000511aac in _imp_create_builtin (module=, spec=0x7ffff7f2e7d0) at Python/import.c:1293 ``` but in 2.7 branch, '_PyWarnings_Init()': ``` Breakpoint 1, _PyWarnings_Init() at Python/_warnings.c:886 886 m = Py_InitModule3(MODULE_NAME, warnings_functions, warnings__doc__); Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.4.x86_64 (gdb) bt #0 _PyWarnings_Init () at Python/_warnings.c:886 #1 0x00000000004fc4db in Py_InitializeEx (install_sigs=1) at Python/pythonrun.c:242 #2 0x00000000004fcb03 in Py_Initialize () at Python/pythonrun.c:370 #3 0x00000000004154fd in Py_Main (argc=1, argv=0x7fffffffe428) at Modules/main.c:505 #4 0x00000000004145f0 in main (argc=1, argv=0x7fffffffe428) at ./Modules/python.c:23 ``` Why? because pylifecycle.c and _imp extension module will call `_PyWarnings_Init()`. isn't it? 
---------- components: Extension Modules messages: 362980 nosy: brett.cannon, shihai1991 priority: normal severity: normal status: open title: warning extension module inited twice in python3.9 type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 08:04:26 2020 From: report at bugs.python.org (Ama Aje My Fren) Date: Sat, 29 Feb 2020 13:04:26 +0000 Subject: [New-bugs-announce] [issue39797] shutdown() in socketserver.BaseServer should be in a different thread from serve_forever() Message-ID: <1582981466.61.0.0832265256128.issue39797@roundup.psfhosted.org> New submission from Ama Aje My Fren : When a subclass of socketserver.BaseServer is running after calling serve_forever() and needs to be shut down, it may be shut down by calling [shutdown()](https://docs.python.org/3/library/socketserver.html#socketserver.BaseServer.shutdown). The catch is that the shutdown() call must be run in a different thread than the one where serve_forever() was called, otherwise it will deadlock. This is documented in the [code](https://github.com/python/cpython/blob/3.8/Lib/socketserver.py#L244) but not in the documentation. It should be in the documentation as well, since it is not obvious.
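A minimal sketch of the pattern the docs should spell out — serve_forever() blocks in a worker thread while shutdown() is called from another thread (handler class and variable names here are illustrative, not from the report):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Echo a single chunk back to the client.
        self.request.sendall(self.request.recv(1024))

# Port 0 lets the OS pick a free port.
with socketserver.TCPServer(("127.0.0.1", 0), EchoHandler) as server:
    t = threading.Thread(target=server.serve_forever,
                         kwargs={"poll_interval": 0.05})
    t.start()

    host, port = server.server_address
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"ping")
        reply = sock.recv(1024)

    # Crucially, shutdown() runs here in the main thread, NOT in the thread
    # that is blocked inside serve_forever() -- calling it there deadlocks.
    server.shutdown()
    t.join()

print(reply)  # b'ping'
```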
---------- assignee: docs at python components: Documentation messages: 362981 nosy: amaajemyfren, docs at python priority: normal severity: normal status: open title: shutdown() in socketserver.BaseServer should be in a different thread from serve_forever() type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 08:48:37 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 29 Feb 2020 13:48:37 +0000 Subject: [New-bugs-announce] [issue39798] Update and Improve README.AIX Message-ID: <1582984117.38.0.490903429644.issue39798@roundup.psfhosted.org> New submission from Batuhan Taskaya : I was building Python on AIX but the old README.AIX file didn't help much. It would be super cool if someone who is familiar with AIX could update and improve that file with all new additions and current issues about AIX. ---------- components: Build messages: 362982 nosy: BTaskaya, David.Edelsohn priority: normal severity: normal status: open title: Update and Improve README.AIX _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 10:04:08 2020 From: report at bugs.python.org (Open Close) Date: Sat, 29 Feb 2020 15:04:08 +0000 Subject: [New-bugs-announce] [issue39799] Never return base's fragment from urljoin (urllib.parse) Message-ID: <1582988648.13.0.340894927922.issue39799@roundup.psfhosted.org> New submission from Open Close : According to RFC3986 5.2.2., the target fragment is always the reference fragment (T.fragment = R.fragment;). This is different from RFC1808 (4. and 5.2.), and it is not mentioned in the Modifications section of RFC2396 and RFC3986.
Current:

>>> import urllib.parse
>>> urllib.parse.urljoin('http://a/b#f', '')
'http://a/b#f'

Should return:

'http://a/b'

---
https://tools.ietf.org/html/rfc3986#section-5.2.2
https://tools.ietf.org/html/rfc1808.html#section-4
https://tools.ietf.org/html/rfc1808.html#section-5.2

---------- components: Library (Lib) messages: 362983 nosy: op368 priority: normal severity: normal status: open title: Never return base's fragment from urljoin (urllib.parse) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 10:27:53 2020 From: report at bugs.python.org (S Murthy) Date: Sat, 29 Feb 2020 15:27:53 +0000 Subject: [New-bugs-announce] [issue39800] Inconsistent/incomplete disassembly of methods vs method source code Message-ID: <1582990073.8.0.514436528319.issue39800@roundup.psfhosted.org> New submission from S Murthy : I am using the dis module to look at source (and logical) lines of code vs corresponding bytecode instructions. I am a bit confused by the output of dis.dis when disassembling a given method vs the corresponding source string, e.g.

>>> def f(x): return x**2
>>> dis.dis(f)
  1           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (2)
              4 BINARY_POWER
              6 RETURN_VALUE

This is the bytecode instruction block for the body only (not the method header), but dis.dis('def f(x): return x**2') produces the instructions for the header and body:

>>> dis.dis('def f(x): return x**2')
  1           0 LOAD_CONST               0 (<code object f at 0x..., file "<dis>", line 1>)
              2 LOAD_CONST               1 ('f')
              4 MAKE_FUNCTION            0
              6 STORE_NAME               0 (f)
              8 LOAD_CONST               2 (None)
             10 RETURN_VALUE

Disassembly of <code object f at 0x..., file "<dis>", line 1>:
  1           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (2)
              4 BINARY_POWER
              6 RETURN_VALUE

I have traced this difference to the different behaviour of dis.dis for methods vs source code strings:

def dis(x=None, *, file=None, depth=None):
    ...
    if hasattr(x, '__code__'):
        x = x.__code__
    ...
    # Perform the disassembly
    ...
    elif hasattr(x, 'co_code'):     # Code object
        _disassemble_recursive(x, file=file, depth=depth)
    ...
    elif isinstance(x, str):        # Source code
        _disassemble_str(x, file=file, depth=depth)
    ...

It appears as if the method body is contained in the code object produced from compiling the source (_try_compile(source, '<dis>', ...)) but not if the code object was obtained from f.__code__. Why is this the case, and would it not be better for dis.dis to behave consistently for methods and source strings of methods, and to generate/produce the complete instruction set, including any headers? The current behaviour of dis.dis means that Bytecode(x) is also affected, as iterating over the instructions gives you different instructions depending on whether x is a method or a source string of x:

>>> for instr in dis.Bytecode(f):
...     print(instr)
...
Instruction(opname='LOAD_FAST', opcode=124, arg=0, argval='x', argrepr='x', offset=0, starts_line=1, is_jump_target=False)
Instruction(opname='LOAD_CONST', opcode=100, arg=1, argval=2, argrepr='2', offset=2, starts_line=None, is_jump_target=False)
Instruction(opname='BINARY_POWER', opcode=19, arg=None, argval=None, argrepr='', offset=4, starts_line=None, is_jump_target=False)
Instruction(opname='RETURN_VALUE', opcode=83, arg=None, argval=None, argrepr='', offset=6, starts_line=None, is_jump_target=False)
>>> for instr in dis.Bytecode(inspect.getsource(f)):
...     print(instr)
...
Instruction(opname='LOAD_CONST', opcode=100, arg=0, argval=<code object f at 0x..., file "<dis>", line 1>, argrepr='<code object f at 0x..., file "<dis>", line 1>', offset=0, starts_line=1, is_jump_target=False)
Instruction(opname='LOAD_CONST', opcode=100, arg=1, argval='f', argrepr="'f'", offset=2, starts_line=None, is_jump_target=False)
Instruction(opname='MAKE_FUNCTION', opcode=132, arg=0, argval=0, argrepr='', offset=4, starts_line=None, is_jump_target=False)
Instruction(opname='STORE_NAME', opcode=90, arg=0, argval='f', argrepr='f', offset=6, starts_line=None, is_jump_target=False)
Instruction(opname='LOAD_CONST', opcode=100, arg=2, argval=None, argrepr='None', offset=8, starts_line=None, is_jump_target=False)
Instruction(opname='RETURN_VALUE', opcode=83, arg=None, argval=None, argrepr='', offset=10, starts_line=None, is_jump_target=False)

---------- components: Library (Lib) messages: 362985 nosy: smurthy priority: normal severity: normal status: open title: Inconsistent/incomplete disassembly of methods vs method source code type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 11:11:28 2020 From: report at bugs.python.org (Stefan Pochmann) Date: Sat, 29 Feb 2020 16:11:28 +0000 Subject: [New-bugs-announce] [issue39801] list.insert is slow due to manual memmove Message-ID: <1582992688.24.0.071544861703.issue39801@roundup.psfhosted.org> New submission from Stefan Pochmann : Using a list's insert function is much slower than using slice assignment:

> python -m timeit -n 100000 -s "a=[]" "a.insert(0,0)"
100000 loops, best of 5: 19.2 usec per loop
> python -m timeit -n 100000 -s "a=[]" "a[0:0]=[0]"
100000 loops, best of 5: 6.78 usec per loop

(Note that the list starts empty but grows to 100,000 elements.)
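The two command-line timings above can be reproduced in one self-contained script (the function names are mine, and absolute timings vary by machine, so no numbers are claimed here):

```python
import timeit

def grow_with_insert(n):
    a = []
    for i in range(n):
        a.insert(0, i)       # list.insert: shifts elements one by one
    return a

def grow_with_slice(n):
    a = []
    for i in range(n):
        a[0:0] = [i]         # slice assignment: shifts via memmove
    return a

# Both build exactly the same list...
assert grow_with_insert(1000) == grow_with_slice(1000) == list(range(999, -1, -1))

# ...but on CPython 3.8 the slice-assignment version tends to be clearly
# faster once the list gets large.
t_insert = timeit.timeit(lambda: grow_with_insert(5000), number=3)
t_slice = timeit.timeit(lambda: grow_with_slice(5000), number=3)
```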
At first I thought maybe it's the attribute lookup or function call overhead or so, but inserting near the end shows that that's negligible:

> python -m timeit -n 100000 -s "a=[]" "a.insert(-1,0)"
100000 loops, best of 5: 79.1 nsec per loop

I asked at StackOverflow and someone pointed out that list.insert uses a manual loop instead of memmove: https://stackoverflow.com/a/60466572/12671057 ---------- components: Interpreter Core messages: 362986 nosy: Stefan Pochmann priority: normal severity: normal status: open title: list.insert is slow due to manual memmove type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 13:48:54 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 29 Feb 2020 18:48:54 +0000 Subject: [New-bugs-announce] [issue39802] Ensure {get, set}_escdelay and {get, set}_tabsize only implemented when the extensions are activated Message-ID: <1583002134.9.0.406998676979.issue39802@roundup.psfhosted.org> New submission from Batuhan Taskaya : Python can't build curses on Solaris because the extensions aren't activated:

/export/home/isidentical/cpython/Modules/_cursesmodule.c: In function '_curses_get_escdelay_impl':
/export/home/isidentical/cpython/Modules/_cursesmodule.c:3272:28: error: 'ESCDELAY' undeclared (first use in this function)
     return PyLong_FromLong(ESCDELAY);
                            ^
/export/home/isidentical/cpython/Modules/_cursesmodule.c:3272:28: note: each undeclared identifier is reported only once for each function it appears in
/export/home/isidentical/cpython/Modules/_cursesmodule.c: In function '_curses_set_escdelay_impl':
/export/home/isidentical/cpython/Modules/_cursesmodule.c:3296:29: error: implicit declaration of function 'set_escdelay'
[-Werror=implicit-function-declaration]
     return PyCursesCheckERR(set_escdelay(ms), "set_escdelay");
                             ^
/export/home/isidentical/cpython/Modules/_cursesmodule.c: In function '_curses_set_tabsize_impl':
/export/home/isidentical/cpython/Modules/_cursesmodule.c:3335:29: error: implicit declaration of function 'set_tabsize' [-Werror=implicit-function-declaration]
     return PyCursesCheckERR(set_tabsize(size), "set_tabsize");

---------- components: Build messages: 363005 nosy: BTaskaya priority: normal severity: normal status: open title: Ensure {get,set}_escdelay and {get,set}_tabsize only implemented when the extensions are activated type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 14:19:39 2020 From: report at bugs.python.org (Andy Lester) Date: Sat, 29 Feb 2020 19:19:39 +0000 Subject: [New-bugs-announce] [issue39803] _PyLong_FormatAdvancedWriter has an unnecessary str Message-ID: <1583003979.36.0.40883960897.issue39803@roundup.psfhosted.org> New submission from Andy Lester : _PyLong_FormatAdvancedWriter has a PyObject *str that is never used. Remove it. ---------- components: Interpreter Core messages: 363006 nosy: petdance priority: normal severity: normal status: open title: _PyLong_FormatAdvancedWriter has an unnecessary str _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 21:17:30 2020 From: report at bugs.python.org (Paul Ganssle) Date: Sun, 01 Mar 2020 02:17:30 +0000 Subject: [New-bugs-announce] [issue39804] timezone constants in time module inaccurate with negative DST (e.g.
Ireland) Message-ID: <1583029050.31.0.658413045225.issue39804@roundup.psfhosted.org> New submission from Paul Ganssle : From a report on the dateutil tracker today, I found that `time.timezone` and `time.altzone` are not accurate in Ireland (at least on Linux, not tested on other platforms): https://github.com/dateutil/dateutil/issues/1009 Europe/Dublin in the modern era has the exact same rules as Europe/London, but the values for `isdst` are switched, so for Ireland GMT is the "DST" zone with a DST offset of -1h, and IST is the standard zone, while London has GMT as the standard zone and BST as a DST zone of +1h. The documentation for the timezone constants here pretty clearly says that the DST zone should be the *second* value in tzname, and that altzone should be its offset: https://docs.python.org/3/library/time.html#timezone-constants But when setting my TZ variable to Europe/Dublin I get the same thing as for Europe/London:

$ TZ=Europe/Dublin python -c \
    "from time import *; print(timezone); print(altzone); print(tzname)"
0
-3600
('GMT', 'IST')
$ TZ=Europe/London python -c \
    "from time import *; print(timezone); print(altzone); print(tzname)"
0
-3600
('GMT', 'BST')

This would be less of a problem if localtime() were *also* getting isdst wrong in the same way, but it's not:

$ TZ=Europe/London python -c \
    "from time import *; print(localtime())"
time.struct_time(tm_year=2020, tm_mon=3, tm_mday=1, tm_hour=2, tm_min=5, tm_sec=6, tm_wday=6, tm_yday=61, tm_isdst=0)
$ TZ=Europe/Dublin python -c \
    "from time import *; print(localtime())"
time.struct_time(tm_year=2020, tm_mon=3, tm_mday=1, tm_hour=2, tm_min=5, tm_sec=18, tm_wday=6, tm_yday=61, tm_isdst=1)

So now it seems that there's no way to determine what the correct timezone offset and name are based on isdst. I'm not entirely sure if this is an issue in our code or a problem with the system APIs we're calling.
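For reference, the recipe these constants are supposed to support — and which the report shows picks the wrong offset for Europe/Dublin — looks roughly like this (a sketch; the helper name is mine, not from the time module):

```python
import time

def utc_offset_seconds():
    # The standard time-module recipe: choose altzone vs timezone based on
    # the tm_isdst flag of the current local time. As described above, for
    # zones with negative DST (Europe/Dublin) tzname/altzone are reported
    # as if the zone were London while tm_isdst follows the real tzdata,
    # so the two disagree and this can return the wrong offset.
    isdst = time.localtime().tm_isdst
    return -(time.altzone if isdst > 0 else time.timezone)

offset = utc_offset_seconds()
```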
This code looks like a *very* dicey heuristic (I expect it would also have some problems with Morocco in 2017, even before they were using a type of negative DST, since they used DST but turned it off from May 21st to July 2nd): https://github.com/python/cpython/blob/0b0d29fce568e61e0d7d9f4a362e6dbf1e7fb80a/Modules/timemodule.c#L1612 One option might be to deprecate these things as sort of very leaky abstractions *anyway* and be done with it, but it might be nice to fix it if we can. ---------- messages: 363037 nosy: belopolsky, lemburg, p-ganssle priority: normal severity: normal status: open title: timezone constants in time module inaccurate with negative DST (e.g. Ireland) type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 29 22:42:42 2020 From: report at bugs.python.org (Steven D'Aprano) Date: Sun, 01 Mar 2020 03:42:42 +0000 Subject: [New-bugs-announce] [issue39805] Copying functions doesn't actually copy them Message-ID: <1583034162.79.0.0576327457429.issue39805@roundup.psfhosted.org> New submission from Steven D'Aprano : Function objects are mutable, so I expected that a copy of a function should be an actual independent copy. But it isn't.

py> from copy import copy
py> a = lambda: 1
py> b = copy(a)
py> a is b
True

This burned me when I modified the copy and the original changed too:

py> a.attr = 27  # add extra data
py> b.attr = 42
py> a.attr
42

`deepcopy` doesn't copy the function either. ---------- components: Library (Lib) messages: 363039 nosy: steven.daprano priority: normal severity: normal status: open title: Copying functions doesn't actually copy them type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________
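For comparison, a genuinely independent copy can be assembled by hand from the function's parts; a minimal sketch (the `copy_function` helper is hypothetical, not part of the stdlib):

```python
import copy
import functools
import types

def copy_function(f):
    """Build a new function object sharing f's code, globals and closure."""
    g = types.FunctionType(f.__code__, f.__globals__, name=f.__name__,
                           argdefs=f.__defaults__, closure=f.__closure__)
    functools.update_wrapper(g, f)   # copies __doc__, __dict__, etc.
    g.__kwdefaults__ = copy.copy(f.__kwdefaults__)
    return g

a = lambda: 1
assert copy.copy(a) is a    # the behaviour reported above

b = copy_function(a)
a.attr = 27                 # attributes set on one copy...
b.attr = 42                 # ...no longer leak to the other
print(a is b, a(), b(), a.attr, b.attr)  # False 1 1 27 42
```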