From report at bugs.python.org Wed Jan 1 02:21:44 2020 From: report at bugs.python.org (Dominic Mayers) Date: Wed, 01 Jan 2020 07:21:44 +0000 Subject: [New-bugs-announce] [issue39177] In tkinter, simple dialogs, askstrings, etc. with flexible coordinates and no viewable parent. Message-ID: <1577863304.66.0.670645943584.issue39177@roundup.psfhosted.org> New submission from Dominic Mayers : Currently, it's not possible to center or change the coordinates in any way for an askstring, askfloat or askinteger dialog in simpledialog.py. One can see this by looking at the code:

    if parent.winfo_viewable():
        self.transient(parent)

    if title:
        self.title(title)

    self.parent = parent
    self.result = None

    body = Frame(self)
    self.initial_focus = self.body(body)
    body.pack(padx=5, pady=5)

    self.buttonbox()

    if not self.initial_focus:
        self.initial_focus = self

    self.protocol("WM_DELETE_WINDOW", self.cancel)

    if self.parent is not None:
        self.geometry("+%d+%d" % (parent.winfo_rootx()+50,
                                  parent.winfo_rooty()+50))

Here self.parent is never None, because the first statement would raise a runtime error if parent were None. So, the geometry always depends on the parent. Moreover, if the parent is not viewable, `parent.winfo_rootx()` and `parent.winfo_rooty()` are both 0. So, we can only set the coordinates of a simple dialog using a viewable parent. This somewhat contradicts the "simple" in "simpledialog". Consider, for example, an application that, like git, does not have a root window, but which, unlike git, needs to create simple dialogs on some occasions. I am aware that a messagebox does not use the code presented above, but a messagebox is not a simple dialog - it's only a message. I am also aware of the class SimpleDialog, which also does not use this code, but it only works with buttons. It's not like askstring, askinteger and askfloat. ---------- messages: 359147 nosy: dominic108 priority: normal severity: normal status: open title: In tkinter, simple dialogs, askstrings, etc. with flexible coordinates and no viewable parent. type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 09:00:21 2020 From: report at bugs.python.org (Adi) Date: Wed, 01 Jan 2020 14:00:21 +0000 Subject: [New-bugs-announce] [issue39178] Should we make dict not accept a sequence of sets? Message-ID: <1577887221.9.0.378714052511.issue39178@roundup.psfhosted.org> New submission from Adi : While writing this SO answer (https://stackoverflow.com/a/59552970/1453822) I came to think: should dict preemptively make sure it doesn't accept a sequence of sets (given that it may lead to wrong output in the best case, and fail miserably in the worst case)? ---------- messages: 359155 nosy: DeepSpace priority: normal severity: normal status: open title: Should we make dict not accept a sequence of sets? type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 09:22:26 2020 From: report at bugs.python.org (seeking.that) Date: Wed, 01 Jan 2020 14:22:26 +0000 Subject: [New-bugs-announce] [issue39179] pandas tz_convert() seems to report incorrect date conversion Message-ID: <1577888546.9.0.841265032585.issue39179@roundup.psfhosted.org> New submission from seeking.that : Hi, the pandas bdate_range() / tz_convert() combination seems to have problems, as it prints the information incorrectly. Please clarify.
The python script and the output is shown below: Two issues that can be highlighted here are: 1) Setting the timezone correctly to US/Pacific prints the dates correctly. But the conversion causes the date calculations to be incorrect. 2. Minor issue just related to display. Though the API hasn't changed, the last call has more information hh-mm-ss-xx-xx which is not there for the rest of the calls with same format signature. Thanks SK import pandas as pd c5 = pd.bdate_range(start='1/1/2018', end = '1/31/2018') print(c5) c5 = c5.tz_localize('UTC') print(c5) c5 = c5.tz_convert('US/Pacific') print(c5) c6 = pd.bdate_range(start='1/1/2018', end = '1/31/2018') print(c6) c6 = c6.tz_localize('US/Pacific') print(c6) ------ DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-08', '2018-01-09', '2018-01-10', '2018-01-11', '2018-01-12', '2018-01-15', '2018-01-16', '2018-01-17', '2018-01-18', '2018-01-19', '2018-01-22', '2018-01-23', '2018-01-24', '2018-01-25', '2018-01-26', '2018-01-29', '2018-01-30', '2018-01-31'], dtype='datetime64[ns]', freq='B') DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-08', '2018-01-09', '2018-01-10', '2018-01-11', '2018-01-12', '2018-01-15', '2018-01-16', '2018-01-17', '2018-01-18', '2018-01-19', '2018-01-22', '2018-01-23', '2018-01-24', '2018-01-25', '2018-01-26', '2018-01-29', '2018-01-30', '2018-01-31'], dtype='datetime64[ns, UTC]', freq='B') DatetimeIndex(['2017-12-31', '2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-07', '2018-01-08', '2018-01-09', '2018-01-10', '2018-01-11', '2018-01-14', '2018-01-15', '2018-01-16', '2018-01-17', '2018-01-18', '2018-01-21', '2018-01-22', '2018-01-23', '2018-01-24', '2018-01-25', '2018-01-28', '2018-01-29', '2018-01-30'], dtype='datetime64[ns, US/Pacific]', freq='B') DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-08', '2018-01-09', '2018-01-10', '2018-01-11', '2018-01-12', '2018-01-15', '2018-01-16', '2018-01-17', '2018-01-18', '2018-01-19', '2018-01-22', '2018-01-23', '2018-01-24', '2018-01-25', '2018-01-26', '2018-01-29', '2018-01-30', '2018-01-31'], dtype='datetime64[ns]', freq='B') DatetimeIndex(['2018-01-01 00:00:00-08:00', '2018-01-02 00:00:00-08:00', '2018-01-03 00:00:00-08:00', '2018-01-04 00:00:00-08:00', '2018-01-05 00:00:00-08:00', '2018-01-08 00:00:00-08:00', '2018-01-09 00:00:00-08:00', '2018-01-10 00:00:00-08:00', '2018-01-11 00:00:00-08:00', '2018-01-12 00:00:00-08:00', '2018-01-15 00:00:00-08:00', '2018-01-16 00:00:00-08:00', '2018-01-17 00:00:00-08:00', '2018-01-18 00:00:00-08:00', '2018-01-19 00:00:00-08:00', '2018-01-22 00:00:00-08:00', '2018-01-23 00:00:00-08:00', '2018-01-24 00:00:00-08:00', '2018-01-25 00:00:00-08:00', '2018-01-26 00:00:00-08:00', '2018-01-29 00:00:00-08:00', '2018-01-30 00:00:00-08:00', '2018-01-31 00:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq='B') ---------- messages: 359156 nosy: Seeking.that priority: normal severity: normal status: open title: pandas tz_convert() seems to report incorrect date conversion versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 09:29:09 2020 From: report at bugs.python.org (Khalid Mammadov) Date: Wed, 01 Jan 2020 14:29:09 +0000 Subject: [New-bugs-announce] [issue39180] Missing getlines func documentation from linecache module Message-ID: <1577888949.55.0.96578961298.issue39180@roundup.psfhosted.org> 
Change by Khalid Mammadov : ---------- assignee: docs at python components: Documentation nosy: docs at python, khalidmammadov priority: normal severity: normal status: open title: Missing getlines func documentation from linecache module versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 11:53:16 2020 From: report at bugs.python.org (jack1142) Date: Wed, 01 Jan 2020 16:53:16 +0000 Subject: [New-bugs-announce] [issue39181] Add `os.makedirs()` as `Path.mkdir()` equivalent in correspondence table Message-ID: <1577897596.74.0.989558678168.issue39181@roundup.psfhosted.org> New submission from jack1142 : https://github.com/python/cpython/blob/3.7/Doc/library/pathlib.rst#correspondence-to-tools-in-the-modos-module The table mapping `os` functions to `Path`'s equivalents is missing `os.makedirs` in the row with `Path.mkdir()` and they are both equivalent when `Path.mkdir()` is used with `parents=True` kwarg. I can make a PR once this gets triaged, I'm not sure if this doc improvement should only be made to master branch or also 3.7/3.8 so let me know about that too, thanks. ---------- assignee: docs at python components: Documentation messages: 359162 nosy: docs at python, jack1142 priority: normal severity: normal status: open title: Add `os.makedirs()` as `Path.mkdir()` equivalent in correspondence table type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 12:23:00 2020 From: report at bugs.python.org (Dutcho) Date: Wed, 01 Jan 2020 17:23:00 +0000 Subject: [New-bugs-announce] [issue39182] sys.addaudithook(hook) loops indefinitely on mismatch for hook Message-ID: <1577899380.86.0.650724982572.issue39182@roundup.psfhosted.org> New submission from Dutcho : When hook is not a compatible callable, addaudithook() will loop forever. At the minimum, a check for being callable should be executed. Preferably, a non-compatible (i.e. signature != [[str, tuple], Any]) hook callable should also be detected. >py Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.addaudithook(0) error=10 Exception ignored in audit hook: TypeError: 'int' object is not callable File "", line 0 SyntaxError: unknown parsing error error=10 Exception ignored in audit hook: TypeError: 'int' object is not callable File "", line 0 SyntaxError: unknown parsing error error=10 Exception ignored in audit hook: TypeError: 'int' object is not callable File "", line 0 SyntaxError: unknown parsing error ... etc. ... 
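For comparison, a hook with the expected callable shape is accepted and does not trigger the loop; a minimal sketch (the hook body here is illustrative, not taken from the report):

```
import sys

def my_hook(event, args):
    # Audit hooks receive the event name (a str) and a tuple of arguments.
    if event == "os.system":
        print("os.system called with", args)

sys.addaudithook(my_hook)  # accepted; no repeated "Exception ignored" output
```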
---------- messages: 359164 nosy: Dutcho priority: normal severity: normal status: open title: sys.addaudithook(hook) loops indefinitely on mismatch for hook type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 17:06:36 2020 From: report at bugs.python.org (Rafael Fontenelle) Date: Wed, 01 Jan 2020 22:06:36 +0000 Subject: [New-bugs-announce] [issue39183] Text divided in two strings due to wrong formatting Message-ID: <1577916396.48.0.991932601877.issue39183@roundup.psfhosted.org> New submission from Rafael Fontenelle : When translating Python docs, the 'library/ensurepip' module [1] has an bullet list where one item's text is split in two due to a simple extra whitespace character. This causes two separated translations strings "``--default-pip``: if a \"default pip\" installation is requested, the" and "``pip`` script will be installed in addition to the two regular scripts." , which clearly should be a single one. Its effect can be seen also in the resulting documentation, where these strings are shown in different lines even when there are enough space in the browser window to should it all. Solution is to remove the extra space and formatting is good again. [1] https://docs.python.org/3/library/ensurepip.html ---------- assignee: docs at python components: Documentation messages: 359170 nosy: docs at python, rffontenelle priority: normal severity: normal status: open title: Text divided in two strings due to wrong formatting type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 18:10:03 2020 From: report at bugs.python.org (Saiyang Gou) Date: Wed, 01 Jan 2020 23:10:03 +0000 Subject: [New-bugs-announce] [issue39184] Many command execution functions are not raising auditing events Message-ID: <1577920203.73.0.648457466575.issue39184@roundup.psfhosted.org> New submission from Saiyang Gou : Similar to `os.system` (which is already raising auditing event), the following functions are also capable of command execution, so they also need auditing: - os.execl - os.execle - os.execlp - os.execlpe - os.execv - os.execve - os.execvp - os.execvpe - os.posix_spawn - os.posix_spawnp - os.spawnl - os.spawnle - os.spawnlp - os.spawnlpe - os.spawnv - os.spawnve - os.spawnvp - os.spawnvpe - os.startfile - pty.spawn By the way, since `os.listdir`, `shutil.copytree` and `shutil.rmtree` are already being audited, is it necessary to audit file operations in the `os` module like `os.remove`? ---------- messages: 359177 nosy: Saiyang Gou priority: normal severity: normal status: open title: Many command execution functions are not raising auditing events type: security versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 20:09:37 2020 From: report at bugs.python.org (anthony shaw) Date: Thu, 02 Jan 2020 01:09:37 +0000 Subject: [New-bugs-announce] [issue39185] Add quiet and detailed verbosity levels to build.bat Message-ID: <1577927377.26.0.421756435825.issue39185@roundup.psfhosted.org> New submission from anthony shaw : The build.bat script (windows build) has a flag for verbose, which sets the msbuild verbosity level to normal. The default level is minimal. The quiet and detailed levels would also be useful for development. 
---------- components: Windows messages: 359178 nosy: anthonypjshaw, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add quiet and detailed verbosity levels to build.bat type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 21:23:44 2020 From: report at bugs.python.org (anthony shaw) Date: Thu, 02 Jan 2020 02:23:44 +0000 Subject: [New-bugs-announce] [issue39186] Windows installer instructions refer to mercurial Message-ID: <1577931824.82.0.916015771448.issue39186@roundup.psfhosted.org> New submission from anthony shaw : Very minor, but the instructions in Tools/msi/readme.txt tell the user to ensure hg.exe is in PATH, but the scripts use Git. ---------- components: Windows messages: 359179 nosy: anthonypjshaw, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows installer instructions refer to mercurial _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 1 23:14:05 2020 From: report at bugs.python.org (Andre Burgaud) Date: Thu, 02 Jan 2020 04:14:05 +0000 Subject: [New-bugs-announce] [issue39187] urllib.robotparser does not respect the longest match for the rule Message-ID: <1577938445.91.0.743054392693.issue39187@roundup.psfhosted.org> New submission from Andre Burgaud : As per the current Robots Exclusion Protocol internet draft, https://tools.ietf.org/html/draft-koster-rep-00#section-3.2. a robot should apply the rules respecting the longest match. urllib.robotparser relies on the order of the rules in the robots.txt file. Here is the section in the specs: =================== 3.2. Longest Match The following example shows that in the case of a two rules, the longest one MUST be used for matching. In the following case, /example/page/disallowed.gif MUST be used for the URI example.com/example/page/disallow.gif . User-Agent : foobot Allow : /example/page/ Disallow : /example/page/disallowed.gif =================== I'm attaching a simple test file "test_robot.py" ---------- components: Library (Lib) files: test_robot.py messages: 359181 nosy: gallicrooster priority: normal severity: normal status: open title: urllib.robotparser does not respect the longest match for the rule type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48815/test_robot.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 05:44:44 2020 From: report at bugs.python.org (Kevin Schlossser) Date: Thu, 02 Jan 2020 10:44:44 +0000 Subject: [New-bugs-announce] [issue39188] recent change when passing a Union to a function Message-ID: <1577961884.08.0.520696540473.issue39188@roundup.psfhosted.org> New submission from Kevin Schlossser : OK so There was a change made to fix issue 26628. Something was forgotten about.. On Windows there is the VARIANT Union which gets used all over the Windows API. This change is going to really break a lot of peoples code and there are no code examples of what needs to be done to fix the now broken ctypes, what needs to be done instead of passing a Union? what new structure has been made that Windows is going to se as a Union? That is a huge change to make with no kind of notice that it was going to be done. 
Now I know it is publicly visible on your issue tracker, but come on, the issue was made 4 years ago and then all of a sudden, out of nowhere, the software is broken. Now, if that issue was a major issue and it was causing all kinds of grief, I would think that it A. would have been fixed sooner... and B. there would have been more than a handful of people for whom the original "bug" caused a problem. I am wondering if maybe this bug is a *nix issue. There are simply way too many programs out there running on Windows where Unions are being passed to functions all the time, and no problems are occurring. If there were problems with it on Windows you would have thousands of reports of crashes from this problem. A change like that should be posted on the Python website before it gets made so that all possible repercussions can be looked at. ---------- components: ctypes messages: 359187 nosy: Kevin Schlossser priority: normal severity: normal status: open title: recent change when passing a Union to a function type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 05:55:28 2020 From: report at bugs.python.org (Bahram Aghaei) Date: Thu, 02 Jan 2020 10:55:28 +0000 Subject: [New-bugs-announce] [issue39189] Use io.DEFAULT_BUFFER_SIZE for filecmp BUFSIZE variable Message-ID: <1577962528.13.0.543315983881.issue39189@roundup.psfhosted.org> New submission from Bahram Aghaei : Hello there, I was reading the `filecmp` module and I noticed that it defines BUFSIZE manually. I think it's better to stick to the io.DEFAULT_BUFFER_SIZE variable, for both consistency and ease of maintenance in the future. Cheers, ---------- components: Library (Lib) messages: 359188 nosy: Bahram Aghaei priority: normal pull_requests: 17229 severity: normal status: open title: Use io.DEFAULT_BUFFER_SIZE for filecmp BUFSIZE variable type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 07:26:53 2020 From: report at bugs.python.org (=?utf-8?q?Sindri_Gu=C3=B0mundsson?=) Date: Thu, 02 Jan 2020 12:26:53 +0000 Subject: [New-bugs-announce] [issue39190] _result_handler dies on raised exceptions [multiprocessing] Message-ID: <1577968013.94.0.930637470909.issue39190@roundup.psfhosted.org> New submission from Sindri Guðmundsson : Raising an Exception in a callback handler of apply_async and/or map_async will kill the _result_handler thread. This causes unexpected behavior, as all subsequent callbacks won't be called and, worse, pool.join will deadlock. The documentation states that callbacks should return immediately, but it does not warn the user against raising an exception. Attached are steps to reproduce.
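The attached test_pool_error.py is not reproduced in this digest; a minimal sketch of the failure mode, assuming any exception type raised in the callback is enough to trigger it, could look like this:

```
import multiprocessing
import time

def work(x):
    return x * 2

def bad_callback(result):
    # Raising here kills the pool's internal _result_handler thread,
    # so later callbacks never run and join() below can hang.
    raise ValueError("boom")

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.apply_async(work, (1,), callback=bad_callback)
    time.sleep(1)   # give the result handler time to reach the callback
    pool.close()
    pool.join()     # reported to deadlock once the callback has raised
```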
---------- components: Library (Lib) files: test_pool_error.py messages: 359194 nosy: Sindri Guðmundsson priority: normal severity: normal status: open title: _result_handler dies on raised exceptions [multiprocessing] type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48818/test_pool_error.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 08:34:18 2020 From: report at bugs.python.org (Kaveshnikov Denis) Date: Thu, 02 Jan 2020 13:34:18 +0000 Subject: [New-bugs-announce] [issue39191] Coroutine is awaited despite an exception in run_until_complete() Message-ID: <1577972058.72.0.902060082647.issue39191@roundup.psfhosted.org> New submission from Kaveshnikov Denis : Hi, I found that if run_until_complete() is called from inside a task while the event loop is running, the coroutine passed to run_until_complete() will still be executed despite the exception raised from run_until_complete(). It seems to me it would be better to cancel such a coroutine, or just do nothing with it. ---------- components: asyncio files: test_event_loop.py messages: 359196 nosy: asvetlov, dkaveshnikov, yselivanov priority: normal severity: normal status: open title: Coroutine is awaited despite an exception in run_until_complete() type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48819/test_event_loop.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 08:57:29 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 02 Jan 2020 13:57:29 +0000 Subject: [New-bugs-announce] [issue39192] relationlist module Message-ID: <1577973449.68.0.593024649325.issue39192@roundup.psfhosted.org> New submission from wyz23x2 : I've written a handy tool, RelationList. This type can easily create relations between elements in lists. ---------- components: Demos and Tools files: relationlist.py messages: 359197 nosy: asvetlov, dkaveshnikov, wyz23x2, yselivanov priority: normal severity: normal status: open title: relationlist module type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48820/relationlist.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 09:55:57 2020 From: report at bugs.python.org (ggbang) Date: Thu, 02 Jan 2020 14:55:57 +0000 Subject: [New-bugs-announce] [issue39193] Out-of-bound write in ceval.c:_PyEval_EvalFrameDefault Message-ID: <1577976957.26.0.433242127529.issue39193@roundup.psfhosted.org> New submission from ggbang : python version: Python 3.9.0a2 (default, Dec 25 2019, 20:42:47) [GCC 7.5.0] on linux crash log:
``` bash
──────────────────────────────────────────────────────────────────── code:x86:64 ────
   0x5555555afb88 <_PyEval_EvalFrameDefault+4056> mov rdx, QWORD PTR [rsi+rdx*8+0x18]
   0x5555555afb8d <_PyEval_EvalFrameDefault+4061> add QWORD PTR [rdx], 0x1
   0x5555555afb91 <_PyEval_EvalFrameDefault+4065> test eax, eax
 → 0x5555555afb93 <_PyEval_EvalFrameDefault+4067> mov QWORD PTR [rcx], rdx
   0x5555555afb96 <_PyEval_EvalFrameDefault+4070> jne 0x5555555af226 <_PyEval_EvalFrameDefault+1654>
   0x5555555afb9c <_PyEval_EvalFrameDefault+4076> mov rdx, r12
   0x5555555afb9f <_PyEval_EvalFrameDefault+4079> sub rdx, QWORD PTR [rsp+0x8]
   0x5555555afba4 <_PyEval_EvalFrameDefault+4084> add r12, 0x2
   0x5555555afba8 <_PyEval_EvalFrameDefault+4088> mov DWORD PTR [rbx+0x68], edx
──────────────────────────────────────────────────────────── source:Python/ceval.c+1352 ────
   1347
   1348     case TARGET(LOAD_CONST): {
   1349         PREDICTED(LOAD_CONST);
   1350         PyObject *value = GETITEM(consts, oparg);
   1351         Py_INCREF(value);
 → 1352         PUSH(value);
   1353         FAST_DISPATCH();
   1354     }
   1355
   1356     case TARGET(STORE_FAST): {
   1357         PREDICTED(STORE_FAST);
───────────────────────────────────────────────────────────────────────── threads ────
[#0] Id 1, Name: "python", stopped, reason: SIGSEGV
─────────────────────────────────────────────────────────────────────────── trace ────
[#0] 0x5555555afb93 → _PyEval_EvalFrameDefault(f=, throwflag=)
[#1] 0x55555568ad59 → _PyEval_EvalFrame(tstate=0x555555b237b0, throwflag=0x0, f=0x7ffff7eee440)
[#2] 0x55555568ad59 → _PyEval_EvalCode(tstate=0x555555b237b0, _co=0x7ffff7ebdd40, globals=0x7ffff7f12480, locals=0x7ffff7f12480, args=0x0, argcount=0x0, kwnames=0x0, kwargs=0x0, kwcount=0x0, kwstep=0x2, defs=0x0, defcount=0x0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
[#3] 0x55555568b0c6 → _PyEval_EvalCodeWithName(qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0x0, defs=0x0, kwstep=0x2, kwcount=0x0, kwargs=0x0, kwnames=0x0, argcount=0x0, args=0x0, locals=0x7ffff7f12480, globals=0x7ffff7f12480, _co=0x7ffff7ebdd40)
[#4] 0x55555568b0c6 → PyEval_EvalCodeEx(closure=0x0, kwdefs=0x0, defcount=0x0, defs=0x0, kwcount=0x0, kws=0x0, argcount=0x0, args=0x0, locals=0x7ffff7f12480, globals=0x7ffff7f12480, _co=0x7ffff7ebdd40)
[#5] 0x55555568b0c6 → PyEval_EvalCode(co=0x7ffff7ebdd40, globals=0x7ffff7f12480, locals=0x7ffff7f12480)
[#6] 0x5555556d6f1e → run_eval_code_obj(locals=0x7ffff7f12480, globals=0x7ffff7f12480, co=0x7ffff7ebdd40)
[#7] 0x5555556d6f1e → run_pyc_file(filename=, flags=0x7fffffffdc68, locals=0x7ffff7f12480, globals=0x7ffff7f12480, fp=0x555555b85360)
[#8] 0x5555556d6f1e → PyRun_SimpleFileExFlags(flags=, closeit=, filename=, fp=)
[#9] 0x5555556d6f1e → PyRun_SimpleFileEx(f=, p=, c=)
───────────────────────────────────────────────────────────────────────────────────────
_PyEval_EvalFrameDefault (f=, throwflag=) at Python/ceval.c:1352
1352        PUSH(value);
gef➤ exploitable
Description: Access violation on destination operand
Short description: DestAv (8/22)
Hash: f01ce56ffe2792b45d9959e69a1ae15d.6dcf66201de3c2adc2e25e04dbdb55e8
Exploitability Classification: EXPLOITABLE
Explanation: The target crashed on an access violation at an address matching the destination operand of the instruction. This likely indicates a write access violation, which means the attacker may control the write address and/or value.
Other tags: AccessViolation (21/22) ``` ---------- components: Interpreter Core files: c1 messages: 359199 nosy: ggbang priority: normal severity: normal status: open title: Out-of-bound write in ceval.c:_PyEval_EvalFrameDefault type: security versions: Python 3.9 Added file: https://bugs.python.org/file48822/c1 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 10:09:12 2020 From: report at bugs.python.org (Jonathan Martin) Date: Thu, 02 Jan 2020 15:09:12 +0000 Subject: [New-bugs-announce] [issue39194] asyncio.open_connection returns a closed client when server fails to authenticate client certificate Message-ID: <1577977752.03.0.200427966409.issue39194@roundup.psfhosted.org> New submission from Jonathan Martin : I'm trying to use SSL to validate clients connecting a an asyncio socket server by specifying CERT_REQUIRED and giving a `cafile` containing the client certificate to allow. client and server code attached. Certificates are generated with: openssl req -x509 -newkey rsa:2048 -keyout client.key -nodes -out client.cert -sha256 -days 100 openssl req -x509 -newkey rsa:2048 -keyout server.key -nodes -out server.cert -sha256 -days 100 Observed behavior with python 3.7.5 and openSSL 1.1.1d ------------------------------------------------------ When the client tries to connect without specifying a certificate, the call to asyncio.open_connection succeeds, but the received socket is closed right away, or to be more exact an EOF is received. Observed behavior with python 3.7.4 and openSSL 1.0.2t ------------------------------------------------------ When the client tries to connect without specifying a certificate, the call to asyncio.open_connection fails. Expected behavior ----------------- I'm not sure which behavior is to be considered the expected one, although I would prefer to connection to fail directly instead of returning a dead client. Wouldn't it be better to have only one behavior? Note that when disabling TLSv1.3, the connection does fail to open: ctx.maximum_version = ssl.TLSVersion.TLSv1_2 This can be reproduces on all latest releases of 3.6, 3.7, and 3.8 (which all have openssl 1.1.1d in my case) ---------- assignee: christian.heimes components: SSL, asyncio files: example_code.py messages: 359200 nosy: Jonathan Martin, asvetlov, christian.heimes, yselivanov priority: normal severity: normal status: open title: asyncio.open_connection returns a closed client when server fails to authenticate client certificate type: behavior versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48824/example_code.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 13:59:30 2020 From: report at bugs.python.org (Recursing) Date: Thu, 02 Jan 2020 18:59:30 +0000 Subject: [New-bugs-announce] [issue39195] re._compile should check if the argument is a compiled pattern before checking cache and flags Message-ID: <1577991570.36.0.292103009672.issue39195@roundup.psfhosted.org> New submission from Recursing : In the re module, re._compile gets called when using most re methods. 
In my use case (which I think is not rare) I have a small number of compiled patterns that I have to match against a large number of short strings profiling showed that half of the total runtime was still spent in re._compile, checking for the type of the flags and trying to get the pattern in a cache Example code that exhibits this behavior: import re pattern = re.compile("spam") string = "Monty pythons" for _ in range(1000000): re.search(pattern, string) ---------- components: Library (Lib) messages: 359210 nosy: Recursing priority: normal severity: normal status: open title: re._compile should check if the argument is a compiled pattern before checking cache and flags type: performance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 14:10:58 2020 From: report at bugs.python.org (Joe Gordon) Date: Thu, 02 Jan 2020 19:10:58 +0000 Subject: [New-bugs-announce] [issue39196] json fails to encode dictionary view types Message-ID: <1577992258.0.0.878269321128.issue39196@roundup.psfhosted.org> New submission from Joe Gordon : Python 3 fails to encode dictionary view objects. Assuming this is an expected behavior, what is the thinking behind it? I was unable to find any documentation around this. > import json; json.dumps({}.values()) "TypeError: Object of type dict_values is not JSON serializable" ---------- components: Library (Lib) messages: 359212 nosy: jogo priority: normal severity: normal status: open title: json fails to encode dictionary view types type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 15:07:45 2020 From: report at bugs.python.org (signing_agreement) Date: Thu, 02 Jan 2020 20:07:45 +0000 Subject: [New-bugs-announce] [issue39197] Support the title and description arguments for mutually exclusive argument groups Message-ID: <1577995665.78.0.21290537011.issue39197@roundup.psfhosted.org> New submission from signing_agreement : add_mutually_exclusive_group has one flag, required=False. Yet, regardless of specifying required=True, the help message calls the group "optional arguments". It would be nice to be able to add title and description for mutually exclusive groups. Right now, programmers can only do changes via parser._action_groups[1].title = 'mutually exclusive required arguments' ---------- components: Library (Lib) messages: 359217 nosy: signing_agreement priority: normal severity: normal status: open title: Support the title and description arguments for mutually exclusive argument groups type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 2 15:10:20 2020 From: report at bugs.python.org (Derek Brown) Date: Thu, 02 Jan 2020 20:10:20 +0000 Subject: [New-bugs-announce] [issue39198] Lock may not be released in Logger.isEnabledFor Message-ID: <1577995820.85.0.483829284775.issue39198@roundup.psfhosted.org> New submission from Derek Brown : If an exception were to be thrown in a particular block of code (say, by asyncio timeouts or stopit) within the `isEnabledFor` function of `logging`, the `logging` global lock may not be released appropriately, resulting in deadlock. 
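From a reading of Lib/logging/__init__.py in those versions, Logger.isEnabledFor() acquires the logging module lock around its level-cache update without a try/finally, so an exception injected at that point (for example by an asyncio timeout or stopit) leaves the lock held. A runnable sketch of the resulting symptom, using logging's private module-lock helpers purely for illustration:

```
import logging
import threading

logging._acquireLock()   # stand-in for code dying between acquire and release

# Any other thread that needs the module lock (e.g. to create a logger) blocks:
t = threading.Thread(target=logging.getLogger, args=("other",))
t.start()
t.join(timeout=1)
print("worker still blocked:", t.is_alive())   # True while the lock is held

logging._releaseLock()   # undo the simulated leak so the script can exit
t.join()
```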
---------- components: Library (Lib) messages: 359219 nosy: derektbrown priority: normal severity: normal status: open title: Lock may not be released in Logger.isEnabledFor type: crash versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 08:05:21 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 03 Jan 2020 13:05:21 +0000 Subject: [New-bugs-announce] [issue39199] Improve the AST documentation Message-ID: <1578056721.05.0.299258437761.issue39199@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The AST docs need some love as they can be a bit obscure to someone new to the module. Improvements to be considered in this issue: * Document all available nodes (as of 3.8 and not deprecated ones). This helps to know what classes to consider when implementing methods for the visitors. * Add some short practical examples for the visitors: one to query an AST and another to modify it. ---------- assignee: docs at python components: Documentation messages: 359235 nosy: docs at python, pablogsal priority: normal severity: normal status: open title: Improve the AST documentation versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 09:27:51 2020 From: report at bugs.python.org (=?utf-8?q?Julien_F=C3=A9rard?=) Date: Fri, 03 Jan 2020 14:27:51 +0000 Subject: [New-bugs-announce] [issue39200] Inaccurate TypeError message for `range` without argument Message-ID: <1578061671.15.0.421279192127.issue39200@roundup.psfhosted.org> New submission from Julien F?rard : When passing no argument to `range`, the error message states that (exactly) one argument is expected. Actual: Python 3.9.0a0 (heads/master:d395209653, Jan 3 2020, 11:37:03) [GCC 7.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> range() Traceback (most recent call last): File "", line 1, in TypeError: range expected 1 argument, got 0 Expected message: TypeError: range expected at least 1 argument, got 0 (See for instance: >>> eval() Traceback (most recent call last): File "", line 1, in TypeError: eval expected at least 1 argument, got 0 ) ---------- components: Interpreter Core messages: 359236 nosy: jferard priority: normal severity: normal status: open title: Inaccurate TypeError message for `range` without argument type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 10:32:28 2020 From: report at bugs.python.org (Mathias) Date: Fri, 03 Jan 2020 15:32:28 +0000 Subject: [New-bugs-announce] [issue39201] Threading.timer leaks memory in 3.8.0/3.8.1 Message-ID: <1578065548.82.0.563193246076.issue39201@roundup.psfhosted.org> New submission from Mathias : Hi, I think there is an issue with memory allocating with threading.Timer in 3.8.0/3.8.1. When I run the attached code in Python 3.7.3 I get a memory consumption of approx. 10 MB. If I run the same code with python 3.8.0 or 3.8.1, it keeps consuming memory (several hundreds of MB). I've attached 3 images where I run the attached script under mprof. By inspect of the output from tracemalloc, I can see that /usr/lib/python3.8/threading.py:908 keeps allocating memory. 
I have tested this with python 3.8.0 from ubuntu 16.04 repository and python 3.8.1 from source. Both versions suffers from the memory consumption issue. ---------- components: Library (Lib) files: images_code.tar messages: 359239 nosy: mneerup priority: normal severity: normal status: open title: Threading.timer leaks memory in 3.8.0/3.8.1 versions: Python 3.8 Added file: https://bugs.python.org/file48825/images_code.tar _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 11:19:53 2020 From: report at bugs.python.org (Ilya) Date: Fri, 03 Jan 2020 16:19:53 +0000 Subject: [New-bugs-announce] [issue39202] Python shelve __del__ ignored exception Message-ID: <1578068393.52.0.269447862583.issue39202@roundup.psfhosted.org> New submission from Ilya : I'm using my own implementation of the memoize by shelve module. In the attachment, there are 2 simple test cases which pass but the console there are a lot of messages like that: Exception ignored in: Traceback (most recent call last): File "C:\Miniconda2\envs\38_common\lib\shelve.py", line 162, in __del__ self.close() File "C:\Miniconda2\envs\38_common\lib\shelve.py", line 144, in close self.sync() File "C:\Miniconda2\envs\38_common\lib\shelve.py", line 172, in sync self.dict.sync() File "C:\Miniconda2\envs\38_common\lib\dbm\dumb.py", line 129, in _commit with self._io.open(self._dirfile, 'w', encoding="Latin-1") as f: PermissionError: [Errno 13] Permission denied: 'C:\\project\\tests\\test_memoize_tmp_t5tai08p\\memoize_test_file.dat.dir' Exception ignored in: Traceback (most recent call last): File "C:\Miniconda2\envs\38_common\lib\dbm\dumb.py", line 274, in close self._commit() File "C:\Miniconda2\envs\38_common\lib\dbm\dumb.py", line 129, in _commit with self._io.open(self._dirfile, 'w', encoding="Latin-1") as f: PermissionError: [Errno 13] Permission denied: 'C:\\project\\tests\\test_memoize_tmp_t5tai08p\\memoize_test_file.dat.dir' Basically, the main issue can be explained like that - Python dbm.dumb._Database should maintain self._modified the attribute in the right way(set it to False) after the _commit method. Later I will try to make changes in the dbm.dumb module and run Python internal tests for that modification to see any regression, if not will add PR here. ---------- components: Library (Lib) files: test_python_shelve_issue.py messages: 359242 nosy: libbkmz priority: normal severity: normal status: open title: Python shelve __del__ ignored exception type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48826/test_python_shelve_issue.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 14:02:54 2020 From: report at bugs.python.org (Jason Li) Date: Fri, 03 Jan 2020 19:02:54 +0000 Subject: [New-bugs-announce] [issue39203] python3 time module misses attributes in Mac installers Message-ID: <1578078174.37.0.442161704111.issue39203@roundup.psfhosted.org> New submission from Jason Li : The issue: AttributeError: module 'time' has no attribute 'clock_gettime'. It probably missed other attributes as well. The problem only appeared with using installers to install python. While Homebrew installed python does not have the issue. The issue occurred in versions, 3.7.1, 3.7.3, 3.7.4, 3.8.1, as I observed. 
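time.clock_gettime() is only exposed when the interpreter was compiled against a C library that provides it (and it is Unix-only in any case), so code that also has to run on the python.org installer builds can guard for it; a small sketch:

```
import time

if hasattr(time, "clock_gettime"):
    t = time.clock_gettime(time.CLOCK_MONOTONIC)
else:
    # Present on every 3.3+ build, including the macOS installer ones.
    t = time.monotonic()
print(t)
```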
---------- components: macOS messages: 359248 nosy: jasonli360, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: python3 time module misses attributes in Mac installers type: compile error versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 14:12:25 2020 From: report at bugs.python.org (Cooper Lees) Date: Fri, 03 Jan 2020 19:12:25 +0000 Subject: [New-bugs-announce] [issue39204] Automate adding Type Annotations to Documentation Message-ID: <1578078745.51.0.848404810387.issue39204@roundup.psfhosted.org> New submission from Cooper Lees : What are people's thoughts on automating adding type annotations to documentation now that Typeshed is mature and Python 2 is EOL? (Let us never speak of comment annotations) ---------- components: Library (Lib) messages: 359249 nosy: cooperlees priority: normal severity: normal status: open title: Automate adding Type Annotations to Documentation versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 16:26:24 2020 From: report at bugs.python.org (Brian Quinlan) Date: Fri, 03 Jan 2020 21:26:24 +0000 Subject: [New-bugs-announce] [issue39205] Hang when interpreter exits after ProcessPoolExecutor.shutdown(wait=False) Message-ID: <1578086784.42.0.0982560626547.issue39205@roundup.psfhosted.org> New submission from Brian Quinlan : ``` from concurrent.futures import ProcessPoolExecutor import time t = ProcessPoolExecutor(max_workers=3) t.map(time.sleep, [1,2,3]) t.shutdown(wait=False) ``` Results in this exception and then a hang (i.e. Python doesn't terminate): ``` Exception in thread QueueManagerThread: Traceback (most recent call last): File "/usr/local/google/home/bquinlan/cpython/Lib/threading.py", line 944, in _bootstrap_inner self.run() File "/usr/local/google/home/bquinlan/cpython/Lib/threading.py", line 882, in run self._target(*self._args, **self._kwargs) File "/usr/local/google/home/bquinlan/cpython/Lib/concurrent/futures/process.py", line 352, in _queue_management_worker _add_call_item_to_queue(pending_work_items, File "/usr/local/google/home/bquinlan/cpython/Lib/concurrent/futures/process.py", line 280, in _add_call_item_to_queue call_queue.put(_CallItem(work_id, File "/usr/local/google/home/bquinlan/cpython/Lib/multiprocessing/queues.py", line 82, in put raise ValueError(f"Queue {self!r} is closed") ValueError: Queue is closed ``` ---------- assignee: bquinlan messages: 359257 nosy: bquinlan priority: normal severity: normal status: open title: Hang when interpreter exits after ProcessPoolExecutor.shutdown(wait=False) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 16:33:55 2020 From: report at bugs.python.org (Nicholas Feix) Date: Fri, 03 Jan 2020 21:33:55 +0000 Subject: [New-bugs-announce] [issue39206] Modulefinder does not consider source file encoding Message-ID: <1578087235.53.0.0551529290674.issue39206@roundup.psfhosted.org> New submission from Nicholas Feix : The modulefinder._find_module(...) function returns file objects text mode for source modules using the system encoding. ModuleFinder.load_module(...) can run into decoding issues when the source file encoding does not match the system default. The prior implementation imp.find_module(...) 
detected the encoding correctly using the tokenize.detect_encoding(...) function. With the following code segment the detection would work again with UTF-8 BOM and PEP 263 type cookies. encoding = None if 'b' not in mode: with open(file_path, 'rb') as file: encoding = tokenize.detect_encoding(file.readline)[0] file = open(file_path, mode, encoding=encoding) ---------- components: Library (Lib) messages: 359259 nosy: Nicholas Feix priority: normal severity: normal status: open title: Modulefinder does not consider source file encoding type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 16:44:04 2020 From: report at bugs.python.org (Yusef Shaban) Date: Fri, 03 Jan 2020 21:44:04 +0000 Subject: [New-bugs-announce] [issue39207] concurrent.futures.ProcessPoolExecutor does not properly reap jobs and spawns too many workers Message-ID: <1578087844.56.0.81762061225.issue39207@roundup.psfhosted.org> New submission from Yusef Shaban : This came up from a supporting library but the actual issue is within concurrent.futures.ProcessPool. Discussion can be found at https://github.com/agronholm/apscheduler/issues/414 ProcessPoolExecutor does not properly spin down and spin up new processes. Instead, it simply re-claims existing processes to re-purpose them for new jobs. Is there no option or way to make it so that instead of re-claiming existing processes, it spins down the process and then spins up another one. This behavior is a lot better for garbage collection and will help to prevent memory leaks. ProcessPoolExecutor also spins up too many processes and ignores the max_workers argument. An example is my setting max_workers=10, but I am only utilizing 3 processes. One would expect given the documentation that I would have at most 4 processes, the main process, and the 3 worker processes. Instead, ProcessPoolExecutor spawns all 10 max_workers and lets the other 7 just sit there, even though they are not necessary. ---------- components: Library (Lib) messages: 359260 nosy: yus2047889 priority: normal severity: normal status: open title: concurrent.futures.ProcessPoolExecutor does not properly reap jobs and spawns too many workers type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 3 19:04:37 2020 From: report at bugs.python.org (ThePokestarFan) Date: Sat, 04 Jan 2020 00:04:37 +0000 Subject: [New-bugs-announce] [issue39208] PDB pm function throws exception without sys import Message-ID: <1578096277.08.0.562262793407.issue39208@roundup.psfhosted.org> New submission from ThePokestarFan : When testing PDB in python 3.8.1, PDB throws an exception when I call the pm() function in PDB without importing system. [Fresh session] >>> import pdb >>> pdb.pm() Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pdb.py", line 1631, in pm post_mortem(sys.last_traceback) AttributeError: module 'sys' has no attribute 'last_traceback' >>> import sys >>> pdb.pm() > /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pdb.py(1631)pm() -> post_mortem(sys.last_traceback) ... 
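pdb.pm() just reads sys.last_traceback, which the interactive interpreter only sets after an unhandled exception has been printed (here, presumably the failed pm() call itself provided one). When handling an exception yourself, the traceback can be passed to post_mortem() explicitly; a minimal sketch:

```
import pdb
import sys

try:
    1 / 0
except ZeroDivisionError:
    # No need for sys.last_traceback inside an except block:
    # hand the current traceback to post_mortem() directly.
    pdb.post_mortem(sys.exc_info()[2])
```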
---------- components: Library (Lib), macOS messages: 359264 nosy: ThePokestarFan, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: PDB pm function throws exception without sys import type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 09:29:32 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sat, 04 Jan 2020 14:29:32 +0000 Subject: [New-bugs-announce] [issue39209] Crash on REPL mode with long text copy and paste Message-ID: <1578148172.19.0.147940927734.issue39209@roundup.psfhosted.org> New submission from Dong-hee Na : When I copy and paste the pretty long text into REPL shell. REPL shell is crash down with segment fault. This issue is only reproducible on macOS, but Linux REPL doesn't look like normal behavior. [origin text] 0KiB 0 1.3 0 16738211KiB 237.15 1.3 0 never none [macOS] Python 3.9.0a2+ (heads/master:7dc72b8d4f, Jan 4 2020, 23:22:45) [Clang 11.0.0 (clang-1100.0.33.16)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> a = """ ... ... ... ... ... 0KiB ... 0 ... 1.3 ... 0 ... ... ... 16738211KiB ... 237.15 ... 1.3 ... 0 ... ... never ... none ... ... ... ... """ Assertion failed: ((intptr_t)(int)(a - line_start) == (a - line_start)), function parsetok, file Parser/parsetok.c, line 324. [1] 13389 abort ./python.exe [linux] Python 3.9.0a2+ (heads/master-dirty:7dc72b8, Jan 4 2020, 23:22:11) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> a = """ 0KiB 0 true 0 16738211KiB 237.15 true 0 never none ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... """ >>> a '\n\n \n \n \n 0KiB\n 0\n true\n 0\n \n \n 16738211KiB\n 237.15\n true\n 0\n \n never\n none\n \n \n\n' >>> ---------- messages: 359290 nosy: corona10, pablogsal priority: normal severity: normal status: open title: Crash on REPL mode with long text copy and paste type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 10:29:55 2020 From: report at bugs.python.org (Yan Mitrofanov) Date: Sat, 04 Jan 2020 15:29:55 +0000 Subject: [New-bugs-announce] [issue39210] Sorting falls back to use __gt__ when __lt__ is not present Message-ID: <1578151795.43.0.682291916428.issue39210@roundup.psfhosted.org> New submission from Yan Mitrofanov : Sorting documentation claims that sorting algorithm is only using < comparisons https://docs.python.org/3/howto/sorting.html#odd-and-ends https://docs.python.org/3/library/stdtypes.html#list.sort When __lt__ implementation is missing, you get an exception class Foo: pass sorted([Foo(), Foo(), Foo()]) TypeError: '<' not supported between instances of 'Foo' and 'Foo' However, if implement __gt__ method, you doesn't get an exception class Foo: def __gt__(self, other): return False sorted([Foo(), Foo(), Foo()]) # ok Is it supposed to work like this? Or is it lack of documentation? 
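What makes the second example work is not the sort algorithm using >, but the reflected-comparison fallback for the operators themselves: when the left operand has no usable __lt__, a < b is evaluated as b.__gt__(a). A quick check outside of sorting:

```
class Foo:
    def __gt__(self, other):
        return False

a, b = Foo(), Foo()
# No __lt__ defined, yet this works: object.__lt__ returns NotImplemented,
# so Python falls back to the reflected call b.__gt__(a).
print(a < b)            # False
print(sorted([a, b]))   # no TypeError; sorted() still only uses <
```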
---------- assignee: docs at python components: Documentation messages: 359293 nosy: docs at python, yanmitrofanov priority: normal severity: normal status: open title: Sorting falls back to use __gt__ when __lt__ is not present versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 13:26:34 2020 From: report at bugs.python.org (Shane) Date: Sat, 04 Jan 2020 18:26:34 +0000 Subject: [New-bugs-announce] [issue39211] Change in http.server default IP behavior? Message-ID: <1578162394.57.0.362691529673.issue39211@roundup.psfhosted.org> New submission from Shane : It seems to me that the direct invocation behavior for http.server changed, probably with Python 3.8 (I'm currently using 3.8.1 on Windows 10). On 3.7.X I was able to use it as described in the docs (https://docs.python.org/3/library/http.server.html) > python -m http.server 8000 and it would default to whatever IP address was available. Now, in order for it to function at all (not return "This site can?t be reached" in Chrome), I have to bind it to a specific IP address (say, 127.0.0.1, sticking with the docs example). > python -m http.server 8000 --bind 127.0.0.1 At which point it works fine. So it's still quite usable for this purpose, though I was surprised and -simple as the solution is- the solution is less simple when you don't know it! Was this an intended change? Something something security, perhaps? If so, should it be noted in the "What's new" of the docs? And of course, there's always the slight possibility that some aspect of Windows or Chrome behavior changed, but based on the termal's response I don't think that's the case. Thanks, ---------- messages: 359299 nosy: Shane Smith priority: normal severity: normal status: open title: Change in http.server default IP behavior? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 13:52:25 2020 From: report at bugs.python.org (Ram Rachum) Date: Sat, 04 Jan 2020 18:52:25 +0000 Subject: [New-bugs-announce] [issue39212] Show qualified function name when giving arguments error Message-ID: <1578163945.77.0.403027118696.issue39212@roundup.psfhosted.org> New submission from Ram Rachum : I recently got this familiar error: builtins.TypeError: __init__() takes 1 positional argument but 2 were given It was annoying that I didn't know which `__init__` method was under discussion. I wish that Python used the `__qualname__` of the function to show this error message (and maybe others?) so it'll show like this: builtins.TypeError: FooBar.__init__() takes 1 positional argument but 2 were given If I'm not mistaken, the implementation of this error is in getargs.c in the function vgetargs1_impl. 
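A small illustration of the ambiguity (the class name is hypothetical, not from the report); the attribute needed for a better message is already available on the function object:

```
class FooBar:
    def __init__(self):
        pass

print(FooBar.__init__.__qualname__)   # 'FooBar.__init__'

try:
    FooBar(2)
except TypeError as e:
    print(e)   # currently: __init__() takes 1 positional argument but 2 were given
               # proposed:  FooBar.__init__() takes 1 positional argument but 2 were given
```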
---------- components: Interpreter Core messages: 359302 nosy: cool-RR priority: normal severity: normal status: open title: Show qualified function name when giving arguments error type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 14:17:36 2020 From: report at bugs.python.org (Joseph Sible) Date: Sat, 04 Jan 2020 19:17:36 +0000 Subject: [New-bugs-announce] [issue39213] cmd should have a hook in the finally block of cmdloop Message-ID: <1578165456.25.0.525966531081.issue39213@roundup.psfhosted.org> New submission from Joseph Sible : Currently, the cmdloop function in cmd has a preloop hook, which runs before the large try block, and a postloop hook, which runs at the end of the body of the large try block. This isn't sufficient for subclasses to safely use readline.set_completion_display_matches_hook, since an exception in the large try block would mean that postloop doesn't get called, so there wouldn't be an opportunity to restore the old value of that callback. This is analogous to how we need the finally block ourself to restore the old value of the completer. Moving where postloop is called would be a breaking change, so we should probably create a new method instead, called postloop_finally or something. ---------- components: Library (Lib) messages: 359305 nosy: Joseph Sible priority: normal severity: normal status: open title: cmd should have a hook in the finally block of cmdloop type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 17:33:35 2020 From: report at bugs.python.org (Anthony Sottile) Date: Sat, 04 Jan 2020 22:33:35 +0000 Subject: [New-bugs-announce] [issue39214] Add curses.window.in_wch Message-ID: <1578177215.08.0.364585714734.issue39214@roundup.psfhosted.org> New submission from Anthony Sottile : (I've already got a patch for this, just making the necessary issue) curses.window.inch is pretty useless for any non-ascii character ---------- components: Extension Modules messages: 359309 nosy: Anthony Sottile priority: normal severity: normal status: open title: Add curses.window.in_wch versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 19:30:24 2020 From: report at bugs.python.org (Anthony Sottile) Date: Sun, 05 Jan 2020 00:30:24 +0000 Subject: [New-bugs-announce] [issue39215] Type Annotation of nested function with positional only arguments triggers SystemError Message-ID: <1578184224.6.0.82739775738.issue39215@roundup.psfhosted.org> New submission from Anthony Sottile : def f(): def g(arg: int, /): pass f() $ python3.9 t2.py Traceback (most recent call last): File "/home/asottile/workspace/t2.py", line 5, in f() File "/home/asottile/workspace/t2.py", line 2, in f def g(arg: int, /): SystemError: no locals when loading 'int' Originally from this StackOverflow post: https://stackoverflow.com/q/59594494/812183 ---------- components: Interpreter Core messages: 359312 nosy: Anthony Sottile priority: normal severity: normal status: open title: Type Annotation of nested function with positional only arguments triggers SystemError versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 20:32:21 2020 From: report at 
bugs.python.org (Anthony Sottile) Date: Sun, 05 Jan 2020 01:32:21 +0000 Subject: [New-bugs-announce] [issue39216] ast_opt.c -- missing posonlyargs? Message-ID: <1578187941.0.0.50477954259.issue39216@roundup.psfhosted.org> New submission from Anthony Sottile : while fixing bpo-39215, I noticed that there seems to be a place here where posonlyargs was missed: https://github.com/python/cpython/blob/7dc72b8d4f2c9d1eed20f314fd6425eab66cbc89/Python/ast_opt.c#L617-L627 not sure if this is intentional or not -- happy to make a patch which adds a line there if someone can help me with the test ---------- components: Interpreter Core messages: 359317 nosy: Anthony Sottile priority: normal severity: normal status: open title: ast_opt.c -- missing posonlyargs? versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 4 20:59:59 2020 From: report at bugs.python.org (Kevin Schlossser) Date: Sun, 05 Jan 2020 01:59:59 +0000 Subject: [New-bugs-announce] [issue39217] GC of a ctypes object causes application crash Message-ID: <1578189599.82.0.182987253345.issue39217@roundup.psfhosted.org> New submission from Kevin Schlossser : I guess this is a question as much as it is a bug report. I know that all kinds of strange behavior can happen when using ctypes improperly. This is what is taking place. I can provide code if needed. but lets work off of my description of what is taking place first. I am querying DeviceIoControl which is apart of the Windows API.. I have a function that has ctypes objects passed to it.. it does whatever it is that is needed to call DeviceIoControl. I have narrow it down to a single object and I ran the visual studio debugger and it traced the problem back to the garbage collector. So this is the basic layout.. ```python def IOControl(io_ctrl, inBuffer, outBuffer, outBufferSize=None): if outBuffer is None: outBufferSize = INT(0) else: pOutBuffer = ctypes.byref(outBuffer) if outBufferSize is None: outBufferSize = INT(ctypes.sizeof(outBuffer)) else: outBufferSize = INT(outBufferSize) if inBuffer is None: inBufferSize = INT(0) else: pInBuffer = ctypes.byref(inBuffer) inBufferSize = INT(ctypes.sizeof(inBuffer)) DeviceIOControl( io_ctrl, inBuffer, ctypes.byref(inBufferSize), outBuffer, ctypes.byref(outBufferSize) ) class SomeStructure(ctypes.Structure): _fields_ = [ ('SomeField1', ULONG), ('SomeField2, LONG * 100, ] out_buffer = SomeStructure() buf_size = ctypes.sizeof(out_buffer) IOControl( 'some io control code', None, None, out_buffer , buf_size ] ``` The code above will crash Python and the debug leads back to an error in the GC. I do not know what the internals of ctypes are or how they function. when using ctypes.byref() to create a pointer I would imagine that the original instance is held somewhere inside of the pointer. But when it gets passed off to the Windows function where it does or what it does is unknown to me. The process if doing this is more complex then what is above It is for example purposes.. There is another in the Windows API that will wait for data to be had from the first function call. The funny thing is the first time it goes though IOControl without incident. its repeated attempts that cause the appcrash. If someone has a reason as to why this is taking place I am ready to learn!!!... 
I find it odd that once I moved the code from that function into the same namespace where that function was originally called from, everything works fine and no application crashes happen. The debugger pointed to data that was being passed to a GC macro or function that did not have the information correct. It was pretty far into the GC code before the application crash happens. I am also able to provide the results of running the debugger; any help/information would be greatly appreciated. ---------- components: ctypes messages: 359318 nosy: Kevin Schlossser priority: normal severity: normal status: open title: GC of a ctypes object causes application crash type: crash versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 00:34:52 2020 From: report at bugs.python.org (Reed) Date: Sun, 05 Jan 2020 05:34:52 +0000 Subject: [New-bugs-announce] [issue39218] Assertion failure when calling statistics.variance() on a float32 Numpy array Message-ID: <1578202492.29.0.981939629191.issue39218@roundup.psfhosted.org> New submission from Reed : If a float32 Numpy array is passed to statistics.variance(), an assertion failure occurs. For example: import statistics import numpy as np x = np.array([1, 2], dtype=np.float32) statistics.variance(x) The assertion error is: assert T == U and count == count2 Even if you convert x to a list with `x = list(x)`, the issue still occurs. The issue is caused by the following lines in statistics.py (https://github.com/python/cpython/blob/ec007cb43faf5f33d06efbc28152c7fdcb2edb9c/Lib/statistics.py#L687-L691): T, total, count = _sum((x-c)**2 for x in data) # The following sum should mathematically equal zero, but due to rounding # error may not. U, total2, count2 = _sum((x-c) for x in data) assert T == U and count == count2 When a float32 Numpy value is squared in the term (x-c)**2, it turns into a float64 value, causing the `T == U` assertion to fail. I think the best way to fix this would be to replace (x-c)**2 with (x-c)*(x-c). This fix would no longer assume the input's ** operator returns the same type. ---------- components: Library (Lib) messages: 359323 nosy: reed priority: normal severity: normal status: open title: Assertion failure when calling statistics.variance() on a float32 Numpy array type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 06:33:45 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 05 Jan 2020 11:33:45 +0000 Subject: [New-bugs-announce] [issue39219] Fix attributes of syntax errors raised in the tokenizer Message-ID: <1578224025.61.0.7633610275.issue39219@roundup.psfhosted.org> New submission from Serhiy Storchaka : SyntaxError can be raised at different stages of compiling. In some places the source text is not available and should be read from the file using the file name, which does not work in the case of compiling a string or reading from stdin. >>> 0z File "<stdin>", line 1 0z ^ SyntaxError: invalid syntax >>> 0xz File "<stdin>", line 1 SyntaxError: invalid hexadecimal literal In the second example above the source line and the caret are absent in the REPL. The proposed PR fixes two errors in raising a SyntaxError in the tokenizer. 1. The text of the source line was not set if an exception was raised in the tokenizer.
Since most of these exceptions (with more detailed description) were added in 3.8 I consider this a regression. 2. The offset attribute was an offset in bytes. Now it is an offset in characters. It only fixes errors in the tokenizer. There are similar bugs in other parts of the compiler. This issue is based on the article https://aroberge.blogspot.com/2019/12/a-tiny-python-exception-oddity.html . ---------- components: Interpreter Core messages: 359331 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix attributes of syntax errors raised in the tokenizer type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 08:26:51 2020 From: report at bugs.python.org (Carl Friedrich Bolz-Tereick) Date: Sun, 05 Jan 2020 13:26:51 +0000 Subject: [New-bugs-announce] [issue39220] constant folding affects annotations despite 'from __future__ import annotations' Message-ID: <1578230811.94.0.250306045254.issue39220@roundup.psfhosted.org> New submission from Carl Friedrich Bolz-Tereick : PEP 563 interacts in weird ways with constant folding. Running the following code: ``` from __future__ import annotations def f(a: 5 + 7) -> a ** 39: return 12 print(f.__annotations__) ``` I would expect this output: ``` {'a': '5 + 7', 'return': 'a ** 39'} ``` But I get: ``` {'a': '12', 'return': 'a ** 39'} ``` ---------- components: Interpreter Core files: x.py messages: 359341 nosy: Carl.Friedrich.Bolz priority: normal severity: normal status: open title: constant folding affects annotations despite 'from __future__ import annotations' versions: Python 3.7 Added file: https://bugs.python.org/file48827/x.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 08:41:23 2020 From: report at bugs.python.org (Andrew Aladjev) Date: Sun, 05 Jan 2020 13:41:23 +0000 Subject: [New-bugs-announce] [issue39221] Cross compiled python installed wrong version of lib2to3/Grammar pickle Message-ID: <1578231683.23.0.24612692977.issue39221@roundup.psfhosted.org> New submission from Andrew Aladjev : Please see the following gentoo bug https://bugs.gentoo.org/704816 https://github.com/python/cpython/blob/master/Lib/lib2to3/pgen2/driver.py#L110 > head + tail + ".".join(map(str, sys.version_info)) + ".pickle" I've tried "print(sys.version_info)" during compilation and received: > sys.version_info(major=3, minor=6, micro=9, releaselevel='final', serial=0) "sys.version_info" is not the target python version; this is the version of the python that is running the compilation. This variable needs to be replaced with something like "sys.target_python_version". This issue looks simple but I can't fix it by myself. Please assign this issue to a core developer. We need to find all places where "sys.version_info" is used as the target python version during compilation and replace it.
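To make the naming problem concrete, a small sketch of the scheme quoted from lib2to3/pgen2/driver.py; the os.path.splitext handling around the quoted expression is an assumption about the surrounding code, and the proposed sys.target_python_version attribute does not exist today:

```python
import os
import sys

def pickle_name_for(grammar_file, version_info=sys.version_info):
    # Mirrors the quoted expression: the cache name embeds the version of the
    # interpreter that *runs* the build, i.e. the host when cross compiling.
    head, tail = os.path.splitext(grammar_file)
    if tail == ".txt":
        tail = ""
    return head + tail + ".".join(map(str, version_info)) + ".pickle"

print(pickle_name_for("Grammar.txt"))
# On a 3.6.9 host this is 'Grammar3.6.9.final.0.pickle', regardless of the
# Python version actually being built.
```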
---------- components: Library (Lib) messages: 359343 nosy: puchenyaka priority: normal severity: normal status: open title: Cross compiled python installed wrong version of lib2to3/Grammar pickle type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 09:58:32 2020 From: report at bugs.python.org (Florian Brucker) Date: Sun, 05 Jan 2020 14:58:32 +0000 Subject: [New-bugs-announce] [issue39222] unittest.mock.Mock.parent is broken or undocumented Message-ID: <1578236312.03.0.58171840403.issue39222@roundup.psfhosted.org> New submission from Florian Brucker : The "parent" attribute of unittest.mock.Mock is either broken or undocumented. For example, on Python 3.7.4: >>> from unittest.mock import Mock >>> m = Mock(x=1, parent=2) >>> m.x 1 >>> m.parent Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.7/unittest/mock.py", line 659, in __repr__ name = self._extract_mock_name() File "/usr/local/lib/python3.7/unittest/mock.py", line 638, in _extract_mock_name _name_list.append(_parent._mock_new_name + dot) AttributeError: 'int' object has no attribute '_mock_new_name' >>> parent = Mock() >>> child = Mock(parent=parent) >>> child.parent is parent False I stumbled upon this while trying to mock an object that has a "parent" attribute. >From the documentation I understand that mocks have built-in parents. However, the documentation never mentions the "parent" attribute specifically, so I always assumed that the built-in parent-child relationship was handled using private or name-mangled attributes. And since the "parent" attribute is not mentioned in the docs, I assumed I could set it by passing an additional kwarg to Mock. I would have expected one of the following, in order of personal preference: a) That a private or name-mangled attribute is used for the built-in parent-child relationship, so that I can mock objects which themselves have a "parent" attribute b) That the special meaning of the "parent" attribute is documented, and that trying to set it directly (via the constructor or via attribute assignment, and without going through attach_mock) triggers a warning. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 359348 nosy: docs at python, florian.brucker priority: normal severity: normal status: open title: unittest.mock.Mock.parent is broken or undocumented type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 10:51:54 2020 From: report at bugs.python.org (Batuhan) Date: Sun, 05 Jan 2020 15:51:54 +0000 Subject: [New-bugs-announce] [issue39223] Fold constant slicing with slices Message-ID: <1578239514.69.0.32377155521.issue39223@roundup.psfhosted.org> New submission from Batuhan : >>> def g(): "abcde"[2:4] ... >>> g.__code__.co_consts (None, 'abcde', 2, 4) to >>> def g(): "abcde"[2:4] ... 
>>> g.__code__.co_consts (None, 'cd') (I have a patch) ---------- components: Interpreter Core messages: 359350 nosy: BTaskaya priority: normal severity: normal status: open title: Fold constant slicing with slices versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 13:55:17 2020 From: report at bugs.python.org (Daniel Farley) Date: Sun, 05 Jan 2020 18:55:17 +0000 Subject: [New-bugs-announce] [issue39224] HTTPConnection.timeout None support Message-ID: <1578250517.62.0.0768078724722.issue39224@roundup.psfhosted.org> New submission from Daniel Farley : HTTPConnection's `timeout` argument is passed down to `socket.settimeout()` which supports `None` and puts the socket in blocking mode. This isn't documented on the `http.client` page. Otherwise it should not be allowed. ---------- assignee: docs at python components: Documentation messages: 359371 nosy: Daniel Farley, docs at python priority: normal severity: normal status: open title: HTTPConnection.timeout None support type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 17:18:09 2020 From: report at bugs.python.org (Reuven Lerner) Date: Sun, 05 Jan 2020 22:18:09 +0000 Subject: [New-bugs-announce] [issue39225] Python should warn when a global/local has the same name as a builtin Message-ID: <1578262689.16.0.500995659902.issue39225@roundup.psfhosted.org> New submission from Reuven Lerner : Newcomers to Python are often frustrated and surprised when they define variables such as "sum" and "list", only to discover that they've masked access builtins of the same name. External code checkers do help, but those don't work in Jupyter or other non-IDE environments. It would be nice if defining a global/local with the same name as a builtin would generate a warning. For example: list = [10, 20, 30] RedefinedBuiltinWarning: "list" is a builtin, and should normally not be redefined. I'm sure that the wording could use a lot of work, but something like this would do wonders to help newbies, who encounter this all the time. Experienced developers are surprised that these terms aren't reserved words. ---------- components: Interpreter Core messages: 359384 nosy: reuven priority: normal severity: normal status: open title: Python should warn when a global/local has the same name as a builtin _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 17:50:13 2020 From: report at bugs.python.org (=?utf-8?q?Antonio_V=C3=A1zquez_Blanco?=) Date: Sun, 05 Jan 2020 22:50:13 +0000 Subject: [New-bugs-announce] [issue39226] venv does not include pythonXX.lib Message-ID: <1578264613.69.0.856144088862.issue39226@roundup.psfhosted.org> New submission from Antonio V?zquez Blanco : I've tryed to install mod_wsgi using pip lately in a venv. This installation process fails with a message about a missing venv\scripts\libs\python38.lib file as reported in https://github.com/GrahamDumpleton/mod_wsgi/issues/506 It seems that this file used to be included in virtual environments but the behaviour has changed. This library seems to be a dependency for some modules, shouldn't it be included in the virtual environment? Is this behaviour change desired? If so, how should modules link to python.lib? 
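For what it is worth, a hedged sketch of where the import library normally lives: on Windows the python3X.lib files ship in the 'libs' directory of the base installation, which a venv does not copy, so a build script run inside the venv can fall back to sys.base_prefix. Whether a given package's setup script can be pointed at that directory is a separate question, and the paths below are assumptions about a standard installer layout:

```python
import os
import sys

def candidate_lib_dirs():
    # The venv itself usually has no 'libs' folder; the base interpreter's
    # installation directory does (on a standard Windows install).
    dirs = [
        os.path.join(sys.prefix, "libs"),
        os.path.join(sys.base_prefix, "libs"),
    ]
    return [d for d in dirs if os.path.isdir(d)]

print(candidate_lib_dirs())   # [] on non-Windows platforms
```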
Thanks in advance ---------- components: Library (Lib) messages: 359388 nosy: Antonio V?zquez Blanco priority: normal severity: normal status: open title: venv does not include pythonXX.lib type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 21:18:32 2020 From: report at bugs.python.org (Zac Hatfield-Dodds) Date: Mon, 06 Jan 2020 02:18:32 +0000 Subject: [New-bugs-announce] [issue39227] OverflowError in len(range(2**63)) Message-ID: <1578277112.33.0.0446121238087.issue39227@roundup.psfhosted.org> New submission from Zac Hatfield-Dodds : The value for `len` internally passes through an `ssize_t`, which means that it raises OverflowError for (very) large collections. This is admittedly only possible with collections such as `range` that do not store all their elements in memory, but it would still be nice to have `len(range(n)) == n` without caveats. This was found via a teaching example and is now tracked in my repo of property-based tests for CPython: https://github.com/rsokl/Learning_Python/pull/125 https://github.com/Zac-HD/stdlib-property-tests/blob/bb46996ca4500381ba09a8cd430caaddd71910bc/tests.py#L28-L34 Related to https://bugs.python.org/issue26423, but it's still present in the development branches for 3.7, 3.8, and 3.9; and instead of a wrong result it's an error (which is better!). ---------- components: Interpreter Core messages: 359394 nosy: Zac Hatfield-Dodds priority: normal severity: normal status: open title: OverflowError in len(range(2**63)) type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 5 23:14:25 2020 From: report at bugs.python.org (daniel hahler) Date: Mon, 06 Jan 2020 04:14:25 +0000 Subject: [New-bugs-announce] [issue39228] traceback.FrameSummary does not handle exceptions from `repr()` Message-ID: <1578284065.37.0.657086816746.issue39228@roundup.psfhosted.org> New submission from daniel hahler : Exceptions within `__repr__` methods of captured locals (e.g. via the `capture_locals` argument of `TracebackException`) are not handled: ``` import traceback class CrashingRepr: def __repr__(self): raise RuntimeError("crash") traceback.FrameSummary("fname", 1, "name", locals={"crash": CrashingRepr()}) ``` Result: ``` Traceback (most recent call last): File "test_framesummary_repr.py", line 9, in traceback.FrameSummary("fname", 1, "name", locals={"crash": CrashingRepr()}) File "?/pyenv/3.8.0/lib/python3.8/traceback.py", line 260, in __init__ self.locals = {k: repr(v) for k, v in locals.items()} if locals else None File "?/pyenv/3.8.0/lib/python3.8/traceback.py", line 260, in self.locals = {k: repr(v) for k, v in locals.items()} if locals else None File "test_framesummary_repr.py", line 6, in __repr__ raise RuntimeError("crash") RuntimeError: crash ``` The following patch would fix this: ```diff diff --git i/Lib/traceback.py w/Lib/traceback.py index 7a4c8e19f9..eed7082db4 100644 --- i/Lib/traceback.py +++ w/Lib/traceback.py class FrameSummary: """A single frame from a traceback. 
@@ -257,7 +265,17 @@ def __init__(self, filename, lineno, name, *, lookup_line=True, self._line = line if lookup_line: self.line - self.locals = {k: repr(v) for k, v in locals.items()} if locals else None + if locals: + self.locals = {} + for k, v in locals.items(): + try: + self.locals[k] = repr(v) + except (KeyboardInterrupt, SystemExit): + raise + except BaseException as exc: + self.locals[k] = f"" + else: + self.locals = None def __eq__(self, other): if isinstance(other, FrameSummary): ``` ---------- components: Library (Lib) messages: 359400 nosy: blueyed priority: normal severity: normal status: open title: traceback.FrameSummary does not handle exceptions from `repr()` type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 00:38:30 2020 From: report at bugs.python.org (Rafael Fontenelle) Date: Mon, 06 Jan 2020 05:38:30 +0000 Subject: [New-bugs-announce] [issue39229] library/functions.rst causes translated builds to fail Message-ID: <1578289110.04.0.11997957012.issue39229@roundup.psfhosted.org> New submission from Rafael Fontenelle : Documentation file library/functions.rst has a syntax issue that when building documentation with warnings as errors, the following message appears: cpython/Doc/library/functions.rst:: WARNING: inconsistent term references in translated message. original: [], translated: [':ref:`evento de auditoria `'] After several testing, it seems that what is causing this is librar/functions.rst's line 795 not having a reference ":ref:`auditing event `". Steps to reproduce the issue: 1. git clone --depth 1 https://github.com/python/cpython 2. mkdir -p locale/pt_BR/LC_MESSAGES 3. git clone --depth 1 https://github.com/python/python-docs-pt-br locale/pt_BR/LC_MESSAGES 4. cd locale/pt_BR/LC_MESSAGES # This takes about 40 minutes (can be ignored for outdated po files with more unrelated syntax errors) 5. tx pull --force --language pt_BR --parallel 6. cd ../../.. 7. cd cpython/Doc/ 8. make venv 9. make html \ SPHINXOPTS='-q --keep-going -jauto \ -D locale_dirs=../../locale \ -D language=pt_BR \ -D gettext_compact=0 \ -D latex_engine=xelatex \ -D latex_elements.inputenc= \ -D latex_elements.fontenc=' 10. Look for library/functions.rst "WARNING" error message between the output. ---------- assignee: docs at python components: Documentation messages: 359401 nosy: docs at python, rffontenelle priority: normal severity: normal status: open title: library/functions.rst causes translated builds to fail type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 09:56:11 2020 From: report at bugs.python.org (mmckerns) Date: Mon, 06 Jan 2020 14:56:11 +0000 Subject: [New-bugs-announce] [issue39230] fail on datetime import if _datetime.py exists in PATH Message-ID: <1578322571.14.0.441191353008.issue39230@roundup.psfhosted.org> New submission from mmckerns : In Lib/datetime.py, there's an import: `from _datetime import *` which will fail if `_datetime.py` exists in the current directory, or earlier in the path than Lib. 
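A small sketch of why the stray file wins, using importlib's path finder directly rather than the reporter's environment; on builds where _datetime is compiled into the interpreter the real module would still be found first, so this only illustrates the path-search half of the problem:

```python
import os
import tempfile
from importlib.machinery import PathFinder

with tempfile.TemporaryDirectory() as tmp:
    # An empty decoy module named like the C accelerator.
    open(os.path.join(tmp, "_datetime.py"), "w").close()
    # A path search that starts in that directory resolves the decoy, which is
    # what 'from _datetime import *' in Lib/datetime.py ends up importing.
    spec = PathFinder.find_spec("_datetime", path=[tmp])
    print(spec.origin)        # .../_datetime.py
```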
For reference, see: https://github.com/numpy/numpy/issues/15257 ---------- components: Library (Lib) messages: 359429 nosy: mmckerns priority: normal severity: normal status: open title: fail on datetime import if _datetime.py exists in PATH type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 13:11:51 2020 From: report at bugs.python.org (Robert) Date: Mon, 06 Jan 2020 18:11:51 +0000 Subject: [New-bugs-announce] [issue39231] Mistaken notion in tutorial Message-ID: <1578334311.83.0.989380005846.issue39231@roundup.psfhosted.org> New submission from Robert : https://docs.python.org/3/tutorial/controlflow.html 4.7.8. Function Annotations [...] "The following example has a positional argument, a keyword argument, and the return value annotated:" It is not a "positional argument" but an "optional argument". ---------- assignee: docs at python components: Documentation messages: 359443 nosy: docs at python, r0b priority: normal severity: normal status: open title: Mistaken notion in tutorial type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 13:29:20 2020 From: report at bugs.python.org (Michael Hall) Date: Mon, 06 Jan 2020 18:29:20 +0000 Subject: [New-bugs-announce] [issue39232] asyncio crashes when tearing down the proactor event loop Message-ID: <1578335360.75.0.333919607495.issue39232@roundup.psfhosted.org> New submission from Michael Hall : When using asyncio.run for an asynchronous application utilizing ssl, on windows using the proactor event loop the application crashes when the loop is closed, completely skipping a finally block in the process. This appears to be due to a __del__ method on transports used. Manual handling of the event loop close while including a brief sleep appears to work as intended. Both versions work fine with the selector event loop on linux. This appears to be a somewhat known issue already, as it's been reported to aiohttp, however both the traceback, and the differing behavior seem to indicate this is an issue with the proactor event loop. (On linux this still emits a resource warning without the sleep) While I don't mind handling the loop cleanup, it seems like this case should also emit a resource warning rather than crashing. If it's decided in which way this should be handled, I'm willing to contribute to or help test whatever direction the resolution for this should go. Traceback included below, toy version of the problem attached as code. 
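A sketch of the manual-teardown workaround mentioned above, with the real ssl/aiohttp workload replaced by a placeholder coroutine; the 0.25 second figure is arbitrary, the point is simply to give the transports a final tick of the loop before it is closed instead of letting asyncio.run() close it immediately:

```python
import asyncio

async def main():
    await asyncio.sleep(0)    # placeholder for the real ssl/aiohttp work

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    loop.run_until_complete(main())
    # Brief extra spin so proactor transports can finish closing while the
    # loop is still running.
    loop.run_until_complete(asyncio.sleep(0.25))
    loop.run_until_complete(loop.shutdown_asyncgens())
finally:
    loop.close()
    asyncio.set_event_loop(None)
```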
Exception ignored in: Traceback (most recent call last): File "C:\Users\Michael\AppData\Local\Programs\Python\Python38\lib\asyncio\proactor_events.py", line 116, in __del__ self.close() File "C:\Users\Michael\AppData\Local\Programs\Python\Python38\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\Michael\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 715, in call_soon self._check_closed() File "C:\Users\Michael\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 508, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed ---------- components: asyncio files: example.py messages: 359448 nosy: asvetlov, mikeshardmind, yselivanov priority: normal severity: normal status: open title: asyncio crashes when tearing down the proactor event loop type: crash versions: Python 3.8 Added file: https://bugs.python.org/file48829/example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 13:54:50 2020 From: report at bugs.python.org (Mark Dickinson) Date: Mon, 06 Jan 2020 18:54:50 +0000 Subject: [New-bugs-announce] [issue39233] glossary entry for parameter out-of-date for positional-only parameters Message-ID: <1578336890.4.0.142758348692.issue39233@roundup.psfhosted.org> New submission from Mark Dickinson : The glossary entry for parameter[1] says: > Python has no syntax for defining positional-only parameters. Since PEP 570 landed in Python 3.8, that's no longer true. [1] https://docs.python.org/3/glossary.html#term-parameter ---------- assignee: docs at python components: Documentation messages: 359451 nosy: docs at python, mark.dickinson priority: normal severity: normal status: open title: glossary entry for parameter out-of-date for positional-only parameters versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 14:10:39 2020 From: report at bugs.python.org (YoSTEALTH) Date: Mon, 06 Jan 2020 19:10:39 +0000 Subject: [New-bugs-announce] [issue39234] `enum.auto()` incrementation value not specified. Message-ID: <1578337839.24.0.759731756128.issue39234@roundup.psfhosted.org> New submission from YoSTEALTH : # enum in C # --------- enum { a, b, c } # a = 0 # b = 1 # b = 2 # enum in Python # -------------- class Count(enum.IntEnum): a = enum.auto() b = enum.auto() c = enum.auto() # a = 1 # b = 2 # b = 3 I am not sure why the `enum.auto()` starts with 1 in Python but this has just wasted a week worth of my time. ---------- assignee: docs at python components: Documentation messages: 359452 nosy: YoSTEALTH, docs at python priority: normal severity: normal status: open title: `enum.auto()` incrementation value not specified. 
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 14:27:27 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Mon, 06 Jan 2020 19:27:27 +0000 Subject: [New-bugs-announce] [issue39235] Generator expression has wrong line/col info when inside a Call object Message-ID: <1578338847.26.0.00979384316229.issue39235@roundup.psfhosted.org> New submission from Lysandros Nikolaou : A normal generator expression like (i for i in a) produces the following AST: Module( body=[ Expr( value=GeneratorExp( elt=Name( id="i", ctx=Load(), lineno=1, col_offset=1, end_lineno=1, end_col_offset=2 ), generators=[ comprehension( target=Name( id="i", ctx=Store(), lineno=1, col_offset=7, end_lineno=1, end_col_offset=8, ), iter=Name( id="a", ctx=Load(), lineno=1, col_offset=12, end_lineno=1, end_col_offset=13, ), ifs=[], is_async=0, ) ], lineno=1, *col_offset=0,* end_lineno=1, *end_col_offset=14,* ), lineno=1, col_offset=0, end_lineno=1, end_col_offset=14, ) ], type_ignores=[], ) But when calling a function with a generator expression as an argument, something is off: Module( body=[ Expr( value=Call( func=Name( id="f", ctx=Load(), lineno=1, col_offset=0, end_lineno=1, end_col_offset=1 ), args=[ GeneratorExp( elt=Name( id="i", ctx=Load(), lineno=1, col_offset=2, end_lineno=1, end_col_offset=3, ), generators=[ comprehension( target=Name( id="i", ctx=Store(), lineno=1, col_offset=8, end_lineno=1, end_col_offset=9, ), iter=Name( id="a", ctx=Load(), lineno=1, col_offset=13, end_lineno=1, end_col_offset=14, ), ifs=[], is_async=0, ) ], lineno=1, *col_offset=1,* end_lineno=1, *end_col_offset=2,* ) ], keywords=[], lineno=1, col_offset=0, end_lineno=1, end_col_offset=15, ), lineno=1, col_offset=0, end_lineno=1, end_col_offset=15, ) ], type_ignores=[], ) I'm not sure if this is intentional or not, because there is a call to copy_location in Python/ast.c:3149. If this call to copy_location is removed, the inconsistency goes away. ---------- components: Interpreter Core messages: 359454 nosy: lys.nikolaou priority: normal severity: normal status: open title: Generator expression has wrong line/col info when inside a Call object type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 14:45:05 2020 From: report at bugs.python.org (Brett Cannon) Date: Mon, 06 Jan 2020 19:45:05 +0000 Subject: [New-bugs-announce] [issue39236] Adding a .gitignore file to virtual environments Message-ID: <1578339905.74.0.21964655851.issue39236@roundup.psfhosted.org> New submission from Brett Cannon : In a discussion on Twitter, the idea of having venv lay down a .gitignore file in a newly created virtual environment that consisted of nothing but `*` came up (https://twitter.com/codewithanthony/status/1213680829530099713). The purpose would be to help prevent people from inadvertently committing their venv to git. It seems pytest does something similar for .pytest_cache (got one complaint but have chosen to keep it otherwise). To me this seems like a good enhancement. Since this would mostly benefit beginners then it should probably be an opt-out if we do it at all. Maybe make --no-ignore-file to opt out? FYI Mercurial does not support subdirectory hgignore files like git does, so this may be git-specific (for now): https://www.selenic.com/mercurial/hgignore.5.html. 
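Until something like this lands, roughly the same effect can be had from user code; a minimal sketch using the existing venv.EnvBuilder hook (the class name is made up, and the single '*' line matches the proposal above):

```python
import os
import venv

class GitIgnoringEnvBuilder(venv.EnvBuilder):
    def post_setup(self, context):
        super().post_setup(context)
        # Drop a .gitignore that hides the whole environment from git.
        with open(os.path.join(context.env_dir, ".gitignore"), "w") as f:
            f.write("*\n")

# GitIgnoringEnvBuilder(with_pip=True).create("demo-venv")
```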
---------- components: Library (Lib) messages: 359459 nosy: brett.cannon, vinay.sajip, xtreak priority: low severity: normal status: open title: Adding a .gitignore file to virtual environments type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 15:22:49 2020 From: report at bugs.python.org (Alex Henrie) Date: Mon, 06 Jan 2020 20:22:49 +0000 Subject: [New-bugs-announce] [issue39237] Redundant call to round in delta_new Message-ID: <1578342169.21.0.342179864502.issue39237@roundup.psfhosted.org> New submission from Alex Henrie : The delta_new function in _datetimemodule.c currently contains the following code: /* Round to nearest whole # of us, and add into x. */ double whole_us = round(leftover_us); int x_is_odd; PyObject *temp; whole_us = round(leftover_us); The second call to the round function produces the same result as the first call and can therefore be safely eliminated. ---------- components: Library (Lib) messages: 359465 nosy: alex.henrie priority: normal severity: normal status: open title: Redundant call to round in delta_new type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 17:16:59 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Jan 2020 22:16:59 +0000 Subject: [New-bugs-announce] [issue39238] test_asyncio: test_cancel_make_subprocess_transport_exec() hangs randomly on PPC64LE Fedora 3.x Message-ID: <1578349019.82.0.0702143761935.issue39238@roundup.psfhosted.org> New submission from STINNER Victor : PPC64LE Fedora 3.x buildbot: https://buildbot.python.org/all/#builders/11/builds/134 0:35:30 load avg: 0.00 [420/420/1] test_asyncio crashed (Exit code 1) Timeout (0:15:00)! Thread 0x00003fff82de5330 (most recent call first): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/selectors.py", line 468 in select File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 1852 in _run_once File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 596 in run_forever File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 629 in run_until_complete File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/test/test_asyncio/test_subprocess.py", line 441 in test_cancel_make_subprocess_transport_exec (...) 0:35:30 load avg: 0.00 Re-running test_asyncio in verbose mode (...) test_shell_loop_deprecated (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok test_start_new_session (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok test_stdin_broken_pipe (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok test_stdin_not_inheritable (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok test_stdin_stdout (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok test_terminate (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ... ok Timeout (0:15:00)! 
Thread 0x00003fffb25e5330 (most recent call first): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/selectors.py", line 468 in select File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 1852 in _run_once File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 596 in run_forever File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/asyncio/base_events.py", line 629 in run_until_complete File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/test/test_asyncio/test_subprocess.py", line 441 in test_cancel_make_subprocess_transport_exec .... ---------- components: Tests, asyncio messages: 359473 nosy: asvetlov, pablogsal, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_cancel_make_subprocess_transport_exec() hangs randomly on PPC64LE Fedora 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 18:34:48 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Jan 2020 23:34:48 +0000 Subject: [New-bugs-announce] [issue39239] select.epoll.unregister(fd) should not ignore EBADF Message-ID: <1578353688.24.0.722884378041.issue39239@roundup.psfhosted.org> New submission from STINNER Victor : The select.epoll.unregister(fd) method currently ignores EBADF error if the file descriptor fd is invalid. I'm surprised by this undocumented behavior: https://docs.python.org/dev/library/select.html#select.epoll.unregister This behavior may lead to bugs if the file descriptor number has been recycled in the meanwhile. I'm not sure that it's a good idea to silently ignore the error. See bpo-18748 for a similar issue: "io.IOBase destructor silence I/O error on close() by default". Note: The method also ignores EBADF error if the epoll file descriptor has been closed. The behavior is as old as the implementation of select.epoll, bpo-1657: commit 0e9ab5f2f0f907b57c70557e21633ce8c341d1d1 Author: Christian Heimes Date: Fri Mar 21 23:49:44 2008 +0000 Applied patch #1657 epoll and kqueue wrappers for the select module The patch adds wrappers for the Linux epoll syscalls and the BSD kqueue syscalls. Thanks to Thomas Herve and the Twisted people for their support a nd help. TODO: Finish documentation documentation Thomas Herve wrote a first implementation in bpo-1675118, but it seems like it was Christian Heimes who wrote the unregister() method. ---------- components: Library (Lib) messages: 359481 nosy: christian.heimes, vstinner priority: normal severity: normal status: open title: select.epoll.unregister(fd) should not ignore EBADF versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 20:01:11 2020 From: report at bugs.python.org (Gerardo) Date: Tue, 07 Jan 2020 01:01:11 +0000 Subject: [New-bugs-announce] [issue39240] keyerror in string format Message-ID: <1578358871.82.0.953798300041.issue39240@roundup.psfhosted.org> New submission from Gerardo : Hi, i think tha this is a problem, i'm not have mutch experiencing in programming with python. I have added in the file the line that create the problem and a line that make fully functional. Thanks for the time. 
Gerry ---------- components: Regular Expressions files: test.py messages: 359483 nosy: ezio.melotti, gerryc89, mrabarnett priority: normal severity: normal status: open title: keyerror in string format type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file48830/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 6 20:35:08 2020 From: report at bugs.python.org (Xu) Date: Tue, 07 Jan 2020 01:35:08 +0000 Subject: [New-bugs-announce] [issue39241] Popen of python3.6 hangs on os.read(errpipe_read, 50000) Message-ID: <1578360908.03.0.624419556785.issue39241@roundup.psfhosted.org> New submission from Xu : I have a piece code hangs on os.read(errpipe_read, 50000) So I compared the python3.6 with python2.7 on _execute_child, I saw: for python2.7 we create the errpipe_read/write with pipe_cloexec() 1213 # For transferring possible exec failure from child to parent 1214 # The first char specifies the exception type: 0 means 1215 # OSError, 1 means some other error. 1216 errpipe_read, errpipe_write = self.pipe_cloexec() while for python3.6 we create the errpipe_read/write with pipe() 1251 # For transferring possible exec failure from child to parent. 1252 # Data format: "exception name:hex errno:description" 1253 # Pickle is not used; it is complex and involves memory allocation. 1254 errpipe_read, errpipe_write = os.pipe() Does that mean python3.6 doesn't set the the flag FD_CLOEXEC on the pipe ? ---------- components: Library (Lib) messages: 359486 nosy: liuxu1005 priority: normal severity: normal status: open title: Popen of python3.6 hangs on os.read(errpipe_read, 50000) versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 02:28:05 2020 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 07 Jan 2020 07:28:05 +0000 Subject: [New-bugs-announce] [issue39242] Update news.gmane.org to news.gmane.io Message-ID: <1578382085.09.0.293761800021.issue39242@roundup.psfhosted.org> New submission from Dong-hee Na : https://discuss.python.org/t/ot-gmane-server-moving/2967 AFAIK, we have several codes that use news.gmane.org. According to article, don't we have to update it? (sorry I am not a committer, so I don't have permission to write on that) https://github.com/python/cpython/search?q=news.gmane.org&unscoped_q=news.gmane.org ---------- messages: 359491 nosy: corona10, pitrou priority: normal severity: normal status: open title: Update news.gmane.org to news.gmane.io _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 05:53:45 2020 From: report at bugs.python.org (David Heffernan) Date: Tue, 07 Jan 2020 10:53:45 +0000 Subject: [New-bugs-announce] [issue39243] CDLL __init__ no longer supports name being passed as None when the handle is not None Message-ID: <1578394425.89.0.0381750651285.issue39243@roundup.psfhosted.org> New submission from David Heffernan : When creating an instance of CDLL (or indeed WinDLL) for a DLL that is already loaded, you pass the HMODULE in the handle argument to the constructor. In older versions of ctypes you could pass None as the name argument when doing so. However, the changes in https://github.com/python/cpython/commit/2438cdf0e932a341c7613bf4323d06b91ae9f1f1 now mean that such code fails with a NoneType is not iterable error. The relevant change is in __init__ for CDLL. 
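As an interim workaround, passing any non-None placeholder name alongside the handle appears to avoid the crash on 3.8, since the handle branch never actually loads a library by name; this is an inference from the traceback shown below, not a documented API guarantee:

```python
import ctypes
import sys

if sys.platform == "win32":
    handle = ctypes.windll.kernel32._handle
    # Placeholder name plus an existing handle: no LoadLibrary call is made,
    # and the name-handling code no longer sees None.
    lib = ctypes.WinDLL("kernel32", handle=handle)
    print(hex(lib._handle))
```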
The code inside the if _os.name == "nt" block sets up mode, but this is pointless if handle is not None, because the mode variable is then never used, rightly so since the DLL is already loaded. The issue could be resolved by changing if _os.name == "nt": to if _os.name == "nt" and handle is None: The following program demonstrates the issue: import ctypes handle = ctypes.windll.kernel32._handle print(handle) lib = ctypes.WinDLL(name=None, handle=handle) print(lib._handle) This runs to completion in Python 3.7 and earlier, but fails in Python 3.8 and later: Traceback (most recent call last): File "test.py", line 5, in <module> lib = ctypes.WinDLL(name=None, handle=handle) File "C:\Program Files (x86)\Python\38\lib\ctypes\__init__.py", line 359, in __init__ if '/' in name or '\\' in name: TypeError: argument of type 'NoneType' is not iterable ---------- components: Windows, ctypes messages: 359501 nosy: David Heffernan, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: CDLL __init__ no longer supports name being passed as None when the handle is not None type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 06:54:47 2020 From: report at bugs.python.org (Stefan Holek) Date: Tue, 07 Jan 2020 11:54:47 +0000 Subject: [New-bugs-announce] [issue39244] multiprocessing.get_all_start_methods() wrong default on macOS Message-ID: <1578398087.26.0.562792969059.issue39244@roundup.psfhosted.org> New submission from Stefan Holek : In Python 3.8 the default start method has changed from fork to spawn on macOS. https://docs.python.org/3/whatsnew/3.8.html#multiprocessing get_all_start_methods() says: "Returns a list of the supported start methods, the first of which is the default." https://docs.python.org/3/library/multiprocessing.html?highlight=finalize#multiprocessing.get_all_start_methods However, it appears to still return fork as the default: Python 3.8.1 (default, Dec 22 2019, 03:45:23) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import multiprocessing >>> multiprocessing.get_all_start_methods() ['fork', 'spawn', 'forkserver'] >>> Thank you! ---------- components: Library (Lib), macOS messages: 359503 nosy: ned.deily, ronaldoussoren, stefanholek priority: normal severity: normal status: open title: multiprocessing.get_all_start_methods() wrong default on macOS type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 07:36:10 2020 From: report at bugs.python.org (Petr Viktorin) Date: Tue, 07 Jan 2020 12:36:10 +0000 Subject: [New-bugs-announce] [issue39245] Public API for Vectorcall (PEP 590) Message-ID: <1578400570.24.0.962646656542.issue39245@roundup.psfhosted.org> New submission from Petr Viktorin : As per PEP 590, in Python 3.9 the Vectorcall API will be public, i.e. without leading underscores. ---------- messages: 359506 nosy: petr.viktorin priority: normal severity: normal status: open title: Public API for Vectorcall (PEP 590) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 09:36:26 2020 From: report at bugs.python.org (Felipe A.
Hernandez) Date: Tue, 07 Jan 2020 14:36:26 +0000 Subject: [New-bugs-announce] [issue39246] shutil.rmtree is inefficient because of using os.scandir instead of os.walk Message-ID: <1578407786.88.0.0183603543979.issue39246@roundup.psfhosted.org> New submission from Felipe A. Hernandez : os.rmtree has fd-based symlink replacement protection when iterating with scandir (after bpo-28564). This logic could be greatly simplified simply by os.fwalk in supported platforms, which already implements a similar (maybe safer) protection. ---------- components: Library (Lib) messages: 359512 nosy: Felipe A. Hernandez priority: normal severity: normal status: open title: shutil.rmtree is inefficient because of using os.scandir instead of os.walk versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 12:35:37 2020 From: report at bugs.python.org (Michael Robellard) Date: Tue, 07 Jan 2020 17:35:37 +0000 Subject: [New-bugs-announce] [issue39247] dataclass defaults and property don't work together Message-ID: <1578418537.56.0.0565923550129.issue39247@roundup.psfhosted.org> New submission from Michael Robellard : I ran into a strange issue while trying to use a dataclass together with a property. I have it down to a minumum to reproduce it: import dataclasses @dataclasses.dataclass class FileObject: _uploaded_by: str = dataclasses.field(default=None, init=False) uploaded_by: str = None def save(self): print(self.uploaded_by) @property def uploaded_by(self): return self._uploaded_by @uploaded_by.setter def uploaded_by(self, uploaded_by): print('Setter Called with Value ', uploaded_by) self._uploaded_by = uploaded_by p = FileObject() p.save() This outputs: Setter Called with Value I would expect to get None instead Here is the StackOverflow Question where I started this: https://stackoverflow.com/questions/59623952/weird-issue-when-using-dataclass-and-property-together ---------- components: Library (Lib) messages: 359528 nosy: Michael Robellard priority: normal severity: normal status: open title: dataclass defaults and property don't work together type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 13:12:11 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 07 Jan 2020 18:12:11 +0000 Subject: [New-bugs-announce] [issue39248] test_distutils fails on PPC64 Fedora 3.x Message-ID: <1578420731.23.0.752143109381.issue39248@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/8/builds/136 0:11:37 load avg: 2.21 [230/420/1] test_distutils failed Traceback (most recent call last): File "/tmp/tmpo2bw8_ak.py", line 5, in byte_compile(files, optimize=1, force=None, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 359, in byte_compile import subprocess File "/tmp/subprocess.py", line 28 """Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. For a complete description of this module see the Python documentation. Main API ======== call(...): Runs a command, waits for it to complete, then returns the return code. 
check_call(...): Same as call() but raises CalledProcessError() if return code is not 0 check_output(...): Same as check_call() but returns the contents of stdout instead of a return code Popen(...): A class for flexibly executing a command in a new process Constants --------- PIPE: Special value that indicates a pipe should be created STDOUT: Special value that indicates that stderr should go to stdout """Instruction context: ^ SyntaxError: invalid syntax Traceback (most recent call last): File "/tmp/tmpkoui1d2n.py", line 5, in byte_compile(files, optimize=1, force=None, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 359, in byte_compile import subprocess File "/tmp/subprocess.py", line 28 """Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. For a complete description of this module see the Python documentation. Main API ======== call(...): Runs a command, waits for it to complete, then returns the return code. check_call(...): Same as call() but raises CalledProcessError() if return code is not 0 check_output(...): Same as check_call() but returns the contents of stdout instead of a return code Popen(...): A class for flexibly executing a command in a new process Constants --------- PIPE: Special value that indicates a pipe should be created STDOUT: Special value that indicates that stderr should go to stdout """Instruction context: ^ SyntaxError: invalid syntax error: command '/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/python' failed with exit status 1 error: Bad exit status from /var/tmp/rpm-tmp.yc4Iwj (%install) Bad exit status from /var/tmp/rpm-tmp.yc4Iwj (%install) Traceback (most recent call last): File "/tmp/tmp6h6zfdg2.py", line 5, in byte_compile(files, optimize=1, force=None, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 359, in byte_compile import subprocess File "/tmp/subprocess.py", line 28 """Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. For a complete description of this module see the Python documentation. Main API ======== call(...): Runs a command, waits for it to complete, then returns the return code. check_call(...): Same as call() but raises CalledProcessError() if return code is not 0 check_output(...): Same as check_call() but returns the contents of stdout instead of a return code Popen(...): A class for flexibly executing a command in a new process Constants --------- PIPE: Special value that indicates a pipe should be created STDOUT: Special value that indicates that stderr should go to stdout """Instruction context: ^ SyntaxError: invalid syntax error: command '/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/python' failed with exit status 1 error: Bad exit status from /var/tmp/rpm-tmp.ccdCTP (%install) Bad exit status from /var/tmp/rpm-tmp.ccdCTP (%install) Traceback (most recent call last): File "/tmp/tmp_slztuax.py", line 5, in byte_compile(files, optimize=1, force=None, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 359, in byte_compile import subprocess File "/tmp/subprocess.py", line 28 """Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. 
For a complete description of this module see the Python documentation. Main API ======== call(...): Runs a command, waits for it to complete, then returns the return code. check_call(...): Same as call() but raises CalledProcessError() if return code is not 0 check_output(...): Same as check_call() but returns the contents of stdout instead of a return code Popen(...): A class for flexibly executing a command in a new process Constants --------- PIPE: Special value that indicates a pipe should be created STDOUT: Special value that indicates that stderr should go to stdout """Instruction context: ^ SyntaxError: invalid syntax test_byte_compile (distutils.tests.test_install_lib.InstallLibTestCase) ... ERROR (...) test_no_optimize_flag (distutils.tests.test_bdist_rpm.BuildRpmTestCase) ... ERROR test_quiet (distutils.tests.test_bdist_rpm.BuildRpmTestCase) ... ERROR (...) test_byte_compile_optimized (distutils.tests.test_build_py.BuildPyTestCase) ... ERROR ====================================================================== ERROR: test_byte_compile (distutils.tests.test_install_lib.InstallLibTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/tests/test_install_lib.py", line 46, in test_byte_compile cmd.byte_compile([f]) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/command/install_lib.py", line 136, in byte_compile byte_compile(files, optimize=self.optimize, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 425, in byte_compile spawn(cmd, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 36, in spawn _spawn_posix(cmd, search_path, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 157, in _spawn_posix raise DistutilsExecError( distutils.errors.DistutilsExecError: command '/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/python' failed with exit status 1 ====================================================================== ERROR: test_no_optimize_flag (distutils.tests.test_bdist_rpm.BuildRpmTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/tests/test_bdist_rpm.py", line 120, in test_no_optimize_flag cmd.run() File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/command/bdist_rpm.py", line 363, in run self.spawn(rpm_cmd) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/cmd.py", line 365, in spawn spawn(cmd, search_path, dry_run=self.dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 36, in spawn _spawn_posix(cmd, search_path, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 157, in _spawn_posix raise DistutilsExecError( distutils.errors.DistutilsExecError: command 'rpmbuild' failed with exit status 1 ====================================================================== ERROR: test_quiet (distutils.tests.test_bdist_rpm.BuildRpmTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/tests/test_bdist_rpm.py", line 77, in test_quiet cmd.run() File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/command/bdist_rpm.py", line 363, in run self.spawn(rpm_cmd) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/cmd.py", line 365, in spawn spawn(cmd, search_path, dry_run=self.dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 36, in spawn _spawn_posix(cmd, search_path, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 157, in _spawn_posix raise DistutilsExecError( distutils.errors.DistutilsExecError: command 'rpmbuild' failed with exit status 1 ====================================================================== ERROR: test_byte_compile_optimized (distutils.tests.test_build_py.BuildPyTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/tests/test_build_py.py", line 118, in test_byte_compile_optimized cmd.run() File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/command/build_py.py", line 95, in run self.byte_compile(self.get_outputs(include_bytecode=0)) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/command/build_py.py", line 391, in byte_compile byte_compile(files, optimize=self.optimize, File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/util.py", line 425, in byte_compile spawn(cmd, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 36, in spawn _spawn_posix(cmd, search_path, dry_run=dry_run) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/Lib/distutils/spawn.py", line 157, in _spawn_posix raise DistutilsExecError( distutils.errors.DistutilsExecError: command '/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64/build/python' failed with exit status 1 ---------------------------------------------------------------------- Ran 248 tests in 3.360s FAILED (errors=4, skipped=29) test test_distutils failed ---------- components: Distutils, Tests messages: 359531 nosy: dstufft, eric.araujo, vstinner priority: normal severity: normal status: open title: test_distutils fails on PPC64 Fedora 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 14:42:59 2020 From: report at bugs.python.org (Daniel Pezoa) Date: Tue, 07 Jan 2020 19:42:59 +0000 Subject: [New-bugs-announce] [issue39249] difflib SequenceMatcher 200 char length limitation for ratio calculation Message-ID: <1578426179.61.0.269122492915.issue39249@roundup.psfhosted.org> New submission from Daniel Pezoa : I am using the SequenceMatcher object of the difflib library and I have noticed that a drastic failure occurs when the text strings exceed 200 characters Source code: ===================================================================== from difflib import SequenceMatcher def main(): # Throw a value of 7% when they are almost equal for having more than 200 characters text1 = "aceite y pez hirviendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan 
desprenderse ni arrancarse sin quemarse las manos" text2 = "aceite y pedir viendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan desprenderse ni arrancarse sin quemarse las manos" m = SequenceMatcher(None, text1, text2) x = m.ratio() porcentaje = (int)(x * 100) print("{}\n\n{}\n\n{}\n\nBad: {}%\n\n".format(text1, text2, x, porcentaje)) # Throw the expected value of 99% for having less than 200 characters text1 = "aceite y pez hirviendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan desprenderse ni arrancarse sin quemarse las" text2 = "aceite y pedir viendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan desprenderse ni arrancarse sin quemarse las" text1 = "aceite y pez hirviendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas" text2 = "aceite y pedir viendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas" m = SequenceMatcher(None, text1, text2) x = m.ratio() porcentaje = (int)(x * 100) print("{}\n\n{}\n\n{}\n\nGood: {}%".format(text1, text2, x, porcentaje)) if __name__== "__main__": main() Output: ====================================================================== aceite y pez hirviendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan desprenderse ni arrancarse sin quemarse las manos aceite y pedir viendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas y encendidas que no podan desprenderse ni arrancarse sin quemarse las manos 0.0794044665012407 Bad: 7% aceite y pez hirviendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas aceite y pedir viendo que vena de la plataforma y de la cual salan tambin muchsimas flechas rodeadas de estopas alquitranadas 0.9800796812749004 Good: 98% ---------- components: Library (Lib) messages: 359534 nosy: Daniel Pezoa priority: normal severity: normal status: open title: difflib SequenceMatcher 200 char length limitation for ratio calculation type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 15:27:13 2020 From: report at bugs.python.org (Filipp Lepalaan) Date: Tue, 07 Jan 2020 20:27:13 +0000 Subject: [New-bugs-announce] [issue39250] os.path.commonpath() not so common Message-ID: <1578428833.37.0.29494253109.issue39250@roundup.psfhosted.org> New submission from Filipp Lepalaan : The documentation describes os.path.commonpath() as: "Return the longest common sub-path of each pathname in the sequence paths. Raise ValueError if paths contain both absolute and relative pathnames, the paths are on the different drives or if paths is empty. Unlike commonprefix(), this returns a valid path." However, in practice the function seems to always return the *shortest* common path. Steps to reproduce: import os.path paths = ['/var', '/var/log', '/var/log/nginx'] os.path.commonpath(paths) Expected results: '/var/log' Actual results: '/var' I've tried this with Python 3.5, 3.6, 3.7 and 3.8.1 on both MacOS and Debian/Linux and the results are consistent. 
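A short worked example (using posixpath so it runs on any platform) suggests the function is doing what the documentation says: '/var' is itself one of the inputs, so the longest sub-path common to all three really is '/var'; dropping it deepens the result to '/var/log':

```python
import posixpath

paths = ['/var', '/var/log', '/var/log/nginx']
print(posixpath.commonpath(paths))       # '/var'
print(posixpath.commonpath(paths[1:]))   # '/var/log'
```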
---------- components: Library (Lib) messages: 359535 nosy: filipp priority: normal severity: normal status: open title: os.path.commonpath() not so common versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 16:13:42 2020 From: report at bugs.python.org (Brian McKim) Date: Tue, 07 Jan 2020 21:13:42 +0000 Subject: [New-bugs-announce] [issue39251] outdated windows store links in WindowsApps folder Message-ID: <1578431622.31.0.461218812938.issue39251@roundup.psfhosted.org> New submission from Brian McKim : When I uninstalled the windows store version 3.8 it appears to have placed two links in my \AppData\Local\Microsoft\WindowsApps folder (though they may have always been there), python.exe and python3.exe. When I run these in PowerShell both send me to the 3.7 version in the store. There is a note on the page stating this version is not guaranteed to be stable and points them to the 3.8 version. As these are to make the install as painless as possible these should point to the most stable version; 3.8 in the store. ---------- components: Windows files: Annotation 2020-01-07 161226.png messages: 359547 nosy: Brian McKim, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: outdated windows store links in WindowsApps folder type: enhancement versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48831/Annotation 2020-01-07 161226.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 17:27:03 2020 From: report at bugs.python.org (Ryan McCampbell) Date: Tue, 07 Jan 2020 22:27:03 +0000 Subject: [New-bugs-announce] [issue39252] email.contentmanager.raw_data_manager bytes handler breaks on 7bit cte Message-ID: <1578436023.85.0.680298303035.issue39252@roundup.psfhosted.org> New submission from Ryan McCampbell : The email.contentmanager.set_bytes_content function which handles bytes content for raw_data_manager fails when passed cte="7bit" with an AttributeError: 'bytes' object has no attribute 'encode'. This is probably not a major use case since bytes are generally not for 7-bit data but the failure is clearly not intentional. ---------- components: Library (Lib) messages: 359555 nosy: rmccampbell7 priority: normal severity: normal status: open title: email.contentmanager.raw_data_manager bytes handler breaks on 7bit cte type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 7 18:54:53 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 07 Jan 2020 23:54:53 +0000 Subject: [New-bugs-announce] [issue39253] Running the test suite with --junit-xml and -R incorrectly reports refleaks Message-ID: <1578441293.88.0.429793229703.issue39253@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : ./python -m test test_list -R : --junit-xml something 0:00:00 load avg: 0.52 Run tests sequentially 0:00:00 load avg: 0.52 [1/1] test_list beginning 9 repetitions 123456789 ......... 
test_list leaked [798, 798, 798, 798] references, sum=3192 test_list leaked [345, 345, 345, 345] memory blocks, sum=1380 test_list failed == Tests result: FAILURE == 1 test failed: test_list Total duration: 3.4 sec Tests result: FAILURE ---------- components: Tests messages: 359561 nosy: pablogsal, steve.dower, vstinner priority: normal severity: normal status: open title: Running the test suite with --junit-xml and -R incorrectly reports refleaks versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 01:25:14 2020 From: report at bugs.python.org (Arkadiusz Miskiewicz Arkadiusz Miskiewicz) Date: Wed, 08 Jan 2020 06:25:14 +0000 Subject: [New-bugs-announce] [issue39254] python shebang in python3 tarball files Message-ID: <1578464714.96.0.727607440351.issue39254@roundup.psfhosted.org> New submission from Arkadiusz Miskiewicz Arkadiusz Miskiewicz : Python 3.8.1 files: Lib/encodings/rot_13.py \ Lib/lib2to3/tests/data/different_encoding.py \ Lib/lib2to3/tests/data/false_encoding.py \ Tools/gdb/libpython.py \ Tools/pynche/pynche \ Tools/pynche/pynche.pyw \ Tools/scripts/2to3 \ Tools/scripts/smelly.py \ python-gdb.py are calling python (which often points to python2) while should be calling python3 explicitly (unless python2 is required for using these which would be weird in Python 3 package) ---------- components: Build messages: 359567 nosy: arekm priority: normal severity: normal status: open title: python shebang in python3 tarball files versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 03:44:16 2020 From: report at bugs.python.org (Kallah) Date: Wed, 08 Jan 2020 08:44:16 +0000 Subject: [New-bugs-announce] [issue39255] Windows and Unix run-time differences Message-ID: <1578473056.67.0.190561060643.issue39255@roundup.psfhosted.org> New submission from Kallah : In the attached sync.py, running it on windows and Unix (Ubuntu and OSX tested) will grant different results. 
On windows it will output: x = 1 x = 2 x = 3 y = 1 x = 4 x = 5 x = 6 x = 7 y = 1 While on ubuntu it will output: x = 1 x = 2 x = 3 y = 4 x = 4 x = 5 x = 6 x = 7 y = 8 ---------- components: Windows files: sync.py messages: 359569 nosy: Kallah, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows and Unix run-time differences type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48832/sync.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 04:41:47 2020 From: report at bugs.python.org (Conrad) Date: Wed, 08 Jan 2020 09:41:47 +0000 Subject: [New-bugs-announce] [issue39256] Exception handler set by set_exception_handler is called only when I run coroutine by create_task Message-ID: <1578476507.18.0.348977530849.issue39256@roundup.psfhosted.org> New submission from Conrad : import asyncio async def test(): raise Exception("Something goes wrong") async def main(): #Un-comment either 1 of the following 3 lines # await test() # will not call exception_handler # await asyncio.gather(test()) # will not call exception_handler # asyncio.create_task(test()) # will call exception_handler await asyncio.sleep(5) def exception_handler(loop, context): exception = context.get("exception", None) print("exception_handler", exception) if __name__ == "__main__": loop = asyncio.get_event_loop() loop.set_exception_handler(exception_handler) loop.run_until_complete(main()) print("Job done") ---------- components: asyncio messages: 359571 nosy: asvetlov, conraddd, yselivanov priority: normal severity: normal status: open title: Exception handler set by set_exception_handler is called only when I run coroutine by create_task type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 05:34:34 2020 From: report at bugs.python.org (Ron Serruya) Date: Wed, 08 Jan 2020 10:34:34 +0000 Subject: [New-bugs-announce] [issue39257] contextvars.Context.run hangs forever in ProccessPoolExecutor Message-ID: <1578479674.06.0.280894954081.issue39257@roundup.psfhosted.org> New submission from Ron Serruya : Sending Context.run to another process via ProccessPoolExecutor hangs forever: ``` from contextvars import ContextVar, copy_context from concurrent.futures import ProcessPoolExecutor from multiprocessing import Process var: ContextVar[int] = ContextVar('var',default=None) if __name__ == '__main__': # ***** This hangs forever ***** with ProcessPoolExecutor(max_workers=1) as pp: ctx = copy_context() pp.submit(ctx.run, list) # ****** This throws 'cannot pickle Context' # ***** This hangs forever ***** ctx = copy_context() p = Process(target=ctx.run, args=(list,)) p.start() p.join() ``` python version is 3.8.0 running on Mac OSX 10.15.1 ---------- messages: 359575 nosy: ronserruya priority: normal severity: normal status: open title: contextvars.Context.run hangs forever in ProccessPoolExecutor type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 06:17:19 2020 From: report at bugs.python.org (Tony Hirst) Date: Wed, 08 Jan 2020 11:17:19 +0000 Subject: [New-bugs-announce] [issue39258] json serialiser errors with numpy int64 Message-ID: <1578482239.09.0.866607627274.issue39258@roundup.psfhosted.org> Change by Tony Hirst : ---------- components: 
Library (Lib) nosy: Tony Hirst priority: normal severity: normal status: open title: json serialiser errors with numpy int64 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 09:55:46 2020 From: report at bugs.python.org (Dong-hee Na) Date: Wed, 08 Jan 2020 14:55:46 +0000 Subject: [New-bugs-announce] [issue39259] poplib.POP3/POP3_SSL should reject timeout = 0 Message-ID: <1578495346.65.0.262866907098.issue39259@roundup.psfhosted.org> New submission from Dong-hee Na : Since poplib.POP3/POP3_SSL's implementation depends on socket.makefile, the client should reject if the timeout is zero. Because socket.makefile said that 'The socket must be in blocking mode' and if we set timeout to zero, the client does not operate as normal. ---------- components: Library (Lib) messages: 359596 nosy: corona10, vstinner priority: normal severity: normal status: open title: poplib.POP3/POP3_SSL should reject timeout = 0 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 10:53:12 2020 From: report at bugs.python.org (Thomas Passin) Date: Wed, 08 Jan 2020 15:53:12 +0000 Subject: [New-bugs-announce] [issue39260] find_executable() Fails To Find Many Executables on Windows Message-ID: <1578498792.76.0.781232335261.issue39260@roundup.psfhosted.org> New submission from Thomas Passin : On Windows, find_executable() in distutils.spawn may fail to find executables that it ought to. This is because the PATH environmental variable no longer includes %ProgramFiles% and %ProgramFiles(x86)%. At least, that is the case on my brand new Windows 10 Computer running Windows 10 Pro. In the past, I'm fairly sure these directories were always included on the PATH. Some programs add their install directory to the Windows PATH, but many don't. For example, on my new computer, Pandoc added itself to the PATH but EditPlus and Notepad++ did not. So >>> find_executable('pandoc') 'C:\\Program Files\\Pandoc\\pandoc.exe' but >>> find_executable('editplus') # no result >>> find_executable('notepad++') # no result I suggest that in Windows, find_executable() should check for and add the %ProgramFiles% and %ProgramFiles(x86)% directories to the system PATH before executing its search. ---------- components: Distutils messages: 359602 nosy: dstufft, eric.araujo, tbpassin priority: normal severity: normal status: open title: find_executable() Fails To Find Many Executables on Windows versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 13:11:55 2020 From: report at bugs.python.org (Alex Henrie) Date: Wed, 08 Jan 2020 18:11:55 +0000 Subject: [New-bugs-announce] [issue39261] Dead assignment in pyinit_config Message-ID: <1578507115.64.0.589303193818.issue39261@roundup.psfhosted.org> New submission from Alex Henrie : The function pyinit_config currently contains the following line: config = &tstate->interp->config; However, the config variable is not used after that point. 
Victor Stinner has confirmed that this assignment is unnecessary: https://github.com/python/cpython/pull/16267/files#r364216184 ---------- messages: 359619 nosy: alex.henrie priority: normal severity: normal status: open title: Dead assignment in pyinit_config _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 13:22:29 2020 From: report at bugs.python.org (Alex Henrie) Date: Wed, 08 Jan 2020 18:22:29 +0000 Subject: [New-bugs-announce] [issue39262] Unused error message in _sharedexception_bind Message-ID: <1578507749.56.0.495941888024.issue39262@roundup.psfhosted.org> New submission from Alex Henrie : The function _sharedexception_bind currently has the following bit of code in two places: if (PyErr_ExceptionMatches(PyExc_MemoryError)) { failure = "out of memory copying exception type name"; } failure = "unable to encode and copy exception type name"; The "out of memory" message will never appear because it is immediately overwritten with a more generic message. ---------- messages: 359620 nosy: alex.henrie priority: normal severity: normal status: open title: Unused error message in _sharedexception_bind type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 14:36:07 2020 From: report at bugs.python.org (Jameson Nash) Date: Wed, 08 Jan 2020 19:36:07 +0000 Subject: [New-bugs-announce] [issue39263] Windows Installer can't select TargetDir in UI? Message-ID: <1578512167.47.0.555235356189.issue39263@roundup.psfhosted.org> New submission from Jameson Nash : When running the installer on Windows, I wanted to put Python in an easily accessible path (C:\Python38 in my case), however, the GUI didn't seem to provide any way to change the path (from AppData). And additionally, the "install for all users" checkbox seemed to be broken (didn't respond to mouse events and remained checked). Running the installer with the argument TargetDir=C:\Python38 worked, but I wouldn't have expected to need to hack my way to just picking the install directory. ---------- components: Installation, Windows messages: 359627 nosy: Jameson Nash, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Installer can't select TargetDir in UI? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 18:51:16 2020 From: report at bugs.python.org (Bar Harel) Date: Wed, 08 Jan 2020 23:51:16 +0000 Subject: [New-bugs-announce] [issue39264] Fix UserDict.get to account for __missing__ Message-ID: <1578527476.03.0.196405087716.issue39264@roundup.psfhosted.org> New submission from Bar Harel : Unlike dict, UserDict.__missing__ is called on .get(). After a discussion on the Python-Dev mailing list, mimicking dict's behavior was chosen as a solution to the issue. 
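A minimal sketch (hypothetical class names) of the divergence being fixed here, on versions where UserDict still inherits get() from Mapping: dict.get() never consults __missing__, while the inherited Mapping.get() goes through __getitem__ and therefore does:

```python
from collections import UserDict

class DictWithMissing(dict):
    def __missing__(self, key):
        return 'missing'

class UserDictWithMissing(UserDict):
    def __missing__(self, key):
        return 'missing'

d = DictWithMissing()
u = UserDictWithMissing()

print(d['key'])      # 'missing' -- dict.__getitem__ invokes __missing__
print(d.get('key'))  # None      -- dict.get() never calls __missing__

print(u['key'])      # 'missing' -- UserDict.__getitem__ also supports __missing__
print(u.get('key'))  # 'missing' before this fix (Mapping.get() falls back to
                     # __getitem__); None once UserDict.get() mimics dict
```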
---------- components: Library (Lib) messages: 359633 nosy: bar.harel priority: normal severity: normal status: open title: Fix UserDict.get to account for __missing__ type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 18:55:35 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 08 Jan 2020 23:55:35 +0000 Subject: [New-bugs-announce] [issue39265] test_ssl failed on AMD64 RHEL8 Refleaks 2.7 Message-ID: <1578527735.81.0.0696130913832.issue39265@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 RHEL8 Refleaks 2.7: https://buildbot.python.org/all/#/builders/102/builds/45 test_protocol_sslv23 (test.test_ssl.ThreadedTests) Connecting to an SSLv23 server with various client options ... Could not scan /etc/ssl/openssl.cnf for MinProtocol: [Errno 2] No such file or directory: '/etc/ssl/openssl.cnf' PROTOCOL_TLS->PROTOCOL_TLS CERT_NONE PROTOCOL_TLSv1->PROTOCOL_TLS CERT_NONE ERROR Connecting to a TLSv1.1 server with various client options. ... Could not scan /etc/ssl/openssl.cnf for MinProtocol: [Errno 2] No such file or directory: '/etc/ssl/openssl.cnf' PROTOCOL_TLSv1_1->PROTOCOL_TLSv1_1 CERT_NONE {PROTOCOL_TLS->PROTOCOL_TLSv1_1} CERT_NONE PROTOCOL_TLSv1_1->PROTOCOL_TLS CERT_NONE ERROR ====================================================================== ERROR: test_protocol_sslv23 (test.test_ssl.ThreadedTests) Connecting to an SSLv23 server with various client options ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 189, in f return func(*args, **kwargs) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2402, in test_protocol_sslv23 try_protocol_combo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_TLSv1, 'TLSv1') File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2134, in try_protocol_combo chatty=False, connectionchatty=False) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2062, in server_params_test s.connect((HOST, server.port)) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 864, in connect self._real_connect(addr, False) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 855, in _real_connect self.do_handshake() File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 828, in do_handshake self._sslobj.do_handshake() SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727) ====================================================================== ERROR: test_protocol_tlsv1_1 (test.test_ssl.ThreadedTests) Connecting to a TLSv1.1 server with various client options. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 189, in f return func(*args, **kwargs) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2477, in test_protocol_tlsv1_1 try_protocol_combo(ssl.PROTOCOL_SSLv23, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2134, in try_protocol_combo chatty=False, connectionchatty=False) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_ssl.py", line 2062, in server_params_test s.connect((HOST, server.port)) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 864, in connect self._real_connect(addr, False) File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 855, in _real_connect self.do_handshake() File "/home/buildbot/buildarea/2.7.cstratak-RHEL8-x86_64.refleak/build/Lib/ssl.py", line 828, in do_handshake self._sslobj.do_handshake() SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727) ---------- assignee: christian.heimes components: SSL, Tests messages: 359634 nosy: christian.heimes, vstinner priority: normal severity: normal status: open title: test_ssl failed on AMD64 RHEL8 Refleaks 2.7 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 19:04:08 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 09 Jan 2020 00:04:08 +0000 Subject: [New-bugs-announce] [issue39266] [2.7] test_bsddb3 leaked [1, 1, 1] file descriptors on AMD64 RHEL7 Refleaks 2.7 Message-ID: <1578528248.0.0.1734131404.issue39266@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 RHEL7 Refleaks 2.7: https://buildbot.python.org/all/#/builders/51/builds/13 test_bsddb3 leaked [1, 1, 1] file descriptors, sum=3 ---------- components: Tests messages: 359638 nosy: vstinner priority: normal severity: normal status: open title: [2.7] test_bsddb3 leaked [1, 1, 1] file descriptors on AMD64 RHEL7 Refleaks 2.7 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 19:06:08 2020 From: report at bugs.python.org (Bar Harel) Date: Thu, 09 Jan 2020 00:06:08 +0000 Subject: [New-bugs-announce] [issue39267] Fix dict's __missing__ documentation Message-ID: <1578528368.02.0.98072411611.issue39267@roundup.psfhosted.org> New submission from Bar Harel : Continuing bpo-39264, and according to the mailing list discussion at Python-Dev. Fixing dict's __missing__ documentation. Clarify .get() does not call __missing__, and move __missing__ from the data model to dict's section as it's not a general object or ABC method but a dict-only implementation. 
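As a concrete illustration of the behaviour the documentation should spell out (collections.defaultdict is the standard-library user of this hook): __missing__ is only consulted by d[key], never by get() or membership tests:

```python
from collections import defaultdict

dd = defaultdict(list)

print(dd['a'])      # []    -- __getitem__ calls __missing__, which inserts list()
print(dd.get('b'))  # None  -- get() bypasses __missing__ entirely
print('b' in dd)    # False -- membership tests never call __missing__ either
print(dict(dd))     # {'a': []} -- only the __getitem__ access created a key
```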
---------- assignee: docs at python components: Documentation messages: 359639 nosy: bar.harel, docs at python priority: normal severity: normal status: open title: Fix dict's __missing__ documentation type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 19:10:59 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 09 Jan 2020 00:10:59 +0000 Subject: [New-bugs-announce] [issue39268] test_asyncio: test_create_server_ssl_verified() failed on AMD64 FreeBSD Non-Debug 3.x Message-ID: <1578528659.39.0.760822597005.issue39268@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 FreeBSD Non-Debug 3.x: https://buildbot.python.org/all/#/builders/214/builds/123 ====================================================================== ERROR: test_create_server_ssl_verified (test.test_asyncio.test_events.SelectEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-9e36.nondebug/build/Lib/test/test_asyncio/test_events.py", line 1106, in test_create_server_ssl_verified proto.transport.close() AttributeError: 'NoneType' object has no attribute 'close' ---------- components: Tests, asyncio messages: 359640 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_create_server_ssl_verified() failed on AMD64 FreeBSD Non-Debug 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 20:02:13 2020 From: report at bugs.python.org (wim glenn) Date: Thu, 09 Jan 2020 01:02:13 +0000 Subject: [New-bugs-announce] [issue39269] Descriptor how-to guide wanting update for 3.6+ features Message-ID: <1578531733.6.0.729486446761.issue39269@roundup.psfhosted.org> New submission from wim glenn : https://docs.python.org/3/howto/descriptor.html Current descriptor how-to guide, above, has no mention about API features added since Python 3.6 (see __set_name__ in PEP 487) It's an important and useful piece of using descriptors effectively and the guide could be updated to include some info about that. There's some info in datamodel.html (e.g. 3.3.3.6. Creating the class object) but a mention in the how-to guide would be welcome too. ---------- assignee: docs at python components: Documentation messages: 359642 nosy: docs at python, wim.glenn priority: normal severity: normal status: open title: Descriptor how-to guide wanting update for 3.6+ features versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 23:14:48 2020 From: report at bugs.python.org (Alex Henrie) Date: Thu, 09 Jan 2020 04:14:48 +0000 Subject: [New-bugs-announce] [issue39270] Dead assignment in config_init_module_search_paths Message-ID: <1578543288.1.0.568310350054.issue39270@roundup.psfhosted.org> New submission from Alex Henrie : config_init_module_search_paths currently has the following code: const wchar_t *p = sys_path; while (1) { p = wcschr(sys_path, delim); The first assignment to p is unnecessary because it is immediately overwritten. 
Victor Stinner suggested moving the variable declaration into the loop itself to clarify that it does not need to be initialized elsewhere: https://github.com/python/cpython/pull/16267/files#r364216448 ---------- messages: 359652 nosy: alex.henrie priority: normal severity: normal status: open title: Dead assignment in config_init_module_search_paths type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 23:22:03 2020 From: report at bugs.python.org (Alex Henrie) Date: Thu, 09 Jan 2020 04:22:03 +0000 Subject: [New-bugs-announce] [issue39271] Dead assignment in pattern_subx Message-ID: <1578543723.64.0.00400211549211.issue39271@roundup.psfhosted.org> New submission from Alex Henrie : The function pattern_subx currently sets the variable b to charsize, but that variable is reset to STATE_OFFSET(&state, state.start) before it is ever used. ---------- components: Regular Expressions messages: 359653 nosy: alex.henrie, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Dead assignment in pattern_subx type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 8 23:27:44 2020 From: report at bugs.python.org (Alex Henrie) Date: Thu, 09 Jan 2020 04:27:44 +0000 Subject: [New-bugs-announce] [issue39272] Dead assignment in _ssl__SSLContext_load_verify_locations_impl Message-ID: <1578544064.68.0.445985334082.issue39272@roundup.psfhosted.org> New submission from Alex Henrie : The function _ssl__SSLContext_load_verify_locations_impl currently contains the following code: if (r != 1) { ok = 0; if (errno != 0) { ERR_clear_error(); PyErr_SetFromErrno(PyExc_OSError); } else { _setSSLError(NULL, 0, __FILE__, __LINE__); } goto error; } } goto end; error: ok = 0; It is unnecessary to set ok to 0 before jumping to error because the first instruction after the error label does the same thing. ---------- assignee: christian.heimes components: SSL messages: 359654 nosy: alex.henrie, christian.heimes priority: normal severity: normal status: open title: Dead assignment in _ssl__SSLContext_load_verify_locations_impl type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 01:59:01 2020 From: report at bugs.python.org (Michael Yoo) Date: Thu, 09 Jan 2020 06:59:01 +0000 Subject: [New-bugs-announce] [issue39273] ncurses does not include BUTTON5_* constants Message-ID: <1578553141.45.0.141104357774.issue39273@roundup.psfhosted.org> New submission from Michael Yoo : Hi, Recently I was working with ncurses, and when handling the mouse scroll events, I noticed that the curses library does not include the BUTTON5_* macros provided by ncurses. On my system, BUTTON5 corresponds to the mouse down event. Is there a reason for this, or has it just not been updated? If so, the expectation is that it exists. 
Relevant source location: https://github.com/python/cpython/blob/2bc3434/Modules/_cursesmodule.c#L4668 ---------- components: Library (Lib) messages: 359657 nosy: michael.yoo at akunacapital.com priority: normal severity: normal status: open title: ncurses does not include BUTTON5_* constants type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 07:43:51 2020 From: report at bugs.python.org (=?utf-8?q?Fran=C3=A7ois_Durand?=) Date: Thu, 09 Jan 2020 12:43:51 +0000 Subject: [New-bugs-announce] [issue39274] Conversion from fractions.Fraction to bool Message-ID: <1578573831.64.0.0454191861618.issue39274@roundup.psfhosted.org> New submission from Fran?ois Durand : As of now, fractions.Fraction.__bool__ is implemented as: ``return a._numerator != 0``. However, this does not necessary return a bool (which would be desired). In particular, when the numerator is a numpy integer, this returns a numpy bool instead. Another solution would be to implement fractions.Fraction.__bool__ as: ``return bool(numerator)``. What do you think? This message follows a thread here: https://github.com/numpy/numpy/issues/15277 . ---------- messages: 359673 nosy: francois-durand priority: normal severity: normal status: open title: Conversion from fractions.Fraction to bool type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 08:25:48 2020 From: report at bugs.python.org (Facundo Batista) Date: Thu, 09 Jan 2020 13:25:48 +0000 Subject: [New-bugs-announce] [issue39275] Traceback off by one line when Message-ID: <1578576348.64.0.412986299263.issue39275@roundup.psfhosted.org> New submission from Facundo Batista : When using pdb to debug, the traceback is off by one line. For example, this simple script: ``` print("line 1") import pdb;pdb.set_trace() print("line 2") print("line 3", broken) print("line 4") ``` ...when run produces the following traceback (after hitting 'n' in pdb, of course): ``` Traceback (most recent call last): File "/home/facundo/foo.py", line 3, in print("line 2") NameError: name 'broken' is not defined ``` ---------- messages: 359678 nosy: facundobatista priority: normal severity: normal status: open title: Traceback off by one line when versions: Python 3.6, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 08:52:14 2020 From: report at bugs.python.org (=?utf-8?q?Pawe=C5=82_Karczewski?=) Date: Thu, 09 Jan 2020 13:52:14 +0000 Subject: [New-bugs-announce] [issue39276] type() cause segmentation fault in callback function called from C extension Message-ID: <1578577934.03.0.00618526975979.issue39276@roundup.psfhosted.org> New submission from Pawe? Karczewski : How to reproduce: 1. Create callback function, which may take any object and run type() on it def builtin_type_in_callback(obj): type(obj) 2. Create C extension with two types defined in it - Internal and External. 
Eternal type should implement method (let's name it Call), which can get callback function static PyObject * Call(ExternalObject *self, PyObject* args) { PyObject* python_callback; if (!PyArg_ParseTuple(args, "O:set_callback", &python_callback)) { return NULL; } callback_runner(python_callback); if(PyErr_Occurred() != NULL) return NULL; Py_RETURN_NONE; } Inside this function create object of Internal type and pass it to callback function void callback_runner(void* callback_function) { InternalObject *entry = PyObject_New(InternalObject, &InternalType); PyObject_Init((PyObject*)entry, &InternalType); PyObject *args = PyTuple_New(1); if (args != NULL) { if (PyTuple_SetItem(args, 0, (PyObject *)entry) == 0) { PyObject *res = PyObject_CallObject((PyObject *) callback_function, args); Py_XDECREF(res); } } When type() is called on object of Internal type segmentation fault occur. However, if dir() was called on such object before type(), type() works properly and returns type of Internal Object. For more details please look into reproducer code. ---------- components: C API files: cpython_type_segfaulter.tgz messages: 359680 nosy: karczex priority: normal severity: normal status: open title: type() cause segmentation fault in callback function called from C extension versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48834/cpython_type_segfaulter.tgz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 09:43:45 2020 From: report at bugs.python.org (Thomas Grainger) Date: Thu, 09 Jan 2020 14:43:45 +0000 Subject: [New-bugs-announce] [issue39277] _PyTime_FromDouble() fails to detect an integer overflow when converting a C double to a C int64_t Message-ID: <1578581025.66.0.18478821427.issue39277@roundup.psfhosted.org> New submission from Thomas Grainger : _PyTime_FromDouble() fails to detect an integer overflow when converting a C double to a C int64_t Python 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import time >>> time.sleep(9223372036.854777) Traceback (most recent call last): File "", line 1, in ValueError: sleep length must be non-negative ---------- messages: 359682 nosy: graingert, vstinner priority: normal severity: normal status: open title: _PyTime_FromDouble() fails to detect an integer overflow when converting a C double to a C int64_t type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 12:37:23 2020 From: report at bugs.python.org (Carl Bordum Hansen) Date: Thu, 09 Jan 2020 17:37:23 +0000 Subject: [New-bugs-announce] [issue39278] add docstrings to functions in pdb module Message-ID: <1578591443.64.0.597501227111.issue39278@roundup.psfhosted.org> New submission from Carl Bordum Hansen : The functions are documented, but not in doc strings which means you cannot call help() on them. 
>From this twitter thread: https://twitter.com/raymondh/status/1211414561468952577 ---------- assignee: docs at python components: Documentation messages: 359689 nosy: carlbordum, docs at python priority: normal severity: normal status: open title: add docstrings to functions in pdb module versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 16:02:11 2020 From: report at bugs.python.org (Ram Rachum) Date: Thu, 09 Jan 2020 21:02:11 +0000 Subject: [New-bugs-announce] [issue39279] Don't allow non-Ascii digits in platform.py Message-ID: <1578603731.26.0.701759063569.issue39279@roundup.psfhosted.org> New submission from Ram Rachum : The platform.py module takes non-Ascii digits in regexes in places it shouldn't. e.g. digits like ? and ? and accepted, when only the Ascii digits between 0-9 should be accepted. ---------- components: Library (Lib) messages: 359694 nosy: cool-RR priority: normal severity: normal status: open title: Don't allow non-Ascii digits in platform.py type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 16:10:55 2020 From: report at bugs.python.org (Ram Rachum) Date: Thu, 09 Jan 2020 21:10:55 +0000 Subject: [New-bugs-announce] [issue39280] Don't allow datetime parsing to accept non-Ascii digits Message-ID: <1578604255.88.0.113689913662.issue39280@roundup.psfhosted.org> New submission from Ram Rachum : I've been doing some research into the use of `\d` in regular expressions in CPython, and any security vulnerabilities that might happen as a result of the fact that it accepts non-Ascii digits like ? and ?. In most places in the CPython codebase, the `re.ASCII` flag is used for such cases, thus ensuring the `re` module prohibits these non-Ascii digits. Personally, my preference is to never use `\d` and always use `[0-9]`. I think that it's rule that's more easy to enforce and less likely to result in a slipup, but that's a matter of personal taste. I found a few places where we don't use the `re.ASCII` flag and we do accept non-Ascii digits. The first and less interesting place is platform.py, where we define patterns used for detecting versions of PyPy and IronPython. I don't know how anyone would exploit that, but personally I'd change that to a [0-9] just to be safe. I've opened bpo-39279 for that. The more sensitive place is the `datetime` module. Happily, the `datetime.datetime.fromisoformat` function rejects non-Ascii digits. But the `datetime.datetime.strptime` function does not: from datetime import datetime time_format = '%Y-%m-%d' parse = lambda s: datetime.strptime(s, time_format) x = '?019-12-22' y = '2019-12-22' assert x != y assert parse(x) == parse(y) print(parse(x)) # Output: 2019-12-22 00:00:00 If user code were to check for uniqueness of a datetime by comparing it as a string, this is where an attacker could fool this logic, by using a non-Ascii digit. Two more interesting points about this: 1. If you'd try the same trick, but you'd insert ? in the day section instead of the year section, Python would reject that. So we definitely have inconsistent behavior. 2. In the documentation for `strptime`, we're referencing the 1989 C standard. Since the first version of Unicode was published in 1991, it's reasonable not to expect the standard to support digits that were introduced in Unicode. 
If you'd scroll down in that documentation, you'll see that we also implement the less-known ISO 8601 standard, where `%G-%V-%u` represents a year, week number, and day of week. The `%G` is vulnerable: from datetime import datetime time_format = '%G-%V-%u' parse = lambda s: datetime.strptime(s, time_format) x = '?019-53-4' y = '2019-53-4' assert x != y assert parse(x) == parse(y) print(parse(x)) # Output: 2020-01-02 00:00:00 I looked at the ISO 8601:2004 document, and under the "Fundamental principles" chapter, it says: This International Standard gives a set of rules for the representation of time points time intervals recurring time intervals. Both accurate and approximate representations can be identified by means of unique and unambiguous expressions specifying the relevant dates, times of day and durations. Note the "unique and unambiguous". By accepting non-Ascii digits, we're breaking the uniqueness requirement of ISO 8601. ---------- components: Library (Lib) messages: 359695 nosy: cool-RR priority: normal severity: normal status: open title: Don't allow datetime parsing to accept non-Ascii digits type: security versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 20:26:34 2020 From: report at bugs.python.org (Dan Snider) Date: Fri, 10 Jan 2020 01:26:34 +0000 Subject: [New-bugs-announce] [issue39281] The CO_NESTED flag is associated with a significant performance cost Message-ID: <1578619594.45.0.842825761348.issue39281@roundup.psfhosted.org> New submission from Dan Snider : The title was carefully worded as I have no idea how or why what is happening is happening, only that it has been like this since a least 3.6.0. That version in particular, by the way, is able to execute a call to a python function with 1 argument 25% faster than 3.8.0 but that may be due at least in part by whatever it is that makes it much faster to a call a unary function wrapped by functools.partial by utilizing the subcript operator on an instance of a partial subtype whose __getitem__ has been set to the data descriptor partial.func... Eg: class Party(partial): __getitem__ = partial.func fast = Party(hash) slow = partial(hash) # the expression `fast[""]` runs approximately 28% faster than # the expression `slow("")`, and partial.func as __getitem__ is # confusingly 139% faster than partial.__call__... 
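The observations above are consistent with how `\d` behaves in the re module for str patterns: it matches any Unicode decimal digit unless re.ASCII is passed or an explicit `[0-9]` class is used. A small sketch using ARABIC-INDIC DIGIT THREE (U+0663) as the non-ASCII digit:

```python
import re

s = '\u0663' + '019-12-22'   # ARABIC-INDIC DIGIT THREE, then '019-12-22'

# Unicode (default) mode: \d matches any character in category Nd.
print(bool(re.fullmatch(r'\d{4}-\d{2}-\d{2}', s)))            # True

# re.ASCII restricts \d to [0-9], so the non-ASCII digit is rejected.
print(bool(re.fullmatch(r'\d{4}-\d{2}-\d{2}', s, re.ASCII)))  # False

# An explicit character class is equivalent to the ASCII behaviour.
print(bool(re.fullmatch(r'[0-9]{4}-[0-9]{2}-[0-9]{2}', s)))   # False

# int() also accepts Unicode digits, which is why such strings can still
# be converted once a permissive pattern has matched them.
print(int('\u0663'))                                          # 3
```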
That rather large digression aside, here's a demonstration of two functions identical in every way except the CO_NESTED bit and perhaps the names: if 1: def Slow(): global Slow class Slow: global slow def slow(self): return self return Slow if Slow(): class Fast: global fast def fast(self): return self import dis dis.show_code(slow) print() dis.show_code(fast) ---------- messages: 359700 nosy: bup priority: normal severity: normal status: open title: The CO_NESTED flag is associated with a significant performance cost type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 20:44:21 2020 From: report at bugs.python.org (Milo D Kerr) Date: Fri, 10 Jan 2020 01:44:21 +0000 Subject: [New-bugs-announce] [issue39282] python-config --embed documentation Message-ID: <1578620661.54.0.572519425656.issue39282@roundup.psfhosted.org> New submission from Milo D Kerr : Update 3.8 embedding documentation to reflect changes in PR 13500 bpo 36721 ---------- messages: 359701 nosy: M.Kerr priority: normal severity: normal status: open title: python-config --embed documentation versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 9 23:32:18 2020 From: report at bugs.python.org (Ajay Tripathi) Date: Fri, 10 Jan 2020 04:32:18 +0000 Subject: [New-bugs-announce] [issue39283] Add ability to inherit unittest arguement parser Message-ID: <1578630738.78.0.369523186494.issue39283@roundup.psfhosted.org> New submission from Ajay Tripathi : I am currently writing a unittest script that requires argparser but since the unittest module already has a ArgumentParser instance, I cannot create and use my own ArgumentParser instance. Possible solution: The problem would be solved I could inherit the parent ArgumentParser instance created here: https://github.com/python/cpython/blob/master/Lib/unittest/main.py#L162 Please let me know if it's feasible / acceptable to change `parent_parser` as `self.parent_parent` for inheritance. I would love to create a pull request for it. ---------- components: Library (Lib) messages: 359704 nosy: atb00ker priority: normal severity: normal status: open title: Add ability to inherit unittest arguement parser type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 03:55:34 2020 From: report at bugs.python.org (aggu) Date: Fri, 10 Jan 2020 08:55:34 +0000 Subject: [New-bugs-announce] [issue39284] Flexible indentation Message-ID: <1578646534.54.0.883213343126.issue39284@roundup.psfhosted.org> New submission from aggu : Indentation should not be "too strict", any number of leading whitespaces greater that its "parent" or "peer" should be allowed. For example, the following code should be allow: a = 1 # step 1 # step 1.1 a = a + 1 # step 1.2 a = a * 2 # step 2 # ? , which is more readable, I think, than: a = 1 # step 1 # step 1.1 a = a + 1 # step 1.2 a = a * 2 # step 2 # ? . 
---------- messages: 359712 nosy: aggu priority: normal severity: normal status: open title: Flexible indentation type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 05:06:56 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 10 Jan 2020 10:06:56 +0000 Subject: [New-bugs-announce] [issue39285] PurePath.match indicates case-sensitive nature and presents a case-insensitive example Message-ID: <1578650816.04.0.399157633818.issue39285@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.match Under PurePath.match there is a statement that case-sensitivity is followed but presents an example in Windows where case insensitive match returns True. This is confusing since match internally uses fnmatch.fnmatchcase that doesn't normalize case but in Windows files are case insensitive. Either the doc could be clarified that it's platform dependent or present a PosixPath example or present two examples with one for Linux and one for Windows that it's platform dependent. As with other methods, case-sensitivity is observed: >>> PureWindowsPath('b.py').match('*.PY') True ---------- assignee: docs at python components: Documentation messages: 359717 nosy: docs at python, pitrou, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: PurePath.match indicates case-sensitive nature and presents a case-insensitive example type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 06:54:59 2020 From: report at bugs.python.org (Alex Grund) Date: Fri, 10 Jan 2020 11:54:59 +0000 Subject: [New-bugs-announce] [issue39286] Configure includes LIBS but does not pass it to distutils Message-ID: <1578657299.08.0.201349832459.issue39286@roundup.psfhosted.org> New submission from Alex Grund : When configuring with `LIBS=-lpthread` env var set, the pthread detection assumes that no flag is necessary and distutils will build all extensions without any flag for pthreads. This will make them fail, when they use certain pthread symbols on certain platforms Example: On Power9 libpthread is a linker script which does not only link in the dynamic library but also a static part which contains e.g. pthread_atfork. As the extension is linked against libpython and that is linked against libpthread the extension has access to all symbols from the dynamic library, but not to any from the static library (if one exists). This makes extensions fail to build on Power9. Related issue as an example: https://github.com/scipy/scipy/issues/11323 EasyBuild is one example that builds Python with that env var set: https://github.com/easybuilders/easybuild-framework/issues/3154 Very related to https://bugs.python.org/issue31769 as the issue is similar: Flag passed during configure not used by distutils. 
---------- components: Build, Distutils messages: 359721 nosy: Alex Grund, dstufft, eric.araujo priority: normal severity: normal status: open title: Configure includes LIBS but does not pass it to distutils versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 07:11:39 2020 From: report at bugs.python.org (Inada Naoki) Date: Fri, 10 Jan 2020 12:11:39 +0000 Subject: [New-bugs-announce] [issue39287] Document UTF-8 mode in the using/windows. Message-ID: <1578658299.42.0.446573986531.issue39287@roundup.psfhosted.org> New submission from Inada Naoki : I think the UTF-8 mode is very useful for Windows users. Let's add section for the UTF-8 mode in the using/windows. ---------- assignee: docs at python components: Documentation messages: 359722 nosy: docs at python, inada.naoki priority: normal severity: normal status: open title: Document UTF-8 mode in the using/windows. versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 10:51:22 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 10 Jan 2020 15:51:22 +0000 Subject: [New-bugs-announce] [issue39288] Add math.nextafter(a, b) Message-ID: <1578671482.19.0.587553829591.issue39288@roundup.psfhosted.org> New submission from STINNER Victor : Linux manual page of nextafter(): """ The nextafter() function return the next representable floating-point value following x in the direction of y. If y is less than x, these functions will return the largest representable number less than x. If x equals y, the functions return y. """ I used this function to round an integer towards zero when casting a float to an integer in bpo-39277. Example in C: #include #include #include int main() { int64_t int64_max = 9223372036854775807LL; double d = (double)int64_max; /* ROUND_HALF_EVEN */ double d2 = nextafter(d, 0.0); printf("i = %ld\n", int64_max); printf("d = %.0f\n", d); printf("d2 = %.0f\n", d2); printf("d - d2 = %.0f\n", d - d2); return 0; } Output: i = 9223372036854775807 d = 9223372036854775808 d2 = 9223372036854774784 d - d2 = 1024 The function exists in numpy: numpy.nextafter(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = Return the next floating-point value after x1 towards x2, element-wise. https://docs.scipy.org/doc/numpy/reference/generated/numpy.nextafter.html Attached PR adds math.nextafter(). 
---------- components: Library (Lib) messages: 359731 nosy: vstinner priority: normal severity: normal status: open title: Add math.nextafter(a, b) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 11:21:48 2020 From: report at bugs.python.org (Vinay Sajip) Date: Fri, 10 Jan 2020 16:21:48 +0000 Subject: [New-bugs-announce] [issue39289] crypt.crypt crashes on 3.9 where it didn't on 3.8 Message-ID: <1578673308.87.0.447911789122.issue39289@roundup.psfhosted.org> New submission from Vinay Sajip : The following script (cryptest.py): import crypt for salt in ('foo', '$2a$04$5BJqKfqMQvV7nS.yUguNcueVirQqDBGaLXSqj.rs.pZPlNR0UX/HK'): t = 'test' h = crypt.crypt(t, salt) print("'%s' with '%s' -> %s" % (t, salt, h)) crashes in 3.9, whereas it doesn't in earlier versions: $ python2.7 cryptest.py 'test' with 'foo' -> foy6TgL.HboTE 'test' with '$2a$04$5BJqKfqMQvV7nS.yUguNcueVirQqDBGaLXSqj.rs.pZPlNR0UX/HK' -> None $ python3.7 cryptest.py 'test' with 'foo' -> foy6TgL.HboTE 'test' with '$2a$04$5BJqKfqMQvV7nS.yUguNcueVirQqDBGaLXSqj.rs.pZPlNR0UX/HK' -> None $ python3.8 cryptest.py 'test' with 'foo' -> foy6TgL.HboTE 'test' with '$2a$04$5BJqKfqMQvV7nS.yUguNcueVirQqDBGaLXSqj.rs.pZPlNR0UX/HK' -> None $ python3.9 cryptest.py 'test' with 'foo' -> foy6TgL.HboTE Traceback (most recent call last): File "/home/vinay/projects/scratch/cpython/cryptest.py", line 5, in h = crypt.crypt(t, salt) File "/home/vinay/.local/lib/python3.9/crypt.py", line 82, in crypt return _crypt.crypt(word, salt) OSError: [Errno 22] Invalid argument This is on Ubuntu 18.04, 64-bit. ---------- components: Library (Lib) keywords: 3.9regression messages: 359732 nosy: vinay.sajip priority: normal severity: normal status: open title: crypt.crypt crashes on 3.9 where it didn't on 3.8 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 12:16:47 2020 From: report at bugs.python.org (Batuhan) Date: Fri, 10 Jan 2020 17:16:47 +0000 Subject: [New-bugs-announce] [issue39290] lib2to3.fixes.fix_import: support imports_as_name in traverse_imports Message-ID: <1578676607.69.0.963910000582.issue39290@roundup.psfhosted.org> New submission from Batuhan : I've been working on custom lib2to3 fixers and I use some of the already definied utilites inside the fixers. But traverse_imports can't traverse from import names, which is pretty simple and straight forward to implement. ---------- messages: 359742 nosy: BTaskaya priority: normal severity: normal status: open title: lib2to3.fixes.fix_import: support imports_as_name in traverse_imports _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 12:50:14 2020 From: report at bugs.python.org (Rockmizu) Date: Fri, 10 Jan 2020 17:50:14 +0000 Subject: [New-bugs-announce] [issue39291] "pathlib.Path.link_to()" and "pathlib.Path.symlink_to()" have reversed usage Message-ID: <1578678614.26.0.0980677925629.issue39291@roundup.psfhosted.org> New submission from Rockmizu : Python version: Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32 The usage of symlink_to() is link.symlink_to(target) while the usage of link_to() is target.link_to(link). This could be confusing. 
Here is an example: >>> import pathlib >>> target = pathlib.Path('target.txt') >>> p1 = pathlib.Path('symlink.txt') >>> p2 = pathlib.Path('hardlink.txt') >>> p1.symlink_to(target) >>> p2.link_to(target) # expected usage Traceback (most recent call last): File "", line 1, in File "D:\Program Files\Python38\lib\pathlib.py", line 1346, in link_to self._accessor.link_to(self, target) FileNotFoundError: [WinError 2] The system cannot find the file specified: 'hardlink.txt' -> 'target.txt' >>> target.link_to(p2) # current usage >>> Since os.symlink() and os.link() have the same argument order, >>> import os >>> os.symlink('target.txt', 'symlink.txt') >>> os.link('target.txt', 'hardlink.txt') >>> it would be nicer if the pathlib has the same argument order too. ---------- components: Library (Lib) messages: 359745 nosy: Rockmizu priority: normal severity: normal status: open title: "pathlib.Path.link_to()" and "pathlib.Path.symlink_to()" have reversed usage type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 12:54:35 2020 From: report at bugs.python.org (Ryan) Date: Fri, 10 Jan 2020 17:54:35 +0000 Subject: [New-bugs-announce] [issue39292] syslog constants behind rfc Message-ID: <1578678875.77.0.925015529037.issue39292@roundup.psfhosted.org> New submission from Ryan : When using the SysLogHandler (https://docs.python.org/3/library/logging.handlers.html#logging.handlers.SysLogHandler) the supported facilities appear to be lagging the RFC (5454 ?), or at least what is being supported in other mainstream languages. I Specifically need LOG_AUDIT and LOG_NTP but there are a couple others. The syslog "openlog" function takes an INT but not sure how to get an INT through the python SysLogHandler because it's based on a static list of names and symbolic values. Wikipedia (https://en.wikipedia.org/wiki/Syslog#Facility) suggests LOG_AUTH and LOG_NTP are in the RFC. This is my first ticket here so hopefully this is the right place for it. Maybe there is a workaround or some re-education needed on my part... ---------- components: Library (Lib) messages: 359746 nosy: tryanunderwood at gmail.com priority: normal severity: normal status: open title: syslog constants behind rfc versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 16:11:34 2020 From: report at bugs.python.org (Tony) Date: Fri, 10 Jan 2020 21:11:34 +0000 Subject: [New-bugs-announce] [issue39293] Windows 10 64-bit needs reboot Message-ID: <1578690694.46.0.197073018811.issue39293@roundup.psfhosted.org> New submission from Tony : After installing python 3.8.1 64-bit, on Windows 10 64-bit version 1909, the system needs to be rebooted to validate all settings in the registry. Otherwise will cause a lot of exceptions, like Path not found etc. ---------- components: Installation messages: 359756 nosy: ToKa priority: normal severity: normal status: open title: Windows 10 64-bit needs reboot type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 16:57:59 2020 From: report at bugs.python.org (Bram Stolk) Date: Fri, 10 Jan 2020 21:57:59 +0000 Subject: [New-bugs-announce] [issue39294] zipfile.ZipInfo objects contain invalid 'extra' fields. 
Message-ID: <1578693479.87.0.550823648689.issue39294@roundup.psfhosted.org> New submission from Bram Stolk : This has been tested with Windows Python 2.7 and Python 3.8 If you get the ZipInfo objects of a ZIP file that is larger than 2GiB, then all the ZipInfo entries with a header offset > 2G will report phantom 'extra' data. import zipfile zipname = "reallybig.zip" z = zipfile.ZipFile( zipname ) zi = z.infolist() for inf in zi: print( inf.filename, inf.header_offset, inf.extra ) And observe that: * All entries with offset < 2G will report no extra field. * All entries with offset > 2G will report extra field. It's hard to package this up as a self-contained test, because it requires a very large zip to test. ---------- components: IO messages: 359762 nosy: Bram Stolk priority: normal severity: normal status: open title: zipfile.ZipInfo objects contain invalid 'extra' fields. type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 17:16:07 2020 From: report at bugs.python.org (Matthew Newville) Date: Fri, 10 Jan 2020 22:16:07 +0000 Subject: [New-bugs-announce] [issue39295] usage of bitfields in ctypes structures changed between 3.7.5 and 3.7.6 Message-ID: <1578694567.2.0.498689301236.issue39295@roundup.psfhosted.org> New submission from Matthew Newville : We have a library (https://github.com/pyepics/pyepics) that wraps several C structures for a communication protocol library that involves many C->Python callbacks. One of the simpler structures we wrap with ctypes is defined with typedef struct ca_access_rights { unsigned read_access:1; unsigned write_access:1; } caar; struct access_rights_handler_args { long chanId; /* channel id */ caar access; /* access rights state */ }; which we had wrapped (perhaps naively) as class access_rights_handler_args(ctypes.Structure): "access rights arguments" _fields_ = [('chid', ctypes.c_long), ('read_access', ctypes.c_uint, 1), ('write_access', ctypes.c_uint, 1)] which we would then this structure as the function argument of a callback function that the underlying library would call, using _Callback = ctypes.CFUNCTYPE(None, ctypes.POINTER(access_rights_handler_args))(access_rights_handler) and the python function `access_righte_handler` would be able to unpack and use this structure. This worked for Python 2.7, 3.3 - 3.7.5 on 64-bit Linux, Windows, and MacOS. This code was well-tested and was used in production code on very many systems. It did not cause segfaults. With Python 3.7.6 this raises an exception at the ctypes.CFUNCTYPE() call with ...../lib/python3.7/ctypes/__init__.py", line 99, in CFUNCTYPE class CFunctionType(_CFuncPtr): TypeError: item 1 in _argtypes_ passes a struct/union with a bitfield by value, which is unsupported. We were able to find a quick work-around this by changing the structure definition to be class access_rights_handler_args(ctypes.Structure): "access rights arguments" _fields_ = [('chid', ctypes.c_long), ('access', ctypes.c_ubyte)] and then explicitly extract the 2 desired bits from the byte. Of course, that byte is more data than is being sent in the structure, so there is trailing garbage. This change seems to have been related to https://bugs.python.org/issue16576. Is there any way to restore the no-really-I'm-not-making-it-up-it-was-most-definitely-working-for-us behavior of Python 3.7.5 and earlier? If this is not possible, what would be the right way to wrap this sort of structure? 
Thanks ---------- components: ctypes messages: 359763 nosy: Matthew Newville priority: normal severity: normal status: open title: usage of bitfields in ctypes structures changed between 3.7.5 and 3.7.6 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 17:28:07 2020 From: report at bugs.python.org (Tony) Date: Fri, 10 Jan 2020 22:28:07 +0000 Subject: [New-bugs-announce] [issue39296] Windows register keys Message-ID: <1578695287.77.0.549625073253.issue39296@roundup.psfhosted.org> New submission from Tony : It would be more practical to name the Windows main registry keys 'python', with for example 'python32' or 'python64'. This would make searching the registry for registered python versions (single and/or multi users) a lot easier. ---------- components: Windows messages: 359765 nosy: ToKa, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows register keys versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 20:09:02 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 11 Jan 2020 01:09:02 +0000 Subject: [New-bugs-announce] [issue39297] Synchronize importlib.metadata with importlib_metadata 1.4 Message-ID: <1578704942.47.0.915059602595.issue39297@roundup.psfhosted.org> New submission from Jason R. Coombs : Importlib_metadata 1.4 adds performance improvements to the distribution discovery mechanism. Let's incorporate those upstream. ---------- components: Library (Lib) messages: 359773 nosy: jaraco priority: normal severity: normal status: open title: Synchronize importlib.metadata with importlib_metadata 1.4 versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 10 23:27:40 2020 From: report at bugs.python.org (Larry Hastings) Date: Sat, 11 Jan 2020 04:27:40 +0000 Subject: [New-bugs-announce] [issue39298] add BLAKE3 to hashlib Message-ID: <1578716860.16.0.194711020268.issue39298@roundup.psfhosted.org> New submission from Larry Hastings : >From 3/4 of the team that brought you BLAKE2, now comes... BLAKE3! https://github.com/BLAKE3-team/BLAKE3 BLAKE3 is a brand new hashing function. It's fast, it's paralellizeable, and unlike BLAKE2 there's only one variant. I've experimented with it a little. On my laptop (2018 Intel i7 64-bit), the portable implementation is kind of middle-of-the-pack, but with AVX2 enabled it's second only to the "Haswell" build of KangarooTwelve. On a 32-bit ARMv7 machine the results are more impressive--the portable implementation is neck-and-neck with MD4, and with NEON enabled it's definitely the fastest hash function I tested. These tests are all single-threaded and eliminate I/O overhead. The above Github repo has a reference implementation in C which includes Intel and ARM SIMD drivers. Unsurprisingly, the interface looks roughly the same as the BLAKE2 interface(s), so if you took the existing BLAKE2 module and s/blake2b/blake3/ you'd be nearly done. 
Not quite as close as blake2b and blake2s though ;-) ---------- components: Library (Lib) keywords: patch messages: 359777 nosy: Zooko.Wilcox-O'Hearn, christian.heimes, larry priority: normal severity: normal stage: needs patch status: open title: add BLAKE3 to hashlib type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 01:06:36 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 11 Jan 2020 06:06:36 +0000 Subject: [New-bugs-announce] [issue39299] Improve test coverage for mimetypes module Message-ID: <1578722796.76.0.475900715527.issue39299@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Currently the test coverage for mimetypes module is at 57% https://codecov.io/gh/python/cpython/src/43682f1e39a3c61f0e8a638b887bcdcbfef766c5/Lib/mimetypes.py . I propose adding the following tests to increase the coverage. * Add test for case insensitive check of types and extensions. * Add test for data url with no comma. * Add test for read_mime_types function. * Add tests for the mimetypes cli. ---------- components: Tests messages: 359781 nosy: xtreak priority: normal severity: normal status: open title: Improve test coverage for mimetypes module type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 01:21:26 2020 From: report at bugs.python.org (lijok) Date: Sat, 11 Jan 2020 06:21:26 +0000 Subject: [New-bugs-announce] [issue39300] dataclasses non-default argument follows default argument Message-ID: <1578723686.9.0.201746303828.issue39300@roundup.psfhosted.org> New submission from lijok : from dataclasses import dataclass @dataclass class A: PARAM: int @dataclass class B(A): ARG: int PARAM: int = 1 Traceback (most recent call last): File "", line 2, in File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\dataclasses.py", line 1021, in dataclass return wrap(cls) File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\dataclasses.py", line 1013, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen) File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\dataclasses.py", line 927, in _process_class _init_fn(flds, File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\dataclasses.py", line 503, in _init_fn raise TypeError(f'non-default argument {f.name!r} ' TypeError: non-default argument 'ARG' follows default argument ---------- components: Library (Lib) messages: 359782 nosy: eric.smith, lijok priority: normal severity: normal status: open title: dataclasses non-default argument follows default argument type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 01:59:44 2020 From: report at bugs.python.org (Nick Coghlan) Date: Sat, 11 Jan 2020 06:59:44 +0000 Subject: [New-bugs-announce] [issue39301] Specification of bitshift on integers should clearly state floor division used Message-ID: <1578725984.98.0.631488660974.issue39301@roundup.psfhosted.org> New submission from Nick Coghlan : While reviewing ISO-IECJTC1-SC22-WG23's latest draft of their Python security annex, I noticed that https://docs.python.org/3.7/library/stdtypes.html#bitwise-operations-on-integer-types doesn't explicitly state that *floor* division is used for right shift operations, 
so right-shifting a negative number by more bits than it contains gives -1 rather than 0. This is consistent with the way the language spec defines both binary right-shifts (as division by "pow(2, n)" and floor division (as rounding towards negative infinity), so this is just a documentation issue to note that we should make it clearer that this behaviour is intentional. ---------- assignee: docs at python components: Documentation messages: 359786 nosy: docs at python, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Specification of bitshift on integers should clearly state floor division used type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 03:41:52 2020 From: report at bugs.python.org (Nick Coghlan) Date: Sat, 11 Jan 2020 08:41:52 +0000 Subject: [New-bugs-announce] [issue39302] Language reference does not clearly describe modern operand coercion Message-ID: <1578732112.23.0.558996012972.issue39302@roundup.psfhosted.org> New submission from Nick Coghlan : While reviewing ISO-IECJTC1-SC22-WG23's latest draft of their Python security annex, I found a description of operand coercion that was based on the legacy coercion model described at https://docs.python.org/2.5/ref/coercion-rules.html That's still the second highest link if you search for "Python operand coercion", while the highest link is this old, very brief, summary from Python in a Nutshell: https://www.oreilly.com/library/view/python-in-a/0596001886/ch04s05.html (still based on the old semantics where the inputs were coerced to a common type before calling the slot method, rather than giving the method direct access to the original operands). The third highest link at least goes to PEP 208 (https://www.python.org/dev/peps/pep-0208/), which correctly describes the modern semantics, but it describes them in terms of the CPython C slot API, not the Python level special method APIs. https://docs.python.org/3.7/reference/datamodel.html#emulating-numeric-types does technically provide the required information, but it's implicit in the description of the numeric arithmetic methods, rather than being clearly spelled out as a clear description of "Python operand coercion". (There are also some oddities around operand coercion for three-argument pow() that I'm going to file as their own issue) https://docs.python.org/3/library/constants.html#NotImplemented references https://docs.python.org/3/library/numbers.html#implementing-the-arithmetic-operations which describes defining new numeric ABCs in a coercion-friendly way, but still doesn't spell out the operand precedence and coercion dance. We could likely improve this situation by adding a new "Special method invocation" subject at the end of https://docs.python.org/3.7/reference/datamodel.html, moving the existing "Special method lookup" subsection under it, and then adding a second subsection called "Operand precedence and coercion". 
That new subsection would then cover the basic principles of: * for unary operands, there is no ambiguity * for binary operands of the same type, only the forward operation is tried (it is assumed that if the forward operation doesn't work, the reflected one won't either) * for binary operands where the type of the RHS is a subclass of the type of the LHS, the reflected operation is tried first (if it exists), followed by the forward operation if the reflected call does not exist or returns the NotImplemented singleton * for binary operands of unrelated types, the forward operation is tried first (if it exists), followed by the reflected operation if the forward call does not exist or returns the NotImplemented singleton * for ternary operands (i.e. 3 argument pow()), the behaviour is currently implementation defined (as the test suite doesn't enforce any particular behaviour, and what CPython does isn't obviously useful) Other specific points to be covered would be: * any argument coercion that occurs is up to the individual method implementations * logical short-circuiting expressions (and, or, if/else) only call the equivalent of bool(expr) While the corresponding reflected operations for the binary operators are covered in the documentation of the forward operations, it would also likely be worthwhile including a summary table in this new subsection of exactly which special methods support reflection, and what the reflected method names are. ---------- messages: 359788 nosy: ncoghlan priority: normal severity: normal stage: needs patch status: open title: Language reference does not clearly describe modern operand coercion type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 04:52:16 2020 From: report at bugs.python.org (Dan Arad) Date: Sat, 11 Jan 2020 09:52:16 +0000 Subject: [New-bugs-announce] [issue39303] Refactor cmd module Message-ID: <1578736336.99.0.232991806531.issue39303@roundup.psfhosted.org> New submission from Dan Arad : I've stumbled across the `cmd` module, had some difficulties in reading it, and wanted to help in making it more readable. I'm new to contributing to open source, and thought this could be a good exercise for me, and that if I could contribute along the way, then that would be a nice extra. I would be glad if my efforts could be accompanied by someone more experienced. ---------- components: Library (Lib) messages: 359789 nosy: Dan Arad priority: normal severity: normal status: open title: Refactor cmd module type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 08:42:38 2020 From: report at bugs.python.org (Aurora) Date: Sat, 11 Jan 2020 13:42:38 +0000 Subject: [New-bugs-announce] [issue39304] Don't accept a negative number for the count argument in str.replace(old, new[, count]) Message-ID: <1578750158.45.0.245853814213.issue39304@roundup.psfhosted.org> New submission from Aurora : It's meaningless for the count argument to have a negative value, since there's no such thing as negative count for something. 
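For context, a negative count is currently accepted and is simply treated as "no limit", so rejecting it would change behaviour the implementation has long allowed:

>>> "aaaa".replace("a", "b", 2)
'bbaa'
>>> "aaaa".replace("a", "b", -1)   # negative count currently means "replace all"
'bbbb'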
---------- components: Library (Lib) messages: 359795 nosy: opensource-assist priority: normal severity: normal status: open title: Don't accept a negative number for the count argument in str.replace(old, new[,count]) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 10:52:28 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sat, 11 Jan 2020 15:52:28 +0000 Subject: [New-bugs-announce] [issue39305] Merge nntplib._NNTPBase and nntplib.NNTP Message-ID: <1578757948.77.0.383613154988.issue39305@roundup.psfhosted.org> New submission from Dong-hee Na : See: https://github.com/python/cpython/pull/17939#pullrequestreview-341290152 There was partial refactoring through PR 17939. I and Victor think that nntplib._NNTPBase can be removed by merging nntplib._NNTPBase and nntplib.NNTP. The only care point would be rewriting unit testing code which depends on nntplib._NNTPBase. ---------- components: Library (Lib) messages: 359803 nosy: corona10, vstinner priority: normal severity: normal status: open title: Merge nntplib._NNTPBase and nntplib.NNTP type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 14:41:48 2020 From: report at bugs.python.org (Hans Strijker) Date: Sat, 11 Jan 2020 19:41:48 +0000 Subject: [New-bugs-announce] [issue39306] Lib/configparser.py - RawConfigParser.set does not pass non-truthy values through to Interpolation.before_set Message-ID: <1578771708.17.0.449482349848.issue39306@roundup.psfhosted.org> New submission from Hans Strijker : Method ```configparser.RawConfigParser.set()``` has optional parameter *value* with default value ```None``` resulting in the behavior that actually trying to set a config parameter to ```None``` will not be propagated to ```Interpolation.before_set()```. In fact, since it uses ```if value:``` and not ```if value is None:``` none of the non-truthy values will be passed through. Suggested commit [8e008be](https://github.com/HStry/cpython/commit/8e008bea0cf6bd3c698b333fd39a383e124fe026) using already established ```_UNSET``` singleton, but that appears to break compatibility elsewhere. ---------- components: Library (Lib) messages: 359820 nosy: Strijker, taleinat priority: normal pull_requests: 17362 severity: normal status: open title: Lib/configparser.py - RawConfigParser.set does not pass non-truthy values through to Interpolation.before_set type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 15:13:33 2020 From: report at bugs.python.org (Alex Henrie) Date: Sat, 11 Jan 2020 20:13:33 +0000 Subject: [New-bugs-announce] [issue39307] Memory leak in parsetok Message-ID: <1578773613.93.0.403344641263.issue39307@roundup.psfhosted.org> New submission from Alex Henrie : The parsetok function currently contains the following code: if (!growable_comment_array_init(&type_ignores, 10)) { err_ret->error = E_NOMEM; PyTokenizer_Free(tok); return NULL; } if ((ps = PyParser_New(g, start)) == NULL) { err_ret->error = E_NOMEM; PyTokenizer_Free(tok); return NULL; } If PyParser_New fails, there is a memory leak because growable_comment_array_deallocate is not called on type_ignores. 
---------- components: Interpreter Core messages: 359821 nosy: alex.henrie priority: normal severity: normal status: open title: Memory leak in parsetok type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 11 18:35:15 2020 From: report at bugs.python.org (=?utf-8?q?Tomasz_Tr=C4=99bski?=) Date: Sat, 11 Jan 2020 23:35:15 +0000 Subject: [New-bugs-announce] [issue39308] Literal[True] interpreted as Literal[1] Message-ID: <1578785715.62.0.203170094807.issue39308@roundup.psfhosted.org> New submission from Tomasz Trębski : Consider code (in attachment) that is being run on Python 3.9. An expected output of such code ought to be: (typing_extensions.Literal[1], typing_extensions.Literal[0]) (typing_extensions.Literal[True], typing_extensions.Literal[False]) However that's not the case. An output of the code, given that A is declared first, will be: (typing.Literal[1], typing.Literal[0]) (typing.Literal[1], typing.Literal[0]) and if B is declared first we receive: (typing.Literal[True], typing.Literal[False]) (typing.Literal[True], typing.Literal[False]) I believe the reason for that is `bool` being a subclass of `int` and, consequently, the `typing._tp_cache` function declaring an untyped cache. Indeed, changing `cached = functools.lru_cache()(func)` to `cached = functools.lru_cache(typed=True)(func)` makes the linked code immune to the declaration order of A and B. ---------- components: ctypes files: scratch_1.py messages: 359822 nosy: Tomasz Trębski priority: normal severity: normal status: open title: Literal[True] interpreted as Literal[1] type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file48835/scratch_1.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 01:15:24 2020 From: report at bugs.python.org (Sfjwlejfawnfsfjwlejfawnf) Date: Sun, 12 Jan 2020 06:15:24 +0000 Subject: [New-bugs-announce] [issue39309] Please delete my account Message-ID: <1578809724.37.0.611407008489.issue39309@roundup.psfhosted.org> Change by Sfjwlejfawnfsfjwlejfawnf : ---------- nosy: sfjwlejfawnf priority: normal severity: normal status: open title: Please delete my account _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 07:33:34 2020 From: report at bugs.python.org (STINNER Victor) Date: Sun, 12 Jan 2020 12:33:34 +0000 Subject: [New-bugs-announce] [issue39310] Add math.ulp(x) Message-ID: <1578832414.46.0.631511260032.issue39310@roundup.psfhosted.org> New submission from STINNER Victor : In bpo-39288, I added math.nextafter(x, y) function. I propose to now add math.ulp() companion function.
Examples from tests of my PR: self.assertEqual(math.ulp(1.0), sys.float_info.epsilon) self.assertEqual(math.ulp(2.0 ** 52), 1.0) self.assertEqual(math.ulp(2.0 ** 53), 2.0) self.assertEqual(math.ulp(2.0 ** 64), 4096.0) Unit in the last place: * https://en.wikipedia.org/wiki/Unit_in_the_last_place * Java provides a java.lang.Math.ulp(x) function: https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html#ulp-double- In numpy, I found two references to ULP: * numpy.testing.assert_array_almost_equal_nulp: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.testing.assert_array_almost_equal_nulp.html * numpy.testing.assert_array_max_ulp: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.testing.assert_array_max_ulp.html Attached PR implements math.ulp(x). ---------- components: Library (Lib) messages: 359846 nosy: vstinner priority: normal severity: normal status: open title: Add math.ulp(x) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 07:58:02 2020 From: report at bugs.python.org (Batuhan) Date: Sun, 12 Jan 2020 12:58:02 +0000 Subject: [New-bugs-announce] [issue39311] difflib pathlike support for {unified, context}_diff() {from, to}file Message-ID: <1578833882.54.0.425753104323.issue39311@roundup.psfhosted.org> New submission from Batuhan : >>> tuple(difflib.context_diff(["abc"], ["bcd"], fromfile=Path("example.py"))) Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.9/difflib.py", line 1254, in context_diff _check_types(a, b, fromfile, tofile, fromfiledate, tofiledate, lineterm) File "/usr/local/lib/python3.9/difflib.py", line 1301, in _check_types raise TypeError('all arguments must be str, not: %r' % (arg,)) TypeError: all arguments must be str, not: PosixPath('example.py') IMHO to and from file arguments should accept PathLike objects. If agreed I can prepare a patch. ---------- components: Library (Lib) messages: 359847 nosy: BTaskaya priority: normal severity: normal status: open title: difflib pathlike support for {unified,context}_diff() {from,to}file versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 09:27:40 2020 From: report at bugs.python.org (Eryk Sun) Date: Sun, 12 Jan 2020 14:27:40 +0000 Subject: [New-bugs-announce] [issue39312] Expose placeholder reparse points in Windows Message-ID: <1578839260.17.0.288692192504.issue39312@roundup.psfhosted.org> New submission from Eryk Sun : Windows 10 apparently defaults to disguising placeholder reparse points in python.exe processes, but exposes them to cmd.exe and powershell.exe processes. A common example is a user's OneDrive folder, which extensively uses placeholder reparse points for files and directories. The placeholder file attributes include FILE_ATTRIBUTE_REPARSE_POINT, FILE_ATTRIBUTE_OFFLINE, and FILE_ATTRIBUTE_SPARSE_FILE, and the reparse tags are in the set IO_REPARSE_TAG_CLOUD[_1-F] (0x9000[0-F]01A). Currently, we don't see any of this information in a python.exe process when we call FindFirstFile[Ex]W, GetFileAttributesW, or query file information on a file opened with FILE_FLAG_OPEN_REPARSE_POINT, such as when we call os.lstat. The behavior is determined by the process or per-thread placeholder-compatibility mode. The process mode can be queried via RtlQueryProcessPlaceholderCompatibilityMode [1]. 
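For illustration only, a minimal ctypes sketch of querying and setting the process-wide mode from Python today; the PHCM_* values are copied from ntifs.h and the functions are assumed to be exported by ntdll on recent Windows 10 builds:

import ctypes

PHCM_APPLICATION_DEFAULT = 0    # values from ntifs.h (assumption, not exposed by Python)
PHCM_DISGUISE_PLACEHOLDER = 1
PHCM_EXPOSE_PLACEHOLDERS = 2

ntdll = ctypes.WinDLL('ntdll')
ntdll.RtlQueryProcessPlaceholderCompatibilityMode.restype = ctypes.c_byte
ntdll.RtlSetProcessPlaceholderCompatibilityMode.restype = ctypes.c_byte
ntdll.RtlSetProcessPlaceholderCompatibilityMode.argtypes = [ctypes.c_byte]

print(ntdll.RtlQueryProcessPlaceholderCompatibilityMode())   # 1 == PHCM_DISGUISE_PLACEHOLDER
previous = ntdll.RtlSetProcessPlaceholderCompatibilityMode(PHCM_EXPOSE_PLACEHOLDERS)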
The documentation says that "[m]ost Windows applications see exposed placeholders by default". I don't know what criteria Windows is using here, but in my tests with python.exe and a simple command-line test program, the default mode is PHCM_DISGUISE_PLACEHOLDER. Should Python provide some way to call RtlSetProcessPlaceholderCompatibilityMode [2] to set PHCM_EXPOSE_PLACEHOLDERS mode for the current process? Should os.lstat be modified to temporarily expose placeholders -- for the current thread only -- via RtlSetThreadPlaceholderCompatibilityMode [3]? We can dynamically link to this ntdll function via GetProcAddress. It returns the previous mode, which we can restore after querying the file. [1] https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-rtlqueryprocessplaceholdercompatibilitymode [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-rtlsetprocessplaceholdercompatibilitymode [3] https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-rtlsetthreadplaceholdercompatibilitymode ---------- components: Library (Lib), Windows messages: 359850 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Expose placeholder reparse points in Windows type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 10:15:27 2020 From: report at bugs.python.org (Batuhan) Date: Sun, 12 Jan 2020 15:15:27 +0000 Subject: [New-bugs-announce] [issue39313] lib2to3 RefactoringTool python_grammar_no_print_and_exec_statement Message-ID: <1578842127.44.0.653457423562.issue39313@roundup.psfhosted.org> New submission from Batuhan : issue 23896 introduced a grammar without print and exec statements (they both are functions now) but both the lib2to3 cli script and RefactoringTool lacks of that functionality (which is pretty useful for outside users of lib2to3 like formatters) (RefactoringTool) if self.options["print_function"]: self.grammar = pygram.python_grammar_no_print_statement else: self.grammar = pygram.python_grammar It should be supported here and on the command line script. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 359853 nosy: BTaskaya, benjamin.peterson priority: normal severity: normal status: open title: lib2to3 RefactoringTool python_grammar_no_print_and_exec_statement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 11:31:37 2020 From: report at bugs.python.org (Aurora) Date: Sun, 12 Jan 2020 16:31:37 +0000 Subject: [New-bugs-announce] [issue39314] Autofill the closing paraenthesis during auto-completion for functions which accept no arguments Message-ID: <1578846697.93.0.381295981609.issue39314@roundup.psfhosted.org> New submission from Aurora : If Python is compiled with the GNU readline headers, it will provide autocompletion for Python functions and etc. In the Python interpreter environment, if a function is typed partially, Python will fill in the rest if a tab character is typed. If a function accepts no arguments, Python still doesn't fill in the last closing paraenthesis during autocompletion, in the hope that the user will provide arguments, but in such a case it's pointless. 
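For reference, a user-level approximation is already possible by subclassing rlcompleter (a sketch; it relies on the private _callable_postfix() helper present in current 3.x sources and on inspect.signature() working for the object in question):

import inspect
import readline
import rlcompleter

class ParenCompleter(rlcompleter.Completer):
    def _callable_postfix(self, val, word):
        if not callable(val):
            return word
        try:
            params = inspect.signature(val).parameters.values()
            required = [p for p in params
                        if p.default is p.empty
                        and p.kind not in (p.VAR_POSITIONAL, p.VAR_KEYWORD)]
        except (TypeError, ValueError):
            return word + '('
        # close the call immediately when no argument is required
        return word + ('()' if not required else '(')

readline.set_completer(ParenCompleter().complete)
readline.parse_and_bind('tab: complete')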
---------- components: Interpreter Core messages: 359855 nosy: opensource-assist priority: normal severity: normal status: open title: Autofill the closing paraenthesis during auto-completion for functions which accept no arguments type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 12:42:08 2020 From: report at bugs.python.org (hmathers) Date: Sun, 12 Jan 2020 17:42:08 +0000 Subject: [New-bugs-announce] [issue39315] Lists of objects containing lists Message-ID: <1578850928.34.0.888723591235.issue39315@roundup.psfhosted.org> New submission from hmathers : class Folder(): papers = [] shelf = [] shelf.append(Folder) shelf.append(Folder) shelf[0].papers.append("one") shelf[1].papers.append("two") print(shelf[0].papers) #should just print "one" right? ---------- messages: 359858 nosy: hmathers priority: normal severity: normal status: open title: Lists of objects containing lists versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 16:13:44 2020 From: report at bugs.python.org (Alex Hall) Date: Sun, 12 Jan 2020 21:13:44 +0000 Subject: [New-bugs-announce] [issue39316] settrace skips lines when chaining methods without arguments Message-ID: <1578863624.84.0.650995733403.issue39316@roundup.psfhosted.org> New submission from Alex Hall : When stepping through a multiline expression like this: ``` print(slug .replace("_", " ") .title() .upper() .replace("a", "b") .lower() .replace("The ", "the ")) ``` only these lines are hit by the tracer function: 15 print(slug 16 .replace("_", " ") 19 .replace("a", "b") 21 .replace("The ", "the ")) I'm guessing the problem is that there are no expressions on the other lines, as the attributes and calls all start with slug. ---------- components: Interpreter Core files: trace_skipping_lines_bug.py messages: 359878 nosy: alexmojaki priority: normal severity: normal status: open title: settrace skips lines when chaining methods without arguments versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48837/trace_skipping_lines_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 22:53:01 2020 From: report at bugs.python.org (dai dai) Date: Mon, 13 Jan 2020 03:53:01 +0000 Subject: [New-bugs-announce] [issue39317] This new feature or bug about operator "- -"? Message-ID: <1578887581.87.0.228721168056.issue39317@roundup.psfhosted.org> New submission from dai dai : ```py print(3 - - 2) print(3 + + 2) """output 5 5 """ ``` ---------- components: Windows messages: 359886 nosy: dai dai, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: This new feature or bug about operator "- -"? 
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 12 23:56:47 2020 From: report at bugs.python.org (Robert Xiao) Date: Mon, 13 Jan 2020 04:56:47 +0000 Subject: [New-bugs-announce] [issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws Message-ID: <1578891407.19.0.452054572225.issue39318@roundup.psfhosted.org> New submission from Robert Xiao : tempfile.NamedTemporaryFile creates its wrapper like so: try: file = _io.open(fd, mode, buffering=buffering, newline=newline, encoding=encoding, errors=errors) return _TemporaryFileWrapper(file, name, delete) except BaseException: _os.unlink(name) _os.close(fd) raise If _TemporaryFileWrapper throws any kind of exception (even KeyboardInterrupt), this closes `fd` but leaks a valid `file` pointing to that fd. The `file` will later attempt to close the `fd` when it is collected, which can lead to subtle bugs. (This particular issue contributed to this bug: https://nedbatchelder.com/blog/202001/bug_915_please_help.html) This should probably be rewritten as: try: file = _io.open(fd, mode, buffering=buffering, newline=newline, encoding=encoding, errors=errors) except: _os.unlink(name) _os.close(fd) raise try: return _TemporaryFileWrapper(file, name, delete) except BaseException: _os.unlink(name) file.close() raise or perhaps use nested try blocks to avoid the _os.unlink duplication. ---------- components: Library (Lib) messages: 359888 nosy: nneonneo priority: normal severity: normal status: open title: NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 05:58:01 2020 From: report at bugs.python.org (Aurora) Date: Mon, 13 Jan 2020 10:58:01 +0000 Subject: [New-bugs-announce] [issue39319] ntpath module must not be available on POSIX platforms Message-ID: <1578913081.93.0.570366653364.issue39319@roundup.psfhosted.org> New submission from Aurora : According to https://docs.python.org/dev/library/undoc.html the 'ntpath' module is an "Implementation of os.path on Win32 and Win64 platforms". Just like all other Windows-specific modules(like winreg),'ntpath' must not be available for use on a POSIX system like Linux. I guess that 'posixpath' is also available on Windows, that if it is, it must not be available too. ---------- components: Interpreter Core messages: 359897 nosy: opensource-assist priority: normal severity: normal status: open title: ntpath module must not be available on POSIX platforms type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 07:28:10 2020 From: report at bugs.python.org (Mark Shannon) Date: Mon, 13 Jan 2020 12:28:10 +0000 Subject: [New-bugs-announce] [issue39320] Handle unpacking of */** arguments and rvalues in the compiler Message-ID: <1578918490.54.0.342025462897.issue39320@roundup.psfhosted.org> New submission from Mark Shannon : Currently the unpacking of starred values in arguments and the right hand side of assignments is handled in the interpreter without any help from the compiler. The layout of arguments and values is visible to the compiler, so the compiler should do more of the work. 
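For context, the opcodes this refers to are easy to inspect with dis; on a 3.8 interpreter the first form below compiles to BUILD_LIST_UNPACK and the second to BUILD_TUPLE_UNPACK_WITH_CALL (exact opcode names vary between versions):

import dis

dis.dis(compile("[*a, *b]", "<demo>", "eval"))
dis.dis(compile("f(*a, *b)", "<demo>", "eval"))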
We can replace the complex bytecodes used in unpacking with simpler more focused ones. Specifically the collection building operations BUILD_LIST_UNPACK, BUILD_TUPLE_UNPACK, BUILD_SET_UNPACK and BUILD_TUPLE_UNPACK_WITH_CALL can be replaced with simpler, and self-explanatory operations: LIST_TO_TUPLE, LIST_EXTEND, SET_UPDATE In addition, the mapping operations BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL can be replaced with DICT_UPDATE and DICT_MERGE. DICT_MERGE is like DICT_UPDATE but raises an exception for duplicate keys. This change would not have much of an effect of performance, as the bytecodes listed are relatively rarely used, but shrinking the interpreter is always beneficial. ---------- components: Interpreter Core messages: 359901 nosy: Mark.Shannon priority: normal severity: normal status: open title: Handle unpacking of */** arguments and rvalues in the compiler type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 09:18:07 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 13 Jan 2020 14:18:07 +0000 Subject: [New-bugs-announce] [issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by Signal 9 Message-ID: <1578925087.43.0.642221025557.issue39321@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/214/builds/152 ... 0:08:21 load avg: 3.66 [240/420] test_wait3 passed -- running: test_multiprocessing_forkserver (1 min 51 sec) 0:08:22 load avg: 3.66 [241/420] test_uuid passed -- running: test_multiprocessing_forkserver (1 min 53 sec) 0:08:25 load avg: 3.53 [242/420] test_tuple passed -- running: test_multiprocessing_forkserver (1 min 55 sec) 0:08:32 load avg: 3.56 [243/420] test___all__ passed -- running: test_multiprocessing_forkserver (2 min 3 sec) *** Signal 9 Stop. make: stopped in /usr/home/buildbot/python/3.x.koobs-freebsd-9e36.nondebug/build program finished with exit code 1 elapsedTime=519.823452 ---------- components: Tests messages: 359904 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by Signal 9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 10:17:08 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 13 Jan 2020 15:17:08 +0000 Subject: [New-bugs-announce] [issue39322] Add gc.is_finalized to check if an object has been finalised by the gc Message-ID: <1578928628.66.0.851687765453.issue39322@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Right now is not possible to check from the Python layer if an object with gc support has been already finalized by the GC (but has been resurrected). When implementing some callbacks for the gc in order to add advanced statistics, I have greatly missed a function like this to check if a certain object has been resurrected / the finalizer has been called. 
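A minimal example of the kind of check the proposed gc.is_finalized() would enable (is_finalized is the name proposed here, not an existing function):

import gc

survivor = None

class Lazarus:
    def __del__(self):
        global survivor
        survivor = self          # resurrect the object from its finalizer

obj = Lazarus()
obj.cycle = obj                  # a cycle, so only the cyclic GC can reclaim it
del obj
gc.collect()                     # runs __del__ once; the object is resurrected

print(gc.is_finalized(survivor))   # proposed API: True for a resurrected object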
---------- assignee: pablogsal components: Interpreter Core messages: 359914 nosy: pablogsal priority: normal severity: normal status: open title: Add gc.is_finalized to check if an object has been finalised by the gc type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 11:42:19 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 13 Jan 2020 16:42:19 +0000 Subject: [New-bugs-announce] [issue39323] Add test for imghdr cli Message-ID: <1578933739.07.0.811706238675.issue39323@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : imghdr module has a cli that can display the image type for a given filename and also recurse through directories. I would like to propose following changes : * Add tests for the imghdr cli. * The cli uses hardcoded '/' separator in the end for directories this is a minor issue with windows where the separators are \ and the last separator is displayed with / like c:\Foo\Bar\Spam/ . Using os.sep can be a better option here. I have a PR that I will add shortly for review. ---------- components: Tests messages: 359919 nosy: xtreak priority: normal severity: normal status: open title: Add test for imghdr cli type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 14:53:40 2020 From: report at bugs.python.org (Ryan Batchelder) Date: Mon, 13 Jan 2020 19:53:40 +0000 Subject: [New-bugs-announce] [issue39324] Add mimetype for extension .md (markdown) Message-ID: <1578945220.79.0.19370550446.issue39324@roundup.psfhosted.org> New submission from Ryan Batchelder : I would like to propose that the mimetype for markdown files ending in .md to text/markdown is included in the mimetypes library. This is registered here: https://www.iana.org/assignments/media-types/text/markdown ---------- messages: 359931 nosy: Ryan Batchelder priority: normal severity: normal status: open title: Add mimetype for extension .md (markdown) type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 20:24:01 2020 From: report at bugs.python.org (Irv Kalb) Date: Tue, 14 Jan 2020 01:24:01 +0000 Subject: [New-bugs-announce] [issue39325] Original window focus when opening IDLE by double clicking Python file Mac Message-ID: <1578965041.32.0.467720102462.issue39325@roundup.psfhosted.org> New submission from Irv Kalb : I have my Mac to open ".py" files with IDLE. If IDLE is not running, and I double click on a Python file, the Shell window opens, then the Python file I clicked on opens in front, but the Shell has keyboard focus. In order to edit or run the source file, I must click in that window to give it focus. I have made a two minute video that is available here as a private video: https://youtu.be/Fs_ZAiej-WI Thanks for your consideration. 
Irv ---------- assignee: terry.reedy components: IDLE messages: 359942 nosy: IrvKalb, terry.reedy priority: normal severity: normal status: open title: Original window focus when opening IDLE by double clicking Python file Mac type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 13 23:52:26 2020 From: report at bugs.python.org (Divyansh tiwari) Date: Tue, 14 Jan 2020 04:52:26 +0000 Subject: [New-bugs-announce] [issue39326] Python-3.8.1 "test_importlib" failed Message-ID: <1578977546.19.0.0039726161306.issue39326@roundup.psfhosted.org> New submission from Divyansh tiwari : Python-3.8.1 after "make test" command in Ubuntu terminal report an error saying "test_importlib" failed. = Tests result: FAILURE then FAILURE == 1 test failed: test_importlib 1 re-run test: test_importlib ========================================== I re-ran the test my the command make test TESTOPTS="-v test_importlib" but again result in failed ---------- components: Build messages: 359945 nosy: Divyansh_tiwari priority: normal severity: normal status: open title: Python-3.8.1 "test_importlib" failed versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 04:45:52 2020 From: report at bugs.python.org (Peter Liedholm) Date: Tue, 14 Jan 2020 09:45:52 +0000 Subject: [New-bugs-announce] [issue39327] shutil.rmtree using vagrant synched folder fails Message-ID: <1578995152.92.0.0617677010821.issue39327@roundup.psfhosted.org> New submission from Peter Liedholm : Python 3.6.9 Ubuntu 18.04 python3 -c 'import shutil; shutil.rmtree("1a")' Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.6/shutil.py", line 486, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/usr/lib/python3.6/shutil.py", line 424, in _rmtree_safe_fd _rmtree_safe_fd(dirfd, fullname, onerror) File "/usr/lib/python3.6/shutil.py", line 424, in _rmtree_safe_fd _rmtree_safe_fd(dirfd, fullname, onerror) File "/usr/lib/python3.6/shutil.py", line 428, in _rmtree_safe_fd onerror(os.rmdir, fullname, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 426, in _rmtree_safe_fd os.rmdir(name, dir_fd=topfd) OSError: [Errno 26] Text file busy: '4a' ----------------- Reproduction method mkdir synched_folder\1a\2a\3a\4a mkdir synched_folder\1a\2b\3a\4a mkdir synched_folder\1b\2a\3a\4a choco install vagrant Save Vagrantfile in empty folder vagrant box add ubuntu/bionic64 vagrant up vagrant ssh cd synched_folder python3 -c 'import shutil; shutil.rmtree("1a")' *** Error message *** rm -r 1a *** Works fine *** ---------- components: Library (Lib) files: Vagrantfile messages: 359961 nosy: PeterFS priority: normal severity: normal status: open title: shutil.rmtree using vagrant synched folder fails type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48838/Vagrantfile _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 06:53:00 2020 From: report at bugs.python.org (Cheryl Sabella) Date: Tue, 14 Jan 2020 11:53:00 +0000 Subject: [New-bugs-announce] [issue39328] Allow filename mismatch in local and central directories in zipfile.py Message-ID: <1579002780.47.0.570956156071.issue39328@roundup.psfhosted.org> New submission from Cheryl Sabella : This is being opened from the report on GH3035. 
During malware research I bumped int problem with my Python based file analyzer: miscreants are modifying ZIP file header parts so, that python based automated analysis tools are unable to process the contents but intended clients are able to open the files with end-user applications and extract the possibly malicious contents. Proposed patch makes it possible to process the ZIP files even if such conditions occur. Default behavior remains the same (raise BadZipFile exception). ---------- messages: 359966 nosy: cheryl.sabella, gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: Allow filename mismatch in local and central directories in zipfile.py type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 08:13:51 2020 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 14 Jan 2020 13:13:51 +0000 Subject: [New-bugs-announce] [issue39329] smtplib.LMTP needs timeout parameter Message-ID: <1579007631.6.0.089521926624.issue39329@roundup.psfhosted.org> New submission from Dong-hee Na : see: https://github.com/python/cpython/pull/17958#issuecomment-573390867 I've noticed that LMTP does not support the timeout parameter. See: https://docs.python.org/3.9/library/smtplib.html#smtplib.LMTP However, LMTP also able to use the socket which is created from SMTP. IMHO LMTP needs to support the timeout parameter. ---------- assignee: corona10 components: Library (Lib) messages: 359975 nosy: corona10, vstinner priority: normal severity: normal status: open title: smtplib.LMTP needs timeout parameter type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 09:41:26 2020 From: report at bugs.python.org (Reece Dunham) Date: Tue, 14 Jan 2020 14:41:26 +0000 Subject: [New-bugs-announce] [issue39330] Way to build without IDLE Message-ID: <1579012886.09.0.695559726455.issue39330@roundup.psfhosted.org> New submission from Reece Dunham : It would just be better in my opinion if there was a way to build without IDLE, for people that are building from source and don't want it. This doesn't have to be implemented, it is just something I think would make the build system a bit better. ---------- components: Build messages: 359976 nosy: rdil priority: normal severity: normal status: open title: Way to build without IDLE type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 10:47:38 2020 From: report at bugs.python.org (Guy Galun) Date: Tue, 14 Jan 2020 15:47:38 +0000 Subject: [New-bugs-announce] [issue39331] 2to3 mishandles indented imports Message-ID: <1579016858.37.0.175118756285.issue39331@roundup.psfhosted.org> New submission from Guy Galun : When encountering an import that should be removed in Python 3 (e.g. 
"from itertools import izip"), 2to3 changes it a blank line, which may cause a runtime error if that import was indented: error: module importing failed: expected an indented block (ptypes.py, line 10) File "temp.py", line 1, in File "./lldbmacros/xnu.py", line 771, in from memory import * File "./lldbmacros/memory.py", line 11, in import macho File "./lldbmacros/macho.py", line 3, in from macholib import MachO as macho File "./lldbmacros/macholib/MachO.py", line 10, in from .mach_o import MH_FILETYPE_SHORTNAMES, LC_DYSYMTAB, LC_SYMTAB File "./lldbmacros/macholib/mach_o.py", line 16, in from macholib.ptypes import p_uint32, p_uint64, Structure, p_long, pypackable Relevant section before 2to3: try: from itertools import izip, imap except ImportError: izip, imap = zip, map from itertools import chain, starmap And after 2to3: try: except ImportError: izip, imap = zip, map from itertools import chain, starmap * Side note: This specific case may only be problematic with scripts that are partially aware of Python 3, otherwise they wouldn't try-catch that import. * Proposed solution: In case of that kind of import being the single line of an indented block, change it to "pass" instead of a blank line. ---------- components: 2to3 (2.x to 3.x conversion tool) files: ptypes.py messages: 359978 nosy: galun.guy priority: normal severity: normal status: open title: 2to3 mishandles indented imports type: crash versions: Python 3.9 Added file: https://bugs.python.org/file48839/ptypes.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 13:25:50 2020 From: report at bugs.python.org (Jason Culligan) Date: Tue, 14 Jan 2020 18:25:50 +0000 Subject: [New-bugs-announce] [issue39332] Python 3.6 compiler protections from Ubuntu distros Message-ID: <1579026350.83.0.0617094618953.issue39332@roundup.psfhosted.org> New submission from Jason Culligan : The python3.6 binary supplied in Ubuntu distros is not compiled with Position Independent Code (PIE) protection enabled. Python2 does. Is this not seen as a problem? Example 1: (checksec) ============ FILE: /usr/bin/python2 RELRO: Full RELRO STACK CANARY: Canary found NX: NX enabled PIE: PIE enabled <<< RPATH: No RPATH RUNPATH: No RUNPATH Symbols: No Symbols FORTIFY: Yes Fortified: 14 Fortifiable: 32 FILE: /usr/bin/python3.6 RELRO: Partial RELRO <<< ISSUE >>> STACK CANARY: Canary found NX: NX enabled PIE: No PIE <<< ISSUE >>> RPATH: No RPATH RUNPATH: No RUNPATH Symbols: No Symbols FORTIFY: Yes Fortified: 18 Fortifiable: 42 Example 2: ============ $ hardening-check /usr/bin/python2 /usr/bin/python2: Position Independent Executable: yes Stack protected: yes Fortify Source functions: yes (some protected functions found) Read-only relocations: yes Immediate binding: yes $ hardening-check /usr/bin/python3.6 /usr/bin/python3.6: Position Independent Executable: no, normal executable! Stack protected: yes Fortify Source functions: yes (some protected functions found) Read-only relocations: yes Immediate binding: no, not found! 
---------- components: Build messages: 359986 nosy: hpawdjit priority: normal severity: normal status: open title: Python 3.6 compiler protections from Ubuntu distros type: security versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 15:36:54 2020 From: report at bugs.python.org (Jack Orenstein) Date: Tue, 14 Jan 2020 20:36:54 +0000 Subject: [New-bugs-announce] [issue39333] argparse should offer an alternative to SystemExit in case a parse fails Message-ID: <1579034214.28.0.98730978231.issue39333@roundup.psfhosted.org> New submission from Jack Orenstein : If parse_args fails, SystemExit is raised, carrying an exit code of 2, and the help message is printed. For an embedded usage of argparse, this behavior is undesirable. I am writing an interactive console application, using argparse to parse input. When a parse fails, I would like to print an error message and continue, not terminate the program. Currently, I need to catch SystemExit to be able to do this, which has obvious problems, (e.g., what if something other that argparse is causing the exit?) I'd like to see some way to specify alternative behavior, e.g. raise an exception of a given type. ---------- components: Library (Lib) messages: 359991 nosy: geophile priority: normal severity: normal status: open title: argparse should offer an alternative to SystemExit in case a parse fails type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 15:54:30 2020 From: report at bugs.python.org (Julien Palard) Date: Tue, 14 Jan 2020 20:54:30 +0000 Subject: [New-bugs-announce] [issue39334] python specific index directives in our doc has been deprecated 10 years ago Message-ID: <1579035270.07.0.838861663002.issue39334@roundup.psfhosted.org> New submission from Julien Palard : see: https://github.com/sphinx-doc/sphinx/pull/6970 ---------- assignee: mdk components: Documentation messages: 359996 nosy: mdk priority: normal severity: normal status: open title: python specific index directives in our doc has been deprecated 10 years ago _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 16:08:15 2020 From: report at bugs.python.org (Hrvoje Abraham) Date: Tue, 14 Jan 2020 21:08:15 +0000 Subject: [New-bugs-announce] [issue39335] round Decimal edge case Message-ID: <1579036095.34.0.13379826917.issue39335@roundup.psfhosted.org> New submission from Hrvoje Abraham : >>> from decimal import Decimal >>> round(Decimal('-123.499999999999999999999999999999999999999999')) -124.0 I would expect -123.0, even considering Py2 rounding convention details (away from zero), Decimal rounding convention (default rounding=ROUND_HALF_EVEN), floating point specifics... Works as expected in Py3. Both Py2 and Py3 use same default Decimal rounding=ROUND_HALF_EVEN. Could be I'm missing some detail... 
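The missing detail is most likely that Python 2's round() coerces its argument to a binary float before rounding; the nearest double to that value is exactly -123.5, which Python 2 then rounds away from zero, while Python 3 dispatches to Decimal.__round__ and never leaves decimal arithmetic. A quick check in a Python 2 session:

>>> from decimal import Decimal
>>> float(Decimal('-123.499999999999999999999999999999999999999999'))
-123.5
>>> round(-123.5)
-124.0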
---------- components: Library (Lib) messages: 359999 nosy: ahrvoje priority: normal severity: normal status: open title: round Decimal edge case versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 16:33:31 2020 From: report at bugs.python.org (Dino Viehland) Date: Tue, 14 Jan 2020 21:33:31 +0000 Subject: [New-bugs-announce] [issue39336] Immutable module type can't be used as package in custom loader Message-ID: <1579037611.53.0.713710863743.issue39336@roundup.psfhosted.org> New submission from Dino Viehland : I'm trying to create a custom module type for a custom loader where the returned modules are immutable. But I'm running into an issue where the immutable module type can't be used as a module for a package. That's because the import machinery calls setattr to set the module as an attribute on it's parent in _boostrap.py # Set the module as an attribute on its parent. parent_module = sys.modules[parent] setattr(parent_module, name.rpartition('.')[2], module) I'd be okay if for these immutable module types they simply didn't have their children packages published on them. A simple simulation of this is a package which replaces its self with an object which doesn't support adding arbitrary attributes: x/__init__.py: import sys class MyMod(object): __slots__ = ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__'] def __init__(self): for attr in self.__slots__: setattr(self, attr, globals()[attr]) sys.modules['x'] = MyMod() x/y.py: # Empty file >>> from x import y Traceback (most recent call last): File "", line 1, in File "", line 983, in _find_and_load File "", line 971, in _find_and_load_unlocked AttributeError: 'MyMod' object has no attribute 'y' There's a few different options I could see on how this could be supported: 1) Simply handle the attribute error and allow things to continue 2) Add ability for the modules loader to perform the set, and fallback to setattr if one isn't available. Such as: getattr(parent_module, 'add_child_module', setattr)(parent_module, name.rpartition('.')[2], module) 3) Add the ability for the module type to handle the setattr: getattr(type(parent_module), 'add_child_module', fallback)(parent_module, , name.rpartition('.')[2], module) ---------- assignee: dino.viehland components: Interpreter Core messages: 360000 nosy: dino.viehland priority: normal severity: normal stage: needs patch status: open title: Immutable module type can't be used as package in custom loader type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 16:54:42 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 14 Jan 2020 21:54:42 +0000 Subject: [New-bugs-announce] [issue39337] codecs.lookup() ignores non-ASCII characters, whereas encodings.normalize_encoding() copies them Message-ID: <1579038882.16.0.589810918272.issue39337@roundup.psfhosted.org> New submission from STINNER Victor : bpo-37751 changed codecs.lookup() in a subtle way: non-ASCII characters are now ignored, whereas they were copied unmodified previously. I would prefer that codecs.lookup() and encodings.normalize_encoding() behave the same. Either always ignore or always copy. Moreover, it seems like there is no test on how the encoding names are normalized in codecs.register(). 
I recall that using codecs.register() in an unit test causes troubles since there is no API to unregister a search function. Maybe we should just add a private function for test in _testcapi. Serhiy Storchaka wrote an example on my PR: https://github.com/python/cpython/pull/17997/files > There are other differences. For example, normalize_encoding("???-8") returns "???_8", but codecs.lookup normalizes it to "8". > The comment in the sources is also not correct. ---------- components: Library (Lib) messages: 360004 nosy: lemburg, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: codecs.lookup() ignores non-ASCII characters, whereas encodings.normalize_encoding() copies them versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 14 17:23:01 2020 From: report at bugs.python.org (Y3Kv Bv) Date: Tue, 14 Jan 2020 22:23:01 +0000 Subject: [New-bugs-announce] [issue39338] Data lost randomly from dictionary after creating the dictionary Message-ID: <1579040581.36.0.0302070904511.issue39338@roundup.psfhosted.org> New submission from Y3Kv Bv : Windows 7 x64, Python 3.8.1 I've encountered a very weird issue where after creating a dictionary from a list the dictionary ends up being shorter/data is lost from it. It's absolutely random when it loses, how many and which items are lost. I've attached the example file with the code that always has a chance to trigger the issue for me. I've managed to figure only this much that when "if useAmp" never triggers, data loss will never occur. I've added checkpoints to verify where the loss occurs and it's not caused by "if useAmp" block directly, data loss happens exactly after the dictionary is created. ---------- files: test.py messages: 360007 nosy: Y3Kv Bv priority: normal severity: normal status: open title: Data lost randomly from dictionary after creating the dictionary type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48841/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 03:24:17 2020 From: report at bugs.python.org (ss) Date: Wed, 15 Jan 2020 08:24:17 +0000 Subject: [New-bugs-announce] [issue39339] Exception in thread QueueManagerThread Message-ID: <1579076657.65.0.185497149152.issue39339@roundup.psfhosted.org> New submission from ss <1162276945 at qq.com>: os.cpu_count() is 64, but 61 to 64 raise Exception in thread QueueManagerThread Error: ValueError: need at most 63 handles, got a sequence of length 63. ---------- components: Windows messages: 360030 nosy: paul.moore, pythonpython, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Exception in thread QueueManagerThread type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 04:35:33 2020 From: report at bugs.python.org (Peter Liedholm) Date: Wed, 15 Jan 2020 09:35:33 +0000 Subject: [New-bugs-announce] [issue39340] shutil.rmtree and write protected files Message-ID: <1579080933.31.0.376881618975.issue39340@roundup.psfhosted.org> New submission from Peter Liedholm : Ubuntu 18.4 and Windows 7 has different behaviour when deleting write protected files with rmtree. Ubuntu silently deletes them (unexpected?) Windows moans about access denied (expected?) 
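For reference, the usual way to get the Ubuntu behaviour on Windows is an onerror callback that clears the read-only bit and retries (a sketch, not part of this report):

import os
import shutil
import stat

def force_remove(func, path, exc_info):
    # Windows refuses to delete read-only files; make it writable and retry.
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree('test', onerror=force_remove)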
Reproduction method linux mkdir test; touch test/file.txt; chmod -w test/file.txt Reproduction method windows mkdir test && type nul > test\file.txt && attrib +R test\file.txt Reproduction method cont. python3 -c "import shutil; shutil.rmtree('test')" ---------- components: Library (Lib) messages: 360033 nosy: PeterFS priority: normal severity: normal status: open title: shutil.rmtree and write protected files versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 04:57:18 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Jan 2020 09:57:18 +0000 Subject: [New-bugs-announce] [issue39341] zipfile: ZIP Bomb vulnerability, don't check announced uncompressed size Message-ID: <1579082238.3.0.125873650379.issue39341@roundup.psfhosted.org> New submission from STINNER Victor : Laish, Amit (GE Digital) reported a vulnerability in the zipfile module to the PSRT list. The module is vulnerable to ZIP Bomb: https://en.wikipedia.org/wiki/Zip_bomb A 100 KB malicious ZIP file announces an uncompressed size of 1 byte but extracting it writes 100 MB on disk. Python 2.7 is vulnerable. Python 3.7 does not seem to be directly vulnerable. The proof of concept fails with: $ python3 poc.py The size of the uncompressed data is: 1 bytes Traceback (most recent call last): File "poc.py", line 18, in extract() # The uncompressed size is more than 20GB :) File "poc.py", line 6, in extract zip_ref.extractall('./') File "/usr/lib64/python3.7/zipfile.py", line 1636, in extractall self._extract_member(zipinfo, path, pwd) File "/usr/lib64/python3.7/zipfile.py", line 1691, in _extract_member shutil.copyfileobj(source, target) File "/usr/lib64/python3.7/shutil.py", line 79, in copyfileobj buf = fsrc.read(length) File "/usr/lib64/python3.7/zipfile.py", line 930, in read data = self._read1(n) File "/usr/lib64/python3.7/zipfile.py", line 1020, in _read1 self._update_crc(data) File "/usr/lib64/python3.7/zipfile.py", line 948, in _update_crc raise BadZipFile("Bad CRC-32 for file %r" % self.name) zipfile.BadZipFile: Bad CRC-32 for file 'dummy1.txt' The malicious ZIP file size is 100 KB. Extracting it writes dummy1.txt: 100 MB only made of a single character "0" (zero, Unicode character U+0030 or byte 0x30) repeated on 100 MB. The original proof of concept used a 20 MB ZIP writing 20 GB on disk. It's just the same text file repeated 200 files. I created a smaller ZIP just to be able to upload it to bugs.python.org. 
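For code that has to unpack untrusted archives today, a copy loop that caps the bytes actually decompressed is a reasonable defense, since the announced sizes cannot be trusted; a sketch only, where the 10 MB cap and the flattened destination layout are arbitrary choices for illustration:

```
import os
import zipfile

MAX_MEMBER_SIZE = 10 * 1024 * 1024  # hard cap on bytes written per member

def careful_extract(archive_path, dest_dir):
    with zipfile.ZipFile(archive_path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            # ignore the announced sizes entirely; count what is actually read
            target = os.path.join(dest_dir, os.path.basename(info.filename))
            remaining = MAX_MEMBER_SIZE
            with zf.open(info) as src, open(target, "wb") as dst:
                while True:
                    chunk = src.read(64 * 1024)
                    if not chunk:
                        break
                    remaining -= len(chunk)
                    if remaining < 0:
                        raise RuntimeError("%s exceeds the size cap" % info.filename)
                    dst.write(chunk)
```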
Attached files: * create_zip.py: created malicious.zip from valid.zip: modify the uncompressed size of compressed dummy1.txt * valid.zip: compressed dummy1.txt, file size is 100 KB * poc.py: extract malicious.zip -- The zipfile documentation describes "Decompression pitfalls": https://docs.python.org/dev/library/zipfile.html#decompression-pitfalls The zlib.decompress() function has a max_length parameter: https://docs.python.org/dev/library/zlib.html#zlib.Decompress.decompress See also my notes on "Archives and Zip Bomb": https://python-security.readthedocs.io/security.html#archives-and-zip-bomb -- unzip program of Fedora unzip-6.0-44.fc31.x86_64 package has the same vulnerability: $ unzip malicious.zip Archive: malicious.zip inflating: dummy1.txt $ unzip -l malicious.zip Archive: malicious.zip Length Date Time Name --------- ---------- ----- ---- 1 03-12-2019 14:10 dummy1.txt --------- ------- 1 1 file -- According to Riccardo Schirone (Red Hat), p7zip, on the other hand, seems to use the minimum value between the header value and the file one, so it extracts only 1 byte and correctly complains about CRC failures. ---------- components: Library (Lib) messages: 360034 nosy: vstinner priority: normal severity: normal status: open title: zipfile: ZIP Bomb vulnerability, don't check announced uncompressed size type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 05:03:00 2020 From: report at bugs.python.org (Chris Burr) Date: Wed, 15 Jan 2020 10:03:00 +0000 Subject: [New-bugs-announce] [issue39342] Expose X509_V_FLAG_ALLOW_PROXY_CERTS in ssl Message-ID: <1579082580.52.0.275703305608.issue39342@roundup.psfhosted.org> New submission from Chris Burr : Enabling proxy certificate validation requires X509_V_FLAG_ALLOW_PROXY_CERTS to be included in the verify flags.[1] This should be exposed as ssl.VERIFY_ALLOW_PROXY_CERTS to match with the other X509_V_FLAG_* variables. https://www.openssl.org/docs/man1.1.1/man7/proxy-certificates.html ---------- components: Library (Lib) messages: 360035 nosy: chrisburr priority: normal severity: normal status: open title: Expose X509_V_FLAG_ALLOW_PROXY_CERTS in ssl type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 05:22:54 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Jan 2020 10:22:54 +0000 Subject: [New-bugs-announce] [issue39343] Travis CI: documentation job fails in library/nntplib.rst with random network issue on news.gmane.io Message-ID: <1579083774.85.0.947952738007.issue39343@roundup.psfhosted.org> New submission from STINNER Victor : Should we disable documentation test on nntplib? It's surprising that test_nntplib test on the other Travis CI jobs. 
https://travis-ci.org/python/cpython/jobs/637325027

Warning, treated as error:
**********************************************************************
File "library/nntplib.rst", line ?, in default
Failed example:
    s = NNTP('news.gmane.io')
Exception raised:
    Traceback (most recent call last):
      File "/home/travis/build/python/cpython/Lib/doctest.py", line 1329, in __run
        exec(compile(example.source, filename, "single",
      File "", line 1, in
        s = NNTP('news.gmane.io')
      File "/home/travis/build/python/cpython/Lib/nntplib.py", line 1045, in __init__
        self.sock = self._create_socket(timeout)
      File "/home/travis/build/python/cpython/Lib/nntplib.py", line 1062, in _create_socket
        return socket.create_connection((self.host, self.port), timeout)
      File "/home/travis/build/python/cpython/Lib/socket.py", line 843, in create_connection
        raise err
      File "/home/travis/build/python/cpython/Lib/socket.py", line 831, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

---------- components: Tests messages: 360039 nosy: vstinner priority: normal severity: normal status: open title: Travis CI: documentation job fails in library/nntplib.rst with random network issue on news.gmane.io versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Jan 15 08:35:53 2020 From: report at bugs.python.org (Ajaya) Date: Wed, 15 Jan 2020 13:35:53 +0000 Subject: [New-bugs-announce] [issue39344] Getting error while importing ssl " import _ssl # if we can't import it, let the error propagate ImportError: DLL load failed while importing _ssl: The specified module could not be found." Message-ID: <1579095353.91.0.817394645763.issue39344@roundup.psfhosted.org>

New submission from Ajaya :

We have built the Python 3.7.5 and Python 3.8.1 source code on a Windows 10 machine. I have created an embedded interpreter in which I am trying to "import ssl", but it fails with this error:

"Journal execution results for D:\workdir\PR\9616145\9616145\journal.py... Syntax errors: Line 98: Traceback (most recent call last): File "D:\workdir\PR\9616145\9616145\journal.py", line 1, in import ssl File "", line 259, in load_module File "D:\workdir\PR\PRUnits\PythonIssuefix381\wntx64\kits\nxbin\python\Python38.zip\ssl.py", line 98, in import _ssl # if we can't import it, let the error propagate ImportError: DLL load failed while importing _ssl: The specified module could not be found."

This error started with Python 3.7.4; everything worked fine up to Python 3.7.3. There is also a workaround: if I replace _ssl.pyd with the _ssl.pyd from Python 3.7.3, it works fine. I also found that Python 3.7.3 uses openssl-1.1.1c, whereas Python 3.7.5 and Python 3.8.1 use openssl-1.1.1d. I have also checked with an installed Python 3.7.5, where "import ssl" works fine, and noticed that the _ssl.pyd shipped by the installer and the _ssl.pyd I built manually differ in size. I cannot find the exact root cause of what has happened. Could you please help? I am stuck and unable to continue working.

---------- components: Windows files: journal.py messages: 360055 nosy: Ajaya, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Getting error while importing ssl " import _ssl # if we can't import it, let the error propagate ImportError: DLL load failed while importing _ssl: The specified module could not be found."
type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48846/journal.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 13:55:56 2020 From: report at bugs.python.org (Darren Hamilton) Date: Wed, 15 Jan 2020 18:55:56 +0000 Subject: [New-bugs-announce] [issue39345] Py_Initialize Hangs on Windows 10 Message-ID: <1579114556.26.0.996943969176.issue39345@roundup.psfhosted.org> New submission from Darren Hamilton : This is related to https://bugs.python.org/issue17797, which is closed. Using Python 3.7.4, Windows 10.0.18362, Visual Studio 2017 and running as a C Application. Py_Initialize() eventually calls is_valid_fd with STDIN. The behavior appears to cause both dup() and fstat() to hang indefinitely (using RELEASE MSVCRT DLLs, it works correctly using MSVCRT Debug DLLs). The call stack shows Windows is waiting for some Windows Event. The recommended patch in issue17797 will not work. is_valid_fd appears to want to read the 'input' using a file descriptor. since both dup and fstat hang, I realized that isatty() would indicate if the file descriptor is valid and works for any predefined FD descriptor(STDIN-0, STDOUT-1, STDERR-2). #if defined(MS_WINDOWS) struct stat buf; if (fd >= fileno(stdin) && fd <= fileno(stderr)) { return (_isatty(fd) == 0 && errno == EBADF) ? 0 : 1; } else if (fstat(fd, &buf) < 0 && (errno == EBADF || errno == ENOENT)) return 0; return 1; #else ---------- components: Library (Lib), Windows messages: 360070 nosy: dhamilton, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Py_Initialize Hangs on Windows 10 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 15:27:38 2020 From: report at bugs.python.org (Drew DeVault) Date: Wed, 15 Jan 2020 20:27:38 +0000 Subject: [New-bugs-announce] [issue39346] gzip module only supports half of possible read/write scenarios Message-ID: <1579120058.89.0.688292733504.issue39346@roundup.psfhosted.org> New submission from Drew DeVault : A gzip file can have uncompressed data written to it, writing compressed data to the underlying file. It can also have uncompressed data read from it, reading compressed data from the underlying file. However, it does not support reading compressed data from an underlying uncompressed file, nor writing compressed data to an underlying uncompressed file. This makes it impossible to, for example, obtain an arbitrary file-like object and produce another file-like object which transparently compresses data read from the first. ---------- components: Library (Lib) messages: 360072 nosy: ddevault priority: normal severity: normal status: open title: gzip module only supports half of possible read/write scenarios versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 16:03:38 2020 From: report at bugs.python.org (Sebastian Berg) Date: Wed, 15 Jan 2020 21:03:38 +0000 Subject: [New-bugs-announce] [issue39347] Use of argument clinic like parsing and `METH_FASTCALL` support in extension modules Message-ID: <1579122218.66.0.0534330814443.issue39347@roundup.psfhosted.org> New submission from Sebastian Berg : This is mainly an information request, so sorry if its a bit besides the point (I do not mind if you just close it). 
But it seemed a bit too specific to get answers in most places... In Python you use argument clinic, which supports `METH_FASTCALL`, that seems pretty awesome. For extension modules, I am not sure that argument clinic is a straight forward choice, since it probably generates code specific to a single Python version and also using, while we need to support multiple versions (including versions that do not support `METH_FASTCALL`. So the question would be if there is some idea for providing such C-API, or for example exposing `_PyArg_UnpackKeywords` (which is at the heart of kwarg parsing). My use-case is that some NumPy functions do have a nice speedup when using `METH_FASTCALL` and better argument clinic style faster arg-parsing. Which is why, looking at these things, I practically reimplemented a slightly dumbed down version of `PyArg_ParseTupleAndKeywords` working much like argument clinic (except less smart and only using macros and no code generation). That seems viable, but also feels a bit wrong, so I am curious if there may be a better solution or whether it would be plausible to expose `_PyArg_UnpackKeywords` to reduce code duplication. (although I suppose due to old python version support that would effectively take years) For completeness, my code in question is here: https://github.com/numpy/numpy/pull/15269 with the actual usage pattern being: static PyObject *my_method(PyObject *self, NPY_ARGUMENTS_DEF) { NPY_PREPARE_ARGPARSER; PyObject *argument1; int argument2 = -1; if (!npy_parse_arguments("method", 1, -1, NPY_ARGUMENTS_PASS), "argument1", NULL, &argument1, "argument2", &PyArray_PythonPyIntFromInt, &argument2, NULL, NULL, NULL) { return NULL; } } ---------- components: Argument Clinic messages: 360073 nosy: larry, seberg priority: normal severity: normal status: open title: Use of argument clinic like parsing and `METH_FASTCALL` support in extension modules type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 18:41:24 2020 From: report at bugs.python.org (Oz Tiram) Date: Wed, 15 Jan 2020 23:41:24 +0000 Subject: [New-bugs-announce] [issue39348] wrong rst syntax in socket.rst Message-ID: <1579131684.91.0.584754331907.issue39348@roundup.psfhosted.org> New submission from Oz Tiram : The code block for the isn't hightlighted: Changed in version 3.7: When SOCK_NONBLOCK or SOCK_CLOEXEC bit flags are applied to type they are cleared, and socket.type will not reflect them. They are still passed to the underlying system socket() call. Therefore:: sock = socket.socket( ... This is because the double colon is directly after the word Therefore. This fix is very simple: :attr:`socket.type` will not reflect them. They are still passed - to the underlying system `socket()` call. Therefore:: + to the underlying system `socket()` call. Therefore, + + :: sock = socket.socket( socket.AF_INET, ... I have prepared a PR for this. 
---------- assignee: docs at python components: Documentation messages: 360086 nosy: Oz.Tiram, docs at python priority: normal severity: normal status: open title: wrong rst syntax in socket.rst versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 15 23:08:34 2020 From: report at bugs.python.org (Kyle Stanley) Date: Thu, 16 Jan 2020 04:08:34 +0000 Subject: [New-bugs-announce] [issue39349] Add "cancel" parameter to concurrent.futures.Executor.shutdown() Message-ID: <1579147714.25.0.252393666546.issue39349@roundup.psfhosted.org> New submission from Kyle Stanley : This feature enhancement issue is based on the following python-ideas thread: https://mail.python.org/archives/list/python-ideas at python.org/thread/ZSUIFN5GTDO56H6LLOPXOLQK7EQQZJHJ/ In summary, the main suggestion was to implement a new parameter called "cancel" (some bikeshedding over the name is welcome, I was thinking "cancel_futures" might be another option) for Executor.shutdown(), that would be added to both ThreadPoolExecutor and ProcessPoolExecutor. When set to True, this parameter would cancel all pending futures that were scheduled to the executor just after setting self._shutdown. In order to build some experience in working with the internals of the executors (particularly for implementing native pools in asyncio in the future), I plan on working on this issue; assuming Antoine and/or Brian are +1 on it. Guido seemed to approve of the idea. The implementation in ThreadPoolExecutor should be fairly straightforward, as it would use much of the same logic that's in the private method _initializer_failed() (https://github.com/python/cpython/blob/fad8b5674c66d9e00bb788e30adddb0c256c787b/Lib/concurrent/futures/thread.py#L205-L216). Minus the setting of self._broken, and cancelling each of the work_items (pending futures) instead of setting the BrokenThreadPool exception. For ProcessPoolExecutor, I'll likely have to spend some more time looking into the implementation details of it to figure out how the cancellation will work. IIUC, it would involve adding some additional logic in _queue_management_worker(), the function which is used by the queue management thread to communicate between the main process and the worker processes spawned by ProcessPoolExecutor. Specifically, in the "if shutting_down()" block (https://github.com/python/cpython/blob/fad8b5674c66d9e00bb788e30adddb0c256c787b/Lib/concurrent/futures/process.py#L432-L446), I think we could add an additional conditional check to see if self._cancel_pending_work_items is true (new internal flag set during executor.shutdown() if *cancel* is true, just after setting "self._shutdown_thread = True"). In this block, it would iterate through the pending work items, and cancel their associated future. Here's a rough example of what I have in mind: ``` if shutting_down(): try: # Flag the executor as shutting down as early as possible if it # is not gc-ed yet. if executor is not None: executor._shutdown_thread = True + if executor._cancel_pending_work_items: + # We only care about the values in the dict, which are + # the actual work items. + for work_item in pending_work_items.values(): + work_item.future.cancel() # Since no new work items can be added, it is safe to shutdown # this thread if there are no pending work items. 
if not pending_work_items: shutdown_worker() return except Full: # This is not a problem: we will eventually be woken up (in # result_queue.get()) and be able to send a sentinel again. pass ``` Would something along the lines of the above be a potentially viable method of implementing the *cancel* parameter for ProcessPoolExecutor.shutdown()? The main downside to this implementation is that it can't cancel anything that is already running (pushed from pending_work_items to call_queue). But from my understanding, there isn't a viable means of cancelling anything in the call queue; at that point it's too late. Anyways, I'll work on the ThreadPoolExecutor implementation in the meantime. As mentioned previously, that one should be more straightforward. After getting it working, I'll open a PR for just ThreadPoolExecutor, and then work on ProcessPoolExecutor in another PR after receiving some feedback on the above idea. ---------- assignee: aeros messages: 360093 nosy: aeros, bquinlan, pitrou priority: normal severity: normal stage: needs patch status: open title: Add "cancel" parameter to concurrent.futures.Executor.shutdown() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 03:13:44 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 08:13:44 +0000 Subject: [New-bugs-announce] [issue39350] Remove deprecated fractions.gcd() Message-ID: <1579162424.21.0.493196334096.issue39350@roundup.psfhosted.org> New submission from STINNER Victor : bpo-22486 added math.gcd() and deprecated fractions.gcd() in Python 3.5: commit 48e47aaa28d6dfdae128142ffcbc4b0642422e90. The function was deprecated during 4 cycles (3.5, 3.6, 3.7, 3.8): I propose attached PR to remove it. ---------- components: Library (Lib) messages: 360095 nosy: vstinner priority: normal severity: normal status: open title: Remove deprecated fractions.gcd() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 03:25:17 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 08:25:17 +0000 Subject: [New-bugs-announce] [issue39351] Remove base64.encodestring() and base64.decodestring() aliases, deprecated since Python 3.1 Message-ID: <1579163117.18.0.498681758214.issue39351@roundup.psfhosted.org> New submission from STINNER Victor : base64.encodestring() and base64.decodestring() are aliases deprecated since Python 3.1: encodebytes() and decodebytes() should be used instead. In Python 3, "string" means Unicode, whereas these functions really work at the bytes level: >>> base64.encodestring("text") TypeError: expected bytes-like object, not str >>> base64.decodestring("text") TypeError: expected bytes-like object, not str encodebytes() and decodebytes() names are explicit on the expected types (bytes or bytes-like). This issue is similar to bpo-38916: "Remove array.fromstring() and array.tostring() aliases, deprecated since Python 3.2". Attached PR removes the deprecated aliases base64.encodestring() and base64.decodestring(). 
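For anyone still carrying the old spellings, the migration is mechanical; a quick sketch:

```
import base64

data = b"hello world"

# previously: base64.encodestring(data) and base64.decodestring(encoded)
encoded = base64.encodebytes(data)
decoded = base64.decodebytes(encoded)
assert decoded == data
```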
---------- components: Library (Lib) messages: 360096 nosy: vstinner priority: normal severity: normal status: open title: Remove base64.encodestring() and base64.decodestring() aliases, deprecated since Python 3.1 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 03:48:01 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 08:48:01 +0000 Subject: [New-bugs-announce] [issue39352] Remove the formatter module, deprecated since Python 3.4 Message-ID: <1579164481.25.0.406270516611.issue39352@roundup.psfhosted.org> New submission from STINNER Victor : The formatter module has been deprecated in Python 3.4 by bpo-18716: commit 1448ecf470013cee63c0682f615c5256928dc6b0. In 2014, its removal was scheduled in Python 3.6: commit 29636aeaccaf6a1412e0dc7c230db29cccf68381. But bpo-25407 cancelled the removal from Python 3.6: commit 5ad5a7d31f5328c73df523b6ade330d88573717e "The new PEP 4 policy of any module existing in both 2.7 and 3.5 applies here, hence the module will be with us for a bit longer." In the meanwhile, I'm not aware of anyone opposed to the removal. Python 2.7 reached it's end of life, so I propose to remove the module: https://docs.python.org/3.8/library/formatter.html If someone needs this module, it's a single formatter.py file: it can easily be copied from Python 3.8. The intent here is to reduce the size of the standard library to remove the maintenance burden on Python core developers. Note: I'm surprised, but it seems like the formatter module has no test!? Attached PR removes the module. ---------- components: Library (Lib) messages: 360098 nosy: brett.cannon, vstinner priority: normal severity: normal status: open title: Remove the formatter module, deprecated since Python 3.4 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 04:14:45 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 09:14:45 +0000 Subject: [New-bugs-announce] [issue39353] Deprecate the binhex module Message-ID: <1579166085.65.0.30693146669.issue39353@roundup.psfhosted.org> New submission from STINNER Victor : The binhex module encodes and decodes Apple Macintosh binhex4 data. It was originally developed for TRS-80. In the 1980s and early 1990s it was used on classic Mac OS 9 to encode binary email attachments. Mac OS 9 is now heavily outdated, replaced by "macOS" (previously known as "Mac OS X"). I propose to emit a DeprecationWarning in the binhex module and document that it's deprecated. I don't think that we have to schedule its removal yet, it can be decided later. A first deprecation warning emitted at runtime might help to warn last users, if there is any. There are two binhex open issues: * bpo-29566: no activity for almost 2 years (February 2017) * bpo-34063: no activity for 1 year 1/2 (July 2018) If we ignore global cleanup (done on the whole Python code base at once, not specific to binhex), the binhex was basically untouched since it has been ported to Python 3 (10 years ago). Maybe it means that it is very stable, which can be seen as a quality ;-) I looked for "import binhex" in the first 10 pages of GitHub code search (restricted to Python programming language). I mostly found copies of Python test_binhex.py, no real usage of binhex. 
On Stack Overflow, the latest question about binhex was asked in 2012: https://stackoverflow.com/questions/12467973/binhex-decoding-using-java-code I also found an answer suggesting binascii.a2b_hex() to decode the hexadecimal string "2020202020202020202020205635514d385a5856": https://stackoverflow.com/questions/9683278/how-to-get-hard-disk-drivers-serial-number-in-python/9683837#9683837 But binascii.unhexlify() does the same as binascii.a2b_hex(). The attached PR deprecates binhex.

---------- components: Library (Lib) messages: 360100 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Deprecate the binhex module versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Jan 16 04:18:53 2020 From: report at bugs.python.org (bizywizy) Date: Thu, 16 Jan 2020 09:18:53 +0000 Subject: [New-bugs-announce] [issue39354] collections.UserString format and format_map return a string Message-ID: <1579166333.43.0.0162767475956.issue39354@roundup.psfhosted.org>

New submission from bizywizy :

collections.UserString.format and collections.UserString.format_map return a string instead of a UserString. This is quite weird because I expect the %-syntax and the `format` method to produce the same result.

```
>>> isinstance(UserString('Hello %s') % 'World', UserString)
True
>>> isinstance(UserString('Hello {}').format('World'), UserString)
False
```

---------- components: Library (Lib) messages: 360101 nosy: bizywizy priority: normal severity: normal status: open title: collections.UserString format and format_map return a string type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Jan 16 04:25:21 2020 From: report at bugs.python.org (Keith) Date: Thu, 16 Jan 2020 09:25:21 +0000 Subject: [New-bugs-announce] [issue39355] The Python library will not compile with a C++20 compiler because the code uses the reserved "module" keyword Message-ID: <1579166721.74.0.203064876936.issue39355@roundup.psfhosted.org>

New submission from Keith :

The Python library will not compile with a C++20 compiler because the code uses the reserved "module" keyword. For example, in warnings.h we have the following code:

#ifndef Py_LIMITED_API
PyAPI_FUNC(int) PyErr_WarnExplicitObject(
    PyObject *category,
    PyObject *message,
    PyObject *filename,
    int lineno,
    PyObject *module,
    PyObject *registry);

In modsupport.h we have the following code:

PyAPI_FUNC(int) PyModule_ExecDef(PyObject *module, PyModuleDef *def);

We can fix this by using a different identifier, for example "pyModule" instead of "module".

---------- components: C API messages: 360103 nosy: aCuria priority: normal severity: normal status: open title: The Python library will not compile with a C++20 compiler because the code uses the reserved "module"
keyword versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Jan 16 04:38:13 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 09:38:13 +0000 Subject: [New-bugs-announce] [issue39356] zipfile surprising "except DeprecationWarning:" block Message-ID: <1579167493.26.0.213163602183.issue39356@roundup.psfhosted.org>

New submission from STINNER Victor :

Lib/zipfile.py contains the following code:

try:
    filename, flag_bits = zinfo._encodeFilenameFlags()
    centdir = struct.pack(structCentralDir,
                          stringCentralDir, create_version,
                          zinfo.create_system, extract_version, zinfo.reserved,
                          flag_bits, zinfo.compress_type, dostime, dosdate,
                          zinfo.CRC, compress_size, file_size,
                          len(filename), len(extra_data), len(zinfo.comment),
                          0, zinfo.internal_attr, zinfo.external_attr,
                          header_offset)
except DeprecationWarning:
    print((structCentralDir, stringCentralDir, create_version,
           zinfo.create_system, extract_version, zinfo.reserved,
           zinfo.flag_bits, zinfo.compress_type, dostime, dosdate,
           zinfo.CRC, compress_size, file_size,
           len(zinfo.filename), len(extra_data), len(zinfo.comment),
           0, zinfo.internal_attr, zinfo.external_attr,
           header_offset), file=sys.stderr)
    raise

It is not considered good programming practice to put print() statements in production code: usually they are only used for debugging :-) The "except DeprecationWarning:" block with its print was added 12 years ago by:

commit bf02e3bb21b2d75cba4ce409a14ae64dbc2dd6d2 Author: Gregory P. Smith Date: Wed Mar 19 03:14:41 2008 +0000

    Fix the struct module DeprecationWarnings that zipfile was triggering by removing all use of signed struct values. test_zipfile and test_zipfile64 pass. no more warnings.

But I don't recall any complaint about a DeprecationWarning on struct.pack() in zipfile. I propose the attached PR to remove it.

---------- components: Library (Lib) messages: 360107 nosy: vstinner priority: normal severity: normal status: open title: zipfile surprising "except DeprecationWarning:" block versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Jan 16 05:22:45 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 10:22:45 +0000 Subject: [New-bugs-announce] [issue39357] bz2: Remove deprecated buffering parameter of bz2.BZ2File Message-ID: <1579170165.88.0.778090338845.issue39357@roundup.psfhosted.org>

New submission from STINNER Victor :

The "buffering" parameter of bz2.BZ2File has been deprecated for 12 years: using it has emitted a DeprecationWarning since Python 3.0. The attached PR removes it.

---------- components: Library (Lib) messages: 360114 nosy: vstinner priority: normal severity: normal status: open title: bz2: Remove deprecated buffering parameter of bz2.BZ2File versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Jan 16 06:58:29 2020 From: report at bugs.python.org (Sergei Lebedev) Date: Thu, 16 Jan 2020 11:58:29 +0000 Subject: [New-bugs-announce] [issue39358] test_code.CoExtra leads to double-free when ce_size >1 Message-ID: <1579175909.32.0.646555203336.issue39358@roundup.psfhosted.org>

New submission from Sergei Lebedev :

tl;dr Passing a Python function as a freefunc to _PyEval_RequestCodeExtraIndex leads to a double-free.
In general, I think that freefunc should not be allowed to call other Python functions. --- test_code.CoExtra registers a new co_extra slot with a ctypes-wrapped freefunc # All defined in globals(). LAST_FREED = None def myfree(ptr): global LAST_FREED LAST_FREED = ptr FREE_FUNC = freefunc(myfree) FREE_INDEX = RequestCodeExtraIndex(FREE_FUNC) Imagine that we have registered another co_extra slot FOO_INDEX of type Foo and a freefunc FreeFoo. Furthermore, assume that * FOO_INDEX < FREE_INDEX * FOO_INDEX is set on any executed code object including myfree. Consider what happens when we collect the globals() of the test_code module. myfree is referenced by globals() and FREE_FUNC. If FREE_FUNC is DECREF'd first, then by the time we get to myfree it has a refcount of 1 and DECREF'ing it leads to a code_dealloc call. Recall that the code object corresponding to myfree has two co_extra slots: * FOO_INDEX pointing to some Foo*, and * FREE_INDEX with a value of NULL. So, code_dealloc will first call FreeFoo (because FOO_INDEX < FREE_INDEX) and then the ctypes wrapper of myfree. The following sequence of calls looks roughly like this _CallPythonObject ... PyEval_EvalCodeEx _PyEval_EvalCodeWithName frame_dealloc code_dealloc # ! The argument of the last code_dealloc call is *the same* myfree code object (!). This means that code_dealloc will attempt to call FreeFoo on an already free'd pointer leading to a crash. ---------- components: Tests messages: 360117 nosy: slebedev priority: normal severity: normal status: open title: test_code.CoExtra leads to double-free when ce_size >1 type: crash versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 08:20:14 2020 From: report at bugs.python.org (Daniel Hillier) Date: Thu, 16 Jan 2020 13:20:14 +0000 Subject: [New-bugs-announce] [issue39359] zipfile: add missing "pwd: expected bytes, got str" exception message Message-ID: <1579180814.8.0.49279033626.issue39359@roundup.psfhosted.org> New submission from Daniel Hillier : Setting the ZipFile.pwd attribute directly skips the check to ensure the password is a bytes object and, if not, return a user friendly TypeError("pwd: expected bytes, got ") informing them of that. ---------- components: Library (Lib) messages: 360118 nosy: dhillier priority: normal severity: normal status: open title: zipfile: add missing "pwd: expected bytes, got str" exception message type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 08:23:44 2020 From: report at bugs.python.org (gaborbernat) Date: Thu, 16 Jan 2020 13:23:44 +0000 Subject: [New-bugs-announce] [issue39360] python3.8 regression - ThreadPool join via __del__ hangs forever Message-ID: <1579181024.48.0.144429938254.issue39360@roundup.psfhosted.org> New submission from gaborbernat : Assume the following code: ```python from multiprocessing.pool import ThreadPool class A(object): def __init__(self): self.pool = ThreadPool() def __del__(self): self.pool.close() self.pool.join() a = A() print(a) ``` The code snippet above hangs forever on Python 3.8+ (works ok on Python 3.7 and earlier). 
An example output where I've added some extra prints on to the thread joins: ``` <__main__.A object at 0x1104d6070> join thread None done join thread None done join thread None done join thread None done join thread None done join thread None done join thread None done join thread None done join thread None done join thread None done join thread None ``` I've tested on MacOs, but could replicate on Linux too within the CI. ---------- components: Library (Lib) messages: 360119 nosy: gaborbernat, vstinner priority: normal severity: normal status: open title: python3.8 regression - ThreadPool join via __del__ hangs forever versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 08:47:29 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 16 Jan 2020 13:47:29 +0000 Subject: [New-bugs-announce] [issue39361] [C API] Document PyTypeObject.tp_print removal in What's New In Python 3.9 Message-ID: <1579182449.25.0.646688097362.issue39361@roundup.psfhosted.org> New submission from STINNER Victor : commit aacc77fbd77640a8f03638216fa09372cc21673d Author: Jeroen Demeyer Date: Wed May 29 20:31:52 2019 +0200 bpo-36974: implement PEP 590 (GH-13185) Co-authored-by: Jeroen Demeyer Co-authored-by: Mark Shannon removed PyTypeObject.tp_print: diff --git a/Include/cpython/object.h b/Include/cpython/object.h index ba52a48358..a65aaf6482 100644 --- a/Include/cpython/object.h +++ b/Include/cpython/object.h @@ -182,7 +182,7 @@ typedef struct _typeobject { /* Methods to implement standard operations */ destructor tp_dealloc; - printfunc tp_print; + Py_ssize_t tp_vectorcall_offset; getattrfunc tp_getattr; setattrfunc tp_setattr; PyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2) Would it be possible to just document it in What's New in Python 3.9? Near: https://docs.python.org/dev/whatsnew/3.9.html#build-and-c-api-changes For example, this incompatible change broke zbar project: https://bugzilla.redhat.com/show_bug.cgi?id=1791745 ---------- components: C API messages: 360120 nosy: vstinner priority: normal severity: normal status: open title: [C API] Document PyTypeObject.tp_print removal in What's New In Python 3.9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 09:56:42 2020 From: report at bugs.python.org (Forest) Date: Thu, 16 Jan 2020 14:56:42 +0000 Subject: [New-bugs-announce] [issue39362] add option to make chunksize adaptive for multiprocessing.pool methods Message-ID: <1579186602.99.0.0759376415978.issue39362@roundup.psfhosted.org> New submission from Forest : In the multiprocessing Pool methods like map, chunksize determines the trade-off between computation per task and inter-process communication. Setting chunksize appropriately has a large effect on efficiency. However, for users directly interacting with the map methods, the way to find the appropriate chunksize is by manually checking different sizes and observing the program behavior. For library developers, you have to hope that you set an reasonable value that will work okay across different hardware, operating systems, and task characteristics. Generally, users of these methods want maximum throughput. It would be great if the map-like methods could adapt their chunksize towards that goal. 
Something along the lines of this: n_items = 0 queue = Queue(N) while True: chunk = tuple(itertools.islice(iterable, chunk_size)) if chunk: queue.put(chunk) n_items += chunk_size i += 1 if i % 10: time_delta = max(time.perf_counter() - t0, 0.0001) current_rate = n_items / time_delta # chunk_size is always either growing or shrinking, if # the shrinking led to a faster rate, keep # shrinking. Same with growing. If the rate decreased, # reverse directions if current_rate < last_rate: multiplier = 1 / multiplier chunk_size = int(min(max(chunk_size * multiplier, 1), upper_bound)) last_rate = current_rate n_items = 0 t0 = time.perf_counter() Would such a feature be desirable? ---------- components: macOS messages: 360126 nosy: fgregg, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: add option to make chunksize adaptive for multiprocessing.pool methods _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 13:24:07 2020 From: report at bugs.python.org (maxime-lemonnier) Date: Thu, 16 Jan 2020 18:24:07 +0000 Subject: [New-bugs-announce] [issue39363] zipfile with multiprocessing: zipfile.BadZipFile Message-ID: <1579199047.32.0.379077143522.issue39363@roundup.psfhosted.org> New submission from maxime-lemonnier : zipfile sometimes throws zipfile.BadZipFile when opening the same zip file from multiple processes see attached file to reproduce the error. You'll need a zipfile with multiple files in it to reproduce. ---------- components: Library (Lib) files: test_filesource.py messages: 360134 nosy: maxime-lemonnier priority: normal severity: normal status: open title: zipfile with multiprocessing: zipfile.BadZipFile type: crash versions: Python 3.6 Added file: https://bugs.python.org/file48847/test_filesource.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 17:05:46 2020 From: report at bugs.python.org (alex c) Date: Thu, 16 Jan 2020 22:05:46 +0000 Subject: [New-bugs-announce] [issue39364] Automatically tabulate module contents in the docs Message-ID: <1579212346.83.0.45042082539.issue39364@roundup.psfhosted.org> New submission from alex c : By default, the docs.python.org page for a module does not list or tabulate the contents of that module. This makes it difficult to browse a module's functions or get a bird's-eye view. For example, the logging module (https://docs.python.org/3/library/logging.html) has almost 70 functions, methods, and attributes. But it's impossible to scan them without scrolling the entire length of the entry (~18 pages of US letter). Compare to the browsability of itertools (https://docs.python.org/3/library/itertools.html), which manually tabulates its functions in the first section. docs.python.org should automatically generate a TOC of the module's contents (classes, functions, etc) in the navigation sidebar, below the existing sidebar sections (perhaps in a collapsible section). Rust's documentation does this (example: https://doc.rust-lang.org/std/time/struct.Duration.html), and doc.rust-lang.org also effectively allows the entire page to function as a TOC by providing a "collapse page" button. 
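As a reader-side stop-gap until the docs grow such a table, an overview of roughly the kind being asked for can be generated with inspect (logging here is just an example module):

```
import inspect
import logging

for name, obj in sorted(vars(logging).items()):
    if name.startswith("_"):
        continue
    if inspect.isclass(obj):
        print(f"{name:25} class")
    elif inspect.isfunction(obj) or inspect.isbuiltin(obj):
        print(f"{name:25} function")
```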
---------- assignee: docs at python components: Documentation messages: 360148 nosy: alexchandel, docs at python priority: normal severity: normal status: open title: Automatically tabulate module contents in the docs type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 16 23:23:15 2020 From: report at bugs.python.org (random832) Date: Fri, 17 Jan 2020 04:23:15 +0000 Subject: [New-bugs-announce] [issue39365] Support (SEEK_END/SEEK_CUR) relative seeking in StringIO Message-ID: <1579234995.29.0.127298904142.issue39365@roundup.psfhosted.org> New submission from random832 : Currently this fails with a (misleading in the case of SEEK_END) "Can't do nonzero cur-relative seeks" error, but there's no conceptual reason it shouldn't be possible. The offset should simply be taken as a character offset, without any pretense that the "file" represents bytes in some Unicode encoding. This is already done for SEEK_START and tell, and has not caused any problems. ---------- components: IO messages: 360158 nosy: random832 priority: normal severity: normal status: open title: Support (SEEK_END/SEEK_CUR) relative seeking in StringIO type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 04:11:15 2020 From: report at bugs.python.org (Dong-hee Na) Date: Fri, 17 Jan 2020 09:11:15 +0000 Subject: [New-bugs-announce] [issue39366] Remove deprecated nntplib method Message-ID: <1579252275.31.0.83759168902.issue39366@roundup.psfhosted.org> New submission from Dong-hee Na : Remove deprecated methods since Python 3.3. Moreover nntplib.NNTP.xgtitle has not been exposed through docs.python.org https://docs.python.org/3/library/nntplib.html ---------- assignee: corona10 messages: 360163 nosy: corona10, vstinner priority: normal severity: normal status: open title: Remove deprecated nntplib method versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 06:17:46 2020 From: report at bugs.python.org (Horace Stoica) Date: Fri, 17 Jan 2020 11:17:46 +0000 Subject: [New-bugs-announce] [issue39367] readline module core dumps Python 3.8.1 when calling exit() Message-ID: <1579259866.32.0.542672703415.issue39367@roundup.psfhosted.org> New submission from Horace Stoica : Built Python 3.8.1 from source on Fedora 30: kernel: 5.1.8-300.fc30.x86_64 Python 3.8.1 (default, Jan 15 2020, 08:49:34) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux Installed the readline module as the arrows do not work when running Python interactively. ~/local/bin/pip3.8 install readline Collecting readline Using cached https://files.pythonhosted.org/packages/f4/01/2cf081af8d880b44939a5f1b446551a7f8d59eae414277fd0c303757ff1b/readline-6.2.4.1.tar.gz However, after installing readline Python core-dumps on Crtl+D or exit(): >>> exit() munmap_chunk(): invalid pointer Aborted (core dumped) I uninstalled the readline module and now it no longer core dumps. 
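One quick check that narrows this down is to look at which readline actually got imported; a sketch (the exact path printed depends on the installation, the point is only whether it lives under site-packages):

```
import readline
print(readline.__file__)
# The standard library readline is a C extension under lib-dynload;
# a path under site-packages means the third-party "readline"
# distribution from PyPI is shadowing it.
```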
---------- components: Extension Modules messages: 360170 nosy: fhstoica priority: normal severity: normal status: open title: readline module core dumps Python 3.8.1 when calling exit() type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Jan 17 07:55:23 2020 From: report at bugs.python.org (Jaap Woldringh) Date: Fri, 17 Jan 2020 12:55:23 +0000 Subject: [New-bugs-announce] [issue39368] A matrix (list of lists) behaves differently, depending how it is created Message-ID: <1579265723.61.0.0732366373577.issue39368@roundup.psfhosted.org>

New submission from Jaap Woldringh :

Python used: Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. In Ubuntu 18.04.3. But in any other version of Python 3, and Python 2, that I tried, the behaviour of a (square) matrix depends on how it is created, as I can demonstrate with the test program matrix_experiment.py that is attached to this report.

1. It behaves as expected when created by entering all its elements like so: A = [[1,2,3],[1,2,3],[1,2,3]]

2. If it is created by appending a predefined row, it behaves as if all rows are the same as the last row:

row = [1,2,3]
B = []
for i in range(3):
    B.append(row)

The resulting matrix looks the same as A: [[1, 2, 3], [1, 2, 3], [1, 2, 3]] Both results compare equal: print(A==B) gives True. But when using B the result is disastrous, as the attached matrix_experiment.py program shows.

I consider this a very serious bug, and first filed it at Ubuntu's Launchpad, but I can't find it there. So now I am filing it again, at Python.org itself, using my new account.

---------- components: Tests files: matrix_experiment.py messages: 360182 nosy: jjhwoldringh priority: normal severity: normal status: open title: A matrix (list of lists) behaves differently, depending how it is created type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48849/matrix_experiment.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Jan 17 08:55:27 2020 From: report at bugs.python.org (Wellington PF) Date: Fri, 17 Jan 2020 13:55:27 +0000 Subject: [New-bugs-announce] [issue39369] Doc: Update mmap readline method documentation Message-ID: <1579269327.81.0.0431905682961.issue39369@roundup.psfhosted.org>

New submission from Wellington PF :

Update the mmap readline method description. The fact that the readline method does update the file position should not be left out, since omitting it may give the programmer the impression that it doesn't.
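A small sketch of the behaviour the report wants spelled out, assuming a throwaway file with a couple of lines in it (the filename is only a placeholder):

```
import mmap

with open("example.txt", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    print(mm.tell())        # 0
    first = mm.readline()   # reads up to and including the first newline
    print(mm.tell())        # now positioned just past that newline
    mm.close()
```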
---------- assignee: docs at python components: Documentation messages: 360191 nosy: docs at python, wellpardim priority: normal pull_requests: 17435 severity: normal status: open title: Doc: Update mmap readline method documentation type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 09:47:07 2020 From: report at bugs.python.org (Dreas Nielsen) Date: Fri, 17 Jan 2020 14:47:07 +0000 Subject: [New-bugs-announce] [issue39370] askopenfilename is missing from the Tkinter filedialog library in 2.7.17 Message-ID: <1579272427.57.0.817870554676.issue39370@roundup.psfhosted.org> New submission from Dreas Nielsen : In Python 2.7.17 on Linux, the code: import tkinter.filedialog as tkfiledialog dir(tkfiledialog.askopenfilename) results in: AttributeError: 'module' object has no attribute 'askopenfilename' Any attempt to use 'askopenfilename' has the same result, of course. ---------- components: Tkinter messages: 360194 nosy: rdnielsen priority: normal severity: normal status: open title: askopenfilename is missing from the Tkinter filedialog library in 2.7.17 type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 11:21:17 2020 From: report at bugs.python.org (Arden) Date: Fri, 17 Jan 2020 16:21:17 +0000 Subject: [New-bugs-announce] [issue39371] http.client.HTTPResponse raises IncompleteRead on chunked encoding Message-ID: <1579278077.15.0.801446785359.issue39371@roundup.psfhosted.org> New submission from Arden : http.client.HTTPResponse._readinto_chunked has two problems in python 3.8 - 3.9. 1. _safe_readinto assumes that self.fp.readinto(b) will read exactly len(b) bytes. This is not always true, especially in case of SSL traffic. But _safe_readinto raises IncompleteRead if less len(b) bytes was read. So, _safe_readinto should be removed and substituted with self.fp.readinto 2. 
_readinto_chunked may lose chunked block boundary because of this line: self.chunk_left = 0 it should be changed to self.chunk_left = chunk_left - n in order to self._get_chunk_left() be able to find real chunk boundary Corrected function looks like this: def _readinto_chunked(self, b): assert self.chunked != _UNKNOWN total_bytes = 0 mvb = memoryview(b) try: while True: chunk_left = self._get_chunk_left() if chunk_left is None: return total_bytes if len(mvb) <= chunk_left: n = self.fp.readinto(mvb) self.chunk_left = chunk_left - n return total_bytes + n temp_mvb = mvb[:chunk_left] n = self.fp.readinto(temp_mvb) mvb = mvb[n:] total_bytes += n self.chunk_left = chunk_left - n except IncompleteRead: raise IncompleteRead(bytes(b[0:total_bytes])) ---------- components: Library (Lib) messages: 360199 nosy: Arden priority: normal severity: normal status: open title: http.client.HTTPResponse raises IncompleteRead on chunked encoding versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 12:47:14 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 17 Jan 2020 17:47:14 +0000 Subject: [New-bugs-announce] [issue39372] The header files in Include/ have many declarations with no definition Message-ID: <1579283234.13.0.0884348783159.issue39372@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : There are many declarations that lack definitions in our header files that should be cleaned (linking against those symbols will fail so removing them should be safe if I am not missing something). ---------- components: Build messages: 360204 nosy: pablogsal priority: normal severity: normal status: open title: The header files in Include/ have many declarations with no definition type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 16:59:35 2020 From: report at bugs.python.org (John Haley) Date: Fri, 17 Jan 2020 21:59:35 +0000 Subject: [New-bugs-announce] [issue39373] new world Message-ID: <1579298375.7.0.386428246694.issue39373@roundup.psfhosted.org> Change by John Haley : ---------- nosy: John Haley priority: normal severity: normal status: open title: new world _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 18:11:22 2020 From: report at bugs.python.org (=?utf-8?q?Carlos_Segura_Gonz=C3=A1lez?=) Date: Fri, 17 Jan 2020 23:11:22 +0000 Subject: [New-bugs-announce] [issue39374] Key in sort -> Callable Object instead of function Message-ID: <1579302682.83.0.203843605883.issue39374@roundup.psfhosted.org> New submission from Carlos Segura Gonz?lez : In the Documentation, the "Sorting HOW TO" (https://docs.python.org/3/howto/sorting.html) states that "have a key parameter to specify a function to be called". However, it might be other callable objects. In fact, some of the examples given in the documentation are not with functions. I suggest: "have a key parameter to specify a callable object that is called..." 
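The distinction the suggestion is after, as a tiny sketch: both keys below are callable objects that are not plain functions:

```
from operator import itemgetter

pairs = [("b", 2), ("a", 3), ("c", 1)]
print(sorted(pairs, key=itemgetter(1)))            # itemgetter(1) is a callable object

class ByLength:
    def __call__(self, s):
        return len(s)

print(sorted(["ccc", "a", "bb"], key=ByLength()))  # so is an instance with __call__
```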
---------- assignee: docs at python components: Documentation messages: 360219 nosy: Carlos Segura Gonz?lez, docs at python priority: normal severity: normal status: open title: Key in sort -> Callable Object instead of function versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 19:36:08 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Sat, 18 Jan 2020 00:36:08 +0000 Subject: [New-bugs-announce] [issue39375] Document os.environ[x] = y and os.putenv() as thread unsafe Message-ID: <1579307768.04.0.648412303004.issue39375@roundup.psfhosted.org> New submission from Gregory P. Smith : The underlying API calls made by os.putenv() and os.environ[name] = value syntax are not thread safe on POSIX systems. POSIX _does not have_ any thread safe way to access the process global environment. In a pure Python program, the GIL prevents this from being an issue. But when embedded in a C/C++ program or using extension modules that launch their own threads from C, those threads could also make the invalid assumption that they can safely _read_ the environment. Which is a race condition when a Python thread is doing a putenv() at the same time. We should document the danger. CPython's os module snapshots a copy of the environment into a dict at import time (during CPython startup). But os.environ[] assignment and os.putenv() modify the actual process global environment in addition to updating this dict. (If an embedded interpreter is launched from a process with other threads already running, even that initial environment reading could be unsafe if the larger application has a thread that wrongly assumes it has exclusive environment access) For people modifying os.environ so that the change is visible to child processes, we can recommend using the env= parameter on subprocess API calls to supply a new environment. A broader issue of should we be modifying the process global environment state at all from os.putenv() and os.environ[] assignment still exists. I'll track that in another issue (to be opened). ---------- assignee: docs at python components: Documentation messages: 360221 nosy: docs at python, gregory.p.smith priority: normal severity: normal status: open title: Document os.environ[x] = y and os.putenv() as thread unsafe versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 17 19:47:44 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Sat, 18 Jan 2020 00:47:44 +0000 Subject: [New-bugs-announce] [issue39376] Avoid modifying the process global environment (not thread safe) Message-ID: <1579308464.24.0.791579553009.issue39376@roundup.psfhosted.org> New submission from Gregory P. Smith : For more context, see https://bugs.python.org/issue39375 which seeks to document the existing caveats. POSIX lacks any APIs to access the process global environment in a thread safe manner. Given this, we could _consider_ preventing os.putenv() and os.environ[x] = y assignment from actually modifying the process global environment. They'd save their changes in our local os.environ underlying dict, set a flag that it was modified, but not modify the global. This would be a visible behavior change and break _some_ class of code. 
:/ Our stdlib codepaths that launch a new process on POSIX could be modified to always pass a newly constructed envp built from os.environ to the exec/spawn APIs. The os.system() API would need to stop using the POSIX system() call in order for that to work. Downside (API breakage): extension module modifications to the environment would not be picked up by subprocesses launched by the Python interpreter. How much of a problem would that be in practice? We may decide to close this as infeasible and just stick with documenting the sorry state of POSIX, and not attempt to offer any safe, crash-free workarounds.

---------- components: Interpreter Core messages: 360222 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: Avoid modifying the process global environment (not thread safe) type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Jan 18 00:27:40 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 18 Jan 2020 05:27:40 +0000 Subject: [New-bugs-announce] [issue39377] json.loads encoding parameter deprecation removal in Python 3.9 Message-ID: <1579325260.22.0.801200056997.issue39377@roundup.psfhosted.org>

New submission from Karthikeyan Singaravelan :

This is a followup of issue33461. The warning says the encoding parameter will be removed in 3.9. It has already been ignored since 3.1, hence I assume this should raise a TypeError in 3.9, removing the deprecation warning. I am finding some projects that use the encoding parameter even though it has no effect. Since Python 3.9 alpha 3 is yet to be released, it would be good to fix the deprecation at this early stage of the release cycle.

---------- components: Library (Lib) messages: 360235 nosy: inada.naoki, serhiy.storchaka, vstinner, xtreak priority: normal severity: normal status: open title: json.loads encoding parameter deprecation removal in Python 3.9 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Jan 18 00:31:27 2020 From: report at bugs.python.org (hai shi) Date: Sat, 18 Jan 2020 05:31:27 +0000 Subject: [New-bugs-announce] [issue39378] partial of PickleState struct should be traversed. Message-ID: <1579325487.94.0.252462243653.issue39378@roundup.psfhosted.org>

New submission from hai shi :

As the subject says, it looks like part of the PickleState struct should be traversed.

---------- components: Extension Modules messages: 360236 nosy: shihai1991 priority: normal severity: normal status: open title: partial of PickleState struct should be traversed. type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Jan 18 02:46:57 2020 From: report at bugs.python.org (Ken Sato) Date: Sat, 18 Jan 2020 07:46:57 +0000 Subject: [New-bugs-announce] [issue39379] sys.path[0] is already absolute path Message-ID: <1579333617.11.0.372896670301.issue39379@roundup.psfhosted.org>

New submission from Ken Sato :

In the "What's New In Python 3.9" document (Doc/whatsnew/3.9.rst), it says

> Python now gets the absolute path of the script filename specified on the command line (ex: python3 script.py): the __file__ attribute of the __main__ module and sys.path[0] become an absolute path, rather than a relative path.

However, I believe sys.path[0] has already been an absolute path in previous versions.
We can probably remove "and sys.path[0]" from the phrase to avoid possible confusions. ---------- assignee: docs at python components: Documentation messages: 360239 nosy: docs at python, ksato9700 priority: normal severity: normal status: open title: sys.path[0] is already absolute path versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 04:57:38 2020 From: report at bugs.python.org (Sebastian G Pedersen) Date: Sat, 18 Jan 2020 09:57:38 +0000 Subject: [New-bugs-announce] [issue39380] ftplib uses latin-1 as default encoding Message-ID: <1579341458.55.0.271160828625.issue39380@roundup.psfhosted.org> Change by Sebastian G Pedersen : ---------- components: Library (Lib) nosy: SebastianGPedersen priority: normal severity: normal status: open title: ftplib uses latin-1 as default encoding type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 08:30:41 2020 From: report at bugs.python.org (Andrew Svetlov) Date: Sat, 18 Jan 2020 13:30:41 +0000 Subject: [New-bugs-announce] [issue39381] Fix get_event_loop documentation Message-ID: <1579354241.45.0.104349162739.issue39381@roundup.psfhosted.org> New submission from Andrew Svetlov : The current documentation says: "If there is no current event loop set in the current OS thread and set_event_loop() has not yet been called, asyncio will create a new event loop and set it as the current one." https://docs.python.org/3.7/library/asyncio-eventloop.html#asyncio.get_event_loop This is not correct, a new loop is created implicitly only for the main thread, all other threads require set_event_loop() call ---------- components: asyncio messages: 360244 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Fix get_event_loop documentation versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 12:30:12 2020 From: report at bugs.python.org (Yonatan Goldschmidt) Date: Sat, 18 Jan 2020 17:30:12 +0000 Subject: [New-bugs-announce] [issue39382] abstract_issubclass() doesn't take bases tuple item ref Message-ID: <1579368612.83.0.483106486083.issue39382@roundup.psfhosted.org> New submission from Yonatan Goldschmidt : I encountered a crash using rpyc. Since rpyc is pure-Python, I guessed the problem is with Python itself. Originally tested on v3.7, the rest of this issue is based on v3.9.0a2 (compiling on an Ubuntu 18.04 docker). I narrowed down the rpyc-based snippet to this: # server side class X: pass x_instance = X() from rpyc.core import SlaveService from rpyc.utils.classic import DEFAULT_SERVER_PORT from rpyc.utils.server import ThreadedServer t = ThreadedServer(SlaveService, port=DEFAULT_SERVER_PORT, reuse_addr=True) t.start() # client side import rpyc conn = rpyc.classic.connect("localhost") x = conn.modules.__main__.x_instance y = x.__class__ issubclass(y, int) Client side gets a SIGSEGV in `_PyObject_LookupAttr`, dereferencing an invalid `tp` pointer read from a posioned `v` object. After some reference count debugging, I found that for the rpyc `y` object (in the client code), accessing `__bases__` returns a tuple with refcount=1, and it has a single item whose refcount is 1 as well. 
abstract_issubclass() calls abstract_get_bases() to get this refcount=1 tuple, and in the fastpath for single inheritance (tuple size = 1) it loads the single item from the tuple (`derived = PyTuple_GET_ITEM(bases, 0)`) and then decrements the refcount on the tuple, effectively deallocating the tuple and the `derived` object (whose only reference was from the tuple). I tried to mimic the Python magic rpyc does to get the same crash without rpyc, and reached the following snippet (which doesn't exhibit the problem): class Meta(type): def __getattribute__(self, attr): if attr == "__bases__": class Base: pass return (Base, ) return type.__getattribute__(self, attr) class X(metaclass=Meta): pass issubclass(X().__class__, int) In this case, the refcount is decremented from 4 to 3 as abstract_issubclass() gets rid of the tuple (instead of from 1 to 0 as happens in the rpyc case). I don't know how rpyc does it. Attached is a patch that solves the problem (takes a ref of the tuple item before releasing the ref of the tuple itself). I'm not sure this change is worth the cost because, well, I don't fully understand the severity of it since I couldn't reproduce it without using rpyc. I assume dynamically-created, unreferenced `__bases__` tuples as I have here are not so common. Anyway, if you do decide it's worth it, I'd be happy to improve the patch (it's quite messy the way this function is structured) and post it to GitHub :) Yonatan ---------- components: Interpreter Core files: abstract_issubclass_refcount_fix.diff keywords: patch messages: 360247 nosy: Yonatan Goldschmidt priority: normal severity: normal status: open title: abstract_issubclass() doesn't take bases tuple item ref type: crash versions: Python 3.9 Added file: https://bugs.python.org/file48851/abstract_issubclass_refcount_fix.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 12:57:43 2020 From: report at bugs.python.org (Peter Bittner) Date: Sat, 18 Jan 2020 17:57:43 +0000 Subject: [New-bugs-announce] [issue39383] Mention Darwin as a potential value for platform.system() Message-ID: <1579370263.63.0.178339292533.issue39383@roundup.psfhosted.org> New submission from Peter Bittner : The platform module's documentation mentions 'Linux', 'Windows' and 'Java' explicitly as values for `platform.system()`. https://docs.python.org/3/library/platform.html#platform.system Given the popularity of macOS among developers, this gives the impression that the module won't detect 'Darwin' as a separate system type; developers may suspect this will be identified also as a "Linux-y" system (or so). Hence, 'Darwin' should be mentioned explicitly as one of the possible values. ---------- assignee: docs at python components: Documentation messages: 360248 nosy: bittner, docs at python priority: normal pull_requests: 17448 severity: normal status: open title: Mention Darwin as a potential value for platform.system() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 14:48:43 2020 From: report at bugs.python.org (Mark Sapiro) Date: Sat, 18 Jan 2020 19:48:43 +0000 Subject: [New-bugs-announce] [issue39384] Email parser creates a message object that can't be flattened as bytes. 
Message-ID: <1579376923.42.0.439623129413.issue39384@roundup.psfhosted.org> New submission from Mark Sapiro : This is similar to https://bugs.python.org/issue32330 but is the opposite behavior. In that issue, the message couldn't be flattened as a string but could be flattened as bytes. Here, the message can be flattened as a string but can't be flattened as bytes. The original message was created by an arguably defective email client that quoted a message containing a utf8 encoded RIGHT SINGLE QUOTATION MARK and utf-8 encoded separately the three bytes resulting in `?**` instead of `?`. That's not really relevant but is just to show how such a message can be generated. The following interactive python session shows the issue. ``` >>> import email >>> msg = email.message_from_string("""From user at example.com Sat Jan 18 04:09:40 2020 ... From: user at example.com ... To: recip at example.com ... Subject: Century Dates for Insurance purposes ... Date: Fri, 17 Jan 2020 20:09:26 -0800 ... Message-ID: <75ccdd72-d71c-407c-96bd-0ca95abcfa03 at email.android.com> ... MIME-Version: 1.0 ... Content-Type: text/plain; charset="utf-8" ... Content-Transfer-Encoding: 8bit ... ... Thursday-Monday will cover both days of staging and then storing goods ... post-century. I think that?**s the way to go. ... ... """) >>> msg.as_bytes() Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.7/email/message.py", line 178, in as_bytes g.flatten(self, unixfrom=unixfrom) File "/usr/local/lib/python3.7/email/generator.py", line 116, in flatten self._write(msg) File "/usr/local/lib/python3.7/email/generator.py", line 181, in _write self._dispatch(msg) File "/usr/local/lib/python3.7/email/generator.py", line 214, in _dispatch meth(msg) File "/usr/local/lib/python3.7/email/generator.py", line 432, in _handle_text super(BytesGenerator,self)._handle_text(msg) File "/usr/local/lib/python3.7/email/generator.py", line 249, in _handle_text self._write_lines(payload) File "/usr/local/lib/python3.7/email/generator.py", line 155, in _write_lines self.write(line) File "/usr/local/lib/python3.7/email/generator.py", line 406, in write self._fp.write(s.encode('ascii', 'surrogateescape')) UnicodeEncodeError: 'ascii' codec can't encode character '\xe2' in position 33: ordinal not in range(128) >>> ``` ---------- components: email messages: 360249 nosy: barry, msapiro, r.david.murray priority: normal severity: normal status: open title: Email parser creates a message object that can't be flattened as bytes. versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 18 15:25:23 2020 From: report at bugs.python.org (Kit Yan Choi) Date: Sat, 18 Jan 2020 20:25:23 +0000 Subject: [New-bugs-announce] [issue39385] Add an assertNoLogs context manager to unittest TestCase Message-ID: <1579379123.37.0.908861475494.issue39385@roundup.psfhosted.org> New submission from Kit Yan Choi : assertLogs is really useful (issue18937). Unfortunately it does not cover the use cases where one wants to ensure no logs are emitted. Similar to assertLogs, we can have a context manager for asserting no logs, something like this?: with assertNoLogs(logger, level): ... If logs are unexpected found, the test would fail with the logs captured included in the error message. Happy to submit a PR if there is interest. 
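Not the eventual stdlib implementation — just a rough sketch of the idea, written as a TestCase mixin (names like _RecordingHandler are made up):

import contextlib
import logging
import unittest


class _RecordingHandler(logging.Handler):
    """Collects every record emitted while attached."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)


class NoLogsTestCase(unittest.TestCase):
    @contextlib.contextmanager
    def assertNoLogs(self, logger=None, level=logging.INFO):
        # Attach a recording handler for the duration of the block and
        # fail if anything at or above `level` was emitted.
        if logger is None or isinstance(logger, str):
            logger = logging.getLogger(logger)
        handler = _RecordingHandler()
        handler.setLevel(level)
        old_level = logger.level
        logger.addHandler(handler)
        logger.setLevel(level)
        try:
            yield
        finally:
            logger.removeHandler(handler)
            logger.setLevel(old_level)
        if handler.records:
            self.fail("Unexpected logs found: %r"
                      % [handler.format(r) for r in handler.records])

Used as `with self.assertNoLogs('foo', 'INFO'): do_something()`, the test would pass only if nothing was logged on logger 'foo' at INFO or above inside the block, and the captured records would be included in the failure message otherwise.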
---------- components: Library (Lib) messages: 360250 nosy: Kit Yan Choi priority: normal severity: normal status: open title: Add an assertNoLogs context manager to unittest TestCase type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 19 01:43:40 2020 From: report at bugs.python.org (John-Mark Gurney) Date: Sun, 19 Jan 2020 06:43:40 +0000 Subject: [New-bugs-announce] [issue39386] getting invalid data from async iterator Message-ID: <1579416220.77.0.52401810488.issue39386@roundup.psfhosted.org> New submission from John-Mark Gurney : If I create a coro from an async iterator, then wait_for it w/ a timeout, but shielded, so it won't get canceled, and then await upon it, it returns invalid data. See the attached test case. The reason I do the following is to make sure that an async iterator that I have written doesn't return data early, and needs to wait till later. If I didn't shield it, then the async iterator would get cancelled, and I don't want this. I'd expect either correct results to be returned, or an exception to be raised, but in this case, and the docs for wait_for ( https://docs.python.org/3/library/asyncio-task.html#asyncio.wait_for ), I'd expect the correct results to be returned. In the attached case, this is the results that I get: $python3.7 asyncitertc.py 3.7.5 (default, Oct 18 2019, 23:59:39) [Clang 7.0.2 (clang-700.1.81)] timed out yielding 1 results: None getting 2: 2 I do not have python 3.8 to test with. ---------- files: asyncitertc.py messages: 360254 nosy: jmg priority: normal severity: normal status: open title: getting invalid data from async iterator Added file: https://bugs.python.org/file48852/asyncitertc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 19 07:59:22 2020 From: report at bugs.python.org (Mattia Verga) Date: Sun, 19 Jan 2020 12:59:22 +0000 Subject: [New-bugs-announce] [issue39387] configparser read_file() with variable Message-ID: <1579438762.29.0.201953236535.issue39387@roundup.psfhosted.org> New submission from Mattia Verga : I'm trying to assign a file object to a variable and then pass this variable to configparse.read_file(), but for some reason that doesn't work: >>> import configparser >>> config = configparser.ConfigParser() >>> config.read_file(open('review-stats.cfg')) >>> config.sections() ['global'] >>> >>> config2 = configparser.ConfigParser() >>> f = open('review-stats.cfg') >>> f <_io.TextIOWrapper name='review-stats.cfg' mode='r' encoding='UTF-8'> >>> config2.read_file(f) >>> config2.sections() [] Shouldn't those results be the same? ---------- components: Library (Lib) messages: 360257 nosy: Mattia Verga priority: normal severity: normal status: open title: configparser read_file() with variable type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 19 12:26:54 2020 From: report at bugs.python.org (Cheryl Sabella) Date: Sun, 19 Jan 2020 17:26:54 +0000 Subject: [New-bugs-announce] [issue39388] IDLE: Changes to keybindings aren't reverted on cancel Message-ID: <1579454814.83.0.214082959531.issue39388@roundup.psfhosted.org> New submission from Cheryl Sabella : In https://bugs.python.org/issue35598#msg332634, Terry mentioned a bug when updating the configuration of a key, but then cancelling out of configdialog. > Change a key binding. 
Cancel. Re-open config dialog. Try to change back. It says original binding is in use -- which it is if one closes IDLE and reopens, or opens a different instance. It seems that cancel is not properly undoing the temporary change. ---------- assignee: terry.reedy components: IDLE messages: 360262 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Changes to keybindings aren't reverted on cancel type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 19 15:23:58 2020 From: report at bugs.python.org (William Chargin) Date: Sun, 19 Jan 2020 20:23:58 +0000 Subject: [New-bugs-announce] [issue39389] gzip metadata fails to reflect compresslevel Message-ID: <1579465438.04.0.91237843457.issue39389@roundup.psfhosted.org> New submission from William Chargin : The `gzip` module properly uses the user-specified compression level to control the underlying zlib stream compression level, but always writes metadata that indicates that the maximum compression level was used. Repro: ``` import gzip blob = b"The quick brown fox jumps over the lazy dog." * 32 with gzip.GzipFile("fast.gz", mode="wb", compresslevel=1) as outfile: outfile.write(blob) with gzip.GzipFile("best.gz", mode="wb", compresslevel=9) as outfile: outfile.write(blob) ``` Run this script, then run `wc -c *.gz` and `file *.gz`: ``` $ wc -c *.gz 82 best.gz 84 fast.gz 166 total $ file *.gz best.gz: gzip compressed data, was "best", last modified: Sun Jan 19 20:15:23 2020, max compression fast.gz: gzip compressed data, was "fast", last modified: Sun Jan 19 20:15:23 2020, max compression ``` The file sizes correctly reflect the difference, but `file` thinks that both archives are written at max compression. The error is that the ninth byte of the header in the output stream is hard-coded to `\002` at Lib/gzip.py:260 (as of 558f07891170), which indicates maximum compression. The correct value to indicate maximum speed is `\004`. See RFC 1952, section 2.3.1: Using GNU `gzip(1)` with `--fast` creates the same output file as the one emitted by the `gzip` module, except for two bytes: the metadata and the OS (the ninth and tenth bytes). ---------- components: Library (Lib) files: repro.py messages: 360268 nosy: wchargin priority: normal severity: normal status: open title: gzip metadata fails to reflect compresslevel versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48853/repro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 19 15:40:46 2020 From: report at bugs.python.org (Manuel Barkhau) Date: Sun, 19 Jan 2020 20:40:46 +0000 Subject: [New-bugs-announce] [issue39390] shutil.copytree - ignore callback behaviour change Message-ID: <1579466446.68.0.178285979921.issue39390@roundup.psfhosted.org> New submission from Manuel Barkhau : In Python 3.8, the types of the parameters to the ignore callable appear to have changed. Previously the `src` parameter was a string and the `names` parameter was a list of strings. Now the `src` parameter appears to be either a `pathlib.Path` or an `os.DirEntry`, while the `names` parameter is a set of strings. I would suggest adding the following to the documentation https://github.com/python/cpython/blob/master/Doc/library/shutil.rst .. 
versionchanged:: 3.8 The types of arguments to *ignore* have changed. The first argument (the directory being visited) is a :class:`os.DirEntry` or a :class:`pathlib.Path`; previously it was a string. The second argument is a set of strings; previously it was a list of strings.
---------- assignee: docs at python components: Documentation, Library (Lib) messages: 360271 nosy: docs at python, mbarkhau priority: normal severity: normal status: open title: shutil.copytree - ignore callback behaviour change type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Jan 19 18:03:57 2020 From: report at bugs.python.org (Peter Occil) Date: Sun, 19 Jan 2020 23:03:57 +0000 Subject: [New-bugs-announce] [issue39391] Nondeterministic Pydoc output on functions that have functions as default parameters Message-ID: <1579475037.48.0.632958566945.issue39391@roundup.psfhosted.org> New submission from Peter Occil : It appears that if a method has default parameters set to functions, as in this example:

def f1():
    pass

def f2(a, b=f1):
    pass

The resulting Pydoc output produces a different, nondeterministic rendering for the f2 method each time it generates the documentation, such as `m1(a, b=<function f1 at 0x...>)`, where the hexadecimal address differs from one run to the next. And this is problematic for version control systems, among other things, especially since this is not a meaningful change to the documentation. One solution may be to write, say, `m1(a, b=f1)` instead.
---------- assignee: docs at python components: Documentation messages: 360278 nosy: Peter Occil, docs at python priority: normal severity: normal status: open title: Nondeterministic Pydoc output on functions that have functions as default parameters type: behavior _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Mon Jan 20 00:14:23 2020 From: report at bugs.python.org (Lijo) Date: Mon, 20 Jan 2020 05:14:23 +0000 Subject: [New-bugs-announce] [issue39392] Python Turtle is not filling alternate overlapping areas of a shape with same color Message-ID: <1579497263.75.0.682788787706.issue39392@roundup.psfhosted.org> New submission from Lijo : Alternate overlapping areas of the shape are not getting filled with the same color; instead they are white. Reproducible code:

from turtle import *
color('black', 'yellow')
begin_fill()
circle(40)
circle(60)
circle(80)
end_fill()

Generated image (Ubuntu, Python 3.7.4): https://ibb.co/jG0bCBz Raised a stackoverflow question https://stackoverflow.com/questions/59811915/python-turtle-is-not-filling-alternate-overlapping-areas-of-a-shape-with-same-co
---------- components: Tkinter messages: 360290 nosy: lijose priority: normal severity: normal status: open title: Python Turtle is not filling alternate overlapping areas of a shape with same color type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Mon Jan 20 04:27:30 2020 From: report at bugs.python.org (plimkilde) Date: Mon, 20 Jan 2020 09:27:30 +0000 Subject: [New-bugs-announce] [issue39393] Misleading error message upon dependent DLL resolution failure Message-ID: <1579512450.11.0.864670825995.issue39393@roundup.psfhosted.org> New submission from plimkilde : Under Windows with Python 3.8+, trying to load a DLL whose dependencies cannot be resolved may produce a misleading error message; the ctypes sketch below shows the kind of call involved.
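A minimal ctypes sketch of the failing scenario (foo.dll and bar.dll are hypothetical names):

# foo.dll can be located, but its dependency bar.dll cannot be found
# anywhere on the DLL search path.
import ctypes
ctypes.WinDLL("foo.dll")
# Raises FileNotFoundError blaming foo.dll, even though it is the
# dependency bar.dll that is actually missing.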
For example, if we are trying to load a library foo.dll that depends on bar.dll, and bar.dll cannot be resolved while foo.dll itself can, Python gives this error message: "FileNotFoundError: Could not find module 'foo.dll'. Try using the full path with constructor syntax." (behavior introduced with PR #12302) Personally, I'd be happy to see a fix that simply adds " (or one of its dependencies)" to the error message. ---------- components: Windows, ctypes messages: 360305 nosy: paul.moore, plimkilde, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Misleading error message upon dependent DLL resolution failure type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 05:44:04 2020 From: report at bugs.python.org (=?utf-8?q?J=C3=BCrgen_Gmach?=) Date: Mon, 20 Jan 2020 10:44:04 +0000 Subject: [New-bugs-announce] [issue39394] DeprecationWarning for `flag not at the start of expression` is cutoff too early Message-ID: <1579517044.97.0.328160626256.issue39394@roundup.psfhosted.org> New submission from J?rgen Gmach : The usage of flags not at the start of an expression is deprecated. Also see "Deprecate the use of flags not at the start of regular expression" / https://bugs.python.org/issue22493 A deprecation warning is issued, but is cutoff at 20 characters. For complex expressions this is way too small. Example ( https://github.com/jedie/python-creole/issues/31 ): current output /home/jugmac00/Projects/bliss_deployment/work/_/home/jugmac00/.batou-shared-eggs/python_creole-1.3.2-py3.7.egg/creole/parser/creol2html_parser.py:48 /home/jugmac00/Projects/bliss_deployment/work/_/home/jugmac00/.batou-shared-eggs/python_creole-1.3.2-py3.7.egg/creole/parser/creol2html_parser.py:48: DeprecationWarning: Flags not at the start of the expression '(?P\n ' (truncated) re.VERBOSE | re.UNICODE output with patched sre_parse.py creole/parser/creol2html_parser.py:51 /home/jugmac00/Projects/python-creole/creole/parser/creol2html_parser.py:51: DeprecationWarning: Flags not at the start of the expression '\n \\| \\s*\n (\n (?P [=][^|]+ ) |\n (?P ( (?P\n \\[\\[\n (?P.+?) \\s*\n ([|] \\s* (?P.+?) \\s*)?\n ]]\n )|\n (?P\n << \\s* (?P\\w+) \\s* (?P.*?) \\s* >>\n (?P(.|\\n)*?)\n <>\n )\n |(?P\n <<(?P \\w+) (?P.*?) \\s* /*>>\n )|(?i)(?P\n {{\n (?P.+?) \\s*\n (\\| \\s* (?P.+?) \\s*)?\n }}\n )|(?P {{{ (?P.*?) }}} ) | [^|])+ )\n ) \\s*\n ' cell_re = re.compile(x, re.VERBOSE | re.UNICODE) (Line number differs because there was a change in the source between these two test runs). I would like to create a pr and remove the limitation to 20 characters completely, but wanted to get feedback before I do so. The deprecation warning was created by Tim Graham - maybe he could elaborate why it was cut at 20 chars at first? 
https://github.com/python/cpython/commit/abf275af5804c5f76fbe10c5cb1dd3d2e4b04c5b ---------- components: Regular Expressions messages: 360306 nosy: ezio.melotti, jugmac00, mrabarnett priority: normal severity: normal status: open title: DeprecationWarning for `flag not at the start of expression` is cutoff too early type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 06:58:54 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 20 Jan 2020 11:58:54 +0000 Subject: [New-bugs-announce] [issue39395] The os module should unset() environment variable at exit Message-ID: <1579521534.52.0.738486244649.issue39395@roundup.psfhosted.org> New submission from STINNER Victor : os.environ[key] = value has to keep internally the string "key=value\0" after putenv("key=value\0") has been called, since the glibc doesn't copy the string. Python has to manage the string memory. Internally, the posix module uses a "posix_putenv_garbage" dictionary mapping key (bytes) to value (bytes). Values are "key=value\0" strings. The bpo-35381 issue converted the os ("posix" in practice) module PEP 384: "Remove all static state from posixmodule": commit b3966639d28313809774ca3859a347b9007be8d2. The _posix_clear() function is now called by _PyImport_Cleanup(). Problem: the glibc is not aware that Python is exiting and that the memory of the environment variable has been released. Next access to environment variables ("environ" C variable, putenv, setenv, unsetenv, ...) can crash. Sometimes, it doesn't crash even if the memory has been released, because free() does not always dig immediately holes in the heap memory (the complex problelm of memory fragmentation). The posix module should notify the glibc that the memory will be released before releasing the memory, to avoid keeping dangling pointers in the "environ" C variable. The following crash in the Elements module is an example of crash introduced by commit b3966639d28313809774ca3859a347b9007be8d2 which introduced this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1791761 ---------- components: Interpreter Core messages: 360309 nosy: vstinner priority: normal severity: normal status: open title: The os module should unset() environment variable at exit versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 07:20:42 2020 From: report at bugs.python.org (Michael Felt) Date: Mon, 20 Jan 2020 12:20:42 +0000 Subject: [New-bugs-announce] [issue39396] AIX: math.nextafter(a, b) breaks AIX bot Message-ID: <1579522842.97.0.0983912597487.issue39396@roundup.psfhosted.org> New submission from Michael Felt : As issue39288 (that introduces this breakage) is closed, opening a new issue. Back from away - and only starting my investigation - and that will probably be slow. Have not done anything with IEEE754 in over 30 years. 
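For reference (not AIX output — just what IEEE 754 double-precision semantics give on a platform where the test passes), the new function exercised by the failing test behaves like this:

import math
# step to the next representable double towards the second argument
print(math.nextafter(1.0, 2.0))   # 1.0000000000000002
print(math.nextafter(1.0, 0.0))   # 0.9999999999999999
print(math.nextafter(0.0, 1.0))   # 5e-324 (smallest positive subnormal)
print(math.nextafter(1.0, 1.0))   # 1.0 (already at the target)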
---------- messages: 360312 nosy: Michael.Felt priority: normal severity: normal status: open title: AIX: math.nextafter(a, b) breaks AIX bot versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 10:57:36 2020 From: report at bugs.python.org (Sebastien Foo) Date: Mon, 20 Jan 2020 15:57:36 +0000 Subject: [New-bugs-announce] [issue39397] Mac : fail to launch Python 3.8 Message-ID: <1579535856.72.0.329847745345.issue39397@roundup.psfhosted.org> New submission from Sebastien Foo : Hello, I am facing an issue with python on mac and there is not much information that I can find to fix it. When I installed the latest cli for Azure (brew upgrade azure-cli) it installed python 3.8 And then the az cli failed and running the python 3.8 failed too with the following error. /usr/local/Cellar/python at 3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/bin/python3.8 Fatal Python error: config_get_locale_encoding: failed to get the locale encoding: nl_langinfo(CODESET) failed Python runtime state: preinitialized I have tried to reinstall python and the azure cli without success. Any help would be much appreciated. ---------- components: Installation messages: 360322 nosy: Sebastien Foo priority: normal severity: normal status: open title: Mac : fail to launch Python 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 12:14:25 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 20 Jan 2020 17:14:25 +0000 Subject: [New-bugs-announce] [issue39398] AMD64 Fedora Rawhide Clang 3.x: C compiler cannot create executables Message-ID: <1579540465.66.0.318494432862.issue39398@roundup.psfhosted.org> New submission from STINNER Victor : "AMD64 Fedora Rawhide Clang 3.x" buildbot worker is currently broken: https://buildbot.python.org/all/#/builders/169/builds/168 clang cannot build (statically linked) binary using UBSan: $ ./configure --prefix '$(PWD)/target' CC=clang LD=clang CFLAGS=-fsanitize=undefined LDFLAGS=-fsanitize=undefined ... checking for gcc... clang checking whether the C compiler works... no configure: error: in `/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.clang-ubsan/build': configure: error: C compiler cannot create executables See `config.log' for more details I reproduced the issue on the worker. The issue comes from a version conflict between clang and compiler-rt packages: compiler-rt-9.0.0-1.fc32.x86_64 uses /usr/lib64/clang/9.0.0/... vs clang-9.0.1-2.fc32.x86_64 uses /usr/lib64/clang/9.0.1/... Charalampos created https://src.fedoraproject.org/rpms/compiler-rt/pull-request/10 to propose to update compiler-rt. ---------- components: Tests keywords: buildbot messages: 360324 nosy: vstinner priority: normal severity: normal status: open title: AMD64 Fedora Rawhide Clang 3.x: C compiler cannot create executables versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 14:46:07 2020 From: report at bugs.python.org (Andrew Aladjev) Date: Mon, 20 Jan 2020 19:46:07 +0000 Subject: [New-bugs-announce] [issue39399] Cross compilation using different libc is broken Message-ID: <1579549567.58.0.700801348732.issue39399@roundup.psfhosted.org> New submission from Andrew Aladjev : Hello. 
I am implementing python cross compilation using "x86_64-pc-linux-musl" toolchain: "x86_64-pc-linux-musl-emerge -v1 python:3.6". Please see the following build log https://gist.github.com/andrew-aladev/e10fa5a8151ffb3c5782edd64ae08b28. We can see the following part: Traceback (most recent call last): File "/usr/x86_64-pc-linux-musl/tmp/portage/dev-lang/python-3.6.9/image//usr/lib/python3.6/compileall.py", line 17, in import struct File "/usr/x86_64-pc-linux-musl/tmp/portage/dev-lang/python-3.6.9/image/usr/lib/python3.6/struct.py", line 13, in from _struct import * ImportError: libc.so: cannot open shared object file: No such file or directory It means that cross compilation of python is not reliable today by design. Python is trying to use PYTHON_FOR_BUILD for loading cross compiled modules. It is not possible in general case. PYTHON_FOR_BUILD should not try to load cross compiled modules. Please see the following gentoo issue https://bugs.gentoo.org/705970. I've attached a gentoo specific workaround there. ---------- components: Build messages: 360330 nosy: puchenyaka priority: normal severity: normal status: open title: Cross compilation using different libc is broken type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 16:02:23 2020 From: report at bugs.python.org (Ronan Pigott) Date: Mon, 20 Jan 2020 21:02:23 +0000 Subject: [New-bugs-announce] [issue39400] pydoc: Use of MANPAGER variable is incorrect Message-ID: <1579554143.61.0.454786328579.issue39400@roundup.psfhosted.org> New submission from Ronan Pigott : pydoc references the value of both MANPAGER and PAGER variables when selecting a command to present the user with documentation. Those values are passed directly to subprocess.Popen. However, MANPAGER may contain arguments that need splitting, and is explicitly documented as such in the `man 1 man` manpage: > If $MANPAGER or $PAGER is set ($MANPAGER is used in preference), its value is used as the name of the program used to display the manual page. [...] The value may be a simple command name or a command with arguments, and may use shell quoting (backslashes, single quotes, or double quotes). It may not use pipes to connect multiple commands; if you need that, use a wrapper script [...] pydoc should perform word splitting a la shlex.split on the values of MANPAGER and PAGER to retain compatibility with man. ---------- components: Library (Lib) messages: 360332 nosy: Brocellous priority: normal severity: normal status: open title: pydoc: Use of MANPAGER variable is incorrect type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 20:02:15 2020 From: report at bugs.python.org (Anthony Wee) Date: Tue, 21 Jan 2020 01:02:15 +0000 Subject: [New-bugs-announce] [issue39401] Unsafe dll loading in getpathp.c on Win7 Message-ID: <1579568535.47.0.867871543099.issue39401@roundup.psfhosted.org> New submission from Anthony Wee : On Win7, running Python in the terminal will attempt to load the "api-ms-win-core-path-l1-1-0.dll" from various paths outside of the Python directory and the C:\Windows\System32 directories. This behavior can be verified using Process Monitor (see attachment). This is happening due to direct calls to LoadLibraryW() in getpathp.c without any "LOAD_LIBRARY_SEARCH*" flags. 
In join(): https://github.com/python/cpython/blob/c02b41b1fb115c87693530ea6a480b2e15460424/PC/getpathp.c#L255 and canonicalize(): https://github.com/python/cpython/blob/c02b41b1fb115c87693530ea6a480b2e15460424/PC/getpathp.c#L291 For both cases, the methods they are trying to load from api-ms-win-core-path-l1-1-0.dll (PathCchCanonicalizeEx and PathCchCombineEx) were introduced in Win8. I tested on Win7 and Win10 and they differ in how they load these api-ms-win-* dll's and whether they appear in process monitor. In Win7, a CreateFile event appears in procmon, while in Win10 it seems like the OS is automatically loading the module from kernelbase.dll. Also in Win7 the loading of api-ms-win-core-path-l1-1-0.dll will fail while in Win10 it succeeds. However, in Win7 when it fails it results in the standard dll search strategy, which will eventually search outside of the secure directories such as the directories in the PATH env var: https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order Each of the problematic methods in cpython have a pattern of attempting to load the dll, then falling back to an older version of the method. Thus in Win7, the dll fails to load and it falls back to the older version of the method. In Win10, the dll load succeeds and we use the new versions of the methods. I'm working on a fix to pass the LOAD_LIBRARY_SEARCH_DEFAULT_DIRS flag to limit to the dll search path scope. ---------- files: python unsafe dll loading.png messages: 360348 nosy: anthonywee priority: normal severity: normal status: open title: Unsafe dll loading in getpathp.c on Win7 Added file: https://bugs.python.org/file48855/python unsafe dll loading.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 21:57:42 2020 From: report at bugs.python.org (Raymond Leiter) Date: Tue, 21 Jan 2020 02:57:42 +0000 Subject: [New-bugs-announce] [issue39402] Consistent use of terms Message-ID: <1579575462.08.0.745933853116.issue39402@roundup.psfhosted.org> New submission from Raymond Leiter : This is my idea of an improvement to the documentation, but I doubt anyone would agree with me. Nevertheless, here it is: There are at least 4 commonly used characters used to group other constructs to clearly call attention to their meaning. 1. [] Brackets 2. {} Braces 3. () Parentheses 4. <> Less than/Greater than The problem I have with the way these symbols are spoken of (in writing as well as oral discourse) is the lack of consistent names for them. Brackets are often referred to as square Brackets, even though there is apparently no alternative such as rectangular Brackets, etc. Braces are often referred to as curly Braces or some times curly Brackets. Parentheses are usually called, correctly, Parentheses, but also referred to as round Brackets. I've never encountered 'round Braces', but I'm hopeful. Less then and Greater then symbols are referred to correctly when they are used in mathematics speak. However, when they are used as a 'grouping' mechanism, they are usually called Angle Brackets -- not Angle Braces. My proposal is this: The most consistent way I can think of for referring to these 4 symbols when used as a 'grouping' mechanism is: 1. [] SQUARE BRACKETS 2. {} CURLY BRACKETS 3. () ROUND BRACKETS 4. <> ANGLE BRACKETS There will be no more Braces, since that term is apparently quite unpopular with most programmers today. 
The 'shape' modifiers (SQUARE, CURLY, ROUND, ANGLE), applied to the common term BRACKETS, would appear to be much more consistent than current usage. I'm well aware of the difficulty in garnering support for this kind of an 'improvement', but I felt it needed said. ---------- assignee: docs at python components: Documentation messages: 360349 nosy: Raymond Leiter, docs at python priority: normal severity: normal status: open title: Consistent use of terms type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 20 23:51:26 2020 From: report at bugs.python.org (wh1te r4bb1t) Date: Tue, 21 Jan 2020 04:51:26 +0000 Subject: [New-bugs-announce] [issue39403] Objects equal (assertEqual return True) but behave differently Message-ID: <1579582286.19.0.437122488407.issue39403@roundup.psfhosted.org> New submission from wh1te r4bb1t : Here is a code highlighting a very strange behavior. This has been noticed in python 3.7, 3.8 and 3.9.0a2 def function(input_list, a='x'): [input_list[i].append(a) for i in range(len(input_list))] return input_list list1 = [[0], [0], [0]] list2 = [[0]] * 3 list1 == list2 # return True function(list1) # return [[0, 'x'], [0, 'x'], [0, 'x']] function(list2) # return [[0, 'x', 'x', 'x'], [0, 'x', 'x', 'x'], [0, 'x', 'x', 'x']] list1 == list2 # return false ---------- messages: 360351 nosy: wh1te r4bb1t priority: normal severity: normal status: open title: Objects equal (assertEqual return True) but behave differently type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 02:01:51 2020 From: report at bugs.python.org (Archana Pandey) Date: Tue, 21 Jan 2020 07:01:51 +0000 Subject: [New-bugs-announce] [issue39404] Pexpect : setwinsize() not working in SLES 12.4 kernel 4.12.14-94.41-default Message-ID: <1579590111.47.0.0702544598598.issue39404@roundup.psfhosted.org> New submission from Archana Pandey : use of setwinsize function returns empty if the values of rows and cols differes from(24,80). Issue occurs only on SLES 12.4. sample code: #!/usr/bin/env python from pexpect import pxssh try: s = pxssh.pxssh() hostname = 'someIp' username = 'username' password = 'password' s.login(hostname, username, password) s.setwinsize(1000,1000) # setting default winsize works s.sendline('uname -r') # run a command s.prompt() # match the prompt print(s.before) # print everything before the prompt. s.logout() except pxssh.ExceptionPxssh as e: print("pxssh failed on login.") print(e) ---------- components: Library (Lib) messages: 360356 nosy: archi-pandey priority: normal severity: normal status: open title: Pexpect : setwinsize() not working in SLES 12.4 kernel 4.12.14-94.41-default type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 02:52:12 2020 From: report at bugs.python.org (Sankar) Date: Tue, 21 Jan 2020 07:52:12 +0000 Subject: [New-bugs-announce] [issue39405] Using relative path as a --prefix during configure. Message-ID: <1579593132.28.0.601505757933.issue39405@roundup.psfhosted.org> New submission from Sankar : Is it possible to provide the relative path as a --prefix during configure? I want to compile a python as a distributive package, so using an absolute path won't help. It should have a relative path like "../../Python". 
The compiled python needs to have a relative everywhere, is it possible to achieve this? ---------- components: Build messages: 360360 nosy: Sankark priority: normal severity: normal status: open title: Using relative path as a --prefix during configure. versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 03:30:02 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 21 Jan 2020 08:30:02 +0000 Subject: [New-bugs-announce] [issue39406] Implement os.putenv() with setenv() if available Message-ID: <1579595402.07.0.326081279431.issue39406@roundup.psfhosted.org> New submission from STINNER Victor : Currently, os.putenv() is always implemented with putenv(). The problem is that putenv(str) puts directly the string into the environment, the string is not copied. So Python has to keep track of this memory. In Python 3.9, this string is now cleared at Python exit, without unsetting the environment variable which cause bpo-39395 crash. I propose to implement os.putenv() with setenv() if available, which avoids bpo-39395 on platforms providing setenv(). ---------- components: Library (Lib) messages: 360365 nosy: vstinner priority: normal severity: normal status: open title: Implement os.putenv() with setenv() if available versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 04:20:46 2020 From: report at bugs.python.org (James) Date: Tue, 21 Jan 2020 09:20:46 +0000 Subject: [New-bugs-announce] [issue39407] Bitfield Union does not work for bit widths greater than 8 bits Message-ID: <1579598446.4.0.992885601922.issue39407@roundup.psfhosted.org> New submission from James : Creating a Bitfield from a ctypes union and structure results in unexpected behaviour. It seems when you set the bit-width of a structure field to be greater than 8 bits it results in the subsequent bits being set to zero. class BitFieldStruct(ctypes.LittleEndianStructure): _fields_ = [ ("long_field", C_UINT32, 29), ("short_field_0", C_UINT8, 1), ("short_field_1", C_UINT8, 1), ("short_field_2", C_UINT8, 1), ] class BitField(ctypes.Union): _anonymous_ = ("fields",) _fields_ = [ ("fields", BitFieldStruct), ("as32bit", C_UINT32) ] def test_bit_field_union(): f = BitField() f.as32bit = int.from_bytes([255, 255, 255, 255], byteorder='little') assert f.long_field == int.from_bytes([255, 255, 255, 31], byteorder='little') assert f.short_field_0 == 1 assert f.short_field_1 == 1 assert f.short_field_2 == 1 test_bit_field_union() # this call will fail with an assertion error Equivalent C which does not fail https://rextester.com/FWV78514 I'm running on Ubuntu 16.04 with python3.6 but I have tested on 3.5, 3.7 and on repl.it with the same behaviour. It seems as though setting any of the struct fields to be greater than 8 bit width results in any of the following fields being set to zero. 
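For what it's worth, a workaround that appears to behave as expected is to declare every bit field with the same base type, so that all the bits land in a single 32-bit storage unit — a sketch under that assumption, not a fix for the underlying allocation rule:

import ctypes

class BitFieldStruct(ctypes.LittleEndianStructure):
    _fields_ = [
        ("long_field", ctypes.c_uint32, 29),
        ("short_field_0", ctypes.c_uint32, 1),   # c_uint32 instead of c_uint8
        ("short_field_1", ctypes.c_uint32, 1),
        ("short_field_2", ctypes.c_uint32, 1),
    ]

class BitField(ctypes.Union):
    _anonymous_ = ("fields",)
    _fields_ = [("fields", BitFieldStruct), ("as32bit", ctypes.c_uint32)]

f = BitField()
f.as32bit = 0xFFFFFFFF
assert f.long_field == 0x1FFFFFFF
assert f.short_field_0 == f.short_field_1 == f.short_field_2 == 1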
---------- components: ctypes files: python_struct_union_bug.py messages: 360372 nosy: jschulte priority: normal severity: normal status: open title: Bitfield Union does not work for bit widths greater than 8 bits type: behavior versions: Python 3.5, Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48856/python_struct_union_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 04:32:50 2020 From: report at bugs.python.org (Sebastian Noack) Date: Tue, 21 Jan 2020 09:32:50 +0000 Subject: [New-bugs-announce] [issue39408] Add support for SQLCipher Message-ID: <1579599170.77.0.875579101001.issue39408@roundup.psfhosted.org> New submission from Sebastian Noack : SQLCipher is industry-standard technology for managing an encrypting SQLite databases. It has been implemented as a fork of SQLite3. So the sqlite3 corelib module would build as-is against it. But rather than a fork (of this module), I'd rather see integration of SQLCiper in upstream Python. I'm happy to volunteer if this changes have any chance of landing. By just adding 2 lines to the cpython repository (and changing ~10 lines), I could make SQLCipher (based on the current sqlite3 module) available as a separate module (e.g. sqlcipher or sqlite3.cipher). However, IMO the ideal interface would be sqlilte3.connect(..., sqlcipher=True). Any thoughts? ---------- messages: 360373 nosy: Sebastian.Noack priority: normal severity: normal status: open title: Add support for SQLCipher _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 06:23:36 2020 From: report at bugs.python.org (Michael Felt) Date: Tue, 21 Jan 2020 11:23:36 +0000 Subject: [New-bugs-announce] [issue39409] AIX: FAIL: test_specific_values (test.test_cmath.CMathTests) Message-ID: <1579605816.87.0.587336321661.issue39409@roundup.psfhosted.org> New submission from Michael Felt : Per message: https://bugs.python.org/issue39396#msg360362 opening new issue. Research (as requested) to follow. ---------- components: Tests messages: 360389 nosy: Michael.Felt, vstinner priority: normal severity: normal status: open title: AIX: FAIL: test_specific_values (test.test_cmath.CMathTests) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 07:53:42 2020 From: report at bugs.python.org (Igor Ceh) Date: Tue, 21 Jan 2020 12:53:42 +0000 Subject: [New-bugs-announce] [issue39410] CentOS 6.10 SQLite 3.30.1 - _sqlite3 builds successfully but is removed because it cannot be imported. 
Message-ID: <1579611222.47.0.288920213647.issue39410@roundup.psfhosted.org> New submission from Igor Ceh : While trying to build Python 3.8.1 from source with Sqlite 3.30.1 on a CentOS 6.10 I get the following warning: *** WARNING: renaming "_sqlite3" since importing it failed: build/lib.linux-x86_64-3.8/_sqlite3.cpython-38-x86_64-linux-gnu.so: undefined symbol: sqlite3_close_v2 The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time Failed to build these modules: _uuid Following modules built successfully but were removed because they could not be imported: _sqlite3 If I try to import sqlite in python: [vagrant at centos6 Python-3.8.1]$ ./python Python 3.8.1 (default, Jan 21 2020, 04:22:59) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 Traceback (most recent call last): File "", line 1, in File "/usr/local/src/Python-3.8.1/Lib/sqlite3/__init__.py", line 23, in from sqlite3.dbapi2 import * File "/usr/local/src/Python-3.8.1/Lib/sqlite3/dbapi2.py", line 27, in from _sqlite3 import * ModuleNotFoundError: No module named '_sqlite3' >>> Also tried building with SQLite version 3.7.9 from atomic repository with same error. ---------- messages: 360395 nosy: cehovski priority: normal severity: normal status: open title: CentOS 6.10 SQLite 3.30.1 - _sqlite3 builds successfully but is removed because it cannot be imported. type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 08:29:29 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Tue, 21 Jan 2020 13:29:29 +0000 Subject: [New-bugs-announce] [issue39411] pyclbr rewrite on AST Message-ID: <1579613369.33.0.216761296366.issue39411@roundup.psfhosted.org> New submission from Batuhan Taskaya : pyclbr currently uses token streams to analyze but it can be alot simpler with usage of AST. There are already many flaws, including some comments about limitations of this token stream processing. I have a draft about this. Initial PR wont change any behavior, it will just make code much simpler with the usage of AST (just an addition to Function about handling of async functions, is_async field). If agreed I can propose a second PR (or append the inital one) that will enhance Function/Class objects with various identifiers (like keywords, metaclasses, end position information etc.). The second PR will be alot easier to do thanks to AST. ---------- components: Library (Lib) messages: 360397 nosy: Batuhan Taskaya, pablogsal priority: normal severity: normal status: open title: pyclbr rewrite on AST versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 09:34:24 2020 From: report at bugs.python.org (ccetsii) Date: Tue, 21 Jan 2020 14:34:24 +0000 Subject: [New-bugs-announce] [issue39412] Install launcher for all users Message-ID: <1579617264.58.0.64528608209.issue39412@roundup.psfhosted.org> New submission from ccetsii : In Python 3.8.1 (32 bits) Windows Installer, the first page show a checkbox for "install launcher for all users (recommended)", but software install in user directory (see screenchot attachment). Casually, in "customize installation" option, exist other "Install for all users" options, and that option works correctly. 
---------- files: python_installer.png messages: 360401 nosy: ccetsii priority: normal severity: normal status: open title: Install launcher for all users type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48857/python_installer.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 09:39:20 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 21 Jan 2020 14:39:20 +0000 Subject: [New-bugs-announce] [issue39413] Implement os.unsetenv() on Windows Message-ID: <1579617560.05.0.467186991348.issue39413@roundup.psfhosted.org> New submission from STINNER Victor : os.unsetenv() is documented to be available on Windows, but it's not. In Python 3.8, "del os.environ[key]" is implemented as: os.putenv(key.upper(), "") Attached PR implements it using SetEnvironmentVariableW(name, NULL). ---------- components: Library (Lib), Windows messages: 360402 nosy: paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Implement os.unsetenv() on Windows versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 12:07:47 2020 From: report at bugs.python.org (Reece Dunham) Date: Tue, 21 Jan 2020 17:07:47 +0000 Subject: [New-bugs-announce] [issue39414] Multiprocessing resolving object as None Message-ID: <1579626467.92.0.021176656048.issue39414@roundup.psfhosted.org> New submission from Reece Dunham : Exception ignored in: Traceback (most recent call last): File "/root/conda/lib/python3.8/multiprocessing/pool.py", line 268, in __del__ File "/root/conda/lib/python3.8/multiprocessing/queues.py", line 362, in put AttributeError: 'NoneType' object has no attribute 'dumps' Pretty sure that shouldn't be None. ---------- components: Library (Lib) messages: 360407 nosy: rdil priority: normal severity: normal status: open title: Multiprocessing resolving object as None versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 12:09:17 2020 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 21 Jan 2020 17:09:17 +0000 Subject: [New-bugs-announce] [issue39415] Remove unused code from longobject.c complexobject.c floatobject.c Message-ID: <1579626557.12.0.506722213055.issue39415@roundup.psfhosted.org> New submission from Dong-hee Na : For example, long_is_finite has not been used for 12 years. ---------- assignee: corona10 messages: 360408 nosy: corona10 priority: normal severity: normal status: open title: Remove unused code from longobject.c complexobject.c floatobject.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 21 21:35:29 2020 From: report at bugs.python.org (Karl O. Pinc) Date: Wed, 22 Jan 2020 02:35:29 +0000 Subject: [New-bugs-announce] [issue39416] Document default numeric string formats Message-ID: <1579660529.58.0.0341841917197.issue39416@roundup.psfhosted.org> New submission from Karl O. Pinc : Seems sane to put _some_ restrictions on the string representations of the Numeric classes. This would be a change to the Python language specification. Suggestions made in a pull request. 
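For background, the de-facto behaviour being discussed: since Python 3.1, repr() of a float produces the shortest decimal string that round-trips to the same value (and str() gives the same result on current versions), for example:

>>> repr(0.1)
'0.1'
>>> 0.1 + 0.2
0.30000000000000004
>>> str(1/3)
'0.3333333333333333'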
See the email thread: Subject: Documenting Python's float.__str__() https://mail.python.org/archives/list/python-dev at python.org/thread/FV22TKT3S2Q3P7PNN6MCXI6IX3HRRNAL/ ---------- assignee: docs at python components: Documentation messages: 360442 nosy: docs at python, kop priority: normal severity: normal status: open title: Document default numeric string formats _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 04:32:01 2020 From: report at bugs.python.org (Angel Cervera Claudio) Date: Wed, 22 Jan 2020 09:32:01 +0000 Subject: [New-bugs-announce] [issue39417] Link to "Python Packaging User Guide: Creating and using virtual environments" is broken Message-ID: <1579685521.64.0.0587075259852.issue39417@roundup.psfhosted.org> New submission from Angel Cervera Claudio : The link "See also: Python Packaging User Guide: Creating and using virtual environments" Is broken. The problem is in line 30 of https://github.com/python/cpython/blob/3.8/Doc/library/venv.rst I don't know the right link, so I can not fix it. I any one provive me the right link, I can create a PR. ---------- assignee: docs at python components: Documentation messages: 360454 nosy: angelcervera, docs at python priority: normal severity: normal status: open title: Link to "Python Packaging User Guide: Creating and using virtual environments" is broken versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 07:16:56 2020 From: report at bugs.python.org (Natalie Amery) Date: Wed, 22 Jan 2020 12:16:56 +0000 Subject: [New-bugs-announce] [issue39418] str.strip() should have a means of adding to the default behaviour Message-ID: <1579695416.2.0.0994259543651.issue39418@roundup.psfhosted.org> New submission from Natalie Amery : If I want to remove the default set of 'whitespace' characters plus something else from a string there's currently no way to cleanly specify that. In addition there's no way to programatically acquire what characters are considered whitespace so you can't call split with an argument constructed of existing whitespace characters with the new things you need. As an example you could have an additionally= parameter such that: " ( 123 ) ".strip() gives "( 123 )" and " ( 123 ) ".strip(additionally="()") gives "123" I've not given that any thought so it's probably not the best way of solving the problem. ---------- messages: 360459 nosy: senji priority: normal severity: normal status: open title: str.strip() should have a means of adding to the default behaviour type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 09:11:07 2020 From: report at bugs.python.org (Gerrit Holl) Date: Wed, 22 Jan 2020 14:11:07 +0000 Subject: [New-bugs-announce] [issue39419] Core dump when trying to use workaround for custom warning category (Fatal Python error: init_sys_streams: can't initialize sys standard streams) Message-ID: <1579702267.6.0.24270262544.issue39419@roundup.psfhosted.org> New submission from Gerrit Holl : Pythons commandline warning filter cannot currently handle custom warning categories. 
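For comparison, installing the equivalent filter from Python code does work; a minimal sketch, assuming pandas is importable:

import warnings
from pandas.errors import DtypeWarning

warnings.filterwarnings("error", category=DtypeWarning)

The question here is how to achieve the same through the -W command line option.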
See https://bugs.python.org/issue22543 I tried a workaround as suggested on Stack Overflow to filter on a warning category defined in pandas.errors.__init__.py (DtypeWarning): PYTHONPATH=/media/nas/x21324/miniconda3/envs/py37e/lib/python3.7/site-packages/pandas/ python -Werror::errors.DtypeWarning However, this results in a Fatal Python error and core dump: Fatal Python error: init_sys_streams: can't initialize sys standard streams AttributeError: module 'io' has no attribute 'OpenWrapper' Current thread 0x00007f7bb3be76c0 (most recent call first): Aborted (core dumped) ---------- messages: 360469 nosy: Gerrit.Holl priority: normal severity: normal status: open title: Core dump when trying to use workaround for custom warning category (Fatal Python error: init_sys_streams: can't initialize sys standard streams) type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 10:10:19 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 22 Jan 2020 15:10:19 +0000 Subject: [New-bugs-announce] [issue39420] Windows: convertenviron() doesn't parse environment variables properly Message-ID: <1579705819.39.0.675106611885.issue39420@roundup.psfhosted.org> New submission from STINNER Victor : os.environ is created by convertenviron() of posixmodule.c. The Windows implementation calls _wgetenv(L"") to initialize _wenviron, and then parses the _wenviron string. The _wenviron string is parsed by search for the first "=" character to split between the variable name and the variable value. For example, "USER=vstinner" is parsed as name="USER" and value="vstinner". The problem is that the _wputenv() function allows to insert variable names containing the "=" character (but reject names starting with "=" character). Python can inherit an environment with a name containing "=". One solution can be to use GetEnvironmentStringsW() which uses null characters to separate variable name and variable value. It returns a string like "name1\0value1\0name2\0value2\0\0": the string ends with a null character as well, to mark the end of the list. https://docs.microsoft.com/en-us/windows/win32/api/processenv/nf-processenv-getenvironmentstrings?redirectedfrom=MSDN Python 3.8 *explicitly* rejects variable names containing "=", at least on Windows, likely to workaround this issue. But another program can inject such variable in the environment. 
Example with a Python modified to not reject explicitly "=" in the variable name:
---
import subprocess, os, sys
os.putenv("victor=", "secret")
code = """import os; print(f"victor: {os.getenv('victor')!r}"); print(f"victor=: {os.getenv('victor=')!r}")"""
subprocess.run([sys.executable, "-c", code])
---
Output:
---
victor: '=secret'
victor=: None
---
Expected output:
---
victor: None
victor=: '=secret'
---
---------- components: Library (Lib) messages: 360473 nosy: vstinner priority: normal severity: normal status: open title: Windows: convertenviron() doesn't parse environment variables properly versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 10:11:46 2020 From: report at bugs.python.org (Dk0n9) Date: Wed, 22 Jan 2020 15:11:46 +0000 Subject: [New-bugs-announce] [issue39421] Use-after-free in heappushpop() of heapq module Message-ID: <1579705906.5.0.185144914651.issue39421@roundup.psfhosted.org> Change by Dk0n9 : ---------- components: Extension Modules nosy: dk0n9 priority: normal severity: normal status: open title: Use-after-free in heappushpop() of heapq module type: crash versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 10:56:03 2020 From: report at bugs.python.org (Cleland Loszewski) Date: Wed, 22 Jan 2020 15:56:03 +0000 Subject: [New-bugs-announce] [issue39422] datetime.datetime.strptime incorrectly interpreting format '%Y%m%d' Message-ID: <1579708563.29.0.511970364899.issue39422@roundup.psfhosted.org> New submission from Cleland Loszewski :
```python
from datetime import datetime

print(datetime.strptime('2020016', '%Y%m%d'))
print(datetime.strptime('20200116', '%Y%m%d'))
```
The former string has a format that does not match '%Y%m%d', but the latter does. Both report the same datetime output. ---------- components: Library (Lib) messages: 360480 nosy: losze1cj priority: normal severity: normal status: open title: datetime.datetime.strptime incorrectly interpreting format '%Y%m%d' type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 11:05:55 2020 From: report at bugs.python.org (mapf) Date: Wed, 22 Jan 2020 16:05:55 +0000 Subject: [New-bugs-announce] [issue39423] Process finished with exit code -1073741819 (0xC0000005) when trying to access data from a pickled file Message-ID: <1579709155.26.0.28815846735.issue39423@roundup.psfhosted.org> New submission from mapf : I have a program where I create some relatively nested data and, within the same session, I have no issues accessing the data. I then use pickle.dump() with pickle.HIGHEST_PROTOCOL to save the data so I can access it in a later session. These files are usually over 2 GB in size since they contain many images in the form of numpy arrays, and I have never had any issues loading them. However, there is one data structure that is a structured numpy array of type "a" with currently 16 different dtypes, and they can all be accessed in the same session where they were created without any problems, sometimes even after dumping and loading the data again. They can also all be accessed after they have been loaded in a different session, with the exception of one field. This field contains rather nested data, which is why I thought that this might be the issue, but I have honestly no idea.
Each entry in this field is a list of length 20, whose entries are either None or a 1-d slice of "()"-shape from another structured array of type "b". This slice in turn has 37 different dtypes, most of which are either int, float or bool. But there is one entry which is a list that can contain several dicts. The entries of this dict are floats; however, one can be a slice of type "b" again, so there is some cross-referencing going on. As a test I already removed this entry, though, and it still crashed. My point is, the data that is stored is not of some crazy custom type. All the data is either of type bool, int, float, list, dict or numpy.array. As I said, ALL the other stored data can be accessed without any problems. It is only this one field that can only be accessed during the same session it was created in. My program runs using a PyQt5 GUI and I use PyCharm as the editor. I have already read that, in the past, these two in combination seemed to cause this error rather frequently; maybe that has something to do with it. I have already tried reinstalling my Python distribution as well as PyCharm, and running the code on a different machine, to no avail. I am also pretty certain that this used to work just last week. I didn't change my code, but now it doesn't work anymore.

Relevant specs:
Windows 10 Home 64 bit
PyCharm 2019.3.1 Professional
Python 3.7.4 via Anaconda
Numpy 1.16.5
PyQt 5.9.2

---------- messages: 360481 nosy: mapf priority: normal severity: normal status: open title: Process finished with exit code -1073741819 (0xC0000005) when trying to access data from a pickled file type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 11:56:21 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 22 Jan 2020 16:56:21 +0000 Subject: [New-bugs-announce] [issue39424] test_signal: test_pidfd_send_signal() uses deprecated assertRaisesRegexp() method Message-ID: <1579712181.32.0.0974422649306.issue39424@roundup.psfhosted.org> New submission from STINNER Victor : test_signal.test_pidfd_send_signal() should use assertRaisesRegex() rather than assertRaisesRegexp():

$ ./python -Werror -m test -v test_signal -m test_pidfd_send_signal
(...)
======================================================================
ERROR: test_pidfd_send_signal (test.test_signal.PidfdSignalTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/vstinner/python/master/Lib/test/test_signal.py", line 1292, in test_pidfd_send_signal
    with self.assertRaisesRegexp(TypeError, "^siginfo must be None$"):
  File "/home/vstinner/python/master/Lib/unittest/case.py", line 1390, in deprecated_func
    warnings.warn(
DeprecationWarning: Please use assertRaisesRegex instead.
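The change itself is a one-name rename; a minimal sketch of the current spelling, using a generic test case rather than the real test_signal code:

    import unittest

    class Demo(unittest.TestCase):
        def test_regex(self):
            # assertRaisesRegexp is a deprecated alias of assertRaisesRegex;
            # the current name behaves identically.
            with self.assertRaisesRegex(TypeError, "^boom$"):
                raise TypeError("boom")

    if __name__ == "__main__":
        unittest.main()
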
---------- components: Tests keywords: newcomer friendly messages: 360486 nosy: vstinner priority: normal severity: normal status: open title: test_signal: test_pidfd_send_signal() uses deprecated assertRaisesRegexp() method versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 12:16:03 2020 From: report at bugs.python.org (Dong-hee Na) Date: Wed, 22 Jan 2020 17:16:03 +0000 Subject: [New-bugs-announce] [issue39425] list.count performance regression Message-ID: <1579713363.79.0.989493842853.issue39425@roundup.psfhosted.org> New submission from Dong-hee Na :

./python.exe -m pyperf timeit -s 'a = [1]*100' 'a.count(1)'

Current Master: Mean +- std dev: 1.05 us +- 0.03 us
My patch: Mean +- std dev: 423 ns +- 11 ns

This is a side effect of PR 17022. ---------- assignee: corona10 messages: 360488 nosy: corona10 priority: normal severity: normal status: open title: list.count performance regression type: performance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 13:40:01 2020 From: report at bugs.python.org (Mark Dickinson) Date: Wed, 22 Jan 2020 18:40:01 +0000 Subject: [New-bugs-announce] [issue39426] Pickler docstring misstates default and highest protocols Message-ID: <1579718401.91.0.32620424207.issue39426@roundup.psfhosted.org> New submission from Mark Dickinson : From the pickle.Pickler docstring:

> The optional *protocol* argument tells the pickler to use the given
> protocol; supported protocols are 0, 1, 2, 3 and 4. The default
> protocol is 3; a backward-incompatible protocol designed for Python 3.

That's out of date since Python 3.8, where the default protocol is 4 and the highest available is 5. For future-proofing, it may be worth rewording the docstring to refer directly to the DEFAULT_PROTOCOL and HIGHEST_PROTOCOL constants. ---------- assignee: docs at python components: Documentation messages: 360497 nosy: docs at python, mark.dickinson, pitrou priority: normal severity: normal status: open title: Pickler docstring misstates default and highest protocols versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 17:26:44 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 22 Jan 2020 22:26:44 +0000 Subject: [New-bugs-announce] [issue39427] python -X options are not documented in the CLI --help Message-ID: <1579732004.75.0.619158264246.issue39427@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : When running python --help there is no documentation on the -X opt options available. It just says:

-X opt : set implementation-specific option

without explaining what 'opt' can be.
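For reference, a few -X options that already exist and are currently described only in the full documentation and man page, shown as they would be used (the script name is illustrative):

    $ python -X faulthandler myscript.py      # dump the traceback on fatal errors
    $ python -X importtime -c "import json"   # print per-import timing to stderr
    $ python -X dev myscript.py               # enable the Python Development Mode
    $ python -X utf8 myscript.py              # force UTF-8 mode
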
---------- assignee: pablogsal components: Interpreter Core messages: 360516 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: python -X options are not documented in the CLI --help versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 18:44:18 2020 From: report at bugs.python.org (Carl Meyer) Date: Wed, 22 Jan 2020 23:44:18 +0000 Subject: [New-bugs-announce] [issue39428] allow creation of "symtable entry" objects from Python Message-ID: <1579736658.27.0.540256787328.issue39428@roundup.psfhosted.org> New submission from Carl Meyer : Currently the "symtable entry" extension type (PySTEntry_Type) defined in `Python/symtable.c` defines no `tp_new` or `tp_init`, making it impossible to create instances of this type from Python code. I have a use case for pickling symbol tables (as part of a cache subsystem for a static analyzer), but the inability to create instances of symtable entries from attributes makes this impossible, even with custom pickle support via dispatch_table or copyreg. If the idea of making instances of this type creatable from Python is accepted in principle, I can submit a PR for it. Thanks! ---------- messages: 360522 nosy: carljm priority: normal severity: normal status: open title: allow creation of "symtable entry" objects from Python type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 22 20:03:16 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 23 Jan 2020 01:03:16 +0000 Subject: [New-bugs-announce] [issue39429] Add a new "Python Development Mode" page to the documentation Message-ID: <1579741396.58.0.897370884771.issue39429@roundup.psfhosted.org> New submission from STINNER Victor : Currently, the Python Development Mode is under-documented. The documentation lives in the documentation of the -X command line option which doesn't give much space to elaborate on effects of the development mode, how it should be used, suggest ways to get more information, etc. Attached PR adds a new "Python Development Mode" page to the documentation to suggest more advices and add examples. ---------- assignee: docs at python components: Documentation messages: 360527 nosy: docs at python, vstinner priority: normal severity: normal status: open title: Add a new "Python Development Mode" page to the documentation versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 06:15:38 2020 From: report at bugs.python.org (Maciej Gol) Date: Thu, 23 Jan 2020 11:15:38 +0000 Subject: [New-bugs-announce] [issue39430] tarfile.open(mode="r") race condition when importing lzma Message-ID: <1579778138.71.0.872285273373.issue39430@roundup.psfhosted.org> New submission from Maciej Gol : Hey guys, We have a component that archives and unarchives multiple files in separate threads that started to misbehave recently. We have noticed a bunch of `AttributeError: module 'lzma' has no attribute 'LZMAFile'` errors, which are unexpected because our python is not compiled with LZMA support. 
What is unfortunate is that, given the traceback:

Traceback (most recent call last):
  File "test.py", line 18, in
    list(pool.map(test_lzma, range(100)))
  File "/opt/lang/python37/lib/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
    yield fs.pop().result()
  File "/opt/lang/python37/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/opt/lang/python37/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/opt/lang/python37/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "test.py", line 14, in test_lzma
    tarfile.open(fileobj=buf, mode="r")
  File "/opt/lang/python37/lib/python3.7/tarfile.py", line 1573, in open
    return func(name, "r", fileobj, **kwargs)
  File "/opt/lang/python37/lib/python3.7/tarfile.py", line 1699, in xzopen
    fileobj = lzma.LZMAFile(fileobj or name, mode, preset=preset)
AttributeError: module 'lzma' has no attribute 'LZMAFile'

the last line of the traceback is right AFTER this block (tarfile.py:1694):

    try:
        import lzma
    except ImportError:
        raise CompressionError("lzma module is not available")

Importing lzma in ipython fails properly:

In [2]: import lzma
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in
----> 1 import lzma

/opt/lang/python37/lib/python3.7/lzma.py in
     25 import io
     26 import os
---> 27 from _lzma import *
     28 from _lzma import _encode_filter_properties, _decode_filter_properties
     29 import _compression

ModuleNotFoundError: No module named '_lzma'

When trying to debug the problem, we have noticed it's not deterministic. In order to reproduce it, we have created a test script that repeatedly writes an archive to BytesIO and then reads from it. Using it with 5 threads and 100 calls gives very good chances of reproducing the issue. For us it was almost every time. The race condition occurs on both Python 3.7.3 and 3.7.6. The test script used to reproduce it is attached. I know that the test script writes uncompressed archives and during opening tries to guess the compression. But I guess this is a legitimate scenario and should not matter in this case. ---------- files: test.py messages: 360551 nosy: Maciej Gol priority: normal severity: normal status: open title: tarfile.open(mode="r") race condition when importing lzma type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48860/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 06:40:24 2020 From: report at bugs.python.org (Shanavas M) Date: Thu, 23 Jan 2020 11:40:24 +0000 Subject: [New-bugs-announce] [issue39431] Mention nonlocal too in assignment quirk Message-ID: <1579779624.25.0.960153197022.issue39431@roundup.psfhosted.org> New submission from Shanavas M : The doc says "A special quirk of Python is that -- if no :keyword:`global` statement is in effect -- assignments to names always go into the innermost scope." and should instead say "A special quirk of Python is that -- if no :keyword:`global` or :keyword:`nonlocal` statement is in effect -- assignments to names always go into the innermost scope."
nonlocal should also be mentioned along with global. ---------- assignee: docs at python components: Documentation messages: 360553 nosy: docs at python, shanavasm priority: normal pull_requests: 17528 severity: normal status: open title: Mention nonlocal too in assignment quirk versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 08:48:15 2020 From: report at bugs.python.org (da-woods) Date: Thu, 23 Jan 2020 13:48:15 +0000 Subject: [New-bugs-announce] [issue39432] Distutils generates the wrong export symbol for unicode module names Message-ID: <1579787295.61.0.899007188072.issue39432@roundup.psfhosted.org> New submission from da-woods : Distutils generates "export symbols" for extension modules to help ensure that they have the correct linkage on Windows. https://github.com/python/cpython/blob/0d30ae1a03102de07758650af9243fd31211325a/Lib/distutils/command/build_ext.py#L692 It generates the correct symbol in most cases, but if the filename contains unicode characters then it creates the wrong symbol, causing linkage errors. The behaviour should be updated to reflect PEP-489: https://www.python.org/dev/peps/pep-0489/#export-hook-name ---------- components: Distutils messages: 360555 nosy: da-woods, dstufft, eric.araujo priority: normal severity: normal status: open title: Distutils generates the wrong export symbol for unicode module names type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 08:57:19 2020 From: report at bugs.python.org (Julien Palard) Date: Thu, 23 Jan 2020 13:57:19 +0000 Subject: [New-bugs-announce] [issue39433] curses.setupterm can raise _curses.error Message-ID: <1579787839.53.0.046587762459.issue39433@roundup.psfhosted.org> New submission from Julien Palard : Currently the curses module can raise a `_curses.error` exception directly inheriting `Exception`. This makes it non-trivial for a newcomer to catch (they think they need a `from _curses import error`, or an `except Exception`, but in fact `error` is imported in `curses/__init__.py` via a `from _curses import *`). The `curses.error` is documented, but it's not documented that `curses.setupterm` can raise it, and what the user sees in the exception message is "_curses.error", not "curses.error". Questions:

- Should we create a properly named curse.CurseException, inheriting from _curses.error, so people can slowly migrate to use a "properly" named exception class?
- Should we document that setupterm can raise it?
- Should we introduce a dedicated sphinx directive to document exceptions?

I know the third question opens a whole field of work in the doc; it's only an anecdote, but a student of mine pointed out yesterday that the doc is *not* telling what `int()` raises when an invalid argument is given. It's obvious for "us", but not for everybody (yes I can teach it, yes he can just try it, but I'm not behind everyone on earth learning Python, some are learning alone, and I also want them to succeed).
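For anyone hitting this today, the exception can already be caught under its re-exported name; a small sketch, assuming a terminal type that terminfo does not know so that setupterm fails:

    import curses

    try:
        curses.setupterm(term="not-a-real-terminal-type")
    except curses.error as exc:  # curses.error is the same object as _curses.error
        print("setupterm failed:", exc)
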
---------- components: Library (Lib) messages: 360556 nosy: mdk priority: normal severity: normal status: open title: curses.setupterm can raise _curses.error type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 09:31:39 2020 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 23 Jan 2020 14:31:39 +0000 Subject: [New-bugs-announce] [issue39434] Add float __floordiv__ fast path Message-ID: <1579789899.85.0.741189319798.issue39434@roundup.psfhosted.org> New submission from Dong-hee Na :

./python.exe -m pyperf timeit "a = 3.5" "b = a // 2"

AS-IS: Mean +- std dev: 377 ns +- 4 ns
my patch: Mean +- std dev: 204 ns +- 2 ns

---------- assignee: corona10 messages: 360559 nosy: corona10 priority: normal severity: normal status: open title: Add float __floordiv__ fast path _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 12:12:19 2020 From: report at bugs.python.org (Guido Imperiale) Date: Thu, 23 Jan 2020 17:12:19 +0000 Subject: [New-bugs-announce] [issue39435] pickle: inconsistent arguments pickle.py vs _pickle.c vs docs Message-ID: <1579799539.42.0.511265830635.issue39435@roundup.psfhosted.org> New submission from Guido Imperiale : (1) In the documentation for loads(), the name for the first argument of loads is 'bytes_object'. The actual signature, both in pickle.py and _pickle.c, instead uses 'data'. (2) In the documentation and in pickle.py, the default value for the 'buffers' parameter is None. However, in _pickle.c, it is an empty tuple (); this is also reflected by running the interpreter:

In [1]: inspect.signature(pickle.loads).parameters['buffers']
Out[1]:

Thanks to @hauntsaninja for spotting these in https://github.com/python/typeshed/pull/3636 ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 360569 nosy: crusaderky, docs at python priority: normal severity: normal status: open title: pickle: inconsistent arguments pickle.py vs _pickle.c vs docs versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 12:56:35 2020 From: report at bugs.python.org (Petr Pisl) Date: Thu, 23 Jan 2020 17:56:35 +0000 Subject: [New-bugs-announce] [issue39436] Strange behavior of comparing int and float numbers Message-ID: <1579802195.37.0.228274815759.issue39436@roundup.psfhosted.org> New submission from Petr Pisl : When Python compares a float and an int created from the same int, they should be equal, as int(1) == float(1) is, but from 9007199254740993 on this is not true: int(9007199254740993) == float(9007199254740993) is not true. The same behavior occurs for bigger odd numbers. The even numbers are still equal.
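Some context for the pattern listed below: 9007199254740992 is 2**53, the limit up to which a 64-bit binary float can represent every integer exactly; above it the odd integers have no exact float representation, and Python compares int and float values exactly, so those comparisons are False. A quick check:

    >>> 2 ** 53
    9007199254740992
    >>> float(9007199254740993)  # rounds to the nearest representable double
    9007199254740992.0
    >>> 9007199254740993 == 9007199254740992.0
    False
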
So it looks like:

int(9007199254740989) == float(9007199254740989) # True
int(9007199254740990) == float(9007199254740990) # True
int(9007199254740991) == float(9007199254740991) # True
int(9007199254740992) == float(9007199254740992) # True
int(9007199254740993) == float(9007199254740993) # False
int(9007199254740994) == float(9007199254740994) # True
int(9007199254740995) == float(9007199254740995) # False
int(9007199254740996) == float(9007199254740996) # True
int(9007199254740997) == float(9007199254740997) # False
int(9007199254740998) == float(9007199254740998) # True

---------- messages: 360571 nosy: Petr Pisl priority: normal severity: normal status: open title: Strange behavior of comparing int and float numbers type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 13:12:47 2020 From: report at bugs.python.org (Dominick Johnson) Date: Thu, 23 Jan 2020 18:12:47 +0000 Subject: [New-bugs-announce] [issue39437] collections.Counter support multiplication Message-ID: <1579803167.84.0.997525929027.issue39437@roundup.psfhosted.org> New submission from Dominick Johnson : I would love to see collections.Counter support scalar multiplication. ## My Use Case: I am writing a Python script to manage hardware selections for custom-built computers. Users are able to select various options from a web interface, and the script compiles a list of parts that need to be purchased. Each option for user selection has a corresponding Counter object to keep track of what parts are needed for that selection. For example, the user option "i5 Processor" would have a corresponding counter that might look roughly like this: {"i5-xxxx": 1, "xxxxx Motherboard": 1}. A user option for "2 TB RAID 10" might have {"1 TB HDD": 4}. The script adds all the counters for the selected options together to produce a counter detailing the full parts list for the build. I'd like to add a feature to the script that allows the user to also specify a quantity for certain selections. (Maybe the user wants two 1 TB storage drives without any kind of RAID setup.) It would be really convenient to be able to simply multiply the selection's counter by its quantity before adding it to the main counter, e.g. `main_counter += secondary_counter * selection_quantity`. This seems like an extremely simple feature to add, and I would of course be willing to add it myself as long as the Python team is willing to accept a pull request from me adding this feature. However, I've never contributed to such a large project before, so I'm not sure what kind of procedures are in place to make it happen.
However, signal.signal(SIGINT, signal.getsignal(SIGINT)) throws a TypeError, even though it should logically be a no-op. This behavior is all implemented in Modules/signalmodule.c and seems to have been around since 2.7 at least. (Background: We observed this in Julia, which embeds Python in order to call Python code, where matplotlib code that temporarily set signal(SIGINT, SIG_DFL) led to an exception when it tried to restore the original signal handler. See https://github.com/JuliaPy/PyPlot.jl/issues/459) The C program below exhibits this problem [it sets its own SIGINT handler and then starts up Python to execute signal(SIGINT, getsignal)]. Running it results in "TypeError: signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object".

Recommended changes:
1) if Handlers[signalnum].func == NULL, then signal(signalnum, None) should be a no-op, returning None. This will allow signal(signalnum, getsignal(signalnum)) to always succeed (as a no-op).
2) if Handlers[signalnum].func == NULL, then signal(signalnum, SIG_DFL) should be a no-op, returning None. That is, the default signal handler should be the foreign signal handler if one is installed.
3) The signal-handling documentation should warn against overriding the signal handler for any signalnum where getsignal(signalnum) returns None (i.e. a foreign signal handler), since there is no way to restore the original signal handler afterwards. Anyway, you should be cautious about overriding signal handlers that don't come from Python.

Test code that throws a TypeError (compile and link with libpython):

#include <stdio.h>
#include <signal.h>
#include <Python.h>

void myhandler(int sig) {
    printf("got signal %d\n", sig);
}

int main(void) {
    signal(SIGINT, myhandler);
    Py_InitializeEx(0);
    PyRun_SimpleString("import signal\n"
                       "old_signal = signal.getsignal(signal.SIGINT)\n"
                       "signal.signal(signal.SIGINT, old_signal)\n"
                       "print(old_signal)\n");
    Py_Finalize();
    return 0;
}

---------- components: Library (Lib) messages: 360578 nosy: Steven G. Johnson priority: normal severity: normal status: open title: better handling of foreign signal handlers in signal.signal type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 16:04:50 2020 From: report at bugs.python.org (Adam Meily) Date: Thu, 23 Jan 2020 21:04:50 +0000 Subject: [New-bugs-announce] [issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect Message-ID: <1579813490.23.0.271185160871.issue39439@roundup.psfhosted.org> New submission from Adam Meily : I upgraded from Python 3.7.1 to 3.7.6 and began noticing a behavior that was breaking my code. My code detects if it's running in a virtualenv. This check worked in 3.7.1 but is broken in 3.7.6. From the documentation, sys.prefix and sys.exec_prefix should point to the virtualenv when one is active. However, I'm seeing that both of these constants are pointing to the system installation directory and not my virtualenv when I am in a multiprocessing child.
Here is an example output of a test application running in 3.7.6 (I've attached the test script to this ticket): Parent process ============================================= sys.prefix: C:\Users\user\project\venv sys.exec_prefix: C:\Users\user\project\venv sys.base_prefix: C:\Program Files\Python37 sys.base_exec_prefix: C:\Program Files\Python37 ============================================= Subprocess ============================================= sys.prefix: C:\Program Files\Python37 sys.exec_prefix: C:\Program Files\Python37 sys.base_prefix: C:\Program Files\Python37 sys.base_exec_prefix: C:\Program Files\Python37 ============================================= I would expect that sys.prefix and sys.exec_prefix to be identical in the parent and child process. I verified that this behavior is present in 3.7.5, 3.7.6, and 3.8.1. I am on a Windows 10 x64 system. Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32 ---------- components: Windows files: multiproc_venv_prefix.py messages: 360581 nosy: meilyadam, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Multiprocessing in Virtualenv: sys.prefix is incorrect versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48862/multiproc_venv_prefix.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 23 18:00:12 2020 From: report at bugs.python.org (edk) Date: Thu, 23 Jan 2020 23:00:12 +0000 Subject: [New-bugs-announce] [issue39440] Use PyNumber_InPlaceAdd in sum() for the second iteration onward Message-ID: <1579820412.98.0.670054717235.issue39440@roundup.psfhosted.org> New submission from edk : The C implementation of sum() contains this comment: /* It's tempting to use PyNumber_InPlaceAdd instead of PyNumber_Add here, to avoid quadratic running time when doing 'sum(list_of_lists, [])'. However, this would produce a change in behaviour: a snippet like empty = [] sum([[x] for x in range(10)], empty) would change the value of empty. */ But that doesn't hold after PyNumber_Add has been called once, because from that point onward the accumulator value is something we got from an __add__, and the caller can't know if we reuse it. The in-place version is substantially faster for some workloads-- the example in the comment is an obvious one, but the context in which I ran into this was using a sum of Counters in a one-liner a bit like this: sum((Counter({line.split("|", 3): len(line)}) for line in sys.stdin), Counter()) in which significant time seems to be spent adding the contents of the previous accumulator value to the new one. 
# before
; ./python -m timeit 'sum(([x] for x in range(1000)), [])'
500 loops, best of 5: 888 usec per loop

# after
; ./python -m timeit 'sum(([x] for x in range(1000)), [])'
5000 loops, best of 5: 65.3 usec per loop

; cat test_.py
from collections import Counter
import timeit
import random

data = [Counter({random.choice(['foo', 'bar', 'baz', 'qux']): random.randint(1,1000000)}) for _ in range(10000)]
print(min(timeit.repeat('sum(data, Counter())', 'from collections import Counter', number=100, globals={'data': data})))
print(min(timeit.repeat('reduce(Counter.__iadd__, data, Counter())', 'from collections import Counter; from functools import reduce', number=100, globals={'data': data})))

# before
; ./python test_.py
1.8981186050223187
0.7094596439856105

# after
; ./python test_.py
0.715508968976792
0.7050370009965263

---------- messages: 360583 nosy: edk priority: normal severity: normal status: open title: Use PyNumber_InPlaceAdd in sum() for the second iteration onward type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 04:50:19 2020 From: report at bugs.python.org (=?utf-8?q?Gu=C3=A9na=C3=ABl_Muller?=) Date: Fri, 24 Jan 2020 09:50:19 +0000 Subject: [New-bugs-announce] [issue39441] mimetypes.guess_extension unable to get non-lowercase mimetype Message-ID: <1579859419.07.0.145257790189.issue39441@roundup.psfhosted.org> New submission from Guénaël Muller : mimetypes.guess_extension and mimetypes.guess_all_extensions don't work correctly with non-lowercase mimetypes.

>>> import mimetypes
>>> mimetypes.guess_type('file.pptm')
('application/vnd.ms-powerpoint.presentation.macroEnabled.12', None)
>>> mimetypes.guess_extension("application/vnd.ms-powerpoint.presentation.macroEnabled.12")
>>>

This issue exists because we automatically convert the type to lowercase in guess_all_extensions, but we do not force types added to the map to be lowercase. ---------- messages: 360601 nosy: Inkhey priority: normal severity: normal status: open title: mimetypes.guess_extension unable to get non-lowercase mimetype type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 06:48:40 2020 From: report at bugs.python.org (=?utf-8?q?Wojciech_=C5=81opata?=) Date: Fri, 24 Jan 2020 11:48:40 +0000 Subject: [New-bugs-announce] [issue39442] from __future__ import annotations breaks dataclasses.Field.type Message-ID: <1579866520.75.0.652812484582.issue39442@roundup.psfhosted.org> New submission from Wojciech Łopata : I've checked this behaviour under Python 3.7.5 and 3.8.1.

```
from __future__ import annotations
from dataclasses import dataclass, fields

@dataclass
class Foo:
    x: int

print(fields(Foo)[0].type)
```

With annotations imported, the `type` field of the Field class becomes a string with the name of the type, and the program outputs 'int'. Without annotations, the `type` field of the Field class is a type, and the program outputs <class 'int'>. I found this out when using the dataclasses_serialization module.
The following code works fine when we remove the import of annotations:

```
from __future__ import annotations
from dataclasses import dataclass
from dataclasses_serialization.json import JSONSerializer

@dataclass
class Foo:
    x: int

JSONSerializer.deserialize(Foo, {'x': 42})
```

TypeError: issubclass() arg 1 must be a class ---------- components: Library (Lib) messages: 360611 nosy: lopek priority: normal severity: normal status: open title: from __future__ import annotations breaks dataclasses.Field.type type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 10:17:34 2020 From: report at bugs.python.org (Hugo Ricateau) Date: Fri, 24 Jan 2020 15:17:34 +0000 Subject: [New-bugs-announce] [issue39443] Inhomogeneous behaviour for descriptors in between the class-instance and metaclass-class pairs Message-ID: <1579879054.45.0.122838509696.issue39443@roundup.psfhosted.org> New submission from Hugo Ricateau : Assume one has defined the following descriptor:

```
class Descriptor:
    def __set__(self, instance, value):
        print('SET')
```

On the one hand, for the class-instance pair, the behaviour is as follows:

```
class FirstClass:
    descriptor = Descriptor()
    def __init__(self):
        self.descriptor = None

FirstClass().descriptor = None
```

results in "SET" being displayed twice; i.e. both assignments triggered the __set__ method of the descriptor. On the other hand, for the metaclass-class pair, the behaviour is the following:

```
class SecondClassMeta(type):
    descriptor = Descriptor()

class SecondClass(metaclass=SecondClassMeta):
    descriptor = None

SecondClass.descriptor = None
```

results in "SET" being displayed only once: the first assignment (the one in the class definition) did not trigger __set__. It looks to me like an undesirable asymmetry between the descriptor's behaviour in classes vs in metaclasses. Is that intended? If it is, I think it should be highlighted in the descriptors documentation. Best ---------- components: Interpreter Core messages: 360623 nosy: Hugo Ricateau priority: normal severity: normal status: open title: Inhomogeneous behaviour for descriptors in between the class-instance and metaclass-class pairs type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 12:01:37 2020 From: report at bugs.python.org (Steven DeRose) Date: Fri, 24 Jan 2020 17:01:37 +0000 Subject: [New-bugs-announce] [issue39444] Incorrect description of sorting for PrettyPrinter Message-ID: <1579885297.01.0.729190737587.issue39444@roundup.psfhosted.org> New submission from Steven DeRose : The doc for pprint.PrettyPrinter at https://docs.python.org/3/library/pprint.html says:

    If sort_dicts is true (the default), dictionaries will be formatted with their keys sorted, otherwise they will display in insertion order

I believe the insertion order is not even known by normal dicts (only by OrderedDict), so I think the last phrase must be wrong. If it's somehow correct, it deserves explaining....
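For reference, regular dicts have preserved insertion order as a language guarantee since Python 3.7, which is what the sort_dicts=False wording relies on; a quick check:

    >>> d = {"b": 2, "a": 1, "c": 3}
    >>> list(d)  # insertion order, not sorted
    ['b', 'a', 'c']
    >>> import pprint
    >>> pprint.pprint(d, sort_dicts=False)  # sort_dicts is new in 3.8
    {'b': 2, 'a': 1, 'c': 3}
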
---------- assignee: docs at python components: Documentation messages: 360630 nosy: TextGeek, docs at python priority: normal severity: normal status: open title: Incorrect description of sorting for PrettyPrinter versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 13:21:59 2020 From: report at bugs.python.org (=?utf-8?q?Rafael_Laboissi=C3=A8re?=) Date: Fri, 24 Jan 2020 18:21:59 +0000 Subject: [New-bugs-announce] [issue39445] h5py not playing nicely with subprocess and mpirun Message-ID: <1579890119.79.0.971867657807.issue39445@roundup.psfhosted.org> New submission from Rafael Laboissi?re : * Preamble: The problem reported hereafter possibly comes from the h5py module, which is not part of Python per se. This problem has been already reported to the h5py developers: https://github.com/h5py/h5py/issues/1467 and also against the Debian package python3-h5py: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946986 I apologize if this issue report is considered abusive. Please, feel free to close it, if it is the case. * The problem: The combination of "import h5py", "subprocess.Popen", and "mpirun" is yielding a weird result. Consider these two scripts: ############################################################# ### File name: bugtest-without-h5py.py import subprocess simulationProc = subprocess.Popen("mpirun", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (stdout, stderr) = simulationProc.communicate() returnCode = simulationProc.wait() print("stdout = ", stdout) print("stderr = ", stderr) print("return code = ", returnCode) ############################################################# ############################################################# ### File name: bugtest-with-h5py.py import subprocess import h5py simulationProc = subprocess.Popen("mpirun", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (stdout, stderr) = simulationProc.communicate() returnCode = simulationProc.wait() print("stdout = ", stdout) print("stderr = ", stderr) print("return code = ", returnCode) ############################################################# The only difference between them is the line containing "import h5py" in the second. Here is the result when running the first script: $ python3 bugtest-without-h5py.py stdout = b'' stderr = b'--------------------------------------------------------------------------\nmpirun could not find anything to do.\n\nIt is possible that you forgot to specify how many processes to run\nvia the "-np" argument.\n--------------------------------------------------------------------------\n' return code = 1 and here is the result for the second script: $ python3 bugtest-with-h5py.py stdout = b'' stderr = b'' return code = 1 It seems that, when h5py is imported, the mpirun command is not even launched by subprocess.Popen, even though there is noting in this call that is related to h5py. When "mpirun" is replaced by other commands (e.g. "date"), then the output is identical for both scripts, as it should be. As I wrote in the preamble, this is possibly a problem with h5py. I am reporting it here because the developers of the subprocess module may have an idea about the origin of the problem or give me a hint on how to debug the it. 
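Since a debugging hint is requested: one low-tech way to narrow this down is to check whether the h5py import changes anything the child process inherits (environment variables in particular, since mpirun is sensitive to MPI-related ones), and to relaunch the child with the pre-import environment. A sketch, assuming the same MPI-enabled h5py build is importable:

    import os
    import subprocess

    before = dict(os.environ)
    import h5py  # noqa: F401  (the import under suspicion)

    changed = {k: v for k, v in os.environ.items() if before.get(k) != v}
    removed = [k for k in before if k not in os.environ]
    print("changed:", changed, "removed:", removed)

    # Relaunch the child with the pre-import environment to see if the old behaviour returns.
    proc = subprocess.Popen("mpirun", shell=True, env=before,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    print("return code =", proc.returncode)
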
The tests were done on a Debian bullseye system with the following versions:

h5py 2.10.0 (compiled with MPI support)
Python 3.7.6

---------- components: Library (Lib) messages: 360638 nosy: Rafael Laboissière priority: normal severity: normal status: open title: h5py not playing nicely with subprocess and mpirun type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 14:27:50 2020 From: report at bugs.python.org (Michael Shields) Date: Fri, 24 Jan 2020 19:27:50 +0000 Subject: [New-bugs-announce] [issue39446] Documentation should reflect that all dicts are now ordered Message-ID: <1579894070.69.0.0178320617027.issue39446@roundup.psfhosted.org> New submission from Michael Shields : As of Python 3.7, dicts always preserve insertion order. This is mentioned briefly in the release notes, but it would also be helpful to mention it in the language reference, and in the discussion of collections.OrderedDict. ---------- assignee: docs at python components: Documentation messages: 360642 nosy: Michael Shields, docs at python priority: normal severity: normal status: open title: Documentation should reflect that all dicts are now ordered versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 15:24:36 2020 From: report at bugs.python.org (Daniel Kahn Gillmor) Date: Fri, 24 Jan 2020 20:24:36 +0000 Subject: [New-bugs-announce] [issue39447] imaplib documentation claims that commands return a string, but they return bytes Message-ID: <1579897476.32.0.1859965096.issue39447@roundup.psfhosted.org> New submission from Daniel Kahn Gillmor : The imaplib documentation says:

> Each command returns a tuple: (type, [data, ...]) where type is usually
> 'OK' or 'NO', and data is either the text from the command response, or
> mandated results from the command. Each data is either a string, or a
> tuple. If a tuple, then the first part is the header of the response,
> and the second part contains the data (ie: 'literal' value).

However, "Each data is either a string, or a tuple" does not appear to be correct. If the element of data is not a tuple, it appears to be a bytes object, not a string (because it is dealing with network streams of bytes internally). This is probably old documentation left over from Python 2, when strings and bytes were the same. ---------- components: email messages: 360652 nosy: barry, dkg, r.david.murray priority: normal severity: normal status: open title: imaplib documentation claims that commands return a string, but they return bytes type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 15:52:47 2020 From: report at bugs.python.org (Neil Schemenauer) Date: Fri, 24 Jan 2020 20:52:47 +0000 Subject: [New-bugs-announce] [issue39448] Add regen-frozen makefile target Message-ID: <1579899167.9.0.743060780992.issue39448@roundup.psfhosted.org> New submission from Neil Schemenauer : Updating the frozen module "__hello__" code inside Python/frozen.c is currently a manual process. That's a bit tedious since it adds some extra work in the case that bytecode changes are made. I've created a small script and a makefile target to automate the process.
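For background on what such a regeneration step has to produce: the frozen __hello__ module is a marshalled code object embedded in Python/frozen.c as a C byte array. A rough sketch of the core of such a script (not the actual Tools script added by the patch; the source text and array name mirror what frozen.c uses):

    import marshal

    source = 'initialized = True\nprint("Hello world!")\n'
    code = compile(source, "<frozen __hello__>", "exec")
    data = marshal.dumps(code)  # bytecode in the running interpreter's marshal format

    print("static unsigned char M___hello__[] = {")
    print("    " + ", ".join(str(b) for b in data) + ",")
    print("};")
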
---------- messages: 360655 nosy: nascheme priority: normal severity: normal stage: patch review status: open title: Add regen-frozen makefile target type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 18:13:50 2020 From: report at bugs.python.org (Arthur Fibich) Date: Fri, 24 Jan 2020 23:13:50 +0000 Subject: [New-bugs-announce] [issue39449] New Assignment operator Message-ID: <1579907630.13.0.450663882045.issue39449@roundup.psfhosted.org> New submission from Arthur Fibich : It's just a personal thing, but I kind of miss the following possibility in Python (and likewise every other language I know): Like a += 1 is the same as a = a + 1 I'd love to see a .= b() as an opportunity to express a = a.b() Possible usages are for example linked lists or other structures, where a class's attributes or methods are/return an object of the same type. ---------- components: Interpreter Core messages: 360659 nosy: Arthur Fibich priority: normal severity: normal status: open title: New Assignment operator type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 24 20:03:33 2020 From: report at bugs.python.org (Steve C) Date: Sat, 25 Jan 2020 01:03:33 +0000 Subject: [New-bugs-announce] [issue39450] unittest TestCase shortDescription does not strip whitespace Message-ID: <1579914213.48.0.14030240002.issue39450@roundup.psfhosted.org> New submission from Steve C : When running unit tests with the --verbose flag test descriptions are run using the first line of the test case's docstring. If the first character of the docstring is a newline, no description is printed. Examples: Current code expects docstrings to look like '''It should return blah blah This is a test... ''' Where the description starts on the first line. Some Python developers start the string on the next line. Example: ''' It should return blah blah This is a test... ''' Lib.unittest.case.TestCase:shortDescription should first strip the docstrip of beginning and trailing whitespace. ---------- components: Library (Lib) messages: 360666 nosy: Steve C2 priority: normal severity: normal status: open title: unittest TestCase shortDescription does not strip whitespace type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 25 01:30:33 2020 From: report at bugs.python.org (Dan Gass) Date: Sat, 25 Jan 2020 06:30:33 +0000 Subject: [New-bugs-announce] [issue39451] enum.Enum reference count leaks Message-ID: <1579933833.85.0.945475764886.issue39451@roundup.psfhosted.org> New submission from Dan Gass : Given (1) instantiation of an enumeration class with an invalid value (2) a try/except around the instantiation where the exception is ignored Then: An unneeded reference to the bad value is lost (as well as other values that I suspect are local variables within a participating method) When run, the attached sample script shows before and after reference counts which demonstrate the potential resource leaks. The sample script includes the output from running the script on Python version 3.7.5 within the module docstring. The root cause appears to be in the exception handling in the Enum.__new__ method (in the area where it calls the _missing_ hook). 
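The attached sample.py is not reproduced in this archive; a minimal stand-in for the measurement it describes could look like the following (whether the two counts differ depends on the interpreter version):

    import enum
    import sys

    class Color(enum.Enum):
        RED = 1

    bad_value = 123456789  # invalid member value, not an interned small int
    before = sys.getrefcount(bad_value)
    try:
        Color(bad_value)
    except ValueError:
        pass
    after = sys.getrefcount(bad_value)
    print("refcount before/after failed lookup:", before, after)
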
The attached sample script includes a simplified version of those methods that should help pinpoint the code in question and confirm the root cause. Not being an exception nitty-gritty expert, I have suspicions that users should be warned about using this pattern of exception handling. I suspect this pattern would be worth avoiding in the Enum implementation. I am willing to take a stab at submitting a patch for Enum. I hesitate slightly, not knowing if there are specific reasons for the code existing in its current form. Alternatively, I plan on being at PyCon 2020 for the sprints and could be available then to work on it. ---------- components: Library (Lib) files: sample.py messages: 360670 nosy: dan.gass at gmail.com priority: normal severity: normal status: open title: enum.Enum reference count leaks type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file48863/sample.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 25 09:00:07 2020 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sat, 25 Jan 2020 14:00:07 +0000 Subject: [New-bugs-announce] [issue39452] Improve the __main__ module documentation Message-ID: <1579960807.69.0.285569916248.issue39452@roundup.psfhosted.org> New submission from Géry : This PR will apply the following changes to the [`__main__` module documentation](https://docs.python.org/3.7/library/__main__.html):

- replace the phrase "run as script" with "run from the file system" (as used in the [`runpy`](https://docs.python.org/3/library/runpy.html) documentation), since "run as script" does not mean the intended `python foo.py` but `python -m foo` (cf. [PEP 338](https://www.python.org/dev/peps/pep-0338/));
- replace the phrase "run with `-m`" with "run from the module namespace" (as used in the [`runpy`](https://docs.python.org/3/library/runpy.html) documentation), since the module can be equivalently run with `runpy.run_module('foo')` instead of `python -m foo`;
- make the block comment [PEP 8](https://www.python.org/dev/peps/pep-0008/#comments)-compliant (located before the `if` block, capital initialised, period ended);
- add a missing case for which a package's \_\_main\_\_.py is executed (when the package is run from the file system: `python foo/`).

---------- assignee: docs at python components: Documentation messages: 360682 nosy: docs at python, maggyero priority: normal pull_requests: 17565 severity: normal status: open title: Improve the __main__ module documentation type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 25 09:22:26 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sat, 25 Jan 2020 14:22:26 +0000 Subject: [New-bugs-announce] [issue39453] Use-after-free in list contain Message-ID: <1579962146.28.0.764462410957.issue39453@roundup.psfhosted.org> New submission from Dong-hee Na :

class poc():
    def __eq__(self, other):
        l.clear()
        return NotImplemented

l = [poc(), poc(), poc()]
3 in l

[1] 2606 segmentation fault

sigh..
---------- assignee: corona10 messages: 360686 nosy: corona10, pablogsal, vstinner priority: normal severity: normal status: open title: Use-after-free in list contain type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 25 10:17:41 2020 From: report at bugs.python.org (YPf) Date: Sat, 25 Jan 2020 15:17:41 +0000 Subject: [New-bugs-announce] [issue39454] when \\u in byte_string , byte_string.decode('raw_unicode_escape') maybe has problem Message-ID: <1579965461.7.0.207168437905.issue39454@roundup.psfhosted.org> New submission from YPf :

>>> path=r'C:\Users\Administrator\Desktop'
>>> path.encode('raw_unicode_escape')
b'C:\\Users\\Administrator\\Desktop'
>>> path.encode('raw_unicode_escape').decode('raw_unicode_escape')
Traceback (most recent call last):
  File "", line 1, in
    path.encode('raw_unicode_escape').decode('raw_unicode_escape')
UnicodeDecodeError: 'rawunicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

---------- messages: 360691 nosy: yayiba1223 priority: normal severity: normal status: open title: when \\u in byte_string ,byte_string.decode('raw_unicode_escape') maybe has problem versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 25 15:27:32 2020 From: report at bugs.python.org (Aurora) Date: Sat, 25 Jan 2020 20:27:32 +0000 Subject: [New-bugs-announce] [issue39455] Update the documentation for linecache module Message-ID: <1579984052.28.0.0961825166353.issue39455@roundup.psfhosted.org> New submission from Aurora : Added the definitions for two undocumented functions. ---------- assignee: docs at python components: Documentation messages: 360709 nosy: docs at python, opensource-assist priority: normal pull_requests: 17572 severity: normal status: open title: Update the documentation for linecache module type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 26 03:12:12 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 26 Jan 2020 08:12:12 +0000 Subject: [New-bugs-announce] [issue39456] Make IDLE calltip tests work when there are no docstrings Message-ID: <1580026332.03.0.0808003249294.issue39456@roundup.psfhosted.org> New submission from Terry J. Reedy : IDLE should run, and calltips and tests should work, even when callables lack docstrings, either because none is defined or because they are suppressed (CPython compile switch for builtins, CPython runtime switch for user objects). I believe calltips work with or without docstrings present, but: One User class test is skipped with -OO. It should be changed to still work. Multiple builtin tests fail. #37501 proposes to skip them when compiled without docstrings. The right long-term solution for IDLE is to change the tests. My idea is to expand tiptest with 'out' replaced by the signature part, the processed docstring part, and the docstring object.
I want to try something like:

def tiptest(obj, docobj, sig, doc):
    out = sig
    out += doc if docobj.__doc__ is not None else ''
    self.assertEqual(get_spec(obj), out)

---------- assignee: terry.reedy components: IDLE messages: 360722 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: Make IDLE calltip tests work when there are no docstrings type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 26 14:17:09 2020 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sun, 26 Jan 2020 19:17:09 +0000 Subject: [New-bugs-announce] [issue39457] Add an autocommit property to sqlite3.Connection with truly PEP 249 compliant manual commit mode and migrate Message-ID: <1580066229.26.0.0229158792904.issue39457@roundup.psfhosted.org> New submission from Géry : In non-autocommit mode (manual commit mode), the sqlite3 database driver implicitly issues a BEGIN statement before each DML statement (INSERT, UPDATE, DELETE, REPLACE) not already in a database transaction, BUT NOT before DDL statements (CREATE, DROP) nor before DQL statements (SELECT) (cf. https://github.com/python/cpython/blob/master/Modules/_sqlite/cursor.c#L480):

```
/* We start a transaction implicitly before a DML statement.
   SELECT is the only exception. See #9924. */
if (self->connection->begin_statement && self->statement->is_dml) {
    if (sqlite3_get_autocommit(self->connection->db)) {
        result = _pysqlite_connection_begin(self->connection);
        if (!result) {
            goto error;
        }
        Py_DECREF(result);
    }
}
```

Like Mike Bayer explained in issue #9924, this is not what other database drivers do, and this is not PEP 249 compliant (Python Database API Specification v2.0), as its author Marc-André Lemburg explained (cf. https://mail.python.org/pipermail/db-sig/2010-September/005645.html):

> Randall Nortman wrote:
> # PEP 249 says that transactions end on commit() or rollback(), but it
> # doesn't explicitly state when transactions should begin, and there is
> # no begin() method.
>
> Transactions start implicitly after you connect and after you call .commit() or .rollback(). They are not started for each statement.
>
> # I think the implication is that transactions begin
> # on the first execute(), but that's not explicitly stated. At least
> # one driver, pysqlite2/sqlite3, does not start a transaction for a
> # SELECT statement. It waits for a DML statement (INSERT, UPDATE,
> # DELETE) before opening a transaction. Other drivers open transactions
> # on any statement, including SELECT.
> #
> # My question for the DB-SIG is: Can I call it a bug in pysqlite2 that
> # it does not open transactions on SELECT? Should the spec be amended
> # to make this explicit? Or are both behaviors acceptable, in which
> # case perhaps a begin() method needs to be added for when the user
> # wants control over opening transactions?
>
> I should probably add a note to PEP 249 about this.

Aymeric Augustin said in issue #10740:

> While you're there, it would be cool to provide "connection.autocommit = True" as an API to enable autocommit, because "connection.isolation_level = None" isn't a good API at all -- it's very obscure and has nothing to do with isolation level whatsoever.
So I suggest that we introduce a new autocommit property and use it to enable a truly PEP 249 compliant manual commit mode (that is to say with transactions starting implicitly after connect(), commit() and rollback() calls, allowing transactional DDL and DQL): ``` autocommit = True # enable the autocommit mode autocommit = False # disable the autocommit mode (enable the new PEP 249 manual commit mode) autocommit = None # fallback to the commit mode set by isolation_level ``` I also suggest that we use this new PEP 249 manual commit mode (with transactional DDL and DQL) by default and drop the old manual commit mode (without transactional DDL and DQL). We could use the following migration strategy: 1. During the deprecation period: - Add the new autocommit property with the value None by default, so that the old manual commit mode is still the default. - Add a deprecation warning for the value None of the autocommit property, in favor of the other values True and False. It will prompt users who enabled the autocommit mode with isolation_level = None to use autocommit = True instead, and users who disabled the autocommit mode (that is to say users who enabled the old manual commit mode) with isolation_level = DEFERRED/IMMEDIATE/EXCLUSIVE to use autocommit = False instead AND add to their code the potentially missing commit() calls required by the new PEP 249 manual commit mode. 2. After the deprecation period: - Set the value of the autocommit property to False by default, so that the new PEP 249 manual commit mode becomes the new default. - Remove the value None of the autocommit property and its deprecation warning. - Remove the value None of the isolation_level property, so that the old manual commit mode disappears. ---------- components: Library (Lib) messages: 360732 nosy: ghaering, lemburg, maggyero, r.david.murray, zzzeek priority: normal severity: normal status: open title: Add an autocommit property to sqlite3.Connection with truly PEP 249 compliant manual commit mode and migrate type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 26 17:30:15 2020 From: report at bugs.python.org (Gabriel Tardif) Date: Sun, 26 Jan 2020 22:30:15 +0000 Subject: [New-bugs-announce] [issue39458] Multiprocessing.Pool maxtasksperchild=1 doesn't work Message-ID: <1580077815.77.0.653421680761.issue39458@roundup.psfhosted.org> New submission from Gabriel Tardif : Hello. This bug is about the maxtasksperchild parameter in the Pool object constructor of the multiprocessing module. When you set processes = 1 in the Pool constructor, the maxtasksperchild value is doubled for an unknown reason, whatever the maxtasksperchild value. As mentioned in the documentation, once the process has reached the maxtasksperchild value it should rebuild itself in memory from the parent process. In the short Python example provided below, you can see the value of showedFiles of each process increasing over 1, which is not normal if the Pool constructor is set to processes = 1, maxtasksperchild = 1. The only running process should destroy / reset itself and so set its value 'showedFiles' to 0 first and 1 for each os.listdir() entry.
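Since the reader.py attached to issue39458 is not reproduced in this digest, the following is a minimal, self-contained sketch (worker and variable names are illustrative, not the reporter's code) that makes the per-child task count observable by recording worker PIDs; with processes=1 and maxtasksperchild=1 one would expect a fresh PID, i.e. a fresh worker process, for every submitted task:

    import multiprocessing
    import os

    def worker(i):
        # Return the PID so the parent can see which child ran each task.
        return (i, os.getpid())

    if __name__ == '__main__':
        with multiprocessing.Pool(processes=1, maxtasksperchild=1) as pool:
            # chunksize=1 keeps one task per chunk, so maxtasksperchild
            # counts individual tasks rather than whole chunks.
            results = pool.map(worker, range(4), chunksize=1)
        for i, pid in results:
            print(f"task {i} ran in worker {pid}")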
---------- assignee: docs at python components: Documentation files: reader.py messages: 360736 nosy: Gabriel Tardif, docs at python priority: normal severity: normal status: open title: Multiprocessing.Pool maxtasksperchild=1 doesn't work type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48864/reader.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 04:19:04 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Jan 2020 09:19:04 +0000 Subject: [New-bugs-announce] [issue39459] test_import: test_unwritable_module() fails on AMD64 Fedora Stable Clang Installed 3.x Message-ID: <1580116744.62.0.562593515869.issue39459@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Stable Clang Installed 3.x: https://buildbot.python.org/all/#/builders/127/builds/212 test_unwritable_module (test.test_import.CircularImportTests) ... ERROR ====================================================================== ERROR: test_unwritable_module (test.test_import.CircularImportTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-x86_64.clang-installed/build/target/lib/python3.9/test/test_import/__init__.py", line 1347, in test_unwritable_module import test.test_import.data.unwritable as unwritable ModuleNotFoundError: No module named 'test.test_import.data.unwritable' ---------- components: Tests messages: 360743 nosy: vstinner priority: normal severity: normal status: open title: test_import: test_unwritable_module() fails on AMD64 Fedora Stable Clang Installed 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 04:24:50 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Jan 2020 09:24:50 +0000 Subject: [New-bugs-announce] [issue39460] test_zipfile: test_add_file_after_2107() fails on s390x Fedora Rawhide 3.x Message-ID: <1580117090.75.0.217568835577.issue39460@roundup.psfhosted.org> New submission from STINNER Victor : s390x Fedora Rawhide 3.x: https://buildbot.python.org/all/#/builders/323/builds/6 ====================================================================== FAIL: test_add_file_after_2107 (test.test_zipfile.StoredTestsWithSourceFile) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.x.edelsohn-fedora-rawhide-z/build/Lib/test/test_zipfile.py", line 620, in test_add_file_after_2107 self.assertRaises(struct.error, zipfp.write, TESTFN) AssertionError: error not raised by write Test added in bpo-34097 by: commit a2fe1e52eb94c41d9ebce1ab284180d7b1faa2a4 Author: Marcel Plch Date: Thu Aug 2 15:04:52 2018 +0200 bpo-34097: Add support for zipping files older than 1980-01-01 (GH-8270) ZipFile can zip files older than 1980-01-01 and newer than 2107-12-31 using a new strict_timestamps parameter at the cost of setting the timestamp to the limit. see also the following fix: commit 7b41dbad78c6b03ca2f98800a92a1977d3946643 Author: Marcel Plch Date: Fri Aug 3 17:59:19 2018 +0200 bpo-34325: Skip zipfile test for large timestamps when filesystem don't support them. (GH-8656) When the filesystem doesn't support files with large timestamps, skip testing that such files can be zipped. 
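For context on the feature exercised by the failing test in issue39460, the sketch below shows the strict_timestamps flag added by bpo-34097 (the file names are made up, and giving a file a post-2107 mtime may itself fail on some platforms or filesystems, which is exactly what the bpo-34325 skip addresses):

    import os
    import time
    import zipfile

    SRC = "future.txt"
    with open(SRC, "w") as f:
        f.write("hello")

    # Try to give the file an mtime beyond 2107; this may raise on systems
    # whose time or filesystem APIs cannot represent such a date.
    ts = time.mktime((2108, 1, 1, 0, 0, 0, 0, 0, -1))
    os.utime(SRC, (ts, ts))

    # With the default strict_timestamps=True, packing the timestamp raises
    # struct.error (what test_add_file_after_2107 asserts); with False the
    # stored timestamp is clamped to 2107-12-31 instead of failing.
    with zipfile.ZipFile("clamped.zip", "w", strict_timestamps=False) as zf:
        zf.write(SRC)
        print(zf.getinfo(SRC).date_time)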
---------- components: Tests messages: 360747 nosy: vstinner priority: normal severity: normal status: open title: test_zipfile: test_add_file_after_2107() fails on s390x Fedora Rawhide 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 05:25:58 2020 From: report at bugs.python.org (Antony Lee) Date: Mon, 27 Jan 2020 10:25:58 +0000 Subject: [New-bugs-announce] [issue39461] os.environ does not support Path-like values, but subprocess(..., env=...) does Message-ID: <1580120758.52.0.283644207368.issue39461@roundup.psfhosted.org> New submission from Antony Lee : As of Py3.8/Linux: In [1]: os.environ["foo"] = Path("bar") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in ----> 1 os.environ["foo"] = Path("bar") ~/miniconda3/envs/default/lib/python3.8/os.py in __setitem__(self, key, value) 676 def __setitem__(self, key, value): 677 key = self.encodekey(key) --> 678 value = self.encodevalue(value) 679 self.putenv(key, value) 680 self._data[key] = value ~/miniconda3/envs/default/lib/python3.8/os.py in encode(value) 746 def encode(value): 747 if not isinstance(value, str): --> 748 raise TypeError("str expected, not %s" % type(value).__name__) 749 return value.encode(encoding, 'surrogateescape') 750 def decode(value): TypeError: str expected, not PosixPath In [2]: subprocess.run('echo "$foo"', env={**os.environ, "foo": Path("bar")}, shell=True) bar Out[2]: CompletedProcess(args='echo "$foo"', returncode=0) I guess it would be nice if it was possible to set os.environ entries to Path-like values, but most importantly, it seems a bit inconsistent that doing so is not possible on os.environ, but works when setting the `env` of a subprocess call. ---------- components: Library (Lib) messages: 360750 nosy: Antony.Lee priority: normal severity: normal status: open title: os.environ does not support Path-like values, but subprocess(..., env=...) does versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 06:09:27 2020 From: report at bugs.python.org (Marcel) Date: Mon, 27 Jan 2020 11:09:27 +0000 Subject: [New-bugs-announce] [issue39462] DataClass typo-unsafe attribute creation & unexpected behaviour (dataclasses) Message-ID: <1580123367.77.0.985368571212.issue39462@roundup.psfhosted.org> New submission from Marcel : After instantiation of a variable of a DataClass, it is possible to assign new attributes (that were not defined in defining the DataClass): data.new_attribute = 3.0 # does NOT raise Error! This gives unexpected behaviour: if you print the variable, then 'new_attribute' is not printed (since it is not in the definition of the DataClass). Assigning to an attribute is therefore not typo-safe (which users may expect from a DataClass). I would expect the behaviour of the DataClass be consistent and typo-safe. Attached is a file that demonstrates the bug (behaviour) and provides a 'SafeDataClass' by overriding the __setattr__ method. My suggestion would be to the adjust the library __setattr__ for the DataClass such that is will be typo-safe. 
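The attachment for issue39462 is not reproduced in this digest; the following is one possible sketch of the 'SafeDataClass' idea it describes, rejecting assignments to names that are not declared fields (the class and field names are illustrative):

    from dataclasses import dataclass, fields

    @dataclass
    class SafePoint:
        x: float = 0.0
        y: float = 0.0

        def __setattr__(self, name, value):
            # Only allow attributes that are declared dataclass fields.
            if name not in {f.name for f in fields(self)}:
                raise AttributeError(
                    f"{type(self).__name__} has no field {name!r}")
            super().__setattr__(name, value)

    p = SafePoint(1.0, 2.0)
    p.x = 5.0          # fine: declared field
    try:
        p.z = 3.0      # typo/unknown attribute is rejected instead of ignored
    except AttributeError as exc:
        print(exc)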
---------- components: Library (Lib) files: bug_demo_dataclass_typo_unsafe.py messages: 360752 nosy: marcelpvisser priority: normal severity: normal status: open title: DataClass typo-unsafe attribute creation & unexpected behaviour (dataclasses) type: behavior versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48865/bug_demo_dataclass_typo_unsafe.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 06:40:03 2020 From: report at bugs.python.org (Tal Ben-Nun) Date: Mon, 27 Jan 2020 11:40:03 +0000 Subject: [New-bugs-announce] [issue39463] ast.Constant, bytes, and ast.unparse Message-ID: <1580125203.43.0.757991294489.issue39463@roundup.psfhosted.org> New submission from Tal Ben-Nun : In Python 3.8, the "kind" field was introduced into the Constant AST class. This brings about a problem when unparsing the AST for various packages. First, it breaks backward compatibility for older code that creates ast.Num without specifying kind (which is optional anyway and does not exist in its fields). Second, since bytes are parsed as a Constant without a kind, one can create the following (valid as of now) AST and unparse it: ast.unparse(ast.Constant(value=b"bad", kind="u")) Getting "ub'bad'", which is invalid Python syntax AFAIU. Could something be done with the classes that extend ast.Constant and with bytes being a Constant with a "kind" of "b"? ---------- components: Library (Lib) messages: 360754 nosy: Tal Ben-Nun priority: normal severity: normal status: open title: ast.Constant, bytes, and ast.unparse versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 08:55:37 2020 From: report at bugs.python.org (=?utf-8?q?Jos=C3=A9_Manuel_Ferrer?=) Date: Mon, 27 Jan 2020 13:55:37 +0000 Subject: [New-bugs-announce] [issue39464] Allow translating argument error messages Message-ID: <1580133337.82.0.00677207154431.issue39464@roundup.psfhosted.org> New submission from Jos? Manuel Ferrer : Argument error messages display the untranslatable text 'argument ', which should be translatable to other languages, just like it's possible to do with the rest of the constructed error message. ---------- components: Library (Lib) messages: 360764 nosy: DjMorgul priority: normal pull_requests: 17578 severity: normal status: open title: Allow translating argument error messages type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 09:28:30 2020 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 27 Jan 2020 14:28:30 +0000 Subject: [New-bugs-announce] [issue39465] Design a subinterpreter friendly alternative to _Py_IDENTIFIER Message-ID: <1580135310.72.0.978238650594.issue39465@roundup.psfhosted.org> New submission from Nick Coghlan : Both https://github.com/python/cpython/pull/18066 (collections module) and https://github.com/python/cpython/pull/18032 (asyncio module) ran into the problem where porting them to multi-phase initialisation involves replacing their usage of the `_Py_IDENTIFIER` macro with some other mechanism. 
When _posixsubprocess was ported, the replacement was a relatively ad hoc combination of string interning and the interpreter-managed module-specific state: https://github.com/python/cpython/commit/5a7d2e11aaea2dd32878dc5c6b1aae8caf56cb44 I'm wondering if we may able to devise a comparable struct-field based system that replaces the `_Py_IDENTIFIER` local static variable declaration macro and the `Py_Id_` lookup convention with a combination like (using the posix subprocess module conversion as an example): // Identifier usage declaration (replaces _Py_IDENTIFIER) _Py_USE_CACHED_IDENTIFIER(_posixsubprocessstate(m), disable); // Identifier usage remains unchanged, but uses a regular local variable // rather than the static variable declared by _Py_IDENTIFIER result = _PyObject_CallMethodIdNoArgs(gc_module, &PyId_disable); And then the following additional state management macros would be needed to handle the string interning and reference counting: // Module state struct declaration typedef struct { // This would declare an initialised array of _Py_Identifier structs // under a name like __cached_identifiers__. The end of the array // would be indicated by a strict with "value" set to NULL. _Py_START_CACHED_IDENTIFIERS; _Py_CACHED_IDENTIFIER(disable); _Py_CACHED_IDENTIFIER(enable); _Py_CACHED_IDENTIFIER(isenabled); _Py_END_CACHED_IDENTIFIERS; ); } _posixsubprocessstate; // Module tp_traverse implementation _Py_VISIT_CACHED_IDENTIFIERS(_posixsubprocessstate(m)); // Module tp_clear implementation (also called by tp_free) _Py_CLEAR_CACHED_IDENTIFIERS(_posixsubprocessstate(m)); With the requirement to declare usage of the cached identifiers, they could be lazily initialized the same way the existing static variables are (even re-using the same struct declaration). Note: this is just a draft of one possible design, the intent of this issue is to highlight the fact that this issue has now come up multiple times, and it would be good to have a standard answer available. ---------- messages: 360766 nosy: eric.snow, ncoghlan, petr.viktorin, shihai1991 priority: normal severity: normal status: open title: Design a subinterpreter friendly alternative to _Py_IDENTIFIER _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 10:01:53 2020 From: report at bugs.python.org (chris) Date: Mon, 27 Jan 2020 15:01:53 +0000 Subject: [New-bugs-announce] [issue39466] Great Message-ID: <1580137313.79.0.716991951607.issue39466@roundup.psfhosted.org> New submission from chris : How do I start creating my own code, is there tutorial for this? https://logingit.com/amazon-from-a-to-z-www-atoz-amazon-work/ ---------- messages: 360768 nosy: Nadas priority: normal severity: normal status: open title: Great _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 13:14:11 2020 From: report at bugs.python.org (=?utf-8?b?aGVydsOp?=) Date: Mon, 27 Jan 2020 18:14:11 +0000 Subject: [New-bugs-announce] [issue39467] Allow to deprecate CLI arguments in argparse Message-ID: <1580148851.76.0.444481366951.issue39467@roundup.psfhosted.org> New submission from herv? : Today it's not possible to deprecate CLI arguments designed with argparse, it could be useful to introduce deprecation feature in argparse to allow developers to inform their apps's users when an argument is planed to be removed in the future. 
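While argparse has no built-in deprecation support as of issue39467, a workaround along these lines is already possible with a custom Action; the class name and option in this sketch are illustrative only:

    import argparse
    import warnings

    class DeprecatedStore(argparse.Action):
        """Store the value, but warn that the option will be removed."""
        def __call__(self, parser, namespace, values, option_string=None):
            warnings.warn(
                f"{option_string} is deprecated and will be removed in a "
                f"future release",
                DeprecationWarning,
                stacklevel=2,
            )
            setattr(namespace, self.dest, values)

    parser = argparse.ArgumentParser()
    parser.add_argument("--old-flag", action=DeprecatedStore)
    args = parser.parse_args(["--old-flag", "value"])
    print(args.old_flag)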
---------- components: Library (Lib) messages: 360786 nosy: 4383 priority: normal severity: normal status: open title: Allow to deprecate CLI arguments in argparse type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 14:46:52 2020 From: report at bugs.python.org (Aurora) Date: Mon, 27 Jan 2020 19:46:52 +0000 Subject: [New-bugs-announce] [issue39468] .python_history write permission improvements Message-ID: <1580154412.97.0.35138425226.issue39468@roundup.psfhosted.org> New submission from Aurora : On a typical Linux system, if you run 'chattr +i /home/user/.python_history', and then run python, then exit, the following error message will be printed out: Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/local/lib/python3.9/site.py", line 446, in write_history readline.write_history_file(history) OSError: [Errno -1] Unknown error -1 With a simple improvement, the site module can check and suggest the user to run 'chattr -i' on the .python_history file. Additionaly, I don't know if it's a good idea to automatically run 'chattr -i' in such a situation or not. ---------- components: Library (Lib) messages: 360790 nosy: opensource-assist priority: normal severity: normal status: open title: .python_history write permission improvements type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 15:58:08 2020 From: report at bugs.python.org (Jeff Edwards) Date: Mon, 27 Jan 2020 20:58:08 +0000 Subject: [New-bugs-announce] [issue39469] Support for relative home path in pyvenv.cfg Message-ID: <1580158688.69.0.521881077968.issue39469@roundup.psfhosted.org> New submission from Jeff Edwards : Currently, the interpreter only supports absolute paths for the 'home' directory in the pyvenv.cfg file. While this works when the interpreter is always installed at a fixed location, it impacts the portability of virtual environments and can make it notably more-difficult if multiple virtual environments are shipped with a shared interpreter and are intended to be portable and working in any directory. Many of these issues can be solved for if 'home' can use a directory relative to the directory of the pyvenv.cfg file. This is detected by the presence of a starting '.' in the value. A common use-case for this is that a script-based tool (e.g. black or supervisor) may be shipped with a larger portable application where they are intended to share the same interpreter (to save on deployment size), but may have conflicting dependencies. Since the application only depends on the executable scripts, those packages could be packaged into their own virtual environments with their dependencies. 
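To make the proposal in issue39469 concrete, here is a sketch of the resolution rule it describes (a leading '.' making 'home' relative to the directory holding pyvenv.cfg). This illustrates the proposed behaviour only, not anything the interpreter currently does, and the paths are invented:

    from pathlib import Path

    def resolve_home(pyvenv_cfg: Path, home_value: str) -> Path:
        # Proposed rule: a value starting with '.' is taken relative to the
        # directory that contains pyvenv.cfg; anything else is used as-is.
        if home_value.startswith("."):
            return (pyvenv_cfg.parent / home_value).resolve()
        return Path(home_value)

    cfg = Path("/apps/mytool/venvs/black/pyvenv.cfg")
    print(resolve_home(cfg, "./../../runtime/bin"))   # -> /apps/mytool/runtime/bin
    print(resolve_home(cfg, "/usr/local/bin"))        # absolute value unchanged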
---------- components: Interpreter Core messages: 360800 nosy: Jeff.Edwards priority: normal severity: normal status: open title: Support for relative home path in pyvenv.cfg type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 16:50:47 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Mon, 27 Jan 2020 21:50:47 +0000 Subject: [New-bugs-announce] [issue39470] Indicate that os.makedirs is equivalent to Path.mkdir Message-ID: <1580161847.67.0.0849845535184.issue39470@roundup.psfhosted.org> New submission from Joannah Nanjekye : :func:`os.makedirs` is equivalent to ``mkdir -p`` and :meth:`Path.mkdir()` when given an optional *exist_ok* argument. ---------- messages: 360808 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Indicate that os.makedirs is equivalent to Path.mkdir _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 17:14:19 2020 From: report at bugs.python.org (Sebastian Berg) Date: Mon, 27 Jan 2020 22:14:19 +0000 Subject: [New-bugs-announce] [issue39471] Meaning and clarification of PyBuffer_Release() Message-ID: <1580163259.11.0.807397733698.issue39471@roundup.psfhosted.org> New submission from Sebastian Berg : The current documentation of ``PyBuffer_Release()`` and the PEP is a bit fuzzy about what the function can and cannot do. When an object exposes the buffer interface, I believe it should always return a `view` (in NumPy speak) of its own data, i.e. the data exposed by the object is also owned by it directly. On the other hand the buffer view _itself_ has fields such as `strides`, etc. which may need allocating. In other words, I think `PyBuffer_Release()` should be documented to deallocate/invalidate the `Py_buffer`. But, it *must not* invalidate the actual memory it points to. If I copy all information out of the `Py_buffer` and then free it, the copy must still be valid. I think this is the intention, but it is not spelled out clear enough, it is also the reason for the behaviour of the "#s", etc. keyword argument parsers failing due to the code: if (pb != NULL && pb->bf_releasebuffer != NULL) { *errmsg = "read-only bytes-like object"; return -1; } which in turn currently means NumPy decides to _not_ implement bf_releasebuffer at all (leading to very ugly work arounds). I am happy to make a PR, if we can get to a point where everyone is absolutely certain that the above interpretation was always correct, we could clean up a lot of code inside NumPy as well! ---------- components: C API messages: 360809 nosy: seberg priority: normal severity: normal status: open title: Meaning and clarification of PyBuffer_Release() type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 27 18:34:59 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 27 Jan 2020 23:34:59 +0000 Subject: [New-bugs-announce] [issue39472] IDLE: improve handling of int entry in settings dialog Message-ID: <1580168099.75.0.367298873438.issue39472@roundup.psfhosted.org> New submission from Terry J. Reedy : Spinoff from #31414, about int entry fields. It claims: Note: a deeper problem is attaching a tracer that get called with each keystroke. 
Using a StringVar avoids the error when the entry is blanked, but currently allows non-ints to be saved. A better solution would be to not do the auto tracing, but use a IntVar and only call var_changed when the user 'leaves' the box, after checking for a count in a sane range. Verify claim and proposed solution. ---------- assignee: terry.reedy components: IDLE messages: 360821 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: improve handling of int entry in settings dialog type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 02:17:04 2020 From: report at bugs.python.org (DarkTrick) Date: Tue, 28 Jan 2020 07:17:04 +0000 Subject: [New-bugs-announce] [issue39473] Enable import behavior consistency option Message-ID: <1580195824.75.0.658735928246.issue39473@roundup.psfhosted.org> New submission from DarkTrick : Matter: ======== `import`s are not handled the same throughout the different ways of calling. Current situation: =================== The resolution of `import` is dependant on the way of calling the script. Three ways of calling a script are shown below: 1) python myscript.py # as script in cwd 2) python -m myscript # as module in cwd 3) python -m src.myscript # as module in subpackage of cwd Given the following situation: ./src | |---main.py | |_________________________________ | | from subdir.funca import funcA | | | funca() | | |_________________________________| | |---subdir | |--- __init__.py | |--- funca.py | |____________________________ | | from .funcb import funcB | | | def funcA(): | | | funcb() | | |____________________________| | | |--- funcb.py |____________________________ | def funcB(): | | print("funcB") | |____________________________| (A) The following call will succeed: `./src>python -m main` (B) The following call will succeed: `./src>python main.py` (C) The following call will succeed: `./src>python -m subdir.funca (D) The following call will not succeed: `./src>python ./subdir/funca.py (E) The following call will not succeed: `./src/subdir>python funca.py Suggestion: =========== Supply a functionality / an option that will allow all of A~E to succeed. S1) So it doesn't matter, if the script is called with or without the -m option (D) S2) So a toplevel script can refer to the package it's placed in by ".", even if called direclty (E) Implementation idea: ===================== Problem: The current import logic can't be change for compatibility reasons. And maybe it should not. Therefore I thought of an option within the python script like `Option treatAsModule` or `Option relImports` If such an option would be given, the python interpreter would handle imports differently. 
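The in-script 'Option' directive proposed in issue39473 does not currently exist; for comparison, a common workaround today for cases (D) and (E) is to fall back to an absolute import when the file runs as a plain script. A sketch of what funca.py could look like under that workaround (the path handling is illustrative and not part of the proposal):

    import os
    import sys

    if __package__ in (None, ""):
        # Run as a plain script: put the project root (the directory that
        # contains 'subdir') on sys.path and use an absolute import.
        sys.path.insert(
            0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
        from subdir.funcb import funcB
    else:
        # Run as part of the package (python -m subdir.funca, or imported).
        from .funcb import funcB

    def funcA():
        funcB()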
---------- components: Interpreter Core messages: 360839 nosy: bluelantern priority: normal severity: normal status: open title: Enable import behavior consistency option type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 03:41:48 2020 From: report at bugs.python.org (Batuhan) Date: Tue, 28 Jan 2020 08:41:48 +0000 Subject: [New-bugs-announce] [issue39474] col_offset for parenthesized expressions looks weird on attribute access Message-ID: <1580200908.73.0.745179365649.issue39474@roundup.psfhosted.org> New submission from Batuhan : Python 3.9.0a2+ (heads/master:65ecc390c1, Jan 26 2020, 15:39:11) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import ast >>> source = "(2+2).source" >>> ast.get_source_segment(source, ast.parse(source).body[0].value) '2+2).source' >>> source = "(2+2)[1]" >>> ast.get_source_segment(source, ast.parse(source).body[0].value) '2+2)[1]' I can prepare a patch to extend attribute's col_offset into parens if it is any if approved. ---------- components: Interpreter Core messages: 360844 nosy: BTaskaya, benjamin.peterson, pablogsal priority: normal severity: normal status: open title: col_offset for parenthesized expressions looks weird on attribute access versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 04:43:09 2020 From: report at bugs.python.org (nova) Date: Tue, 28 Jan 2020 09:43:09 +0000 Subject: [New-bugs-announce] [issue39475] window.getmaxyx() doesn't return updated height when window is resized Message-ID: <1580204589.93.0.986649856634.issue39475@roundup.psfhosted.org> New submission from nova : Package : python(v3.6.9) Severity: normal When a window object has been created using curses.newwin(), increasing the terminal size produces the KEY_RESIZE events, but getmaxyx() returns the previous terminal size. Only by decreasing the terminal size does it return the correct terminal dimensions. Attachment includes: 1. video demonstrating the effect Following is the script to reproduce the effect: import curses def init_curses(): curses.initscr() window = curses.newwin(curses.LINES - 1, curses.COLS, 0, 0) # window = curses.initscr() curses.raw() curses.noecho() curses.cbreak() window.keypad(True) return window def restore_terminal(window): curses.noraw() curses.nocbreak() window.keypad(False) curses.echo() curses.endwin() def main(): try: window = init_curses() resize_no = 0 maxy, maxx = window.getmaxyx() dimension_string = "resize_no: " + str(resize_no) + ". maxy: " + str(maxy) + "; maxx: " + str(maxx) + '\n' window.addstr(dimension_string) while True: ch = window.getch() window.clear() if ch == curses.KEY_RESIZE: resize_no += 1 maxy, maxx = window.getmaxyx() dimension_string = "resize_no: " + str(resize_no) + ". 
maxy: " + str(maxy) + "; maxx: " + str(maxx) + '\n' window.addstr(dimension_string) finally: restore_terminal(window) if __name__ == '__main__': main() ---------- components: Extension Modules files: bug_curses.mp4 messages: 360849 nosy: nova priority: normal severity: normal status: open title: window.getmaxyx() doesn't return updated height when window is resized type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48868/bug_curses.mp4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 07:01:38 2020 From: report at bugs.python.org (Sushma) Date: Tue, 28 Jan 2020 12:01:38 +0000 Subject: [New-bugs-announce] [issue39476] Not convinced with the dynamic data type assignment Message-ID: <1580212898.77.0.322385429118.issue39476@roundup.psfhosted.org> New submission from Sushma : Hi Please find below example and the compiler error, when i'm assigning value dynamically and when we comparing in "if" loop it is throwing compiler error. It should not throw error it should assign and act as int why it is thinking as string. Code Snippet: print("Hello World") num = input("Enter number ") print(num) if(num%3 == 0): num+=num print(num) Output in Console: Hello World Enter number 15 15 Traceback (most recent call last): File "main.py", line 15, in if(num%3 == 0): TypeError: not all arguments converted during string formatting ---------- messages: 360865 nosy: Sush0907 priority: normal severity: normal status: open title: Not convinced with the dynamic data type assignment type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 08:04:56 2020 From: report at bugs.python.org (=?utf-8?b?VG9tw6HFoSBKZXppb3Jza8O9?=) Date: Tue, 28 Jan 2020 13:04:56 +0000 Subject: [New-bugs-announce] [issue39477] multiprocessing Pool maxtasksperchild=0 raises exception with endless traceback Message-ID: <1580216696.5.0.546840504905.issue39477@roundup.psfhosted.org> New submission from Tom?? Jeziorsk? : The following code is expected to fail: import multiprocessing def f(x): return x if __name__ == '__main__': with multiprocessing.Pool(2, maxtasksperchild=0) as pool: pool.map(f, range(3)) since it uses a wrong value of the 'maxtasksperchild' parameter. I expect it to raise a ValueError but instead it starts to fill the stderr with practically endless traceback: Process ForkPoolWorker-2: Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.6/multiprocessing/pool.py", line 95, in worker assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0) Traceback (most recent call last): AssertionError File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.6/multiprocessing/pool.py", line 95, in worker assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0) AssertionError Process ForkPoolWorker-4: Process ForkPoolWorker-3: ... I don't think this is expected behavior. Tested with Python 3.6.9 on Ubuntu 18.04.3. 
---------- messages: 360872 nosy: jeyekomon priority: normal severity: normal status: open title: multiprocessing Pool maxtasksperchild=0 raises exception with endless traceback type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 08:28:13 2020 From: report at bugs.python.org (Ananthakrishnan A S) Date: Tue, 28 Jan 2020 13:28:13 +0000 Subject: [New-bugs-announce] [issue39478] can we add a median function Message-ID: <1580218093.29.0.978454951198.issue39478@roundup.psfhosted.org> New submission from Ananthakrishnan A S : add a function called 'median' that we can use like: list=[1,2,3,4,5,6,7,8,9] # declaring list median(list) #returns 5 ---------- components: Library (Lib) messages: 360873 nosy: Ananthakrishnan A S priority: normal severity: normal status: open title: can we add a median function type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 08:52:43 2020 From: report at bugs.python.org (Ananthakrishnan A S) Date: Tue, 28 Jan 2020 13:52:43 +0000 Subject: [New-bugs-announce] [issue39479] can we add a lcm and gcd function. Message-ID: <1580219563.58.0.839733743791.issue39479@roundup.psfhosted.org> New submission from Ananthakrishnan A S : can we add an lcm and gcd function that can work as: lcm(4,6) # returns 12 gcd(4,6) # returns 2 ---------- components: Library (Lib) messages: 360875 nosy: Ananthakrishnan A S priority: normal severity: normal status: open title: can we add a lcm and gcd function. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 12:07:14 2020 From: report at bugs.python.org (Ian Jackson) Date: Tue, 28 Jan 2020 17:07:14 +0000 Subject: [New-bugs-announce] [issue39480] referendum reference is needlessly annoying Message-ID: <1580231234.47.0.658599858582.issue39480@roundup.psfhosted.org> New submission from Ian Jackson : The section "Fancier Output Formatting" has the example below. This will remind many UK readers of the 2016 EU referendum. About half of those readers will be quite annoyed. This annoyance seems entirely avoidable; a different example which did not refer to politics would demonstrate the behaviour just as well. Changing this example would (in the words of the CoC) also show more empathy, and be more considerate towards, python contributors unhappy with recent political developments in the UK, without having to make anyone else upset in turn. 
>>> year = 2016 >>> event = 'Referendum' >>> f'Results of the {year} {event}' 'Results of the 2016 Referendum' >>> yes_votes = 42_572_654 >>> no_votes = 43_132_495 >>> percentage = yes_votes / (yes_votes + no_votes) >>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage) ' 42572654 YES votes 49.67%' ---------- assignee: docs at python components: Documentation messages: 360883 nosy: diziet, docs at python priority: normal severity: normal status: open title: referendum reference is needlessly annoying versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 12:16:57 2020 From: report at bugs.python.org (Guido van Rossum) Date: Tue, 28 Jan 2020 17:16:57 +0000 Subject: [New-bugs-announce] [issue39481] Implement PEP 585 (Type Hinting Generics In Standard Collections) Message-ID: <1580231817.93.0.47144630055.issue39481@roundup.psfhosted.org> New submission from Guido van Rossum : See PEP 585, which is still under review and may change in response to this work. https://www.python.org/dev/peps/pep-0585/ ---------- components: Interpreter Core messages: 360885 nosy: gvanrossum priority: normal severity: normal status: open title: Implement PEP 585 (Type Hinting Generics In Standard Collections) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 28 23:03:13 2020 From: report at bugs.python.org (Orion Poplawski) Date: Wed, 29 Jan 2020 04:03:13 +0000 Subject: [New-bugs-announce] [issue39482] Write 2to3 fixer for MutableMapping Message-ID: <1580270593.04.0.0568532174463.issue39482@roundup.psfhosted.org> New submission from Orion Poplawski : fail2ban currently relies on 2to3 for python 3 support. Build now fails with python 3.9: Traceback (most recent call last): File "/builddir/build/BUILD/fail2ban-0.11.1/bin/fail2ban-testcases", line 61, in <module> tests = gatherTests(regexps, opts) File "./fail2ban/tests/utils.py", line 373, in gatherTests from . import clientreadertestcase File "./fail2ban/tests/clientreadertestcase.py", line 34, in <module> from ..client.jailreader import JailReader, extractOptions, splitWithOptions File "./fail2ban/client/jailreader.py", line 34, in <module> from .actionreader import ActionReader File "./fail2ban/client/actionreader.py", line 31, in <module> from ..server.action import CommandAction File "./fail2ban/server/action.py", line 33, in <module> from collections import MutableMapping ImportError: cannot import name 'MutableMapping' from 'collections' (/usr/lib64/python3.9/collections/__init__.py) ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 360936 nosy: opoplawski priority: normal severity: normal status: open title: Write 2to3 fixer for MutableMapping type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 06:52:27 2020 From: report at bugs.python.org (=?utf-8?b?0JDQvdC00YDQtdC5INCa0LDQt9Cw0L3RhtC10LI=?=) Date: Wed, 29 Jan 2020 11:52:27 +0000 Subject: [New-bugs-announce] [issue39483] Proposial add loop parametr to run in asyncio Message-ID: <1580298747.78.0.00209746550488.issue39483@roundup.psfhosted.org> New submission from Андрей Казанцев : Sometimes you need to get the loop from another place and run a coroutine in it, for example when using the Telethon lib.
Example from this lib https://docs.telethon.dev/en/latest/basic/signing-in.html#id2 suggests using ```client.loop.run_until_complete``` but it's not handle errors like in run method. ---------- components: asyncio messages: 360957 nosy: asvetlov, heckad, yselivanov priority: normal severity: normal status: open title: Proposial add loop parametr to run in asyncio versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 08:02:45 2020 From: report at bugs.python.org (Vincent Michel) Date: Wed, 29 Jan 2020 13:02:45 +0000 Subject: [New-bugs-announce] [issue39484] time_ns() and time() cannot be compared on windows Message-ID: <1580302965.78.0.0683890137903.issue39484@roundup.psfhosted.org> New submission from Vincent Michel : On windows, the timestamps produced by time.time() often end up being equal because of the 15 ms resolution: >>> time.time(), time.time() (1580301469.6875124, 1580301469.6875124) The problem I noticed is that a value produced by time_ns() might end up being higher then a value produced time() even though time_ns() was called before: >>> a, b = time.time_ns(), time.time() >>> a, b (1580301619906185300, 1580301619.9061852) >>> a / 10**9 <= b False This break in causality can lead to very obscure bugs since timestamps are often compared to one another. Note that those timestamps can also come from non-python sources, i.e a C program using `GetSystemTimeAsFileTime`. This problem seems to be related to the conversion `_PyTime_AsSecondsDouble`: https://github.com/python/cpython/blob/f1c19031fd5f4cf6faad539e30796b42954527db/Python/pytime.c#L460-L461 # Float produced by `time.time()` >>> b.hex() '0x1.78c5f4cf9fef0p+30' # Basically what `_PyTime_AsSecondsDouble` does: >>> (float(a) / 10**9).hex() '0x1.78c5f4cf9fef0p+30' # What I would expect from `time.time()` >>> (a / 10**9).hex() '0x1.78c5f4cf9fef1p+30' However I don't know if this would be enough to fix all causality issues since, as Tim Peters noted in another thread: > Just noting for the record that a C double (time.time() result) isn't quite enough to hold a full-precision Windows time regardless (https://bugs.python.org/issue19738#msg204112) ---------- components: Library (Lib) messages: 360958 nosy: vxgmichel priority: normal severity: normal status: open title: time_ns() and time() cannot be compared on windows type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 08:49:55 2020 From: report at bugs.python.org (Carl Friedrich Bolz-Tereick) Date: Wed, 29 Jan 2020 13:49:55 +0000 Subject: [New-bugs-announce] [issue39485] Bug in mock running on PyPy3 Message-ID: <1580305795.64.0.514699285902.issue39485@roundup.psfhosted.org> New submission from Carl Friedrich Bolz-Tereick : One of the new-in-3.8 tests for unittest.mock, test_spec_has_descriptor_returning_function, is failing on PyPy. This exposes a bug in unittest.mock. The bug is most noticeable on PyPy, where it can be triggered by simply writing a slightly weird descriptor (CrazyDescriptor in the test). Getting it to trigger on CPython would be possible too, by implementing the same descriptor in C, but I did not actually do that. 
The relevant part of the test looks like this: from unittest.mock import create_autospec class CrazyDescriptor(object): def __get__(self, obj, type_): if obj is None: return lambda x: None class MyClass(object): some_attr = CrazyDescriptor() mock = create_autospec(MyClass) mock.some_attr(1) On CPython this just works, on PyPy it fails with: Traceback (most recent call last): File "x.py", line 13, in mock.some_attr(1) File "/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/unittest/mock.py", line 938, in __call__ _mock_self._mock_check_sig(*args, **kwargs) File "/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/unittest/mock.py", line 101, in checksig sig.bind(*args, **kwargs) File "/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/inspect.py", line 3034, in bind return args[0]._bind(args[1:], kwargs) File "/home/cfbolz/bin/.pyenv/versions/pypy3.6-7.2.0/lib-python/3/inspect.py", line 2955, in _bind raise TypeError('too many positional arguments') from None TypeError: too many positional arguments The reason for this problem is that mock deduced that MyClass.some_attr is a method on PyPy. Since mock thinks the lambda returned by the descriptor is a method, it adds self as an argument, which leads to the TypeError. Checking whether something is a method is done by _must_skip in mock.py. The relevant condition is this one: elif isinstance(getattr(result, '__get__', None), MethodWrapperTypes): # Normal method => skip if looked up on type # (if looked up on instance, self is already skipped) return is_type else: return False MethodWrapperTypes is defined as: MethodWrapperTypes = ( type(ANY.__eq__.__get__), ) which is just types.MethodType on PyPy, because there is no such thing as a method wrapper (the builtin types look pretty much like python-defined types in PyPy). On PyPy the condition isinstance(getattr...) is thus True for all descriptors! so as soon as result has a __get__, it counts as a method, even in the above case where it's a custom descriptor. Now even on CPython the condition makes no sense to me. It would be True for a C-defined version of CrazyDescriptor, it's just not a good way to check whether result is a method. I would propose to replace the condition with the much more straightforward check: elif isinstance(result, FunctionTypes): ... something is a method if it's a function on the class. Doing that change makes the test pass on PyPy, and doesn't introduce any test failures on CPython either. Will open a pull request. ---------- messages: 360961 nosy: Carl.Friedrich.Bolz, cjw296 priority: normal severity: normal status: open title: Bug in mock running on PyPy3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 09:49:09 2020 From: report at bugs.python.org (Carl Friedrich Bolz-Tereick) Date: Wed, 29 Jan 2020 14:49:09 +0000 Subject: [New-bugs-announce] [issue39486] bug in %-formatting in Python, related to escaped %-characters Message-ID: <1580309349.27.0.404972763443.issue39486@roundup.psfhosted.org> New submission from Carl Friedrich Bolz-Tereick : The following behaviour of %-formatting changed between Python3.6 and Python3.7, and is in my opinion a bug that was introduced. So far, it has been possible to add conversion flags to a conversion specifier in %-formatting, even if the conversion is '%' (meaning a literal % is emitted and no argument consumed). Eg this works in Python3.6: >>>> "%+%abc% %" % () '%abc%' The conversion flags '+' and ' ' are ignored. 
Was it discussed and documented anywhere that this is now an error? Because Python3.7 has the following strange behaviour instead: >>> "%+%abc% %" % () Traceback (most recent call last): File "", line 1, in TypeError: not enough arguments for format string That error message is just confusing, because the amount of arguments is not the problem here. If I pass a dict (thus making the number of arguments irrelevant) I get instead: >>> "%+%abc% %" % {} Traceback (most recent call last): File "", line 1, in ValueError: unsupported format character '%' (0x25) at index 2 (also a confusing message, because '%' is a perfectly fine format character) In my opinion this behaviour should either be reverted to how Python3.6 worked, or the new restrictions should be documented and the error messages improved. ---------- messages: 360965 nosy: Carl.Friedrich.Bolz priority: normal severity: normal status: open title: bug in %-formatting in Python, related to escaped %-characters versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 09:52:08 2020 From: report at bugs.python.org (hai shi) Date: Wed, 29 Jan 2020 14:52:08 +0000 Subject: [New-bugs-announce] [issue39487] Merge duplicated _Py_IDENTIFIER identifiers in C code Message-ID: <1580309528.55.0.307294388242.issue39487@roundup.psfhosted.org> New submission from hai shi : As stinner said in issue19514 those _Py_IDENTIFIER should be merged: ./Modules/_ctypes/_ctypes.c:1054: _Py_IDENTIFIER(_type_); ./Modules/_ctypes/_ctypes.c:1132: _Py_IDENTIFIER(_type_); ./Modules/_ctypes/_ctypes.c:1494: _Py_IDENTIFIER(_type_); ./Modules/_ctypes/_ctypes.c:2071: _Py_IDENTIFIER(_type_); ./Modules/_ctypes/_ctypes.c:1692: _Py_IDENTIFIER(_as_parameter_); ./Modules/_ctypes/_ctypes.c:1759: _Py_IDENTIFIER(_as_parameter_); ./Modules/_ctypes/_ctypes.c:1826: _Py_IDENTIFIER(_as_parameter_); ./Modules/_ctypes/_ctypes.c:2256: _Py_IDENTIFIER(_as_parameter_); ./Modules/_ctypes/_ctypes.c:2474: _Py_IDENTIFIER(_check_retval_); ./Modules/_ctypes/_ctypes.c:3280: _Py_IDENTIFIER(_check_retval_); ./Modules/_pickle.c:3560: _Py_IDENTIFIER(__name__); ./Modules/_pickle.c:3979: _Py_IDENTIFIER(__name__); ./Modules/_pickle.c:4042: _Py_IDENTIFIER(__new__); ./Modules/_pickle.c:5771: _Py_IDENTIFIER(__new__); ./Python/ceval.c:5058: _Py_IDENTIFIER(__name__); ./Python/ceval.c:5134: _Py_IDENTIFIER(__name__); ./Python/import.c:386: _Py_IDENTIFIER(__spec__); ./Python/import.c:1569: _Py_IDENTIFIER(__spec__); ./Python/import.c:1571: _Py_IDENTIFIER(__path__); ./Python/import.c:1933: _Py_IDENTIFIER(__path__); ./Python/_warnings.c:487: _Py_IDENTIFIER(__name__); ./Python/_warnings.c:821: _Py_IDENTIFIER(__name__); ./Python/_warnings.c:972: _Py_IDENTIFIER(__name__); ./Python/errors.c:1012: _Py_IDENTIFIER(__module__); ./Python/errors.c:1238: _Py_IDENTIFIER(__module__); ./Objects/bytesobject.c:546: _Py_IDENTIFIER(__bytes__); ./Objects/bytesobject.c:2488: _Py_IDENTIFIER(__bytes__); ./Objects/moduleobject.c:61: _Py_IDENTIFIER(__name__); ./Objects/moduleobject.c:488: _Py_IDENTIFIER(__name__); ./Objects/moduleobject.c:741: _Py_IDENTIFIER(__name__); ./Objects/moduleobject.c:62: _Py_IDENTIFIER(__doc__); ./Objects/moduleobject.c:461: _Py_IDENTIFIER(__doc__); ./Objects/moduleobject.c:65: _Py_IDENTIFIER(__spec__); ./Objects/moduleobject.c:744: _Py_IDENTIFIER(__spec__); ./Objects/iterobject.c:107: _Py_IDENTIFIER(iter); ./Objects/iterobject.c:247: _Py_IDENTIFIER(iter); ./Objects/rangeobject.c:760: _Py_IDENTIFIER(iter); 
./Objects/rangeobject.c:918: _Py_IDENTIFIER(iter); ./Objects/descrobject.c:574: _Py_IDENTIFIER(getattr); ./Objects/descrobject.c:1243: _Py_IDENTIFIER(getattr); ./Objects/odictobject.c:899: _Py_IDENTIFIER(items); ./Objects/odictobject.c:1378: _Py_IDENTIFIER(items); ./Objects/odictobject.c:2198: _Py_IDENTIFIER(items); ./Objects/fileobject.c:35: _Py_IDENTIFIER(open); ./Objects/fileobject.c:550: _Py_IDENTIFIER(open); ./Objects/typeobject.c:312: _Py_IDENTIFIER(mro); ./Objects/typeobject.c:1893: _Py_IDENTIFIER(mro); ---------- components: Interpreter Core messages: 360966 nosy: shihai1991 priority: normal severity: normal status: open title: Merge duplicated _Py_IDENTIFIER identifiers in C code type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 12:14:09 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 29 Jan 2020 17:14:09 +0000 Subject: [New-bugs-announce] [issue39488] test_largefile: TestSocketSendfile.test_it() uses too much disk space Message-ID: <1580318049.66.0.254124049267.issue39488@roundup.psfhosted.org> New submission from STINNER Victor : TestSocketSendfile.test_it() failed with "OSError: [Errno 28] No space left on device" on PPC64LE Fedora 3.x buildbot. It also caused troubles on "AMD64 Fedora Rawhide Clang 3.x" worker. If I recall correctly, it writes like 8 GB of real data, not just empty files made of holes. I suggest to either remove the test or to use way less disk space. https://buildbot.python.org/all/#builders/11/builds/259 Traceback (most recent call last): File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/test/test_largefile.py", line 161, in test_it shutil.copyfile(TESTFN, TESTFN2) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/shutil.py", line 270, in copyfile _fastcopy_sendfile(fsrc, fdst) File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/shutil.py", line 163, in _fastcopy_sendfile raise err from None File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/shutil.py", line 149, in _fastcopy_sendfile sent = os.sendfile(outfd, infd, offset, blocksize) OSError: [Errno 28] No space left on device: '@test_38097_tmp' -> '@test_38097_tmp2' ---------- components: Tests keywords: buildbot messages: 360976 nosy: giampaolo.rodola, vstinner priority: normal severity: normal status: open title: test_largefile: TestSocketSendfile.test_it() uses too much disk space versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 13:16:16 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 29 Jan 2020 18:16:16 +0000 Subject: [New-bugs-announce] [issue39489] Remove COUNT_ALLOCS special build Message-ID: <1580321776.5.0.0072987667606.issue39489@roundup.psfhosted.org> New submission from STINNER Victor : Python has a COUNT_ALLOCS special build which adds sys.getcounts() function and shows statistics on Python types at exit if -X showalloccount command line option is used. I never ever used this feature and I don't know anyone using it. But "#ifdef COUNT_ALLOCS" code is scattered all around the code. It requires maintenance. I propose to remove the code to ease maintenance. Attached PR shows how much code is requires to support this special build. 
There are now more advanced tools to have similar features: * tracemalloc can be used to track memory leaks * gc.get_objects() can be called frequently to compute statistics on Python types * There are many tools built around gc.get_objects() The previous large change related to COUNT_ALLOCS was done in Python 3.6 by bpo-23034: "The output of a special Python build with defined COUNT_ALLOCS, SHOW_ALLOC_COUNT or SHOW_TRACK_COUNT macros is now off by default. It can be re-enabled using the -X showalloccount option. It now outputs to stderr instead of stdout. (Contributed by Serhiy Storchaka in bpo-23034.)" https://docs.python.org/dev/whatsnew/3.6.html#changes-in-python-command-behavior ---------- components: Build messages: 360978 nosy: vstinner priority: normal severity: normal status: open title: Remove COUNT_ALLOCS special build versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 15:19:05 2020 From: report at bugs.python.org (CJ Long) Date: Wed, 29 Jan 2020 20:19:05 +0000 Subject: [New-bugs-announce] [issue39490] Python Uninstaller fails to clean up the old path variables when uninstalling Message-ID: <1580329145.11.0.630511319021.issue39490@roundup.psfhosted.org> New submission from CJ Long : I had Python 3.7 installed on my machine. However, I started having issues with it, so I uninstalled Python. However, when I reinstalled and attempted to run pip from Powershell, the old path was still in my PATH variable, and therefore I could not run pip. Python still works. ---------- components: Installation messages: 360984 nosy: brucelong priority: normal severity: normal status: open title: Python Uninstaller fails to clean up the old path variables when uninstalling type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 16:46:08 2020 From: report at bugs.python.org (Jakub Stasiak) Date: Wed, 29 Jan 2020 21:46:08 +0000 Subject: [New-bugs-announce] [issue39491] Import PEP 593 (Flexible function and variable annotations) support already implemented in typing_extensions Message-ID: <1580334368.43.0.748250162443.issue39491@roundup.psfhosted.org> Change by Jakub Stasiak : ---------- components: Library (Lib) nosy: jstasiak priority: normal severity: normal status: open title: Import PEP 593 (Flexible function and variable annotations) support already implemented in typing_extensions versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 18:39:28 2020 From: report at bugs.python.org (Pierre Glaser) Date: Wed, 29 Jan 2020 23:39:28 +0000 Subject: [New-bugs-announce] [issue39492] reference cycle affecting Pickler instances (Python3.8+) Message-ID: <1580341168.06.0.481762102037.issue39492@roundup.psfhosted.org> New submission from Pierre Glaser : The new Pickler reducer_override mechanism introduced in `Python3.8` generates a reference cycle: for optimization purposes, the pickler.reducer_override bound method is referenced in the reducer_override attribute of the Pickler's struct. Thus, until a gc.collect call is performed, both the Pickler and all the elements it pickled (as they are part of its memo), won't be collected. We should break this cycle at the end of the dump() method.
See reproducer below: ``` import threading import weakref import pickle import io class MyClass: pass my_object = MyClass() collect = threading.Event() _ = weakref.ref(my_object, lambda obj: collect.set()) # noqa class MyPickler(pickle.Pickler): def reducer_override(self, obj): return NotImplemented my_pickler = MyPickler(io.BytesIO()) my_pickler.dump(my_object) del my_object del my_pickler # import gc # gc.collect() for i in range(5): collected = collect.wait(timeout=0.1) if collected: print('my_object was successfully collected') break ``` ---------- components: Library (Lib) messages: 360995 nosy: pierreglaser, pitrou priority: normal severity: normal status: open title: reference cycle affecting Pickler instances (Python3.8+) type: resource usage versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 19:37:08 2020 From: report at bugs.python.org (Shantanu) Date: Thu, 30 Jan 2020 00:37:08 +0000 Subject: [New-bugs-announce] [issue39493] typing.py has an incorrect definition of closed Message-ID: <1580344628.01.0.169235800927.issue39493@roundup.psfhosted.org> New submission from Shantanu : Hello! typing.py has the following definition of `closed`: https://github.com/python/cpython/blob/master/Lib/typing.py#L1834 ``` @abstractmethod def closed(self) -> bool: pass ``` This is inconsistent with the behaviour at runtime: ``` In [17]: sys.version Out[17]: '3.8.1 (default, Jan 23 2020, 23:36:06) \n[Clang 11.0.0 (clang-1100.0.33.17)]' In [18]: f = open("test", "w") In [19]: f.closed Out[19]: False ``` It seems like the right thing to do is add an @property, as we do with e.g. `mode` and `name`. I'll submit a PR with this change. Note typeshed also types this as a property to indicate a read-only attribute. https://github.com/python/typeshed/blob/master/stdlib/3/typing.pyi#L459 First time filing a bug on BPO, thanks a lot in advance! ---------- messages: 360996 nosy: hauntsaninja priority: normal severity: normal status: open title: typing.py has an incorrect definition of closed versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 29 20:43:52 2020 From: report at bugs.python.org (Alex Henrie) Date: Thu, 30 Jan 2020 01:43:52 +0000 Subject: [New-bugs-announce] [issue39494] Extra null terminators in keyword arrays in sqlite module Message-ID: <1580348632.36.0.607657173065.issue39494@roundup.psfhosted.org> New submission from Alex Henrie : Modules/_sqlite/cursor.c currently has the following variable declaration: static char *kwlist[] = {"size", NULL, NULL}; The second null terminator is unnecessary and detrimental in that it makes the code harder to read and understand. Modules/_sqlite/module.c has two additional kwlist variables with the same problem. 
----------
components: Library (Lib)
messages: 361001
nosy: alex.henrie
priority: normal
severity: normal
status: open
title: Extra null terminators in keyword arrays in sqlite module
type: resource usage
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jan 29 21:11:47 2020
From: report at bugs.python.org (Shantanu)
Date: Thu, 30 Jan 2020 02:11:47 +0000
Subject: [New-bugs-announce] [issue39495] xml.etree.ElementTree.TreeBuilder.start differs between pure Python and C implementations
Message-ID: <1580350307.52.0.118414531826.issue39495@roundup.psfhosted.org>

New submission from Shantanu :

The C accelerated version of `xml.etree.ElementTree.TreeBuilder.start` has a default value for `attrs`, whereas the pure Python version does not.

```
In [41]: sys.version
Out[41]: '3.8.1 (default, Jan 23 2020, 23:36:06) \n[Clang 11.0.0 (clang-1100.0.33.17)]'

In [42]: import xml.etree.ElementTree

In [43]: inspect.signature(xml.etree.ElementTree.TreeBuilder.start)
Out[43]:

In [44]: from test.support import import_fresh_module

In [45]: pyElementTree = import_fresh_module('xml.etree.ElementTree', blocked=['_elementtree'])

In [46]: inspect.signature(pyElementTree.TreeBuilder.start)
Out[46]:
```

From PEP 399 (https://www.python.org/dev/peps/pep-0399/)

```
Acting as a drop-in replacement also dictates that no public API be provided in accelerated code that does not exist in the pure Python code. Without this requirement people could accidentally come to rely on a detail in the accelerated code which is not made available to other VMs that use the pure Python implementation.
```

----------
components: Library (Lib)
messages: 361002
nosy: hauntsaninja
priority: normal
severity: normal
status: open
title: xml.etree.ElementTree.TreeBuilder.start differs between pure Python and C implementations

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jan 29 22:18:40 2020
From: report at bugs.python.org (Alex Henrie)
Date: Thu, 30 Jan 2020 03:18:40 +0000
Subject: [New-bugs-announce] [issue39496] Inelegant loops in Modules/_sqlite/cursor.c
Message-ID: <1580354320.44.0.87062873054.issue39496@roundup.psfhosted.org>

New submission from Alex Henrie :

pysqlite_cursor_fetchall currently has the following bit of code:

    /* just make sure we enter the loop */
    row = (PyObject*)Py_None;

    while (row) {
        row = pysqlite_cursor_iternext(self);
        if (row) {
            PyList_Append(list, row);
            Py_DECREF(row);
        }
    }

This can and should be rewritten as a for loop to avoid the unnecessary initialization to Py_None and the redundant if statement inside the loop. pysqlite_cursor_fetchmany has the same problem.

----------
components: Library (Lib)
messages: 361006
nosy: alex.henrie
priority: normal
severity: normal
status: open
title: Inelegant loops in Modules/_sqlite/cursor.c
type: performance
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jan 29 23:19:15 2020
From: report at bugs.python.org (Alex Henrie)
Date: Thu, 30 Jan 2020 04:19:15 +0000
Subject: [New-bugs-announce] [issue39497] Unused variable script_str in pysqlite_cursor_executescript
Message-ID: <1580357955.97.0.537280563038.issue39497@roundup.psfhosted.org>

New submission from Alex Henrie :

The function pysqlite_cursor_executescript defines a variable called script_str, initializes it to NULL, and calls Py_XDECREF on it.
However, this variable has been unused since August 2007:
https://github.com/python/cpython/commit/6d21456137836b8acd551cf6a51999ad4ff10a91#diff-26f74db3527991715b482a5ed2603870L752

----------
components: Library (Lib)
messages: 361008
nosy: alex.henrie
priority: normal
severity: normal
status: open
title: Unused variable script_str in pysqlite_cursor_executescript
type: performance
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 00:14:32 2020
From: report at bugs.python.org (anthony shaw)
Date: Thu, 30 Jan 2020 05:14:32 +0000
Subject: [New-bugs-announce] [issue39498] Signpost security considerations in library
Message-ID: <1580361272.17.0.099312904306.issue39498@roundup.psfhosted.org>

New submission from anthony shaw :

Within the documentation, there are some really important security considerations for standard library modules, e.g. subprocess, ssl, pickle, and xml.

There is currently no "index" of these, so you have to go hunting for them. They're easter eggs within the docs. There isn't a unique admonition type either, so you have to search across many criteria.

In particular for security researchers, it would be useful to consolidate and signpost these security best practices in one index.

PR to follow.

----------
assignee: docs at python
components: Documentation
messages: 361009
nosy: anthonypjshaw, docs at python
priority: normal
severity: normal
status: open
title: Signpost security considerations in library
type: enhancement

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 02:41:48 2020
From: report at bugs.python.org (Oscar)
Date: Thu, 30 Jan 2020 07:41:48 +0000
Subject: [New-bugs-announce] [issue39499] ValueError using index on tuple is not showing the tuple value
Message-ID: <1580370108.47.0.271074829024.issue39499@roundup.psfhosted.org>

New submission from Oscar :

When trying to retrieve the index of an element that is not in a tuple, the ValueError message does not show the value being looked for, only the static message "tuple.index(x): x not in tuple":

    >>> b = (1, 2, 3, 4)
    >>> b.index(5)
    Traceback (most recent call last):
      File "", line 1, in
    ValueError: tuple.index(x): x not in tuple

I would expect something like what happens with lists, where the element (5 in this case) is shown in the ValueError:

    >>> a = [1, 2, 3, 4]
    >>> a.index(5)
    Traceback (most recent call last):
      File "", line 1, in
    ValueError: 5 is not in list

----------
messages: 361016
nosy: tuxskar
priority: normal
severity: normal
status: open
title: ValueError using index on tuple is not showing the tuple value
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 04:22:35 2020
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 30 Jan 2020 09:22:35 +0000
Subject: [New-bugs-announce] [issue39500] Document PyUnicode_IsIdentifier() function
Message-ID: <1580376155.12.0.83956121235.issue39500@roundup.psfhosted.org>

New submission from STINNER Victor :

The PyUnicode_IsIdentifier() function should be documented. The attached PR documents it.
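For context, PyUnicode_IsIdentifier() appears to be the C-level routine behind str.isidentifier(), so the behaviour being documented can be sketched from pure Python (a rough illustration, not taken from the attached PR):

```
# Sketch: str.isidentifier() exposes the same check at the Python level.
candidates = ["valid_name", "π", "1abc", "with space", "class", ""]
for text in candidates:
    # True when the string is a valid identifier per the language definition;
    # note that keywords such as "class" are still identifiers syntactically.
    print(repr(text), text.isidentifier())
```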
----------
components: C API, Unicode
messages: 361027
nosy: ezio.melotti, vstinner
priority: normal
severity: normal
status: open
title: Document PyUnicode_IsIdentifier() function
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 07:48:53 2020
From: report at bugs.python.org (Thomas Perret)
Date: Thu, 30 Jan 2020 12:48:53 +0000
Subject: [New-bugs-announce] [issue39501] gettext's default localedir does not match documentation
Message-ID: <1580388533.66.0.705426741296.issue39501@roundup.psfhosted.org>

New submission from Thomas Perret :

gettext's documentation (Doc/library/gettext.rst:724) states that the default locale directory is "sys.prefix/share/locale", but the code in the gettext module (Lib/gettext.py:63) uses "sys.base_prefix/share/locale".

----------
assignee: docs at python
components: Documentation, Library (Lib)
messages: 361054
nosy: docs at python, moht
priority: normal
severity: normal
status: open
title: gettext's default localedir does not match documentation
type: behavior

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 08:51:37 2020
From: report at bugs.python.org (EGuesnet)
Date: Thu, 30 Jan 2020 13:51:37 +0000
Subject: [New-bugs-announce] [issue39502] test_zipfile fails on AIX due to time.localtime
Message-ID: <1580392297.71.0.07109648974.issue39502@roundup.psfhosted.org>

New submission from EGuesnet :

Hi,

I have an error during regression tests with Python 3.8.1 on AIX 6.1 compiled with GCC 8.3. It occurs only on 64 bit. The test passes on 32 bit.

```
======================================================================
ERROR: test_add_file_after_2107 (test.test_zipfile.StoredTestsWithSourceFile)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/freeware/src/packages/BUILD/Python-3.8.1/64bit/Lib/test/test_zipfile.py", line 606, in test_add_file_after_2107
    self.assertRaises(struct.error, zipfp.write, TESTFN)
  File "/opt/freeware/src/packages/BUILD/Python-3.8.1/64bit/Lib/unittest/case.py", line 816, in assertRaises
    return context.handle('assertRaises', args, kwargs)
  File "/opt/freeware/src/packages/BUILD/Python-3.8.1/64bit/Lib/unittest/case.py", line 202, in handle
    callable_obj(*args, **kwargs)
  File "/opt/freeware/src/packages/BUILD/Python-3.8.1/64bit/Lib/zipfile.py", line 1739, in write
    zinfo = ZipInfo.from_file(filename, arcname,
  File "/opt/freeware/src/packages/BUILD/Python-3.8.1/64bit/Lib/zipfile.py", line 523, in from_file
    mtime = time.localtime(st.st_mtime)
OverflowError: localtime argument out of range
```

The PR associated with the new behavior is https://github.com/python/cpython/pull/12726 (new in Python 3.8). The code is AIX specific. Is the code 32 bit only, or maybe the test was not updated?

-----

I can reproduce the behavior as follows:

```
$ python3.8_32
Python 3.8.1 (default, Jan 27 2020, 11:34:59)
[GCC 8.3.0] on aix
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.localtime(4325562452)
Traceback (most recent call last):
  File "", line 1, in
OverflowError: timestamp out of range for platform time_t

$ python3.8_64
Python 3.8.1 (default, Jan 27 2020, 11:30:15)
[GCC 8.3.0] on aix
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.localtime(4325562452)
Traceback (most recent call last):
  File "", line 1, in
OverflowError: localtime argument out of range

$ python3.7_32
Python 3.7.4 (default, Jan 15 2020, 15:50:53)
[GCC 8.3.0] on aix6
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.localtime(4325562452)
Traceback (most recent call last):
  File "", line 1, in
OverflowError: timestamp out of range for platform time_t

$ python3.7_64
Python 3.7.4 (default, Jan 15 2020, 15:46:22)
[GCC 8.3.0] on aix6
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.localtime(4325562452)
time.struct_time(tm_year=2107, tm_mon=1, tm_mday=27, tm_hour=10, tm_min=7, tm_sec=32, tm_wday=3, tm_yday=27, tm_isdst=0)
```

----------
components: Tests
messages: 361058
nosy: EGuesnet
priority: normal
severity: normal
status: open
title: test_zipfile fails on AIX due to time.localtime
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 10:11:29 2020
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 30 Jan 2020 15:11:29 +0000
Subject: [New-bugs-announce] [issue39503] [security] Denial of service in urllib.request.AbstractBasicAuthHandler
Message-ID: <1580397089.41.0.564267118679.issue39503@roundup.psfhosted.org>

New submission from STINNER Victor :

Copy of an email received on the Python Security Response team, 9 days ago. I don't consider it worth having an embargo on this vulnerability, so I am making it public.

Hi there,

I believe I've found a denial-of-service (DoS) bug in urllib.request.AbstractBasicAuthHandler. To start, I'm operating on some background information from this document: HTTP authentication.

The bug itself is a ReDoS bug causing catastrophic backtracking. To reproduce the issue we can use the following code:

    from urllib.request import AbstractBasicAuthHandler

    auth_handler = AbstractBasicAuthHandler()
    auth_handler.http_error_auth_reqed(
        'www-authenticate',
        'unused',
        'unused',
        {
            'www-authenticate': 'Basic ' + ',' * 64 + ' ' + 'foo' + ' ' + 'realm'
        }
    )

The issue itself is in the following regular expression:

    rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+'
                    'realm=(["\']?)([^"\']*)\\2', re.I)

In particular, the (?:.*,)* portion. Since "." and "," overlap and there are nested quantifiers, we can cause catastrophic backtracking by repeating a comma. Note that since AbstractBasicAuthHandler is vulnerable, both HTTPBasicAuthHandler and ProxyBasicAuthHandler are vulnerable as well, because they call http_error_auth_reqed.

Building from the HTTP authentication document above, this means a server can send a specially crafted header along with an HTTP 401 or HTTP 407 and cause a DoS on the client. I won't speculate on the severity of the issue too much - you will surely understand the impact better than I will. The fact that this is client-side as opposed to server-side appears to reduce the severity; however, the fact that it's a security-sensitive context (HTTP authentication) may raise it.

One possible fix would be changing the rx expression to the following:

    rx = re.compile('(?:[^,]*,)*[ \t]*([^ \t]+)[ \t]+'
                    'realm=(["\']?)([^"\']*)\\2', re.I)

This removes the character overlap in the nested quantifier and thus negates the catastrophic backtracking.

Let me know if you have any questions or what the next steps are from here. Thanks for supporting Python security!
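For a rough sense of how quickly the quoted pattern blows up, it can be timed on growing inputs (a sketch; exact timings are machine-dependent, and the full 64-comma header from the reproducer effectively hangs):

```
import re
import time

# The vulnerable pattern quoted above, copied verbatim.
rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+'
                'realm=(["\']?)([^"\']*)\\2', re.I)

# Keep the comma counts small: each extra comma roughly doubles the work,
# because the engine retries every way of splitting the comma run.
for n in (12, 16, 20, 24):
    header = 'Basic ' + ',' * n + ' foo realm'  # no "realm=", so the match must fail
    start = time.perf_counter()
    rx.search(header)
    print(n, round(time.perf_counter() - start, 4))
```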
--
Matt Schwager

----------
components: Library (Lib)
messages: 361072
nosy: vstinner
priority: normal
severity: normal
status: open
title: [security] Denial of service in urllib.request.AbstractBasicAuthHandler
type: security
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jan 30 21:12:38 2020
From: report at bugs.python.org (Shantanu)
Date: Fri, 31 Jan 2020 02:12:38 +0000
Subject: [New-bugs-announce] [issue39504] inspect.signature throws RuntimeError on select.epoll.register
Message-ID: <1580436758.94.0.47641276989.issue39504@roundup.psfhosted.org>

New submission from Shantanu :

From the documentation of `inspect.signature` it seems we should never have a RuntimeError:

```
Raises ValueError if no signature can be provided, and TypeError if that type of object is not supported.
```

The easiest thing to do is just turn the RuntimeError into a ValueError... but I'll take a deeper look and see if I can actually fix this. Traceback below:

```
>>> import sys
>>> sys.version
'3.8.0 (default, Nov 14 2019, 22:29:45) \n[GCC 5.4.0 20160609]'
>>> import inspect
>>> import select
>>> inspect.signature(select.epoll.register)
Traceback (most recent call last):
  File "/usr/lib/python3.8/inspect.py", line 2004, in wrap_value
    value = eval(s, module_dict)
  File "", line 1, in
NameError: name 'EPOLLIN' is not defined

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/inspect.py", line 2007, in wrap_value
    value = eval(s, sys_module_dict)
  File "", line 1, in
NameError: name 'EPOLLIN' is not defined

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python3.8/inspect.py", line 3093, in signature
    return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
  File "/usr/lib/python3.8/inspect.py", line 2842, in from_callable
    return _signature_from_callable(obj, sigcls=cls,
  File "/usr/lib/python3.8/inspect.py", line 2296, in _signature_from_callable
    return _signature_from_builtin(sigcls, obj,
  File "/usr/lib/python3.8/inspect.py", line 2109, in _signature_from_builtin
    return _signature_fromstr(cls, func, s, skip_bound_arg)
  File "/usr/lib/python3.8/inspect.py", line 2057, in _signature_fromstr
    p(name, default)
  File "/usr/lib/python3.8/inspect.py", line 2039, in p
    default_node = RewriteSymbolics().visit(default_node)
  File "/usr/lib/python3.8/ast.py", line 360, in visit
    return visitor(node)
  File "/usr/lib/python3.8/ast.py", line 445, in generic_visit
    new_node = self.visit(old_value)
  File "/usr/lib/python3.8/ast.py", line 360, in visit
    return visitor(node)
  File "/usr/lib/python3.8/ast.py", line 445, in generic_visit
    new_node = self.visit(old_value)
  File "/usr/lib/python3.8/ast.py", line 360, in visit
    return visitor(node)
  File "/usr/lib/python3.8/inspect.py", line 2031, in visit_Name
    return wrap_value(node.id)
  File "/usr/lib/python3.8/inspect.py", line 2009, in wrap_value
    raise RuntimeError()
RuntimeError
```

----------
components: Library (Lib)
messages: 361089
nosy: hauntsaninja
priority: normal
severity: normal
status: open
title: inspect.signature throws RuntimeError on select.epoll.register

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 03:24:20 2020
From: report at bugs.python.org (Fuzheng Duan)
Date: Fri, 31 Jan 2020 08:24:20 +0000
Subject: [New-bugs-announce] [issue39505] redundant ‘/’ in $env:VIRTUAL_ENV when use venv in powershell
Message-ID: <1580459060.25.0.800819724263.issue39505@roundup.psfhosted.org>

New submission from Fuzheng Duan :

When Windows users run "python -m venv ENV_DIR", a Python virtual environment is created in ENV_DIR. PowerShell users run ENV_DIR\Scripts\Activate.ps1 to activate the virtual environment. In PowerShell, an environment variable, "$env:VIRTUAL_ENV", is set and is used by many tools to detect an activated venv. In bash, it is "$VIRTUAL_ENV".

In Python 3.8 and 3.9, $env:VIRTUAL_ENV has a redundant trailing '/', for example:

    PS C:\Users\Test> python -m venv test_venv
    PS C:\Users\Test> .\test_venv\Scripts\Activate.ps1
    PS C:\Users\Test> $env:VIRTUAL_ENV
    C:\Users\Test\test_venv/

Using Python 3.7, or using virtualenv with Python 3.8 or 3.9, or on Linux, there is no such trailing '/'.

This '/' matters because many tools use this environment variable; for example, oh-my-posh will take "test_venv/" as the virtual environment name rather than "test_venv" (although the default prompt set by venv's Activate.ps1 itself is correct). And from the perspective of semantics and consistency with other platforms, the '/' is redundant.

----------
components: Library (Lib)
messages: 361094
nosy: Schwarzichet
priority: normal
severity: normal
status: open
title: redundant ‘/’ in $env:VIRTUAL_ENV when use venv in powershell
type: behavior
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 05:30:54 2020
From: report at bugs.python.org (Gabriele Tornetta)
Date: Fri, 31 Jan 2020 10:30:54 +0000
Subject: [New-bugs-announce] [issue39506] operator |= on sets does not behave like the update method
Message-ID: <1580466654.94.0.209518584933.issue39506@roundup.psfhosted.org>

New submission from Gabriele Tornetta :

    def outer():
        a=set()
        def inner():
            a |= set(["A"])
        inner()
        return a

    print(outer())

    Traceback (most recent call last):
      File "main.py", line 8, in
        print(outer())
      File "main.py", line 5, in outer
        inner()
      File "main.py", line 4, in inner
        a |= set(["A"])
    UnboundLocalError: local variable 'a' referenced before assignment

However, the update method works as expected:

    def outer():
        a=set()
        def inner():
            a.update(set(["A"]))
        inner()
        return a

    print(outer())

    {'A'}

----------
components: Interpreter Core
messages: 361097
nosy: Gabriele Tornetta
priority: normal
severity: normal
status: open
title: operator |= on sets does not behave like the update method
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 08:32:41 2020
From: report at bugs.python.org (Ross Rhodes)
Date: Fri, 31 Jan 2020 13:32:41 +0000
Subject: [New-bugs-announce] [issue39507] http library missing HTTP status code 418 "I'm a teapot"
Message-ID: <1580477561.46.0.283786859287.issue39507@roundup.psfhosted.org>

New submission from Ross Rhodes :

The http library is missing HTTP status code 418 "I'm a teapot".
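A quick way to see the gap being described (a sketch run against Python 3.8; nearby codes resolve, 418 does not):

```
from http import HTTPStatus

# On 3.8 the lookup fails because 418 has no HTTPStatus member yet.
try:
    print(HTTPStatus(418))
except ValueError as exc:
    print("unsupported:", exc)  # e.g. "418 is not a valid HTTPStatus"

# Neighbouring codes are present, which is what makes the gap surprising.
print(HTTPStatus(417))  # HTTPStatus.EXPECTATION_FAILED
print(HTTPStatus(421))  # HTTPStatus.MISDIRECTED_REQUEST
```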
----------
messages: 361106
nosy: trrhodes
priority: normal
severity: normal
status: open
title: http library missing HTTP status code 418 "I'm a teapot"

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 09:48:02 2020
From: report at bugs.python.org (haim)
Date: Fri, 31 Jan 2020 14:48:02 +0000
Subject: [New-bugs-announce] [issue39508] no module curses error although i downloaded the module - windows 10
Message-ID: <1580482082.1.0.922196738704.issue39508@roundup.psfhosted.org>

New submission from haim :

Hi, when I run my code I get the error "ModuleNotFoundError: No module named '_curses'", although I downloaded the windows-curses 2.1.0 library. I have attached a picture. Thanks.

----------
files: p204.PNG
messages: 361112
nosy: haim986
priority: normal
severity: normal
status: open
title: no module curses error although i downloaded the module - windows 10
type: compile error
versions: Python 3.7
Added file: https://bugs.python.org/file48874/p204.PNG

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 09:50:49 2020
From: report at bugs.python.org (Dong-hee Na)
Date: Fri, 31 Jan 2020 14:50:49 +0000
Subject: [New-bugs-announce] [issue39509] Update HTTP status code to follow IANA
Message-ID: <1580482249.3.0.655297807601.issue39509@roundup.psfhosted.org>

New submission from Dong-hee Na :

Status codes 103 and 425 are missing.
https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml

----------
components: Library (Lib)
messages: 361114
nosy: corona10, martin.panter
priority: normal
severity: normal
status: open
title: Update HTTP status code to follow IANA
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 10:19:52 2020
From: report at bugs.python.org (Philipp Gesang)
Date: Fri, 31 Jan 2020 15:19:52 +0000
Subject: [New-bugs-announce] [issue39510] use-after-free in BufferedReader.readinto()
Message-ID: <1580483992.12.0.718198144205.issue39510@roundup.psfhosted.org>

New submission from Philipp Gesang :

    reader = open ("/dev/zero", "rb")
    _void = reader.read (42)
    reader.close ()
    reader.readinto (bytearray (42))  ### BANG!

Bisected to commit dc469454ec. PR on Github to follow.

----------
messages: 361119
nosy: phg
priority: normal
severity: normal
status: open
title: use-after-free in BufferedReader.readinto()

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 10:33:35 2020
From: report at bugs.python.org (STINNER Victor)
Date: Fri, 31 Jan 2020 15:33:35 +0000
Subject: [New-bugs-announce] [issue39511] [subinterpreters] Per-interpreter singletons (None, True, False, etc.)
Message-ID: <1580484815.27.0.407070570821.issue39511@roundup.psfhosted.org>

New submission from STINNER Victor :

The long-term goal of PEP 554 is to run two Python interpreters in parallel. To achieve this goal, no object must be shared between two interpreters. See for example my article "Pass the Python thread state explicitly" which gives a longer rationale:
https://vstinner.github.io/cpython-pass-tstate.html

In bpo-38858, I modified Objects/longobject.c to have per-interpreter small integer singletons: commit 630c8df5cf126594f8c1c4579c1888ca80a29d59.
This issue is about other singletons like None or Py_True which are currently shared between two interpreters.

I propose to add new functions. Example for None:

* Py_GetNone(): return a *borrowed* reference to the None singleton (similar to the existing Py_None macro)
* Py_GetNoneRef(): return a *strong* reference to the None singleton (similar to "Py_INCREF(Py_None); return Py_None;" and the Py_RETURN_NONE macro)

And add a PyInterpreterState.none field: a strong reference to the per-interpreter None object.

We should do that for each of these singletons:

* None (Py_None)
* True (Py_True)
* False (Py_False)
* Ellipsis (Py_Ellipsis)

GIL issue
=========

Py_GetNone() would look like:

    PyObject* Py_GetNone(void)
    {
        return _PyThreadState_GET()->interp->none;
    }

Problem: _PyThreadState_GET() returns NULL if the caller function doesn't hold the GIL. Using the Python C API when the GIL is not held is a violation of the API: it is not supported. But it worked previously.

One solution is to fail with an assertion error (abort the process) in debug mode, and let Python crash in release mode.

Another option is to only fail with an assertion error in debug mode in Python 3.9. In Python 3.9, Py_GetNone() would use the PyGILState_GetThisThreadState() function, which works even when the GIL is released. In Python 3.10, we would switch to _PyThreadState_GET() and so crash in release mode.

One concrete example of such an issue can be found in the multiprocessing C code, in semlock_acquire():

    Py_BEGIN_ALLOW_THREADS
    if (timeout_obj == Py_None) {
        res = sem_wait(self->handle);
    }
    else {
        res = sem_timedwait(self->handle, &deadline);
    }
    Py_END_ALLOW_THREADS

Py_None is accessed when the GIL is released.

----------
components: Interpreter Core
messages: 361121
nosy: vstinner
priority: normal
severity: normal
status: open
title: [subinterpreters] Per-interpreter singletons (None, True, False, etc.)
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 13:26:08 2020
From: report at bugs.python.org (Malte Forkel)
Date: Fri, 31 Jan 2020 18:26:08 +0000
Subject: [New-bugs-announce] [issue39512] expat parser not xml 1.1 (breaks xmlrpclib) - still
Message-ID: <1580495168.43.0.888386577415.issue39512@roundup.psfhosted.org>

New submission from Malte Forkel :

xmlrpc uses expat, which is not XML 1.1 compliant. Therefore, when transferring text, some characters which are valid according to the XML-RPC specification (http://xmlrpc.com/spec.md) will cause expat to raise xml.parsers.expat.ExpatError: not well-formed (invalid token).

Issue 11804 (https://bugs.python.org/issue11804), which reported this problem, was closed almost nine years ago, referencing an expat bug report for XML 1.1 support. That bug report is still open and there is no current plan to support XML 1.1 in expat (https://github.com/libexpat/libexpat/issues/378#issuecomment-578914067).

I would like to suggest replacing expat as the default parser in xmlrpc, or at least making it easier to override the default (see https://bugs.python.org/issue6701).
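The failure mode is easy to hit from the client API alone; a minimal sketch (assuming a control character such as a form feed, which xmlrpc.client serializes as-is but expat's XML 1.0 parser then rejects):

```
import xmlrpc.client

# A control character (form feed, \x0c) inside an otherwise ordinary string.
payload = xmlrpc.client.dumps(("before\x0cafter",), methodname="echo")
print(payload)  # the raw control character is embedded verbatim in <string>

try:
    xmlrpc.client.loads(payload)
except Exception as exc:  # xml.parsers.expat.ExpatError on CPython
    print(type(exc).__name__, ":", exc)
```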
----------
components: XML
messages: 361124
nosy: mforkel
priority: normal
severity: normal
status: open
title: expat parser not xml 1.1 (breaks xmlrpclib) - still

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 14:15:26 2020
From: report at bugs.python.org (Sandeep)
Date: Fri, 31 Jan 2020 19:15:26 +0000
Subject: [New-bugs-announce] [issue39513] NameError: name 'open' is not defined
Message-ID: <1580498126.56.0.466830375597.issue39513@roundup.psfhosted.org>

Change by Sandeep :

----------
components: Library (Lib)
nosy: Sandeep
priority: normal
severity: normal
status: open
title: NameError: name 'open' is not defined
type: crash
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 16:36:43 2020
From: report at bugs.python.org (Carlos ESTEVES)
Date: Fri, 31 Jan 2020 21:36:43 +0000
Subject: [New-bugs-announce] [issue39514] http://sphinx.pocoo.org/
Message-ID: <1580506603.75.0.50119765059.issue39514@roundup.psfhosted.org>

New submission from Carlos ESTEVES :

Hi,

The "Sphinx" link is broken on the webpage: https://docs.python.org/3/

See: Sphinx

Thank you

----------
assignee: docs at python
components: Documentation
messages: 361135
nosy: cesteves, docs at python
priority: normal
severity: normal
status: open
title: http://sphinx.pocoo.org/

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jan 31 22:45:08 2020
From: report at bugs.python.org (=?utf-8?b?5b6Q5b275?=)
Date: Sat, 01 Feb 2020 03:45:08 +0000
Subject: [New-bugs-announce] [issue39515] pathlib won't strip "\n" in path
Message-ID: <1580528708.47.0.815684087896.issue39515@roundup.psfhosted.org>

New submission from ?? :

Pathlib won't strip "\n" in a path. Of course, "\n" shouldn't exist in a legal path. For example:

    >>> a = pathlib.Path(pathlib.Path("C:/Program Files/\n"), "./JetBrains/\n")
    >>> a
    WindowsPath('C:/Program Files/\n/JetBrains/\n')

----------
components: Library (Lib)
messages: 361149
nosy: ??
priority: normal
severity: normal
status: open
title: pathlib won't strip "\n" in path
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________
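As a footnote to issue39515: pathlib keeps whatever characters it is given, so when stray newlines come from user or file input, the usual workaround is to strip them before building the path. A sketch (using PureWindowsPath so it runs the same on any platform):

```
import pathlib

raw_parts = ["C:/Program Files/\n", "./JetBrains/\n"]

# Strip surrounding whitespace (including the trailing "\n") explicitly;
# the path class itself performs no such cleanup.
clean = pathlib.PureWindowsPath(*(part.strip() for part in raw_parts))

print(repr(clean))  # PureWindowsPath('C:/Program Files/JetBrains')
```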