From report at bugs.python.org Mon Apr 1 09:54:16 2019 From: report at bugs.python.org (Thomas Buhrmann) Date: Mon, 01 Apr 2019 13:54:16 +0000 Subject: [New-bugs-announce] [issue36497] Undocumented behavior in csv.Sniffer (preferred delimiters) Message-ID: <1554126856.48.0.397677994988.issue36497@roundup.psfhosted.org> New submission from Thomas Buhrmann : When the Sniffer detects more than one possible delimiter, as e.g. in the following file "a;b;c;d,e;f;g;h", the result will always be the ',' delimiter, independent of how "dominant" another delimiter is. This is because the codepath analyzing dominance only gets executed if the undocumented Sniffer member Sniffer.preferred is overwritten by the user after initialization. While not strictly a bug, the behavior should probably be documented, and the 'preferred' member could perhaps be exposed as an argument to __init__(). ---------- components: Library (Lib) messages: 339291 nosy: thomas priority: normal severity: normal status: open title: Undocumented behavior in csv.Sniffer (preferred delimiters) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 1 10:25:39 2019 From: report at bugs.python.org (Benjamin Krala) Date: Mon, 01 Apr 2019 14:25:39 +0000 Subject: [New-bugs-announce] [issue36498] combining dict comprehensing and lists lead to IndexError Message-ID: <1554128739.17.0.108677031645.issue36498@roundup.psfhosted.org> New submission from Benjamin Krala : The following code snippet leads to an IndexError in the last line. It basically puts EN_cmw into a dict by splitting each line on '->'. To avoid the bug you can change the 1 into -1 (by definition it shouldn't make a difference). EN_cmw = '''abandonned->abandoned aberation->aberration abilityes->abilities abilties->abilities abilty->ability abondon->abandon abbout->about ''' EN_cmw = EN_cmw.split('\n') EN_cmw = [string.strip() for string in EN_cmw] { line.split('->')[0]: line.split('->')[1] for line in EN_cmw } ---------- components: Interpreter Core messages: 339293 nosy: Benjamin Krala priority: normal severity: normal status: open title: combining dict comprehensing and lists lead to IndexError type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 1 19:02:43 2019 From: report at bugs.python.org (Vadim) Date: Mon, 01 Apr 2019 23:02:43 +0000 Subject: [New-bugs-announce] [issue36499] unpickling of a datetime object in 3.5 fails when pickled with 2.7 Message-ID: <1554159763.02.0.091153973207.issue36499@roundup.psfhosted.org> New submission from Vadim : Unpickling fails when pickling is performed with 2.7 (pickledatetime.py) and unpickling is done with 3.5 (tested on Ubuntu 16.04). Please see the detailed error description and workaround in the comments to the attached files.
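For readers without the attached archive, here is a minimal sketch of the usual failure mode and of the workaround documented for the pickle module (the file name dt.pkl and the exception caught are illustrative; the attached scripts may differ):

```python
# Sketch only -- assumes a file dt.pkl written by Python 2.7 with something like:
#   import datetime, pickle
#   with open("dt.pkl", "wb") as f:
#       pickle.dump(datetime.datetime.now(), f, protocol=2)
import pickle

with open("dt.pkl", "rb") as f:
    try:
        obj = pickle.load(f)  # default encoding='ascii' typically fails on 2.x datetime payloads
    except UnicodeDecodeError:
        f.seek(0)
        # The pickle docs recommend encoding='latin1' for datetime/date/time
        # objects pickled by Python 2.
        obj = pickle.load(f, encoding="latin1")

print(repr(obj))
```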
---------- components: Library (Lib) files: pickle_unpickle.tar.gz messages: 339308 nosy: vadimf priority: normal severity: normal status: open title: unpickling of a datetime object in 3.5 fails when pickled with 2.7 type: crash versions: Python 2.7, Python 3.5 Added file: https://bugs.python.org/file48240/pickle_unpickle.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 1 20:13:34 2019 From: report at bugs.python.org (anthony shaw) Date: Tue, 02 Apr 2019 00:13:34 +0000 Subject: [New-bugs-announce] [issue36500] Add "regen-*" equivalent projects for Windows builds Message-ID: <1554164014.38.0.603917502843.issue36500@roundup.psfhosted.org> New submission from anthony shaw : Now that pgen is written in Python, it'd be useful for Windows users to be able to rebuild grammar and tokens into the parser table. The current hook (make regen-grammar) is built into the Makefile. Add support for VS2017+ vcxproj files to call the script directly ---------- components: Build messages: 339309 nosy: anthony shaw priority: normal severity: normal status: open title: Add "regen-*" equivalent projects for Windows builds versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 1 22:54:06 2019 From: report at bugs.python.org (Ivan Pozdeev) Date: Tue, 02 Apr 2019 02:54:06 +0000 Subject: [New-bugs-announce] [issue36501] Remove POSIX.1e ACLs in tests that rely on default permissions behavior Message-ID: <1554173646.74.0.526062009257.issue36501@roundup.psfhosted.org> New submission from Ivan Pozdeev : In Linuxes with ACLs enabled, the following tests fail, as Steve Dower discovered in https://mail.python.org/pipermail/python-dev/2019-March/156929.html: ====================================================================== FAIL: test_mode (test.test_os.MakedirTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/osboxes/Documents/cpython/Lib/test/test_os.py", line 1157, in test_mode self.assertEqual(os.stat(parent).st_mode & 0o777, 0o775) AssertionError: 493 != 509 ====================================================================== FAIL: test_open_mode (test.test_pathlib.PosixPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/osboxes/Documents/cpython/Lib/test/test_pathlib.py", line 2104, in test_open_mode self.assertEqual(stat.S_IMODE(st.st_mode), 0o666) AssertionError: 420 != 438 ====================================================================== FAIL: test_touch_mode (test.test_pathlib.PosixPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/osboxes/Documents/cpython/Lib/test/test_pathlib.py", line 2117, in test_touch_mode self.assertEqual(stat.S_IMODE(st.st_mode), 0o666) AssertionError: 420 != 438 POSIX.1e is supported by major distros even though it's officially withdrawn; see https://en.wikipedia.org/wiki/Access_control_list#Filesystem_ACLs . 
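One possible shape of the fix suggested by the title is to strip any inherited default ACL from the test directory before asserting on st_mode. A minimal sketch, assuming the setfacl utility from the acl package is available (the helper name is illustrative):

```python
import shutil
import subprocess

def remove_default_acls(path):
    """Drop extended and default POSIX.1e ACL entries so that mkdir/open
    honour the plain umask these tests assume."""
    if shutil.which("setfacl"):
        # -b removes all extended ACL entries, -k removes the default ACL
        subprocess.run(["setfacl", "-b", "-k", path], check=False)
```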
---------- components: Tests messages: 339313 nosy: Ivan.Pozdeev priority: normal severity: normal status: open title: Remove POSIX.1e ACLs in tests that rely on default permissions behavior type: behavior versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 02:36:07 2019 From: report at bugs.python.org (Jun) Date: Tue, 02 Apr 2019 06:36:07 +0000 Subject: [New-bugs-announce] [issue36502] The behavior of str.isspace() for U+00A0 and U+202F is different from what is documented Message-ID: <1554186967.56.0.21483381742.issue36502@roundup.psfhosted.org> New submission from Jun : I was looking for a list of Unicode codepoints for which str.isspace() returns true. According to https://docs.python.org/3/library/stdtypes.html#str.isspace, it's "Whitespace characters are those characters defined in the Unicode character database as 'Other' or 'Separator' and those with bidirectional property being one of 'WS', 'B', or 'S'." However, for U+202F (https://www.fileformat.info/info/unicode/char/202f/index.htm), which is a "Separator" and whose bidirectional property is "CS", str.isspace() returns True while it shouldn't if we follow the definition above. >>> "\u202f".isspace() True I'm not sure whether the documentation or the behavior should be updated, but at least the two should be consistent. ---------- assignee: docs at python components: Documentation, Unicode messages: 339317 nosy: Jun, docs at python, ezio.melotti, vstinner priority: normal severity: normal status: open title: The behavior of str.isspace() for U+00A0 and U+202F is different from what is documented type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 05:21:16 2019 From: report at bugs.python.org (Michael Felt) Date: Tue, 02 Apr 2019 09:21:16 +0000 Subject: [New-bugs-announce] [issue36503] remove references to aix3 and aix4 in \*.py Message-ID: <1554196876.37.0.11821989431.issue36503@roundup.psfhosted.org> New submission from Michael Felt : sys.platform returns "aix[3|4|5|6|7]". AIX 3 and AIX 4 are no longer around in any supported form, so references to these specific releases are pointless. This will remove the (two) places where they are still referenced. ---------- components: Build, Tests messages: 339322 nosy: Michael.Felt priority: normal severity: normal status: open title: remove references to aix3 and aix4 in \*.py versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 06:09:01 2019 From: report at bugs.python.org (Zackery Spytz) Date: Tue, 02 Apr 2019 10:09:01 +0000 Subject: [New-bugs-announce] [issue36504] Signed integer overflow in _ctypes.c's PyCArrayType_new() Message-ID: <1554199741.58.0.0522165887843.issue36504@roundup.psfhosted.org> New submission from Zackery Spytz : Signed integer overflow can occur in the overflow check in PyCArrayType_new() if "itemsize" is large enough.
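To make "itemsize" concrete, a small illustration (a 64-bit build is assumed; the final multiplication is left commented out because its behaviour is exactly what this report questions):

```python
import ctypes

# Creating an array *type* does not allocate the element buffer, so the
# element type's itemsize can be made huge very cheaply:
Big = ctypes.c_char * (2 ** 40)
print(ctypes.sizeof(Big))   # 1099511627776

# An array of such elements drives length * itemsize toward Py_ssize_t limits,
# which is where PyCArrayType_new()'s own overflow check is reported to overflow:
# Huge = Big * (2 ** 40)    # not executed here; behaviour depends on the bug
```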
---------- components: Extension Modules, ctypes messages: 339326 nosy: ZackerySpytz priority: normal severity: normal status: open title: Signed integer overflow in _ctypes.c's PyCArrayType_new() type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 09:43:45 2019 From: report at bugs.python.org (tejesh) Date: Tue, 02 Apr 2019 13:43:45 +0000 Subject: [New-bugs-announce] [issue36505] PYTHON-CAN with vector Message-ID: <1554212625.82.0.669161451897.issue36505@roundup.psfhosted.org> New submission from tejesh : Hi team, I am trying to send CAN messages from Python. I am able to send CAN messages for a given time period (limited duration), but I want to send a particular number of them (example: I want to send only 2 CAN messages). How can I do that? Can you please help me resolve this issue ASAP? ---------- components: Build messages: 339332 nosy: tejesh priority: normal severity: normal status: open title: PYTHON-CAN with vector type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 10:48:46 2019 From: report at bugs.python.org (bigbigliang) Date: Tue, 02 Apr 2019 14:48:46 +0000 Subject: [New-bugs-announce] [issue36506] An arbitrary execution vulnerability exists in the built-in function getattr Message-ID: <1554216526.55.0.671553927904.issue36506@roundup.psfhosted.org> New submission from bigbigliang : Dear Python Community, We've found a bug in the CPython Lib and have already received a CVE number (CVE-2019-10268). But to be honest, I'm not sure whether it is really a vulnerability. Please tell me what to do next. bigbigliang ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 339337 nosy: 18z, bigbigliang, christian.heimes, krnick, serhiy.storchaka, vstinner, xtreak priority: normal severity: normal status: open title: An arbitrary execution vulnerability exists in the built-in function getattr type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 10:57:33 2019 From: report at bugs.python.org (George Shuklin) Date: Tue, 02 Apr 2019 14:57:33 +0000 Subject: [New-bugs-announce] [issue36507] frozenset type breaks ZFC Message-ID: <1554217053.66.0.153832403851.issue36507@roundup.psfhosted.org> New submission from George Shuklin : ZFC (https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) defines numbers as nested empty sets: 0 is {}, 1 is {{}}, 2 is {{{}}}. Sets cannot be nested in Python (as they are mutable), so the next best type is frozenset. Unfortunately, nested sets are equal to each other no matter how deep they are nested. This behavior means that 3==2, and it breaks all set operations for ZFC.
Minimal example: frozenset({frozenset()}) >>> x=frozenset() >>> y=frozenset(frozenset()) >>> x is y True ---------- components: Interpreter Core messages: 339340 nosy: george-shuklin priority: normal severity: normal status: open title: frozenset type breaks ZFC type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 11:00:19 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 02 Apr 2019 15:00:19 +0000 Subject: [New-bugs-announce] [issue36508] python-config --ldflags must not contain LINKFORSHARED Message-ID: <1554217219.99.0.0709585421369.issue36508@roundup.psfhosted.org> New submission from STINNER Victor : python-config --ldflags must not contain LINKFORSHARED. Attached PR modifies python-config --ldflags to no longer include LINKFORSHARED. This similar change was already made on macOS: see bpo-14197. -- Python build system uses a LINKFORSHARED variable, extract of configure.ac: # LINKFORSHARED are the flags passed to the $(CC) command that links # the python executable -- this is only needed for a few systems This variable is set to "-Xlinker -export-dynamic" on Linux. Extract of ld manual page for --export-dynamic option: --- When creating a dynamically linked executable, using the -E option or the --export-dynamic option causes the linker to add all symbols to the dynamic symbol table. The dynamic symbol table is the set of symbols which are visible from dynamic objects at run time. If you do not use either of these options (or use the --no-export-dynamic option to restore the default behavior), the dynamic symbol table will normally contain only those symbols which are referenced by some dynamic object mentioned in the link. If you use "dlopen" to load a dynamic object which needs to refer back to the symbols defined by the program, rather than some other dynamic object, then you will probably need to use this option when linking the program itself. You can also use the dynamic list to control what symbols should be added to the dynamic symbol table if the output format supports it. See the description of --dynamic-list. Note that this option is specific to ELF targeted ports. PE targets support a similar function to export all symbols from a DLL or EXE; see the description of --export-all-symbols below. --- Both configure.ac and ld manual page mention an "executable", whereas LINKFORSHARED is currently exported in python-config --ldflags. Example on Fedora 29: $ python3-config --ldflags -L/usr/lib64 -lpython3.7m -lpthread -ldl -lutil -lm -Xlinker -export-dynamic The "-export-dynamic" flag causes non-obvious dynamic linking bug like the following bug in Samba which embeds Python: * https://bugzilla.redhat.com/show_bug.cgi?id=1198161 (python bug) * https://bugzilla.redhat.com/show_bug.cgi?id=1198158 (bug reported on samba) * https://bugzilla.redhat.com/show_bug.cgi?id=1197914 (bug originally reported on gdb) -- History of the LINKFORSHARED variable. (*) Python build system uses a LINKFORSHARED variable since this commit: commit 7cc5abd4548629cc41d3951576f41ff2ddd7b5f7 Author: Guido van Rossum Date: Mon Sep 12 10:42:20 1994 +0000 Support shared library creation. 
The value of the variable changed on Linux with: commit b65a48e2b631f9a171e6eab699974bd2074f40d7 (HEAD) Author: Guido van Rossum Date: Wed Jun 14 18:21:23 1995 +0000 linux elf shlib; sys/wait.h; don't add -posix for NeXT Extract of the configure.in change: if test -z "$LINKFORSHARED" then case $ac_sys_system in hp*|HP*) LINKFORSHARED="-Wl,-E";; + Linux*) LINKFORSHARED="-rdynamic";; esac fi The variable was only used to build the "python" executable. Extract of Modules/Makefile.in (at commit 7cc5abd4548629cc41d3951576f41ff2ddd7b5f7): ../python: config.o $(MYLIBS) Makefile $(CC) $(OPT) config.o $(LINKFORSHARED) \ $(MYLIBS) $(MODLIBS) $(LIBS) $(SYSLIBS) -o python mv python ../python (*) The python-config script was created as a Python script by: commit c90b17ec8233009e4745dd8f77401f52c5d4a8d5 Author: Martin v. L?wis Date: Sat Apr 15 08:13:05 2006 +0000 Patch #1161914: Add python-config. The following commit modified Misc/python-config.in to add LINKFORSHARED to python-config --ldflags: commit a70f3496203cd68d88208a21d90f0ca3503aa2f6 Author: Collin Winter Date: Fri Mar 19 00:08:44 2010 +0000 Make python-config support multiple option flags on the same command line, rather than requiring one invocation per flag. Extract: + elif opt in ('--libs', '--ldflags'): + libs = getvar('LIBS').split() + getvar('SYSLIBS').split() + libs.append('-lpython'+pyver) + # add the prefix/lib/pythonX.Y/config dir, but only if there is no + # shared library in prefix/lib/. + if opt == '--ldflags': + if not getvar('Py_ENABLE_SHARED'): + libs.insert(0, '-L' + getvar('LIBPL')) + libs.extend(getvar('LINKFORSHARED').split()) + print ' '.join(libs) The following commit modified Misc/python-config.in to not add LINKFORSHARED into "libs" when built on macOS (if PYTHONFRAMEWORK is defined): commit ecd4e9de5afab6a5d75a6fa7ebfb62804ba69264 Author: Ned Deily Date: Tue Jul 24 03:31:48 2012 -0700 Issue #14197: For OS X framework builds, ensure links to the shared library are created with the proper ABI suffix. diff --git a/Misc/python-config.in b/Misc/python-config.in index 1d4a81d850..79f0bb14c1 100644 --- a/Misc/python-config.in +++ b/Misc/python-config.in @@ -52,7 +52,8 @@ for opt in opt_flags: if opt == '--ldflags': if not getvar('Py_ENABLE_SHARED'): libs.insert(0, '-L' + getvar('LIBPL')) - libs.extend(getvar('LINKFORSHARED').split()) + if not getvar('PYTHONFRAMEWORK'): + libs.extend(getvar('LINKFORSHARED').split()) print(' '.join(libs)) (*) A shell version of python-config has been added by: commit 874211978c8097b8e747c90fa3ff41aacabe340f Author: doko at python.org Date: Sat Jan 26 11:39:31 2013 +0100 - Issue #16235: Implement python-config as a shell script. 
Extract of Misc/python-config.sh.in at this commit: --ldflags) LINKFORSHAREDUSED= if [ -z "$PYTHONFRAMEWORK" ] ; then LINKFORSHAREDUSED=$LINKFORSHARED fi LIBPLUSED= if [ "$PY_ENABLE_SHARED" = "0" ] ; then LIBPLUSED="-L$LIBPL" fi echo "$LIBPLUSED -L$libdir $LIBS $LINKFORSHAREDUSED" ;; ---------- components: Build messages: 339341 nosy: vstinner priority: normal severity: normal status: open title: python-config --ldflags must not contain LINKFORSHARED versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 12:59:39 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 02 Apr 2019 16:59:39 +0000 Subject: [New-bugs-announce] [issue36509] Add iot layout for windows iot containers Message-ID: <1554224379.78.0.511205388496.issue36509@roundup.psfhosted.org> New submission from Paul Monson : The layout should not contain tcl/tk, tkinter, distutils since ARM is cross-compiled and these features will not be useful on target ARM devices. ---------- components: Build, Windows messages: 339352 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add iot layout for windows iot containers type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 14:06:08 2019 From: report at bugs.python.org (DJR) Date: Tue, 02 Apr 2019 18:06:08 +0000 Subject: [New-bugs-announce] [issue36510] Regular Expression Dot-Star patter matching - re- text skipping Message-ID: <1554228368.59.0.219643697594.issue36510@roundup.psfhosted.org> New submission from DJR : #Python 3.7.2 Tk version 8.6.8 #IDLE version: 3.7.2 #Following code does not return ALL the partial matched patters within a string import re #_____________ #Normal Mode #_____________ atRegex = re.compile(r'...at') TextStr=(atRegex.findall(':at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat.')) print('\nFull Text String:---> :at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat\n') print('\n Normal Mode: Returns\n' + ':--->' + str(TextStr)) #_____________ #Greedy Mode #_____________ atRegex = re.compile(r'...at*') TextStr=(atRegex.findall(':at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat.')) print('\nFull Text String:---> :at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat mat\n') print('\n Greedy Mode: Returns\n' + ':---> ' + str(TextStr)+'\n') """ #=================================================================== # IDLE OutPut Normal Mode and Greedy Mode: multiple 'sat' are missing #=================================================================== Full Text String:---> :at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat. Normal Mode: Returns :---> ['at-at', 'igcat', 'e hat', ' flat', 't mat'] Full Text String:---> :at-at~at: The Bigcat sat in the hat sat on the flat sat mat sat. 
Greedy Mode: Returns :---> ['at-at', 'igcat', 'e hat', ' flat', 't mat'] """ ---------- assignee: terry.reedy components: IDLE, Library (Lib), Regular Expressions, Windows messages: 339357 nosy: djr_python, ezio.melotti, mrabarnett, paul.moore, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal status: open title: Regular Expression Dot-Star patter matching - re- text skipping type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 15:09:21 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 02 Apr 2019 19:09:21 +0000 Subject: [New-bugs-announce] [issue36511] Add Windows ARM32 buildbot Message-ID: <1554232161.58.0.393661272603.issue36511@roundup.psfhosted.org> New submission from Paul Monson : Zachary Ware suggested I create an issue to discuss this: I've started a worker using the worker name monson-win-arm32 and the password provided. I think it is waiting for a change, there were no errors, but it didn't print anything. Also, I don?t see anything in the list of builders that looks like it would be windows arm32, and it's not showing in the list of workers. I'm looking at tools/buildbot/test.bat and it seems like it might be a good place to use SSH to run the test on arm32 device, but I'm not clear on where it is called from or what the best way to detect that project is being cross-compiled. Should I add an "-arm32" switch here? ---------- components: Build, Windows messages: 339362 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add Windows ARM32 buildbot type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 15:52:53 2019 From: report at bugs.python.org (=?utf-8?q?Stefan_H=C3=B6lzl?=) Date: Tue, 02 Apr 2019 19:52:53 +0000 Subject: [New-bugs-announce] [issue36512] future_factory argument for Thread/ProcessPoolExecutor Message-ID: <1554234773.66.0.185942114441.issue36512@roundup.psfhosted.org> New submission from Stefan H?lzl : adding a future_factory argument to Thread/ProcessPoolExecutor to control which type of Future should be created by Executor.submit ---------- components: Library (Lib) messages: 339364 nosy: stefanhoelzl priority: normal severity: normal status: open title: future_factory argument for Thread/ProcessPoolExecutor type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 20:51:17 2019 From: report at bugs.python.org (Paul Monson) Date: Wed, 03 Apr 2019 00:51:17 +0000 Subject: [New-bugs-announce] [issue36513] Add support for building arm32 nuget package Message-ID: <1554252677.73.0.380918905446.issue36513@roundup.psfhosted.org> Change by Paul Monson : ---------- components: Build, Windows nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add support for building arm32 nuget package type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 2 22:14:02 2019 From: report at bugs.python.org (Colin Dick) Date: Wed, 03 Apr 2019 02:14:02 +0000 Subject: [New-bugs-announce] [issue36514] -m switch revisited Message-ID: 
<1554257642.04.0.214620522086.issue36514@roundup.psfhosted.org> New submission from Colin Dick : -m switch revisited - see issue 27487 Win 10 64bit python 3.6.3 & 3.7.3 initially running code using py 3.6.3 with this command python -m vixsd.py produced C:\Python36\python.exe: Error while finding module specification for 'vixsd.py' (AttributeError: module 'vixsd' has no attribute '__path__') updated python from 3.6.3 to 3.7.3 searched & read web retried the 4 options with & without "-m" & ".py" results reproduced below c:\shared\python\vmw>python vixsd python: can't open file 'vixsd': [Errno 2] No such file or directory c:\shared\python\vmw>python vixsd.py A c:\shared\python\vmw>python -m vixsd A c:\shared\python\vmw>python -m vixsd.py A C:\Python3\python.exe: Error while finding module specification for 'vixsd.py' (ModuleNotFoundError: __path__ attribute not found on 'vixsd' while trying to find 'vixsd.py') while this was initially produced thru my ignorance, handling all 4 options still does not work correctly appears to have been a problem at least since issue 27487 cheers team, keep up the great work ColinDNZ ---------- messages: 339374 nosy: Colin Dick priority: normal severity: normal status: open title: -m switch revisited type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 01:41:19 2019 From: report at bugs.python.org (Matthias Klose) Date: Wed, 03 Apr 2019 05:41:19 +0000 Subject: [New-bugs-announce] [issue36515] unaligned memory access in the _sha3 extension Message-ID: <1554270079.02.0.655835284074.issue36515@roundup.psfhosted.org> New submission from Matthias Klose : This was seen when running an armhf binary on a 64bit kernel. The problem is that the implementation uses unaligned memory accesses, and even is well aware of that. The module allows misaligned memory accesses by default. The NO_MISALIGNED_ACCESSES macro is never defined. Now you can define it only on architectures where unaligned memory accesses are not allowed (ARM32 on 64bit kernels), or where there are performance penalties (AArch64), or just don't try to outsmart modern compilers and always define this macro. The attached patch only fixes the issue on ARM32 and AArch64, however the safe fix should be to always define the macro. ---------- components: Extension Modules files: arm-alignment.diff keywords: patch messages: 339379 nosy: christian.heimes, doko priority: high severity: normal status: open title: unaligned memory access in the _sha3 extension type: crash versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48243/arm-alignment.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 02:27:07 2019 From: report at bugs.python.org (Jiongjiong Gao) Date: Wed, 03 Apr 2019 06:27:07 +0000 Subject: [New-bugs-announce] [issue36516] Python Launcher can not recognize pyw file as Python GUI Script file type correctly. Message-ID: <1554272827.64.0.548414335806.issue36516@roundup.psfhosted.org> New submission from Jiongjiong Gao : In Python Launcher Preferences there are two settings for file type, Python Script and Python GUI Script. But Python Launcher can not recognize pyw file as Python GUI Script file type correctly. 
---------- components: macOS messages: 339380 nosy: gjj2828, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Python Launcher can not recognize pyw file as Python GUI Script file type correctly. type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 09:07:30 2019 From: report at bugs.python.org (Andrew Wason) Date: Wed, 03 Apr 2019 13:07:30 +0000 Subject: [New-bugs-announce] [issue36517] typing.NamedTuple does not support mixins Message-ID: <1554296850.04.0.71784053328.issue36517@roundup.psfhosted.org> New submission from Andrew Wason : Subclassing typing.NamedTuple an inheriting from a mixin class does not work. It does work for collections.namedtuple, and can be worked around by modifying typing.NamedTupleMeta: >>> import collections >>> import typing >>> >>> >>> class Mixin: ... def mixin(self): ... return "mixin" ... >>> >>> class CollectionsNamedTuple(Mixin, collections.namedtuple('CollectionsNamedTuple', [ ... "a", ... "b", ... ])): ... pass ... >>> >>> class TypingNamedTuple(Mixin, typing.NamedTuple): ... a: str ... b: str ... >>> >>> class NamedTupleMeta(typing.NamedTupleMeta): ... def __new__(cls, typename, bases, ns): ... cls_obj = super().__new__(cls, typename + '_nm_base', bases, ns) ... bases = bases + (cls_obj,) ... return type(typename, bases, {}) ... >>> >>> class FixedTypingNamedTuple(Mixin, metaclass=NamedTupleMeta): ... a: str ... b: str ... >>> >>> cnt = CollectionsNamedTuple("av", "bv") >>> tnt = TypingNamedTuple("av", "bv") >>> ftnt = FixedTypingNamedTuple("av", "bv") >>> >>> cnt.mixin() 'mixin' >>> ftnt.mixin() 'mixin' >>> tnt.mixin() Traceback (most recent call last): File "", line 1, in AttributeError: 'TypingNamedTuple' object has no attribute 'mixin' ---------- components: Library (Lib) messages: 339390 nosy: rectalogic priority: normal severity: normal status: open title: typing.NamedTuple does not support mixins type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 10:14:02 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 03 Apr 2019 14:14:02 +0000 Subject: [New-bugs-announce] [issue36518] Avoid conflicts when pass arbitrary keyword arguments to Python function Message-ID: <1554300842.91.0.496796216897.issue36518@roundup.psfhosted.org> New submission from Serhiy Storchaka : This is yet one alternative to PEP 570. It does not solve all problems that PEP 570 is purposed to solve, but it significantly reduces the need in positional-only parameters. Currently the problem with implementing in Python functions that should accept arbitrary keyword arguments is that argument names can conflict with names of other parameters (in particularly "self"). For example, look at the function def log(fmt, **kwargs): print(fmt.format_map(kwargs)) You cannot call log('Format: {fmt}', fmt='binary'), because the argument for parameter "fmt" is specified twice: as positional argument and as keyword argument. The idea is that if the function has the var-keyword parameter, then keyword arguments with names which match passed positional arguments will be saved into the var-keyword dict instead of be error. The advantage of this idea over alternatives is that it does not need changing the user code. Implementing this feature will fix the user code that we do not even see. 
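To make the conflict concrete, here is a small sketch (the log() function and the failing call are the ones from the example above; log2() shows the existing "*args hack" that the proposal would make unnecessary):

```python
def log(fmt, **kwargs):
    print(fmt.format_map(kwargs))

# log('Format: {fmt}', fmt='binary')   # today: TypeError, fmt is passed twice

def log2(*args, **kwargs):             # current workaround: take fmt positionally
    fmt, = args
    print(fmt.format_map(kwargs))

log2('Format: {fmt}', fmt='binary')    # prints: Format: binary
```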
Most functions that otherwise would need positional only parameters (over 60 in the stdlib) will be automatically fixed by this feature. We could revert the deprecations added in issue36492 and simplify few functions that used the *args hack before. The change itself is very simple, just modification of few lines in ceval.c and inspect.py. The disadvantage is that it does not help with optional parameters. For example: def make_dict(dict=(), **kwargs): res = {} res.update(dict) res.update(kwargs) return res make_dict(dict={}, list=[]) will still return {'list': []} instead of {'dict': {}, list: []}. You still need to use the *args hack to get the latter result. But there are not much such functions. This idea was proposed by Steve [1]. [1] https://discuss.python.org/t/pep-570-python-positional-only-parameters/1078/39 ---------- components: Interpreter Core messages: 339392 nosy: gvanrossum, serhiy.storchaka, steve.dower priority: normal severity: normal status: open title: Avoid conflicts when pass arbitrary keyword arguments to Python function type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 11:04:35 2019 From: report at bugs.python.org (George King) Date: Wed, 03 Apr 2019 15:04:35 +0000 Subject: [New-bugs-announce] [issue36519] Blake2b/s implementations have minor GIL issues Message-ID: <1554303875.24.0.336244422636.issue36519@roundup.psfhosted.org> New submission from George King : I was browsing the Blake2b module implementation in master and noticed two subtle issues in blake2b_impl.c. There are two places where the GIL gets released; both of them appear flawed. py_blake2b_new_impl, line 221. The ALLOW_THREADS block fails to acquire/release self->lock. _blake2_blake2b_update, line 279. The lock is lazily allocated correctly on line 279. However the test on 282 that chooses to release the GIL or not fails to take into account the length test. This means that once a large block is fed to `update`, then every call to update will release the GIL, even if it is a single byte. It should look something more like this: ``` bool should_allow_threads = (buf.len >= HASHLIB_GIL_MINSIZE); if (should_allow_threads && self->lock == NULL) self->lock = PyThread_allocate_lock(); if (should_allow_threads && self->lock != NULL) { ... } else { ... } ``` This respects the size criterion, and also protects against the case where the lock allocation fails. ---------- components: Extension Modules messages: 339394 nosy: gwk priority: normal severity: normal status: open title: Blake2b/s implementations have minor GIL issues versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 19:15:00 2019 From: report at bugs.python.org (Jonathan Horn) Date: Wed, 03 Apr 2019 23:15:00 +0000 Subject: [New-bugs-announce] [issue36520] Email header folded incorrectly Message-ID: <1554333300.49.0.663084865399.issue36520@roundup.psfhosted.org> New submission from Jonathan Horn : I encountered a problem with replacing the 'Subject' header of an email. After serializing it again, the utf8 encoding was wrong. It seems to be occurring when folding the internal header objects. Example: >> email.policy.default.fold_binary('Subject', email.policy.default.header_store_parse('Subject', 'Hello W?rld! Hello W?rld! Hello W?rld! 
Hello W?rld!Hello W?rld!')[1]) Expected output: b'Subject: Hello =?utf-8?q?W=C3=B6rld!_Hello_W=C3=B6rld!_Hello_W=C3=B6rld!?=\n Hello =?utf-8?q?W=C3=B6rld!Hello_W=C3=B6rld!?=\n' (or similar) Actual output: b'Subject: Hello =?utf-8?q?W=C3=B6rld!_Hello_W=C3=B6rld!_Hello_W=C3=B6rld!?=\n Hello =?utf-8?=?utf-8?q?q=3FW=3DC3=3DB6rld!Hello=3F=3D_W=C3=B6rld!?=\n' I'm running Python 3.7.3 on Arch Linux using Linux 5.0. ---------- components: email messages: 339419 nosy: Jonathan Horn, barry, r.david.murray priority: normal severity: normal status: open title: Email header folded incorrectly type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 21:04:15 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 04 Apr 2019 01:04:15 +0000 Subject: [New-bugs-announce] [issue36521] Consider removing docstrings from co_consts in code objects Message-ID: <1554339855.64.0.463077014069.issue36521@roundup.psfhosted.org> New submission from Raymond Hettinger : Function objects provide __doc__ as a documented writeable attribute. However, code objects also have the same information in co_consts[0]. When __doc__ is changed, the latter keeps a reference to the old string. Also, the disassembly shows that co_consts[0] is never used. Can we remove the entry in co_consts? It looks like a compilation artifact rather than something that we need or want. >>> def f(x): 'y' >>> f.__doc__ 'y' >>> f.__code__.co_consts[0] 'y' >>> f.__doc__ = 'z' >>> f.__code__.co_consts[0] 'y' >>> from dis import dis >>> dis(f) 2 0 LOAD_CONST 1 (None) 2 RETURN_VALUE ---------- components: Interpreter Core messages: 339422 nosy: rhettinger priority: normal severity: normal status: open title: Consider removing docstrings from co_consts in code objects type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 3 22:45:33 2019 From: report at bugs.python.org (Matt Houglum) Date: Thu, 04 Apr 2019 02:45:33 +0000 Subject: [New-bugs-announce] [issue36522] http/client.py does not print duplicate header values in debug Message-ID: <1554345933.73.0.102710422294.issue36522@roundup.psfhosted.org> New submission from Matt Houglum : This is a follow-up to https://bugs.python.org/issue33365. The fix for that issue (see https://github.com/python/cpython/pull/6611) added a statement to also print header values, but it does not account for the case where multiple values exist for the same header name, e.g. if my response contained these headers: x-goog-hash: crc32c=KAwGng== x-goog-hash: md5=eB5eJF1ptWaXm4bijSPyxw== then the debug output would print whichever of those values is returned from `self.headers.get("x-goog-hash")` for both prints: header: x-goog-hash: crc32c=KAwGng== header: x-goog-hash: crc32c=KAwGng== The iteration should instead be done using self.headers.items(), which will return the key and value pair to be printed. I'll send a GitHub PR shortly. 
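A small sketch of the difference, using email.message.Message (the class that http.client response headers are based on) and the header values from the example above:

```python
from email.message import Message

headers = Message()
headers["x-goog-hash"] = "crc32c=KAwGng=="
headers["x-goog-hash"] = "md5=eB5eJF1ptWaXm4bijSPyxw=="

# Current style of loop: get() returns a single value (the same one each
# time), so one of the two hashes is printed twice.
for hdr in headers.keys():
    print("header:", hdr + ":", headers.get(hdr))

# Proposed style of loop: items() yields each (name, value) pair once.
for hdr, val in headers.items():
    print("header:", hdr + ":", val)
```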
---------- components: Library (Lib) messages: 339424 nosy: Matt Houglum priority: normal severity: normal status: open title: http/client.py does not print duplicate header values in debug versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 04:03:57 2019 From: report at bugs.python.org (Marcin Niemira) Date: Thu, 04 Apr 2019 08:03:57 +0000 Subject: [New-bugs-announce] [issue36523] missing docs for IOBase writelines Message-ID: <1554365037.57.0.454839708261.issue36523@roundup.psfhosted.org> New submission from Marcin Niemira : Hey, There is a missing function doc in `io.IOBase` ```python import os help(io.IOBase.writelines) ``` produces output like: ``` Help on method_descriptor: writelines(self, lines, /) ``` I'll be happy to provide PR for this issue. Cheers, Marcin ---------- assignee: docs at python components: Documentation messages: 339434 nosy: Marcin Niemira, docs at python priority: normal severity: normal status: open title: missing docs for IOBase writelines versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 08:28:43 2019 From: report at bugs.python.org (Rocco Santoro) Date: Thu, 04 Apr 2019 12:28:43 +0000 Subject: [New-bugs-announce] [issue36524] identity operator Message-ID: <1554380923.58.0.948935452429.issue36524@roundup.psfhosted.org> New submission from Rocco Santoro : Hi all Why the identity operator and '==' are both applied to the type (see above)? Is it not convenient to distinguish them? I mean the identity operator applied to the type and '==' applied to the outcome. Thanks for the attention Best regards Rocco Santoro Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license()" for more information. >>> import math >>> x = math.log(10000000) >>> y = math.log(10) >>> print(x/y) == x/y 7.0 False >>> print(math.log(10000000)/math.log(10)) == math.log(10000000)/math.log(10) 7.0 False >>> x = math.sin(32) >>> y = math.cos(41) >>> print(y/x) == y/x -1.7905177807148493 False >>> x = math.pi >>> y = math.tau >>> x/y == print(x/y) 0.5 False >>> x = 153 >>> y = 245 >>> print(x/y) == x/y 0.6244897959183674 False >>> print(x+y) == x + y 398 False >>> print(x*y) == x*y 37485 False >>> s1 = 'Hello, ' >>> s2 = 'how are you?' >>> print(s1 + s2) == s1 + s2 Hello, how are you? False >>> print(s1 + s2) is s1 + s2 Hello, how are you? False >>> type(print(s1 + s2)) Hello, how are you? >>> type(s1 + s2) >>> type(print(y/x)) 1.6013071895424837 >>> type(x/y) ---------- assignee: terry.reedy components: IDLE messages: 339441 nosy: roccosan, terry.reedy priority: normal severity: normal status: open title: identity operator type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 09:08:32 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 04 Apr 2019 13:08:32 +0000 Subject: [New-bugs-announce] [issue36525] Deprecate instance method Message-ID: <1554383312.87.0.791083072255.issue36525@roundup.psfhosted.org> New submission from Jeroen Demeyer : The "instance method" class is not used anywhere and there are no obvious use cases. We should just deprecate it to simplify Python. 
See discussion at https://mail.python.org/pipermail/python-dev/2019-April/156975.html ---------- messages: 339444 nosy: christian.heimes, jdemeyer priority: normal severity: normal status: open title: Deprecate instance method _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 11:12:11 2019 From: report at bugs.python.org (Ahmed Soliman) Date: Thu, 04 Apr 2019 15:12:11 +0000 Subject: [New-bugs-announce] [issue36526] python crash when loading some .pyc file Message-ID: <1554390731.41.0.407793699332.issue36526@roundup.psfhosted.org> New submission from Ahmed Soliman : I was fuzzing python pyc and I got this segmentation fault ``` ==25016==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000007d147f bp 0x7ffc6875cfc0 sp 0x7ffc6875c7e0 T0) ==25016==The signal is caused by a WRITE memory access. ==25016==Hint: address points to the zero page. #0 0x7d147e in _Py_INCREF /home/cpython/./Include/object.h:453:18 #1 0x7d147e in _PyEval_EvalFrameDefault /home/cpython/Python/ceval.c:1186 #2 0x7e38bc in PyEval_EvalFrameEx /home/cpython/Python/ceval.c:625:12 #3 0x7e38bc in _PyEval_EvalCodeWithName /home/cpython/Python/ceval.c:4036 #4 0x7b72d3 in PyEval_EvalCodeEx /home/cpython/Python/ceval.c:4065:12 #5 0x7b72d3 in PyEval_EvalCode /home/cpython/Python/ceval.c:602 #6 0x911643 in run_eval_code_obj /home/cpython/Python/pythonrun.c:1047:9 #7 0x911643 in run_pyc_file /home/cpython/Python/pythonrun.c:1100 #8 0x911643 in PyRun_SimpleFileExFlags /home/cpython/Python/pythonrun.c:420 #9 0x9102cb in PyRun_AnyFileExFlags /home/cpython/Python/pythonrun.c:85:16 #10 0x517df8 in pymain_run_file /home/cpython/Modules/main.c:346:15 #11 0x517df8 in pymain_run_python /home/cpython/Modules/main.c:511 #12 0x517df8 in _Py_RunMain /home/cpython/Modules/main.c:583 #13 0x51901a in pymain_main /home/cpython/Modules/main.c:612:12 #14 0x5193e3 in _Py_UnixMain /home/cpython/Modules/main.c:636:12 #15 0x7fd06244375a in __libc_start_main (/lib64/libc.so.6+0x2375a) #16 0x437919 in _start (/home/cpython/python+0x437919) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /home/cpython/./Include/object.h:453:18 in _Py_INCREF ==25016==ABORTING ``` Python version Python 3.8.0a3+ (heads/master:cb0748d393, Apr 4 2019, 16:40:18) [Clang 8.0.0 (tags/RELEASE_800/final)] on linux ---------- files: id:000147,sig:11,src:000000,op:arith8,pos:53,val:-23 messages: 339448 nosy: Ahmed Soliman priority: normal severity: normal status: open title: python crash when loading some .pyc file versions: Python 3.8 Added file: https://bugs.python.org/file48244/id:000147,sig:11,src:000000,op:arith8,pos:53,val:-23 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 12:08:41 2019 From: report at bugs.python.org (Dmitry Marakasov) Date: Thu, 04 Apr 2019 16:08:41 +0000 Subject: [New-bugs-announce] [issue36527] unused parameter warnings in Include/object.h (affecting building third party code) Message-ID: <1554394121.78.0.720691487188.issue36527@roundup.psfhosted.org> New submission from Dmitry Marakasov : Python 3.8 and nightly introduces unused (in some cases) parameters in object.h header. This makes compilation of third party code which includes the header fail if it's built with -Werror. 
Build log excerpt: --- g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -UNDEBUG -Wall -Wextra -Werror -fPIC -I/usr/include/yajl -I/opt/python/3.8-dev/include/python3.8m -c src/construct_handlers.cc -o build/temp.linux-x86_64-3.8/src/construct_handlers.o -std=c++11 -DJSONSLICER_VERSION="0.1.4" -DUSE_BYTES_INTERNALLY -fno-exceptions -fno-rtti In file included from /opt/python/3.8-dev/include/python3.8m/pytime.h:6:0, from /opt/python/3.8-dev/include/python3.8m/Python.h:85, from src/pyobjlist.hh:26, from src/jsonslicer.hh:26, from src/construct_handlers.hh:26, from src/construct_handlers.cc:23: /opt/python/3.8-dev/include/python3.8m/object.h:441:50: error: unused parameter ?op? [-Werror=unused-parameter] static inline void _Py_ForgetReference(PyObject *op) ^ /opt/python/3.8-dev/include/python3.8m/object.h:458:43: error: unused parameter ?filename? [-Werror=unused-parameter] static inline void _Py_DECREF(const char *filename, int lineno, ^ /opt/python/3.8-dev/include/python3.8m/object.h:458:57: error: unused parameter ?lineno? [-Werror=unused-parameter] static inline void _Py_DECREF(const char *filename, int lineno, ^ --- Full build log: https://travis-ci.org/AMDmi3/jsonslicer/jobs/515771366 Possible solutions: - Add (void)param; to silence the warning as it's already done in Include/internal/pycore_atomic.h - Make distutils list python include directory with -isystem instead of -I. This way, python include won't generate warnings for third party code. ---------- components: Build messages: 339452 nosy: amdmi3 priority: normal severity: normal status: open title: unused parameter warnings in Include/object.h (affecting building third party code) type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 13:29:02 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Thu, 04 Apr 2019 17:29:02 +0000 Subject: [New-bugs-announce] [issue36528] Remove duplicate tests in Lib/tests/re_tests.py Message-ID: <1554398942.85.0.973114316102.issue36528@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Currently Lib/tests/re_tests.py has duplicate elements in the tests list variable. This is a followup on https://github.com/python/cpython/pull/12662 where there seems to be around 100 elements that are duplicates in the list. This seems to have been due to some merge commits made long time ago. Sample duplicates : https://github.com/python/cpython/pull/12662#issuecomment-479852101 ---------- components: Tests messages: 339457 nosy: serhiy.storchaka, xtreak priority: normal severity: normal status: open title: Remove duplicate tests in Lib/tests/re_tests.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 15:57:28 2019 From: report at bugs.python.org (Ilya Kazakevich) Date: Thu, 04 Apr 2019 19:57:28 +0000 Subject: [New-bugs-announce] [issue36529] Python from WindowsStore: can't install package using "-m pip" Message-ID: <1554407848.88.0.523569372633.issue36529@roundup.psfhosted.org> New submission from Ilya Kazakevich : No packages could be installed with "-m pip" because of "Access Denied". It seems that it tries to install package to "site-packages' instead of "local-packages". However, "pip.exe" works. 
Does it mean "pip.exe" is patched somehow, but not python itself? c:\>"c:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0\python" -m pip install flask Collecting flask Using cached https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl ... Installing collected packages: flask Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0\\Lib\\site-packages\\flask' Consider using the `--user` option or check the permissions. ---- But: c:\>"c:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0\pip" install flask Collecting flask Using cached https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl Installing collected packages: flask The script flask.exe is installed in 'C:\Users\SomeUser\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed flask-1.0.2 ---------- components: Installation messages: 339460 nosy: Ilya Kazakevich priority: normal severity: normal status: open title: Python from WindowsStore: can't install package using "-m pip" type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 18:26:09 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Thu, 04 Apr 2019 22:26:09 +0000 Subject: [New-bugs-announce] [issue36530] Document codecs decode_encode() and encode_decode() APIs Message-ID: <1554416769.95.0.261313535534.issue36530@roundup.psfhosted.org> New submission from Gregory P. Smith : The codecs module has public decode_encode() and encode_decode() functions. They have never been documented, but are recommended for some uses such as: https://stackoverflow.com/questions/14820429/how-do-i-decodestring-escape-in-python3/23151714#23151714 As public APIs, we should document them. ---------- assignee: docs at python components: Documentation messages: 339467 nosy: docs at python, gregory.p.smith, njs priority: normal severity: normal stage: needs patch status: open title: Document codecs decode_encode() and encode_decode() APIs versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 4 19:53:40 2019 From: report at bugs.python.org (Eddie Elizondo) Date: Thu, 04 Apr 2019 23:53:40 +0000 Subject: [New-bugs-announce] [issue36531] PyType_FromSpec wrong behavior with multiple Py_tp_members Message-ID: <1554422020.37.0.786317222832.issue36531@roundup.psfhosted.org> New submission from Eddie Elizondo : If a user accidentally defined more than one Py_tp_members in the spec, PyType_FromSpec will ignore all but the last use case. However, the number of members count will cause the type to allocate more memory than needed. This leads to weird behavior and crashes. The solution is a one line fix to just restart the count if multiple Py_tp_members are defined. 
---------- messages: 339468 nosy: eelizondo priority: normal severity: normal status: open title: PyType_FromSpec wrong behavior with multiple Py_tp_members _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 01:14:14 2019 From: report at bugs.python.org (spaceman_spiff) Date: Fri, 05 Apr 2019 05:14:14 +0000 Subject: [New-bugs-announce] [issue36532] Example of logging.formatter with new str.format style Message-ID: <1554441254.46.0.943587700045.issue36532@roundup.psfhosted.org> New submission from spaceman_spiff : It was not quite clear how to use the logging library with the new str.format style so I added an example in the logging cookbook ---------- assignee: docs at python components: Documentation messages: 339470 nosy: docs at python, spaceman_spiff priority: normal severity: normal status: open title: Example of logging.formatter with new str.format style versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 04:15:51 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Fri, 05 Apr 2019 08:15:51 +0000 Subject: [New-bugs-announce] [issue36533] logging regression with threading + fork are mixed in 3.7.1rc2 (deadlock potential) Message-ID: <1554452151.49.0.211568667786.issue36533@roundup.psfhosted.org> New submission from Gregory P. Smith : I'm spawning a dicussion buried in the way too long thread of https://bugs.python.org/issue6721 over here into its own specific issue to treat as a 3.7 release blocker for a rollback or repair decision before 3.7.4. https://github.com/python/cpython/commit/3b699932e5ac3e76031bbb6d700fbea07492641d I believe that was released in 3.7.1 is leading to a behavior regression for an application (the Fedora installer's libreswan kvmrunner?). Full details can be found in the messages of the other issue starting with: https://bugs.python.org/issue6721#msg329474 TL;DR - logging.Handler instances each have their own threading.Rlock. libreswan implements at least one logging.Handler subclass. That subclass's custom emit() implementation directly calls potentially many other sub-handlers emit() methods. Some of those emit() methods (such as logging.StreamHandler) call flush() which acquires the handler's lock. So they've got a dependency between these two locks, the first's must be acquired before the second. But the logging module APIs have no concept of sub-handlers and lock ordering. I see many flaws with the libreswan code's design (I'm already ignoring the futility of threading + fork) but this still caused a behavior regression in the stable 3.7 release. 
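For reference, a minimal sketch of the handler pattern described above (class and variable names are illustrative, not taken from the libreswan code):

```python
import logging

class FanOutHandler(logging.Handler):
    """A handler whose emit() forwards each record to several child handlers."""

    def __init__(self, *children):
        super().__init__()          # Handler.__init__ creates this handler's own lock
        self.children = children

    def emit(self, record):
        for child in self.children:
            child.emit(record)      # StreamHandler.emit() ends in flush(), which takes the child's lock

root = logging.getLogger()
root.addHandler(FanOutHandler(logging.StreamHandler()))
# Logger -> Handler.handle() acquires FanOutHandler's lock first, then each
# child's lock inside emit(): a fixed parent-before-child lock ordering.
root.warning("demo record")
```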
(more comments coming as followups to avoid a wall of text with too many topics) ---------- assignee: gregory.p.smith components: Library (Lib) keywords: 3.7regression messages: 339472 nosy: cagney, gregory.p.smith, ned.deily, vstinner priority: release blocker severity: normal status: open title: logging regression with threading + fork are mixed in 3.7.1rc2 (deadlock potential) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 06:04:31 2019 From: report at bugs.python.org (Cristi Fati) Date: Fri, 05 Apr 2019 10:04:31 +0000 Subject: [New-bugs-announce] [issue36534] tarfile: handling Windows (path) illegal characters in archive member names Message-ID: <1554458671.74.0.11308540457.issue36534@roundup.psfhosted.org> New submission from Cristi Fati : Although tar is a Nix based (and mostly used) format, it gains popularity on Win too. As tarfile is running on Win, I think it should handle (work around) path incompatibilities, as zipfile (`ZipFile._sanitize_windows_name`) does. Applies to all branches. More details on [Tarfile/Zipfile extractall() changing filename of some files](https://stackoverflow.com/questions/55340013/tarfile-zipfile-extractall-changing-filename-of-some-files/55348443#55348443). Regarding the current zipfile handling: it also can be improved (as it has a small bug), for example if the archive contains 2 files ("file:" and "file_") it won't work as expected. But this is a rare corner case. I didn't prepare a patch, since I did so for another issue (https://bugs.python.org/issue36247 - which I consider an ugly one), and it wasn't well received, also it was rejected (for different reasons). If this issue gets the green light from whomever is in charge, I'll be happy to provide one. ---------- components: Library (Lib) messages: 339486 nosy: CristiFati priority: normal severity: normal status: open title: tarfile: handling Windows (path) illegal characters in archive member names type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 07:00:57 2019 From: report at bugs.python.org (Manjusaka) Date: Fri, 05 Apr 2019 11:00:57 +0000 Subject: [New-bugs-announce] [issue36535] Windows build failure when use the code from the GitHub master branch Message-ID: <1554462057.39.0.0549572180819.issue36535@roundup.psfhosted.org> New submission from Manjusaka : I use Visual Studio 2017 to build the source code from the master branch. But it failed. The output message shows that I lose my ffi.h. I find that the developer had removed the libffi_module directory under cpython/modules/_ctypes since 32119e10b792ad7ee4e5f951a2d89ddbaf111cc5 I guess that maybe that's the problem why I get failure ---------- components: Windows messages: 339494 nosy: Manjusaka, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows build failure when use the code from the GitHub master branch versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 08:40:52 2019 From: report at bugs.python.org (wis) Date: Fri, 05 Apr 2019 12:40:52 +0000 Subject: [New-bugs-announce] [issue36536] is there a python implementation of the cpython commandline interpretor? 
Message-ID: <1554468052.35.0.435218217698.issue36536@roundup.psfhosted.org> New submission from wis : this algorithm: https://github.com/python/cpython/blob/34ef64fe5947bd7e1b075c785fc1125c4e600cd4/Python/coreconfig.c#L1644 I need a python library that does that to fix this answer https://stackoverflow.com/a/55413882/4178053 I need to get the correct path of the script from the commandline of the python process. I searched for libraries or implementations and couldn't find one, can someone help me? I can't read C. ---------- components: Argument Clinic messages: 339499 nosy: larry, wis priority: normal severity: normal status: open title: is there a python implementation of the cpython commandline interpretor? _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 14:43:36 2019 From: report at bugs.python.org (Saim Raza) Date: Fri, 05 Apr 2019 18:43:36 +0000 Subject: [New-bugs-announce] [issue36537] except statement block incorrectly assumes end of scope(?). Message-ID: <1554489816.2.0.715067150447.issue36537@roundup.psfhosted.org> New submission from Saim Raza : If pdb.set_trace() is the last statement in the first code snippet, variable 'err' is (wrongly?) excluded from locals(). Adding any code after the pdb.set_trace() statement makes 'err' available in locals. In [2]: try: ...: raise ValueError("I am ValueError") ...: except ValueError as err: ...: print("err" in locals()) ...: import pdb; pdb.set_trace() ...: True --Return-- > (5)()->None -> import pdb; pdb.set_trace() (Pdb) print("err" in locals()) False <-------------- BUG?? (Pdb) c In [3]: try: ...: raise ValueError("I am ValueError") ...: except ValueError as err: ...: print("err" in locals()) ...: import pdb; pdb.set_trace() ...: import os # Dummy code - makes variable 'err' available inside the debugger. ...: True > (6)()->None -> import os # Dummy code - makes variable err available inside debugger (Pdb) print("err" in locals()) True In [4]: sys.version_info Out[4]: sys.version_info(major=3, minor=7, micro=3, releaselevel='final', serial=0) FTR, the variable 'err' is available in both cases in case of Python 2.7. Also, this happens with ipdb as well. Please note that I am aware that I need to assign the variable 'err' to some variable inside the except block to access it outside the except block. However, the pdb statement is still inside the except block in both cases. ---------- components: Library (Lib) messages: 339510 nosy: Saim Raza priority: normal severity: normal status: open title: except statement block incorrectly assumes end of scope(?). type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 16:07:07 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Fri, 05 Apr 2019 20:07:07 +0000 Subject: [New-bugs-announce] [issue36538] _thread.interrupt_main() no longer interrupts Lock.wait Message-ID: <1554494827.92.0.565969475506.issue36538@roundup.psfhosted.org> New submission from Gregory P. Smith : In Python 2.7 our threading implementation was so poor that a thread join ultimately called our lock wait implementation that busy looped polling and sleeping to check for a lock acquisition success. calling thread.interrupt_main() which is just PyErr_SetInterrupt() C API in disguise successfully broke out of that lock wait loop. 
In Python 3, with our drastically improved threading implementation, a lock wait is a pthreads sem_timedwait() or sem_trywait() API call, blocking within the pthreads library or OS kernel. PyErr_SetInterrupt() obviously has no effect on that. Only an actual signal arriving can interrupt that. Thus instead of code using _thread.interrupt_main() - in 2-and-3-compatible applications, six.moves._thread.interrupt_main() - they should instead write: os.kill(os.getpid(), signal.SIGINT) Given that _thread is a private module, making _thread.interrupt_main() a private API, do we need to keep it? If we do, we should at least document this behavior and recommend actually sending the signal. It is less capable of actually interrupting the main thread from some common blocking operations today. Sending the signal seems like it would always be better. ---------- assignee: docs at python components: Documentation, Extension Modules, Library (Lib) messages: 339518 nosy: docs at python, gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: _thread.interrupt_main() no longer interrupts Lock.wait type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 17:13:22 2019 From: report at bugs.python.org (Dan Yeaw) Date: Fri, 05 Apr 2019 21:13:22 +0000 Subject: [New-bugs-announce] [issue36539] Distutils VC 6.0 Errors When Using mingw-w64 GCC Message-ID: <1554498802.41.0.202471650795.issue36539@roundup.psfhosted.org> New submission from Dan Yeaw : I am using the mingw-w64-x86_64-python3 package in MSYS2 on Windows to package a PyGObject app. When I try to pip install the app, I am getting errors saying that VC 6.0 isn't supported. It looks like setuptools is trying to patch distutils msvc. The msvc9compiler module in distutils uses a get_build_version function to get the version of MSVC on the system. This version of Python 3 is compiled with GCC 8.3.0, so the function doesn't find the "MSC v." prefix and returns version 6. Here is the full error: $ pip install -e . Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ...
error Complete output from command C:/tools/msys64/mingw64/bin/python3.exe C:/tools/msys64/mingw64/lib/python3.7/site-packages\pip install --ignore-installed --no-user --prefix C:/Users/dyeaw/AppData/Local/Temp/pip-build-env-bfwge_92/normal --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- gaphas>=1.0.0,<2.0.0 PyGObject>=3.30,<4.0 pycairo>=1.18,<2.0 zope.component>=4.5,<5.0: Collecting gaphas<2.0.0,>=1.0.0 Using cached https://files.pythonhosted.org/packages/68/1d/4c8501535889538fe2144b5b8836fa2b3296e06d4a3d9f7e4e7e8cc1e90f/gaphas-1.0.0-py2.py3-none-any.whl Collecting PyGObject<4.0,>=3.30 Using cached https://files.pythonhosted.org/packages/0b/fd/56ac6898afc5c7f5718026103bd8f0b44714b6f79ac20d7eb8990c9a7eab/PyGObject-3.32.0.tar.gz Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' Complete output from command C:/tools/msys64/mingw64/bin/python3.exe C:/tools/msys64/mingw64/lib/python3.7/site-packages/pep517/_in_process.py get_requires_for_build_wheel C:/Users/dyeaw/AppData/Local/Temp/tmp8lre6w7i: Traceback (most recent call last): File "C:/tools/msys64/mingw64/lib/python3.7/site-packages/pep517/_in_process.py", line 207, in main() File "C:/tools/msys64/mingw64/lib/python3.7/site-packages/pep517/_in_process.py", line 197, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "C:/tools/msys64/mingw64/lib/python3.7/site-packages/pep517/_in_process.py", line 48, in get_requires_for_build_wheel backend = _build_backend() File "C:/tools/msys64/mingw64/lib/python3.7/site-packages/pep517/_in_process.py", line 34, in _build_backend obj = import_module(mod_path) File "C:/tools/msys64/mingw64/lib/python3.7\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1006, in _gcd_import File "", line 983, in _find_and_load File "", line 953, in _find_and_load_unlocked File "", line 219, in _call_with_frames_removed File "", line 1006, in _gcd_import File "", line 983, in _find_and_load File "", line 967, in _find_and_load_unlocked File "", line 677, in _load_unlocked File "", line 728, in exec_module File "", line 219, in _call_with_frames_removed File "C:/Users/dyeaw/AppData/Local/Temp/pip-build-env-5pamh2nb/overlay/lib/python3.7/site-packages\setuptools\__init__.py", line 228, in monkey.patch_all() File "C:/Users/dyeaw/AppData/Local/Temp/pip-build-env-5pamh2nb/overlay/lib/python3.7/site-packages\setuptools\monkey.py", line 101, in patch_all patch_for_msvc_specialized_compiler() File "C:/Users/dyeaw/AppData/Local/Temp/pip-build-env-5pamh2nb/overlay/lib/python3.7/site-packages\setuptools\monkey.py", line 164, in patch_for_msvc_specialized_compiler patch_func(*msvc9('find_vcvarsall')) File "C:/Users/dyeaw/AppData/Local/Temp/pip-build-env-5pamh2nb/overlay/lib/python3.7/site-packages\setuptools\monkey.py", line 151, in patch_params mod = import_module(mod_name) File "C:/tools/msys64/mingw64/lib/python3.7\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1006, in _gcd_import File "", line 983, in _find_and_load File "", line 967, in _find_and_load_unlocked File "", line 677, in _load_unlocked File "", line 728, in exec_module File "", line 219, in _call_with_frames_removed File "C:/tools/msys64/mingw64/lib/python3.7\distutils\msvc9compiler.py", line 296, in 
raise DistutilsPlatformError("VC %0.1f is not supported by this module" % VERSION) distutils.errors.DistutilsPlatformError: VC 6.0 is not supported by this module Others are getting this error as well: https://stackoverflow.com/questions/52166914/msys2-mingw64-pip-vc-6-0-is-not-supported-by-this-module One way to fix may be to not raise the error if it occurs: VERSION = get_build_version() if VERSION < 8.0: - raise DistutilsPlatformError("VC %0.1f is not supported by this module" % VERSION) + pass If you think this should be fixed in Setuptools to not try to patch distutils in this instance, or downstream in mingw-w64 packages, please point me in the right direction. I am also glad to help submit a PR if I can have some guidance on the best approach. ---------- components: Distutils messages: 339520 nosy: danyeaw, dstufft, eric.araujo priority: normal severity: normal status: open title: Distutils VC 6.0 Errors When Using mingw-w64 GCC type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 21:06:03 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 06 Apr 2019 01:06:03 +0000 Subject: [New-bugs-announce] [issue36540] PEP 570: Python Positional-Only Parameters Message-ID: <1554512763.87.0.931597726758.issue36540@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : This issue will serve to track development and PRs for the implementation of PEP 570: Python Positional-Only Parameters. ---------- assignee: pablogsal components: Interpreter Core messages: 339521 nosy: pablogsal priority: normal severity: normal status: open title: PEP 570: Python Positional-Only Parameters versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 5 21:41:20 2019 From: report at bugs.python.org (Tim Hatch) Date: Sat, 06 Apr 2019 01:41:20 +0000 Subject: [New-bugs-announce] [issue36541] Make lib2to3 grammar more closely match Python Message-ID: <1554514880.92.0.277848417721.issue36541@roundup.psfhosted.org> New submission from Tim Hatch : The grammar in lib2to3 is out of date and can't parse `:=` nor `f(**not x)` from running on real code. I've done a cursory `diff -uw Grammar/Grammar Lib/lib2to3/grammar.txt`, and would like to fix lib2to3 so we can merge into both fissix and blib2to3, to avoid further divergence of the forks. I'm unsure if I need a separate bug per pull request, but need at least one to get started. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 339522 nosy: georg.brandl, lukasz.langa, thatch priority: normal severity: normal status: open title: Make lib2to3 grammar more closely match Python type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 04:33:41 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 06 Apr 2019 08:33:41 +0000 Subject: [New-bugs-announce] [issue36542] Allow to overwrite the signature for Python functions Message-ID: <1554539621.71.0.0901781243935.issue36542@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently __text_signature__ can be used for specifying signature of functions implemented in C. It is ignored for functions implemented in Python. 
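As a hypothetical illustration of the kind of function this targets (not code from the PR): a pure-Python function that emulates positional-only parameters with the *args hack currently exposes only that unhelpful signature, and setting __text_signature__ on it changes nothing today:

import inspect

def mydivmod(*args):
    """mydivmod(x, y, /) - emulate positional-only parameters via *args."""
    x, y = args
    return (x // y, x % y)

print(inspect.signature(mydivmod))         # prints (*args)
mydivmod.__text_signature__ = "(x, y, /)"
print(inspect.signature(mydivmod))         # still (*args): the attribute is ignored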
The proposed PR allows overriding the real signature of Python functions by setting the __text_signature__ attribute. This is needed to restore useful signatures in functions that use the *args hack to implement positional-only parameters. See the discussion for PR 12637. ---------- components: Library (Lib) messages: 339530 nosy: gvanrossum, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Allow to overwrite the signature for Python functions type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 12:06:22 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 06 Apr 2019 16:06:22 +0000 Subject: [New-bugs-announce] [issue36543] Remove old-deprecated ElementTree features (part 2) Message-ID: <1554566782.39.0.176680363975.issue36543@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR removes old-deprecated ElementTree features. * Methods Element.getchildren(), Element.getiterator() and ElementTree.getiterator(), deprecated in 2.7 and 3.2. They were deprecated in the documentation only, and started to emit a warning in 3.8. Use list(elem) or iteration instead of getchildren(), and the iter() methods instead of getiterator(). * The xml.etree.cElementTree module, deprecated in 3.3. It was deprecated in the documentation only, because adding a runtime warning would cause more harm than removing it, given the common idiom of using it since Python 2: try: import xml.etree.cElementTree as ET except ImportError: import xml.etree.ElementTree as ET TODO: Add a What's New entry after the start of developing 3.9. ---------- assignee: serhiy.storchaka components: XML messages: 339533 nosy: eli.bendersky, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Remove old-deprecated ElementTree features (part 2) type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 12:18:21 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 06 Apr 2019 16:18:21 +0000 Subject: [New-bugs-announce] [issue36544] cannot import hashlib when openssl is missing Message-ID: <1554567501.08.0.227124733254.issue36544@roundup.psfhosted.org> New submission from Xavier de Gaye : Python is built natively in a docker container (based on ubuntu bionic) that lacks openssl. pydev at 979e9e009b08:~/build/python-native$ ./python Python 3.8.0a3+ (heads/master-dirty:d6bf6f2, Apr 6 2019, 14:43:30) [GCC 7.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import hashlib ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha256 ERROR:root:code for hash sha384 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha384 ERROR:root:code for hash sha512 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha512 ERROR:root:code for hash blake2b was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type blake2b ERROR:root:code for hash blake2s was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type blake2s ERROR:root:code for hash sha3_224 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha3_224 ERROR:root:code for hash sha3_256 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha3_256 ERROR:root:code for hash sha3_384 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha3_384 ERROR:root:code for hash sha3_512 was not found. 
Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha3_512 ERROR:root:code for hash shake_128 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type shake_128 ERROR:root:code for hash shake_256 was not found. Traceback (most recent call last): File "/home/pydev/cpython/Lib/hashlib.py", line 244, in globals()[__func_name] = __get_hash(__func_name) File "/home/pydev/cpython/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type shake_256 >>> ---------- components: Build messages: 339536 nosy: vstinner, xdegaye priority: normal severity: normal stage: needs patch status: open title: cannot import hashlib when openssl is missing type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 14:04:09 2019 From: report at bugs.python.org (Philip Deegan) Date: Sat, 06 Apr 2019 18:04:09 +0000 Subject: [New-bugs-announce] [issue36545] Python 3.5 OOM during test_socket on make Message-ID: <1554573849.92.0.0783609732408.issue36545@roundup.psfhosted.org> New submission from Philip Deegan : Building Python 3.5.3 or 3.5.6 on my Kernel 5.0.2 Debian 9 install has runaway memory usage during "test_socket" while running make after ./configure CFLAGS="-g3 -O3 -march=native -fPIC -I/usr/include/openssl" CXXFLAGS="-g3 -O3 -march=native -fPIC -I/usr/include/openssl" --enable-shared LDFLAGS="-L/usr/lib -L/usr/lib/x86_64-linux-gnu -Wl,-rpath=/usr/lib/x86_64-linux-gnu" --prefix=$PWD --with-valgrind --enable-optimizations --with-ensurepip=install ---------- messages: 339541 nosy: dekken priority: normal severity: normal status: open title: Python 3.5 OOM during test_socket on make _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 17:22:16 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 06 Apr 2019 21:22:16 +0000 Subject: [New-bugs-announce] [issue36546] Add quantiles() to the statistics module Message-ID: <1554585736.61.0.903931747573.issue36546@roundup.psfhosted.org> New submission from Raymond Hettinger : It is a common and useful data analysis technique to examine quartiles, deciles, and percentiles. It is especially helpful for comparing distinct datasets (heights of boys versus heights of girls) or for comparing against a reference distribution (empirical data versus a normal distribution for example). 
--- sample session --- >>> from statistics import NormalDist, quantiles >>> from pylab import plot # SAT exam scores >>> sat = NormalDist(1060, 195) >>> list(map(round, quantiles(sat, n=4))) # quartiles [928, 1060, 1192] >>> list(map(round, quantiles(sat, n=10))) # deciles [810, 896, 958, 1011, 1060, 1109, 1162, 1224, 1310] # Summarize a dataset >>> data = [110, 96, 155, 87, 98, 82, 156, 88, 172, 102, 91, 184, 105, 114, 104] >>> quantiles(data, n=2) # median [104.0] >>> quantiles(data, n=4) # quartiles [91.0, 104.0, 155.0] >>> quantiles(data, n=10) # deciles [85.0, 88.6, 95.0, 99.6, 104.0, 108.0, 122.2, 155.8, 176.8] # Assess when data is normally distributed by comparing quantiles >>> reference_dist = NormalDist.from_samples(data) >>> quantiles(reference_dist, n=4) [93.81594518619364, 116.26666666666667, 138.71738814713967] # Make a QQ plot to visualize how well the data matches a normal distribution # plot(quantiles(data, n=7), quantiles(reference_dist, n=7)) ---------- components: Library (Lib) messages: 339544 nosy: rhettinger priority: normal severity: normal status: open title: Add quantiles() to the statistics module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 6 21:57:01 2019 From: report at bugs.python.org (Emmanuel Arias) Date: Sun, 07 Apr 2019 01:57:01 +0000 Subject: [New-bugs-announce] [issue36547] bedevere is not working Message-ID: <1554602221.86.0.735080241242.issue36547@roundup.psfhosted.org> New submission from Emmanuel Arias : Hi! I don't know if this is the correct place for this, but bedevere has not been working since a day ago. ---------- messages: 339552 nosy: eamanu priority: normal severity: normal status: open title: bedevere is not working _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 7 06:28:01 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 07 Apr 2019 10:28:01 +0000 Subject: [New-bugs-announce] [issue36548] Make the repr of re flags more readable Message-ID: <1554632881.49.0.427945168441.issue36548@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently the repr of re flags contains the name of the private class and it is not an evaluable expression: >>> re.I >>> re.I|re.S|re.X The repr of inverted flags is even more verbose: >>> ~(re.I|re.S|re.X) The result of str() suffers from the same issues.
>>> print(re.I) RegexFlag.IGNORECASE >>> print(re.I|re.S|re.X) RegexFlag.VERBOSE|DOTALL|IGNORECASE >>> print(~re.I) RegexFlag.ASCII|DEBUG|VERBOSE|UNICODE|DOTALL|MULTILINE|LOCALE|TEMPLATE >>> print(~(re.I|re.S|re.X)) RegexFlag.ASCII|DEBUG|UNICODE|MULTILINE|LOCALE|TEMPLATE If the value contains unrecognized flags, it looks even more weird, and this information is not shown for inverted flags: >>> re.I|re.S|re.X|(1<<10) >>> ~(re.I|re.S|re.X|(1<<10)) >>> print(re.I|re.S|re.X|(1<<10)) RegexFlag.1024|VERBOSE|DOTALL|IGNORECASE >>> print(~(re.I|re.S|re.X|(1<<10))) RegexFlag.ASCII|DEBUG|UNICODE|MULTILINE|LOCALE|TEMPLATE This repr is also not consistent with the representation of flags in the repr of the compiled pattern: >>> re.compile('x', re.I|re.S|re.X) re.compile('x', re.IGNORECASE|re.DOTALL|re.VERBOSE) The proposed PR makes the repr the same as for flags in the repr of the compiled pattern and represents inverted flags as the result of inversion: >>> re.I re.IGNORECASE >>> re.I|re.S|re.X re.IGNORECASE|re.DOTALL|re.VERBOSE >>> ~re.I ~re.IGNORECASE >>> ~(re.I|re.S|re.X) ~(re.IGNORECASE|re.DOTALL|re.VERBOSE) >>> re.I|re.S|re.X|(1<<10) re.IGNORECASE|re.DOTALL|re.VERBOSE|0x400 >>> ~(re.I|re.S|re.X|(1<<10)) ~(re.IGNORECASE|re.DOTALL|re.VERBOSE|0x400) __str__ is set to object.__str__, so that str() will return the same as repr(). ---------- components: Library (Lib), Regular Expressions messages: 339567 nosy: ethan.furman, ezio.melotti, mrabarnett, serhiy.storchaka priority: normal severity: normal status: open title: Make the repr of re flags more readable type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 7 06:40:51 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Sun, 07 Apr 2019 10:40:51 +0000 Subject: [New-bugs-announce] [issue36549] str.capitalize should titlecase the first character not uppercase Message-ID: <1554633651.74.0.204071668882.issue36549@roundup.psfhosted.org> New submission from Steven D'Aprano : str.capitalize appears to uppercase the first character of the string, which is okay for ASCII but not for non-English letters. For example, the letter NJ in Croatian appears as Nj at the start of words when the first character is capitalized: ?ema?ka ('Germany'), not ?ema?ka. (In ASCII, that's Njemacka not NJemacka.)
https://en.wikipedia.org/wiki/Gaj's_Latin_alphabet#Digraphs But using any of: U+01CA LATIN CAPITAL LETTER NJ U+01CB LATIN CAPITAL LETTER N WITH SMALL LETTER J U+01CC LATIN SMALL LETTER NJ we get the wrong result with capitalize: py> '?ema?ka'.capitalize() '?ema?ka' py> '?ema?ka'.capitalize() '?ema?ka' py> '?ema?ka'.capitalize() '?ema?ka' I believe that the correct behaviour is to titlecase the first code point and lowercase the rest, which is what the Apache library here does: https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/StringUtils.html#capitalize-java.lang.String- ---------- messages: 339568 nosy: steven.daprano priority: normal severity: normal status: open title: str.capitalize should titlecase the first character not uppercase _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 7 15:15:47 2019 From: report at bugs.python.org (daniel hahler) Date: Sun, 07 Apr 2019 19:15:47 +0000 Subject: [New-bugs-announce] [issue36550] Avoid creating AttributeError exceptions in the debugger Message-ID: <1554664547.38.0.626848411176.issue36550@roundup.psfhosted.org> New submission from daniel hahler : pdb should try (hard) to avoid creating unnecessary exceptions, e.g. ``AttributeError`` when looking up commands, since this will show up in exception chains then (as "'Pdb' object has no attribute 'do_foo'"). See https://github.com/python/cpython/pull/4666 for an older PR in this regard. My use case is to display the traceback for exceptions caused within/via Pdb.default(), to see more context when running code from pdb's prompt directly, where currently it would only display the exception itself. ---------- components: Library (Lib) messages: 339583 nosy: blueyed priority: normal severity: normal status: open title: Avoid creating AttributeError exceptions in the debugger versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 7 21:28:21 2019 From: report at bugs.python.org (anthony shaw) Date: Mon, 08 Apr 2019 01:28:21 +0000 Subject: [New-bugs-announce] [issue36551] Optimize list comprehensions with preallocate size and protect against overflow Message-ID: <1554686901.99.0.0374079193358.issue36551@roundup.psfhosted.org> New submission from anthony shaw : List comprehensions currently create a series of opcodes inside a code object, the first of which is BUILD_LIST with an oparg of 0, effectively creating a zero-length list with a preallocated size of 0. If you're doing a simple list comprehension on an iterator, e.g. def foo(): a = iterable return [x for x in a] Disassembly of at 0x109db2c40, file "", line 3>: 3 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 8 (to 14) 6 STORE_FAST 1 (x) 8 LOAD_FAST 1 (x) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 4 >> 14 RETURN_VALUE The list comprehension will do a list_resize on the 4, 8, 16, 25, 35, 46, 58, 72, 88th iterations, etc. This PR preallocates the list created in a list comprehension to the length of the iterator using PyObject_LengthHint(). It uses a new BUILD_LIST_PREALLOC opcode which builds a list with the allocated size of PyObject_LengthHint(co_varnames[oparg]). 
[x for x in iterable] compiles to: Disassembly of at 0x109db2c40, file "", line 3>: 3 0 BUILD_LIST_PREALLOC 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 8 (to 14) 6 STORE_FAST 1 (x) 8 LOAD_FAST 1 (x) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 4 >> 14 RETURN_VALUE If the comprehension has ifs, then it will use the existing BUILD_LIST opcode. Testing using a range length of 10000: ./python.exe -m timeit "x=list(range(10000)); [y for y in x]" This gives 392us on the current 3.8 branch and 372us with this change (about 8-10% faster); the longer the iterable, the bigger the impact. This change also catches the issue that a very large iterator, like a range object: [a for a in range(2**256)] would cause the current 3.8 interpreter to consume all memory and crash, because there is no check against PY_SSIZE_MAX currently. With this change (assuming there is no if inside the comprehension), this is now caught and raised as an OverflowError: >>> [a for a in range(2**256)] Traceback (most recent call last): File "", line 1, in File "", line 1, in OverflowError: Python int too large to convert to C ssize_t ---------- components: Interpreter Core messages: 339586 nosy: anthony shaw, ncoghlan priority: normal severity: normal status: open title: Optimize list comprehensions with preallocate size and protect against overflow versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 00:38:16 2019 From: report at bugs.python.org (anthony shaw) Date: Mon, 08 Apr 2019 04:38:16 +0000 Subject: [New-bugs-announce] [issue36552] Replace OverflowError with ValueError when calculating length of range objects > PY_SIZE_MAX Message-ID: <1554698296.6.0.519379747153.issue36552@roundup.psfhosted.org> New submission from anthony shaw : When calculating the length of range() objects that have an r->length > PY_SIZE_MAX, the underlying PyLong_AsSsize_t() function will raise an OverflowError: >>> a = list(range(2**256)) Traceback (most recent call last): File "", line 1, in OverflowError: Python int too large to convert to C ssize_t >>> a = range(2**256) >>> len(a) Traceback (most recent call last): File "", line 1, in OverflowError: Python int too large to convert to C ssize_t This is expected behaviour, but to the average user, who won't know what ssize_t is, or what this has to do with Python int, the message is confusing and OverflowError is the symptom but not the cause. The cause is that the length sent to range() was a value too large to calculate. This patch changes OverflowError to ValueError to hint to the user that the value sent to the range object constructor is too large.
>>> a = list(range(2**256)) Traceback (most recent call last): File "", line 1, in ValueError: Range object too large to calculate length (Overflow Error) >>> a = range(2**256) >>> len(a) Traceback (most recent call last): File "", line 1, in ValueError: Range object too large to calculate length (Overflow Error) ---------- components: Library (Lib) messages: 339589 nosy: anthony shaw priority: normal severity: normal status: open title: Replace OverflowError with ValueError when calculating length of range objects > PY_SIZE_MAX versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 04:21:55 2019 From: report at bugs.python.org (Sylvain Marie) Date: Mon, 08 Apr 2019 08:21:55 +0000 Subject: [New-bugs-announce] [issue36553] inspect.is_decorator_call(frame) Message-ID: <1554711715.55.0.758701327001.issue36553@roundup.psfhosted.org> New submission from Sylvain Marie : Python decorators are frequently proposed by libraries as an easy way to add functionality to user-written functions: see `attrs`, `pytest`, `click`, `marshmallow`, etc. A common pattern in most such libraries is that they do not want to provide users with two different symbols for the same function. So they end up implementing decorators that can be used both as decorators (no arguments, no parentheses) AND decorator factories (arguments in parentheses). This is convenient and intuitive for users. Unfortunately this is not trivial to implement, because the Python language does not make any difference between a no-parentheses decorator call and a with-parentheses decorator factory call. So these libraries have to rely on "tricks", the most common one being to check for the existence of a non-default first parameter that is a callable (a generic sketch of this trick is shown below). Examples: https://github.com/python-attrs/attrs/blob/c2a9dd8e113a0dc72f86490e330f25bc0111971a/src/attr/_make.py#L940 https://github.com/pytest-dev/pytest/blob/13a9d876f74f17907ad04b13132cbd4aa4ad5842/src/_pytest/fixtures.py#L1041 https://github.com/marshmallow-code/marshmallow/blob/ec51dff98999f2189a255fb8bbc22e549e3cc673/src/marshmallow/decorators.py#L161 Implementing these tricks is a bit ugly, but more importantly it is a waste of development time, because when one changes a decorator's signature, the trick possibly has to be changed too (order of arguments, default values, etc). Therefore it is quite a brake on agile development in the first phase of a project, where the API is not very stable. I regrouped all known and possible tricks in a library https://github.com/smarie/python-decopatch/ to provide a handy way to solve this problem. But it is still "a bunch of tricks". This library, or the manual implementations such as the examples above, could be much faster/more efficient if there were at least a way to determine if a frame is a call to `@`. So this is a request to at least have an `inspect.is_decorator_call(frame)` feature in the stdlib. That function would return `True` if the frame is a decorator call using `@`. Note that a more convenient way to solve this problem is also proposed in https://smarie.github.io/python-decopatch/pep_proposal/#2-preserving-backwards-compatibility : it would be to offer a `@decorator_factory` helper in the stdlib.
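As a generic illustration of the trick mentioned above (a minimal sketch, not taken from any of the libraries listed):

import functools

def _wrap(func, option):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.option = option  # stand-in for whatever extra behaviour the decorator adds
    return wrapper

def my_decorator(func=None, option=False):
    # The trick: if the first (normally defaulted) parameter is a callable,
    # assume usage as @my_decorator with no parentheses; otherwise assume a
    # factory call, @my_decorator(...), and return the real decorator.
    if callable(func):
        return _wrap(func, False)
    def deco(real_func):
        return _wrap(real_func, option)
    return deco

@my_decorator
def f():
    pass

@my_decorator(option=True)
def g():
    pass

The fragility is exactly what is described above: as soon as the decorator's first real argument may itself legitimately be a callable, or the signature is reordered, the heuristic breaks and has to be rewritten.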
But first feedback from the python-ideas mailing list showed that this was maybe too disruptive :) ---------- components: Library (Lib) messages: 339599 nosy: smarie priority: normal severity: normal status: open title: inspect.is_decorator_call(frame) type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 06:24:31 2019 From: report at bugs.python.org (Dieter Maurer) Date: Mon, 08 Apr 2019 10:24:31 +0000 Subject: [New-bugs-announce] [issue36554] unittest.TestCase: "subTest" cannot be used together with "debug" Message-ID: <1554719071.94.0.114334441524.issue36554@roundup.psfhosted.org> New submission from Dieter Maurer : "subTest" accesses "self._outcome" which is "None" when the test is performed via "debug". ---------- components: Library (Lib) messages: 339607 nosy: dmaurer priority: normal severity: normal status: open title: unittest.TestCase: "subTest" cannot be used together with "debug" type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 06:50:12 2019 From: report at bugs.python.org (Crusader Ky) Date: Mon, 08 Apr 2019 10:50:12 +0000 Subject: [New-bugs-announce] [issue36555] PEP484 @overload vs. str/bytes Message-ID: <1554720612.88.0.913385974113.issue36555@roundup.psfhosted.org> New submission from Crusader Ky : An exceedingly common pattern in many Python libraries is for a function to accept either a string or a list of strings, and change the function output accordingly. This however does not play nice with @typing.overload, as a str variable is also an Iterable[str] that yields individual characters; a bytes variable is also an Iterable[bytes]. The example below confuses tools like mypy: @overload def f(x: str) -> int: ... @overload def f(x: Iterable[str]) -> List[int]: ... def f(x): if isinstance(x, str): return len(x) return [len(i) for i in x] mypy output: error: Overloaded function signatures 1 and 2 overlap with incompatible return types The proposed solution is to modify PEP484 to specify that, in case of ambiguity, whatever overloaded typing is defined first wins. This would be coherent with the behaviour of @functools.singledispatch. ---------- components: Library (Lib) messages: 339610 nosy: Crusader Ky priority: normal severity: normal status: open title: PEP484 @overload vs. str/bytes type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 06:57:49 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Mon, 08 Apr 2019 10:57:49 +0000 Subject: [New-bugs-announce] [issue36556] Trashcan causing duplicated __del__ calls Message-ID: <1554721069.78.0.635003677155.issue36556@roundup.psfhosted.org> New submission from Jeroen Demeyer : NOTE: because of PEP 442, this issue is specific to Python 2. This bug was discovered while adding testcases for bpo-35983 to the Python 2.7 backport. There is a nasty interaction between the trashcan and __del__: if you're very close to the trashcan limit and you're calling __del__, then objects that should have been deallocated in __del__ (in particular, an object involving self) might instead end up in the trashcan. This way, temporary references to self are not cleaned up and self might be resurrected when it shouldn't be. This in turn causes __del__ to be called multiple times.
Testcase: class ObjectCounter(object): count = 0 def __init__(self): type(self).count += 1 def __del__(self): L = [self] type(self).count -= 1 L = None for i in range(60): L = [L, ObjectCounter()] del L print(ObjectCounter.count) This is expected to print 0 but in fact it prints -1. There are various ways of fixing this, with varying effectiveness. An obvious solution is bypassing the trashcan completely in __del__. This will deallocate objects correctly but it will cause a stack overflow (on the C level, so crashing Python) if __del__ is called recursively with deep recursion (this is what the trashcan is supposed to prevent). A compromise solution would be lowering the trashcan limit for heap types from 50 to 40: this gives __del__ at least 10 stack frames to work with. Assuming that __del__ code is relatively simple and won't create objects that are too deeply nested, this should work correctly. ---------- components: Interpreter Core messages: 339611 nosy: eric.snow, jdemeyer, matrixise, pitrou, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Trashcan causing duplicated __del__ calls versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 08:20:23 2019 From: report at bugs.python.org (mattcher_h) Date: Mon, 08 Apr 2019 12:20:23 +0000 Subject: [New-bugs-announce] [issue36557] Python (Launcher)3.7.3 CMDLine install/uninstall Message-ID: <1554726023.31.0.350138244198.issue36557@roundup.psfhosted.org> New submission from mattcher_h : Hi, I'm trying to generate an automated install and uninstall of Python. For this I normally use cmdlines, but I got some issues. If I try to uninstall via my automated version, I get the problem that it doesn't finish. When I do this at the PC itself, it works fine with PathToPython.exe /uninstall, but I have to "click" close at the end of the setup myself. So I think my problem with the automated version is that it doesn't "click" close, because the uninstall itself seems to work fine. Are there some more parameters I could give? Another issue is the Python Launcher. Is there a cmdline to uninstall it by itself? ciao ---------- components: Installation messages: 339629 nosy: mattcher_h priority: normal severity: normal status: open title: Python (Launcher)3.7.3 CMDLine install/uninstall versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 08:54:17 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Apr 2019 12:54:17 +0000 Subject: [New-bugs-announce] [issue36558] Change time.mktime() return type from float to int? Message-ID: <1554728057.82.0.282756231506.issue36558@roundup.psfhosted.org> New submission from STINNER Victor : time.mktime() returns a floating point number: >>> type(time.mktime(time.localtime())) The documentation says: "It returns a floating point number, for compatibility with :func:`.time`." time.time() returns a float because it has sub-second resolution, but mktime() returns an integer number of seconds. Would it make sense to change mktime() return type from float to int? I would like to change mktime() return type to make the function more consistent: inputs are integers, it sounds wrong to me to return a float. The result should be an integer as well. How much code would it break? I guess that the main impact is unit tests relying on the exact repr(time.mktime(t)) value.
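For instance, the kind of test that would break is a hypothetical assertion like this, which relies on the current float repr:

import time

t = time.localtime(0)
# mktime() currently returns a whole number of seconds as a float, so its
# repr always ends with ".0"; with an int return type this assertion fails.
assert repr(time.mktime(t)).endswith(".0")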
But it's easy to fix the tests: use int(time.mktime(t)) or "%.0f" % time.mktime(t) to never get ".0", or use float(time.mktime(t)) to explicitly cast to a float (which would be a bad but quick fix). Note: I wrote and implemented PEP 564 to avoid any precision loss. mktime() will not start losing precision before year 285,422,891 (which is quite far in the future ;-)). ---------- components: Library (Lib) messages: 339632 nosy: belopolsky, p-ganssle, vstinner priority: normal severity: normal status: open title: Change time.mktime() return type from float to int? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 09:11:01 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Apr 2019 13:11:01 +0000 Subject: [New-bugs-announce] [issue36559] "import random" should import hashlib on demand (nor load OpenSSL) Message-ID: <1554729061.73.0.692414846311.issue36559@roundup.psfhosted.org> New submission from STINNER Victor : Currently, when the random module is imported, the hashlib module is always imported, which loads the OpenSSL library, whereas hashlib is only needed when a Random() instance is created with a string seed. For example, "rnd = random.Random()" and "rnd = random.Random(12345)" don't need hashlib. Example on Linux: $ python3 Python 3.7.2 (default, Mar 21 2019, 10:09:12) >>> import os, sys >>> 'hashlib' in sys.modules False >>> res=os.system(f"grep ssl /proc/{os.getpid()}/maps") >>> import random >>> 'hashlib' in sys.modules True >>> res=os.system(f"grep ssl /proc/{os.getpid()}/maps") 7f463ec38000-7f463ec55000 r--p 00000000 00:2a 5791335 /usr/lib64/libssl.so.1.1.1b 7f463ec55000-7f463eca5000 r-xp 0001d000 00:2a 5791335 /usr/lib64/libssl.so.1.1.1b ... The attached PR only imports hashlib on demand. Note: I noticed this issue while working on adding OpenSSL 1.1.1 support to Python 3.4 :-) ---------- components: Library (Lib) messages: 339637 nosy: vstinner priority: normal severity: normal status: open title: "import random" should import hashlib on demand (nor load OpenSSL) versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 10:22:16 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Apr 2019 14:22:16 +0000 Subject: [New-bugs-announce] [issue36560] test_functools leaks randomly 1 memory block Message-ID: <1554733336.52.0.331530875309.issue36560@roundup.psfhosted.org> New submission from STINNER Victor : Sometimes, each run of test_functools leaks exactly 1 memory block, even when the whole test is "re-run in verbose mode". Sometimes, it doesn't leak. https://buildbot.python.org/all/#/builders/80/builds/550 test_functools leaked [1, 1, 1] memory blocks, sum=3 Re-running test 'test_functools' in verbose mode test_functools leaked [1, 1, 1] memory blocks, sum=3 Maybe the problem comes from Example on Linux: $ ./python -m test -F -r -j1 -R 3:3 test_functools Using random seed 3891892 Run tests in parallel using 1 child processes 0:00:01 load avg: 2.38 [ 1] test_functools passed beginning 6 repetitions 123456 ...... (...) 0:00:06 load avg: 2.27 [ 6] test_functools passed beginning 6 repetitions 123456 ...... 0:00:07 load avg: 2.27 [ 7/1] test_functools failed beginning 6 repetitions 123456 ...... test_functools leaked [1, 2, 1] memory blocks, sum=4 0:00:08 load avg: 2.27 [ 8/1] test_functools passed beginning 6 repetitions 123456 ......
== Tests result: FAILURE == 7 tests OK. 1 test failed: test_functools Total duration: 8 sec 333 ms Tests result: FAILURE ---------- components: Tests messages: 339643 nosy: vstinner priority: normal severity: normal status: open title: test_functools leaks randomly 1 memory block versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 10:59:10 2019 From: report at bugs.python.org (JP Zhang) Date: Mon, 08 Apr 2019 14:59:10 +0000 Subject: [New-bugs-announce] [issue36561] Python argparse doesn't work in the presence of my custom module Message-ID: <1554735550.49.0.179926102868.issue36561@roundup.psfhosted.org> New submission from JP Zhang : Github repo for reproducing: https://github.com/zjplab/gc-mc-pytorch/tree/bug, test.py. In the presence of my custom data_loader, it errors with an unrecognized argument. But without importing it (commenting it out), everything is just fine. ---------- components: Library (Lib) messages: 339646 nosy: JP Zhang priority: normal severity: normal status: open title: Python argparse doesn't work in the presence of my custom module versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 11:37:32 2019 From: report at bugs.python.org (Joao Paulo) Date: Mon, 08 Apr 2019 15:37:32 +0000 Subject: [New-bugs-announce] [issue36562] Can't call a method from a module built in Python C API Message-ID: <1554737852.45.0.259157063198.issue36562@roundup.psfhosted.org> New submission from Joao Paulo : I'm trying to build a Python module in C++ using the Python C API; the code is attached. The problem is when I run my_module.runTester() in PyRun_SimpleString. I get the following error message: SystemError: Bad call flags in PyCFunction_Call. METH_OLDARGS is no longer supported! I'm not using METH_OLDARGS. As you can see, I'm using METH_VARARGS | METH_KEYWORDS. What could I be missing here? I'm using Windows 7 x64.
---------- files: module.cpp messages: 339653 nosy: jjppof priority: normal severity: normal status: open title: Can't call a method from a module built in Python C API type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48247/module.cpp _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 12:31:36 2019 From: report at bugs.python.org (daniel hahler) Date: Mon, 08 Apr 2019 16:31:36 +0000 Subject: [New-bugs-announce] [issue36563] pdbrc home twice Message-ID: <1554741096.03.0.623055077477.issue36563@roundup.psfhosted.org> Change by daniel hahler : ---------- nosy: blueyed priority: normal severity: normal status: open title: pdbrc home twice _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 12:45:34 2019 From: report at bugs.python.org (Paul Ganssle) Date: Mon, 08 Apr 2019 16:45:34 +0000 Subject: [New-bugs-announce] [issue36564] Infinite loop with short maximum line lengths in EmailPolicy Message-ID: <1554741934.49.0.490765940263.issue36564@roundup.psfhosted.org> New submission from Paul Ganssle : When reviewing PR 12020 fixing an infinite loop in the e-mail module, I noticed that a *different* infinite loop is documented with a "# XXX" comment on line 2724: https://github.com/python/cpython/blob/58721a903074d28151d008d8990c98fc31d1e798/Lib/email/_header_value_parser.py#L2724 This is triggered when the policy's `max_line_length` is set to be shorter than minimum line length required by the "RFC 2047 chrome". It can be reproduced with: from email.policy import default policy = default.clone(max_line_length=7) # max_line_length = 78 policy.fold("Subject", "12345678") I could not find an entry on the tracker for this bug, but it is documented in the source code itself, so maybe I just didn't try hard enough. Related but distinct bugs: #33529, #33524 I will submit a patch to fix this. ---------- messages: 339660 nosy: barry, p-ganssle, r.david.murray priority: normal severity: normal status: open title: Infinite loop with short maximum line lengths in EmailPolicy versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 12:56:30 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Apr 2019 16:56:30 +0000 Subject: [New-bugs-announce] [issue36565] Reference hunting (python3 -m test -R 3:3) doesn't work if the _abc module is missing Message-ID: <1554742590.4.0.251583762845.issue36565@roundup.psfhosted.org> New submission from STINNER Victor : Disable the compilation of the built-in _abc module. 
For example, on Python 3.7 apply the following patch: diff --git a/Modules/Setup.dist b/Modules/Setup.dist index 8cc6bf0540..4015527b32 100644 --- a/Modules/Setup.dist +++ b/Modules/Setup.dist @@ -114,7 +114,7 @@ _weakref _weakref.c # weak references _functools -DPy_BUILD_CORE _functoolsmodule.c # Tools for working with functions and callable objects _operator _operator.c # operator.add() and similar goodies _collections _collectionsmodule.c # Container types -_abc _abc.c # Abstract base classes +#_abc _abc.c # Abstract base classes itertools itertoolsmodule.c # Functions creating iterators for efficient looping atexit atexitmodule.c # Register functions to be run at interpreter-shutdown _signal -DPy_BUILD_CORE signalmodule.c @@ -363,7 +363,8 @@ xxsubtype xxsubtype.c # Uncommenting the following line tells makesetup that all following modules # are not built (see above for more detail). # -#*disabled* +*disabled* # #_sqlite3 _tkinter _curses pyexpat #_codecs_jp _codecs_kr _codecs_tw unicodedata +_abc Recompile Python, check: $ ./python -c 'import _abc' ModuleNotFoundError: No module named '_abc' Run: $ ./python -u -m test -R 3:3 test_functools -m test_mro_conflicts Error without _abc: test test_functools crashed -- Traceback (most recent call last): File "/home/vstinner/prog/python/3.7/Lib/test/libregrtest/runtest.py", line 180, in runtest_inner refleak = dash_R(the_module, test, test_runner, ns.huntrleaks) File "/home/vstinner/prog/python/3.7/Lib/test/libregrtest/refleak.py", line 71, in dash_R abcs) File "/home/vstinner/prog/python/3.7/Lib/test/libregrtest/refleak.py", line 148, in dash_R_cleanup obj.register(ref()) File "/home/vstinner/prog/python/3.7/Lib/_py_abc.py", line 60, in register raise TypeError("Can only register classes") TypeError: Can only register classes With built-in _abc module, regrtest is fine. The problem comes from pure-Python reimplementation of abc._get_dump() in Lib/test/libregrtest/refleak.py: def _get_dump(cls): # For legacy Python version return (cls._abc_registry, cls._abc_cache, cls._abc_negative_cache, cls._abc_negative_cache_version) The first item tuple must be a set of weak references. Currently, it's a weak set of strong references. Attached PR fix the issue. ---------- components: Library (Lib) messages: 339661 nosy: vstinner priority: normal severity: normal status: open title: Reference hunting (python3 -m test -R 3:3) doesn't work if the _abc module is missing versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 16:05:07 2019 From: report at bugs.python.org (Steven Vascellaro) Date: Mon, 08 Apr 2019 20:05:07 +0000 Subject: [New-bugs-announce] [issue36566] Support password masking in getpass.getpass() Message-ID: <1554753907.34.0.232598875583.issue36566@roundup.psfhosted.org> New submission from Steven Vascellaro : Support password masking in getpass.getpass() Currently, getpass.getpass() hides all user input when entering a password. This can throw off non-Unix users who are used to passwords being masked with asterisks *. This has led some users to write their own libraries for this functionality. 
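For context, the kind of helper users end up hand-rolling typically looks something like this (a rough Windows-only sketch using msvcrt, purely illustrative and not the proposed stdlib implementation):

import msvcrt
import sys

def masked_input(prompt='Password: ', mask='*'):
    sys.stdout.write(prompt)
    sys.stdout.flush()
    chars = []
    while True:
        ch = msvcrt.getwch()        # read one character without echoing it
        if ch in ('\r', '\n'):      # Enter finishes the input
            sys.stdout.write('\n')
            return ''.join(chars)
        if ch == '\x08':            # Backspace removes the last character
            if chars:
                chars.pop()
                sys.stdout.write('\b \b')
        else:
            chars.append(ch)
            sys.stdout.write(mask)
        sys.stdout.flush()

A cross-platform version additionally needs a termios-based branch on POSIX, which is exactly the kind of detail a stdlib option could hide.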
Proposal: - Add an optional argument to `getpass.getpass()` for a character to mask user input Usage Example: > import getpass > password = getpass.getpass(mask='*') Password: ********** > password = getpass.getpass() Password: ---------- components: Library (Lib) messages: 339671 nosy: stevoisiak priority: normal severity: normal status: open title: Support password masking in getpass.getpass() type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 17:40:47 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Mon, 08 Apr 2019 21:40:47 +0000 Subject: [New-bugs-announce] [issue36567] DOC: manpage directive doesn't create hyperlink Message-ID: <1554759647.95.0.266283511597.issue36567@roundup.psfhosted.org> New submission from Cheryl Sabella : The `manpage` directive in the docs is not creating a hyperlink to the Unix manual page. As of Sphinx 1.7, the `manpage` directive needs to have a `manpages_url` defined in the conf.py file. [1] http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage ---------- assignee: docs at python components: Documentation messages: 339676 nosy: cheryl.sabella, docs at python, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal stage: needs patch status: open title: DOC: manpage directive doesn't create hyperlink type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 8 22:17:39 2019 From: report at bugs.python.org (Carl Cerecke) Date: Tue, 09 Apr 2019 02:17:39 +0000 Subject: [New-bugs-announce] [issue36568] Typo in socket.CAN_RAW_FD_FRAMES library documentation Message-ID: <1554776259.35.0.774393301329.issue36568@roundup.psfhosted.org> New submission from Carl Cerecke : https://docs.python.org/3/library/socket.html#socket.CAN_RAW_FD_FRAMES The wording "...however, you one must accept..." doesn't make sense. I think the "you one" should be "your application", but I'm not sure. ---------- assignee: docs at python components: Documentation messages: 339691 nosy: Carl Cerecke, docs at python priority: normal severity: normal status: open title: Typo in socket.CAN_RAW_FD_FRAMES library documentation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 01:40:35 2019 From: report at bugs.python.org (Peter de Blanc) Date: Tue, 09 Apr 2019 05:40:35 +0000 Subject: [New-bugs-announce] [issue36569] @staticmethod seems to work with setUpClass, but docs say it shouldn't Message-ID: <1554788435.05.0.183130492369.issue36569@roundup.psfhosted.org> New submission from Peter de Blanc : According to unittest docs: https://docs.python.org/3.7/library/unittest.html#module-unittest `setUpClass is called with the class as the only argument and must be decorated as a classmethod()` and: `tearDownClass is called with the class as the only argument and must be decorated as a classmethod()` However, I was able to create a passing test case where `setUpClass` and `tearDownClass` are decorated with `@staticmethod` instead of `@classmethod`: I tested this with Python versions 3.6.4 and 3.7.1. Please update the documentation to indicate that `@staticmethod` is allowed here, or else indicate why it's bad. 
---------- components: Library (Lib) files: test_bar.py messages: 339700 nosy: Peter de Blanc, ezio.melotti, michael.foord, rbcollins priority: normal severity: normal status: open title: @staticmethod seems to work with setUpClass, but docs say it shouldn't type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48251/test_bar.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 03:57:09 2019 From: report at bugs.python.org (=?utf-8?b?RMSBdmlz?=) Date: Tue, 09 Apr 2019 07:57:09 +0000 Subject: [New-bugs-announce] [issue36570] ftplib timeouts for misconfigured server Message-ID: <1554796629.79.0.175383506692.issue36570@roundup.psfhosted.org> New submission from D?vis : It's not uncommon to encounter FTP servers which are misconfigured and return unroutable host IP (eg. internal IP) when using passive mode See: https://superuser.com/a/1195591 Most FTP clients such as FileZilla and WinSCP use a workaround when they encounter such servers and connect to user's specified host instead. > Command: PASV > Answer: 227 Entering Passive Mode (10,250,250,25,219,237). > Status: Server sent passive reply with unroutable address. Using server address instead. Currently Python's ftplib simply timeouts for these and doesn't work. ---------- messages: 339712 nosy: davispuh, giampaolo.rodola priority: normal severity: normal status: open title: ftplib timeouts for misconfigured server type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 04:29:14 2019 From: report at bugs.python.org (Marcin Niemira) Date: Tue, 09 Apr 2019 08:29:14 +0000 Subject: [New-bugs-announce] [issue36571] Lib/smtplib.py have some pep8 issues Message-ID: <1554798554.15.0.922719157535.issue36571@roundup.psfhosted.org> New submission from Marcin Niemira : pycodestyle (pep8) reports some issues on linting for Lib/smtplib.py I believe we can fix most of them and apply some improvements due to pep-572. PR on GH. Are contributions like this valuable? ---------- components: Library (Lib) messages: 339714 nosy: Marcin Niemira priority: normal severity: normal status: open title: Lib/smtplib.py have some pep8 issues versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 05:47:31 2019 From: report at bugs.python.org (Slim) Date: Tue, 09 Apr 2019 09:47:31 +0000 Subject: [New-bugs-announce] [issue36572] python-snappy install issue during Crossbar install with Python 3.7.3 (Windows x86 executable installer) Message-ID: <1554803251.97.0.1929107908.issue36572@roundup.psfhosted.org> New submission from Slim : In a Windows 2016 VM, when trying to install Crossbar in offline mode, we observed the following error: error Running setup.py install for python-snappy ... error ... 
snappy/snappymodule.cc(31): fatal error C1083: Cannot open include file: 'snappy-c.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.20.27508\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2 Here the steps to reproduce this issue: 0) In a Windows 10 machine (where we create our MSI): -> we download the corresponding python-3.7.3.exe (Windows x86 executable installer) -> we download pyWin32 & crossbar & pip in a specific folder: pip download -d "C:\Crossbar373" PyWin32==224 pip download -d "C:\Crossbar373" crossbar==19.3.5 pip download -d "C:\Crossbar373" pip==19.0.3 -> python-3.7.3.exe /quiet /layout "C:\Temp\Python373Layout" -> Our MSI use this folder as an input to install Python 3.7.3 in the target Windows 2016 VM in offline mode 1) In the target Windows 2016 VM (where we launch our MSI): -> Our MSI install Visual Studio 2019 from https://visualstudio.microsoft.com/downloads/ (default values) -> It also removes the MAX_PATH Limitation -> It copies all the files needed to install crossbar in the "C:\Crossbar373" folder -> Then it executes the following custom actions in this machine: a) python-3.7.3.exe /quiet InstallAllUsers=1 TargetDir="C:\Python373x86ExecInstaller" PrependPath=1 Include_pip=0 Include_launcher=0 Include_test=0 b) python -m pip install --upgrade pip==19.0.3 c) pip.exe install --no-warn-script-location --disable-pip-version-check --no-index --find-links "C:\Crossbar373" PyWin32==224 d) pip.exe install --no-warn-script-location --disable-pip-version-check --no-index --find-links "C:\Crossbar373" incremental e) pip.exe install --no-warn-script-location --disable-pip-version-check --no-index --find-links "C:\Crossbar373" crossbar f) Finally, this issue occurs. The only workaround found to fix this issue is to: 1) download python_snappy-0.5.4-cp37-cp37m-win32.whl from https://www.lfd.uci.edu/~gohlke/pythonlibs/ 2) and install it with our MSI in the target VM before step e): pip install C:\Crossbar372\python_snappy-0.5.4-cp37-cp37m-win32.whl This issue seems also to occur with Python 3.7.2 version. Questions: 1) Is there any invalid step in our process? 2) Is this workaround validated by you? As this wheel file does not seem to be official. 3) Is this issue related to https://github.com/crossbario/crossbar/issues/1521 one? and thus will be fixed in forthcoming versions? 
---------- messages: 339721 nosy: telatoa priority: normal severity: normal status: open title: python-snappy install issue during Crossbar install with Python 3.7.3 (Windows x86 executable installer) type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 05:49:43 2019 From: report at bugs.python.org (Jozef Cernak) Date: Tue, 09 Apr 2019 09:49:43 +0000 Subject: [New-bugs-announce] [issue36573] zipfile zipfile.BadZipFile: Bad CRC-32 for file '11_02_2019.pdf' Message-ID: <1554803383.11.0.72187159386.issue36573@roundup.psfhosted.org> New submission from Jozef Cernak : Hi, in the short program, that works well for password of 4 character, when I change password length I got this error (parameter MAXD) Traceback (most recent call last): File "p33.py", line 54, in zf.extractall( pwd=password.encode('cp850','replace')) File "/usr/lib/python3.5/zipfile.py", line 1347, in extractall self.extract(zipinfo, path, pwd) File "/usr/lib/python3.5/zipfile.py", line 1335, in extract return self._extract_member(member, path, pwd) File "/usr/lib/python3.5/zipfile.py", line 1399, in _extract_member shutil.copyfileobj(source, target) File "/usr/lib/python3.5/shutil.py", line 73, in copyfileobj buf = fsrc.read(length) File "/usr/lib/python3.5/zipfile.py", line 844, in read data = self._read1(n) File "/usr/lib/python3.5/zipfile.py", line 934, in _read1 self._update_crc(data) File "/usr/lib/python3.5/zipfile.py", line 862, in _update_crc raise BadZipFile("Bad CRC-32 for file %r" % self.name) zipfile.BadZipFile: Bad CRC-32 for file '11_02_2019.pdf' program: import string, zipfile, zlib from zipfile import ZipFile zf= ZipFile('11_02_2019.pdf.zip') MAXD=6 upper_case=string.ascii_uppercase uc=list(upper_case) n=len(uc) print (n) pos=[] for k in range(0,MAXD): pos.append(0) print (pos) for let in range(0,n): print (let, uc[let]) let=0 koniec=0; k3=0 p=0 while koniec != MAXD : k=0 password='' for k2 in range(0,MAXD): password=password+uc[pos[k2]] print (password) try: with zipfile.ZipFile('11_02_2019.pdf.zip') as zf: zf.extractall( pwd=password.encode('cp850','replace')) print ("Password found:" + password) exit(0) except RuntimeError: pass except zlib.error: pass #print "ppppppppppppppppppppppppp",p, paswd pos[0]=pos[0]+1 for k2 in range(0,MAXD-1): if pos[k2]>=n: pos[k2]=0 pos[k2+1]=pos[k2+1]+1 koniec=0 for k2 in range(0,MAXD): if pos[k2] >= n-1: koniec=koniec+1 Similar behaviuor I observed in older version of python (2.7) and correspondig library. The zip archive is procted by simple password 'ABCD', the file is not big less tha 1MB. Best regards Jozef ---------- components: Library (Lib) messages: 339722 nosy: Jozef Cernak priority: normal severity: normal status: open title: zipfile zipfile.BadZipFile: Bad CRC-32 for file '11_02_2019.pdf' type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 09:25:40 2019 From: report at bugs.python.org (=?utf-8?q?Joan_Tomas_=28Tommy=29_Pujol_Mu=C3=B1oz?=) Date: Tue, 09 Apr 2019 13:25:40 +0000 Subject: [New-bugs-announce] [issue36574] Error with self in python Message-ID: <1554816340.61.0.00352106981466.issue36574@roundup.psfhosted.org> New submission from Joan Tomas (Tommy) Pujol Mu?oz : I try to use self with the __init__ function in a class, but when I enter the other values e.g. 
def __init__(self, name): self.name = name /// and when I call the class with a name it says that it need another value because it uses self an another value. It happened when I was using Windows 10, but normally I use Linux. ---------- files: pySelf.cmd messages: 339746 nosy: tommypm priority: normal severity: normal status: open title: Error with self in python type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48252/pySelf.cmd _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 09:35:46 2019 From: report at bugs.python.org (Inada Naoki) Date: Tue, 09 Apr 2019 13:35:46 +0000 Subject: [New-bugs-announce] [issue36575] Use _PyTime_GetPerfCounter() in lsprof Message-ID: <1554816946.4.0.309099513597.issue36575@roundup.psfhosted.org> New submission from Inada Naoki : Current lsprof uses `gettimeofday` on non-Windows. _PyTime_GetPerfCounter() is better time for profiling. ---------- components: Library (Lib) messages: 339747 nosy: inada.naoki priority: normal severity: normal status: open title: Use _PyTime_GetPerfCounter() in lsprof versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 10:41:46 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 09 Apr 2019 14:41:46 +0000 Subject: [New-bugs-announce] [issue36576] Some test_ssl and test_asyncio tests fail with OpenSSL 1.1.1 on Python 3.4 and 3.5 Message-ID: <1554820906.29.0.786226900608.issue36576@roundup.psfhosted.org> New submission from STINNER Victor : On Fedora 29, test_ssl and test_asyncio when Python 3.5 is linked with OpenSSL 1.1.1b (Fedora package openssl-1.1.1b-3.fc29.x86_64): test_ssl: * test_options (test.test_ssl.ContextTests) * test_alpn_protocols (test.test_ssl.ThreadedTests) * test_default_ecdh_curve (test.test_ssl.ThreadedTests) * test_shared_ciphers (test.test_ssl.ThreadedTests) test_asyncio: * test_create_server_ssl_match_failed (test.test_asyncio.test_events.EPollEventLoopTests) * test_create_server_ssl_match_failed (test.test_asyncio.test_events.PollEventLoopTests) * test_create_server_ssl_match_failed (test.test_asyncio.test_events.SelectEventLoopTests) Fixing these tests would require to backport some ssl features, and I don't think that it's worth it. Attached PR 12694 skip these tests on OpenSSL 1.1.1. Note: these tests pass with OpenSSL 1.1.0. FYI for Fedora, we also care of having the Python 3.4 test suite passing with OpenSSL 1.1.1 and so we will maintain a similar change downstream. 
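A sketch of the kind of guard the PR adds (the real change in PR 12694 may be shaped differently; the version check below is the assumption here):

import ssl
import unittest

# True when Python is linked with OpenSSL 1.1.1 or newer.
IS_OPENSSL_1_1_1 = ssl.OPENSSL_VERSION_INFO >= (1, 1, 1)

class ThreadedTests(unittest.TestCase):
    @unittest.skipIf(IS_OPENSSL_1_1_1,
                     "TLS 1.3 changes the defaults; not worth backporting fixes")
    def test_default_ecdh_curve(self):
        ...  # original test body unchanged

The full failure output follows: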
====================================================================== FAIL: test_options (test.test_ssl.ContextTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_ssl.py", line 866, in test_options self.assertEqual(default, ctx.options) AssertionError: 2181169236 != 2182217812 ====================================================================== FAIL: test_alpn_protocols (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_ssl.py", line 3205, in test_alpn_protocols self.assertIsInstance(stats, ssl.SSLError) AssertionError: {'client_alpn_protocol': None, 'server_alpn_protocols': [None], 'version': 'TLSv1.2', 'client_npn_protocol': None, 'server_npn_protocols': [None], 'server_shared_ciphers': [[('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256), ('TLS_CHACHA20_POLY1305_SHA256', 'TLSv1.3', 256), ('TLS_AES_128_GCM_SHA256', 'TLSv1.3', 128), ('TLS_AES_128_CCM_SHA256', 'TLSv1.3', 128), ('ECDHE-ECDSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-ECDSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('ECDHE-RSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('DHE-DSS-AES256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-DSS-AES128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-CCM8', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-CCM', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-AES256-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-SHA', 'TLSv1.0', 256), ('ECDHE-RSA-AES256-SHA', 'TLSv1.0', 256), ('DHE-RSA-AES256-CCM8', 'TLSv1.2', 256), ('DHE-RSA-AES256-CCM', 'TLSv1.2', 256), ('DHE-RSA-AES256-SHA256', 'TLSv1.2', 256), ('DHE-DSS-AES256-SHA256', 'TLSv1.2', 256), ('DHE-RSA-AES256-SHA', 'SSLv3', 256), ('DHE-DSS-AES256-SHA', 'SSLv3', 256), ('ECDHE-ECDSA-AES128-CCM8', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-CCM', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-AES128-SHA256', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-SHA', 'TLSv1.0', 128), ('ECDHE-RSA-AES128-SHA', 'TLSv1.0', 128), ('DHE-RSA-AES128-CCM8', 'TLSv1.2', 128), ('DHE-RSA-AES128-CCM', 'TLSv1.2', 128), ('DHE-RSA-AES128-SHA256', 'TLSv1.2', 128), ('DHE-DSS-AES128-SHA256', 'TLSv1.2', 128), ('DHE-RSA-AES128-SHA', 'SSLv3', 128), ('DHE-DSS-AES128-SHA', 'SSLv3', 128), ('ECDHE-ECDSA-ARIA256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-ARIA256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-ARIA128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-ARIA128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-ECDSA-CAMELLIA256-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-CAMELLIA256-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-DSS-ARIA256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-RSA-ARIA256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-DSS-ARIA128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-ARIA128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CAMELLIA256-SHA256', 'TLSv1.2', 256), ('DHE-DSS-CAMELLIA256-SHA256', 'TLSv1.2', 256), ('DHE-RSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-DSS-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CAMELLIA256-SHA', 'SSLv3', 256), ('DHE-DSS-CAMELLIA256-SHA', 'SSLv3', 
256), ('DHE-RSA-CAMELLIA128-SHA', 'SSLv3', 128), ('DHE-DSS-CAMELLIA128-SHA', 'SSLv3', 128), ('AES256-GCM-SHA384', 'TLSv1.2', 256), ('AES128-GCM-SHA256', 'TLSv1.2', 128), ('AES256-CCM8', 'TLSv1.2', 256), ('AES256-CCM', 'TLSv1.2', 256), ('AES128-CCM8', 'TLSv1.2', 128), ('AES128-CCM', 'TLSv1.2', 128), ('AES256-SHA256', 'TLSv1.2', 256), ('AES128-SHA256', 'TLSv1.2', 128), ('AES256-SHA', 'SSLv3', 256), ('AES128-SHA', 'SSLv3', 128), ('ARIA256-GCM-SHA384', 'TLSv1.2', 256), ('ARIA128-GCM-SHA256', 'TLSv1.2', 128), ('CAMELLIA256-SHA256', 'TLSv1.2', 256), ('CAMELLIA128-SHA256', 'TLSv1.2', 128), ('CAMELLIA256-SHA', 'SSLv3', 256), ('CAMELLIA128-SHA', 'SSLv3', 128)]], 'peercert': {}, 'cipher': ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), 'compression': None} is not an instance of ====================================================================== FAIL: test_default_ecdh_curve (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_ssl.py", line 3064, in test_default_ecdh_curve self.assertIn("ECDH", s.cipher()[0]) AssertionError: 'ECDH' not found in 'TLS_AES_256_GCM_SHA384' ====================================================================== FAIL: test_shared_ciphers (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_ssl.py", line 3381, in test_shared_ciphers self.fail(name) AssertionError: TLS_AES_256_GCM_SHA384 ====================================================================== ERROR: test_create_server_ssl_match_failed (test.test_asyncio.test_events.EPollEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_asyncio/test_events.py", line 1172, in test_create_server_ssl_match_failed proto.transport.close() AttributeError: 'NoneType' object has no attribute 'close' ====================================================================== ERROR: test_create_server_ssl_match_failed (test.test_asyncio.test_events.PollEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_asyncio/test_events.py", line 1172, in test_create_server_ssl_match_failed proto.transport.close() AttributeError: 'NoneType' object has no attribute 'close' ====================================================================== ERROR: test_create_server_ssl_match_failed (test.test_asyncio.test_events.SelectEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/3.5/Lib/test/test_asyncio/test_events.py", line 1172, in test_create_server_ssl_match_failed proto.transport.close() AttributeError: 'NoneType' object has no attribute 'close' ---------- components: Tests messages: 339756 nosy: vstinner priority: normal severity: normal status: open title: Some test_ssl and test_asyncio tests fail with OpenSSL 1.1.1 on Python 3.4 and 3.5 versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 11:30:25 2019 From: report at bugs.python.org (Christian Heimes) Date: Tue, 09 Apr 2019 15:30:25 +0000 Subject: [New-bugs-announce] [issue36577] setup doesn't 
report missing _ssl and _hashlib Message-ID: <1554823825.45.0.360858351265.issue36577@roundup.psfhosted.org> New submission from Christian Heimes : setup does not report _ssl and _hashlib as failed to build in case OpenSSL libs or headers are missing. Related to #36544 and #36146 Reproducer: $ ./configure --with-openssl=/invalid $ make ... running build running build_ext The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time running build_scripts ... With fix: $ ./configure --with-openssl=/invalid $ make ... running build running build_ext Python build finished successfully! The necessary bits to build these optional modules were not found: _hashlib _ssl To find the necessary bits, look in setup.py in detect_modules() for the module's name. The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time Could not build the ssl module! Python requires an OpenSSL 1.0.2 or 1.1 compatible libssl with X509_VERIFY_PARAM_set1_host(). LibreSSL 2.6.4 and earlier do not provide the necessary APIs, https://github.com/libressl-portable/portable/issues/381 running build_scripts ... ---------- components: Build messages: 339765 nosy: christian.heimes priority: normal severity: normal status: open title: setup doesn't report missing _ssl and _hashlib versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 11:54:39 2019 From: report at bugs.python.org (=?utf-8?b?5a2R5b2x?=) Date: Tue, 09 Apr 2019 15:54:39 +0000 Subject: [New-bugs-announce] [issue36578] multiprocessing pool + subprocess ValueError: empty range for randrange Message-ID: <1554825279.75.0.461430616172.issue36578@roundup.psfhosted.org> New submission from ?? : == output == python2 /tmp/demo.py 31749 task#1 result:(False, 'ls: cannot access alksdfjalkdsfadsfk: No such file or directoryn') 31751 task#2 result:(False, 'ls: cannot access alksdfjalkdsfadsfk: No such file or directoryn') 31752 task#3 result:(False, '3n') 31750 task#4 result:(False, '4n') 31749 task#6 result:(False, '6n') 31752 task#7 result:(False, '7n') 31750 task#8 result:(False, '8n') 31751 task#9 result:(False, '9n') Traceback (most recent call last): File "/tmp/demo.py", line 74, in runner() File "/tmp/demo.py", line 64, in runner rc_orig = value.get() File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get raise self._value ValueError: empty range for randrange() (1,1, 0) == The python2.7 demo == http://paste.ubuntu.org.cn/4379593 == More == The interesting thing is if you modify demo.py like this . http://paste.ubuntu.org.cn/4379595 Almost the code can be run normal. ---------- components: Library (Lib) files: demo.py messages: 339767 nosy: ?? 
priority: normal severity: normal status: open title: multiprocessing pool + subprocess ValueError: empty range for randrange versions: Python 3.6 Added file: https://bugs.python.org/file48253/demo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 12:51:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 09 Apr 2019 16:51:40 +0000 Subject: [New-bugs-announce] [issue36579] test_venv: test_with_pip() hangs on PPC64 AIX 3.x Message-ID: <1554828700.04.0.163813636834.issue36579@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/10/builds/2389 0:45:36 [412/420/1] test_venv crashed (Exit code 1) Timeout (0:15:00)! Thread 0x00000001 (most recent call first): File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/subprocess.py", line 987 in communicate File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/subprocess.py", line 476 in run File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/subprocess.py", line 396 in check_output File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/venv/__init__.py", line 271 in _setup_pip File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/venv/__init__.py", line 68 in create File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/venv/__init__.py", line 373 in create File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 68 in run_with_capture File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 400 in do_test_with_pip File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 460 in test_with_pip ... Re-running test 'test_venv' in verbose mode test_defaults (test.test_venv.BasicTest) ... ok ... test_devnull (test.test_venv.EnsurePipTest) ... ok test_explicit_no_pip (test.test_venv.EnsurePipTest) ... ok test_no_pip_by_default (test.test_venv.EnsurePipTest) ... ok Timeout (0:15:00)! Thread 0x00000001 (most recent call first): File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/selectors.py", line 415 in select File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/subprocess.py", line 1807 in _communicate File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/subprocess.py", line 1000 in communicate File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 39 in check_output File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 428 in do_test_with_pip File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/test/test_venv.py", line 460 in test_with_pip File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/unittest/case.py", line 680 in run File "/home/shager/cpython-buildarea/3.x.edelsohn-aix-ppc64/build/Lib/unittest/case.py", line 740 in __call__ ... test_with_pip (test.test_venv.EnsurePipTest) ... 
Makefile:1139: recipe for target 'buildbottest' failed make: *** [buildbottest] Error 1 program finished with exit code 2 ---------- components: Tests messages: 339778 nosy: David.Edelsohn, Michael.Felt, vstinner priority: normal severity: normal status: open title: test_venv: test_with_pip() hangs on PPC64 AIX 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 17:08:33 2019 From: report at bugs.python.org (John Parejko) Date: Tue, 09 Apr 2019 21:08:33 +0000 Subject: [New-bugs-announce] [issue36580] unittest.mock does not understand dataclasses Message-ID: <1554844113.82.0.464270307904.issue36580@roundup.psfhosted.org> New submission from John Parejko : The new dataclasses.dataclass is very useful for describing the properties of a class, but it appears that Mocks of such decorated classes do not catch the members that are defined in the dataclass. I believe the root cause of this is the fact that unittest.mock.Mock generates the attributes of its spec object via `dir`, and the non-defaulted dataclass attributes do not appear in dir. Given the utility in building classes with dataclass, it would be very useful if Mocks could see the class attributes of the dataclass. Example code: import dataclasses import unittest.mock @dataclasses.dataclass class Foo: name: str baz: float bar: int = 12 FooMock = unittest.mock.Mock(Foo) fooMock = FooMock() # should fail: Foo.__init__ takes two arguments # I would expect these to be True, but they are False 'name' in dir(fooMock) 'baz' in dir(fooMock) 'bar' in dir(fooMock) ---------- components: Library (Lib), Tests messages: 339808 nosy: John Parejko2 priority: normal severity: normal status: open title: unittest.mock does not understand dataclasses type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 18:46:11 2019 From: report at bugs.python.org (Dylan Semler) Date: Tue, 09 Apr 2019 22:46:11 +0000 Subject: [New-bugs-announce] [issue36581] __dir__ on unittest.mock not safe for all spec types Message-ID: <1554849971.17.0.495097985154.issue36581@roundup.psfhosted.org> New submission from Dylan Semler : If a MagicMock is created with a spec or spec_set that is a non-list iterable of strings (like a tuple), calling dir() on said mock produces a Traceback. Here's a minimum example: ? cat poc.py from unittest.mock import MagicMock mock = MagicMock(spec=('a', 'tuple')) dir(mock) ? 
python3 poc.py Traceback (most recent call last): File "poc.py", line 4, in dir(mock) File "/usr/lib64/python3.6/unittest/mock.py", line 677, in __dir__ return sorted(set(extras + from_type + from_dict + TypeError: can only concatenate tuple (not "list") to tuple ---------- components: Library (Lib) messages: 339813 nosy: Dylan Semler priority: normal severity: normal status: open title: __dir__ on unittest.mock not safe for all spec types type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 9 19:38:53 2019 From: report at bugs.python.org (Trey Hunner) Date: Tue, 09 Apr 2019 23:38:53 +0000 Subject: [New-bugs-announce] [issue36582] collections.UserString encode method returns a string Message-ID: <1554853133.32.0.976890929788.issue36582@roundup.psfhosted.org> New submission from Trey Hunner : It looks like the encode method for UserString incorrectly wraps its return value in a str call. ``` >>> from collections import UserString >>> UserString("hello").encode('utf-8') == b'hello' False >>> UserString("hello").encode('utf-8') "b'hello'" >>> type(UserString("hello").encode('utf-8')) ``` ---------- components: Library (Lib) messages: 339818 nosy: trey priority: normal severity: normal status: open title: collections.UserString encode method returns a string versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 03:22:39 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 10 Apr 2019 07:22:39 +0000 Subject: [New-bugs-announce] [issue36583] Do not swallow exceptions in the _ssl module Message-ID: <1554880959.84.0.895759680892.issue36583@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently some exceptions can be swallowed in the _ssl module. The proposed PR fixes this. Some examples: * Use PyDict_GetItemWithError() instead of PyDict_GetItem(). The latter swallows any exceptions. Although it is very unlikely that an exception be raised here, it may be possible. * Do not overwrite arbitrary exceptions in PyUnicode_FSConverter(), PyUnicode_AsASCIIString() and PyObject_GetBuffer(). MemoryError most likely can be raised in the first two cases. Only expected exceptions (TypeError or UnicodeEncodeError) will now be replaced with a TypeError, and cadata type will be checked before trying to get a buffer or encode. ---------- components: Library (Lib) messages: 339827 nosy: alex, christian.heimes, dstufft, janssen, serhiy.storchaka priority: normal severity: normal status: open title: Do not swallow exceptions in the _ssl module versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 03:24:54 2019 From: report at bugs.python.org (beruhan) Date: Wed, 10 Apr 2019 07:24:54 +0000 Subject: [New-bugs-announce] [issue36584] cython nametuple TypeError Message-ID: <1554881094.13.0.241658399375.issue36584@roundup.psfhosted.org> New submission from beruhan : I have a class that inherits from NamedTuple,I have compile it to pyd file on windows use cython,when I import the class and create a object in another py file,It throws error 'TypeError: __new__() takes 1 positional argument but 4 were given' when I don't compile it to pyd,It can use normally,How to deal with it? 
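To make the report concrete, a guess at the shape of the class involved (the actual code is not attached, so the class name and fields are invented). In pure Python this runs fine; per the report, the TypeError appears only after the module is compiled to a .pyd with Cython:

from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int
    label: str

p = Point(1, 2, 'origin')   # reported to raise TypeError once compiled with Cython
print(p)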
---------- components: Cross-Build messages: 339828 nosy: Alex.Willmer, beruhan, gvanrossum, levkivskyi priority: normal severity: normal status: open title: cython nametuple TypeError type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 06:28:27 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Wed, 10 Apr 2019 10:28:27 +0000 Subject: [New-bugs-announce] [issue36585] test_posix.py fails due to unsupported RWF_HIPRI Message-ID: <1554892107.72.0.359102203411.issue36585@roundup.psfhosted.org> New submission from Jeroen Demeyer : On Linux with an old kernel: 0:03:59 load avg: 5.97 [300/420/1] test_posix failed -- running: test_tools (1 min 11 sec), test_concurrent_futures (2 min 42 sec) test test_posix failed -- Traceback (most recent call last): File "/usr/local/src/sage-config/local/src/cpython/Lib/test/test_posix.py", line 311, in test_preadv_flags self.assertEqual(posix.preadv(fd, buf, 3, os.RWF_HIPRI), 10) OSError: [Errno 95] Operation not supported The problem is obvious: it's testing a flag which is not supported by this kernel. The fact that the macro RWF_HIPRI is defined (which is a compile-time condition) does not imply that the kernel actually supports it (which is a run-time condition). ---------- messages: 339844 nosy: jdemeyer, pablogsal priority: normal severity: normal status: open title: test_posix.py fails due to unsupported RWF_HIPRI _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 06:42:31 2019 From: report at bugs.python.org (Thomas Grainger) Date: Wed, 10 Apr 2019 10:42:31 +0000 Subject: [New-bugs-announce] [issue36586] multiprocessing.Queue.close doesn't behave as documented Message-ID: <1554892951.92.0.81637767288.issue36586@roundup.psfhosted.org> New submission from Thomas Grainger : The docs for https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.close read: > Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected. 
>From this text it seems to me as though the queue should be used as follows: import contextlib import multiprocessing def worker(q): with contextlib.closing(q): q.put_nowait('hello') def controller(): q = multiprocessing.Queue() q.close() # no more 'put's from this process p = multiprocessing.Process(target=worker, args=(q, )) p.start() assert q.get() == 'hello' p.join() assert p.exitcode == 0 print('OK!') if __name__ == '__main__': controller() however I get this: Traceback (most recent call last): File "controller.py", line 22, in controller() File "controller.py", line 15, in controller assert q.get() == 'hello' File "/usr/lib/python3.7/multiprocessing/queues.py", line 94, in get res = self._recv_bytes() File "/usr/lib/python3.7/multiprocessing/connection.py", line 212, in recv_bytes self._check_closed() File "/usr/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed raise OSError("handle is closed") OSError: handle is closed ---------- messages: 339847 nosy: graingert priority: normal severity: normal status: open title: multiprocessing.Queue.close doesn't behave as documented _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 10:14:38 2019 From: report at bugs.python.org (cagney) Date: Wed, 10 Apr 2019 14:14:38 +0000 Subject: [New-bugs-announce] [issue36587] race in logging code when fork() Message-ID: <1554905678.69.0.300642105392.issue36587@roundup.psfhosted.org> New submission from cagney : Buried in issue36533; it should probably be turned into a test case. Exception ignored in: Traceback (most recent call last): File "/home/python/v3.7.3/lib/python3.7/logging/__init__.py", line 269, in _after_at_fork_weak_calls _at_fork_weak_calls('release') File "/home/python/v3.7.3/lib/python3.7/logging/__init__.py", line 254, in _at_fork_weak_calls for instance in _at_fork_acquire_release_weakset: File "/home/python/v3.7.3/lib/python3.7/_weakrefset.py", line 60, in __iter__ for itemref in self.data: RuntimeError: Set changed size during iteration Exception in thread Thread-1: Traceback (most recent call last): File "/home/python/v3.7.3/lib/python3.7/threading.py", line 917, in _bootstrap_inner self.run() File "/home/python/v3.7.3/lib/python3.7/threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "./btc.py", line 11, in lockie h = logging.Handler() File "/home/python/v3.7.3/lib/python3.7/logging/__init__.py", line 824, in __init__ self.createLock() File "/home/python/v3.7.3/lib/python3.7/logging/__init__.py", line 847, in createLock _register_at_fork_acquire_release(self) File "/home/python/v3.7.3/lib/python3.7/logging/__init__.py", line 250, in _register_at_fork_acquire_release _at_fork_acquire_release_weakset.add(instance) File "/home/python/v3.7.3/lib/python3.7/_weakrefset.py", line 83, in add self._commit_removals() File "/home/python/v3.7.3/lib/python3.7/_weakrefset.py", line 56, in _commit_removals discard(l.pop()) IndexError: pop from empty list ---------- components: Library (Lib) files: btc.py messages: 339866 nosy: cagney priority: normal severity: normal status: open title: race in logging code when fork() type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48258/btc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 11:04:50 2019 From: report at bugs.python.org (Michael Felt) Date: Wed, 10 Apr 2019 15:04:50 +0000 Subject: [New-bugs-announce] 
[issue36588] change sys.platform() to just "aix" for AIX Message-ID: <1554908690.57.0.898884397677.issue36588@roundup.psfhosted.org> New submission from Michael Felt : This is something that probably shouts - boring - but back in 2012 it was a hot topic for linux2 and linux3. Maybe - as far back as 1996 (when AIX4 was new) "aix3" and "aix4" made sense. Whether that is true, or not - is pointless these days - for Python3. In the python code I have reviewed - for various reasons - I have never seen any code with "sys.platform() == "aixX" (where X is any of 5, 6, 7). There was a reference to "aix3" and "aix4" in setup.py (recently removed, and there is no replacement for aix5, aix6, or aix7 - not needed!) What I mostly see is sys.platform.startswith("aix"). The other form of the same test is sys.platform[:3] == 'aix' sys.platform is "build" related, e.g., potentially bound to libc issues. Even if this was the case (AIX offers since 2007 - official binary compatibility from old to new (when libc is dynamically linked, not static linked), was "unofficial" for aix3 and aix4). Yes, I am sure there are arguments along the line of "why change" since we have been updating it - always, and/or the documentation says so. linux2 had to stay, because there was known code that compared with linux2 (and that code was having problems when python was built on linux3 - hence the change to make sys.platform return linux2 for all Python3.2 and younger). FYI: in Cpython (master) there are no references to "aixX". All the references there are (in .py) are: michael at x071:[/data/prj/python/git/cpython-master]find . -name \*.py | xargs egrep "[\"']aix" ./Lib/asyncio/unix_events.py: if is_socket or (is_fifo and not sys.platform.startswith("aix")): ./Lib/ctypes/__init__.py: if _sys.platform.startswith("aix"): ./Lib/ctypes/util.py:elif sys.platform.startswith("aix"): ./Lib/ctypes/util.py: elif sys.platform.startswith("aix"): ./Lib/distutils/command/build_ext.py: elif sys.platform[:3] == 'aix': ./Lib/distutils/util.py: elif osname[:3] == "aix": ./Lib/sysconfig.py: elif osname[:3] == "aix": ./Lib/test/test_asyncio/test_events.py: if sys.platform.startswith("aix"): ./Lib/test/test_faulthandler.py: @unittest.skipIf(sys.platform.startswith('aix'), ./Lib/test/test_strftime.py: or sys.platform.startswith(("aix", "sunos", "solaris"))): ./Lib/test/test_strptime.py: @unittest.skipIf(sys.platform.startswith('aix'), ./Lib/test/test_locale.py: @unittest.skipIf(sys.platform.startswith('aix'), ./Lib/test/test_locale.py: @unittest.skipIf(sys.platform.startswith('aix'), ./Lib/test/test_fileio.py: not sys.platform.startswith(('sunos', 'aix')): ./Lib/test/test_tools/test_i18n.py: @unittest.skipIf(sys.platform.startswith('aix'), ./Lib/test/test_wait4.py: if sys.platform.startswith('aix'): ./Lib/test/test_c_locale_coercion.py:elif sys.platform.startswith("aix"): ./Lib/test/test_shutil.py:AIX = sys.platform[:3] == 'aix' ./Lib/test/test_utf8_mode.py: elif sys.platform.startswith("aix"): I'll write the patch - if I recall it should be a one-liner in configure.ac, but I think some discussion (or blessing) first is appropriate. Maybe even review whether other platforms no longer rely on the X for the platform. Hoping this helps! 
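To make the compatibility point concrete, the two spellings at stake (purely illustrative):

import sys

# The portable spelling found throughout the tree; it keeps working whether
# sys.platform is "aix5", "aix7" or a plain "aix":
if sys.platform.startswith("aix"):
    print("running on AIX")

# The spelling that the change would break, if any third-party code relied on it:
if sys.platform == "aix7":
    print("running on AIX 7 only")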
---------- components: Build messages: 339869 nosy: Michael.Felt priority: normal severity: normal status: open title: change sys.platform() to just "aix" for AIX versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 15:03:55 2019 From: report at bugs.python.org (Zackery Spytz) Date: Wed, 10 Apr 2019 19:03:55 +0000 Subject: [New-bugs-announce] [issue36589] Incorrect error handling in curses.update_lines_cols() Message-ID: <1554923035.79.0.592038232821.issue36589@roundup.psfhosted.org> New submission from Zackery Spytz : update_lines_cols() returns 0 if an error occurs, but the generated AC code checks for a return value of -1. ---------- components: Extension Modules messages: 339881 nosy: ZackerySpytz priority: normal severity: normal status: open title: Incorrect error handling in curses.update_lines_cols() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 16:21:11 2019 From: report at bugs.python.org (Greg Bowser) Date: Wed, 10 Apr 2019 20:21:11 +0000 Subject: [New-bugs-announce] [issue36590] Add Bluetooth RFCOMM Support for Windows Message-ID: <1554927671.26.0.0558973169259.issue36590@roundup.psfhosted.org> New submission from Greg Bowser : socketmodule supports Bluetooth RFCOMM sockets for Linux. Given that winsock supports this under windows, it is possible to add windows support as well. ---------- components: IO, Windows messages: 339888 nosy: Greg Bowser, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add Bluetooth RFCOMM Support for Windows type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 17:19:33 2019 From: report at bugs.python.org (Terry Davis) Date: Wed, 10 Apr 2019 21:19:33 +0000 Subject: [New-bugs-announce] [issue36591] Should be a typing.UserNamedTuple Message-ID: <1554931173.44.0.541756182017.issue36591@roundup.psfhosted.org> New submission from Terry Davis : There should be a builtin alias for `Type[NamedTuple]` so that library authors user-supplied `NamedTuple`s can properly type-check their code. Here's a code sample that causes an issue in my IDE (PyCharm) ******************************** from typing import NamedTuple, Type def fun(NT: NamedTuple, fill): # Complains that NamedTuple is not callable nt = NT(*fill) return nt UserNamedTuple = Type[NamedTuple] def fun(NT: UserNamedTuple, fill): # No complaints nt = NT(*fill) return nt ******************************** This could just be an issue with PyCharm (I don't use mypy), but the correct to annotate this is with a Type[NamedTuple], so I hope mypy et. al. wouldn't this as a special case... 
---------- components: Library (Lib) messages: 339893 nosy: Terry Davis priority: normal severity: normal status: open title: Should be a typing.UserNamedTuple type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 19:55:24 2019 From: report at bugs.python.org (Geraldo Xexeo) Date: Wed, 10 Apr 2019 23:55:24 +0000 Subject: [New-bugs-announce] [issue36592] is behave different for integers in 3.6 and 3.7 Message-ID: <1554940525.0.0.506385110502.issue36592@roundup.psfhosted.org> New submission from Geraldo Xexeo : # When you run the program: a,b=300,300 print(a is b) #you get different results in 3.6 (True) and 3.7 (False) ---------- components: Interpreter Core files: testisbehavior.py messages: 339900 nosy: Geraldo.Xexeo priority: normal severity: normal status: open title: is behave different for integers in 3.6 and 3.7 versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48259/testisbehavior.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 20:04:16 2019 From: report at bugs.python.org (Ned Batchelder) Date: Thu, 11 Apr 2019 00:04:16 +0000 Subject: [New-bugs-announce] [issue36593] Trace function interferes with MagicMock isinstance? Message-ID: <1554941056.81.0.519333265817.issue36593@roundup.psfhosted.org> New submission from Ned Batchelder : In Python 3.7.3, having a trace function in effect while mock is imported causes isinstance to be wrong for MagicMocks. I know, this sounds unlikely... $ cat traced_sscce.py # sscce.py import sys def trace(frame, event, arg): return trace if len(sys.argv) > 1: sys.settrace(trace) from unittest.mock import MagicMock class A: pass m = MagicMock(spec=A) print("isinstance: ", isinstance(m, A)) $ python3.7.2 traced_sscce.py isinstance: True $ python3.7.2 traced_sscce.py 1 isinstance: True $ python3.7.2 -VV Python 3.7.2 (default, Feb 17 2019, 16:54:12) [Clang 10.0.0 (clang-1000.10.44.4)] $ python3.7.3 traced_sscce.py isinstance: True $ python3.7.3 traced_sscce.py 1 isinstance: False $ python3.7.3 -VV Python 3.7.3 (default, Apr 10 2019, 10:27:53) [Clang 10.0.0 (clang-1000.10.44.4)] Note that if you move the mock import to before the settrace call, everything works fine. ---------- components: Library (Lib) files: traced_sscce.py messages: 339903 nosy: nedbat priority: normal severity: normal status: open title: Trace function interferes with MagicMock isinstance? 
versions: Python 3.7 Added file: https://bugs.python.org/file48260/traced_sscce.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 20:10:57 2019 From: report at bugs.python.org (Zackery Spytz) Date: Thu, 11 Apr 2019 00:10:57 +0000 Subject: [New-bugs-announce] [issue36594] Undefined behavior due to incorrect usage of %p in format strings Message-ID: <1554941457.39.0.484149638582.issue36594@roundup.psfhosted.org> Change by Zackery Spytz : ---------- components: Extension Modules, Interpreter Core nosy: ZackerySpytz priority: normal severity: normal status: open title: Undefined behavior due to incorrect usage of %p in format strings versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 20:12:01 2019 From: report at bugs.python.org (Shane) Date: Thu, 11 Apr 2019 00:12:01 +0000 Subject: [New-bugs-announce] [issue36595] Text Search in Squeezed Output Viewer Message-ID: <1554941521.08.0.779423976753.issue36595@roundup.psfhosted.org> New submission from Shane : Would it be possible to enhance IDLE's new Squeezed Output Viewer (which I LOVE, btw), with a text search feature? If I'm in a module's help documentation, I'm usually looking for something, and I often end up copying the text into notepad and searching for it there. Seems like text search would be a useful feature. Thanks for reading, ---------- assignee: terry.reedy components: IDLE messages: 339906 nosy: Shane Smith, terry.reedy priority: normal severity: normal status: open title: Text Search in Squeezed Output Viewer type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 22:07:20 2019 From: report at bugs.python.org (Chris Siebenmann) Date: Thu, 11 Apr 2019 02:07:20 +0000 Subject: [New-bugs-announce] [issue36596] tarfile module considers anything starting with 512 bytes of zero bytes to be a valid tar file Message-ID: <1554948440.07.0.28828566641.issue36596@roundup.psfhosted.org> New submission from Chris Siebenmann : The easiest reproduction of this is: import tarfile tarfile.open("/dev/zero", "r:") (If you use plain "r" you get a hang in attempted lzma decoding.) I believe this is probably due to a missing 'elif self.offset == 0:' in the 'except EOFHeaderError' exception handling case that almost all of the other exception handlers have. This appears to be a very long standing issue based on the history of the code. 
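A platform-independent reproducer equivalent to the /dev/zero one (a sketch; it simply feeds tarfile two all-zero 512-byte blocks from memory):

import io
import tarfile

# Two zero-filled 512-byte blocks look like an end-of-archive marker, so
# tarfile accepts them as a valid, empty archive instead of raising ReadError.
blob = io.BytesIO(b'\0' * 1024)
with tarfile.open(fileobj=blob, mode='r:') as tf:
    print(tf.getmembers())   # [] -- no error, treated as a "valid" empty tar file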
---------- components: Library (Lib) messages: 339915 nosy: cks priority: normal severity: normal status: open title: tarfile module considers anything starting with 512 bytes of zero bytes to be a valid tar file versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 10 22:49:25 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 11 Apr 2019 02:49:25 +0000 Subject: [New-bugs-announce] [issue36597] Travis CI: doctest failure Message-ID: <1554950965.55.0.0913094506207.issue36597@roundup.psfhosted.org> New submission from STINNER Victor : On my https://github.com/python/cpython/pull/12770 the doctest job of Travis CI failed with: https://travis-ci.org/python/cpython/jobs/518572326 mkdir -p build Building NEWS from Misc/NEWS.d with blurb PATH=./venv/bin:$PATH sphinx-build -b doctest -d build/doctrees -D latex_elements.papersize= -q -W -j4 -W . build/doctest /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. env.note_versionchange('deprecated', version[0], node, self.lineno) /home/travis/build/python/cpython/Doc/tools/extensions/pyspecific.py:274: RemovedInSphinx30Warning: env.note_versionchange() is deprecated. Please use ChangeSetDomain.note_changeset() instead. 
env.note_versionchange('deprecated', version[0], node, self.lineno) Warning, treated as error: ********************************************************************** File "library/unittest.mock-examples.rst", line ?, in default Failed example: m.one().two().three() Expected: Got: obj dead or exiting Makefile:44: recipe for target 'build' failed make[1]: *** [build] Error 2 -- I can reproduce this issue on Linux with: $ cd Doc $ make venv $ PATH=./venv/bin:$PATH sphinx-build -b doctest -d build/doctrees -D latex_elements.papersize= -q -W -j4 -W . build/doctest I get random errors: Warning, treated as error: ********************************************************************** File "library/datetime.rst", line 686, in default Failed example: d.strftime("%A %d. %B %Y") Expected: 'Monday 11. March 2002' Got: 'lundi 11. mars 2002' Warning, treated as error: ********************************************************************** File "library/collections.rst", line 914, in default Failed example: p._asdict() Expected: {'x': 11, 'y': 22} Got: OrderedDict([('x', 11), ('y', 22)]) Warning, treated as error: ********************************************************************** File "library/unittest.mock.rst", line ?, in default Failed example: mock.call_args.args Expected: (3, 4) Got: args The virtual environment uses Sphinx 2.0.1. Can it be a change in Sphinx 2 default configuration? ---------- assignee: docs at python components: Documentation messages: 339919 nosy: docs at python, vstinner priority: normal severity: normal status: open title: Travis CI: doctest failure versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 00:12:58 2019 From: report at bugs.python.org (Gregory Ronin) Date: Thu, 11 Apr 2019 04:12:58 +0000 Subject: [New-bugs-announce] [issue36598] mock side_effect should be checked for iterable not callable Message-ID: <1554955978.5.0.68760722366.issue36598@roundup.psfhosted.org> New submission from Gregory Ronin : In mock.py, in method: def _mock_call(_mock_self, *args, **kwargs): There is a following piece of code: if not _callable(effect): result = next(effect) if _is_exception(result): raise result if result is DEFAULT: result = self.return_value return result ret_val = effect(*args, **kwargs) This works correctly for iterables (such as lists) that are not defined as generators. However, if one defined a generator as a function this would not work. 
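A small demonstration of the difference, using the stock mock API (the generator here is only an example):

from unittest.mock import Mock

def gen():
    yield 1
    yield 2

# Passing the generator *object* works: it is not callable, so it is treated
# as an iterable and successive calls to the mock return the yielded values.
m = Mock(side_effect=gen())
assert m() == 1 and m() == 2

# Passing the generator *function* does not: it is callable, so every call
# just invokes it and returns a brand-new generator object instead of 1, 2.
m = Mock(side_effect=gen)
print(m(), m())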
It seems like the check should be not for callable, but for iterable: try: iter(effect) except TypeError: # If not iterable then callable or exception if _callable(effect): ret_val = effect(*args, **kwargs) else: raise effect else: # Iterable result = next(effect) if _is_exception(result): raise result if result is DEFAULT: result = self.return_value return result ---------- components: Tests messages: 339923 nosy: jazzblue priority: normal severity: normal status: open title: mock side_effect should be checked for iterable not callable type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 06:11:10 2019 From: report at bugs.python.org (Inada Naoki) Date: Thu, 11 Apr 2019 10:11:10 +0000 Subject: [New-bugs-announce] [issue36599] weakref document says dict order is unstable Message-ID: <1554977470.77.0.948736759807.issue36599@roundup.psfhosted.org> New submission from Inada Naoki : https://docs.python.org/3/library/doctest.html#warnings "For example, when printing a dict, Python doesn?t guarantee that the key-value pairs will be printed in any particular order," This example should be rewritten with set. ---------- assignee: docs at python components: Documentation messages: 339952 nosy: docs at python, inada.naoki priority: normal severity: normal status: open title: weakref document says dict order is unstable versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 06:34:58 2019 From: report at bugs.python.org (Marcin Niemira) Date: Thu, 11 Apr 2019 10:34:58 +0000 Subject: [New-bugs-announce] [issue36600] re-enable test in nntplib Message-ID: <1554978898.45.0.176655214392.issue36600@roundup.psfhosted.org> New submission from Marcin Niemira : Disabled test in not failing anymore. ```./python -m test -u all -v test_nntplib -m test_article_head_body == CPython 3.8.0a3+ (heads/feature/pep-572-improvement-in-smtplib-dirty:f4efa312d1, Apr 8 2019, 21:0) [GCC 7.3.0] == Linux-4.15.0-46-generic-x86_64-with-glibc2.26 little-endian == cwd: /home/n0npax/workspace/cpython/build/test_python_15162 == CPU count: 4 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:00 load avg: 1.13 [1/1] test_nntplib test_article_head_body (test.test_nntplib.NetworkedNNTPTests) ... ok test_article_head_body (test.test_nntplib.NetworkedNNTP_SSLTests) ... ok ---------------------------------------------------------------------- Ran 2 tests in 7.172s OK == Tests result: SUCCESS == 1 test OK. 
Total duration: 7 sec 282 ms Tests result: SUCCESS ``` ---------- messages: 339955 nosy: Marcin Niemira priority: normal severity: normal status: open title: re-enable test in nntplib _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 06:40:13 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 11 Apr 2019 10:40:13 +0000 Subject: [New-bugs-announce] [issue36601] signals can be caught by any thread Message-ID: <1554979213.03.0.286656005091.issue36601@roundup.psfhosted.org> New submission from Jeroen Demeyer : Because of some discussion that is happening on #1583 I noticed this bit of code in the OS-level signal handler (set by the C function sigaction() or signal()): static void signal_handler(int sig_num) { /* See NOTES section above */ if (getpid() == main_pid) { trip_signal(sig_num); } The check getpid() == main_pid is claimed to check for the main *thread* but in fact it's checking the process ID, which is the same for all threads. So as far as I can tell, this condition is always true. This code essentially goes back to 1994 (commit bb4ba12242), so it may have been true at that time that threads were implemented as processes and that getpid() returned a different value for different threads. Note that this code refers to receiving a signal from the OS. In Python, it's always handled (by the function registered by signal.signal) by the main thread. But the current behaviour actually makes sense, so we should just remove the superfluous check and fix the comments in the code. ---------- messages: 339958 nosy: Rhamphoryncus, jdemeyer priority: normal severity: normal status: open title: signals can be caught by any thread _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 06:59:34 2019 From: report at bugs.python.org (Laurie Opperman) Date: Thu, 11 Apr 2019 10:59:34 +0000 Subject: [New-bugs-announce] [issue36602] Recursive directory list with pathlib.Path.iterdir Message-ID: <1554980374.32.0.410571505677.issue36602@roundup.psfhosted.org> New submission from Laurie Opperman : Currently, 'pathlib.Path.iterdir' can only list the contents of the instance directory. It is common to also want the contents of subdirectories recursively. The proposal is for 'pathlib.Path.iterdir' to have an argument 'recursive' which when 'True' will cause 'iterdir' to yield contents of subdirectories recursively. This would be trivial to implement as 'iterdir' can simply yield from subdirectories' 'iterdir'. A decision would have to be made whether to continue to yield the subdirectories, or skip them. Another decision would be for whether each path should be resolved before checking if it is a directory to be recursed into. ---------- components: Library (Lib) messages: 339959 nosy: Epic_Wink priority: normal severity: normal status: open title: Recursive directory list with pathlib.Path.iterdir type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 11:17:03 2019 From: report at bugs.python.org (cagney) Date: Thu, 11 Apr 2019 15:17:03 +0000 Subject: [New-bugs-announce] [issue36603] should pty.openpty() set pty/tty inheritable? 
Message-ID: <1554995823.59.0.731055677634.issue36603@roundup.psfhosted.org> New submission from cagney : pty.openpty(), on systems with a working os.openpty() / openpty(3) executes: if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0) goto posix_error; if (_Py_set_inheritable(master_fd, 0, NULL) < 0) goto error; if (_Py_set_inheritable(slave_fd, 0, NULL) < 0) goto error; where as on systems where this is fails it instead executes: master_fd, slave_name = _open_terminal() slave_fd = slave_open(slave_name) i.e., result = os.open(tty_name, os.O_RDWR) return master_fd, slave_fd where os.open() was "Changed in version 3.4: The new file descriptor is now non-inheritable." (personally I'd deprecate pty.openpty(), but that is just me) ---------- components: IO messages: 339982 nosy: cagney priority: normal severity: normal status: open title: should pty.openpty() set pty/tty inheritable? versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 11:53:30 2019 From: report at bugs.python.org (Bjorn Madsen) Date: Thu, 11 Apr 2019 15:53:30 +0000 Subject: [New-bugs-announce] [issue36604] Add recipe to itertools Message-ID: <1554998010.79.0.74047877309.issue36604@roundup.psfhosted.org> New submission from Bjorn Madsen : I would like to add a recipe to the itertools documentation (if it belongs there?) The recipe presents a method to generate set(powerset([iterable])) in a fraction of the runtime. I thought others might find this method helpful and pushed it to github under MIT license. The recipe is available with test here: https://github.com/root-11/python_recipes ---------- components: Extension Modules messages: 339984 nosy: Bjorn.Madsen priority: normal severity: normal status: open title: Add recipe to itertools type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 12:50:06 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 11 Apr 2019 16:50:06 +0000 Subject: [New-bugs-announce] [issue36605] make tags should also parse Modules/_io/*.c and Modules/_io/*.h Message-ID: <1555001406.69.0.640978574567.issue36605@roundup.psfhosted.org> New submission from STINNER Victor : Attached PR fix the issue. ---------- components: Build messages: 339986 nosy: vstinner priority: normal severity: normal status: open title: make tags should also parse Modules/_io/*.c and Modules/_io/*.h versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 13:18:57 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Thu, 11 Apr 2019 17:18:57 +0000 Subject: [New-bugs-announce] [issue36606] calling super() causes __class__ to be not defined when sys.settrace(trace) is set Message-ID: <1555003137.23.0.80747971032.issue36606@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I came across this issue in issue36593 where MagicMock had a custom __class__ attribute set and one of the methods used super() which caused __class__ not to be set. This seems to have been fixed in the past with issue12370 and a workaround to alias super at module level and use it was suggested in msg161704. Usage of the alias seems to solve the issue for Mock but the fix for __class__ breaks when sys.settrace is set. 
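For context on the itertools recipe proposal above (issue36604): the powerset() recipe below is the one already shown in the itertools documentation; the submitter's faster way of computing set(powerset(iterable)) lives only in the linked repository and is not reproduced here.

```python
from itertools import chain, combinations

def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
```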
Example code as below with custom __class__ defined and with running the code under sys.settrace() super() doesn't set __class__ but using _safe_super alias works. Another aspect in the mock related issue is that the call to super() is under a codepath that is not executed during normal run but executed when sys.settrace during import itself. import sys _safe_super = super def trace(frame, event, arg): return trace if len(sys.argv) > 1: sys.settrace(trace) class SuperClass(object): def __init__(self): super().__init__() @property def __class__(self): return int class SafeSuperClass(object): def __init__(self): _safe_super(SafeSuperClass, self).__init__() @property def __class__(self): return int print(isinstance(SuperClass(), int)) print(isinstance(SafeSuperClass(), int)) Running above code with trace and without trace ? cpython git:(master) ? ./python.exe /tmp/buz.py True True ? cpython git:(master) ? ./python.exe /tmp/buz.py 1 False True There is a test for the above in Lib/test/test_super.py at https://github.com/python/cpython/blob/4c409beb4c360a73d054f37807d3daad58d1b567/Lib/test/test_super.py#L87 Add a trace as below in test_super.py at the top and the test case fails import sys def trace(frame, event, arg): return trace sys.settrace(trace) ? cpython git:(master) ? ./python.exe Lib/test/test_super.py ....................F ====================================================================== FAIL: test_various___class___pathologies (__main__.TestSuper) ---------------------------------------------------------------------- Traceback (most recent call last): File "Lib/test/test_super.py", line 100, in test_various___class___pathologies self.assertEqual(x.__class__, 413) AssertionError: .X'> != 413 ---------------------------------------------------------------------- Ran 21 tests in 0.058s FAILED (failures=1) ---------- components: Interpreter Core messages: 339988 nosy: benjamin.peterson, eric.snow, michael.foord, ncoghlan, nedbat, xtreak priority: normal severity: normal status: open title: calling super() causes __class__ to be not defined when sys.settrace(trace) is set type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 14:12:48 2019 From: report at bugs.python.org (Nick Davies) Date: Thu, 11 Apr 2019 18:12:48 +0000 Subject: [New-bugs-announce] [issue36607] asyncio.all_tasks() crashes if asyncio is used in multiple threads Message-ID: <1555006368.14.0.78767463153.issue36607@roundup.psfhosted.org> New submission from Nick Davies : This problem was identified in https://bugs.python.org/issue34970 but I think the fix might have been incorrect. The theory in issue34970 was that GC was causing the weakrefset for `all_tasks` to change during iteration. However Weakset provides an `_IterationGuard` class to prevent GC from changing the set during iteration and hence preventing this problem in a single thread. 
My thoughts on this problem are: - `asyncio.tasks._all_tasks` is shared for all users of asyncio (https://github.com/python/cpython/blob/3.7/Lib/asyncio/tasks.py#L818) - Any new Task constructed mutates `_all_tasks` (https://github.com/python/cpython/blob/3.7/Lib/asyncio/tasks.py#L117) - _IterationGuard won't protect iterations in this case because calls to Weakset.add will always commit changes even if there is something iterating (https://github.com/python/cpython/blob/3.6/Lib/_weakrefset.py#L83) - calls to `asyncio.all_tasks` or `asyncio.tasks.Task.all_tasks` crash if any task is started on any thread during iteration. Repro code: ``` import asyncio from threading import Thread async def do_nothing(): await asyncio.sleep(0) async def loop_tasks(): loop = asyncio.get_event_loop() while True: loop.create_task(do_nothing()) await asyncio.sleep(0.01) def old_thread(): loop = asyncio.new_event_loop() while True: asyncio.tasks.Task.all_tasks(loop=loop) def new_thread(): loop = asyncio.new_event_loop() while True: asyncio.all_tasks(loop=loop) old_t = Thread(target=old_thread) new_t = Thread(target=new_thread) old_t.start() new_t.start() asyncio.run(loop_tasks()) ``` Output: ``` Exception in thread Thread-2: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "tmp/test_asyncio.py", line 25, in new_thread asyncio.all_tasks(loop=loop) File "/usr/lib/python3.7/asyncio/tasks.py", line 40, in all_tasks return {t for t in list(_all_tasks) File "/usr/lib/python3.7/_weakrefset.py", line 60, in __iter__ for itemref in self.data: RuntimeError: Set changed size during iteration Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "tmp/test_asyncio.py", line 19, in old_thread asyncio.tasks.Task.all_tasks(loop=loop) File "/usr/lib/python3.7/asyncio/tasks.py", line 52, in _all_tasks_compat return {t for t in list(_all_tasks) if futures._get_loop(t) is loop} File "/usr/lib/python3.7/_weakrefset.py", line 60, in __iter__ for itemref in self.data: RuntimeError: Set changed size during iteration ``` ---------- components: asyncio messages: 339991 nosy: Nick Davies, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.all_tasks() crashes if asyncio is used in multiple threads versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 11 18:13:02 2019 From: report at bugs.python.org (Sviatoslav Sydorenko) Date: Thu, 11 Apr 2019 22:13:02 +0000 Subject: [New-bugs-announce] [issue36608] Replace bundled pip and setuptools with a downloader in the ensurepip module Message-ID: <1555020782.09.0.948988048159.issue36608@roundup.psfhosted.org> New submission from Sviatoslav Sydorenko : Hi, I've noticed that there's an idea to not pollute Git tree with vendored blobs. In particular, `ensurepip` is one of the components doing this. Such a wish was expressed here: https://bugs.python.org/issue35277#msg330098 So I thought I'd take a stab at it... 
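One possible direction for the all_tasks() crash described in issue36607 above, sketched only as an illustration rather than the adopted fix: retry the snapshot of the WeakSet when another thread mutates it mid-copy instead of propagating the RuntimeError.

```python
def snapshot_tasks(weak_set, attempts=1000):
    # list(weak_set) can raise RuntimeError if a task is added concurrently;
    # retrying a bounded number of times works around the race.
    for _ in range(attempts):
        try:
            return list(weak_set)
        except RuntimeError:
            continue
    raise RuntimeError("could not obtain a consistent snapshot of tasks")
```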
---------- components: Library (Lib) messages: 339998 nosy: dstufft, pradyunsg, serhiy.storchaka, webknjaz priority: normal severity: normal status: open title: Replace bundled pip and setuptools with a downloader in the ensurepip module type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 12 04:27:01 2019 From: report at bugs.python.org (=?utf-8?b?7KCV7ZWc7IaU?=) Date: Fri, 12 Apr 2019 08:27:01 +0000 Subject: [New-bugs-announce] [issue36609] activate.ps1 in venv for Windows should be encoded with BOM Message-ID: <1555057621.3.0.569629165063.issue36609@roundup.psfhosted.org> New submission from ??? : "activate.ps1" (venv) is currently encoded as UTF8 without a BOM. But this causes an error if the path of an environment contains non-ASCII characters. It seems PowerShell can't recognize UTF8 without a BOM. If I change the encoding of activate.ps1 to UTF8-BOM, it works well. So I think activate.ps1 should be encoded as UTF8-BOM. https://stackoverflow.com/questions/14482253/utf8-script-in-powershell-outputs-incorrect-characters ---------- messages: 340014 nosy: ??? priority: normal severity: normal status: open title: activate.ps1 in venv for Windows should be encoded with BOM type: crash versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 12 04:56:34 2019 From: report at bugs.python.org (Jakub Kulik) Date: Fri, 12 Apr 2019 08:56:34 +0000 Subject: [New-bugs-announce] [issue36610] os.sendfile can return EINVAL on Solaris Message-ID: <1555059394.06.0.893471228224.issue36610@roundup.psfhosted.org> New submission from Jakub Kulik : Hi, We have several tests failing on Solaris due to the slightly different behavior of the os.sendfile function. sendfile on Solaris can raise EINVAL if the offset is equal to or bigger than the size of the file (Python expects that it will return 0 bytes sent in that case). I managed to patch `socket.py` with additional checks in two places (patch attached); Python 3.8 introduced sendfile in the shutil.py module, where I don't have the fsize variable so easily accessible, and so I am unsure what to do with it. Also, I am not even sure if this is the correct way to handle this. Maybe this should be patched somewhere in the .c file? Or there might be other systems with the same behavior and all I need to do is adjust some define guards there... EINVAL can also mean other things, so I guess I cannot just catch that errno and continue as with a returned 0.
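Returning to the activate.ps1 report above (issue36609): a short sketch of writing the script with a UTF-8 BOM from Python, using the 'utf-8-sig' codec which emits the BOM automatically. The file name and content here are placeholders, not the actual venv template.

```python
script_text = '$env:VIRTUAL_ENV = "C:\\path\\to\\venv"\n'
with open("activate.ps1", "w", encoding="utf-8-sig") as f:
    f.write(script_text)
```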
Thanks ---------- components: Library (Lib) files: sendfile.patch keywords: patch messages: 340017 nosy: kulikjak priority: normal severity: normal status: open title: os.sendfile can return EINVAL on Solaris type: crash versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48262/sendfile.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 06:16:34 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 12 Apr 2019 10:16:34 +0000 Subject: [New-bugs-announce] [issue36611] Debug memory allocators: remove useless "serialno" field to reduce memory footprint Message-ID: <1555064194.89.0.870992246861.issue36611@roundup.psfhosted.org> New submission from STINNER Victor : When PYTHONMALLOC=debug environment variable or -X dev command line option is used, Python installs debug hooks on memory allocators which add 2 size_t before and 2 size_t after each memory block: it adds 32 bytes to every memory allocation. I'm debugging crashes and memory leaks in CPython for 10 years, and I simply never had to use "serialno". So I simply propose attached pull request to remove it to reduce the memory footprint: I measured a reduction around -5% (ex: 1.2 MiB on 33.0 MiB when running test_asyncio). A smaller memory footprint allows to use this feature on devices with small memory, like embedded devices. The change also fix race condition in debug memory allocators: bpo-31473, "Debug hooks on memory allocators are not thread safe (serialno variable)". Using tracemalloc, it is already possible (since Python 3.6) to find where a memory block has been allocated, and so decide where to put a breakpoint when debugging. If someone cares about the "serialno" field, maybe we can keep code using a compilation flag, like a C #define. "serialno" is documented as: "an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out." But again, I never used it... -- Some examples of the *peak* memory usage without => with the change: * -c pass: 2321.8 kB => 2437.1 kB (-115.3 kiB, -5%) * -m test test_os test_sys: 14252.3 kB => 13598.6 kB (-653.7 kiB, -5%) * -m test test_asyncio: 34194.2 kB => 32963.1 kB (-1231.1 kiB, -4%) Command used to measure the memory consumption: $ ./python -i -X tracemalloc -c pass >>> import tracemalloc; print("%.1f kB" % (tracemalloc.get_traced_memory()[1] / 1024.)) With the patch: diff --git a/Modules/_tracemalloc.c b/Modules/_tracemalloc.c index c5d5671032..e010c2ef84 100644 --- a/Modules/_tracemalloc.c +++ b/Modules/_tracemalloc.c @@ -582,6 +582,8 @@ tracemalloc_add_trace(unsigned int domain, uintptr_t ptr, _Py_hashtable_entry_t* entry; int res; + size += 4 * sizeof(size_t); + assert(_Py_tracemalloc_config.tracing); traceback = traceback_new(); Replace 4 with 3 to measure memory used with the change. -- Since Python 3.6, when the debug memory allocator detects a bug (ex: buffer overflow), it now also displays the Python traceback where the memory block has been allocated if tracemalloc is tracing Python memory allocations. Example with buffer_overflow.py: --- import _testcapi def func(): _testcapi.pymem_buffer_overflow() def main(): func() if __name__ == "__main__": main() --- Output: --- $ ./python -X tracemalloc=10 -X dev bug.py Debug memory block at address p=0x7f45e85c3270: API 'm' 16 bytes originally requested The 7 pad bytes at p-7 are FORBIDDENBYTE, as expected. 
The 8 pad bytes at tail=0x7f45e85c3280 are not all FORBIDDENBYTE (0xfd): at tail+0: 0x78 *** OUCH at tail+1: 0xfd at tail+2: 0xfd at tail+3: 0xfd at tail+4: 0xfd at tail+5: 0xfd at tail+6: 0xfd at tail+7: 0xfd Data at p: cd cd cd cd cd cd cd cd cd cd cd cd cd cd cd cd Memory block allocated at (most recent call first): File "bug.py", line 4 File "bug.py", line 7 File "bug.py", line 10 Fatal Python error: bad trailing pad byte Current thread 0x00007f45f5660740 (most recent call first): File "bug.py", line 4 in func File "bug.py", line 7 in main File "bug.py", line 10 in Aborted (core dumped) --- The interesting part is "Memory block allocated at (most recent call first):". Traceback reconstructed manually: --- Memory block allocated at (most recent call first): File "bug.py", line 4 _testcapi.pymem_buffer_overflow() File "bug.py", line 7 func() File "bug.py", line 10 main() --- You can see exactly where the memory block has been allocated. Note: Internally, the _PyTraceMalloc_GetTraceback() function is used to get the traceback where a memory block has been allocated. -- Extract of _PyMem_DebugRawAlloc() in Objects/obmalloc.c: /* Let S = sizeof(size_t). The debug malloc asks for 4*S extra bytes and fills them with useful stuff, here calling the underlying malloc's result p: p[0: S] Number of bytes originally asked for. This is a size_t, big-endian (easier to read in a memory dump). p[S] API ID. See PEP 445. This is a character, but seems undocumented. p[S+1: 2*S] Copies of FORBIDDENBYTE. Used to catch under- writes and reads. p[2*S: 2*S+n] The requested memory, filled with copies of CLEANBYTE. Used to catch reference to uninitialized memory. &p[2*S] is returned. Note that this is 8-byte aligned if pymalloc handled the request itself. p[2*S+n: 2*S+n+S] Copies of FORBIDDENBYTE. Used to catch over- writes and reads. p[2*S+n+S: 2*S+n+2*S] A serial number, incremented by 1 on each call to _PyMem_DebugMalloc and _PyMem_DebugRealloc. This is a big-endian size_t. If "bad memory" is detected later, the serial number gives an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out. */ /* Layout: [SSSS IFFF CCCC...CCCC FFFF NNNN] * ^--- p ^--- data ^--- tail S: nbytes stored as size_t I: API identifier (1 byte) F: Forbidden bytes (size_t - 1 bytes before, size_t bytes after) C: Clean bytes used later to store actual data N: Serial number stored as size_t */ The last size_t written at the end of each memory block is "serialno". It is documented as: "an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out." ---------- components: Interpreter Core messages: 340019 nosy: vstinner priority: normal severity: normal status: open title: Debug memory allocators: remove useless "serialno" field to reduce memory footprint versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 08:09:53 2019 From: report at bugs.python.org (Vratko Polak) Date: Fri, 12 Apr 2019 12:09:53 +0000 Subject: [New-bugs-announce] [issue36612] Unittest document is not clear on SetUpClass calls Message-ID: <1555070993.71.0.91547720267.issue36612@roundup.psfhosted.org> New submission from Vratko Polak : One particular paragraph from unittest.rst is not clear enough: "If you want the setUpClass and tearDownClass on base classes called then you must call up to them yourself. The implementations in TestCase are empty." 
It has sparkled a debate here [0]. Example: A class SuperTestCase, which inherits from unittest.TestCase, defines some non-trivial setUpClass class method. Then a class SubTestCase, which inherits from SuperTestCase, wants to have SuperTestCase.setUpClass executed as its setUpClass. Does SubTestCase need to override setUpClass just to call SuperTestCase.setUpClass (as the paragraphs might suggest), or can it rely in inheritance to have it executed without overriding? I will create GitHub PR soon. [0] https://gerrit.fd.io/r/#/c/18579/1/test/test_sparse_vec.py at 14 ---------- assignee: docs at python components: Documentation messages: 340028 nosy: docs at python, vrpolakatcisco priority: normal severity: normal status: open title: Unittest document is not clear on SetUpClass calls type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 08:38:08 2019 From: report at bugs.python.org (Aleksandr Balezin) Date: Fri, 12 Apr 2019 12:38:08 +0000 Subject: [New-bugs-announce] [issue36613] asyncio._wait() don't remove callback in case of exception Message-ID: <1555072688.4.0.270726893934.issue36613@roundup.psfhosted.org> New submission from Aleksandr Balezin : Attached script shows unexpected behavior of the wait() function. The wait_ function adds done callback on every call and removes it only if a waiter is successfully awaited. In case of CancelledError exception during "await waiter", callbacks are being accumulated infinitely in task._callbacks. ---------- components: asyncio files: asyncio_wait_callbacks_leak.py messages: 340034 nosy: asvetlov, gescheit, yselivanov priority: normal severity: normal status: open title: asyncio._wait() don't remove callback in case of exception type: resource usage versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48263/asyncio_wait_callbacks_leak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 10:01:34 2019 From: report at bugs.python.org (weispinc) Date: Fri, 12 Apr 2019 14:01:34 +0000 Subject: [New-bugs-announce] [issue36614] Popen Message-ID: <1555077694.98.0.0934543483823.issue36614@roundup.psfhosted.org> New submission from weispinc : Popen, when run on Windows server 2019 does not output binary by default. Tried Python 3.5 3.6 3.7. OK on Windows server 2016 and 1012. ---------- messages: 340044 nosy: weispinc priority: normal severity: normal status: open title: Popen type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 10:46:33 2019 From: report at bugs.python.org (cagney) Date: Fri, 12 Apr 2019 14:46:33 +0000 Subject: [New-bugs-announce] [issue36615] why call _Py_set_inheritable(0) from os.open() when O_CLOEXEC? Message-ID: <1555080393.29.0.115799834889.issue36615@roundup.psfhosted.org> New submission from cagney : When O_CLOEXEC is defined the file is opened with that flag (YA! - this means that the operation is atomic and, by default, the FD will be closed across os.posix_spawn()). However the code then goes on an executes: #ifndef MS_WINDOWS if (_Py_set_inheritable(fd, 0, atomic_flag_works) < 0) { close(fd); return -1; } #endif should this also be #ifndef O_CLOEXEC? 
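On the question raised in issue36612 above, a small sketch (not taken from the unittest documentation) showing that an inherited setUpClass() runs without being overridden; the explicit call up to the base class is only needed when the subclass defines its own setUpClass().

```python
import unittest

class SuperTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.resource = "prepared"

class SubTestCase(SuperTestCase):
    # No setUpClass() here: SuperTestCase.setUpClass() still runs for this class.
    def test_resource(self):
        self.assertEqual(self.resource, "prepared")

class SubWithExtraSetup(SuperTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()  # the "call up to them yourself" case from the docs
        cls.extra = "more"
```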
---------- messages: 340050 nosy: cagney priority: normal severity: normal status: open title: why call _Py_set_inheritable(0) from os.open() when O_CLOEXEC? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 12:25:26 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Fri, 12 Apr 2019 16:25:26 +0000 Subject: [New-bugs-announce] [issue36616] Optimize thread state handling in function call code Message-ID: <1555086326.32.0.447085915153.issue36616@roundup.psfhosted.org> New submission from Jeroen Demeyer : The bytecode interpreter uses an inline function call_function() to handle most function calls. To check for profiling, call_function() needs to call to PyThreadState_GET(). In the reference implementation of PEP 590, I saw that we can remove these PyThreadState_GET() calls by passing the thread state from the main eval loop to call_function(). I suggest to apply this optimization now, because they make sense independently of PEP 580 and PEP 590 and to give a better baseline for performance comparisons. ---------- components: Interpreter Core messages: 340078 nosy: Mark.Shannon, jdemeyer, petr.viktorin priority: normal severity: normal status: open title: Optimize thread state handling in function call code versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 13:07:05 2019 From: report at bugs.python.org (Dan Snider) Date: Fri, 12 Apr 2019 17:07:05 +0000 Subject: [New-bugs-announce] [issue36617] The rich comparison operators are second class citizens Message-ID: <1555088825.83.0.858105919161.issue36617@roundup.psfhosted.org> New submission from Dan Snider : The rich comparison operators have an (far as I can tell, unnecessary) limitation compared to the other binary operators, being that the result of an unparenthesized comparison expression cannot be unpacked using the *iterable "unpack" operator (does that thing have an official name?) Here's a silly demonstration of what I'm talking about: >>> if 1: ... parser.expr("[*+-~d< Traceback (most recent call last): File "", line 3, in File "", line 1 [*+-~d<=b-~+_] ^ SyntaxError: invalid syntax >>> if 1: ... parser.expr("f(*+d<<-b)") ... parser.expr("f(*+d<=-b)") ... Because the limitation is not present for function calls, I suspect this is simply a "typo" that's gone unnoticed for years, due to nobody ever trying it. I'm hardly an expert on the parser and can barely read the grammar file so i might be totally wrong here. 
But then, what would be the difference between the expressions: [*a+b+c+d, *e-f] and [*a>> class S(list): __lt__ = list.__add__ ---------- messages: 340084 nosy: bup priority: normal severity: normal status: open title: The rich comparison operators are second class citizens _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 13:36:22 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 12 Apr 2019 17:36:22 +0000 Subject: [New-bugs-announce] [issue36618] clang expects memory aligned on 16 bytes, but pymalloc aligns to 8 bytes Message-ID: <1555090582.18.0.289146441616.issue36618@roundup.psfhosted.org> New submission from STINNER Victor : On x86-64, clang -O3 compiles the following function: PyCArgObject * PyCArgObject_new(void) { PyCArgObject *p; p = PyObject_New(PyCArgObject, &PyCArg_Type); if (p == NULL) return NULL; p->pffi_type = NULL; p->tag = '\0'; p->obj = NULL; memset(&p->value, 0, sizeof(p->value)); return p; } like that: 0x00007fffe9c6acb0 <+0>: push rax 0x00007fffe9c6acb1 <+1>: mov rdi,QWORD PTR [rip+0xe308] # 0x7fffe9c78fc0 0x00007fffe9c6acb8 <+8>: call 0x7fffe9c5e8a0 <_PyObject_New at plt> 0x00007fffe9c6acbd <+13>: test rax,rax 0x00007fffe9c6acc0 <+16>: je 0x7fffe9c6acdf 0x00007fffe9c6acc2 <+18>: mov QWORD PTR [rax+0x20],0x0 0x00007fffe9c6acca <+26>: mov BYTE PTR [rax+0x28],0x0 0x00007fffe9c6acce <+30>: xorps xmm0,xmm0 0x00007fffe9c6acd1 <+33>: movaps XMMWORD PTR [rax+0x30],xmm0 0x00007fffe9c6acd5 <+37>: mov QWORD PTR [rax+0x40],0x0 0x00007fffe9c6acdd <+45>: pop rcx 0x00007fffe9c6acde <+46>: ret 0x00007fffe9c6acdf <+47>: xor eax,eax 0x00007fffe9c6ace1 <+49>: pop rcx 0x00007fffe9c6ace2 <+50>: ret The problem is that movaps requires the memory address to be aligned on 16 bytes, whereas PyObject_New() uses pymalloc allocator (the requested size is 80 bytes, pymalloc supports allocations up to 512 bytes) and pymalloc only provides alignment on 8 bytes. If PyObject_New() returns an address not aligned on 16 bytes, PyCArgObject_new() crash immediately with a segmentation fault (SIGSEGV). CPython must be compiled using -fmax-type-align=8 to avoid such alignment crash. Using this compiler flag, clag emits expected machine code: 0x00007fffe9caacb0 <+0>: push rax 0x00007fffe9caacb1 <+1>: mov rdi,QWORD PTR [rip+0xe308] # 0x7fffe9cb8fc0 0x00007fffe9caacb8 <+8>: call 0x7fffe9c9e8a0 <_PyObject_New at plt> 0x00007fffe9caacbd <+13>: test rax,rax 0x00007fffe9caacc0 <+16>: je 0x7fffe9caacdf 0x00007fffe9caacc2 <+18>: mov QWORD PTR [rax+0x20],0x0 0x00007fffe9caacca <+26>: mov BYTE PTR [rax+0x28],0x0 0x00007fffe9caacce <+30>: xorps xmm0,xmm0 0x00007fffe9caacd1 <+33>: movups XMMWORD PTR [rax+0x30],xmm0 0x00007fffe9caacd5 <+37>: mov QWORD PTR [rax+0x40],0x0 0x00007fffe9caacdd <+45>: pop rcx 0x00007fffe9caacde <+46>: ret 0x00007fffe9caacdf <+47>: xor eax,eax 0x00007fffe9caace1 <+49>: pop rcx 0x00007fffe9caace2 <+50>: ret "movaps" instruction becomes "movups" instruction: "a" stands for "aligned" in movaps, whereas "u" stands for "unaligned" in movups. 
---------- components: Build messages: 340087 nosy: vstinner priority: normal severity: normal status: open title: clang expects memory aligned on 16 bytes, but pymalloc aligns to 8 bytes versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 13:58:10 2019 From: report at bugs.python.org (cagney) Date: Fri, 12 Apr 2019 17:58:10 +0000 Subject: [New-bugs-announce] [issue36619] when is os.posix_spawn(setsid=True) safe? Message-ID: <1555091890.2.0.391429247327.issue36619@roundup.psfhosted.org> New submission from cagney : How can I detect that os.posix_spawn(setsid=True) is available at runtime? I'd like to use os.posix_spawn(setsid=True) when it is available, and (assuming I'm getting this right) os.posix_spawn(setpgroup=0) as a poor fallback. ---------- components: IO messages: 340091 nosy: cagney priority: normal severity: normal status: open title: when is os.posix_spawn(setsid=True) safe? type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 16:09:17 2019 From: report at bugs.python.org (Charles Merriam) Date: Fri, 12 Apr 2019 20:09:17 +0000 Subject: [New-bugs-announce] [issue36620] Documentation missing parameter for Itertools.zip_longest Message-ID: <1555099757.43.0.201440165942.issue36620@roundup.psfhosted.org> New submission from Charles Merriam : On page: https://docs.python.org/3.8/library/itertools.html In the heading summary, in the "Iterators terminating on the shortest input sequence:" section, in the "zip_longest()" table row, in the "Arguments" column, the text "p, q, ..." should be "p, q, ... [, fillvalue=None]" ---------- assignee: docs at python components: Documentation messages: 340107 nosy: CharlesMerriam, docs at python priority: normal severity: normal status: open title: Documentation missing parameter for Itertools.zip_longest type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 16:35:45 2019 From: report at bugs.python.org (Jordan Hueckstaedt) Date: Fri, 12 Apr 2019 20:35:45 +0000 Subject: [New-bugs-announce] [issue36621] shutil.rmtree follows junctions on windows Message-ID: <1555101345.3.0.431387403068.issue36621@roundup.psfhosted.org> New submission from Jordan Hueckstaedt : shutil.rmtree follows junctions / reparse points on windows and will delete files in the target link directory. ---------- components: IO, Windows messages: 340111 nosy: Jordan Hueckstaedt, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: shutil.rmtree follows junctions on windows versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 20:36:55 2019 From: report at bugs.python.org (Sep Dehpour) Date: Sat, 13 Apr 2019 00:36:55 +0000 Subject: [New-bugs-announce] [issue36622] Inconsistent exponent notation formatting Message-ID: <1555115815.06.0.0326129419561.issue36622@roundup.psfhosted.org> New submission from Sep Dehpour : Floats and Decimals have inconsistent exponent notation formatting: >>> '{:.5e}'.format(Decimal('2.0001')) '2.00010e+0' >>> '{:.5e}'.format(2.0001) '2.00010e+00' This is causing issues for us since we use the scientific notation formatted string of numbers to compare them. 
Between decimals and floats, one produces '+0' while the other one produces '+00' ---------- messages: 340136 nosy: seperman priority: normal severity: normal status: open title: Inconsistent exponent notation formatting type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 12 21:05:24 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 13 Apr 2019 01:05:24 +0000 Subject: [New-bugs-announce] [issue36623] Clean unused parser headers Message-ID: <1555117524.95.0.888153929323.issue36623@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : After the removal of pgen, there are multiple parser headers that are not used anymore or ar lacking implementations. ---------- components: Interpreter Core messages: 340140 nosy: pablogsal priority: normal severity: normal status: open title: Clean unused parser headers versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 13 08:25:43 2019 From: report at bugs.python.org (Michael Felt) Date: Sat, 13 Apr 2019 12:25:43 +0000 Subject: [New-bugs-announce] [issue36624] cleanup the stdlib and tests with regard to sys.platform usage Message-ID: <1555158343.77.0.950141438455.issue36624@roundup.psfhosted.org> New submission from Michael Felt : Back in 2012 (issue12326 and issue12795), and just recently (issue36588) sys.platform has been modified (and documented) to not return the platform version. Additionally, the recommendation is to use the form sys.platform.startswith() - to continue to be backwards compatible. IMHO - looking forward - Python3.8 and later - we should not be using the recommendation for 'backwards-compatibility' in our code (so this PR will not be considered for back-porting) - in our stdlib, tests, and - should it occur - in "core" code. We should be testing for equality. Further, imho, the change should not be sys.platform == but should be platform.system() == , or platform.system() in ('AIX', 'Darwin', 'Linux') -- and adjust the list so that the most frequently used platform is tested first (e.g., performance-wise ('Linux', 'Darwin', 'AIX') would better reflect platform importance. OR - should the change just continue to use sys.platform - even though this is a build-time value, not a run-time value. I propose to do this in separate PR - one for each platform of AIX, Darwin and Linux. (I would also add Windows, but that would be to replace the equivalence of sys.platform == 'win32' with platform.system() == 'Windows', and perhaps, os.name == 'nt' with platform.system() == 'Windows'. Reaction from other platforms dependent on os.name == 'nt' (cygwin?) would be helpful.) 
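A user-side workaround sketch for the exponent formatting report above (issue36622), normalizing the exponent width so Decimal and float render identically for string comparison; this is not a proposed change to the format mini-language, and the two-digit exponent is an arbitrary choice.

```python
from decimal import Decimal

def sci(value, digits=5):
    mantissa, _, exponent = "{:.{}e}".format(value, digits).partition("e")
    return "{}e{:+03d}".format(mantissa, int(exponent))

assert sci(Decimal("2.0001")) == sci(2.0001) == "2.00010e+00"
```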
Finally, while I do not want to rush this - I would like to try and target getting this complete in time for Python3.8 ---------- components: Library (Lib), Tests messages: 340155 nosy: Michael.Felt priority: normal severity: normal status: open title: cleanup the stdlib and tests with regard to sys.platform usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 13 18:58:43 2019 From: report at bugs.python.org (=?utf-8?q?Jakub_Moli=C5=84ski?=) Date: Sat, 13 Apr 2019 22:58:43 +0000 Subject: [New-bugs-announce] [issue36625] Obsolete comments in docstrings in fractions module Message-ID: <1555196323.81.0.314132124963.issue36625@roundup.psfhosted.org> New submission from Jakub Moli?ski : 3 docstrings in fractions.Fraction contain comments referring to python 3.0. def __floor__(a): """Will be math.floor(a) in 3.0.""" def __ceil__(a): """Will be math.ceil(a) in 3.0.""" def __round__(self, ndigits=None): """Will be round(self, ndigits) in 3.0. Rounds half toward even. """ To make it consistent with other docstrings in the module these should be changed to """math.floor(a)""", """math.ceil(a)""", and """round(self, ndigits) Rounds half toward even. """ ---------- assignee: docs at python components: Documentation messages: 340174 nosy: docs at python, jakub.molinski priority: normal severity: normal status: open title: Obsolete comments in docstrings in fractions module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 14 01:57:31 2019 From: report at bugs.python.org (Dan Timofte) Date: Sun, 14 Apr 2019 05:57:31 +0000 Subject: [New-bugs-announce] [issue36626] asyncio run_forever blocks indefinitely Message-ID: <1555221451.6.0.952991660079.issue36626@roundup.psfhosted.org> New submission from Dan Timofte : after starting run_forever if all scheduled tasks are consumed run_once will issue a KqueueSelector.select(None) which will block indefinitely : https://www.freebsd.org/cgi/man.cgi?query=select&sektion=2&apropos=0&manpath=FreeBSD+12.0-RELEASE+and+Ports#DESCRIPTION after this new tasks are not being processed, trying to stop event loop with stop() is not working. this blocks immediatly : import asyncio import sys import signal def cb_signal_handler(signum, frame): asyncio.get_event_loop().stop() def main(): signal.signal(signal.SIGINT, cb_signal_handler) # asyncio.get_event_loop().create_task(asyncio.sleep(1)) asyncio.get_event_loop().run_forever() main() With asyncio.sleep uncomment it will block after 4 cycles. 
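For the run_forever report above (issue36626), a sketch of the commonly recommended pattern on Unix event loops: registering the handler through the loop makes stop() wake the selector instead of leaving it blocked in select(). This is offered as a workaround, not as a verdict on whether the reported behaviour is a bug.

```python
import asyncio
import signal

def main():
    loop = asyncio.get_event_loop()
    loop.add_signal_handler(signal.SIGINT, loop.stop)
    loop.run_forever()

if __name__ == "__main__":
    main()
```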
---------- components: asyncio, macOS messages: 340182 nosy: asvetlov, dantimofte, ned.deily, ronaldoussoren, yselivanov priority: normal severity: normal status: open title: asyncio run_forever blocks indefinitely versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 14 05:32:03 2019 From: report at bugs.python.org (Tsvika Shapira) Date: Sun, 14 Apr 2019 09:32:03 +0000 Subject: [New-bugs-announce] [issue36627] composing filter() doesn't work as expected Message-ID: <1555234323.36.0.791779964301.issue36627@roundup.psfhosted.org> New submission from Tsvika Shapira : the following code: ``` lists_to_filter = [ ['a', 'exclude'], ['b'] ] # notice that when 'exclude' is the last element, the code returns the expected result for exclude_label in ['exclude', 'something']: lists_to_filter = (labels_list for labels_list in lists_to_filter if exclude_label not in labels_list) # notice that changing the line above to the commented line below (i.e. expanding the generator to a list) will make the code output the expected result, # i.e. the issue is only when using filter on another filter, and not on a list # lists_to_filter = [labels_list for labels_list in lists_to_filter if exclude_label not in labels_list] lists_to_filter = list(lists_to_filter) print(lists_to_filter) ``` as far as i understand, the code above should output "[['b']]" instead it outputs "[['a', 'exclude'], ['b']]" ---------- messages: 340200 nosy: Tsvika Shapira priority: normal severity: normal status: open title: composing filter() doesn't work as expected type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 14 09:58:27 2019 From: report at bugs.python.org (Aditya Shankar) Date: Sun, 14 Apr 2019 13:58:27 +0000 Subject: [New-bugs-announce] [issue36628] Enhancement: i-Strings Message-ID: <1555250307.08.0.914630498101.issue36628@roundup.psfhosted.org> New submission from Aditya Shankar : Problem: multiline strings are a pain to represent (other than of-course in docstrings), representing a multiline string inside a function looks something like this - def foo(): # some code ... ... # some code text = """abc meta alpha chronos dudes uptomes this text is nonsense""" return somethingwith(text) or def foo(): # some code ... ... # some code text = "\n".join(["abc meta alpha chronos", "dudes uptomes this text", "is nonsense"]) return somethingwith(text) an enhancement would be - def foo(): # some code ... ... # some code text = i""" abc meta alpha chronos dudes uptomes this text is nonsense """ return somethingwith(text) i.e. all initial spaces are not considered as a part of the string in each ine for example while throwing an exception - def foo(bad_param): ... try: some_function_on(bad_param) except someException: throw(fi""" you cant do that because, and I'm gonna explain this in a paragraph of text with this {variable} because it explains things more clearly, also here is the {bad_param} """) ... which is far neater than - def foo(bad_param): ... try: some_function_on(bad_param) except someException: throw(f"""you cant do that because, and I'm gonna explain this in a paragraph of text with this {variable} because it explains things more clearly, also here is the {bad_param}""") ... 
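Back to the filter-composition report above (issue36627): the generator expressions are evaluated lazily and each one looks up exclude_label only when finally consumed, so every layer sees the loop variable's last value. A sketch of one way to bind the current value at each step, which produces the expected [['b']] (the helper name is made up):

```python
from functools import partial

def keeps_label_out(label, labels_list):
    return label not in labels_list

lists_to_filter = [['a', 'exclude'], ['b']]
for exclude_label in ['exclude', 'something']:
    lists_to_filter = filter(partial(keeps_label_out, exclude_label),
                             lists_to_filter)

print(list(lists_to_filter))  # [['b']]
```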
pros (of the i-Strings proposal above):
- represented code is closer to output text
- implementation should not be too hard

---------- components: Interpreter Core messages: 340208 nosy: Aditya Shankar priority: normal severity: normal status: open title: Enhancement: i-Strings type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Apr 14 10:34:01 2019 From: report at bugs.python.org (Marat Sharafutdinov) Date: Sun, 14 Apr 2019 14:34:01 +0000 Subject: [New-bugs-announce] [issue36629] imaplib test fails with errno 101 Message-ID: <1555252441.31.0.294648595666.issue36629@roundup.psfhosted.org> New submission from Marat Sharafutdinov : ====================================================================== FAIL: test_imap4_host_default_value (test.test_imaplib.TestImaplib) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/python/Lib/test/test_imaplib.py", line 94, in test_imap4_host_default_value self.assertIn(cm.exception.errno, expected_errnos) AssertionError: 101 not found in [111, 99] ---------------------------------------------------------------------- I guess `errno.ENETUNREACH` should be added to the `expected_errnos`, as is done within `test_create_connection` (test.test_socket.NetworkConnectionNoServer). ---------- components: Tests messages: 340212 nosy: decaz priority: normal severity: normal status: open title: imaplib test fails with errno 101 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Apr 14 15:32:58 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sun, 14 Apr 2019 19:32:58 +0000 Subject: [New-bugs-announce] [issue36630] failure of test_colors_funcs in test_curses with ncurses 6.1 Message-ID: <1555270378.07.0.766030130411.issue36630@roundup.psfhosted.org> New submission from Xavier de Gaye : ncurses version: 6.1 TERM: screen-256color $ ./python -m test -u curses test_curses Run tests sequentially 0:00:00 load avg: 0.55 [1/1] test_curses test test_curses failed -- Traceback (most recent call last): File "/path/to/Lib/test/test_curses.py", line 285, in test_colors_funcs curses.pair_content(curses.COLOR_PAIRS - 1) OverflowError: signed short integer is greater than maximum test_curses failed == Tests result: FAILURE == Not sure if the following is relevant. In /usr/include/ncurses.h: NCURSES_WRAPPED_VAR(int, COLOR_PAIRS); ... #define COLOR_PAIRS NCURSES_PUBLIC_VAR(COLOR_PAIRS()) ... extern NCURSES_EXPORT_VAR(int) COLOR_PAIRS; The ncurses 6.1 release notes [1] say: The TERMINAL structure in <term.h> is now opaque. Doing that allowed making the structure larger, to hold the extended numeric data. ... The new data in TERMINAL holds the same information as TERMTYPE, but with larger numbers ('int' versus 'short').
[1] https://www.gnu.org/software/ncurses/ ---------- components: Tests messages: 340228 nosy: xdegaye priority: normal severity: normal status: open title: failure of test_colors_funcs in test_curses with ncurses 6.1 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 06:39:11 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 15 Apr 2019 10:39:11 +0000 Subject: [New-bugs-announce] [issue36631] test_urllib2net: test_ftp_no_timeout() killed after a timeout of 15 min Message-ID: <1555324751.9.0.575658866792.issue36631@roundup.psfhosted.org> New submission from STINNER Victor : It seems like test_urllib2net.test_ftp_no_timeout() has no timeout: it should use a timeout to not block the whole test suite if the FTP server is down. x86 Gentoo Non-Debug with X 3.7: https://buildbot.python.org/all/#/builders/115/builds/1044 0:26:50 load avg: 1.68 [413/416] test_genericpath passed -- running: test_urllib2net (11 min 59 sec) 0:26:52 load avg: 1.68 [414/416] test_tempfile passed -- running: test_urllib2net (12 min 1 sec) 0:26:52 load avg: 1.68 [415/416] test_pipes passed -- running: test_urllib2net (12 min 1 sec) running: test_urllib2net (12 min 31 sec) running: test_urllib2net (13 min 1 sec) running: test_urllib2net (13 min 31 sec) running: test_urllib2net (14 min 1 sec) running: test_urllib2net (14 min 31 sec) 0:29:51 load avg: 1.49 [416/416/1] test_urllib2net crashed (Exit code 1) Timeout (0:15:00)! Thread 0xb7be2700 (most recent call first): File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/socket.py", line 716 in create_connection File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/ftplib.py", line 152 in connect File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 2384 in init File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 2375 in __init__ File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 1555 in connect_ftp File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 1533 in ftp_open File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 503 in _call_chain File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 543 in _open File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 525 in open File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/urllib/request.py", line 222 in urlopen File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/test/test_urllib2net.py", line 19 in _retry_thrice File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/test/test_urllib2net.py", line 27 in wrapped File "/buildbot/buildarea/cpython/3.7.ware-gentoo-x86.nondebug/build/Lib/test/test_urllib2net.py", line 327 in test_ftp_no_timeout ---------- components: Tests messages: 340257 nosy: vstinner priority: normal severity: normal status: open title: test_urllib2net: test_ftp_no_timeout() killed after a timeout of 15 min versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 07:01:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 15 Apr 2019 
11:01:40 +0000 Subject: [New-bugs-announce] [issue36632] test_multiprocessing_forkserver: test_rapid_restart() leaked a dangling process on AMD64 FreeBSD 10-STABLE Non-Debug 3.x Message-ID: <1555326100.12.0.0433329524894.issue36632@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 FreeBSD 10-STABLE Non-Debug 3.x https://buildbot.python.org/all/#/builders/167/builds/777 0:04:20 load avg: 4.05 [113/420/1] test_multiprocessing_forkserver failed (env changed) (3 min 43 sec) ... test_listener_client (test.test_multiprocessing_forkserver.WithProcessesTestListenerClient) ... ok test_lock (test.test_multiprocessing_forkserver.WithProcessesTestLock) ... ok test_lock_context (test.test_multiprocessing_forkserver.WithProcessesTestLock) ... ok test_rlock (test.test_multiprocessing_forkserver.WithProcessesTestLock) ... ok test_enable_logging (test.test_multiprocessing_forkserver.WithProcessesTestLogging) ... ok test_level (test.test_multiprocessing_forkserver.WithProcessesTestLogging) ... ok test_rapid_restart (test.test_multiprocessing_forkserver.WithProcessesTestManagerRestart) ... ok Warning -- Dangling processes: {} test_access (test.test_multiprocessing_forkserver.WithProcessesTestPicklingConnections) ... ok test_pickling (test.test_multiprocessing_forkserver.WithProcessesTestPicklingConnections) ... ok test_boundaries (test.test_multiprocessing_forkserver.WithProcessesTestPoll) ... ok ... ---------- components: Tests messages: 340260 nosy: vstinner priority: normal severity: normal status: open title: test_multiprocessing_forkserver: test_rapid_restart() leaked a dangling process on AMD64 FreeBSD 10-STABLE Non-Debug 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 09:51:34 2019 From: report at bugs.python.org (Jens Vagelpohl) Date: Mon, 15 Apr 2019 13:51:34 +0000 Subject: [New-bugs-announce] [issue36633] py_compile.compile: AttributeError on importlib.utils Message-ID: <1555336294.55.0.159986341873.issue36633@roundup.psfhosted.org> New submission from Jens Vagelpohl : The following code in py_compile.compile fails (tested on 3.6.6 and 3.7.3) with tracebacks that end like the one shown at the bottom. There's an AttributeError about importlib.utils. 
""" if cfile is None: if optimize >= 0: optimization = optimize if optimize >= 1 else '' cfile = importlib.util.cache_from_source(file, optimization=optimization) else: cfile = importlib.util.cache_from_source(file) """ Sample tail end of traceback: """ File "/Users/jens/src/.eggs/Chameleon-3.6-py3.7.egg/chameleon/template.py", line 243, in _cook cooked = self.loader.build(source, filename) File "/Users/jens/src/.eggs/Chameleon-3.6-py3.7.egg/chameleon/loader.py", line 177, in build py_compile.compile(name) File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/py_compile.py", line 130, in compile cfile = importlib.util.cache_from_source(file) AttributeError: module 'importlib' has no attribute 'util' """ ---------- components: Library (Lib) messages: 340271 nosy: dataflake priority: normal severity: normal status: open title: py_compile.compile: AttributeError on importlib.utils versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 10:15:59 2019 From: report at bugs.python.org (Bastian Wenzel) Date: Mon, 15 Apr 2019 14:15:59 +0000 Subject: [New-bugs-announce] [issue36634] venv: activate.bat fails for venv with parentheses in PATH Message-ID: <1555337759.6.0.97149991925.issue36634@roundup.psfhosted.org> New submission from Bastian Wenzel : After creating a virtual environment on win 7 (64bit) with: py -3.7 -m venv venv Running venv\Scripts\activate.bat will yield this result: \Common was unexpected at this time. (venv) C:\... My PATH variable contains a path that starts with: C:\Program Files (x86)\Common Files\... To me this looks like this issue for virtualenv: https://github.com/pypa/virtualenv/issues/35 https://github.com/pypa/virtualenv/pull/839 Running: (venv) C:\Tools\venv_test>where python C:\Python34\python.exe This is my default python on PATH. Doing this with virtualenv: (virtualenv) C:\Tools\venv_test>where python C:\Tools\venv_test\virtualenv\Scripts\python.exe C:\Python34\python.exe I really hope this is not a duplicate. ---------- components: Library (Lib) messages: 340274 nosy: BWenzel priority: normal severity: normal status: open title: venv: activate.bat fails for venv with parentheses in PATH type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 10:51:20 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 15 Apr 2019 14:51:20 +0000 Subject: [New-bugs-announce] [issue36635] Add _testinternalcapi module Message-ID: <1555339880.27.0.999764272678.issue36635@roundup.psfhosted.org> New submission from STINNER Victor : Python headers are being reorganized to clarify what's public, specific to CPython or "internal". See issues bpo-35134 (Add a new Include/cpython/ subdirectory) and bpo-35081 (Move internal headers to Include/internal/). Problem: the _testcapi module designed to only test the *public* API. Functions tested by _testcapi cannot be made internal. I propose to add a new _testinternalcapi module reserved to test internal APIs. Attached PR implements this idea: it makes _Py_GetConfigsAsDict() private and moves _testcapi.get_configs() to _testinternalcapi.get_configs(). 
---------- components: Tests messages: 340282 nosy: vstinner priority: normal severity: normal status: open title: Add _testinternalcapi module versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 13:52:43 2019 From: report at bugs.python.org (Drew Budwin) Date: Mon, 15 Apr 2019 17:52:43 +0000 Subject: [New-bugs-announce] [issue36636] Inner exception is not being raised using asyncio.gather Message-ID: <1555350763.65.0.175708085.issue36636@roundup.psfhosted.org> New submission from Drew Budwin : Using Python 3.7, I am trying to catch an exception and re-raise it by following an example I found on StackOverflow (https://stackoverflow.com/a/6246394/1595510). While the example does work, it doesn't seem to work for all situations. Below I have two asynchronous Python scripts that try to re-raise exceptions. The first example works, it will print both the inner and outer exception. import asyncio class Foo: async def throw_exception(self): raise Exception("This is the inner exception") async def do_the_thing(self): try: await self.throw_exception() except Exception as e: raise Exception("This is the outer exception") from e async def run(): await Foo().do_the_thing() def main(): loop = asyncio.get_event_loop() loop.run_until_complete(run()) if __name__ == "__main__": main() Running this will correctly output the following exception stack trace: $ py test.py Traceback (most recent call last): File "test.py", line 9, in do_the_thing await self.throw_exception() File "test.py", line 5, in throw_exception raise Exception("This is the inner exception") Exception: This is the inner exception The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 21, in main() File "test.py", line 18, in main loop.run_until_complete(run()) File "C:\Python37\lib\asyncio\base_events.py", line 584, in run_until_complete return future.result() File "test.py", line 14, in run await Foo().do_the_thing() File "test.py", line 11, in do_the_thing raise Exception("This is the outer exception") from e Exception: This is the outer exception However, in my next Python script, I have multiple tasks that I queue up that I want to get a similar exception stack trace from. Essentially, I except the above stack trace to be printed 3 times (once for each task in the following script). The only difference between the above and below scripts is the run() function. import asyncio class Foo: async def throw_exception(self): raise Exception("This is the inner exception") async def do_the_thing(self): try: await self.throw_exception() except Exception as e: raise Exception("This is the outer exception") from e async def run(): tasks = [] foo = Foo() tasks.append(asyncio.create_task(foo.do_the_thing())) tasks.append(asyncio.create_task(foo.do_the_thing())) tasks.append(asyncio.create_task(foo.do_the_thing())) results = await asyncio.gather(*tasks, return_exceptions=True) for result in results: if isinstance(result, Exception): print(f"Unexpected exception: {result}") def main(): loop = asyncio.get_event_loop() loop.run_until_complete(run()) if __name__ == "__main__": main() The above code snippet produces the disappointingly short exceptions lacking stack traces. 
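Related to the gather() report in issue36636 above: the exception objects returned with return_exceptions=True still carry their __traceback__ and __cause__ chain, so the full inner/outer stack traces can be printed without cancelling the other tasks. A sketch of such a helper (not part of asyncio):

```python
import traceback

def report_failures(results):
    """Print full chained tracebacks for exceptions returned by gather()."""
    for result in results:
        if isinstance(result, BaseException):
            traceback.print_exception(type(result), result, result.__traceback__)
```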
$ py test.py Unexpected exception: This is the outer exception Unexpected exception: This is the outer exception Unexpected exception: This is the outer exception If I change return_exceptions to be False, I will get the exceptions and stack trace printed out once and then execution stops and the remaining two tasks are cancelled. The output is identical to the output from the first script. The downside of this approach is, I want to continue processing tasks even when exceptions are encountered and then display all the exceptions at the end when all the tasks are completed. ---------- components: asyncio messages: 340297 nosy: Drew Budwin, asvetlov, yselivanov priority: normal severity: normal status: open title: Inner exception is not being raised using asyncio.gather type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 14:27:31 2019 From: report at bugs.python.org (danijar) Date: Mon, 15 Apr 2019 18:27:31 +0000 Subject: [New-bugs-announce] [issue36637] Restrict syntax for tuple literals with one element Message-ID: <1555352851.43.0.0863954988621.issue36637@roundup.psfhosted.org> New submission from danijar : A tuple can be created with or without parentheses: a = (1, 2, 3) a = 1, 2, 3 While both are intuitive in this example, omitting the parentheses can lead to hard to find errors when there is only one element: a = (1,) a = 1, The first is clear but the second can easily occur as a typo when the programmer actually just wanted to assign an integer (comma is next to enter on many keyboards). I think ideally, omitting parentheses in the single element case would throw a SyntaxError. On the other hand, I assume that it could be difficult to separate the behavior or tuple creating with an without parentheses, since the parentheses are probably not actually part of the tuple literal. ---------- components: Interpreter Core messages: 340298 nosy: benjamin.peterson, brett.cannon, danijar, xtreak, yselivanov priority: normal severity: normal status: open title: Restrict syntax for tuple literals with one element _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 15 21:08:35 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 16 Apr 2019 01:08:35 +0000 Subject: [New-bugs-announce] [issue36638] typeperf.exe is not in all skus of Windows Message-ID: <1555376915.06.0.702327922476.issue36638@roundup.psfhosted.org> New submission from Paul Monson : typeperf.exe is not present on small editions of windows like Windows IoT Core or nanoserver This causes WindowsLoadTracker to throw an exception during test initialization. ---------- components: Tests messages: 340309 nosy: Paul Monson priority: normal severity: normal status: open title: typeperf.exe is not in all skus of Windows type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 02:46:24 2019 From: report at bugs.python.org (=?utf-8?b?5p6X6Ieq5Z2H?=) Date: Tue, 16 Apr 2019 06:46:24 +0000 Subject: [New-bugs-announce] [issue36639] Provide list.rindex() Message-ID: <1555397184.25.0.31391745163.issue36639@roundup.psfhosted.org> New submission from ??? : There are str.index() and str.rindex(), but there is only list.index() and no list.rindex(). It will be very handy if we provide it. 
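For the list.rindex() suggestion above (issue36639), a sketch of the equivalent spelled with the existing list API, which is the usual workaround today:

```python
def rindex(lst, value):
    """Index of the last occurrence of *value* in *lst* (ValueError if absent)."""
    return len(lst) - 1 - lst[::-1].index(value)

assert rindex([1, 2, 3, 2], 2) == 3
```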
---------- components: Library (Lib) messages: 340312 nosy: johnlinp priority: normal severity: normal status: open title: Provide list.rindex() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 05:22:51 2019 From: report at bugs.python.org (Saba Kauser) Date: Tue, 16 Apr 2019 09:22:51 +0000 Subject: [New-bugs-announce] [issue36640] python ibm_db setup.py post install script does not seem to work from Anaconda Message-ID: <1555406571.1.0.830957588596.issue36640@roundup.psfhosted.org> New submission from Saba Kauser : Hi, I have added a post-install class that's working fine when I do "pip install ibm_db" on MAC. However, when I use the python/pip from anaconda3 (python 3.7), the same pip is not executing the post-install script. Can someone please take a look and assist? The class can be seen at: https://github.com/ibmdb/python-ibmdb/blob/master/IBM_DB/ibm_db/setup.py#L52 Post install, I am expecting the following output: BLR-D-MACOS03:site-packages skauser$ otool -L ibm_db.cpython-37m-darwin.so ibm_db.cpython-37m-darwin.so: @loader_path/clidriver/lib/libdb2.dylib (compatibility version 0.0.0, current version 0.0.0) When executing from Anaconda, the name of libdb2.dylib is unchanged. I would also like to know how I can get verbose output from the print/log statements of my setup.py when installing via pip. ---------- components: Build messages: 340324 nosy: sabakauser priority: normal severity: normal status: open title: python ibm_db setup.py post install script does not seem to work from Anaconda type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 08:05:17 2019 From: report at bugs.python.org (Inada Naoki) Date: Tue, 16 Apr 2019 12:05:17 +0000 Subject: [New-bugs-announce] [issue36641] make docstring in C const Message-ID: <1555416317.5.0.961282490804.issue36641@roundup.psfhosted.org> New submission from Inada Naoki : In most cases, a docstring in C is constant. Can we add "const"? If we can, it can avoid allocating and copying several KBs. --- a/Include/pymacro.h +++ b/Include/pymacro.h @@ -69,4 +69,4 @@ /* Define macros for inline documentation.
*/ -#define PyDoc_VAR(name) static char name[] +#define PyDoc_VAR(name) static const char name[] #define PyDoc_STRVAR(name,str) PyDoc_VAR(name) = PyDoc_STR(str) #ifdef WITH_DOC_STRINGS Some drastic impacts: before: text data bss dec hex filename 110446 57371 96 167913 28fe9 Modules/posixmodule.o 91937 32236 208 124381 1e5dd build/temp.linux-x86_64-3.8/home/inada-n/work/python/cpython/Modules/_decimal/_decimal.o 61070 31534 472 93076 16b94 build/temp.linux-x86_64-3.8/home/inada-n/work/python/cpython/Modules/_cursesmodule.o after: $ size **/*.o text data bss dec hex filename 150761 17064 96 167921 28ff1 Modules/posixmodule.o 115213 8976 208 124397 1e5ed build/temp.linux-x86_64-3.8/home/inada-n/work/python/cpython/Modules/_decimal/_decimal.o 86878 5736 472 93086 16b9e build/temp.linux-x86_64-3.8/home/inada-n/work/python/cpython/Modules/_cursesmodule.o ---------- components: Interpreter Core messages: 340333 nosy: inada.naoki priority: normal severity: normal status: open title: make docstring in C const versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 08:49:58 2019 From: report at bugs.python.org (Inada Naoki) Date: Tue, 16 Apr 2019 12:49:58 +0000 Subject: [New-bugs-announce] [issue36642] make unicodedata "const" Message-ID: <1555418998.59.0.0730766026669.issue36642@roundup.psfhosted.org> New submission from Inada Naoki : diff --git a/Tools/unicode/makeunicodedata.py b/Tools/unicode/makeunicodedata.py index 9327693a17..2550b8f940 100644 --- a/Tools/unicode/makeunicodedata.py +++ b/Tools/unicode/makeunicodedata.py @@ -1249,7 +1249,7 @@ class Array: size = getsize(self.data) if trace: print(self.name+":", size*len(self.data), "bytes", file=sys.stderr) - file.write("static ") + file.write("static const ") if size == 1: file.write("unsigned char") elif size == 2: ---------- components: Unicode messages: 340336 nosy: benjamin.peterson, ezio.melotti, inada.naoki, vstinner priority: normal severity: normal status: open title: make unicodedata "const" versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 15:44:38 2019 From: report at bugs.python.org (Misha Drachuk) Date: Tue, 16 Apr 2019 19:44:38 +0000 Subject: [New-bugs-announce] [issue36643] Forward reference is not resolved by dataclasses.fields() Message-ID: <1555443878.01.0.308772794144.issue36643@roundup.psfhosted.org> New submission from Misha Drachuk : Forward reference is not resolved by `dataclasses.fields()`, but it works with `typing.get_type_hints()`. E.g. from dataclasses import dataclass, fields from typing import Optional, get_type_hints @dataclass class Nestable: child: Optional['Nestable'] o = Nestable(None) print('fields:', fields(o)) print('type hints:', get_type_hints(Nestable)) ... outputs the following: fields: (Field(name='child',type=typing.Union[ForwardRef('Nestable'), NoneType] ... 
) type hints: {'child': typing.Union[__main__.Nestable, NoneType]} ---------- components: Library (Lib) messages: 340361 nosy: mdrachuk priority: normal severity: normal status: open title: Forward reference is not resolved by dataclasses.fields() type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 16:43:52 2019 From: report at bugs.python.org (PEW's Corner) Date: Tue, 16 Apr 2019 20:43:52 +0000 Subject: [New-bugs-announce] [issue36644] Improve documentation of slice.indices() Message-ID: <1555447432.64.0.0569137704921.issue36644@roundup.psfhosted.org> New submission from PEW's Corner : The slice class is described in the Built-In Functions document: https://docs.python.org/3/library/functions.html#slice ... but that entry fails to mention the indices() method, and states that slice objects "have no other explicit functionality" beyond the start, stop, and step attributes. The entry links only to a glossary item which doesn't provide more info. However, it turns out that there is another description of slice objects - including the indices() method - in the Data model document: https://docs.python.org/3/reference/datamodel.html#slice.indices ... but (as the rejected issue 11842 in my opinion correctly argues) this entry is not clear about how to interpret the return values from the indices() method, i.e. that they are appropriate as arguments to range() - not as arguments to a new slice(). So, right now the best documentation of the indices() method is the old Python 2.3 "what's new" documentation of extended slices: https://docs.python.org/2.3/whatsnew/section-slices.html "To simplify implementing sequences that support extended slicing, slice objects now have a method indices(length) which, given the length of a sequence, returns a (start, stop, step) tuple that can be passed directly to range(). indices() handles omitted and out-of-bounds indices in a manner consistent with regular slices (and this innocuous phrase hides a welter of confusing details!)." I would propose to at least: * Add a link from the slice class in the Built-In Functions doc to the slice object section of the Data model doc. * Delete the statement about "no other explicit functionality" in the Built-In Functions doc. * Mention in the Data model doc that the return values from indices() can be passed to range() to obtain the sequence of indices described by the slice when applied to a sequence object of the specified length, and perhaps make it clear that the indices() values do not in general represent the new start, stop, and step attributes of a truncated slice object. 
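[Added for illustration, not from the report: a short interpreter session showing that the tuple returned by indices() is meant for range(), and is not a normalised (start, stop, step) for a new slice.]

    >>> s = slice(None, None, -1)        # same as [::-1]
    >>> s.indices(5)
    (4, -1, -1)
    >>> list(range(*s.indices(5)))       # the indices the slice selects
    [4, 3, 2, 1, 0]
    >>> list('abcde')[s]
    ['e', 'd', 'c', 'b', 'a']
    >>> list('abcde')[slice(4, -1, -1)]  # feeding the tuple back into slice() is wrong
    []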
---------- assignee: docs at python components: Documentation messages: 340364 nosy: docs at python, pewscorner priority: normal severity: normal status: open title: Improve documentation of slice.indices() type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 18:42:59 2019 From: report at bugs.python.org (mollison) Date: Tue, 16 Apr 2019 22:42:59 +0000 Subject: [New-bugs-announce] [issue36645] re.sub() library entry does not adequately document surprising change in behavior between versions Message-ID: <1555454579.44.0.38794635257.issue36645@roundup.psfhosted.org> New submission from mollison : This is regarding the change to re.sub() between 3.6 and 3.7 that results in different behavior even for simple cases like the following: re.sub('a*','b', 'a') returns 'b' in 3.6 and 'bb' in 3.7 This change is well documented here: https://docs.python.org/3/whatsnew/3.7.html#changes-in-the-python-api However, it is not well documented here: https://docs.python.org/3.7/library/re.html The latter document does actually contain the appropriate text: "Empty matches for the pattern are replaced when adjacent to a previous non-empty match." However, the formatting makes this text look like it was always there, and is not part of the 3.7 changes announcement. That is how I interpreted it, leading to some lost productivity. After so many years, people don't expect the regex engine to change like this, and that only makes it easier to misinterpret that text as always having been there vs. being new to 3.7. Related: https://bugs.python.org/issue32308 ---------- assignee: docs at python components: Documentation messages: 340370 nosy: docs at python, mollison priority: normal severity: normal status: open title: re.sub() library entry does not adequately document surprising change in behavior between versions versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 16 20:46:40 2019 From: report at bugs.python.org (Ryan) Date: Wed, 17 Apr 2019 00:46:40 +0000 Subject: [New-bugs-announce] [issue36646] os.listdir() got permission error in Python3.6 but it's fine in Python2.7 Message-ID: <1555462000.28.0.955431957524.issue36646@roundup.psfhosted.org> New submission from Ryan : My script needs to scan a netdisk directory to get its contents. I use the os.listdir() method for an easy implementation, but I get a permission error when executing in Python 3.x, while the same code works fine in Python 2.7. I attached a screenshot explaining the problem. ---------- components: Windows files: X1ONx.png messages: 340373 nosy: Ryan_D at 163.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.listdir() got permission error in Python3.6 but it's fine in Python2.7 type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48269/X1ONx.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 10:24:22 2019 From: report at bugs.python.org (=?utf-8?q?Jos=C3=A9_Luis_Segura_Lucas?=) Date: Wed, 17 Apr 2019 14:24:22 +0000 Subject: [New-bugs-announce] [issue36647] TextTestRunner doesn't honour "buffer" argument Message-ID: <1555511062.15.0.629417801049.issue36647@roundup.psfhosted.org> New submission from José
Luis Segura Lucas : When using "buffer = True" in a TextTestRunner, the test result behaviour doesn't change at all. This is because TextTestRunner.stream is initialised using a decorator (_WritelnDecorator). When "buffer" is passed, the TestResult base class will try to redirect stdout and stderr to 2 different io.StringIO objects. As TextTestRunner.stream is initialised before that "redirection", all the "self.stream.write" calls end up using the original stream (stderr by default), resulting in no buffering at all. ---------- components: Tests messages: 340398 nosy: José Luis Segura Lucas priority: normal severity: normal status: open title: TextTestRunner doesn't honour "buffer" argument type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 11:53:41 2019 From: report at bugs.python.org (LihuaZhao) Date: Wed, 17 Apr 2019 15:53:41 +0000 Subject: [New-bugs-announce] [issue36648] MAP_SHARED isn't proper for anonymous mappings for VxWorks Message-ID: <1555516421.89.0.141629661161.issue36648@roundup.psfhosted.org> New submission from LihuaZhao : Anonymous mappings are not part of the POSIX standard; a Python user just needs to specify -1 as the fd value to do an anonymous map, for example: m = mmap.mmap(-1, 100) The Python adapter module (mmapmodule.c) then tries to specify MAP_SHARED or MAP_PRIVATE based on the operating system requirement: Linux requires MAP_SHARED, VxWorks requires MAP_PRIVATE. This difference should be hidden by the module, so the Python user is not affected. Currently, mmap is only adapted for systems which use MAP_SHARED for anonymous maps; VxWorks needs to be supported. https://en.wikipedia.org/wiki/Mmap ---------- components: Library (Lib) messages: 340411 nosy: lzhao priority: normal pull_requests: 12787 severity: normal status: open title: MAP_SHARED isn't proper for anonymous mappings for VxWorks versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 14:30:48 2019 From: report at bugs.python.org (Hugues Valois) Date: Wed, 17 Apr 2019 18:30:48 +0000 Subject: [New-bugs-announce] [issue36649] Windows Store app install registry keys have incorrect paths Message-ID: <1555525848.37.0.287329960957.issue36649@roundup.psfhosted.org> New submission from Hugues Valois : When reading registry values under HKCU\SOFTWARE\Python\PythonCore\3.7 that were written by the Windows Store app install, all file and folder paths are incorrect.
Notice the extra [ ] as well as the missing backslash before python.exe and pythonw.exe Paths read from registry are: ``` C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0[ ] C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0python.exe[ ] C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0pythonw.exe[ ] ``` Paths on disk are: ``` C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0 C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0\python.exe C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1008.0_x64__qbz5n2kfra8p0\pythonw.exe ``` ---------- components: Installation, Windows messages: 340426 nosy: Hugues Valois, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Store app install registry keys have incorrect paths type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 14:45:43 2019 From: report at bugs.python.org (Jason R. Coombs) Date: Wed, 17 Apr 2019 18:45:43 +0000 Subject: [New-bugs-announce] [issue36650] Cached method implementation no longer works on Python 3.7.3 Message-ID: <1555526743.8.0.00197120630523.issue36650@roundup.psfhosted.org> New submission from Jason R. Coombs : In [this ticket](https://github.com/jaraco/jaraco.functools/issues/12), I learned that [jaraco.functools.method_cache](https://github.com/jaraco/jaraco.functools/blob/6b32ee0dfd3e7c88f99e88cd87c35fa9b76f261f/jaraco/functools.py#L109-L180) no longer works on Python 3.7.3. A distilled version of what's not working is this example: ``` >>> import jaraco.functools >>> class MyClass: ... calls = 0 ... @jaraco.functools.method_cache ... def call_me_maybe(self, val): ... self.calls += 1 ... return val ... >>> a = MyClass() >>> a.call_me_maybe(0) 0 >>> a.call_me_maybe(0) 0 >>> a.calls 2 ``` The second call to the cached function is missing the cache even though the parameters to the function are the same. ``` >>> a.call_me_maybe >>> a.call_me_maybe.cache_info() CacheInfo(hits=0, misses=2, maxsize=128, currsize=2) ``` Here's a further distilled example not relying on any code from jaraco.functools: ``` >>> def method_cache(method): ... def wrapper(self, *args, **kwargs): ... # it's the first call, replace the method with a cached, bound method ... bound_method = functools.partial(method, self) ... cached_method = functools.lru_cache()(bound_method) ... setattr(self, method.__name__, cached_method) ... return cached_method(*args, **kwargs) ... return wrapper ... >>> import functools >>> class MyClass: ... calls = 0 ... @method_cache ... def call_me_maybe(self, val): ... self.calls += 1 ... return val ... >>> a = MyClass() >>> a.call_me_maybe(0) 0 >>> a.call_me_maybe(0) 0 >>> a.calls 2 ``` I was not able to replicate the issue with a simple lru_cache on a partial object: ``` >>> def func(a, b): ... global calls ... calls += 1 ... >>> import functools >>> cached = functools.lru_cache()(functools.partial(func, 'a')) >>> calls = 0 >>> cached(0) >>> cached(0) >>> calls 1 ``` Suggesting that there's some interaction with the instance attribute and the caching functionality. I suspect the issue arose as a result of changes in issue35780. 
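[Not from the report: for comparison, a minimal per-instance memoisation sketch that stores results in a plain dict on the instance and so avoids the lru_cache/partial interaction entirely; it assumes hashable positional arguments only.]

    import functools

    def dict_method_cache(method):
        cache_attr = '_cache_' + method.__name__

        @functools.wraps(method)
        def wrapper(self, *args):
            # one cache dict per instance, created lazily
            cache = self.__dict__.setdefault(cache_attr, {})
            if args not in cache:
                cache[args] = method(self, *args)
            return cache[args]
        return wrapper

    class MyClass:
        calls = 0

        @dict_method_cache
        def call_me_maybe(self, val):
            self.calls += 1
            return val

    a = MyClass()
    a.call_me_maybe(0)
    a.call_me_maybe(0)
    assert a.calls == 1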
---------- assignee: rhettinger keywords: 3.7regression messages: 340429 nosy: jaraco, rhettinger priority: normal severity: normal status: open title: Cached method implementation no longer works on Python 3.7.3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 14:53:02 2019 From: report at bugs.python.org (Enrico Carbognani) Date: Wed, 17 Apr 2019 18:53:02 +0000 Subject: [New-bugs-announce] [issue36651] Asyncio Event Loop documentation inconsistency (call_later and call_at methods) Message-ID: <1555527182.24.0.96593759188.issue36651@roundup.psfhosted.org> New submission from Enrico Carbognani : In the documentation for the call_later and the call_at methods there is a note which says that the delay cannot be longer than a day, but both methods have a note saying that this limitation was removed in Python 3.8. ---------- assignee: docs at python components: Documentation files: documenation_incosistency.png messages: 340434 nosy: Enrico Carbognani, docs at python priority: normal severity: normal status: open title: Asyncio Event Loop documentation inconsistency (call_later and call_at methods) versions: Python 3.8 Added file: https://bugs.python.org/file48273/documenation_incosistency.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 16:47:47 2019 From: report at bugs.python.org (wheelerlaw) Date: Wed, 17 Apr 2019 20:47:47 +0000 Subject: [New-bugs-announce] [issue36652] Non-embedded zip distribution Message-ID: <1555534067.61.0.67923116935.issue36652@roundup.psfhosted.org> New submission from wheelerlaw : Pretty straight forward request. It would be nice if there was an installation method where I can just unzip a Python distribution rather than running an installer. Specifically this is for getting Python to run in Wine. Right now, Python for Windows runs fine under Wine, but the installer doesn't, so a manual process of running the installer on a Windows machine and then copying the installed resources to a Linux machine with Wine installed. A zip distribution would solve this, since I could just unzip it and run it under Wine. ---------- components: Installation messages: 340445 nosy: wheelerlaw priority: normal severity: normal status: open title: Non-embedded zip distribution type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 18:34:23 2019 From: report at bugs.python.org (PushkarVaity) Date: Wed, 17 Apr 2019 22:34:23 +0000 Subject: [New-bugs-announce] [issue36653] Dictionary Key is without ' ' quotes Message-ID: <1555540463.86.0.601540134814.issue36653@roundup.psfhosted.org> New submission from PushkarVaity : I am using Python 3.7 with anaconda install. I am trying to write out a dictionary with similar key's. for ex: proc_dict = {'add': ('/home/file.tcl', 'args'), 'add': ('/home/file2.tcl', 'args'), 'sub': ('/home/file2.tcl', 'args')} To do this, I am using the following class definition and functions: class ProcOne(object): def __init__(self, name): self.name = name def __repr__(self): return self.name I am writing out the dictionary in the following way: proc_dict[ProcOne(proc_name)] = (full_file, proc_args) Now, the dictionary key as shown in the example at top is of string type. proc_name is the variable holding this string. The values are tuples. 
Both elements in the tuple are strings. When the dictionary is finally written out, the format is as below: proc_dict = {add: ('/home/file.tcl', 'args'), add: ('/home/file2.tcl', 'args'), sub: ('/home/file2.tcl', 'args')} Please note the difference from the first example. The key values don't have a ' ' quote in spite of being a string variable type. Since the string quotes are missing, it is very difficult to do post processing on this dictionary key. I am a student and I though that this is an issue because now I am not able to compare the key value with a normal string because of the missing quotes. The in or not in checking operations do not evaluate to true/false because of the missing quotes. Please let me know if this has never been reported before as I am just a novice programmer and would be a big boost to my morale :-) Also, please let me know if this issue was already known or wasn't an issue at all. ---------- components: Regular Expressions messages: 340452 nosy: PushkarVaity, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Dictionary Key is without ' ' quotes type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 17 23:43:56 2019 From: report at bugs.python.org (Windson Yang) Date: Thu, 18 Apr 2019 03:43:56 +0000 Subject: [New-bugs-announce] [issue36654] Add example to tokenize.tokenize Message-ID: <1555559036.7.0.0522143852215.issue36654@roundup.psfhosted.org> New submission from Windson Yang : > The tokenize() generator requires one argument, readline, which must be a callable object which provides the same interface as the io.IOBase.readline() method of file objects. Each call to the function should return one line of input as bytes. 
Adding an example like this should make it easier to understand: # example.py class Foo: pass # tokenize_example.py import tokenize f = open('example.py', 'rb') token_gen = tokenize.tokenize(f.readline) for token in token_gen: # Something like this # TokenInfo(type=1 (NAME), string='class', start=(1, 0), end=(1, 5), line='class Foo:\n') # TokenInfo(type=1 (NAME), string='Foo', start=(1, 6), end=(1, 9), line='class Foo:\n') # TokenInfo(type=53 (OP), string=':', start=(1, 9), end=(1, 10), line='class Foo:\n') print(token) ---------- assignee: docs at python components: Documentation messages: 340467 nosy: Windson Yang, docs at python priority: normal severity: normal status: open title: Add example to tokenize.tokenize type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 04:25:15 2019 From: report at bugs.python.org (kulopo) Date: Thu, 18 Apr 2019 08:25:15 +0000 Subject: [New-bugs-announce] [issue36655] Division Precision Problem Message-ID: <1555575915.63.0.127590822512.issue36655@roundup.psfhosted.org> New submission from kulopo : >>> a=224847175712806907706081280 >>> b=4294967296 >>> assert int(a*b/b)==int(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> AssertionError (a can be exactly divided by b) ---------- messages: 340471 nosy: kulopo priority: normal severity: normal status: open title: Division Precision Problem type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 04:44:47 2019 From: report at bugs.python.org (Tom Hale) Date: Thu, 18 Apr 2019 08:44:47 +0000 Subject: [New-bugs-announce] [issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions Message-ID: <1555577087.92.0.769196427693.issue36656@roundup.psfhosted.org> New submission from Tom Hale : I cannot find a race-condition-free way to force overwrite an existing symlink. os.symlink() requires that the target does not exist, meaning that it could be created via a race condition during the two workaround solutions that I've seen: 1. Unlink the existing symlink (it could be recreated, causing the following symlink() to fail) 2. Create a new temporary symlink, then overwrite the target (the temp could be changed between creation and replace). The additional gotcha with the safer (because the attack filename is unknown) option (2) is that replace() may fail if the two files are on separate filesystems. I suggest an additional `force=` argument to os.symlink(), defaulting to `False` for backward compatibility, but allowing atomic overwriting of a symlink when set to `True`. I would be willing to look into a PR for this.
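[Added for reference, not part of the proposal: a hedged sketch of workaround 2 above, assuming the temporary link lives on the same filesystem as the target so that os.replace() stays atomic.]

    import os
    import tempfile

    def force_symlink(src, target):
        """Create or atomically replace a symlink at target pointing to src (sketch)."""
        parent = os.path.dirname(os.path.abspath(target))
        tmpdir = tempfile.mkdtemp(dir=parent)      # private dir avoids guessable temp names
        tmplink = os.path.join(tmpdir, 'link')
        try:
            os.symlink(src, tmplink)
            os.replace(tmplink, target)            # rename(2) is atomic within one filesystem
        finally:
            if os.path.lexists(tmplink):
                os.unlink(tmplink)
            os.rmdir(tmpdir)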
Prior art: https://stackoverflow.com/a/55742015/5353461 ---------- messages: 340474 nosy: Tom Hale priority: normal severity: normal status: open title: Allow os.symlink(src, target, force=True) to prevent race conditions versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 10:42:24 2019 From: report at bugs.python.org (maak) Date: Thu, 18 Apr 2019 14:42:24 +0000 Subject: [New-bugs-announce] [issue36657] AttributeError Message-ID: <1555598544.39.0.0302839002466.issue36657@roundup.psfhosted.org> New submission from maak : elif path == '' or path.endswith('/'): AttributeError: 'bool' object has no attribute 'endswith' ---------- assignee: docs at python components: Documentation messages: 340490 nosy: docs at python, maakvol priority: normal severity: normal status: open title: AttributeError type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 10:57:16 2019 From: report at bugs.python.org (RimacV) Date: Thu, 18 Apr 2019 14:57:16 +0000 Subject: [New-bugs-announce] [issue36658] Py_Initialze() throws error 'unable to load the file system encoding' when calling Py_SetPath with a path to a directory Message-ID: <1555599436.89.0.215369051789.issue36658@roundup.psfhosted.org> New submission from RimacV : I compiled the source of CPython 3.7.3 myself on Windows with Visual Studio 2017 together with some packages like e.g numpy. When I start the Python Interpreter I am able to import and use numpy. However when I am running the same script via the C-API I get an ModuleNotFoundError. So the first thing I did, was to check if numpy is in my site-packages directory and indeed there is a folder named numpy-1.16.2-py3.7-win-amd64.egg. (Makes sense because the python interpreter can find numpy) The next thing I did was get some information about the sys.path variable created when running the script via the C-API. ##### sys.path content #### C:\Work\build\product\python37.zip C:\Work\build\product\DLLs C:\Work\build\product\lib C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO\2017\PROFESSIONAL\COMMON7\IDE\EXTENSIONS\TESTPLATFORM C:\Users\rvq\AppData\Roaming\Python\Python37\site-packages Examining the content of sys.path I noticed two things. 1. C:\Work\build\product\python37.zip has the correct path 'C:\Work\build\product\'. There was just no zip file. All my files and directory were unpacked. So I zipped the files to an archive named python37.zip and this resolved the import error. 2. C:\Users\rvq\AppData\Roaming\Python\Python37\site-packages is wrong it should be C:\Work\build\product\Lib\site-packages but I dont know how this wrong path is created. The next thing I tried was to use Py_SetPath(L"C:/Work/build/product/Lib/site-packages") before calling Py_Initialize(). This led to the Fatal Python Error 'unable to load the file system encoding' ModuleNotFoundError: No module named 'encodings' I created a minimal c++ project with exact these two calls and started to debug Cpython. int main() { Py_SetPath(L"C:/Work/build/product/Lib/site-packages"); Py_Initialize(); } I tracked the call of Py_Initialize() down to the call of static int zipimport_zipimporter___init___impl(ZipImporter *self, PyObject *path) inside of zipimport.c The comment above this function states the following: Create a new zipimporter instance. 
'archivepath' must be a path-like object to a zipfile, or to a specific path inside a zipfile. For example, it can be '/tmp/myimport.zip', or '/tmp/myimport.zip/mydirectory', if mydirectory is a valid directory inside the archive. 'ZipImportError' is raised if 'archivepath' doesn't point to a valid Zip archive. The 'archive' attribute of the zipimporter object contains the name of the zipfile targeted. So for me it seems that the C-API expects the path set with Py_SetPath to be a path to a zipfile. Is this expected behaviour or is it a bug? If it is not a bug is there a way to changes this so that it can also detect directories? PS: The ModuleNotFoundError did not occur for me when using Python 3.5.2+, which was the version I used in my project before. I also checked if I had set any PYTHONHOME or PYTHONPATH environment variables but I did not see one of them on my system. ---------- components: Library (Lib) files: Capture.PNG messages: 340494 nosy: rvq priority: normal severity: normal status: open title: Py_Initialze() throws error 'unable to load the file system encoding' when calling Py_SetPath with a path to a directory type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48274/Capture.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 11:02:16 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 18 Apr 2019 15:02:16 +0000 Subject: [New-bugs-announce] [issue36659] distutils UnixCCompiler: Remove standard library path from rpath Message-ID: <1555599736.54.0.585610547316.issue36659@roundup.psfhosted.org> New submission from STINNER Victor : Since 2010, the Fedora packages of Python are using a patch on distutils UnixCCompiler to remove standard library path from rpath. The patch has been written by David Malcolm for Python 2.6.4: * https://src.fedoraproject.org/rpms/python38/blob/master/f/00001-rpath.patch * https://src.fedoraproject.org/rpms/python2/c/f5df1f834310948b32407933e3b8713e1121105b I propose to make this change upstream so other Linux distributions will benefit on this change: see attached PR. "rpath" stands for "run-time search path". Dynamic linking loaders use the rpath to find required libraries: https://en.wikipedia.org/wiki/Rpath Full example. Install Python in /opt/py38 with RPATH=/opt/py38/lib, to ensure that Python looks for libpython in this directory: $ cd path/to/python/sources $ ./configure --prefix /opt/py38 LDFLAGS="-Wl,-rpath=/opt/py38/lib/" --enable-shared $ make $ make install # on my system, my user can write into /opt ;-) $ objdump -a -x /opt/py38/bin/python3.8|grep -i rpath RPATH /opt/py38/lib/ $ objdump -a -x /opt/py38/lib/libpython3.8m.so|grep -i rpath RPATH /opt/py38/lib/ Python is installed with RPATH: $ /opt/py38/bin/python3.8 -m sysconfig|grep -i rpath BLDSHARED = "gcc -pthread -shared -Wl,-rpath=/opt/py38/lib/" CONFIGURE_LDFLAGS = "-Wl,-rpath=/opt/py38/lib/" CONFIG_ARGS = "'--prefix' '/opt/py38' 'LDFLAGS=-Wl,-rpath=/opt/py38/lib/' '--enable-shared'" LDFLAGS = "-Wl,-rpath=/opt/py38/lib/" LDSHARED = "gcc -pthread -shared -Wl,-rpath=/opt/py38/lib/" PY_CORE_LDFLAGS = "-Wl,-rpath=/opt/py38/lib/" PY_LDFLAGS = "-Wl,-rpath=/opt/py38/lib/" Now the difference is how these flags are passed to third party C extensions. 
$ cd $HOME $ /opt/py38/bin/python3.8 -m venv opt_env $ opt_env/bin/python -m pip install lxml $ objdump -a -x $(opt_env/bin/python -c 'import lxml.etree; print(lxml.etree.__file__)')|grep -i rpath RPATH /opt/py38/lib/ lxml is compiled with the RPATH. This issue proposes to omit the Python RPATH here. Comparison with Fedora Python which already contains the change: $ python3 -m venv fed_venv # FYI: it's Python 3.7 on Fedora 29 $ fed_venv/bin/python -m pip install lxml $ objdump -a -x $(fed_venv/bin/python -c 'import lxml.etree; print(lxml.etree.__file__)')|grep -i rpath ^^ empty output: no RPATH, it's the expected behavior ... I'm not sure that the example using /usr/bin/python3.7 is useful, because it's not built using RPATH ... $ objdump -a -x /usr/bin/python3.7 |grep -i rpath $ python3.7 -m sysconfig|grep -i rpath $ objdump -a -x /usr/lib64/libpython3.7m.so |grep -i rpath ^^ no output, it's not built with RPATH ---------- components: Library (Lib) messages: 340496 nosy: vstinner priority: normal severity: normal status: open title: distutils UnixCCompiler: Remove standard library path from rpath versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 11:55:57 2019 From: report at bugs.python.org (maak) Date: Thu, 18 Apr 2019 15:55:57 +0000 Subject: [New-bugs-announce] [issue36660] TypeError Message-ID: <1555602957.9.0.785158362438.issue36660@roundup.psfhosted.org> New submission from maak : TypeError: coercing to Unicode: need string or buffer, bool found ---------- components: Unicode messages: 340504 nosy: ezio.melotti, maakvol, vstinner priority: normal severity: normal status: open title: TypeError type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 15:36:48 2019 From: report at bugs.python.org (Merlin Fisher-Levine) Date: Thu, 18 Apr 2019 19:36:48 +0000 Subject: [New-bugs-announce] [issue36661] Missing import in docs Message-ID: <1555616208.44.0.536731608649.issue36661@roundup.psfhosted.org> New submission from Merlin Fisher-Levine : Dataclasses docs don't mention needing import for @dataclass decorator https://docs.python.org/3/library/dataclasses.html ---------- assignee: docs at python components: Documentation messages: 340510 nosy: docs at python, mfisherlevine priority: normal severity: normal status: open title: Missing import in docs type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 16:31:10 2019 From: report at bugs.python.org (George Sakkis) Date: Thu, 18 Apr 2019 20:31:10 +0000 Subject: [New-bugs-announce] [issue36662] asdict/astuple Dataclass methods Message-ID: <1555619470.94.0.171933894359.issue36662@roundup.psfhosted.org> New submission from George Sakkis : I'd like to propose two new optional boolean parameters to the @dataclass() decorator, `asdict` and `astuple`, that if true, the respective methods are generated as equivalent to the module-level namesake functions. In addition to saving an extra imported name, the main benefit is performance. By having access to the specific fields of the decorated class, it should be possible to generate a more efficient implementation than the one in the respective function. 
To illustrate the difference in performance, the asdict method is 28 times faster than the function in the following PEP 557 example: @dataclass class InventoryItem: '''Class for keeping track of an item in inventory.''' name: str unit_price: float quantity_on_hand: int = 0 def asdict(self): return { 'name': self.name, 'unit_price': self.unit_price, 'quantity_on_hand': self.quantity_on_hand, } In [4]: i = InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=10) In [5]: asdict(i) == i.asdict() Out[5]: True In [6]: %timeit asdict(i) 5.45 µs ± 14.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [7]: %timeit i.asdict() 193 ns ± 0.443 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) Thoughts? ---------- components: Library (Lib) messages: 340511 nosy: gsakkis priority: normal severity: normal status: open title: asdict/astuple Dataclass methods type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 16:50:11 2019 From: report at bugs.python.org (daniel hahler) Date: Thu, 18 Apr 2019 20:50:11 +0000 Subject: [New-bugs-announce] [issue36663] pdb: store whole exception information in locals (via user_exception) Message-ID: <1555620611.54.0.476670185148.issue36663@roundup.psfhosted.org> New submission from daniel hahler : Currently Pdb.user_exception does not store the traceback in "user_exception", but only passes it to `interaction`: def user_exception(self, frame, exc_info): """This function is called if an exception occurs, but only if we are to stop at or just below this level.""" if self._wait_for_mainpyfile: return exc_type, exc_value, exc_traceback = exc_info frame.f_locals['__exception__'] = exc_type, exc_value ... self.interaction(frame, exc_traceback) I think it would be useful to have the whole exception info at hand in the debugger (via the frame locals) directly. If backward compatibility is important it should use a new name for this maybe (`__excinfo__`), i.e. if current code would assume `__exception__` to be of length 2 only. But on the other hand this only affects extensions to the debugger, and not "real" programs, and therefore backward compatibility is not really required here? Currently pdb extensions (e.g. pdbpp) can get it either by going up in the stack, or grabbing it via `interaction`, but this issue is mainly about making it available in plain pdb for the user to interact with. Code ref: https://github.com/python/cpython/blob/e8113f51a8bdf33188ee30a1c038a298329e7bfa/Lib/pdb.py#L295-L301 ---------- components: Library (Lib) messages: 340512 nosy: blueyed priority: normal severity: normal status: open title: pdb: store whole exception information in locals (via user_exception) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 19:30:32 2019 From: report at bugs.python.org (Peter McEldowney) Date: Thu, 18 Apr 2019 23:30:32 +0000 Subject: [New-bugs-announce] [issue36664] argparse: parser aliases in subparsers stores alias in dest variable Message-ID: <1555630232.08.0.518571967107.issue36664@roundup.psfhosted.org> New submission from Peter McEldowney : I noticed that I have to add a lot more code than I was expecting to handle contexts in subparsers. This is something I feel should be handled by the argparse library. What are your thoughts on this?
If you run the sample code with the commands below, you can see that although I would want them to do the same thing, I have to add more lines into my code to achieve this. This becomes cumbersome/annoying when dealing with subparser trees. python3 sample.py subsection python3 sample.py s Sample code (also attached): import argparse def get_args(args=None): parser = argparse.ArgumentParser() subparser = parser.add_subparsers(dest='context') sub = subparser.add_parser('subsection', aliases=['s', 'sub', 'subsect']) return parser.parse_args(args) def my_subsection_function(args): print('my subsection was called') def invalid_context(args): print('my functon was not called ') def main(args=get_args()): return { 'subsection': my_subsection_function }.get(args.context, invalid_context)(args) if __name__ == "__main__": main() ---------- components: Library (Lib) files: sample.py messages: 340515 nosy: Peter McEldowney priority: normal severity: normal status: open title: argparse: parser aliases in subparsers stores alias in dest variable type: enhancement versions: Python 3.7 Added file: https://bugs.python.org/file48275/sample.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 21:01:56 2019 From: report at bugs.python.org (Nick Coghlan) Date: Fri, 19 Apr 2019 01:01:56 +0000 Subject: [New-bugs-announce] [issue36665] Dropping __main__ from sys.modules clears the REPL namespace Message-ID: <1555635716.01.0.208381953469.issue36665@roundup.psfhosted.org> New submission from Nick Coghlan : While trying to create an example for a pickle bug discussion, I deliberately dropped `__main__` out of sys.modules, and the REPL session lost all of its runtime state. Simplified reproducer: ``` >>> import sys >>> mod = sys.modules[__name__] >>> sys.modules[__name__] = object() >>> dir() Traceback (most recent call last): File "", line 1, in NameError: name 'dir' is not defined ``` (Initially encountered on Python 2.7, reproduced on Python 3.7) If I'd just dropped the reference to `__main__` entirely, that would make sense (since modules clear their namespaces when they go away), but I didn't: I saved a reference in a local variable first. So it appears the CPython REPL isn't keeping a strong reference to either `__main__` or `__main__.__dict__` between statements, so the cyclic GC kicked in and decided the module could be destroyed. ---------- messages: 340516 nosy: ncoghlan priority: normal severity: normal stage: test needed status: open title: Dropping __main__ from sys.modules clears the REPL namespace type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 18 21:36:15 2019 From: report at bugs.python.org (Joel Croteau) Date: Fri, 19 Apr 2019 01:36:15 +0000 Subject: [New-bugs-announce] [issue36666] threading.Thread should have way to catch an exception thrown within Message-ID: <1555637775.9.0.13238688405.issue36666@roundup.psfhosted.org> New submission from Joel Croteau : This has been commented on numerous times by others (https://stackoverflow.com/questions/2829329/catch-a-threads-exception-in-the-caller-thread-in-python, http://benno.id.au/blog/2012/10/06/python-thread-exceptions, to name a few), but there is no in-built mechanism in threading to catch an unhandled exception thrown by a thread. 
The default behavior of dumping to stderr is completely useless for error handling in many scenarios. Solutions do exist, but I have yet to see one that is not exceptionally complicated. It seems like checking for exceptions should be a very basic part of any threading library. The simplest solution would be to just have the Thread store any unhandled exceptions and have them raised by Thread.join(). There could also be additional methods to check if exceptions were raised. ---------- components: Library (Lib) messages: 340520 nosy: Joel Croteau priority: normal severity: normal status: open title: threading.Thread should have way to catch an exception thrown within versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 19 07:07:53 2019 From: report at bugs.python.org (daniel hahler) Date: Fri, 19 Apr 2019 11:07:53 +0000 Subject: [New-bugs-announce] [issue36667] pdb: restore SIGINT handler in sigint_handler already Message-ID: <1555672073.52.0.288169385007.issue36667@roundup.psfhosted.org> New submission from daniel hahler : Without this, and additional SIGINT while waiting for the next statement (e.g. during `time.sleep`) will stop at `sigint_handler`. With this patch: > ?/t-pdb-sigint-in-sleep.py(10)() -> sleep() (Pdb) c ^C Program interrupted. (Use 'cont' to resume). ^CKeyboardInterrupt > ?/t-pdb-sigint-in-sleep.py(6)sleep() -> time.sleep(10) (Pdb) Without this patch: > ?/t-pdb-sigint-in-sleep.py(10)() -> sleep() (Pdb) c ^C Program interrupted. (Use 'cont' to resume). ^C--Call-- > ?/cpython/Lib/pdb.py(188)sigint_handler() -> def sigint_handler(self, signum, frame): (Pdb) This was changed / regressed in https://github.com/python/cpython/commit/10e54aeaa234f2806b367c66e3fb4ac6568b39f6 (3.5.3rc1?), when it was moved while fixing issue 20766. ---------- components: Library (Lib) messages: 340539 nosy: blueyed priority: normal severity: normal status: open title: pdb: restore SIGINT handler in sigint_handler already type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 19 09:44:01 2019 From: report at bugs.python.org (Thomas Moreau) Date: Fri, 19 Apr 2019 13:44:01 +0000 Subject: [New-bugs-announce] [issue36668] semaphore_tracker is not reused by child processes Message-ID: <1555681441.41.0.870057455059.issue36668@roundup.psfhosted.org> New submission from Thomas Moreau : The current implementation of the semaphore_tracker creates a new process for each children. The easy fix would be to pass the _pid to the children but the current mechanism to check if the semaphore_tracker is alive relies on waitpid which cannot be used in child processes (the semaphore_tracker is only a sibling of these processes). The main issue is to have a reliable check that either: The pipe is open. This is what is done here by sending a message. I don't know if there is a more efficient way to check it. Check that a given pid is alive. As we cannot rely on waitpid, I don't see an efficient mechanism. I propose to add a PROBE command in the semaphore tracker. When the pipe is closed, the send command will fail, meaning that the semaphore tracker is down. 
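[Not from the report: to make the proposed check concrete, a minimal sketch of a liveness probe over the tracker pipe, assuming the child only holds the write end of the pipe and cannot waitpid() on the tracker; the PROBE command name is the reporter's proposal, not an existing multiprocessing API.]

    import os

    def _check_tracker_alive(pipe_fd):
        """Best-effort check that some process still holds the read end of pipe_fd."""
        try:
            # If every reader is gone, os.write() raises BrokenPipeError
            # (CPython ignores SIGPIPE), meaning the tracker must be restarted.
            os.write(pipe_fd, b'PROBE:0\n')
        except BrokenPipeError:
            return False
        return True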
---------- components: Library (Lib) messages: 340543 nosy: tomMoral priority: normal severity: normal status: open title: semaphore_tracker is not reused by child processes type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 19 10:08:57 2019 From: report at bugs.python.org (Dan Snider) Date: Fri, 19 Apr 2019 14:08:57 +0000 Subject: [New-bugs-announce] [issue36669] weakref proxy doesn't support the matrix multiplication operator Message-ID: <1555682937.13.0.0196154562382.issue36669@roundup.psfhosted.org> Change by Dan Snider : ---------- nosy: bup priority: normal severity: normal status: open title: weakref proxy doesn't support the matrix multiplication operator _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 19 11:52:37 2019 From: report at bugs.python.org (Lorenz Mende) Date: Fri, 19 Apr 2019 15:52:37 +0000 Subject: [New-bugs-announce] [issue36670] test suite broken due to cpu usage feature on win 10/ german Message-ID: <1555689157.38.0.435665423105.issue36670@roundup.psfhosted.org> New submission from Lorenz Mende : The test suite fails with the first tests (I assume the 1st call of getloadavg of WindowsLoadTracker). Traceback (most recent call last): File "P:\Repos\CPython\cpython\lib\runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, File "P:\Repos\CPython\cpython\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "P:\Repos\CPython\cpython\lib\test\__main__.py", line 2, in main() File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 653, in main Regrtest().main(tests=tests, **kwargs) File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 586, in main self._main(tests, kwargs) File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 632, in _main self.run_tests() File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 515, in run_tests self.run_tests_sequential() File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 396, in run_tests_sequential self.display_progress(test_index, text) File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 150, in display_progress load_avg_1min = self.getloadavg() File "P:\Repos\CPython\cpython\lib\test\libregrtest\win_utils.py", line 81, in getloadavg typeperf_output = self.read_output() File "P:\Repos\CPython\cpython\lib\test\libregrtest\win_utils.py", line 78, in read_output return response.decode() UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 67: invalid start byte ########################################################## The Windows 'typeperf "\System\Processor Queue Length" -si 1' command unluckily returns a string with an umlaut, which leads to the decode error. This comes up because the counter name used by typeperf is locale dependent. (In German the counter would read \System\Prozessor-Warteschlangenlänge) I see two possible solutions to this issue. 1. Raising an exception earlier on creation of WindowsLoadTracker resulting in the same behaviour as if there is no typeperf available (German pythoneers would have a drawback with this) 2.
Getting the typeperf counter name correctly from the registry (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\CurrentLanguage, described here https://social.technet.microsoft.com/Forums/de-DE/25bc6907-cf2c-4dc8-8687-974b799ba754/powershell-ausgabesprache-umstellen?forum=powershell_de) environment: Windows 10 x64, 1809, german cpython @e16467af0bfcc9f399df251495ff2d2ad20a1669 commit of assumed root cause of https://bugs.python.org/issue34060 ---------- components: Tests messages: 340547 nosy: LorenzMende priority: normal severity: normal status: open title: test suite broken due to cpu usage feature on win 10/ german type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 03:02:43 2019 From: report at bugs.python.org (Kadam Parikh) Date: Sat, 20 Apr 2019 07:02:43 +0000 Subject: [New-bugs-announce] [issue36671] str.lower() looses character information when working with UTF-8 Message-ID: <1555743763.0.0.340048392729.issue36671@roundup.psfhosted.org> New submission from Kadam Parikh : When converting a particular UTF-8 character "İ" to lowercase, it doesn't behave correctly. It returns two lowercase characters instead of one. This is not as desired.
---------- assignee: scoder components: Library (Lib), XML messages: 340565 nosy: scoder priority: normal severity: normal stage: needs patch status: open title: Comment/PI parsing support for ElementTree type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 05:20:54 2019 From: report at bugs.python.org (Dieter Maurer) Date: Sat, 20 Apr 2019 09:20:54 +0000 Subject: [New-bugs-announce] [issue36674] "unittest.TestCase.debug" should honour "skip" (and other test controls) Message-ID: <1555752054.75.0.256314796752.issue36674@roundup.psfhosted.org> New submission from Dieter Maurer : Currently, "TestCase.run" supports several features to control testing - among others, a test can be skipped via the attribute "__unittest_skip__". "TestCase.debug" ignores all those controls and calls the test method unconditionally. I am using "zope.testrunner" to run test suites. Its "-D" option switches from "TestCase.run" to "TestCase.debug" in order to allow the analysis of the state of a failing test in the Python debugger. "-D" is typically used if a test in a larger suite failed and a detailed analysis is required to determine the failure's cause. It is important that this second run executes the same tests as the first run; it is not helpful when the second run fails in a test skipped in the first run. Therefore, "TestCase.debug" should honour all test controls supported by "TestCase.run". One could argue that the testsuite runner should implement this logic. However, this would force the runner to duplicate the test control logic using internal implementation details of "unittest". Conceptually, it is much nicer to have the test control encapsulated by "unittest". ---------- components: Library (Lib) messages: 340569 nosy: dmaurer priority: normal severity: normal status: open title: "unittest.TestCase.debug" should honour "skip" (and other test controls) type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 06:43:50 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Sat, 20 Apr 2019 10:43:50 +0000 Subject: [New-bugs-announce] [issue36675] Doctest directives and comments not visible or missing from code samples Message-ID: <1555757030.56.0.8094269644.issue36675@roundup.psfhosted.org> New submission from Steven D'Aprano : (Apologies if this is the wrong place for reporting website bugs.) The website is not rendering doctest directives or comments, either that or the comments have been stripped from the examples. On the doctest page itself, all the comments are missing: https://docs.python.org/3/library/doctest.html#directives The first example says: >>> print(list(range(20))) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] but without the directive, the test would fail. Screen shot attached. Doctest directives are also missing from here: https://docs.python.org/3/library/ctypes.html My browser: Firefox 45.1.1 Also checked with text browser "lynx". 
---------- assignee: docs at python components: Documentation files: missing_directives.png messages: 340570 nosy: docs at python, steven.daprano priority: normal severity: normal status: open title: Doctest directives and comments not visible or missing from code samples type: behavior Added file: https://bugs.python.org/file48277/missing_directives.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 07:06:44 2019 From: report at bugs.python.org (Stefan Behnel) Date: Sat, 20 Apr 2019 11:06:44 +0000 Subject: [New-bugs-announce] [issue36676] Make TreeBuilder aware of namespace prefixes Message-ID: <1555758404.88.0.535141671486.issue36676@roundup.psfhosted.org> New submission from Stefan Behnel : The XMLPullParser has 'start-ns' and 'end-ns' events, but the parser targets don't see them. They should have "start_ns()" and "end_ns()" callback methods to allow namespace prefix aware parsing. ---------- assignee: scoder components: Library (Lib), XML messages: 340571 nosy: scoder priority: normal severity: normal status: open title: Make TreeBuilder aware of namespace prefixes type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 09:25:45 2019 From: report at bugs.python.org (Manjusaka) Date: Sat, 20 Apr 2019 13:25:45 +0000 Subject: [New-bugs-announce] [issue36677] support visual studio multiprocess compile Message-ID: <1555766745.04.0.89455558055.issue36677@roundup.psfhosted.org> New submission from Manjusaka : Support multiprocess compile when the developer uses the Visual studio on the Windows ---------- components: Windows messages: 340573 nosy: Manjusaka, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: support visual studio multiprocess compile versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 11:55:28 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 15:55:28 +0000 Subject: [New-bugs-announce] [issue36678] duplicate method definitions in Lib/test/test_dataclasses.py Message-ID: <1555775728.19.0.814114535778.issue36678@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following methodes are duplicates: Lib/test/test_dataclasses.py:700 TestCase.test_not_tuple Lib/test/test_dataclasses.py:1406 TestCase.test_helper_asdict_builtin_containers Lib/test/test_dataclasses.py:1579 TestCase.test_helper_astuple_builtin_containers Lib/test/test_dataclasses.py:3245 TestReplace.test_recursive_repr_two_attrs ---------- components: Library (Lib) messages: 340578 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definitions in Lib/test/test_dataclasses.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 11:59:38 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 15:59:38 +0000 Subject: [New-bugs-announce] [issue36679] duplicate method definition in Lib/test/test_genericclass.py Message-ID: <1555775978.71.0.520201003479.issue36679@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: 
Lib/test/test_genericclass.py:161 TestClassGetitem.test_class_getitem ---------- components: Library (Lib) messages: 340579 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/test/test_genericclass.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 12:02:29 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 16:02:29 +0000 Subject: [New-bugs-announce] [issue36680] duplicate method definition in Lib/test/test_importlib/test_util.py Message-ID: <1555776149.55.0.853112327072.issue36680@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/test/test_importlib/test_util.py:755 PEP3147Tests.test_source_from_cache_path_like_arg ---------- components: Library (Lib) messages: 340580 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/test/test_importlib/test_util.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 12:04:43 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 16:04:43 +0000 Subject: [New-bugs-announce] [issue36681] duplicate method definition in Lib/test/test_logging.py Message-ID: <1555776283.44.0.382290590702.issue36681@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/test/test_logging.py:328 BuiltinLevelsTest.test_regression_29220 ---------- components: Library (Lib) messages: 340581 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/test/test_logging.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 12:08:20 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 16:08:20 +0000 Subject: [New-bugs-announce] [issue36682] duplicate method definitions in Lib/test/test_sys_setprofile.py Message-ID: <1555776500.63.0.359750321582.issue36682@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following methods are duplicates: Lib/test/test_sys_setprofile.py:354 ProfileSimulatorTestCase.test_unbound_method_no_args Lib/test/test_sys_setprofile.py:363 ProfileSimulatorTestCase.test_unbound_method_invalid_args ---------- components: Library (Lib) messages: 340582 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definitions in Lib/test/test_sys_setprofile.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 12:10:14 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 20 Apr 2019 16:10:14 +0000 Subject: [New-bugs-announce] [issue36683] duplicate method definition in Lib/test/test_utf8_mode.py Message-ID: <1555776614.31.0.832242060156.issue36683@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/test/test_utf8_mode.py: UTF8ModeTests.test_io_encoding ---------- components: Library (Lib) messages: 
340583 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/test/test_utf8_mode.py type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 19:55:19 2019 From: report at bugs.python.org (Gordon P. Hemsley) Date: Sat, 20 Apr 2019 23:55:19 +0000 Subject: [New-bugs-announce] [issue36684] codecov.io code coverage has not updated since 2019-04-13 Message-ID: <1555804519.07.0.0840911433327.issue36684@roundup.psfhosted.org> New submission from Gordon P. Hemsley : The last commit available on codecov.io is from a week ago (d28aaa7df8bcd46f4135d240d041b0b171b664cc): https://codecov.io/gh/python/cpython And the widget on the README is showing a status of "unknown". ---------- messages: 340588 nosy: gphemsley priority: normal severity: normal status: open title: codecov.io code coverage has not updated since 2019-04-13 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 20:07:40 2019 From: report at bugs.python.org (Gordon P. Hemsley) Date: Sun, 21 Apr 2019 00:07:40 +0000 Subject: [New-bugs-announce] [issue36685] C implementation of xml.etree.ElementTree does not make a copy of attrib argument when creating new Element Message-ID: <1555805260.45.0.823683701543.issue36685@roundup.psfhosted.org> New submission from Gordon P. Hemsley : In the process of investigating and writing tests for issue32424, I discovered that the C implementation of xml.etree.ElementTree does not make a copy of the attrib argument when creating a new element, allowing the attributes of the element to be modified outside of creation. The Python implementation does not have this problem. ---------- components: Library (Lib), XML messages: 340590 nosy: asvetlov, eli.bendersky, gphemsley, mdk, p-ganssle, r.david.murray, scoder, serhiy.storchaka, thatiparthy priority: normal severity: normal status: open title: C implementation of xml.etree.ElementTree does not make a copy of attrib argument when creating new Element type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 21:04:45 2019 From: report at bugs.python.org (Simon Bernier St-Pierre) Date: Sun, 21 Apr 2019 01:04:45 +0000 Subject: [New-bugs-announce] [issue36686] Docs: asyncio.loop.subprocess_exec documentation is confusing, it's not clear how to inherit stdin, stdout or stderr in the subprocess Message-ID: <1555808685.63.0.20312678962.issue36686@roundup.psfhosted.org> New submission from Simon Bernier St-Pierre : I had trouble figuring out how to simply inherit stdin, stdout, or stderr in the asyncio.create_subprocess_exec / asyncio.subprocess_exec docs. My experiments show that passing either None or `sys.std*` works, but the way the docs are written makes it hard to figure that out in my opinion. > stdout: either a file-like object representing the pipe to be connected to the subprocess's standard output stream using connect_read_pipe(), or the subprocess.PIPE constant (default). By default a new pipe will be created and connected. I would add a mention that using None makes the subprocess inherit the file descriptor.
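To make this concrete, here is a minimal sketch of the behaviour described above (my own illustration, not text from the docs; the child commands are arbitrary examples):

import asyncio
import sys

async def main():
    # stdout=None: the child inherits this process's stdout, so its
    # output goes straight to the terminal without a pipe.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('inherited stdout')",
        stdout=None)
    await proc.wait()

    # stdout=asyncio.subprocess.PIPE: the output is captured instead.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('captured stdout')",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    print("captured:", out)

asyncio.get_event_loop().run_until_complete(main())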
---------- components: asyncio messages: 340593 nosy: asvetlov, sbstp, yselivanov priority: normal severity: normal status: open title: Docs: asyncio.loop.subprocess_exec documentation is confusing, it's not clear how to inherit stdin, stdout or stderr in the subprocess versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 20 21:14:55 2019 From: report at bugs.python.org (Simon Bernier St-Pierre) Date: Sun, 21 Apr 2019 01:14:55 +0000 Subject: [New-bugs-announce] [issue36687] subprocess encoding Message-ID: <1555809295.79.0.124868679874.issue36687@roundup.psfhosted.org> Change by Simon Bernier St-Pierre : ---------- nosy: sbstp priority: normal severity: normal status: open title: subprocess encoding _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 02:20:08 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 21 Apr 2019 06:20:08 +0000 Subject: [New-bugs-announce] [issue36688] import dummy_threading causes ImportError Message-ID: <1555827608.59.0.767546227499.issue36688@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : importing dummy_threading causes ImportError. It used to work on 3.6. There are tests at Lib/test/test_dummy_threading.py and rearranging the import so that "import dummy_threading as _threading" is the first line also causes error. This module was deprecated from 3.7 with respect to threading enabled always but I thought to add a report anyway. Looking at git log it seems a6a4dc816d68df04a7d592e0b6af8c7ecc4d4344 did some changes where catching the ImportError on Lib/functools.py was removed that could be causing this issue. Importing functools before dummy_threading works. # master with functools imported before dummy_threading ? cpython git:(master) ? 
./python.exe -c 'import functools; import dummy_threading; print("hello")' hello # Python 3.6 $ python3.6 -c 'import dummy_threading; print("hello")' hello # Python 3.7 $ python3.7 -c 'import dummy_threading' Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/dummy_threading.py", line 45, in import threading File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 8, in from traceback import format_exc as _format_exc File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/traceback.py", line 5, in import linecache File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/linecache.py", line 8, in import functools File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/functools.py", line 24, in from _thread import RLock ImportError: cannot import name 'RLock' from '_dummy_thread' (/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/_dummy_thread.py) # master $ cpython git:(master) ./python.exe -c 'import dummy_threading' Traceback (most recent call last): File "", line 1, in File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/dummy_threading.py", line 45, in import threading File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/threading.py", line 8, in from traceback import format_exc as _format_exc File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/traceback.py", line 5, in import linecache File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/linecache.py", line 8, in import functools File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/functools.py", line 20, in from _thread import RLock ImportError: cannot import name 'RLock' from '_dummy_thread' (/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/_dummy_thread.py) # Patch to move dummy_threading import as first line diff --git a/Lib/test/test_dummy_threading.py b/Lib/test/test_dummy_threading.py index a0c2972a60..dc40abeda5 100644 --- a/Lib/test/test_dummy_threading.py +++ b/Lib/test/test_dummy_threading.py @@ -1,6 +1,6 @@ +import dummy_threading as _threading from test import support import unittest -import dummy_threading as _threading import time class DummyThreadingTestCase(unittest.TestCase): ? cpython git:(master) ? 
./python.exe Lib/test/test_dummy_threading.py Traceback (most recent call last): File "Lib/test/test_dummy_threading.py", line 1, in import dummy_threading as _threading File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/dummy_threading.py", line 45, in import threading File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/threading.py", line 8, in from traceback import format_exc as _format_exc File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/traceback.py", line 5, in import linecache File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/linecache.py", line 8, in import functools File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/functools.py", line 20, in from _thread import RLock ImportError: cannot import name 'RLock' from '_dummy_thread' (/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/_dummy_thread.py) ---------- components: Library (Lib) messages: 340597 nosy: brett.cannon, pitrou, xtreak priority: normal severity: normal status: open title: import dummy_threading causes ImportError type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 05:09:18 2019 From: report at bugs.python.org (Christoph Reiter) Date: Sun, 21 Apr 2019 09:09:18 +0000 Subject: [New-bugs-announce] [issue36689] docs: os.path.commonpath raises ValueError for different drives Message-ID: <1555837758.83.0.507858438764.issue36689@roundup.psfhosted.org> New submission from Christoph Reiter : Since I just got bit by this despite reading the docs: https://docs.python.org/3.8/library/os.path.html#os.path.commonpath It lists various error cases where ValueError is raised but is missing the case where absolute paths on Windows are on different drives and I forgot to handle that: File "C:/building/msys64/mingw64/lib/python3.7\ntpath.py", line 631, in commonpath raise ValueError("Paths don't have the same drive") ValueError: Paths don't have the same drive ---------- assignee: docs at python components: Documentation messages: 340604 nosy: docs at python, lazka priority: normal severity: normal status: open title: docs: os.path.commonpath raises ValueError for different drives type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 08:52:55 2019 From: report at bugs.python.org (jiawei zhou) Date: Sun, 21 Apr 2019 12:52:55 +0000 Subject: [New-bugs-announce] [issue36690] A typing error in demo rpython.py Message-ID: <1555851175.4.0.109460284038.issue36690@roundup.psfhosted.org> New submission from jiawei zhou : Hi. There is an error in file `Tools/demo/rpython.py` at line 22. The original statement is `port = int(port[i+1:])`, but it will crash if the port is specified as parameters. The correct code should be `port = int(host[i+1:])`. Then the program can read specified port from parameter sys.argv[1]. 
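To illustrate the fix, the host[:port] handling presumably looks something like the sketch below (the surrounding names and the default port are assumptions for illustration, not the actual demo code):

import sys

host = sys.argv[1] if len(sys.argv) > 1 else 'localhost'
port = 4127  # assumed default
i = host.find(':')
if i >= 0:
    port = int(host[i+1:])  # read the port from 'host', as suggested above
    host = host[:i]
print(host, port)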
---------- components: Demos and Tools messages: 340606 nosy: jiawei zhou priority: normal severity: normal status: open title: A typing error in demo rpython.py type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 09:04:32 2019 From: report at bugs.python.org (Thomas Kluyver) Date: Sun, 21 Apr 2019 13:04:32 +0000 Subject: [New-bugs-announce] [issue36691] SystemExit & sys.exit : Allow both exit status and message Message-ID: <1555851872.77.0.954312917312.issue36691@roundup.psfhosted.org> New submission from Thomas Kluyver : The SystemExit exception, and consequently the sys.exit() function, take a single parameter which is either an integer exit status for the process, or a message to print to stderr before exiting - in which case the exit status is implicitly 1. In certain situations, it would be useful to pass both an exit status and a message. E.g. when argparse handles '--help', it wants to display a message and exit successfully (status 0). You may also use specific exit codes to indicate different kinds of failure. Printing the message separately before raising SystemExit is not an entirely satisfactory subsitute, because the message attached to the exception is only printed if it is unhandled. E.g. for testing code that may raise SystemExit, it's useful to have the message as part of the exception. I imagine that the trickiest bit of changing this would be ensuring as much backwards compatibility as possible. In particular, SystemExit exceptions have a 'code' attribute which can be either the exit status or the message. ---------- messages: 340607 nosy: takluyver priority: normal severity: normal status: open title: SystemExit & sys.exit : Allow both exit status and message type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 09:33:08 2019 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 21 Apr 2019 13:33:08 +0000 Subject: [New-bugs-announce] [issue36692] Unexpected stderr output from test_sys_settrace Message-ID: <1555853588.91.0.708286420314.issue36692@roundup.psfhosted.org> New submission from Nick Coghlan : The test output from test_sys_settrace makes it look like a couple of the async tracing tests aren't cleaning up after themselves properly: ``` [ncoghlan at localhost cpython]$ ./python -m test test_sys_settrace Run tests sequentially 0:00:00 load avg: 1.27 [1/1] test_sys_settrace unhandled exception during asyncio.run() shutdown task: ()> exception=RuntimeError("can't send non-None value to a just-started coroutine")> RuntimeError: can't send non-None value to a just-started coroutine unhandled exception during asyncio.run() shutdown task: ()> exception=RuntimeError("can't send non-None value to a just-started coroutine")> RuntimeError: can't send non-None value to a just-started coroutine == Tests result: SUCCESS == 1 test OK. Total duration: 102 ms Tests result: SUCCESS ``` If that output is actually expected as part of the test, it would be helpful if the test printed a message beforehand saying to expect it. Otherwise, it would be desirable for the test to clean up after itself and keep the messages from being displayed in the first place. 
---------- components: Tests messages: 340608 nosy: asvetlov, ncoghlan, yselivanov priority: low severity: normal stage: needs patch status: open title: Unexpected stderr output from test_sys_settrace type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 12:28:47 2019 From: report at bugs.python.org (Donald Hobson) Date: Sun, 21 Apr 2019 16:28:47 +0000 Subject: [New-bugs-announce] [issue36693] Minor inconsistency with types. Message-ID: <1555864127.32.0.643984814529.issue36693@roundup.psfhosted.org> New submission from Donald Hobson : Almost all of Python makes the abstraction that ints are a single type of thing; the fact that some ints are too big to store in 8 bytes of memory is abstracted away. This abstraction fails when you try to reverse large ranges. >>> reversed(range(1<<63)) >>> reversed(range(1<<63-1)) >>> type(reversed(range(1<<63-1))) >>> type(reversed(range(1<<63))) >>> type(reversed(range(1<<63-2)))==type(reversed(range(1<<63-1))) True >>> type(reversed(range(1<<63-1)))==type(reversed(range(1<<63))) False ---------- components: Interpreter Core messages: 340614 nosy: Donald Hobson priority: normal severity: normal status: open title: Minor inconsistency with types. type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 21 13:11:20 2019 From: report at bugs.python.org (Paul Ellenbogen) Date: Sun, 21 Apr 2019 17:11:20 +0000 Subject: [New-bugs-announce] [issue36694] Excessive memory use or memory fragmentation when unpickling many small objects Message-ID: <1555866680.03.0.817045016777.issue36694@roundup.psfhosted.org> New submission from Paul Ellenbogen : Python encounters significant memory fragmentation when unpickling many small objects. I have attached two scripts that I believe demonstrate the issue. When you run "dump.py" it will generate a large list of namedtuples, then write that list to a file using pickle. Before it does so, it pauses for user input. Before exiting the script you can view the memory usage in htop or whatever your preferred method is. The "load.py" script loads the file written by dump.py. After loading the data is complete, it waits for user input. The memory usage at the point where the script is waiting for user input is (more than) twice as much in the "load" case as the "dump" case. The small objects in the list I am storing have 3 values, and I have tested three alternative representations: tuple, namedtuple, and a custom class. The namedtuple and custom class both have the memory use/fragmentation issue. The built-in tuple type does not have this issue. Using optimize in pickletools doesn't seem to make a difference. Matthew Cowles from the Python help list had some good suggestions, and found that the object sizes themselves, as observed by sys.getsizeof, were different before and after pickling. Perhaps this is something other than memory fragmentation, or something in addition to memory fragmentation. Although the high-water mark is similar for both scripts, the pickling script settles down to a reasonably smaller memory footprint. I would still consider the long-run memory waste of unpickling a bug. For example, in my use case I will run one instance of the equivalent of the pickling script, then run many instances of the script that unpickles. These scripts were run with Python 3.6.7 (GCC 8.2.0) on Ubuntu 18.10.
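The attached scripts are not reproduced in this digest; a minimal sketch of the dump/load pattern being described (an illustration only, not the attached dump.py/load.py) would be:

import pickle
from collections import namedtuple

Row = namedtuple('Row', ['a', 'b', 'c'])

def dump(path='rows.pkl', n=1_000_000):
    rows = [Row(i, i + 1, i + 2) for i in range(n)]
    with open(path, 'wb') as f:
        pickle.dump(rows, f)
    input('dump done, check memory usage, then press Enter ')

def load(path='rows.pkl'):
    with open(path, 'rb') as f:
        rows = pickle.load(f)
    input('load done, check memory usage, then press Enter ')
    return rows

Comparing the resident memory of a process that calls dump() against one that calls load() is the kind of measurement the report describes.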
---------- components: Library (Lib) files: dump.py messages: 340615 nosy: Ellenbogen, alexandre.vassalotti priority: normal severity: normal status: open title: Excessive memory use or memory fragmentation when unpickling many small objects type: resource usage versions: Python 3.6 Added file: https://bugs.python.org/file48278/dump.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 00:40:27 2019 From: report at bugs.python.org (Brian Skinn) Date: Mon, 22 Apr 2019 04:40:27 +0000 Subject: [New-bugs-announce] [issue36695] Change (regression?) in v3.8.0a3 doctest output after capturing the stderr output from a raised warning Message-ID: <1555908027.52.0.640391164815.issue36695@roundup.psfhosted.org> New submission from Brian Skinn : In [this project](https://github.com/bskinn/stdio-mgr) of mine, I have a tox matrix set up with Pythons from 3.3. to 3.8. I have pytest set up to run doctest on my [`README.rst`](https://github.com/bskinn/stdio-mgr/blob/6444cce8e5866e2d519c1c0630551d8867f30c9a/README.rst). For Pythons 3.4 to 3.7 (3.4.10, 3.5.7, 3.6.8, 3.7.2), the following doctest example passes: ``` >>> import warnings >>> with stdio_mgr() as (in_, out_, err_): ... warnings.warn("'foo' has no 'bar'") ... err_cap = err_.getvalue() >>> err_cap "...UserWarning: 'foo' has no 'bar'\n..." ``` Under Python 3.8.0a3, though, it fails (actual local paths elided): ``` $ tox -re py38-attrs_latest .package recreate: .../.tox/.package .package installdeps: wheel, setuptools, attrs>=17.1 py38-attrs_latest recreate: .../.tox/py38-attrs_latest py38-attrs_latest installdeps: attrs, pytest py38-attrs_latest inst: .../.tox/.tmp/package/1/stdio-mgr-1.0.2.dev1.tar.gz py38-attrs_latest installed: atomicwrites==1.3.0,attrs==19.1.0,more-itertools==7.0.0,pluggy==0.9.0,py==1.8.0,pytest==4.4.1,six==1.12.0,stdio-mgr==1.0.2.dev1 py38-attrs_latest run-test-pre: PYTHONHASHSEED='2720295779' py38-attrs_latest run-test: commands[0] | pytest =============================================================================================== test session starts ================================================================================================ platform linux -- Python 3.8.0a3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0 cachedir: .tox/py38-attrs_latest/.pytest_cache rootdir: ..., inifile: tox.ini collected 6 items README.rst F [ 16%] tests/test_stdiomgr_base.py ..... [100%] ===================================================================================================== FAILURES ===================================================================================================== _______________________________________________________________________________________________ [doctest] README.rst _______________________________________________________________________________________________ 077 078 **Mock** ``stderr``\ **:** 079 080 .. code :: 081 082 >>> import warnings 083 >>> with stdio_mgr() as (in_, out_, err_): 084 ... warnings.warn("'foo' has no 'bar'") 085 ... err_cap = err_.getvalue() 086 >>> err_cap Expected: "...UserWarning: 'foo' has no 'bar'\n..." 
Got: ':2: UserWarning: \'foo\' has no \'bar\'\n warnings.warn("\'foo\' has no \'bar\'")\n' .../README.rst:86: DocTestFailure ======================================================================================== 1 failed, 5 passed in 0.06 seconds ======================================================================================== ERROR: InvocationError for command .../.tox/py38-attrs_latest/bin/pytest (exited with code 1) _____________________________________________________________________________________________________ summary ______________________________________________________________________________________________________ ERROR: py38-attrs_latest: commands failed ``` If I change the doctest in README to the following, where the expected output is surrounded by single-quotes instead of double-quotes, and the internal single quotes are escaped, it passes fine in 3.8.0a3: ``` >>> import warnings >>> with stdio_mgr() as (in_, out_, err_): ... warnings.warn("'foo' has no 'bar'") ... err_cap = err_.getvalue() >>> err_cap '...UserWarning: \'foo\' has no \'bar\'\n...' ``` But, naturally, it fails in 3.7 and below. It *looks* like this is probably a glitch somewhere in 3.8.0a3, where this string containing single quotes is rendered (at the REPL?) using enclosing single quotes and escaped internal single quotes, rather than enclosing double-quotes and non-escaped internal single-quotes? ---------- components: Library (Lib) messages: 340637 nosy: bskinn priority: normal severity: normal status: open title: Change (regression?) in v3.8.0a3 doctest output after capturing the stderr output from a raised warning type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 02:58:37 2019 From: report at bugs.python.org (Michael Felt) Date: Mon, 22 Apr 2019 06:58:37 +0000 Subject: [New-bugs-announce] [issue36696] possible multiple regressions on AIX Message-ID: <1555916317.7.0.351129409796.issue36696@roundup.psfhosted.org> New submission from Michael Felt : My AIX bot has been very consistent - only the multiprocessing tests failing when run by bot, but 4 or 5 days ago 3 to 5 additional tests - that, afaik, had never failed before, are now failing. These may also be compiler related specifics, or the presence (better lack of) 3rd party packages. Anyway - summary of output: INFO: Can't locate Tcl/Tk libs and/or headers building '_pickle' extension xlc_r -O2 -I/opt/include -I/opt/buildaix/include -g -I/opt/include -I/opt/buildaix/include -g -I./Include/internal -I./Include -I. -I/opt/include -I/opt/buildaix/include -I/home/buildbot/cpython-master/Include -I/home/buildbot/cpython-master -c /home/buildbot/cpython-master/Modules/_pickle.c -o build/temp.aix-7.1-3.8-pydebug/home/buildbot/cpython-master/Modules/_pickle.o -D Py_BUILD_CORE_MODULE 1506-261 (W) Suboption Py_BUILD_CORE_MODULE is not valid for option D. "/home/buildbot/cpython-master/Modules/_pickle.c", line 8.4: 1506-205 (S) #error "Py_BUILD_CORE_BUILTIN or Py_BUILD_CORE_MODULE must be defined" building '_json' extension xlc_r -O2 -I/opt/include -I/opt/buildaix/include -g -I/opt/include -I/opt/buildaix/include -g -I./Include/internal -I./Include -I. 
-I/opt/include -I/opt/buildaix/include -I/home/buildbot/cpython-master/Include -I/home/buildbot/cpython-master -c /home/buildbot/cpython-master/Modules/_json.c -o build/temp.aix-7.1-3.8-pydebug/home/buildbot/cpython-master/Modules/_json.o -D Py_BUILD_CORE_MODULE 1506-261 (W) Suboption Py_BUILD_CORE_MODULE is not valid for option D. "/home/buildbot/cpython-master/Modules/_json.c", line 8.4: 1506-205 (S) #error "Py_BUILD_CORE_BUILTIN or Py_BUILD_CORE_MODULE must be defined" "./Include/internal/pycore_accu.h", line 13.4: 1506-205 (S) #error "this header requires Py_BUILD_CORE define" building '_testinternalcapi' extension xlc_r -O2 -I/opt/include -I/opt/buildaix/include -g -I/opt/include -I/opt/buildaix/include -g -I./Include/internal -I./Include -I. -I/opt/include -I/opt/buildaix/include -I/home/buildbot/cpython-master/Include -I/home/buildbot/cpython-master -c /home/buildbot/cpython-master/Modules/_testinternalcapi.c -o build/temp.aix-7.1-3.8-pydebug/home/buildbot/cpython-master/Modules/_testinternalcapi.o -D Py_BUILD_CORE_MODULE 1506-261 (W) Suboption Py_BUILD_CORE_MODULE is not valid for option D. "/home/buildbot/cpython-master/Modules/_testinternalcapi.c", line 6.4: 1506-205 (S) #error "Py_BUILD_CORE_BUILTIN or Py_BUILD_CORE_MODULE must be defined" "./Include/internal/pycore_coreconfig.h", line 8.4: 1506-205 (S) #error "this header requires Py_BUILD_CORE define" Python build finished successfully! The necessary bits to build these optional modules were not found: _curses_panel _gdbm _tkinter ossaudiodev readline spwd To find the necessary bits, look in setup.py in detect_modules() for the module's name. The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time Failed to build these modules: _json _pickle _testinternalcapi running build_scripts copying and adjusting /home/buildbot/cpython-master/Tools/scripts/pydoc3 -> build/scripts-3.8 copying and adjusting /home/buildbot/cpython-master/Tools/scripts/idle3 -> build/scripts-3.8 copying and adjusting /home/buildbot/cpython-master/Tools/scripts/2to3 -> build/scripts-3.8 changing mode of build/scripts-3.8/pydoc3 from 644 to 755 changing mode of build/scripts-3.8/idle3 from 644 to 755 changing mode of build/scripts-3.8/2to3 from 644 to 755 renaming build/scripts-3.8/pydoc3 to build/scripts-3.8/pydoc3.8 renaming build/scripts-3.8/idle3 to build/scripts-3.8/idle3.8 renaming build/scripts-3.8/2to3 to build/scripts-3.8/2to3-3.8 ./python -E -c 'import sys ; from sysconfig import get_platform ; print("%s-%d.%d" % (get_platform(), *sys.version_info[:2]))' >platform ./python ./Tools/scripts/run_tests.py /home/buildbot/cpython-master/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui == CPython 3.8.0a3+ (heads/master:3e986de0d6, Apr 21 2019, 17:04:13) [C] ... 
running: test_venv (3 min 38 sec), test_decimal (2 min 27 sec), test_ftplib (30 sec 50 ms), test_zipfile (1 min 43 sec) 0:04:00 [ 40/420/1] test_ftplib failed -- running: test_venv (3 min 53 sec), test_decimal (2 min 42 sec), test_zipfile (1 min 58 sec) test test_ftplib failed -- Traceback (most recent call last): File "/home/buildbot/cpython-master/Lib/test/test_ftplib.py", line 605, in test_storlines self.client.storlines('stor', f) File "/home/buildbot/cpython-master/Lib/ftplib.py", line 526, in storlines conn.unwrap() File "/home/buildbot/cpython-master/Lib/ssl.py", line 1094, in unwrap s = self._sslobj.shutdown() socket.timeout: The read operation timed out 0:04:05 [ 41/420/1] test_locale passed -- running: test_venv (3 min 58 sec), test_decimal (2 min 47 sec), test_zipfile (2 min 3 sec) ... 0:12:14 [110/420/2] test_embed failed -- running: test_multiprocessing_forkserver (1 min 25 sec), test_compileall (2 min 9 sec) test test_embed failed -- multiple errors occurred; run in verbose mode for details running: test_multiprocessing_forkserver (1 min 55 sec), test_faulthandler (30 sec 42 ms), test_compile (56 sec 571 ms), test_compileall (2 min 39 sec) 0:12:55 [111/420/2] test_faulthandler passed (36 sec 350 ms) -- running: test_multiprocessing_forkserver (2 min 6 sec), test_compile (1 min 7 sec), test_compileall (2 min 50 sec) 0:13:46 [125/420/3] test_json failed -- running: test_multiprocessing_forkserver (2 min 57 sec) test test_json crashed -- Traceback (most recent call last): File "/home/buildbot/cpython-master/Lib/test/libregrtest/runtest.py", line 166, in runtest_inner the_module = importlib.import_module(abstest) File "/home/buildbot/cpython-master/Lib/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1014, in _gcd_import File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 671, in _load_unlocked File "", line 777, in exec_module File "", line 219, in _call_with_frames_removed File "/home/buildbot/cpython-master/Lib/test/test_json/__init__.py", line 12, in cjson.JSONDecodeError = cjson.decoder.JSONDecodeError = json.JSONDecodeError AttributeError: 'NoneType' object has no attribute 'JSONDecodeError' ... 0:24:49 [212/420/5] test_pydoc failed -- running: test_pyclbr (1 min 2 sec), test_subprocess (11 min 1 sec) test test_pydoc crashed -- Traceback (most recent call last): File "/home/buildbot/cpython-master/Lib/test/libregrtest/runtest.py", line 166, in runtest_inner the_module = importlib.import_module(abstest) File "/home/buildbot/cpython-master/Lib/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1014, in _gcd_import File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 671, in _load_unlocked File "", line 777, in exec_module File "", line 219, in _call_with_frames_removed File "/home/buildbot/cpython-master/Lib/test/test_pydoc.py", line 9, in import _pickle ModuleNotFoundError: No module named '_pickle' ... 
0:24:59 [214/420/6] test_pyclbr failed -- running: test_subprocess (11 min 12 sec) *** Pickler test test_pyclbr failed -- Traceback (most recent call last): File "/home/buildbot/cpython-master/Lib/test/test_pyclbr.py", line 227, in test_others cm('pickle', ignore=('partial',)) File "/home/buildbot/cpython-master/Lib/test/test_pyclbr.py", line 144, in checkModule self.assertHaskey(dict, name, ignore) File "/home/buildbot/cpython-master/Lib/test/test_pyclbr.py", line 48, in assertHaskey self.assertIn(key, obj) AssertionError: 'Pickler' not found in {'partial': , 'PickleError': , 'PicklingError': , 'UnpicklingError': , '_Stop': , '_Framer': , '_Unframer': , '_getattribute': , 'whichmodule': , 'encode _long': , 'decode_long': , '_Pickler': , '_Unpickler': , '_dump': , '_dumps': , '_load': , '_loads': , '_test': } ... 1:08:14 [398/420/8] test_inspect failed -- running: test_tools (10 min 44 sec), test_multiprocessing_spawn (11 min 35 sec), test_io (5 min 23 sec) test test_inspect crashed -- Traceback (most recent call last): File "/home/buildbot/cpython-master/Lib/test/libregrtest/runtest.py", line 166, in runtest_inner the_module = importlib.import_module(abstest) File "/home/buildbot/cpython-master/Lib/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1014, in _gcd_import File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 671, in _load_unlocked File "", line 777, in exec_module File "", line 219, in _call_with_frames_removed File "/home/buildbot/cpython-master/Lib/test/test_inspect.py", line 11, in import _pickle ModuleNotFoundError: No module named '_pickle' ... 388 tests OK. 9 tests failed: test_embed test_ftplib test_inspect test_json test_multiprocessing_fork test_multiprocessing_forkserver test_multiprocessing_spawn test_pyclbr test_pydoc ---------- messages: 340643 nosy: Michael.Felt priority: normal severity: normal status: open title: possible multiple regressions on AIX _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 04:49:29 2019 From: report at bugs.python.org (Noitul) Date: Mon, 22 Apr 2019 08:49:29 +0000 Subject: [New-bugs-announce] [issue36697] inspect.getclosurevars returns wrong globals dict Message-ID: <1555922969.45.0.450842121898.issue36697@roundup.psfhosted.org> New submission from Noitul : >>> import inspect >>> a = 0 >>> b = 1 >>> def abc(): >>> return a.b >>> print(inspect.getclosurevars(abc)) ClosureVars(nonlocals={}, globals={'a': 0, 'b': 1}, builtins={}, unbound=set()) Should "'b': 1" be in globals dict? ---------- components: Library (Lib) messages: 340645 nosy: Noitul priority: normal severity: normal status: open title: inspect.getclosurevars returns wrong globals dict versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 13:15:57 2019 From: report at bugs.python.org (TheMathsGod) Date: Mon, 22 Apr 2019 17:15:57 +0000 Subject: [New-bugs-announce] [issue36698] Shell restart when error message contains non-BMP characters Message-ID: <1555953357.75.0.808052597637.issue36698@roundup.psfhosted.org> New submission from TheMathsGod : When attempting to raise an error with a message containing non-BMP characters (Unicode ordinals above U+0xFFFF), the python shell displays a long traceback including several UnicodeEncodeErrors and then restarts. 
Example: >>> raise Exception('\U0001f603') Traceback (most recent call last): File "", line 1, in raise Exception('\U0001f603') Traceback (most recent call last): File "", line 1, in raise Exception('\U0001f603') Traceback (most recent call last): File "D:\Python37\lib\idlelib\run.py", line 474, in runcode exec(code, self.locals) File "", line 1, in Traceback (most recent call last): File "D:\Python37\lib\idlelib\run.py", line 474, in runcode exec(code, self.locals) File "", line 1, in Exception: During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Python37\lib\idlelib\run.py", line 144, in main ret = method(*args, **kwargs) File "D:\Python37\lib\idlelib\run.py", line 486, in runcode print_exception() File "D:\Python37\lib\idlelib\run.py", line 234, in print_exception print_exc(typ, val, tb) File "D:\Python37\lib\idlelib\run.py", line 232, in print_exc print(line, end='', file=efile) File "D:\Python37\lib\idlelib\run.py", line 362, in write return self.shell.write(s, self.tags) File "D:\Python37\lib\idlelib\rpc.py", line 608, in __call__ value = self.sockio.remotecall(self.oid, self.name, args, kwargs) File "D:\Python37\lib\idlelib\rpc.py", line 220, in remotecall return self.asyncreturn(seq) File "D:\Python37\lib\idlelib\rpc.py", line 251, in asyncreturn return self.decoderesponse(response) File "D:\Python37\lib\idlelib\rpc.py", line 271, in decoderesponse raise what UnicodeEncodeError: 'UCS-2' codec can't encode characters in position 11-11: Non-BMP character not supported in Tk During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Python37\lib\idlelib\run.py", line 158, in main print_exception() File "D:\Python37\lib\idlelib\run.py", line 234, in print_exception print_exc(typ, val, tb) File "D:\Python37\lib\idlelib\run.py", line 220, in print_exc print_exc(type(context), context, context.__traceback__) File "D:\Python37\lib\idlelib\run.py", line 232, in print_exc print(line, end='', file=efile) File "D:\Python37\lib\idlelib\run.py", line 362, in write return self.shell.write(s, self.tags) File "D:\Python37\lib\idlelib\rpc.py", line 608, in __call__ value = self.sockio.remotecall(self.oid, self.name, args, kwargs) File "D:\Python37\lib\idlelib\rpc.py", line 220, in remotecall return self.asyncreturn(seq) File "D:\Python37\lib\idlelib\rpc.py", line 251, in asyncreturn return self.decoderesponse(response) File "D:\Python37\lib\idlelib\rpc.py", line 271, in decoderesponse raise what UnicodeEncodeError: 'UCS-2' codec can't encode characters in position 11-11: Non-BMP character not supported in Tk During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 1, in File "D:\Python37\lib\idlelib\run.py", line 162, in main traceback.print_exception(type, value, tb, file=sys.__stderr__) File "D:\Python37\lib\traceback.py", line 105, in print_exception print(line, file=file, end="") File "D:\Python37\lib\idlelib\run.py", line 362, in write return self.shell.write(s, self.tags) File "D:\Python37\lib\idlelib\rpc.py", line 608, in __call__ value = self.sockio.remotecall(self.oid, self.name, args, kwargs) File "D:\Python37\lib\idlelib\rpc.py", line 220, in remotecall return self.asyncreturn(seq) File "D:\Python37\lib\idlelib\rpc.py", line 251, in asyncreturn return self.decoderesponse(response) File "D:\Python37\lib\idlelib\rpc.py", line 271, in decoderesponse raise what UnicodeEncodeError: 'UCS-2' codec can't encode characters in 
position 11-11: Non-BMP character not supported in Tk =============================== RESTART: Shell =============================== >>> I presume the error is caused by Tk being unable to display the characters in the error message, but being forced to anyway by the traceback, causing a series of UnicodeEncodeErrors. Perhaps the error handler should use repr() or similar methods to convert the message into a displayable form? ---------- assignee: terry.reedy components: IDLE messages: 340662 nosy: TheMathsGod, terry.reedy priority: normal severity: normal status: open title: Shell restart when error message contains non-BMP characters type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 14:24:51 2019 From: report at bugs.python.org (=?utf-8?q?Andreas_K=2E_H=C3=BCttel?=) Date: Mon, 22 Apr 2019 18:24:51 +0000 Subject: [New-bugs-announce] [issue36699] building for riscv multilib (patch attached) Message-ID: <1555957491.02.0.328369650892.issue36699@roundup.psfhosted.org> New submission from Andreas K. H?ttel : Hi. I have been trying to install Python on a (well prototype of a) risc-v multilib Gentoo system, where the system library directory is /usr/lib64/lp64d (!). See as reference for the directories https://www.sifive.com/blog/all-aboard-part-5-risc-v-multilib Python as is builds and installs fine but the results are pretty much unuseable. Symptoms are "/usr/lib64/lib64/lp64d/python3.6/site-packages" directory name and distutils installs unable to find Python.h (it is correctly installed in /usr/include/..., but distutils passes /usr/lib64/include as include path). I've tracked this down to bad values in sys.base_prefix and sys.exec_prefix: >>> sys.base_prefix '/usr/lib/python-exec/python3.6/../../../lib64' Even if I set PYTHONHOME=/usr , I get '/usr/lib64' The fix, specific for this directory layout, is to have one more directory component stripped in Modules/getpath.c , see patch below. With this I have been able to install Python with a normal-looking directory layout, and distutils things install fine. Posting this here so it gets your attention, and hoping that you're better in coming up with a general solution than I am... probably the number of components stripped should depend on the number of slashes in the library path, e.g., "lib" versus "lib64/lp64d" diff -ruN Python-3.6.8.orig/Modules/getpath.c Python-3.6.8/Modules/getpath.c --- Python-3.6.8.orig/Modules/getpath.c 2018-12-23 22:37:14.000000000 +0100 +++ Python-3.6.8/Modules/getpath.c 2019-04-21 01:05:35.127440301 +0200 @@ -796,6 +796,7 @@ if (pfound > 0) { reduce(prefix); reduce(prefix); + reduce(prefix); /* The prefix is the root directory, but reduce() chopped * off the "/". */ if (!prefix[0]) @@ -808,6 +809,7 @@ reduce(exec_prefix); reduce(exec_prefix); reduce(exec_prefix); + reduce(exec_prefix); if (!exec_prefix[0]) wcscpy(exec_prefix, separator); } Thanks. ---------- components: Build messages: 340667 nosy: Andreas K. 
Hüttel priority: normal severity: normal status: open title: building for riscv multilib (patch attached) type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 22 14:33:12 2019 From: report at bugs.python.org (Paul Hoffman) Date: Mon, 22 Apr 2019 18:33:12 +0000 Subject: [New-bugs-announce] [issue36700] base64 has old references that should be updated Message-ID: <1555957992.77.0.818777064587.issue36700@roundup.psfhosted.org> New submission from Paul Hoffman : The documentation for the base64 library references an RFC that is obsolete. ---------- assignee: docs at python components: Documentation messages: 340668 nosy: docs at python, paulehoffman priority: normal pull_requests: 12839 severity: normal status: open title: base64 has old references that should be updated _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 01:56:21 2019 From: report at bugs.python.org (Piyush) Date: Tue, 23 Apr 2019 05:56:21 +0000 Subject: [New-bugs-announce] [issue36701] module 'urllib' has no attribute 'request' Message-ID: <1555998981.48.0.859846222459.issue36701@roundup.psfhosted.org> New submission from Piyush : The current way to use one of `urllib.request` APIs is like this: ``` import urllib.request urllib.request.urlretrieve ``` Can we change this to: ``` import urllib urllib.request.urlretrieve ``` This will require adding 1 line at https://github.com/python/cpython/blob/master/Lib/urllib/__init__.py This is required because help on `urllib` says that `request` is part of `urllib`, suggesting that `urllib.request` should be available if I `import urllib`. Moreover `import urllib.request` is not at all intuitive. I can submit a PR if others think what I'm proposing makes sense. ---------- files: Screen Shot 2019-04-23 at 11.22.48 AM.png messages: 340690 nosy: piyush-kgp priority: normal severity: normal status: open title: module 'urllib' has no attribute 'request' versions: Python 3.6 Added file: https://bugs.python.org/file48283/Screen Shot 2019-04-23 at 11.22.48 AM.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 02:48:53 2019 From: report at bugs.python.org (sayno996) Date: Tue, 23 Apr 2019 06:48:53 +0000 Subject: [New-bugs-announce] [issue36702] test_dtrace failed Message-ID: <1556002133.06.0.696794109378.issue36702@roundup.psfhosted.org> New submission from sayno996 : I installed Python 3.7.3 on CentOS 7.6.
However, when I run "make test", I got a failure on test_dtrace as: Ran 8 tests in 4.752s FAILED (failures=6, skipped=2) test test_dtrace failed test_dtrace failed == Tests result: FAILURE == 1 test failed: test_dtrace Total duration: 4 sec 771 ms Tests result: FAILURE ---------- components: Build files: test.log messages: 340692 nosy: sayno996 priority: normal severity: normal status: open title: test_dtrace failed type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file48284/test.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 04:00:42 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 23 Apr 2019 08:00:42 +0000 Subject: [New-bugs-announce] [issue36703] [Easy][Windows] test_subprocess: test_close_fds_with_stdio() has a race condition Message-ID: <1556006442.08.0.426230409523.issue36703@roundup.psfhosted.org> New submission from STINNER Victor : test_subprocess: test_close_fds_with_stdio() pass when run alone, but fail when run in parallel. I tagged the issue as "easy" for new contributors to Python. If someone is interested to work on this issue, please contact me in private. https://buildbot.python.org/all/#/builders/3/builds/2446 I can reproduce the issue. The test pass when run alone: > python -m test test_subprocess -m test_close_fds_with_stdio -v Running Debug|x64 interpreter... == CPython 3.7.3+ (heads/3.7:9344d74f7b, Apr 23 2019, 09:53:41) [MSC v.1915 64 bit (AMD64)] == Windows-10-10.0.17763-SP0 little-endian == cwd: C:\vstinner\python\3.7\build\test_python_6116 == CPU count: 2 == encodings: locale=cp1252, FS=utf-8 Run tests sequentially 0:00:00 [1/1] test_subprocess test_close_fds_with_stdio (test.test_subprocess.Win32ProcessTestCase) ... ok ---------------------------------------------------------------------- Ran 1 test in 0.302s OK == Tests result: SUCCESS == 1 test OK. Total duration: 391 ms Tests result: SUCCESS But the test fails when run in parallel: > python -m test test_subprocess -m test_close_fds_with_stdio -F -j4 Running Debug|x64 interpreter... Run tests in parallel using 4 child processes 0:00:01 [ 1/1] test_subprocess failed test test_subprocess failed -- Traceback (most recent call last): File "C:\vstinner\python\3.7\lib\test\test_subprocess.py", line 2930, in test_close_fds_with_stdio self.assertEqual(p.returncode, 1) AssertionError: 0 != 1 0:00:01 [ 2/1] test_subprocess passed 0:00:01 [ 3/2] test_subprocess failed test test_subprocess failed -- Traceback (most recent call last): File "C:\vstinner\python\3.7\lib\test\test_subprocess.py", line 2930, in test_close_fds_with_stdio self.assertEqual(p.returncode, 1) AssertionError: 0 != 1 0:00:01 [ 4/3] test_subprocess failed test test_subprocess failed -- Traceback (most recent call last): File "C:\vstinner\python\3.7\lib\test\test_subprocess.py", line 2942, in test_close_fds_with_stdio self.assertEqual(p.returncode, 1) AssertionError: 0 != 1 0:00:02 [ 5/3] test_subprocess passed 0:00:02 [ 6/3] test_subprocess passed == Tests result: FAILURE == 3 tests OK. 
3 tests failed: test_subprocess test_subprocess test_subprocess Total duration: 2 sec 313 ms Tests result: FAILURE ---------- keywords: easy messages: 340698 nosy: vstinner priority: normal severity: normal status: open title: [Easy][Windows] test_subprocess: test_close_fds_with_stdio() has a race condition _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 07:38:37 2019 From: report at bugs.python.org (Alan Jenkins) Date: Tue, 23 Apr 2019 11:38:37 +0000 Subject: [New-bugs-announce] [issue36704] logging.FileHandler currently hardcodes errors='strict' Message-ID: <1556019517.33.0.109747791974.issue36704@roundup.psfhosted.org> New submission from Alan Jenkins : ## Problem description ``` import os import logging logging.basicConfig(filename='example.log',level=logging.DEBUG) for name in os.listdir(): logging.error('Hypothetical syntax error at line 1 of file: {}'.format(name)) ``` The above program is incorrect.[*] Because it is liable to raise UnicodeEncodeError. This can be reproduced on a standard UTF-8 locale by creating a filename which is not valid UTF-8, e.g. `touch $(printf '\xff')`. Plenty of arguments have been written on this topic, but this is not my problem. The user can report the program error, and it should not be too hard to debug. And there is a fairly straightforward workaround, following Python's IOError: use repr() when outputing filenames. But there is another issue with the above. The docs advise that most programs deployed to production, will want to set `logging.raiseExceptions = false`. My motivating example is the Linux package manager dnf, which followed this advice. Specifically when they wanted to avoid UnicodeEncodeError. [**][***] Link: https://bugzilla.redhat.com/show_bug.cgi?id=1303293#c17 I think UnicodeEncodeError is an important case to handle, but the logging module does not handle it in a good way. When we have `logging.raiseExceptions = false`, the problem messages will be lost. Those messages could be critical to troubleshooting the user's problem. It is even possible that all messages are lost - I think this would be very frustrating to troubleshoot. ## Alternative solutions which have been considered * All debugging problems could of course be avoided, by simply writing correct programs in the first place. The existence of debuggers suggests this is not a practical answer, even for Python :-). * FileHandler can be worked around fairly simply, using StreamHandler instead. However if you wanted to use RotatingFileHandler, there is no (documented) interface that would let you work around it. SyslogHandler also seems important enough to be worth worrying about. ## A possible solution When you set `raiseExceptions = false`, logging.FileHandler and friends should use errors='backslashreplace'. errors='backslashreplace' is already the default for sys.stderr. Matching this seems nice in case the program uses the default stderr logging in some configurations. A log file full of encoding noise will be a specific sign, that can be used in troubleshooting. And in cases similar to my example program, parts of the message could be entirely readable. The end-user might be able to use the log message without noticing the incorrectness at all. This is entirely consistent with the rationale for `logging.raiseExceptions = false`. Previously you could set logging.raiseExceptions *after* you create the logger. 
It will be a bit inconsistent if FileHandler relies on setting the `errors` parameter of `open()`. It seems fairly easy to document this restriction. But if that is not considered acceptable, we would either need some weird system that calls stream.reconfigure(), or muck around like dnf.i18n.UnicodeStream does:

    try:
        stream.write(s)
    except UnicodeEncodeError:
        if logging.raiseExceptions:
            raise
        else:
            stream.flush()  # make sure nothing gets out of order!
            s = s.encode(stream.encoding, 'backslashreplace')
            stream.buffer.write(s)

--- [*] C programs which segfault are said to be incorrect (or there is an error in system software or hardware). I am using similar phrasing for python programs which raise unhandled UnicodeError's. I am not sure if it is a good phrase to use for a Python program, but I hope my overall point is fairly clear. [**] dnf developers did not appear to work on the correctness issue they found. It might be a bug in gettext. [***] In the linked case, *none* of dnf's messages were readable. But I sympathize that dnf is so critical, it seems useful for dnf to try and hobble along. Even in cases like this one. As the user attempts to work towards some desired end state... The user might temporarily ignore the non-fatal problem in dnf, because their current problem seems more important, e.g. trying to install some software needed to recover or back up document files. At best, they might know some diagnostic software that knows how to diagnose they have a locale problem :-). Or more serendipitously, installing certain other software that also suffers a locale problem might help them realize what is happening. ---------- components: Library (Lib) messages: 340717 nosy: sourcejedi priority: normal severity: normal status: open title: logging.FileHandler currently hardcodes errors='strict' type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 08:47:49 2019 From: report at bugs.python.org (Gavin D'souza) Date: Tue, 23 Apr 2019 12:47:49 +0000 Subject: [New-bugs-announce] [issue36705] Unexpected Behaviour of pprint.pprint Message-ID: <1556023669.77.0.896724829798.issue36705@roundup.psfhosted.org> New submission from Gavin D'souza : For a simple string input, pprint would be expected to return an output similar to print.
However, the functionality differs ### Code: import time from pprint import pprint start = time.time() time.sleep(0.5) object_made = time.time() time.sleep(0.5) done = time.time() time.sleep(0.5) shown = time.time() pprint( f"Time to create object: {object_made - start}s\n" + f"Time to insert 100000 rows: {done - object_made}s\n" + f"Time to retrieve 100000 rows: {shown - done}s\n" ) ### Output Received: ('Time to create object: 0.5010814666748047s\n' 'Time to insert 100000 rows: 0.5010972023010254s\n' 'Time to retrieve 100000 rows: 0.501101016998291s\n') ### Expected Output: Time to create object: 0.5010814666748047s Time to insert 100000 rows: 0.5010972023010254s Time to retrieve 100000 rows: 0.501101016998291s ---------- components: Library (Lib) messages: 340720 nosy: Gavin D'souza, fdrake priority: normal severity: normal status: open title: Unexpected Behaviour of pprint.pprint type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 23 09:04:47 2019 From: report at bugs.python.org (serge g) Date: Tue, 23 Apr 2019 13:04:47 +0000 Subject: [New-bugs-announce] [issue36706] Python script on startup stucks at import Message-ID: <1556024687.06.0.0109255005812.issue36706@roundup.psfhosted.org> New submission from serge g : I am not sure if it is python's issue (correct me if this is wrong place for report). But sometimes (probably every 3-4 attempt) when I run script, interpreter just stucks for some time (0.5-3 minutes) and then actually runs the script normally. To figure out what happens I increased verbosity (python -vv) and actually python hangs on import on this line: ... # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/scrypt.abi3.so # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/scrypt.so # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/scrypt.py # /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/__pycache__/scrypt.cpython-37.pyc matches /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/scrypt.py # code object from '/home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/kdf/__pycache__/scrypt.cpython-37.(pyc' (here >>>>>) import 'cryptography.hazmat.primitives.kdf.scrypt' # <_frozen_importlib_external.SourceFileLoader object at 0x7f5c6e3c73c8> import 'cryptography.hazmat.backends.openssl.backend' # <_frozen_importlib_external.SourceFileLoader object at 0x7f5c6c129c88> import 'cryptography.hazmat.backends.openssl' # <_frozen_importlib_external.SourceFileLoader object at 0x7f5c6d30d240> # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/x509/UnsupportedExtension.cpython-37m-x86_64-linux-gnu.so # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/x509/UnsupportedExtension.abi3.so # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/x509/UnsupportedExtension.so # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/x509/UnsupportedExtension.py # trying /home/2kg/spider/venv/lib/python3.7/site-packages/cryptography/x509/UnsupportedExtension.pyc ... I had version 3.5 installed before and didn't notice such issue. Other scripts from my project suffer from this symptom as well. My configuration is: debian 9 python 3.7.3 [GCC 6.3.0 20170516] on linux cryptography module ? 
version 2.6.1
---------- components: Library (Lib) messages: 340722 nosy: serge g priority: normal severity: normal status: open title: Python script on startup stucks at import type: performance versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Apr 23 20:27:25 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 24 Apr 2019 00:27:25 +0000 Subject: [New-bugs-announce] [issue36707] The "m" ABI flag of SOABI for pymalloc is no longer needed Message-ID: <1556065645.57.0.820338293941.issue36707@roundup.psfhosted.org>
New submission from STINNER Victor : In Python 2.7, the WITH_PYMALLOC define (defined by ./configure --with-pymalloc, which is the default) modified the Python API. For example, PyObject_MALLOC() is a macro replaced with PyObject_Malloc() with pymalloc but replaced with PyMem_MALLOC() without pymalloc. In Python 3.8, this is no longer the case: PyObject_MALLOC is no longer modified by WITH_PYMALLOC; it is always an alias to PyObject_Malloc() for backward compatibility:

    #define PyObject_MALLOC PyObject_Malloc

More generally, WITH_PYMALLOC has no impact on the Python headers nor on the ABI in any way. Memory allocators can be switched at runtime using the PYTHONMALLOC environment variable and the -X dev command line option. For example, PYTHONMALLOC=malloc disables pymalloc (forces the usage of malloc() of the C library). I propose the attached PR which removes the "m" flag from SOABI in Python 3.8 when pymalloc is enabled.
---------- messages: 340748 nosy: vstinner priority: normal severity: normal status: open title: The "m" ABI flag of SOABI for pymalloc is no longer needed versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Apr 24 03:50:35 2019 From: report at bugs.python.org (sakamotoaya) Date: Wed, 24 Apr 2019 07:50:35 +0000 Subject: [New-bugs-announce] [issue36708] can not execute the python + version, to launch python under windows. Message-ID: <1556092235.7.0.350859031075.issue36708@roundup.psfhosted.org>
New submission from sakamotoaya : I am sorry if this is an existing problem. To launch Python 3.6, execute the command in the Command Prompt under Windows: py -3.6 -> Success; python3.6 -> Fail. To launch Python 3.6, execute the command in a terminal under Linux: py -3.6 -> Fail; python3.6 -> Success. I would like to unify the command. What are your thoughts on that?
---------- components: Windows messages: 340761 nosy: HiyashiChuka, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: can not execute the python + version, to launch python under windows. type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Apr 24 04:48:27 2019 From: report at bugs.python.org (Tom Christie) Date: Wed, 24 Apr 2019 08:48:27 +0000 Subject: [New-bugs-announce] [issue36709] Asyncio SSL keep-alive connections raise errors after loop close. Message-ID: <1556095707.61.0.53219784779.issue36709@roundup.psfhosted.org>
New submission from Tom Christie : If an asyncio SSL connection is left open (eg. any kind of keep-alive connection) then after closing the event loop, an exception will be raised...
Python:

```
import asyncio
import ssl
import certifi

async def f():
    ssl_context = ssl.create_default_context()
    ssl_context.load_verify_locations(cafile=certifi.where())
    await asyncio.open_connection('example.org', 443, ssl=ssl_context)

loop = asyncio.get_event_loop()
loop.run_until_complete(f())
loop.close()
```

Traceback:

```
$ python example.py
Fatal write error on socket transport
protocol:
transport: <_SelectorSocketTransport fd=8>
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 868, in write
    n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
Fatal error on SSL transport
protocol:
transport: <_SelectorSocketTransport closing fd=8>
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 868, in write
    n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/sslproto.py", line 676, in _process_write_backlog
    self._transport.write(chunk)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 872, in write
    self._fatal_error(exc, 'Fatal write error on socket transport')
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 681, in _fatal_error
    self._force_close(exc)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 693, in _force_close
    self._loop.call_soon(self._call_connection_lost, exc)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 677, in call_soon
    self._check_closed()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 469, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```

It looks to me like the original "OSError: [Errno 9] Bad file descriptor" probably shouldn't be raised in any case - if it occurs when attempting to tear down the SSL connection, then we should probably pass silently in the case that the socket has already been closed uncleanly. Brought to my attention via: https://github.com/encode/httpcore/issues/16
---------- assignee: christian.heimes components: SSL, asyncio messages: 340764 nosy: asvetlov, christian.heimes, tomchristie, yselivanov priority: normal severity: normal status: open title: Asyncio SSL keep-alive connections raise errors after loop close. versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Apr 24 08:58:00 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 24 Apr 2019 12:58:00 +0000 Subject: [New-bugs-announce] [issue36710] Pass _PyRuntimeState as an argument rather than using the _PyRuntime global variable Message-ID: <1556110680.79.0.127639989845.issue36710@roundup.psfhosted.org>
New submission from STINNER Victor : Eric Snow moved global variables into a _PyRuntimeState structure which is made of sub-structures. There is a single instance of _PyRuntimeState: the _PyRuntime global variable. I would like to add "_PyRuntimeState *" parameters to functions to avoid relying directly on the _PyRuntime global variable.
The long term goal is to have "stateless" code: don't rely on global variables, only on input parameters. In practice, we will continue to use thread local storage (TLS) to get the "current context" like the current interpreter and the current Python thread state. ---------- components: Interpreter Core messages: 340772 nosy: vstinner priority: normal severity: normal status: open title: Pass _PyRuntimeState as an argument rather than using the _PyRuntime global variable versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 11:14:24 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Wed, 24 Apr 2019 15:14:24 +0000 Subject: [New-bugs-announce] [issue36711] duplicate method definition in Lib/email/feedparser.py Message-ID: <1556118864.95.0.640018454718.issue36711@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/email/feedparser.py:140 BufferedSubFile.pushlines ---------- components: Library (Lib) messages: 340778 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/email/feedparser.py type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 11:19:07 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Wed, 24 Apr 2019 15:19:07 +0000 Subject: [New-bugs-announce] [issue36713] uplicate method definition in Lib/ctypes/test/test_unicode.py Message-ID: <1556119147.21.0.373187833971.issue36713@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/ctypes/test/test_unicode.py:110 StringTestCase.test_ascii_replace ---------- components: Library (Lib) messages: 340782 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: uplicate method definition in Lib/ctypes/test/test_unicode.py type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 11:17:17 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Wed, 24 Apr 2019 15:17:17 +0000 Subject: [New-bugs-announce] [issue36712] duplicate method definition in Lib/email/test/test_email_renamed.py Message-ID: <1556119037.17.0.0080436933891.issue36712@roundup.psfhosted.org> New submission from Xavier de Gaye : As reported in issue 16079, the following method is a duplicate: Lib/email/test/test_email_renamed.py:521 TestEncoders.test_default_cte ---------- components: Library (Lib) messages: 340781 nosy: xdegaye priority: normal severity: normal stage: needs patch status: open title: duplicate method definition in Lib/email/test/test_email_renamed.py type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 12:22:38 2019 From: report at bugs.python.org (Brian Skinn) Date: Wed, 24 Apr 2019 16:22:38 +0000 Subject: [New-bugs-announce] [issue36714] Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line Message-ID: <1556122958.82.0.669799236777.issue36714@roundup.psfhosted.org> New submission from Brian Skinn : doctest requires code examples have PS1 as ">>> " and PS2 as "... 
" -- that is, each is three printed characters, followed by a space: ``` $ cat ell_err.py import doctest class Foo: """Test docstring. >>>print("This is a test sentence.") ...a test... """ doctest.run_docstring_examples( Foo(), {}, optionflags=doctest.ELLIPSIS, ) $ python3.8 --version Python 3.8.0a3 $ python3.8 ell_err.py Traceback (most recent call last): ... ValueError: line 3 of the docstring for NoName lacks blank after >>>: ' >>>print("This is a test sentence.")' $ cat ell_print.py import doctest class Foo: """Test docstring. >>> print("This is a test sentence.") ...a test... """ doctest.run_docstring_examples( Foo(), {}, optionflags=doctest.ELLIPSIS, ) $ python3.8 ell_print.py Traceback (most recent call last): ... ValueError: line 4 of the docstring for NoName lacks blank after ...: ' ...a test...' ``` AFAICT, this behavior is consistent across 3.4.10, 3.5.7, 3.6.8, 3.7.3, and 3.8.0a3. **However**, in this `ell_print.py` above, that "PS2" line isn't actually meant to be a continuation of the 'source' portion of the example; it's meant to be the *output* (the 'want') of the example, with a leading ellipsis to be matched per `doctest.ELLIPSIS` rules. The regex currently used to look for the 'source' of an example is (https://github.com/python/cpython/blob/4f5a3493b534a95fbb01d593b1ffe320db6b395e/Lib/doctest.py#L583-L586): ``` (?P (?:^(?P [ ]*) >>> .*) # PS1 line (?:\n [ ]* \.\.\. .*)*) # PS2 lines \n? ``` Since this pattern is compiled with re.VERBOSE (https://github.com/python/cpython/blob/4f5a3493b534a95fbb01d593b1ffe320db6b395e/Lib/doctest.py#L592), the space-as-fourth-character in PS1/PS2 is not explicitly matched. I propose changing the regex to: ``` (?P (?:^(?P [ ]*) >>>[ ] .*) # PS1 line (?:\n [ ]* \.\.\.[ ] .*)*) # PS2 lines \n? ``` This will then *explicitly* match the trailing space of PS1; it *shouldn't* break any existing doctests, because the parsing code lower down has already been requiring that space to be present in PS1, as shown for `ell_err.py` above. This will also require an *explicit trailing space* to be present in order for a line starting with three periods to be interpreted as a PS2 line of 'source'; otherwise, it will be treated as part of the 'want'. I made this change in my local user install of 3.8's doctest.py, and it works as I expect on `ell_print.py`, passing the test: ``` $ python3.8 ell_print.py $ $ cat ell_wrongprint.py import doctest class Foo: """Test docstring. >>> print("This is a test sentence.") ...a foo test... """ doctest.run_docstring_examples( Foo(), {}, optionflags=doctest.ELLIPSIS, ) $ python3.8 ell_wrongprint.py ********************************************************************** File "ell_wrongprint.py", line ?, in NoName Failed example: print("This is a test sentence.") Expected: ...a foo test... Got: This is a test sentence. ``` For completeness, the following piece of regex in the 'want' section (https://github.com/python/cpython/blob/4f5a3493b534a95fbb01d593b1ffe320db6b395e/Lib/doctest.py#L589): ``` (?![ ]*>>>) # Not a line starting with PS1 ``` should probably also be changed to: ``` (?![ ]*>>>[ ]) # Not a line starting with PS1 ``` I would be happy to put together a PR for this; I would plan to take a ~TDD style approach, implementing a few tests first and then making the regex change. 
---------- components: Library (Lib) messages: 340788 nosy: bskinn priority: normal severity: normal status: open title: Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 15:49:57 2019 From: report at bugs.python.org (Aditya Sane) Date: Wed, 24 Apr 2019 19:49:57 +0000 Subject: [New-bugs-announce] [issue36715] Dictionary initialization Message-ID: <1556135397.5.0.612221590326.issue36715@roundup.psfhosted.org> New submission from Aditya Sane : When initializing a dictionary with dict.fromkeys, if an object is used as initial value, then the value is taken by reference instead of by value. This results in incorrect behavior since Python doesn't have "by reference" or pointers by default. Attached file shows an example and one work-around. ---------- files: DictionaryBug.py messages: 340805 nosy: Aditya Sane priority: normal severity: normal status: open title: Dictionary initialization type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48286/DictionaryBug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 18:07:39 2019 From: report at bugs.python.org (Eric Cosatto) Date: Wed, 24 Apr 2019 22:07:39 +0000 Subject: [New-bugs-announce] [issue36716] Embedded Python fails to import module files with version_platform extensions Message-ID: <1556143659.79.0.1234879845.issue36716@roundup.psfhosted.org> New submission from Eric Cosatto : I have an application with embedded python. When trying to import numpy, or any other module that was installed by pip, it fails with error 'ModuleNotFoundError("No module named ...")'. Yet, on the command line python, all works fine. The problem is not in the PATH, as only specific files (those with ..pyd extensions, e.g. .cp37-win_amd64.pyd) cannot be found. Example1: numpy import numpy >> ImportError("No module named 'numpy.core._multiarray_umath') The line which fails is: from . import multiarray The file it is trying to load: C:\Program Files\Python37\Lib\site-packages\numpy\core_multiarray_umath.cp37-win_amd64.pyd Example 2: cv2 import cv2 >> ModuleNotFoundError("No module named 'cv2.cv2'") The line which fails is: from .cv2 import * The file it is trying to load: C:\Program Files\Python37\Lib\site-packages\cv2\cv2.cp37-win_amd64.pyd ---------- components: Extension Modules, Windows messages: 340808 nosy: ecosatto, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Embedded Python fails to import module files with version_platform extensions type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 24 20:22:54 2019 From: report at bugs.python.org (Joel Croteau) Date: Thu, 25 Apr 2019 00:22:54 +0000 Subject: [New-bugs-announce] [issue36717] Allow retrieval of return value from the target of a threading.Thread Message-ID: <1556151774.22.0.129640685398.issue36717@roundup.psfhosted.org> New submission from Joel Croteau : It would be nice if, after a threading.Thread has completed its run, it were possible to retrieve the return value of the target function. You can do this currently by setting a variable from your target or by subclassing Thread, but this should really be built in. 
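
Purely as an illustration of the subclassing workaround mentioned above (ReturnThread and retval are made-up names here, not an existing API), something like this already works today:

```
import threading

class ReturnThread(threading.Thread):
    """Thread subclass that remembers its target's return value."""

    def __init__(self, target, args=(), kwargs=None):
        super().__init__()
        self._fn, self._fn_args, self._fn_kwargs = target, args, kwargs or {}
        self.retval = None

    def run(self):
        self.retval = self._fn(*self._fn_args, **self._fn_kwargs)

    def join(self, timeout=None):
        super().join(timeout)
        return self.retval  # still None if the thread has not finished yet

t = ReturnThread(target=pow, args=(2, 10))
t.start()
print(t.join())  # 1024
```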
My suggested changes: * Add an attribute to Thread, retval, initially set to None, that contains the return value of the target after a successful completion. * Thread.run() should set self.retval to the return value of the target upon completion, and also return this value. * Thread.join() should return self.retval after a successful completion. If you're not using Thread.join(), you can directly access Thread.retval to get the return result after a successful run. Thread.run() and Thread.join() both return None in all cases now, so I think a change in their return value would have minimal if any effect on existing code. ---------- components: Library (Lib) messages: 340815 nosy: Joel Croteau2 priority: normal severity: normal status: open title: Allow retrieval of return value from the target of a threading.Thread type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 09:32:03 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 13:32:03 +0000 Subject: [New-bugs-announce] [issue36718] Python 2.7 compilation fails on AMD64 Ubuntu Shared 2.7 buildbot with: relocation R_X86_64_PC32 against symbol ... can not be used ... Message-ID: <1556199123.89.0.154216583131.issue36718@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/139/builds/251 Example: building 'time' extension gcc -pthread -fPIC -fno-strict-aliasing -g -O2 -g -O0 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -I/usr/include/x86_64-linux-gnu -I/usr/local/include -I/srv/buildbot/buildarea/2.7.bolen-ubuntu/build/Include -I/srv/buildbot/buildarea/2.7.bolen-ubuntu/build -c /srv/buildbot/buildarea/2.7.bolen-ubuntu/build/Modules/timemodule.c -o build/temp.linux-x86_64-2.7-pydebug/srv/buildbot/buildarea/2.7.bolen-ubuntu/build/Modules/timemodule.o gcc -pthread -shared build/temp.linux-x86_64-2.7-pydebug/srv/buildbot/buildarea/2.7.bolen-ubuntu/build/Modules/timemodule.o -L/usr/lib/x86_64-linux-gnu -L/usr/local/lib -L. -lm -lpython2.7 -o build/lib.linux-x86_64-2.7-pydebug/time.so /usr/local/lib/libpython2.7.a(posixmodule.o): In function `posix_tmpnam': /srv/buildbot/Python-2.7.16/./Modules/posixmodule.c:7648: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp' /usr/local/lib/libpython2.7.a(posixmodule.o): In function `posix_tempnam': /srv/buildbot/Python-2.7.16/./Modules/posixmodule.c:7595: warning: the use of `tempnam' is dangerous, better use `mkstemp' /usr/bin/ld: /usr/local/lib/libpython2.7.a(ceval.o): relocation R_X86_64_PC32 against symbol `PyFunction_Type' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: Bad value collect2: error: ld returned 1 exit status It seems to be a regression caused by my change, commit f4edd39017a211d4544570a1e2ac2110ef8e51b4, PR 12875 for bpo-28552. ---------- components: Build messages: 340840 nosy: vstinner priority: normal severity: normal status: open title: Python 2.7 compilation fails on AMD64 Ubuntu Shared 2.7 buildbot with: relocation R_X86_64_PC32 against symbol ... can not be used ... 
versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 10:49:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 14:49:40 +0000 Subject: [New-bugs-announce] [issue36719] regrtest --findleaks should fail if an uncollectable object is found Message-ID: <1556203780.64.0.387499693172.issue36719@roundup.psfhosted.org> New submission from STINNER Victor : regrtest (python3 -m test) has a --findleaks option to log warning if the garbage collector finds uncollectable objects. Problem: regrtest only logs a warning, but exit with a success in that case. Attached PR makes regrtest fail in that case and adds an unit test for --findleaks. ---------- components: Tests messages: 340843 nosy: vstinner priority: normal severity: normal status: open title: regrtest --findleaks should fail if an uncollectable object is found versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 10:52:18 2019 From: report at bugs.python.org (Michal Kononenko) Date: Thu, 25 Apr 2019 14:52:18 +0000 Subject: [New-bugs-announce] [issue36720] Correct Should to Must in Definition of object.__len__ Message-ID: <1556203938.29.0.467406560077.issue36720@roundup.psfhosted.org> New submission from Michal Kononenko : The link below defines __len__ https://docs.python.org/3/reference/datamodel.html?highlight=__len__#object.__len__ However, I was reading in the StackOverflow thread below that CPython does some validation to check that the return value of __len__ should be >= 0. Does this mean that len must return a value >= 0, in the RFC 2119 sense of the word? https://stackoverflow.com/questions/42521449/how-does-python-ensure-the-return-value-of-len-is-an-integer-when-len-is-cal ---------- assignee: docs at python components: Documentation messages: 340844 nosy: Michal Kononenko, docs at python priority: normal severity: normal status: open title: Correct Should to Must in Definition of object.__len__ type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 14:25:48 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 18:25:48 +0000 Subject: [New-bugs-announce] [issue36721] Add pkg-config python-3.8-embed Message-ID: <1556216748.62.0.992604554579.issue36721@roundup.psfhosted.org> New submission from STINNER Victor : The bpo-21536 modified how C extensions are built: they are no longer linked to libpython. The problem is that when an application embeds Python: the application wants to be linked to libpython. commit 8c3ecc6bacc8d0cd534f2b5b53ed962dd1368c7b (HEAD -> master, upstream/master) Author: Victor Stinner Date: Thu Apr 25 20:13:10 2019 +0200 bpo-21536: C extensions are no longer linked to libpython (GH-12946) On Unix, C extensions are no longer linked to libpython. It is now possible to load a C extension built using a shared library Python with a statically linked Python. When Python is embedded, libpython must not be loaded with RTLD_LOCAL, but RTLD_GLOBAL instead. Previously, using RTLD_LOCAL, it was already not possible to load C extensions which were not linked to libpython, like C extensions of the standard library built by the "*shared*" section of Modules/Setup. distutils, python-config and python-config.py have been modified. 
I chose to modify distutils, python-config (shell) and python-config.py (Python), but *not* Misc/python.pc (pkg-config). Previously, we had: $ pkg-config python-3.7 --libs -lpython3.7m $ python3.7-config --libs -lpython3.7m -lcrypt -lpthread -ldl -lutil -lm $ python3.7-config.py --libs -lpython3.7m -lcrypt -lpthread -ldl -lutil -lm Now, we get: $ pkg-config python-3.8 --libs -lpython3.8 $ python3.8-config --libs -lcrypt -lpthread -ldl -lutil -lm -lm $ python-config.py --libs -lcrypt -lpthread -ldl -lutil -lm -lm python-config and python-config.py can now be used to build C extensions, but not to build an application embedding Python. pkg-config should not be used to build a C extenstion since it links to libpython, but we don't want to do that (see bpo-21536). I'm not sure that different tools should return different results. I propose: * Add a new command "pkg-config python-3.8-embed" * Add a new "--embed" option to python3.8-config and python3.8-config.py commands * Remove "-lpython at VERSION@@ABIFLAGS@" from "Libs: -L${libdir} -lpython at VERSION@@ABIFLAGS@" of Misc/python.pc.in On Windows, we already provide different binaries for embedded Python with "-embed" suffix: * Download Windows x86-64 embeddable zip file: python-3.7.3-embed-amd64.zip * Download Windows x86-64 executable installer: python-3.7.3-amd64.exe https://www.python.org/downloads/windows/ ---------- components: Build messages: 340853 nosy: doko, vstinner priority: normal severity: normal status: open title: Add pkg-config python-3.8-embed versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 15:00:47 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 19:00:47 +0000 Subject: [New-bugs-announce] [issue36722] In debug build, load also C extensions compiled in release mode or compiled using the stable ABI Message-ID: <1556218847.85.0.311466244133.issue36722@roundup.psfhosted.org> New submission from STINNER Victor : bpo-36465 modified the ABI of debug build so release and debug build now have the same ABI. bpo-21536 modified how C extensions are built: they are no longer linked to libpython. In a debug build, it becomes possible to load a C extension built in release mode: https://bugs.python.org/issue21536#msg340821 But I had to modify SOABI for that. I propose to modify how Python looks for C extensions: look also for dynamic libraries without the "d" SOABI flag and for C extensions built using the stable ABI. 
Release build:

    $ ./python -c 'import _imp; print(_imp.extension_suffixes())'
    ['.cpython-38-x86_64-linux-gnu.so', '.abi3.so', '.so']

Debug build, *WITHOUT* my change:

    $ ./python -c 'import _imp; print(_imp.extension_suffixes())'
    ['.cpython-38d-x86_64-linux-gnu.so', '.so']

Debug build, *WITH* my change:

    $ ./python -c 'import _imp; print(_imp.extension_suffixes())'
    ['.cpython-38d-x86_64-linux-gnu.so', '.cpython-38-x86_64-linux-gnu.so', '.abi3.so', '.so']

---------- components: Build messages: 340856 nosy: vstinner priority: normal severity: normal status: open title: In debug build, load also C extensions compiled in release mode or compiled using the stable ABI versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 15:04:03 2019 From: report at bugs.python.org (Jonny Fuller) Date: Thu, 25 Apr 2019 19:04:03 +0000 Subject: [New-bugs-announce] [issue36723] Unittest Discovery for namespace subpackages dot notation fails Message-ID: <1556219043.29.0.793780129066.issue36723@roundup.psfhosted.org>
New submission from Jonny Fuller : Hi friends, I noticed strange behavior involving unittest discovery with namespace packages. Using dot notation to discover test packages within a namespace package will fail, but it will succeed when using path notation. It feels awkward that the dot path would fail but the normal path would succeed. Is this the desired behavior? I created a demo repo and fully documented this odd behavior here: https://github.com/JonnyWaffles/djangonamespacetestfail.
---------- messages: 340857 nosy: mrwaffles priority: normal severity: normal status: open title: Unittest Discovery for namespace subpackages dot notation fails type: behavior versions: Python 3.6, Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 16:13:49 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 20:13:49 +0000 Subject: [New-bugs-announce] [issue36724] Clear _PyRuntime at exit Message-ID: <1556223229.48.0.664845556761.issue36724@roundup.psfhosted.org>
New submission from STINNER Victor : _PyRuntime.warnings is not cleared at Python exit: 3 objects are kept alive even after Py_Finalize(). See bpo-36356 which cleared some other variables. PR 12453 "bpo-36356: Destroy the GIL at exit" is still open.
---------- components: Interpreter Core messages: 340859 nosy: eric.snow, vstinner priority: normal severity: normal status: open title: Clear _PyRuntime at exit versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 16:28:27 2019 From: report at bugs.python.org (Kay Hayen) Date: Thu, 25 Apr 2019 20:28:27 +0000 Subject: [New-bugs-announce] [issue36725] Reference leak regression with Python3.8a3 Message-ID: <1556224107.78.0.397981137362.issue36725@roundup.psfhosted.org>
New submission from Kay Hayen : Much like #9366 the same file can be used. This reference leaks according to Nuitka comparative testing:

    simpleFunction59: FAILED 129511 129512 leaked 1

    def simpleFunction59():
        a = 3
        b = 5
        try:
            a = a * 2
            return a
        finally:
            return a / b

I would guess that you are leaking the return value when returning again.
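
As a purely illustrative way to observe this outside of Nuitka's harness (an assumption on my part, not part of the original report; sys.gettotalrefcount() only exists in --with-pydebug builds):

```
import sys

def simpleFunction59():
    a = 3
    b = 5
    try:
        a = a * 2
        return a
    finally:
        return a / b

if hasattr(sys, 'gettotalrefcount'):   # only available in pydebug builds
    simpleFunction59()                  # warm up any caches first
    before = sys.gettotalrefcount()
    for _ in range(10000):
        simpleFunction59()
    after = sys.gettotalrefcount()
    # On an affected build the difference grows by roughly one reference
    # per call; on a fixed build it stays near zero.
    print(after - before)
```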
---------- messages: 340861 nosy: kayhayen priority: normal severity: normal status: open title: Reference leak regression with Python3.8a3 type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 17:01:32 2019 From: report at bugs.python.org (Ralph Loader) Date: Thu, 25 Apr 2019 21:01:32 +0000 Subject: [New-bugs-announce] [issue36726] Empty select() on windows gives error. Message-ID: <1556226092.9.0.288691315085.issue36726@roundup.psfhosted.org> New submission from Ralph Loader : The following one liner gives an error on windows but not on linux: ``` import selectors ; selectors.DefaultSelector().select(timeout=1) ``` It appears that the windows select() function with no FDs set gives an error but on Linux it is equivalent to a sleep. The error on windows: ``` Traceback (most recent call last): File "", line 1, in File "C:\Program Files\Python37\lib\selectors.py", line 323, in select r, w, _ = self._select(self._readers, self._writers, [], timeout) File "C:\Program Files\Python37\lib\selectors.py", line 314, in _select r, w, x = select.select(r, w, w, timeout) OSError: [WinError 10022] An invalid argument was supplied ``` ---------- components: Windows messages: 340862 nosy: Ralph Loader, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Empty select() on windows gives error. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 18:47:08 2019 From: report at bugs.python.org (Oliver Too, Eh?) Date: Thu, 25 Apr 2019 22:47:08 +0000 Subject: [New-bugs-announce] [issue36727] python 3.6+ docs use ul tags instead of ol tags Message-ID: <1556232428.79.0.0883162506271.issue36727@roundup.psfhosted.org> New submission from Oliver Too, Eh? : Documentation contents are ordered, and readers familiar with the section numbers/ordering from the Python 2 or Python 3.5 documentation will have an easier time transitioning to Python 3.6 or later if the sections remain numbered. ---------- assignee: docs at python components: Documentation messages: 340866 nosy: Oliver Too, Eh?, docs at python priority: normal severity: normal status: open title: python 3.6+ docs use ul tags instead of ol tags versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 25 19:22:09 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 25 Apr 2019 23:22:09 +0000 Subject: [New-bugs-announce] [issue36728] Remove PyEval_ReInitThreads() from the public C API Message-ID: <1556234529.34.0.747871379917.issue36728@roundup.psfhosted.org> New submission from STINNER Victor : PyEval_ReInitThreads() is used internally by PyOS_AfterFork_Child(). I don't see the point of calling it directly. If you care of threads after fork, just call PyOS_AfterFork_Child(), no? I propose to remove PyEval_ReInitThreads() from the public C API. 
Problem: it's documented as a public function in the High-level API of the Python Initialization API: https://docs.python.org/dev/c-api/init.html#c.PyEval_ReInitThreads On the Internet, I found a single project calling this function directly, but it's only in a test used to test the Python threading API: https://github.com/DataDog/go-python3/blob/master/thread_test.go

    func TestThreadInitialization(t *testing.T) {
        Py_Initialize()
        PyEval_InitThreads()
        assert.True(t, PyEval_ThreadsInitialized())
        PyEval_ReInitThreads()
    }

I don't think that the PyEval_ReInitThreads() call here makes any sense.
---------- components: Interpreter Core messages: 340868 nosy: vstinner priority: normal severity: normal status: open title: Remove PyEval_ReInitThreads() from the public C API versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 20:25:36 2019 From: report at bugs.python.org (Emmanuel Arias) Date: Fri, 26 Apr 2019 00:25:36 +0000 Subject: [New-bugs-announce] [issue36729] Delete unused text variable on tests Message-ID: <1556238336.33.0.148344310049.issue36729@roundup.psfhosted.org>
New submission from Emmanuel Arias : In ```test_custom_non_data_descriptor``` and ```test_custom_data_descriptor``` from Lib/test/test_pydoc.py there is an unused text variable.
---------- components: Tests messages: 340872 nosy: eamanu priority: normal severity: normal status: open title: Delete unused text variable on tests type: performance versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 21:11:32 2019 From: report at bugs.python.org (Sebastian Bassi) Date: Fri, 26 Apr 2019 01:11:32 +0000 Subject: [New-bugs-announce] [issue36730] Change outdated references to macOS Message-ID: <1556241092.16.0.423050564241.issue36730@roundup.psfhosted.org>
New submission from Sebastian Bassi : There are multiple occurrences of "Mac OS X" on the web page, like "Download the latest version for Mac OS X". This OS has been called macOS for some years now. It may be confusing for a new user.
---------- assignee: docs at python components: Documentation messages: 340875 nosy: Sebastian Bassi, docs at python priority: normal severity: normal status: open title: Change outdated references to macOS type: enhancement versions: Python 3.8
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Apr 25 23:01:37 2019 From: report at bugs.python.org (Windson Yang) Date: Fri, 26 Apr 2019 03:01:37 +0000 Subject: [New-bugs-announce] [issue36731] Add example to priority queue Message-ID: <1556247697.17.0.567840188783.issue36731@roundup.psfhosted.org>
New submission from Windson Yang : We don't have a basic example for priority queue in https://docs.python.org/3.8/library/heapq.html#priority-queue-implementation-notes. We can add something like:

> q = Q.PriorityQueue()
> q.put(10)
> q.put(1)
> q.put(5)
> while not q.empty():
>     print(q.get())

We may also need to add a note that PriorityQueue will block when we use max size:

> q = Q.PriorityQueue(1)
> q.put(10)
> q.put(1)  # will block until the Queue is available again.
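
For reference, a runnable Python 3 version of the suggested example might look like the following ("Q" above is assumed to refer to the queue module):

```
import queue

q = queue.PriorityQueue()
for item in (10, 1, 5):
    q.put(item)

while not q.empty():
    print(q.get())          # 1, 5, 10 -- lowest-valued entries come out first

# With maxsize set, put() blocks once the queue is full
# (or raises queue.Full if called with block=False):
bounded = queue.PriorityQueue(maxsize=1)
bounded.put(10)
try:
    bounded.put(1, block=False)
except queue.Full:
    print('queue is full')
```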
---------- assignee: docs at python components: Documentation messages: 340878 nosy: Windson Yang, docs at python priority: normal severity: normal status: open title: Add example to priority queue type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 03:44:21 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 26 Apr 2019 07:44:21 +0000 Subject: [New-bugs-announce] [issue36732] test_asyncio: test_huge_content_recvinto() fails randomly Message-ID: <1556264661.54.0.31283754406.issue36732@roundup.psfhosted.org> New submission from STINNER Victor : Failure on AMD64 Windows7 SP1 3.x: https://buildbot.python.org/all/#/builders/40/builds/2053 ... test_start_unix_server_1 (test.test_asyncio.test_server.SelectorStartServerTests) ... skipped 'no Unix sockets' test_create_connection_sock (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... ok test_huge_content (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... ERROR test_sock_accept (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py:1627: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... ok test_sock_client_ops (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... ok test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ... skipped 'No UNIX Sockets' test_create_connection_sock (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok test_huge_content (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ... 
====================================================================== ERROR: test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.ProactorEventLoopTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\asyncio\windows_events.py", line 474, in finish_recv return ov.getresult() OSError: [WinError 64] The specified network name is no longer available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_asyncio\test_sock_lowlevel.py", line 225, in test_huge_content_recvinto self.loop.run_until_complete( File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\asyncio\base_events.py", line 590, in run_until_complete return future.result() File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_asyncio\test_sock_lowlevel.py", line 211, in _basetest_huge_content_recvinto nbytes = await self.loop.sock_recv_into(sock, buf) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\asyncio\proactor_events.py", line 551, in sock_recv_into return await self._proactor.recv_into(sock, buf) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\asyncio\windows_events.py", line 760, in _poll value = callback(transferred, key, ov) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\asyncio\windows_events.py", line 478, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 64] The specified network name is no longer available ---------- components: Tests, asyncio messages: 340892 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_huge_content_recvinto() fails randomly versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 06:17:42 2019 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Fri, 26 Apr 2019 10:17:42 +0000 Subject: [New-bugs-announce] [issue36733] make regen-all doesn't work in subfolder: No module named Parser.pgen Message-ID: <1556273862.71.0.703011468281.issue36733@roundup.psfhosted.org> New submission from Miro Hron?ok : When I attempt to build CPython from a subfolder (as we do in Fedora) make regen-all dies with: python3.8 -m Parser.pgen ../../Grammar/Grammar \ ../../Grammar/Tokens \ ../../Include/graminit.h.new \ ../../Python/graminit.c.new /usr/bin/python3.8: No module named Parser.pgen To reproduce, run: $ rm -rf build && mkdir -p build/mybuild && (cd build/mybuild && ../../configure && make regen-all) This is probably a regression, as it works in the 3.7 branch. Setting PYTHON_FOR_REGEN="python3" (python3.7 on my system) doesn't make a difference: python3 -m Parser.pgen ../../Grammar/Grammar \ ../../Grammar/Tokens \ ../../Include/graminit.h.new \ ../../Python/graminit.c.new /usr/bin/python3: No module named Parser.pgen build/mybuild/Parser exists but it is empty directory. Setting PYTHON_FOR_REGEN="PYTHOINPATH= python3" workarounds the issue. On branch 3.7 (3076a3e0d1), build/mybuild/Parser is also empty, but the make-regen populates it. I'll bisect. 
---------- components: Build messages: 340902 nosy: hroncok, vstinner priority: normal severity: normal status: open title: make regen-all doesn't work in subfolder: No module named Parser.pgen versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 07:29:08 2019 From: report at bugs.python.org (Michael Osipov) Date: Fri, 26 Apr 2019 11:29:08 +0000 Subject: [New-bugs-announce] [issue36734] Modules/faulthandler.c does not compile on HP-UX due to bpo-35214/1584a0081500d35dc93ff88e5836df35faf3e3e2 Message-ID: <1556278148.43.0.578360230422.issue36734@roundup.psfhosted.org> New submission from Michael Osipov <1983-01-06 at gmx.net>: > /opt/aCC/bin/aCC -Ae -O -I./Include/internal -I. -I./Include -I/opt/ports/include -I/opt/ports/include -DPy_BUILD_CORE_BUILTIN -c ./Modules/faulthandler.c -o Modules/faulthandler.o > "./Modules/faulthandler.c", line 1373: error #2029: expected an expression > stack_t current_stack = {}; The fix is trivial: > stack_t current_stack = {0}; Can also provide a PR for that. ---------- components: Build messages: 340913 nosy: gregory.p.smith, michael-o priority: normal severity: normal status: open title: Modules/faulthandler.c does not compile on HP-UX due to bpo-35214/1584a0081500d35dc93ff88e5836df35faf3e3e2 type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 14:50:06 2019 From: report at bugs.python.org (Michal Gregorczyk) Date: Fri, 26 Apr 2019 18:50:06 +0000 Subject: [New-bugs-announce] [issue36735] minimize disk size of cross-compiled python3.6 Message-ID: <1556304606.39.0.95403682645.issue36735@roundup.psfhosted.org> New submission from Michal Gregorczyk : Hi I am cross-compiling Python3.6 for Android and noticed that the final result is quite large (12mb of python3 binary + over 130mb of files under lib/python3.6). Do you have any suggestions how to reduce that size so that the result is more suitable for devices with constrained disks ? Here are two ideas that come to my mind: 1. not compiling everything to *.pyc files *.pyc files contribute 69mb, I expect that most of the files will not be imported 2. not including lib/python3.6/test Documentation says that the module contains regression tests for Python (https://docs.python.org/3/library/test.html). The directory adds 56MB, can I just remove it ? Should I keep some of the files because remaining parts of standard library refer it ? Does any of these seem unreasonable or fishy ? Are there configure options or make targets that already skip pyc and test ? Are there any other tips how to reduce size? 
Thanks ---------- components: Cross-Build messages: 340937 nosy: Alex.Willmer, michalgr priority: normal severity: normal status: open title: minimize disk size of cross-compiled python3.6 versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 17:57:50 2019 From: report at bugs.python.org (Anand Arumugam) Date: Fri, 26 Apr 2019 21:57:50 +0000 Subject: [New-bugs-announce] [issue36736] Python crashes when calling win32file.LockFileEx Message-ID: <1556315870.81.0.352629640603.issue36736@roundup.psfhosted.org> New submission from Anand Arumugam : Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import win32file >>> import win32con >>> import platform >>> h = win32file.CreateFile(f'd:\\temp\\{platform.node()}.lock', (win32file.GENERIC_READ | win32file.GENERIC_WRITE), 0, None, win32con.CREATE_NEW, 0, None) >>> win32file.LockFileEx(h, win32con.LOCKFILE_EXCLUSIVE_LOCK, 5, 5, None) The moment I hit enter, python command prompt crashes. I'm unable to attach the crash dump file. If you cannot repro the crash, let me know. ---------- assignee: terry.reedy components: IDLE, Windows messages: 340947 nosy: paul.moore, steve.dower, terry.reedy, tim.golden, yapydev, zach.ware priority: normal severity: normal status: open title: Python crashes when calling win32file.LockFileEx type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 18:28:58 2019 From: report at bugs.python.org (Eric Snow) Date: Fri, 26 Apr 2019 22:28:58 +0000 Subject: [New-bugs-announce] [issue36737] Warnings operate out of global runtime state. Message-ID: <1556317738.23.0.687262373674.issue36737@roundup.psfhosted.org> New submission from Eric Snow : (See Include/internal/pycore_warnings.h and Python/_warnings.c.) The warnings module's state (filters, default action, etc.) is currently stored at the level of the global runtime. That's a problem for the following reasons: * Python objects are getting stored in _PyRuntimeState * it breaks the isolation of behavior between interpreters * objects are leaking between interpreters * importing the module in a subinterpreter effectively resets the module's state While those are all a problem in a future where interpreters don't share the GIL, that last one is a problem right now for people using subinterpreters. One of the following should happen: * move warnings state down to PyInterpreterState * move warnings state into PyInterpreterState.dict * use the module-state API (PEP 3121) * just work out of the module's __dict__ I could also see use cases for *also* configuring warnings process-wide but that could be handled separately if actually desired. ---------- components: Interpreter Core messages: 340951 nosy: brett.cannon, eric.snow, steve.dower priority: normal severity: normal stage: needs patch status: open title: Warnings operate out of global runtime state. 
type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 26 21:56:58 2019 From: report at bugs.python.org (matt farrugia) Date: Sat, 27 Apr 2019 01:56:58 +0000 Subject: [New-bugs-announce] [issue36738] Add 'array_hook' for json module Message-ID: <1556330218.68.0.648450414936.issue36738@roundup.psfhosted.org> New submission from matt farrugia : The json module allows a user to provide an `object_hook` function, which, if provided, is called to transform the dict that is created as a result of parsing a JSON Object. It'd be nice if there was something analogous for JSON Arrays: an `array_hook` function to transform the list that is created as a result of parsing a JSON Array. At the moment transforming JSON Arrays requires one of the following approaches (as far as I can see): (1) Providing an object_hook function that will recursively transform any lists in the values of an Object/dict, including any nested lists, AND recursively transforming the final result in the event that the top level JSON object being parsed is an array (this array is never inside a JSON Object that goes through the `object_hook` transformation). (2) Transforming the entire parsed result after parsing is finished by recursively transforming any lists in the final result, including recursively traversing nested lists AND nested dicts. Providing an array_hook would cut out the need for either approach, as the recursive case from the recursive functions I mentioned could be used as the `array_hook` function directly (without the recursion). ## An example of usage: Let's say we want JSON Arrays represented using tuples rather than lists, e.g. so that they are hashable straight out-of-the-(json)-box. Before this enhancement, this change requires one of the two methods I mentioned above. It is not so difficult to implement these recursive functions, but seems inelegant. After the change, `tuple` could be used as the `array_hook` directly: ``` >>> json.loads('{"foo": [[1, 2], "spam", [], ["eggs"]]}', array_hook=tuple) {'foo': ((1, 2), 'spam', (), ('eggs',))} ``` It seems (in my opinion) this is more elegant than converting via an `object_hook` or traversing the whole structure after parsing. ## The patch: I am submitting a patch that adds an `array_hook` kwarg to the `json` module's functions `load` and `loads`, and to the `json.decoder` module's `JSONDecoder`, `JSONArray` and `JSONObject` classes. I also hooked these together in the `json.scanner` module's `py_make_scanner` function. It seems that `json.scanner` will prefer the `c_make_scanner` function defined in `Modules/_json.c` when it is available. I am not confident enough in my C skills or C x Python knowledge to dive into this module and make the analogous changes. But I assume they will be simple for someone who can read C x Python code, and that the changes will be analogous to those required to `Lib/json/scanner.py`. I need help to accomplish this part of the patch. ## Testing: In the mean time, I added a test to `test_json.test_decode`. It's CURRENTLY FAILING because the implementation of the patch is incomplete (I believe this is only due to the missing part of the patch---the required changes to `Modules/_json.c` I identified above). When I manually reset `json.scanner.make_scanner` to `json.scanner.py_make_scanner` and play around with the new `array_hook` functionality, it seems to work. 
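
For context, here is a sketch of the post-parse workaround (approach (2) above) that the proposed array_hook would make unnecessary; tuplify is an illustrative helper, not part of the patch:

```
import json

def tuplify(obj):
    # Recursively convert every parsed JSON array (list) into a tuple,
    # descending into nested dicts and lists.
    if isinstance(obj, list):
        return tuple(tuplify(item) for item in obj)
    if isinstance(obj, dict):
        return {key: tuplify(value) for key, value in obj.items()}
    return obj

data = tuplify(json.loads('{"foo": [[1, 2], "spam", [], ["eggs"]]}'))
print(data)  # {'foo': ((1, 2), 'spam', (), ('eggs',))}
```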
---------- components: Extension Modules, Library (Lib) messages: 340957 nosy: matomatical priority: normal severity: normal status: open title: Add 'array_hook' for json module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 00:24:09 2019 From: report at bugs.python.org (Peter Bauer) Date: Sat, 27 Apr 2019 04:24:09 +0000 Subject: [New-bugs-announce] [issue36739] "4.6. Defining Functions" should mention nonlocal Message-ID: <1556339049.57.0.959218106285.issue36739@roundup.psfhosted.org> New submission from Peter Bauer : In the fourth paragraph, the sentence "Thus, global variables cannot be directly assigned a value within a function (unless named in a global statement)" should somehow be extended to mention the nonlocal-statements: Thus, global variables or variables of enclosing functions cannot be directly assigned a value within a function (unless named in a global statement (for global variables) or named in a nonlocal statement (for variables of enclosing functions) ---------- assignee: docs at python components: Documentation messages: 340963 nosy: docs at python, pbhd0815 priority: normal severity: normal status: open title: "4.6. Defining Functions" should mention nonlocal type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 04:04:15 2019 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 27 Apr 2019 08:04:15 +0000 Subject: [New-bugs-announce] [issue36740] zipimporter misses namespace packages for implicit dirs Message-ID: <1556352255.87.0.293489853138.issue36740@roundup.psfhosted.org> New submission from Jason R. Coombs : As discovered in https://github.com/pypa/packaging-problems/issues/212, if a PEP 420 namespace package is represented by an implicit directory (that is, there's no explicit entry for the directory, only entries for the contents of the directory), that directory won't be picked up as a namespace package. The following code illustrates the issue: ``` zp $ cat make-pkgs.py import zipfile def make_pkgs(): zf = zipfile.ZipFile('simple.zip', 'w') zf.writestr('pkg/__init__.py', b'') zf.close() zf = zipfile.ZipFile('namespace.zip', 'w') zf.writestr('ns/pkg/__init__.py', b'') zf.close() __name__ == '__main__' and make_pkgs() zp $ python make-pkgs.py zp $ env PYTHONPATH=simple.zip python3.7 -c "import pkg" zp $ env PYTHONPATH=namespace.zip python3.7 -c "import ns.pkg" Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'ns' ``` As you can see, in simple.zip, the `pkg` directory is implied, but despite that condition, `pkg` is importable. However, with namespace.zip, the name `ns` is not visible even though it's present in the zipfile and would be importable if that zipfile were extracted to a file system. 
``` zp $ unzip namespace.zip Archive: namespace.zip extracting: ns/pkg/__init__.py zp $ python3.7 -c "import ns.pkg" && echo done done ``` If you were to reconstruct that zip file on the file system using standard tools or explicitly include 'ns/' in the zip entries, the namespace package becomes visible: ``` zp $ rm namespace.zip zp $ zip -r namespace.zip ns adding: ns/ (stored 0%) adding: ns/pkg/ (stored 0%) adding: ns/pkg/__init__.py (stored 0%) adding: ns/pkg/__pycache__/ (stored 0%) adding: ns/pkg/__pycache__/__init__.cpython-37.pyc (deflated 23%) zp $ rm -r ns zp $ env PYTHONPATH=namespace.zip python3.7 -c "import ns.pkg" && echo done done ``` For consistency, the zip import logic should probably honor implicit directories in zip files. ---------- components: Library (Lib) messages: 340975 nosy: jaraco priority: normal severity: normal status: open title: zipimporter misses namespace packages for implicit dirs type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 08:30:16 2019 From: report at bugs.python.org (Chihiro Ito) Date: Sat, 27 Apr 2019 12:30:16 +0000 Subject: [New-bugs-announce] [issue36742] urlsplit doesn't accept a NFKD hostname with a port number Message-ID: <1556368216.92.0.317568776121.issue36742@roundup.psfhosted.org> New submission from Chihiro Ito : urllib.parse.urlsplit raises an exception for an url including a non-ascii hostname in NFKD form and a port number. example: >>> urlsplit('http://\u30d5\u309a:80') Traceback (most recent call last): File "", line 1, in File "/Users/ito/.maltybrew/deen/lib/python3.7/urllib/parse.py", line 437, in urlsplit _checknetloc(netloc) File "/Users/ito/.maltybrew/deen/lib/python3.7/urllib/parse.py", line 407, in _checknetloc "characters under NFKC normalization") ValueError: netloc '?:80' contains invalid characters under NFKC normalization >>> urlsplit('http://\u30d5\u309a') SplitResult(scheme='http', netloc='??', path='', query='', fragment='') >>> urlsplit(unicodedata.normalize('NFKC', 'http://\u30d5\u309a:80')) SplitResult(scheme='http', netloc='?:80', path='', query='', fragment='') I believe this behavior was introduced at Python 3.7.3. Python 3.7.2 doesn't raise any exception for these lines. ---------- components: Unicode messages: 340983 nosy: ezio.melotti, hokousya, vstinner priority: normal severity: normal status: open title: urlsplit doesn't accept a NFKD hostname with a port number versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 08:27:08 2019 From: report at bugs.python.org (=?utf-8?b?5by15pmo6Z+c?=) Date: Sat, 27 Apr 2019 12:27:08 +0000 Subject: [New-bugs-announce] [issue36741] Variable about function and list Message-ID: <1556368028.48.0.990345863587.issue36741@roundup.psfhosted.org> New submission from ??? : Hello,I'm a Taiwanese student. First,I will say sorry because of my poor English.If I have an offense,please forgive me. Then,look at the picture about program.I declare a list "cards" to the function "Flush",and divide them by 13 in the function.The function Flash will return boolean finally. After that,I write "main" which declare a list and call "Flush" for it to show what value it is.However,when I print this list again,its value will be changed. I'm confused that isn't Flush.cards a local variable belong to Flush? Why Flush can change the list's value for the other function without return? 
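
The attached picture is not reproduced here, but a minimal reconstruction of the situation (the flush function below is hypothetical) illustrates what is happening: the parameter is a new local name, yet it refers to the same list object the caller passed in, so in-place changes are visible to the caller unless a copy is passed.

```
def flush(cards):
    # "cards" is a local name, but it refers to the same list object
    # the caller passed in, so these in-place changes are visible outside.
    for i in range(len(cards)):
        cards[i] = cards[i] % 13
    return len(set(cards)) == 1

hand = [1, 14, 27, 40, 53]
flush(hand)
print(hand)          # [1, 1, 1, 1, 1] -- the caller's list was changed

hand = [1, 14, 27, 40, 53]
flush(list(hand))    # pass a copy if the original must stay unchanged
print(hand)          # [1, 14, 27, 40, 53]
```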
I asked my friend and he was confused, too. So I am filing this issue in the hope of finding out why this happens. Thanks. ---------- components: Windows files: ???.png messages: 340982 nosy: paul.moore, steve.dower, tim.golden, zach.ware, ??? priority: normal severity: normal status: open title: Variable about function and list type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file48288/???.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 13:56:43 2019 From: report at bugs.python.org (Jon Dufresne) Date: Sat, 27 Apr 2019 17:56:43 +0000 Subject: [New-bugs-announce] [issue36743] Docs: Descript __get__ signature defined differently across the docs Message-ID: <1556387803.45.0.177981206888.issue36743@roundup.psfhosted.org> New submission from Jon Dufresne : Here: https://docs.python.org/3/reference/datamodel.html#object.__get__ The __get__ signature is defined as: object.__get__(self, instance, owner) But here: https://docs.python.org/3/howto/descriptor.html#descriptor-protocol It is defined as: descr.__get__(self, obj, type=None) It is not clear to me as a reader if all descriptors should have the owner/type argument default to None or if it should be required. If it should default to None, I think all doc examples should follow this expectation to make it clear to someone implementing a descriptor for the first time. As best I can tell, the owner/type is always passed. So perhaps the =None shouldn't be there. Grepping the CPython code, I see lots of definitions for both required and optional, adding more confusion for me. If there is a definitive answer, I'm happy to follow through by updating the docs. ---------- assignee: docs at python components: Documentation messages: 341004 nosy: docs at python, jdufresne priority: normal severity: normal status: open title: Docs: Descript __get__ signature defined differently across the docs versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 27 21:21:36 2019 From: report at bugs.python.org (Kevin) Date: Sun, 28 Apr 2019 01:21:36 +0000 Subject: [New-bugs-announce] [issue36744] functools.singledispatch: Shouldn't require a positional argument if there is only one keyword argument Message-ID: <1556414496.49.0.573446462504.issue36744@roundup.psfhosted.org> New submission from Kevin : Passing a single argument as a keyword argument to a function decorated with @functools.singledispatch results in an error: $ python Python 3.7.2 (default, Feb 12 2019, 08:15:36) [Clang 10.0.0 (clang-1000.11.45.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from functools import singledispatch >>> @singledispatch ... def f(x): ... pass ... >>> f(x=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/lib/python3.7/functools.py", line 821, in wrapper raise TypeError(f'{funcname} requires at least ' TypeError: f requires at least 1 positional argument I think it's reasonable to expect f(x=1) to do the same as f(1) in this case. Since there is only one argument, it should be the one passed to dispatch().
Relevant code:
    def wrapper(*args, **kw):
        if not args:
            raise TypeError(f'{funcname} requires at least '
                            '1 positional argument')

        return dispatch(args[0].__class__)(*args, **kw)
https://github.com/python/cpython/blob/445f1b35ce8461268438c8a6b327ddc764287e05/Lib/functools.py#L819-L824 I think the wrapper method could use something like next(iter(d.values())) instead of args[0] when there are no args, but exactly one keyword argument. I am happy to make the change myself ---------- components: Library (Lib) messages: 341016 nosy: KevinG, rhettinger priority: normal severity: normal status: open title: functools.singledispatch: Shouldn't require a positional argument if there is only one keyword argument type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 28 08:17:00 2019 From: report at bugs.python.org (Zackery Spytz) Date: Sun, 28 Apr 2019 12:17:00 +0000 Subject: [New-bugs-announce] [issue36745] A possible reference leak in PyObject_SetAttr() Message-ID: <1556453820.15.0.458693960323.issue36745@roundup.psfhosted.org> New submission from Zackery Spytz : If the PyUnicode_AsUTF8() call happens to fail in PyObject_SetAttr(), "name" will be leaked. ---------- components: Interpreter Core messages: 341025 nosy: ZackerySpytz priority: normal severity: normal status: open title: A possible reference leak in PyObject_SetAttr() versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 28 14:32:58 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Sun, 28 Apr 2019 18:32:58 +0000 Subject: [New-bugs-announce] [issue36746] Create test for fcntl.lockf() Message-ID: <1556476378.45.0.872717637619.issue36746@roundup.psfhosted.org> New submission from Joannah Nanjekye : We need to implement a test for fcntl.lockf(). ---------- components: Tests messages: 341031 nosy: christian.heimes, nanjekyejoannah priority: normal severity: normal status: open title: Create test for fcntl.lockf() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 28 16:24:38 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sun, 28 Apr 2019 20:24:38 +0000 Subject: [New-bugs-announce] [issue36747] Tools/scripts/setup.py is missing Message-ID: <1556483078.82.0.709276726367.issue36747@roundup.psfhosted.org> New submission from Xavier de Gaye : The 'scriptsinstall' Makefile target runs the $(srcdir)/Tools/scripts/setup.py script that does not exist anymore. It has been removed by changeset d3f467ac7441a100eb26412424c2dd96ec3ceb67 (found after running 'cd Tools/scripts/ && git log --diff-filter=D --summary .').
Its content was then:
from distutils.core import setup

if __name__ == '__main__':
    setup(
        scripts=[
            'byteyears.py',
            'checkpyc.py',
            'copytime.py',
            'crlf.py',
            'dutree.py',
            'ftpmirror.py',
            'h2py.py',
            'lfcr.py',
            '../i18n/pygettext.py',
            'logmerge.py',
            '../../Lib/tabnanny.py',
            '../../Lib/timeit.py',
            'untabify.py',
        ],
    )
---------- components: Build messages: 341035 nosy: xdegaye priority: normal severity: normal status: open title: Tools/scripts/setup.py is missing type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 02:12:38 2019 From: report at bugs.python.org (Inada Naoki) Date: Mon, 29 Apr 2019 06:12:38 +0000 Subject: [New-bugs-announce] [issue36748] Optimize textio write buffering Message-ID: <1556518358.69.0.33229526654.issue36748@roundup.psfhosted.org> New submission from Inada Naoki : textio uses a list for internal write buffering. There are two inefficiencies: * When textio is line buffered and all written strings are lines (which is very common), a list object is allocated and freed for every write. * We convert texts into bytes and call b''.join(list_of_bytes). But when the texts are ASCII and the codec is ASCII-compatible, we can skip the temporary bytes objects. The attached patch is a benchmark for buffered and line-buffered writes. Faster (6): - write_ascii_32k: 101 ns +- 1 ns -> 73.1 ns +- 0.4 ns: 1.39x faster (-28%) - write_ascii_8k: 102 ns +- 1 ns -> 73.4 ns +- 0.4 ns: 1.38x faster (-28%) - write_ascii_linebuffered: 815 ns +- 12 ns -> 731 ns +- 3 ns: 1.12x faster (-10%) - write_unicode_linebuffered: 840 ns +- 11 ns -> 789 ns +- 15 ns: 1.06x faster (-6%) - write_unicode_8k: 124 ns +- 1 ns -> 122 ns +- 1 ns: 1.01x faster (-1%) - write_unicode_32k: 124 ns +- 1 ns -> 122 ns +- 1 ns: 1.01x faster (-1%) ---------- components: IO files: bm_textio.py messages: 341044 nosy: inada.naoki priority: normal severity: normal status: open title: Optimize textio write buffering type: performance versions: Python 3.8 Added file: https://bugs.python.org/file48289/bm_textio.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 03:55:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 29 Apr 2019 07:55:40 +0000 Subject: [New-bugs-announce] [issue36749] PPC64 AIX 3.x: compilation issue, linker fails to locate symbols Message-ID: <1556524540.38.0.725004152071.issue36749@roundup.psfhosted.org> New submission from STINNER Victor : First failure: https://buildbot.python.org/all/#/builders/10/builds/2507 Running the test suite fails with "ModuleNotFoundError: No module named '_socket'". Example of a (dynamic) linker issue: *** WARNING: renaming "_socket" since importing it failed: 0509-130 Symbol resolution failed for build/lib.aix-7.2-3.8-pydebug/_socket.so because: 0509-136 Symbol PyCapsule_New (number 8) is not exported from dependent module python. 0509-136 Symbol PyErr_CheckSignals (number 9) is not exported from dependent module python. 0509-136 Symbol PyErr_Clear (number 10) is not exported from dependent module python. 0509-136 Symbol PyErr_ExceptionMatches (number 11) is not exported from dependent module python. 0509-136 Symbol PyErr_Fetch (number 12) is not exported from dependent module python. 0509-136 Symbol PyErr_Format (number 13) is not exported from dependent module python. 0509-021 Additional errors occurred but are not reported. 0509-192 Examine .loader section symbols with the 'dump -Tv' command.
IMHO it's a regression caused by my commit 8c3ecc6bacc8d0cd534f2b5b53ed962dd1368c7b for bpo-21536. I guess that ---------- components: Build messages: 341049 nosy: vstinner priority: normal severity: normal status: open title: PPC64 AIX 3.x: compilation issue, linker fails to locate symbols versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 04:26:44 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 29 Apr 2019 08:26:44 +0000 Subject: [New-bugs-announce] [issue36750] test_socket failed (env changed) on Azure pipeline Message-ID: <1556526404.64.0.613873773496.issue36750@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : This PR https://github.com/python/cpython/pull/12271 has consistent build failures in test_socket even after merging the master branch. Sample build failure : https://dev.azure.com/Python/cpython/_build/results?buildId=41411 I tried reproducing this and I can't reproduce it in Ubuntu. Mac OS seems to fail with leaked references which I think is unrelated to the above Azure failure. The Mac issue is open (issue35092 reported by Victor closed as duplicate of issue23828) but it was about warning though running with regrtest seems to leak references in addition to warnings. Ubuntu build : karthi at ubuntu-s-1vcpu-1gb-blr1-01:~/cpython$ ./python -m test -R 3:3 test_socket Run tests sequentially 0:00:00 load avg: 0.01 [1/1] test_socket beginning 6 repetitions 123456 ...... test_socket passed in 3 min 7 sec == Tests result: SUCCESS == 1 test OK. Total duration: 3 min 7 sec Tests result: SUCCESS Mac OS build (Mac OS 10.10.4 (14E46)) ? cpython git:(master) ./python.exe -m test -R 3:3 test_socket Run tests sequentially 0:00:00 load avg: 2.00 [1/1] test_socket beginning 6 repetitions 123456 /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) ./Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) ./Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) ./Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) ./Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received 
malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) ./Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2419: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg(bufsize, *args) /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/test/test_socket.py:2510: RuntimeWarning: received malformed or improperly-truncated ancillary data result = sock.recvmsg_into([buf], *args) . test_socket leaked [20, 20, 20] file descriptors, sum=60 test_socket failed in 2 min 31 sec == Tests result: FAILURE == 1 test failed: test_socket Total duration: 2 min 31 sec Tests result: FAILURE ---------- components: Library (Lib) messages: 341052 nosy: giampaolo.rodola, vstinner, xtreak priority: normal severity: normal status: open title: test_socket failed (env changed) on Azure pipeline type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 08:13:17 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 29 Apr 2019 12:13:17 +0000 Subject: [New-bugs-announce] [issue36751] Changes in the inspect module for PEP 570 Message-ID: <1556539997.22.0.720636636419.issue36751@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : This issue is to discuss how to handle the changes in the inspect module for PEP 570. In particular, how to handle: * getfullargspec * formatargspec for positional-only parameters. ---------- components: Interpreter Core messages: 341070 nosy: pablogsal, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Changes in the inspect module for PEP 570 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 09:07:20 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 29 Apr 2019 13:07:20 +0000 Subject: [New-bugs-announce] [issue36752] test multiprocessing: test_rapid_restart() crash on AIX Message-ID: <1556543240.21.0.411629530832.issue36752@roundup.psfhosted.org> New submission from STINNER Victor : POWER6 AIX 3.x: https://buildbot.python.org/all/#/builders/161/builds/1050 ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_forkserver.WithManagerTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in 
Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ====================================================================== ERROR: test_remote (test.test_multiprocessing_forkserver.WithManagerTestRemoteManager) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2835, in test_remote manager2.connect() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 545, in connect conn = Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 508, in Client answer_challenge(c, authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 751, in answer_challenge message = connection.recv_bytes(256) # reject large message File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 216, in recv_bytes buf = self._recv_bytes(maxlength) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 414, in _recv_bytes buf = self._recv(4) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 379, in _recv chunk = read(handle, remaining) ConnectionResetError: [Errno 73] Connection reset by peer ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_forkserver.WithProcessesTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_forkserver.WithThreadsTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = 
manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ---------------------------------------------------------------------- Ran 345 tests in 268.109s FAILED (errors=4, skipped=29) Warning -- files was modified by test_multiprocessing_forkserver Before: [] After: ['core'] test test_multiprocessing_forkserver failed ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_spawn.WithManagerTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ====================================================================== ERROR: test_remote (test.test_multiprocessing_spawn.WithManagerTestRemoteManager) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2835, in test_remote manager2.connect() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 545, in connect conn = Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 508, in Client answer_challenge(c, authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 751, in answer_challenge message = connection.recv_bytes(256) # reject large message File 
"/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 216, in recv_bytes buf = self._recv_bytes(maxlength) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 414, in _recv_bytes buf = self._recv(4) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 379, in _recv chunk = read(handle, remaining) ConnectionResetError: [Errno 73] Connection reset by peer ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_spawn.WithProcessesTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ====================================================================== ERROR: test_rapid_restart (test.test_multiprocessing_spawn.WithThreadsTestManagerRestart) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/test/_test_multiprocessing.py", line 2872, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 737, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/managers.py", line 620, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused ---------------------------------------------------------------------- Ran 345 tests in 632.619s FAILED (errors=4, skipped=32) Warning -- files was modified by test_multiprocessing_spawn Before: [] After: ['core'] test test_multiprocessing_spawn failed ---------- components: Tests messages: 341076 nosy: vstinner priority: normal severity: normal status: open title: test multiprocessing: test_rapid_restart() crash on AIX versions: Python 3.8 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 12:16:00 2019 From: report at bugs.python.org (reimar) Date: Mon, 29 Apr 2019 16:16:00 +0000 Subject: [New-bugs-announce] [issue36753] Python modules not linking to libpython causes issues for RTLD_LOCAL system-wide Message-ID: <1556554560.81.0.844742203447.issue36753@roundup.psfhosted.org> New submission from reimar : Most affected platforms: RedHat and Debian, but with the changes from issue21536 probably all Linux distributions will be affected. issue34814 and issue21536 and https://bugzilla.redhat.com/show_bug.cgi?id=1585201 make statements along the lines of "In short, RTLD_LOCAL is not supported." This might have been considered a reasonable stance because of the specific example opening libpython directly. However Python modules not linking to libpython also breaks things when libpython is loaded in the most indirect ways via dlopen. E.g. dlopen("libA.so", RTLD_LOCAL | RTLD_NOW) libA might have linked against libB, libB against libC and libC might optionally link against libpython. As a developer generally cannot really know if some library might ever pull in a most indirect reference to libpython, not supporting RTLD_LOCAL in Python essentially means RTLD_LOCAL can NEVER EVER be used safely. A test-case that will fail the import command when modules have not been linked against libpython is attached (demonstrating only one layer of indirection, but much more complex cases are of course possible). You will need to adjust the (include, lib) paths in test.sh for your Python version, it was written to demonstrate the issue against RedHat's modifications of Python 2.7 (to my knowledge, RedHat and Debian has been affected by this issue much longer than mainline Python). While dlmopen is an alternative with similar behaviour to RTLD_LOCAL on recent Linux versions for this case, it is not portable. ---------- components: Library (Lib) files: pytest.tar.gz messages: 341094 nosy: reimar priority: normal severity: normal status: open title: Python modules not linking to libpython causes issues for RTLD_LOCAL system-wide type: behavior Added file: https://bugs.python.org/file48291/pytest.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 13:53:25 2019 From: report at bugs.python.org (Steve Dower) Date: Mon, 29 Apr 2019 17:53:25 +0000 Subject: [New-bugs-announce] [issue36754] Remove smart quotes in pydoc text Message-ID: <1556560405.97.0.464374588222.issue36754@roundup.psfhosted.org> New submission from Steve Dower : Not all console configurations can correctly render smart quotes in help() text. See the "?" in "superclass's" below. When building for pydoc-topics, it would be ideal to disable smart quotes. (I'm assuming from issue31793 that this can be done in configuration, though I'm not entirely sure how - it's not clear to me from those PRs) --- >>> help("BASICMETHODS") Basic customization ******************* object.__new__(cls[, ...]) ... Typical implementations create a new instance of the class by invoking the superclass?s "__new__()" method using "super().__new__(cls[, ...])" with appropriate arguments and then modifying the newly-created instance as necessary before returning it. 
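For reference, Sphinx 1.7 and later expose a boolean "smartquotes" option that the pydoc-topics build could switch off; a sketch of what that might look like in Doc/conf.py (the exact option name should be checked against the Sphinx version used to build the docs):
```
# Doc/conf.py (sketch) -- keep plain ASCII quotes in the generated text so
# that help() output renders correctly in any console configuration.
smartquotes = False
```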
---------- assignee: docs at python components: Documentation messages: 341107 nosy: docs at python, steve.dower priority: normal severity: normal stage: needs patch status: open title: Remove smart quotes in pydoc text type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 29 20:45:01 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 30 Apr 2019 00:45:01 +0000 Subject: [New-bugs-announce] [issue36755] [2.7] test_httplib leaked [8, 8, 8] references with OpenSSL 1.1.1 Message-ID: <1556585101.72.0.886455290046.issue36755@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Rawhide Refleaks 2.7 buildbot: https://buildbot.python.org/all/#/builders/190/builds/18 test_httplib leaked [8, 8, 8] references, sum=24 When I run the test on my Fedora 29 ("OpenSSL 1.1.1b FIPS 26 Feb 2019"), I can reproduce leak: $ ./python -m test -R 3:3 -m test.test_httplib.HTTPSTest.test_local_bad_hostname test_httplib ... test_httplib leaked [8, 8, 8] references, sum=24 ... My bet is that the issue is related to OpenSSL 1.1.1 which changes how a TLS connection is terminated. Running the test in verbose mode logs a traceback: $ ./python -m test -v test_httplib ... test_local_bad_hostname (test.test_httplib.HTTPSTest) ... ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 48554) Traceback (most recent call last): File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 293, in _handle_request_noblock self.process_request(request, client_address) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 321, in process_request self.finish_request(request, client_address) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 334, in finish_request self.RequestHandlerClass(request, client_address, self) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 655, in __init__ self.handle() File "/home/vstinner/prog/python/2.7/Lib/BaseHTTPServer.py", line 340, in handle self.handle_one_request() File "/home/vstinner/prog/python/2.7/Lib/BaseHTTPServer.py", line 310, in handle_one_request self.raw_requestline = self.rfile.readline(65537) File "/home/vstinner/prog/python/2.7/Lib/socket.py", line 480, in readline data = self._sock.recv(self._rbufsize) File "/home/vstinner/prog/python/2.7/Lib/ssl.py", line 754, in recv return self.read(buflen) File "/home/vstinner/prog/python/2.7/Lib/ssl.py", line 641, in read v = self._sslobj.read(len) error: [Errno 104] Connection reset by peer ---------------------------------------- server (('127.0.0.1', 44923):44923 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [30/Apr/2019 02:40:01] code 404, message File not found server (('127.0.0.1', 44923):44923 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [30/Apr/2019 02:40:01] "GET /nonexistent HTTP/1.1" 404 - stopping HTTPS server joining HTTPS thread ok Without -v, the test fails with: vstinner at apu$ ./python -m test test_httplib Run tests sequentially 0:00:00 load avg: 0.63 [1/1] test_httplib Traceback (most recent call last): File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 293, in _handle_request_noblock self.process_request(request, client_address) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 321, in process_request self.finish_request(request, client_address) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 334, in finish_request 
self.RequestHandlerClass(request, client_address, self) File "/home/vstinner/prog/python/2.7/Lib/SocketServer.py", line 655, in __init__ self.handle() File "/home/vstinner/prog/python/2.7/Lib/BaseHTTPServer.py", line 340, in handle self.handle_one_request() File "/home/vstinner/prog/python/2.7/Lib/BaseHTTPServer.py", line 310, in handle_one_request self.raw_requestline = self.rfile.readline(65537) File "/home/vstinner/prog/python/2.7/Lib/socket.py", line 480, in readline data = self._sock.recv(self._rbufsize) File "/home/vstinner/prog/python/2.7/Lib/ssl.py", line 754, in recv return self.read(buflen) File "/home/vstinner/prog/python/2.7/Lib/ssl.py", line 641, in read v = self._sslobj.read(len) error: [Errno 104] Connection reset by peer test test_httplib produced unexpected output: ********************************************************************** ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 56044) ---------------------------------------- ********************************************************************** == Tests result: FAILURE == 1 test failed: test_httplib Total duration: 281 ms Tests result: FAILURE My attempt to fix the warning:
diff --git a/Lib/BaseHTTPServer.py b/Lib/BaseHTTPServer.py
index 3df3323a97..8fe29e9d3e 100644
--- a/Lib/BaseHTTPServer.py
+++ b/Lib/BaseHTTPServer.py
@@ -332,6 +332,12 @@ class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
             self.log_error("Request timed out: %r", e)
             self.close_connection = 1
             return
+        except socket.error as exc:
+            # Using ssl and OpenSSL 1.1.1, sometimes readline() can fail
+            # with error(104, 'Connection reset by peer')
+            self.close_connection = 1
+            exc = None
+            return
 
     def handle(self):
         """Handle multiple requests if necessary."""
---------- components: Tests messages: 341127 nosy: vstinner priority: normal severity: normal status: open title: [2.7] test_httplib leaked [8, 8, 8] references with OpenSSL 1.1.1 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 05:09:45 2019 From: report at bugs.python.org (Wolfram Kraus) Date: Tue, 30 Apr 2019 09:09:45 +0000 Subject: [New-bugs-announce] [issue36756] createcommand memory leak Message-ID: <1556615385.08.0.753953745903.issue36756@roundup.psfhosted.org> New submission from Wolfram Kraus : When using tk.createcommand you get a memory leak if you don't explicitly call tk.deletecommand to remove this command. See attached file: __del__ never gets called due to the memory leak, and because of that calling tk.deletecommand inside __del__ has no effect. If you remove the tk.createcommand everything works fine.
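To make the reported pattern reproducible without the attachment, here is a minimal sketch (the class and command names are illustrative and are not taken from the attached tclmem_bug.py):
```
import tkinter as tk

class Callback:
    def __init__(self, root):
        self.root = root
        self.name = "cb_%d" % id(self)
        # The Tcl interpreter now holds a reference to self.on_event (and
        # therefore to self), so __del__ cannot run until the command is
        # deleted again -- this is the leak described above.
        root.tk.createcommand(self.name, self.on_event)

    def on_event(self, *args):
        print("called with", args)

    def close(self):
        # Explicit cleanup; waiting for __del__ to do it never happens.
        self.root.tk.deletecommand(self.name)

root = tk.Tk()
cb = Callback(root)
cb.close()      # without this call the command, and cb, stay alive
root.destroy()
```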
---------- components: Tkinter files: tclmem_bug.py messages: 341140 nosy: WKraus priority: normal severity: normal status: open title: createcommand memory leak type: resource usage versions: Python 2.7, Python 3.7 Added file: https://bugs.python.org/file48292/tclmem_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 05:19:48 2019 From: report at bugs.python.org (=?utf-8?q?C=C3=A9dric_Cabessa?=) Date: Tue, 30 Apr 2019 09:19:48 +0000 Subject: [New-bugs-announce] [issue36757] uuid constructor accept invalid strings (extra dash) Message-ID: <1556615988.91.0.308330120393.issue36757@roundup.psfhosted.org> New submission from Cédric Cabessa : The UUID constructor accepts strings with too many dashes or keywords like urn: / uuid:. For example, this code does not raise:
```
>>> import uuid
>>> uuid.UUID('0be--468urn:urn:urn:urn:54-4bf9-41----------d4-9697-41d735uuid:4fbe85uuid:')
UUID('0be46854-4bf9-41d4-9697-41d7354fbe85')
```
For context, we use a validator based on `uuid.UUID` for an API. Some customers send strings with a UUID followed by an extra `-`; the validator lets them pass, but the SQL connector raises an exception. We work around this in our validator, but the UUID constructor should not accept strings like the one in the example. ---------- components: Library (Lib) messages: 341141 nosy: Cédric Cabessa priority: normal severity: normal status: open title: uuid constructor accept invalid strings (extra dash) versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 07:15:48 2019 From: report at bugs.python.org (Michael Osipov) Date: Tue, 30 Apr 2019 11:15:48 +0000 Subject: [New-bugs-announce] [issue36758] configured libdir not correctly passed to Python executable Message-ID: <1556622948.14.0.900799269401.issue36758@roundup.psfhosted.org> New submission from Michael Osipov <1983-01-06 at gmx.net>: I compile Python from master on HP-UX with aCC: # echo $LDFLAGS $CPPFLAGS -L/opt/ports/lib/hpux32 -I/opt/ports/include UNIX_STD=1998 LDFLAGS="$LDFLAGS -lreadline" CPPFLAGS="-I$PREFIX/include/ncurses $CPPFLAGS" ./configure --prefix=/opt/python \ --libdir=/opt/python/lib/hpux32 --with-system-expat --with-openssl=/opt/openssl Having libs in hpux32 or hpux64 is a convention on this platform. When Python is installed the following happens: $ /opt/python/bin/python3 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Python 3.8.0a3+ (default, Apr 30 2019, 12:09:29) [C] on hp-ux11 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path ['', '/opt/python/lib/python38.zip', '/opt/python/lib/python3.8', '/opt/python/lib/lib-dynload', '/net/home/osipovmi/.local/lib/python3.8/site-packages', '/opt/python/lib/python3.8/site-packages'] I don't see hpux32 anywhere here. Though all shared objects are there. Reconfiguring: # echo $LDFLAGS $CPPFLAGS -L/opt/ports/lib/hpux32 -I/opt/ports/include UNIX_STD=1998 LDFLAGS="$LDFLAGS -lreadline" CPPFLAGS="-I$PREFIX/include/ncurses $CPPFLAGS" ./configure --prefix=/opt/python \ --with-system-expat --with-openssl=/opt/openssl gives me the expected result: $ /opt/python/bin/python3 Python 3.8.0a3+ (default, Apr 30 2019, 12:21:15) [C] on hp-ux11 Type "help", "copyright", "credits" or "license" for more information.
>>> import sys >>> sys.path ['', '/opt/python/lib/python38.zip', '/opt/python/lib/python3.8', '/opt/python/lib/python3.8/lib-dynload', '/net/home/osipovmi/.local/lib/python3.8/site-packages', '/opt/python/lib/python3.8/site-packages'] >>> It pretty much seems like '--libdir' is not correctly passed down to the installation. I can see that at least this is wrong: ./Lib/distutils/tests/test_install.py:59: libdir = os.path.join(destination, "lib", "python") ./Lib/distutils/command/install.py:32: 'purelib': '$base/lib/python$py_version_short/site-packages', ./Lib/distutils/command/install.py:33: 'platlib': '$platbase/lib/python$py_version_short/site-packages', ./Lib/distutils/command/install.py:39: 'purelib': '$base/lib/python', ./Lib/distutils/command/install.py:40: 'platlib': '$base/lib/python', ./Lib/sysconfig.py:23: 'stdlib': '{installed_base}/lib/python{py_version_short}', ./Lib/sysconfig.py:24: 'platstdlib': '{platbase}/lib/python{py_version_short}', ./Lib/sysconfig.py:25: 'purelib': '{base}/lib/python{py_version_short}/site-packages', ./Lib/sysconfig.py:26: 'platlib': '{platbase}/lib/python{py_version_short}/site-packages', ./Lib/sysconfig.py:35: 'stdlib': '{installed_base}/lib/python', ./Lib/sysconfig.py:36: 'platstdlib': '{base}/lib/python', ./Lib/sysconfig.py:37: 'purelib': '{base}/lib/python', ./Lib/sysconfig.py:38: 'platlib': '{base}/lib/python', ./Lib/sysconfig.py:65: 'stdlib': '{userbase}/lib/python{py_version_short}', ./Lib/sysconfig.py:66: 'platstdlib': '{userbase}/lib/python{py_version_short}', ./Lib/sysconfig.py:67: 'purelib': '{userbase}/lib/python{py_version_short}/site-packages', ./Lib/sysconfig.py:68: 'platlib': '{userbase}/lib/python{py_version_short}/site-packages', ./Lib/sysconfig.py:74: 'stdlib': '{userbase}/lib/python', ./Lib/sysconfig.py:75: 'platstdlib': '{userbase}/lib/python', ./Lib/sysconfig.py:76: 'purelib': '{userbase}/lib/python/site-packages', ./Lib/sysconfig.py:77: 'platlib': '{userbase}/lib/python/site-packages', ./configure.ac:4653: LIBPL='$(prefix)'"/lib/python${VERSION}/config-${LDVERSION}" ./Misc/python-config.sh.in:50:LIBDEST=${prefix_real}/lib/python${VERSION} ./Modules/getpath.c:129: wchar_t *lib_python; /* "lib/pythonX.Y" */ ./Modules/getpath.c:131: wchar_t zip_path[MAXPATHLEN+1]; /* ".../lib/pythonXY.zip" */ ./Modules/getpath.c:520: * e.g. /usr/local/lib/python1.5 is reduced to /usr/local. ./Modules/getpath.c:1018: err = joinpath(calculate->zip_path, L"lib/python00.zip", zip_path_len); ./Modules/getpath.c:1147: calculate->lib_python = Py_DecodeLocale("lib/python" VERSION, &len); ./Python/coreconfig.c:103:# define PYTHONHOMEHELP "/lib/pythonX.X" I have changed those files manually by adding '/hpux32' everywhere and ran configure with custom libdir: no avail. 'lib/python' will still be used. If this cannot be changed, a warning should be issued with ./configure that custom libdir will lead to loading issues. 
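The hard-coded 'lib/python' component can also be observed at runtime through sysconfig; a small sketch (the values printed depend entirely on how the interpreter was configured and installed):
```
import sysconfig

# Each of these install-scheme paths contains the fixed "lib" component the
# report points at; the --libdir value passed to configure does not reach them.
for key in ("stdlib", "platstdlib", "purelib", "platlib"):
    print(key, "->", sysconfig.get_path(key))
```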
---------- components: Build, Installation messages: 341148 nosy: michael-o, vstinner priority: normal severity: normal status: open title: configured libdir not correctly passed to Python executable versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 07:36:26 2019 From: report at bugs.python.org (Snidhi Sofpro) Date: Tue, 30 Apr 2019 11:36:26 +0000 Subject: [New-bugs-announce] [issue36759] datetime: astimezone() results in OSError: [Errno 22] Invalid argument Message-ID: <1556624186.01.0.847614978011.issue36759@roundup.psfhosted.org> New submission from Snidhi Sofpro : With: Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32 import datetime; d_Time = datetime.datetime.strptime('03:30 PM', '%I:%M %p'); d_Time = d_Time.astimezone(datetime.timezone.utc); # RESULTS IN OSError: [Errno 22] Invalid argument # WHEREAS the foll. does not have the issue! d_Time = datetime.datetime(year = d_Time.year, month = d_Time.month, day = d_Time.day, hour = d_Time.hour, minute = d_Time.minute, second = d_Time.second, tzinfo = datetime.timezone.utc); print(d_Time); ---------- components: Library (Lib) messages: 341149 nosy: Snidhi priority: normal severity: normal status: open title: datetime: astimezone() results in OSError: [Errno 22] Invalid argument type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 12:38:56 2019 From: report at bugs.python.org (Joe Borg) Date: Tue, 30 Apr 2019 16:38:56 +0000 Subject: [New-bugs-announce] [issue36760] subprocess.run fails with capture_output=True and stderr=STDOUT Message-ID: <1556642336.59.0.253389839782.issue36760@roundup.psfhosted.org> New submission from Joe Borg : Reading from https://docs.python.org/3/library/subprocess.html#subprocess.CompletedProcess """ If you ran the process with stderr=subprocess.STDOUT, stdout and stderr will be combined in this attribute, and stderr will be None. """ But, if you run `run()` with `capture_output=True`, you get the following exception: """ ValueError: stdout and stderr arguments may not be used with capture_output. """ So, it seems impossible to get the combined outputs of stdout and stderr with `run()`. ---------- components: Library (Lib) messages: 341158 nosy: Joe.Borg priority: normal severity: normal status: open title: subprocess.run fails with capture_output=True and stderr=STDOUT type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 14:48:02 2019 From: report at bugs.python.org (wim glenn) Date: Tue, 30 Apr 2019 18:48:02 +0000 Subject: [New-bugs-announce] [issue36761] Extended slice assignment + iterable unpacking Message-ID: <1556650082.38.0.163364239874.issue36761@roundup.psfhosted.org> New submission from wim glenn : Could cases like these be made to work? *Should* cases like these be made to work? L = [0, 1, 2] L[::2], *rest = "abcdef" # ValueError: attempt to assign sequence of size 1 to extended slice of size 2 a, L[::2] = "abc" # ValueError: too many values to unpack (expected 2) The list slice knows exactly how many slots need to be filled, so I can't immediately think of any obvious ambiguity. Maybe there are some implementation complications with supporting e.g. 
generators on the RHS (because RHS must be evaluated before LHS - https://docs.python.org/3/reference/expressions.html#evaluation-order). ---------- components: Interpreter Core messages: 341160 nosy: wim.glenn priority: normal severity: normal status: open title: Extended slice assignment + iterable unpacking type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 18:31:42 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 30 Apr 2019 22:31:42 +0000 Subject: [New-bugs-announce] [issue36762] Teach "import *" to warn when overwriting globals or builtins Message-ID: <1556663502.82.0.194449689186.issue36762@roundup.psfhosted.org> New submission from Raymond Hettinger : One reason we usually suggest that people don't use star imports is that it is too easy to shadow a builtin or overwrite an existing global. Momma Gump always used to say, "import star is like a box of chocolates, you never know what you're going to get". >>> from os import * Warning (from warnings module): File "__main__", line 1 ImportWarning: The 'open' variable in the 'os' module shadows a variable in the 'builtins' module >>> alpha = 2.0 >>> beta = 3.0 >>> gamma = 4.5 >>> delta = 5.5 >>> from math import * >>> from os import * Warning (from warnings module): File "__main__", line 8 ImportWarning: The 'gamma' variable in the 'math' module overwrites an existing variable in the globals. ---------- components: Interpreter Core messages: 341166 nosy: rhettinger priority: normal severity: normal status: open title: Teach "import *" to warn when overwriting globals or builtins type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 30 18:38:59 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 30 Apr 2019 22:38:59 +0000 Subject: [New-bugs-announce] [issue36763] PEP 587: Rework initialization API to prepare second version of the PEP Message-ID: <1556663939.96.0.509924822981.issue36763@roundup.psfhosted.org> New submission from STINNER Victor : I'm working on changes to complete PEP 587, the Python initialization API. ---------- components: Interpreter Core messages: 341167 nosy: vstinner priority: normal severity: normal status: open title: PEP 587: Rework initialization API to prepare second version of the PEP versions: Python 3.8 _______________________________________ Python tracker _______________________________________