From report at bugs.python.org Sat Dec 1 05:40:37 2018 From: report at bugs.python.org (Fabio Zadrozny) Date: Sat, 01 Dec 2018 10:40:37 +0000 Subject: [New-bugs-announce] [issue35370] Provide API to set the tracing function to be used for running threads. Message-ID: <1543660837.51.0.788709270274.issue35370@psf.upfronthosting.co.za> New submission from Fabio Zadrozny : Right now it's hard for debuggers to set the tracing function to be used for running threads. This would be really handy for debuggers when attaching to a running program to debug all threads. -- Note: currently there is a way to achieve that by pausing all the threads then selectively switching to a thread to make it current and setting the tracing function using the C-API (see: https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd_attach_to_process/dll/attach.cpp#L1224), but I believe this is very hacky and not portable to other Python implementations. ---------- components: Interpreter Core messages: 330849 nosy: fabioz priority: normal severity: normal status: open title: Provide API to set the tracing function to be used for running threads. versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 1 06:42:08 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Dec 2018 11:42:08 +0000 Subject: [New-bugs-announce] [issue35371] Fix undefined behavior in os.utime() on Windows Message-ID: <1543664528.12.0.788709270274.issue35371@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The hFile variable is used uninitialized in os.utime() on Windows when an error is raised in arguments parsing. This is an undefined behavior, and can cause a crash. 
---------- assignee: serhiy.storchaka components: Extension Modules messages: 330850 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix undefined behavior in os.utime() on Windows type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 1 12:17:56 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Dec 2018 17:17:56 +0000 Subject: [New-bugs-announce] [issue35372] Code page decoder incorrectly handles input >2GiB Message-ID: <1543684676.55.0.788709270274.issue35372@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : >>> b = b'a'*(2**31-2)+b'\xff'*2 >>> x, y = codecs.code_page_decode(932, b, 'replace', True) >>> len(x) 2 >>> x, y ('aa', 2147483648) ---------- assignee: serhiy.storchaka components: Interpreter Core messages: 330855 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Code page decoder incorrectly handles input >2GiB type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 1 13:03:04 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Dec 2018 18:03:04 +0000 Subject: [New-bugs-announce] [issue35373] PyInit_timezone() must return a value Message-ID: <1543687384.46.0.788709270274.issue35373@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : PyInit_timezone() is declared as returning int, but it contains return statements without value. 
>From compiler output on Windows: timemodule.c ..\Modules\timemodule.c(1584): warning C4033: 'PyInit_timezone' must return a value [C:\py\cpython3.8\PCbuild\pythoncore.vcxproj] ..\Modules\timemodule.c(1589): warning C4033: 'PyInit_timezone' must return a value [C:\py\cpython3.8\PCbuild\pythoncore.vcxproj] ..\Modules\timemodule.c(1593): warning C4033: 'PyInit_timezone' must return a value [C:\py\cpython3.8\PCbuild\pythoncore.vcxproj] c:\py\cpython3.8\modules\timemodule.c(1647): warning C4715: 'PyInit_timezone': not all control paths return a value [C:\py\cpython3.8\PCbuild\pythoncore.vcxproj] ---------- components: Extension Modules messages: 330858 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: PyInit_timezone() must return a value type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 04:42:08 2018 From: report at bugs.python.org (Christian Ullrich) Date: Sun, 02 Dec 2018 09:42:08 +0000 Subject: [New-bugs-announce] [issue35374] Windows doc build does not find autodetected hhc.exe Message-ID: <1543743728.61.0.788709270274.issue35374@psf.upfronthosting.co.za> New submission from Christian Ullrich : If hhc.exe is on the PATH when HTML Help is being built, the build fails because make.bat does not correctly remember the fact. The set command is very finicky with trailing spaces, leading to this: '"hhc "' is not recognized as an internal or external command, operable program or batch file. I suppose the "official" build does not rely on autodetection. In that case, a better fix would be to require setting HTMLHELP explicitly. PR (for the single-character fix) incoming. 
---------- assignee: docs at python components: Documentation messages: 330872 nosy: chrullrich, docs at python priority: normal severity: normal status: open title: Windows doc build does not find autodetected hhc.exe type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 05:11:52 2018 From: report at bugs.python.org (Sriram Krishna) Date: Sun, 02 Dec 2018 10:11:52 +0000 Subject: [New-bugs-announce] [issue35375] name shadowing while a module tries to import another Message-ID: <1543745512.2.0.788709270274.issue35375@psf.upfronthosting.co.za> New submission from Sriram Krishna : Suppose I have a file profile.py in the same directory as the file I am running (say test.py). Let the contents of the files be: profile.py: raise Exception test.py: import cProfile Now if I run test.py: $ python test.py Traceback (most recent call last): File "test.py", line 1, in <module> import cProfile File "/usr/lib/python3.7/cProfile.py", line 10, in <module> import profile as _pyprofile File "/home/username/profile.py", line 1, in <module> raise Exception Exception The file profile.py in '/usr/lib/python3.7' should have been loaded. This would also happen if test.py imported a module or package which imported cProfile. The only possible way of avoiding this problem completely is by ensuring that the names of one's Python files don't match a standard library module or the name of any installed package. A Python user can't be expected to know the name of every possible file in the Python standard library. Maybe the current working directory should be removed from sys.path when importing from within another module not in the same directory. 
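The shadowing can be reproduced in-process without touching the working directory; a minimal sketch, where a temporary directory stands in for the script's own directory at the front of sys.path (the file contents and message are illustrative, not from the report):

```python
import os
import sys
import tempfile

# Put a file named profile.py at the front of sys.path, the position the
# script's directory occupies when you run "python test.py".
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "profile.py"), "w") as f:
        f.write("raise Exception('shadowed stdlib profile')\n")

    sys.modules.pop("profile", None)  # force a fresh import
    sys.path.insert(0, tmp)
    try:
        import profile  # finds tmp/profile.py, not Lib/profile.py
        error = None
    except Exception as exc:
        error = exc
    finally:
        sys.path.remove(tmp)
        sys.modules.pop("profile", None)

print(error)  # the exception raised by the shadowing file, not an ImportError
```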
---------- components: Interpreter Core messages: 330874 nosy: ksriram priority: normal severity: normal status: open title: name shadowing while a module tries to import another type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 10:00:55 2018 From: report at bugs.python.org (rdb) Date: Sun, 02 Dec 2018 15:00:55 +0000 Subject: [New-bugs-announce] [issue35376] modulefinder skips nested modules with same name as top-level bad module Message-ID: <1543762855.5.0.788709270274.issue35376@psf.upfronthosting.co.za> New submission from rdb : If modulefinder finds a nested module import (eg. 'import a.b.c') while there is a top-level module with the same name (eg. 'c') that failed to import and got added to the badmodules list, it will skip it entirely without even trying to import it. This has a trivial fix (attached). The right thing to do is clearly to check it by fqname in the badmodules dict since that's also what it expects in other locations. I can make a PR as soon as my CLA gets validated, if that is more convenient. (Which branch should I make the PR against?) 
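The scenario can be sketched as follows; the file layout is hypothetical (a missing top-level module 'c' next to a real nested module a.b.c), and whether a.b.c ends up in finder.modules depends on whether the fix described above is applied:

```python
import modulefinder
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Package a.b containing module c, but no top-level c.
    os.makedirs(os.path.join(tmp, "a", "b"))
    for parts in (("a", "__init__.py"), ("a", "b", "__init__.py"),
                  ("a", "b", "c.py")):
        open(os.path.join(tmp, *parts), "w").close()
    script = os.path.join(tmp, "main.py")
    with open(script, "w") as f:
        f.write("import c\nimport a.b.c\n")  # top-level 'c' does not exist

    finder = modulefinder.ModuleFinder(path=[tmp] + sys.path)
    finder.run_script(script)

bad = sorted(finder.badmodules)
found_nested = "a.b.c" in finder.modules
print(bad)           # 'c' is recorded as a bad module
print(found_nested)  # False on affected versions, True once patched
```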
---------- components: Library (Lib) files: patch.diff keywords: patch messages: 330883 nosy: rdb priority: normal severity: normal status: open title: modulefinder skips nested modules with same name as top-level bad module type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47968/patch.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 10:27:37 2018 From: report at bugs.python.org (devkral) Date: Sun, 02 Dec 2018 15:27:37 +0000 Subject: [New-bugs-announce] [issue35377] urlsplit scheme argument broken Message-ID: <1543764457.53.0.788709270274.issue35377@psf.upfronthosting.co.za> New submission from devkral : The scheme argument of urlsplit/urlparse is completely broken. Here are two examples: urlunsplit(urlsplit("httpbin.org", scheme="https://")) 'https://:httpbin.org' urlunsplit(urlsplit("httpbin.org", scheme="https")) 'https:///httpbin.org' Fix: change urlsplit logic like this: ... url, scheme, _coerce_result = _coerce_args(url, scheme) scheme = scheme.rstrip("://") # this removes :// ... i = url.find('://') # harden against arbitrary : if i > 0: ... elif scheme: netloc, url = _splitnetloc(url, 0) # if scheme is specified, netloc is implied Sorry, too lazy to create a patch from this. Most probably all Python versions are affected, but I checked only 2.7 and 3.7. 
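For reference, the scheme parameter is documented as a default scheme name, not a URL prefix, so it must be given without "://"; and a bare host is parsed as a path because a netloc is only recognized after "//". A short sketch of the usage that does round-trip cleanly:

```python
from urllib.parse import urlsplit, urlunsplit

# A bare host with a default scheme lands in 'path', not 'netloc':
print(urlsplit("httpbin.org", scheme="https"))
# SplitResult(scheme='https', netloc='', path='httpbin.org', ...)

# With the leading "//" present, the host is parsed as the netloc and
# the round trip produces a well-formed absolute URL:
parts = urlsplit("//httpbin.org", scheme="https")
result = urlunsplit(parts)
print(result)  # https://httpbin.org
```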
---------- components: Library (Lib) messages: 330884 nosy: devkral priority: normal severity: normal status: open title: urlsplit scheme argument broken versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 11:43:32 2018 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 02 Dec 2018 16:43:32 +0000 Subject: [New-bugs-announce] [issue35378] multiprocessing.Pool.imaps iterators do not maintain alive the multiprocessing.Pool objects Message-ID: <1543769012.57.0.788709270274.issue35378@psf.upfronthosting.co.za> New submission from Pablo Galindo Salgado : After applying the PRs in issue34172, multiprocessing.Pool.imap hangs on MacOs and Linux. This is a simple reproducer: import multiprocessing def the_test(): print("Begin") for x in multiprocessing.Pool().imap(int, ["4", "3"]): print(x) print("End") the_test() This happens because the IMapIterator does not maintain alive the multiprocessing.Pool object while it is still alive. ---------- components: Library (Lib) messages: 330890 nosy: pablogsal, pitrou, tzickel priority: normal severity: normal status: open title: multiprocessing.Pool.imaps iterators do not maintain alive the multiprocessing.Pool objects versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 13:30:10 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 02 Dec 2018 18:30:10 +0000 Subject: [New-bugs-announce] [issue35379] IDLE's close fails when io.filename set to None Message-ID: <1543775410.03.0.788709270274.issue35379@psf.upfronthosting.co.za> New submission from Raymond Hettinger : I'm not sure that sequence of events that causes this, but more than once I've gotten the following traceback. 
Exception in Tkinter callback Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/tkinter/__init__.py", line 1705, in __call__ return self.func(*args) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/idlelib/multicall.py", line 176, in handler r = l[i](event) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/idlelib/filelist.py", line 54, in close_all_callback reply = edit.close() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/idlelib/editor.py", line 1017, in close self._close() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/idlelib/pyshell.py", line 309, in _close EditorWindow._close(self) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/idlelib/editor.py", line 1021, in _close if self.io.filename: AttributeError: 'NoneType' object has no attribute 'filename' ---------- assignee: terry.reedy components: IDLE messages: 330894 nosy: rhettinger, terry.reedy priority: normal severity: normal status: open title: IDLE's close fails when io.filename set to None versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 2 18:04:50 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Sun, 02 Dec 2018 23:04:50 +0000 Subject: [New-bugs-announce] [issue35380] Enable TCP_NODELAY for proactor event loop Message-ID: <1543791890.63.0.788709270274.issue35380@psf.upfronthosting.co.za> New submission from Andrew Svetlov : We do it for selector based loops already, let's be consistent. I think the feature should be backported to 3.7 too. 
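At the socket level, the proposed change amounts to setting one option on each new TCP transport's socket, as the selector loop already does. A minimal sketch of that option:

```python
import socket

# Enable TCP_NODELAY (disable Nagle's algorithm) on a TCP socket; this
# is what the event loop would apply to each new transport.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
print(bool(nodelay))  # True: small writes are sent without coalescing delay
```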
---------- components: asyncio messages: 330903 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Enable TCP_NODELAY for proactor event loop versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 01:33:59 2018 From: report at bugs.python.org (Eddie Elizondo) Date: Mon, 03 Dec 2018 06:33:59 +0000 Subject: [New-bugs-announce] [issue35381] Heap-allocated Posixmodule Message-ID: <1543818839.39.0.788709270274.issue35381@psf.upfronthosting.co.za> New submission from Eddie Elizondo : After bpo34784, there are still two more cases of statically allocated types (DirEntryType & ScandirIteratorType). These should also be heap allocated to make posixmodule fully compatible with PEP384. ---------- components: Library (Lib) messages: 330906 nosy: eelizondo priority: normal severity: normal status: open title: Heap-allocated Posixmodule versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 03:27:45 2018 From: report at bugs.python.org (Leung) Date: Mon, 03 Dec 2018 08:27:45 +0000 Subject: [New-bugs-announce] [issue35382] Something wrong with pymysql Message-ID: <1543825665.91.0.788709270274.issue35382@psf.upfronthosting.co.za> New submission from Leung : when i use like that userinfo = dbsession2.query(func.count(radcheck1.username)).\ outerjoin(radcheck2,radcheck1.username==radcheck2.username).\ filter(radcheck1.admin_user==g.user.name,or_(radcheck1.username.like('im_%'),and_(radcheck1.attribute=='Cleartext-Password',radcheck2.attribute=='Expiration'))). 
Python 3.6 is OK, but Python 3.7 and 3.7.1 show me: name 'byte2int' is not defined in pymysql/_auth.py ---------- messages: 330910 nosy: leung priority: normal severity: normal status: open title: Something wrong with pymysql type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 05:20:07 2018 From: report at bugs.python.org (Niklas Rosenstein) Date: Mon, 03 Dec 2018 10:20:07 +0000 Subject: [New-bugs-announce] [issue35383] lib2to3 raises ParseError on argument called "print" Message-ID: <1543832407.47.0.788709270274.issue35383@psf.upfronthosting.co.za> New submission from Niklas Rosenstein : On Python 3.7.0 lib2to3 will not parse code like this: def foo(print=None): pass and yields the following error instead: lib2to3.pgen2.parse.ParseError: bad input: type=1, value='print', context=('', (1, 8)) ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 330926 nosy: n_rosenstein priority: normal severity: normal status: open title: lib2to3 raises ParseError on argument called "print" type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 05:52:28 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 03 Dec 2018 10:52:28 +0000 Subject: [New-bugs-announce] [issue35384] The repr of ctypes.CArgObject fails for non-ascii character Message-ID: <1543834348.49.0.788709270274.issue35384@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The repr of the ctypes.CArgObject instance will fail when the value is a non-ascii character. The code is: sprintf(buffer, "<cparam '%c' (%c)>", self->tag, self->value.c); ... return PyUnicode_FromString(buffer); If self->value.c is out of range 0-127, buffer will contain a string not decodable with UTF-8. There is a similar problem with non-ascii self->tag. 
The following PR is purposed to fix this, but I don't know how to test it. Current tests only create CArgObject instances with tag='P' (in byref()). ---------- components: Extension Modules, ctypes messages: 330931 nosy: amaury.forgeotdarc, belopolsky, meador.inge, serhiy.storchaka priority: normal severity: normal status: open title: The repr of ctypes.CArgObject fails for non-ascii character type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 05:56:29 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Dec 2018 10:56:29 +0000 Subject: [New-bugs-announce] [issue35385] time module: why not using tzname from the glibc? Message-ID: <1543834589.7.0.788709270274.issue35385@psf.upfronthosting.co.za> New submission from STINNER Victor : Currently, the time module uses tm_zone and tm_gmtoff of struct tm with localtime_r() to get the timezone (name and offset) on my Fedora 29. But it seems like glibc provides "tzname", "daylight" and "timezone" variables. Why not using them? Python also provides "altzone", I'm not sure how to get this value from the glibc. See also the documentation: https://docs.python.org/dev/library/time.html#timezone-constants It's not a bug, it's more a question :-) It seems like the configure script always undefine HAVE_DECL_TZNAME: --- /* Define to 1 if you have the declaration of `tzname', and to 0 if you don't. */ #undef HAVE_DECL_TZNAME --- Example of C program: --- #include #include #include int main() { putenv("TZ=UTC"); tzset(); printf("tzname = {%s, %s}\n", tzname[0], tzname[1]); exit(EXIT_SUCCESS); } --- Output on Fedora 29, glibc 2.28: --- tzname = {UTC, UTC} --- Note: tzname is not always defined by : --- /* Defined in localtime.c. */ extern char *__tzname[2]; /* Current timezone names. */ extern int __daylight; /* If daylight-saving time is ever in use. 
*/ extern long int __timezone; /* Seconds west of UTC. */ #ifdef __USE_POSIX /* Same as above. */ extern char *tzname[2]; /* Set time conversion information from the TZ environment variable. If TZ is not defined, a locale-dependent default is used. */ extern void tzset (void) __THROW; #endif #if defined __USE_MISC || defined __USE_XOPEN extern int daylight; extern long int timezone; #endif --- configure should try to tzname is available no? For HAVE_WORKING_TZSET, configure contains a C program which uses HAVE_TZNAME. Extract: --- #if HAVE_TZNAME extern char *tzname[]; #endif ... putenv("TZ=UTC+0"); tzset(); if (localtime(&groundhogday)->tm_hour != 0) exit(1); #if HAVE_TZNAME /* For UTC, tzname[1] is sometimes "", sometimes " " */ if (strcmp(tzname[0], "UTC") || (tzname[1][0] != 0 && tzname[1][0] != ' ')) exit(1); #endif --- I don't understand the test on the TZ=UTC+0 timezone: I get tzname[0]="UTC" and tzname[1]="UTC" which fails the test... In Python 2.7, there is: --- /* This code moved from inittime wholesale to allow calling it from time_tzset. In the future, some parts of it can be moved back (for platforms that don't HAVE_WORKING_TZSET, when we know what they are), and the extraneous calls to tzset(3) should be removed. I haven't done this yet, as I don't want to change this code as little as possible when introducing the time.tzset and time.tzsetwall methods. This should simply be a method of doing the following once, at the top of this function and removing the call to tzset() from time_tzset(): #ifdef HAVE_TZSET tzset() #endif And I'm lazy and hate C so nyer. */ #if defined(HAVE_TZNAME) && !defined(__GLIBC__) && !defined(__CYGWIN__) --- The glibc is explicitly excluded from platforms which support "tzname". The "!defined(__GLIBC__)" test is quite old... commit ea424e19f152638260c91d5fd6a805a288c931d2 Author: Guido van Rossum Date: Fri Apr 23 20:59:05 1999 +0000 Apparently __GNU_LIBRARY__ is defined for glibc as well as for libc5. 
The test really wanted to distinguish between the two. So now we test for __GLIBC__ instead. I have confirmed that this works for glibc and I have an email from Christian Tanzer confirming that it works for libc5, so it should be fine. ---------- components: Library (Lib) messages: 330932 nosy: vstinner priority: normal severity: normal status: open title: time module: why not using tzname from the glibc? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 07:39:52 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Dec 2018 12:39:52 +0000 Subject: [New-bugs-announce] [issue35386] ftp://www.pythontest.net/ returns error 500 Message-ID: <1543840792.75.0.788709270274.issue35386@psf.upfronthosting.co.za> New submission from STINNER Victor : The FTP server running at www.pythontest.net returns randomly errors with the code 500. Example: $ lftp www.pythontest.net lftp www.pythontest.net:~> ls -r--r--r-- 1 33 33 123 Jun 06 04:15 README lftp www.pythontest.net:/> get README ? README ? ? 0 (0%) [500 OOPS: vsf_sysutil_bind] You can try in a brower: ftp://www.pythontest.net/README Firefox popup: "500 OOPS: vsf_sysutil_bind". 
---------- components: Tests messages: 330941 nosy: vstinner priority: normal severity: normal status: open title: ftp://www.pythontest.net/ returns error 500 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 08:59:42 2018 From: report at bugs.python.org (Kevin Walzer) Date: Mon, 03 Dec 2018 13:59:42 +0000 Subject: [New-bugs-announce] [issue35387] Dialogs on IDLE are accompanied by a small black window Message-ID: <1543845582.88.0.788709270274.issue35387@psf.upfronthosting.co.za> New submission from Kevin Walzer : The "About IDLE" and "Preferences" dialogs on IDLE are accompanied by a small black window titled "idle" when IDLE is run agains the tip of Tk 8.6 on macOS 10.14. This is likely owing to the multiple changes in Tk to accommodate the Mac's API changes on Mojave. I suspect the dialog's [wm transient] implementation is part of the issue; the parent windows for the dialog are not hidden when run against the Tk tip, and thus they have this ugly display. Hopefully the fix is not too complicated. 
---------- assignee: terry.reedy components: IDLE messages: 330945 nosy: terry.reedy, wordtech priority: normal severity: normal status: open title: Dialogs on IDLE are accompanied by a small black window versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 10:13:10 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Dec 2018 15:13:10 +0000 Subject: [New-bugs-announce] [issue35388] _PyRuntime_Initialize() called after Py_Finalize() does nothing Message-ID: <1543849990.49.0.788709270274.issue35388@psf.upfronthosting.co.za> New submission from STINNER Victor : When Python is embedded, it should be possible to call the following Python function multiple times: void func(void) { Py_Initialize(); /* do something in Python */ Py_Finalize(); } Py_Finalize() ends by calling _PyRuntime_Finalize(). Problem: when Py_Initialize() is called the second time, _PyRuntime_Initialize() does nothing: _PyInitError _PyRuntime_Initialize(void) { /* XXX We only initialize once in the process, which aligns with the static initialization of the former globals now found in _PyRuntime. However, _PyRuntime *should* be initialized with every Py_Initialize() call, but doing so breaks the runtime. This is because the runtime state is not properly finalized currently. */ static int initialized = 0; if (initialized) { return _Py_INIT_OK(); } initialized = 1; return _PyRuntimeState_Init(&_PyRuntime); } For example, Py_Finalize() clears runtime->interpreters.mutex and runtime->xidregistry.mutex, whereas mutexes are still needed the second time func() is called. There is currently a *workaround*: _PyInitError _PyInterpreterState_Enable(_PyRuntimeState *runtime) { ... if (runtime->interpreters.mutex == NULL) { ... runtime->interpreters.mutex = PyThread_allocate_lock(); ... } ... 
} I would prefer that _PyRuntime_Initialize() calls _PyRuntimeState_Init() each time, and that _PyRuntimeState_Init() does nothing at the following call (except after Py_Finalize?). Note: _PyRuntimeState_Fini() doesn't free runtime->xidregistry.mutex currently. ---------- messages: 330946 nosy: vstinner priority: normal severity: normal status: open title: _PyRuntime_Initialize() called after Py_Finalize() does nothing versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 11:02:13 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Dec 2018 16:02:13 +0000 Subject: [New-bugs-announce] [issue35389] Use gnu_get_libc_version() in platform.libc_ver()? Message-ID: <1543852933.45.0.788709270274.issue35389@psf.upfronthosting.co.za> New submission from STINNER Victor : Currently, platform.libc_ver() opens Python binary file (ex: /usr/bin/python3) and looks for a string like "GLIBC-2.28". Maybe gnu_get_libc_version() should be exposed in Python to get the version of the running glibc version? And use it if available, or fall back on parsing the binary file (as done currenetly) otherwise. Example: $ cat x.c #include #include #include int main(int argc, char *argv[]) { printf("GNU libc version: %s\n", gnu_get_libc_version()); printf("GNU libc release: %s\n", gnu_get_libc_release()); exit(EXIT_SUCCESS); } $ ./x GNU libc version: 2.28 GNU libc release: stable I'm not sure if it's possible that Python is compiled with glibc but run with a different libc implementation? -- Alternative: run a program to get the libc version which *might* be different than the libc version of Python if the libc is upgraded in the meanwhile (unlikely, but is technically possible on a server running for days): $ ldd --version ldd (GNU libc) 2.28 ... $ /lib64/libc.so.6 GNU C Library (GNU libc) stable release version 2.28. ... $ rpm -q glibc glibc-2.28-17.fc29.x86_64 ... etc. 
-- See also discussions on platform.libc_ver() performance: https://github.com/python/cpython/pull/10868 ---------- components: Library (Lib) messages: 330952 nosy: vstinner priority: normal severity: normal status: open title: Use gnu_get_libc_version() in platform.libc_ver()? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 12:11:49 2018 From: report at bugs.python.org (Dan) Date: Mon, 03 Dec 2018 17:11:49 +0000 Subject: [New-bugs-announce] [issue35390] ctypes not possible to pass NULL c_void_p in structure by reference Message-ID: <1543857109.69.0.788709270274.issue35390@psf.upfronthosting.co.za> New submission from Dan : I have a C struct typedef struct Effect { void* ptr; } Effect; where when I allocate the memory, the void* gets initialized to NULL, and pass back a pointer: Effect* get_effect(){ Effect* pEffect = malloc(sizeof(*pEffect)); pEffect->ptr = NULL; return pEffect; } In Python, I need to call the C function to initialize, and then pass a REFERENCE to the pointer to another C function: from ctypes import cdll, Structure, c_int, c_void_p, addressof, pointer, POINTER, c_double, byref clibptr = cdll.LoadLibrary("libpointers.so") class Effect(Structure): _fields_ = [("ptr", POINTER(c_double))] clibptr.get_effect.restype = POINTER(Effect) pEffect = clibptr.get_effect() effect = pEffect.contents clibptr.print_ptraddress(byref(effect.ptr)) But this prints an error, because effect.ptr is None, so byref(None) fails. Below is full working code in the case where ptr is instead a double*, where there is no problem. As far as I can tell, there is no way to pass a c_void_p field by reference, which would be very useful! 
#include #include #define PRINT_MSG_2SX(ARG0, ARG1) printf("From C - [%s] (%d) - [%s]: ARG0: [%s], ARG1: 0x%016llX\n", __FILE__, __LINE__, __FUNCTION__, ARG0, (unsigned long long)ARG1) typedef struct Effect { double* ptr; } Effect; void print_ptraddress(double** ptraddress){ PRINT_MSG_2SX("Address of Pointer:", ptraddress); } Effect* get_effect(){ Effect* pEffect = malloc(sizeof(*pEffect)); pEffect->ptr = NULL; print_ptraddress(&pEffect->ptr); return pEffect; } Python: from ctypes import cdll, Structure, c_int, c_void_p, addressof, pointer, POINTER, c_double, byref clibptr = cdll.LoadLibrary("libpointers.so") class Effect(Structure): _fields_ = [("ptr", POINTER(c_double))] clibptr.get_effect.restype = POINTER(Effect) pEffect = clibptr.get_effect() effect = pEffect.contents clibptr.print_ptraddress(byref(effect.ptr)) gives matching addresses: >From C - [pointers.c] (11) - [print_ptraddress]: ARG0: [Address of Pointer:], ARG1: 0x00007FC2E1AD3770 From C - [pointers.c] (11) - [print_ptraddress]: ARG0: [Address of Pointer:], ARG1: 0x00007FC2E1AD3770 ---------- components: ctypes messages: 330961 nosy: dtamayo priority: normal severity: normal status: open title: ctypes not possible to pass NULL c_void_p in structure by reference type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 14:04:51 2018 From: report at bugs.python.org (Omer Bartal) Date: Mon, 03 Dec 2018 19:04:51 +0000 Subject: [New-bugs-announce] [issue35391] threading.RLock exception handling while waiting Message-ID: <1543863891.16.0.788709270274.issue35391@psf.upfronthosting.co.za> New submission from Omer Bartal : (tested on python 2.7) Created a threading.Condition(threading.RLock()) A piece of code acquires the lock using a with block and waits (for a notify) While wait() is running a KeyboardInterrupt is raised An exception is raised while exiting the lock's with block: File 
"/usr/lib/python2.7/threading.py", line 289, in __exit__ return self.__lock.__exit__(*args) File "/usr/lib/python2.7/threading.py", line 216, in __exit__ self.release() File "/usr/lib/python2.7/threading.py", line 204, in release raise RuntimeError("cannot release un-acquired lock") example code running on the main thread: def waiting(lock): # (the lock was created using Condition(RLock())) with lock: lock.wait(timeout=xxx) # while waiting a keyboard interrupt is raised The issue is caused because threading.RLock doesn't handle the exception correctly: def _acquire_restore(self, count_owner): count, owner = count_owner self.__block.acquire() self.__count = count self.__owner = owner if __debug__: self._note("%s._acquire_restore()", self) The exception is raised after the acquire() returns and the count and owner are not restored. The problem was solved using the following fix (added try, finally): def _acquire_restore(self, count_owner): count, owner = count_owner try: self.__block.acquire() finally: self.__count = count self.__owner = owner if __debug__: self._note("%s._acquire_restore()", self) Looking at the code, this issue exists in python 3.8 as well. 
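The private hooks involved can be exercised directly; a sketch of the normal, exception-free path that Condition.wait() takes around its sleep (hook names as in CPython's threading module):

```python
import threading

# Condition.wait() calls _release_save() before blocking and
# _acquire_restore() afterwards.  The report is about an exception
# arriving inside _acquire_restore(), which leaves the lock's
# owner/count fields unrestored.
lock = threading.RLock()

with lock:
    state = lock._release_save()    # fully release; remember (count, owner)
    # ... Condition.wait() blocks here; a KeyboardInterrupt delivered
    # around this point triggers the reported failure ...
    lock._acquire_restore(state)    # re-acquire; restore (count, owner)

# The with-block exit released the lock normally, so it is free again:
reacquired = lock.acquire(blocking=False)
print(reacquired)  # True
lock.release()
```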
---------- components: Library (Lib) messages: 330972 nosy: Omer Bartal priority: normal severity: normal status: open title: threading.RLock exception handling while waiting type: crash versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 14:07:03 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Mon, 03 Dec 2018 19:07:03 +0000 Subject: [New-bugs-announce] [issue35392] Create asyncio/sockutils.py Message-ID: <1543864023.37.0.788709270274.issue35392@psf.upfronthosting.co.za> New submission from Andrew Svetlov : As discussed with Yuri on https://github.com/python/cpython/pull/10867#discussion_r238395192 Candidate functions to move into the helper module: * _set_nodelay() * _ipaddr_info() * _set_reuseport() ---------- components: asyncio messages: 330973 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Create asyncio/sockutils.py versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 14:39:20 2018 From: report at bugs.python.org (Filip Bengtsson) Date: Mon, 03 Dec 2018 19:39:20 +0000 Subject: [New-bugs-announce] [issue35393] Typo in documentation Message-ID: <1543865960.77.0.788709270274.issue35393@psf.upfronthosting.co.za> New submission from Filip Bengtsson : There are 256 characters in the range 0–255. 
---------- assignee: docs at python components: Documentation messages: 330975 nosy: autom, docs at python priority: normal pull_requests: 10114 severity: normal status: open title: Typo in documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 16:59:26 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Mon, 03 Dec 2018 21:59:26 +0000 Subject: [New-bugs-announce] [issue35394] Add __slots__ = () to asyncio protocols Message-ID: <1543874366.92.0.788709270274.issue35394@psf.upfronthosting.co.za> New submission from Andrew Svetlov : Protocols have no members. Adding empty slots doesn't harm any existing code, but it allows writing a proper protocol implementation with a slots-based class. ---------- components: asyncio messages: 330986 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Add __slots__ = () to asyncio protocols versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 17:30:52 2018 From: report at bugs.python.org (Naglis) Date: Mon, 03 Dec 2018 22:30:52 +0000 Subject: [New-bugs-announce] [issue35395] Typo in asyncio eventloop documentation Message-ID: <1543876252.15.0.788709270274.issue35395@psf.upfronthosting.co.za> New submission from Naglis : loop.add_writer and loop.add_signal_handler have *callback* in their signatures, but in their documentation regarding functools.partial usage the function is referred to as *func*. 
---------- assignee: docs at python components: Documentation messages: 330989 nosy: docs at python, naglis priority: normal severity: normal status: open title: Typo in asyncio eventloop documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 17:43:24 2018 From: report at bugs.python.org (=?utf-8?q?Andr=C3=A9s_Delfino?=) Date: Mon, 03 Dec 2018 22:43:24 +0000 Subject: [New-bugs-announce] [issue35396] Add support for __fspath__ to fnmatch.fnmatchcase and filter Message-ID: <1543877004.47.0.788709270274.issue35396@psf.upfronthosting.co.za> New submission from Andrés Delfino : Both fnmatch.fnmatchcase and fnmatch.filter (in Unix) do not support path-like objects. This is inconvenient, for example, when taking advantage of os.scandir and working with os.DirEntry objects. Also, fnmatch.filter on Windows does support path-like objects, since it uses os.path.normcase (which works with path-like objects), so the change for Unix would add consistency. I propose for both functions to accept path-like objects, and in the case of fnmatch.filter, to return the path-like object if it matches the pattern (as it does now for Windows). 
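Until such support lands, a small wrapper can normalize path-like objects before matching (a sketch; fnmatchcase_pathlike is a hypothetical helper, not part of the fnmatch module):

```python
import fnmatch
import os
from pathlib import Path

def fnmatchcase_pathlike(name, pattern):
    # os.fspath() accepts str, bytes, and any object implementing
    # __fspath__ (e.g. pathlib.Path, os.DirEntry), so the case-sensitive
    # match works uniformly across path-like types.
    return fnmatch.fnmatchcase(os.fspath(name), pattern)

print(fnmatchcase_pathlike(Path("readme.txt"), "*.txt"))  # True
```

The same trick applies to a filter over os.scandir() results: pass entry.path (or os.fspath(entry)) to the matcher and keep the original entry on a match.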
---------- components: Library (Lib) messages: 330992 nosy: adelfino priority: normal severity: normal status: open title: Add support for __fspath__ to fnmatch.fnmatchcase and filter type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 18:25:50 2018 From: report at bugs.python.org (Steven D'Aprano) Date: Mon, 03 Dec 2018 23:25:50 +0000 Subject: [New-bugs-announce] [issue35397] Undeprecate and document urllib.parse.unwrap Message-ID: <1543879550.52.0.788709270274.issue35397@psf.upfronthosting.co.za> New submission from Steven D'Aprano : The urllib.parse module contains an undocumented function unwrap: unwrap('<URL:type://host/path>') --> 'type://host/path' This is useful. I've been re-inventing this function in many of my scripts, because I didn't know it existed (not documented!) and only stumbled across it by accident today, where I see it was deprecated in #27485, but I can't see any reason for the deprecation. If not for the deprecation, I would certainly use this unwrap function in preference to rolling my own. It seems to me that this might have been a case of an over-enthusiastic change. #27485 talks about deprecating the various split* functions, which are officially redundant (urlparse and urlsplit are preferred), but doesn't talk about unwrap, which is useful and (in my opinion) should have been documented rather than deprecated. 
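The observable behavior of the function under discussion can be checked directly (a usage sketch; the example URL is illustrative):

```python
from urllib.parse import unwrap

# unwrap() strips surrounding angle brackets and an optional "URL:" prefix,
# and leaves an already-bare URL unchanged.
print(unwrap('<URL:http://example.com/path>'))  # http://example.com/path
print(unwrap('http://example.com/path'))        # http://example.com/path
```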
---------- messages: 331003 nosy: cheryl.sabella, serhiy.storchaka, steven.daprano priority: normal severity: normal status: open title: Undeprecate and document urllib.parse.unwrap type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 3 21:27:45 2018 From: report at bugs.python.org (Montana Low) Date: Tue, 04 Dec 2018 02:27:45 +0000 Subject: [New-bugs-announce] [issue35398] SQLite incorrect row count for UPDATE Message-ID: <1543890465.67.0.788709270274.issue35398@psf.upfronthosting.co.za> New submission from Montana Low : SQLite driver returns an incorrect row count (-1) for UPDATE statements that begin with a comment. Downstream Reference: https://github.com/sqlalchemy/sqlalchemy/issues/4396 Test Case:

```
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE foo (
    id INTEGER NOT NULL,
    updated_at DATETIME,
    PRIMARY KEY (id)
)
""")
cursor.execute("""
/* watermarking bug */
INSERT INTO foo (id, updated_at) VALUES (?, ?)
""", [1, None])
cursor.execute("""
UPDATE foo SET updated_at=? WHERE foo.id = ?
""", ('2018-12-02 14:55:57.169785', 1))
assert cursor.rowcount == 1
cursor.execute("""
/* watermarking bug */
UPDATE foo SET updated_at=? WHERE foo.id = ?
""", ('2018-12-03 14:55:57.169785', 1))
assert cursor.rowcount == 1
```

---------- components: Library (Lib) messages: 331006 nosy: Montana Low priority: normal severity: normal status: open title: SQLite incorrect row count for UPDATE versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 00:28:26 2018 From: report at bugs.python.org (Jorge Ramos) Date: Tue, 04 Dec 2018 05:28:26 +0000 Subject: [New-bugs-announce] [issue35399] Sysconfig bug Message-ID: <1543901306.28.0.788709270274.issue35399@psf.upfronthosting.co.za> New submission from Jorge Ramos : As can be seen in the attached file, the sysconfig test fails when profiling (PGO) this utility. This is the very same bug as described in issue #35299 (https://bugs.python.org/issue35299), but in distutils. The problem is that when the test for sysconfig runs, it does not find the file pyconfig.h in the include directory. If this file is manually copied there, the test runs OK. ---------- files: sysconfig_bug.txt messages: 331007 nosy: neyuru priority: normal severity: normal status: open title: Sysconfig bug type: compile error versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47969/sysconfig_bug.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 00:44:02 2018 From: report at bugs.python.org (Jorge Ramos) Date: Tue, 04 Dec 2018 05:44:02 +0000 Subject: [New-bugs-announce] [issue35400] PGOMGR : warning PG0188: Message-ID: <1543902242.92.0.788709270274.issue35400@psf.upfronthosting.co.za> New submission from Jorge Ramos : The following command: Tools\msi\buildrelease.bat -x64 is used to build a 64-bit version (on win_10_64) of Python (using Visual Studio 2017). 
The following modules did not build correctly because, presumably, the corresponding .PGC files could not be found, even when the PGO tests ran perfectly well: _elementtree _multiprocessing _ctypes winsound pyexpat _socket _bz2 _ssl _lzma _hashlib select See details in the attached file (search for the text "PGOMGR"). ---------- components: Build files: missing_pgc_files.txt messages: 331008 nosy: neyuru priority: normal severity: normal status: open title: PGOMGR : warning PG0188: type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file47970/missing_pgc_files.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 02:10:32 2018 From: report at bugs.python.org (Ned Deily) Date: Tue, 04 Dec 2018 07:10:32 +0000 Subject: [New-bugs-announce] [issue35401] Upgrade Windows and macOS installers to use OpenSSL 1.1.0j / 1.0.2q Message-ID: <1543907432.02.0.788709270274.issue35401@psf.upfronthosting.co.za> New submission from Ned Deily : New versions of OpenSSL were released on 2018-11-20. We should update for 3.7.2, 3.6.8, and 2.7.16. ---------- assignee: christian.heimes components: SSL, Windows, macOS messages: 331010 nosy: benjamin.peterson, christian.heimes, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: release blocker severity: normal status: open title: Upgrade Windows and macOS installers to use OpenSSL 1.1.0j / 1.0.2q versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 02:22:40 2018 From: report at bugs.python.org (Ned Deily) Date: Tue, 04 Dec 2018 07:22:40 +0000 Subject: [New-bugs-announce] [issue35402] Upgrade macOS (and Windows?) 
installer to Tcl/Tk 8.6.9.1 Message-ID: <1543908160.05.0.788709270274.issue35402@psf.upfronthosting.co.za> New submission from Ned Deily : Tcl/Tk 8.6.9 (followed by Tk 8.6.9.1) was released recently. Among other things, they contain fixes for various issues on macOS, some of which have been seen by macOS users of IDLE and other tkinter apps, so the macOS installer should definitely be updated for 3.7.2, 3.6.8, and 2.7.16. ---------- components: Windows, macOS messages: 331011 nosy: ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Upgrade macOS (and Windows?) installer to Tcl/Tk 8.6.9.1 versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 03:14:05 2018 From: report at bugs.python.org (pmpp) Date: Tue, 04 Dec 2018 08:14:05 +0000 Subject: [New-bugs-announce] [issue35403] support application/wasm in mimetypes and http.server Message-ID: <1543911244.97.0.788709270274.issue35403@psf.upfronthosting.co.za> New submission from pmpp : Web browsers have recently gained the ability to run WebAssembly code, and for that a new content type has to be added to web servers for optimal use: wasm => Content-Type header: application/wasm The spec says it: https://webassembly.github.io/spec/web-api/index.html#streaming-modules "Firefox streaming compilation needs Content-Type header set" cf: https://groups.google.com/forum/#!topic/emscripten-discuss/C7-i1gqWay4 Google's filament documentation says: "Python's simple server [...] does not serve WebAssembly files with the correct MIME type." Since the simple HTTP server is mostly used for testing software, it would be logical to offer support for this new technology. 
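Until the mapping is registered by default, it can be added per-process through the existing mimetypes API (a usage sketch):

```python
import mimetypes

# Register the .wasm extension so mimetypes.guess_type() resolves it.
mimetypes.add_type("application/wasm", ".wasm")
print(mimetypes.guess_type("module.wasm")[0])  # application/wasm
```

Note that http.server's SimpleHTTPRequestHandler may snapshot the mimetypes table when the handler class is created, so in some versions the type should be registered before importing http.server for it to take effect there.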
---------- messages: 331015 nosy: pmpp priority: normal severity: normal status: open title: support application/wasm in mimetypes and http.server type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 04:01:35 2018 From: report at bugs.python.org (Charles-Axel Dein) Date: Tue, 04 Dec 2018 09:01:35 +0000 Subject: [New-bugs-announce] [issue35404] Document how to import _structure in email.message Message-ID: <1543914095.32.0.788709270274.issue35404@psf.upfronthosting.co.za> New submission from Charles-Axel Dein : The example for `walk()` in `email.message.EmailMessage` makes use of the `_structure` function but does not clarify how to import it. I can make a patch adding a line: from email.iterators import _structure ---------- assignee: docs at python components: Documentation messages: 331018 nosy: charlax, docs at python priority: normal severity: normal status: open title: Document how to import _structure in email.message type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 05:17:13 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Tue, 04 Dec 2018 10:17:13 +0000 Subject: [New-bugs-announce] [issue35405] Wrong traceback for AssertionError while running under pdb Message-ID: <1543918633.67.0.788709270274.issue35405@psf.upfronthosting.co.za> New submission from Karthikeyan Singaravelan : While running under pdb, when an AssertionError occurs after the continue command, the error is reported against the statement at which I executed continue rather than the failing assert. The script below fails on assert 1 == 2, but when I execute continue from assert 1 == 1, the AssertionError is shown with the line assert 1 == 1. 
I first noticed this with unittest, but it seems to be a general issue with assert. This is confusing while debugging unittest failures. I searched for issues but couldn't find a related one, and this exists on master and 2.7. I assume this is a case where AssertionError doesn't use the current line and uses the one where pdb is executed with continue? I don't know if this is an issue with pdb or assert, so I am adding Library as the component.

# Reproducible script

    import pdb; pdb.set_trace();
    assert 1 == 1
    for i in range(10):
        pass
    assert 1 == 2

# Executing on master

    ➜ cpython git:(master) $ ./python.exe /tmp/foo.py
    > /tmp/foo.py(3)<module>()
    -> assert 1 == 1
    (Pdb) c
    Traceback (most recent call last):
      File "/tmp/foo.py", line 3, in <module>
        assert 1 == 1
    AssertionError

---------- components: Library (Lib) messages: 331025 nosy: xtreak priority: normal severity: normal status: open title: Wrong traceback for AssertionError while running under pdb type: behavior versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 05:55:26 2018 From: report at bugs.python.org (=?utf-8?b?xZ5haGlu?=) Date: Tue, 04 Dec 2018 10:55:26 +0000 Subject: [New-bugs-announce] [issue35406] calendar.nextmonth and calendar.prevmonth functions don't check if the month is valid Message-ID: <1543920926.36.0.788709270274.issue35406@psf.upfronthosting.co.za> New submission from Şahin : import calendar calendar.nextmonth(2018, 11) returns (2018, 12), which is OK. calendar.nextmonth(2018, 12) returns (2019, 1), which is also OK. calendar.nextmonth(2018, 13) returns (2018, 14). It would make more sense if this raised calendar.IllegalMonthError. 
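A validating wrapper shows the behavior the calendar report asks for (a sketch; nextmonth_checked is a hypothetical helper whose rollover logic mirrors what calendar.nextmonth does, with a range check added):

```python
import calendar

def nextmonth_checked(year, month):
    # Reject out-of-range months instead of silently returning (year, 14).
    if not 1 <= month <= 12:
        raise calendar.IllegalMonthError(month)
    if month == 12:
        return year + 1, 1
    return year, month + 1

print(nextmonth_checked(2018, 12))  # (2019, 1)
try:
    nextmonth_checked(2018, 13)
except calendar.IllegalMonthError as exc:
    print("rejected:", exc.month)   # rejected: 13
```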
---------- components: Library (Lib) messages: 331031 nosy: asocia priority: normal severity: normal status: open title: calendar.nextmonth and calendar.prevmonth functions don't check if the month is valid _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 07:23:28 2018 From: report at bugs.python.org (Sameer Joshi) Date: Tue, 04 Dec 2018 12:23:28 +0000 Subject: [New-bugs-announce] [issue35407] Datetime function with selenium Message-ID: <1543926208.85.0.788709270274.issue35407@psf.upfronthosting.co.za> New submission from Sameer Joshi : I have defined two variables, one for Friday and another for the rest of the weekdays. However, when I match these two with the website date (which is 'today - 3' for Monday and 'today - 1' otherwise) it shows the error that the variable is not defined. Below is the code:

    import datetime

    d = datetime.date.today()
    if d.weekday() == 0:
        tdelta = datetime.timedelta(days=3)
        friday = d - tdelta
        print(friday)
    elif d.weekday() in range(1, 5):
        tdelta1 = datetime.timedelta(days=1)
        prev_day = d - tdelta1
        print(prev_day)

    data_date = new.date()  # data_date is the date fetched from website

    if data_date == friday:
        print("Data as on", friday, "for Race Horses")
    elif data_date == prev_day:
        print("Data as on", prev_day, "for Race Horses")
    else:
        print("Data update required.")

---------- messages: 331038 nosy: jsameer23 priority: normal severity: normal status: open title: Datetime function with selenium versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 07:35:18 2018 From: report at bugs.python.org (Enric Tejedor Saavedra) Date: Tue, 04 Dec 2018 12:35:18 +0000 Subject: [New-bugs-announce] [issue35408] Python3.7 crash in PyCFunction_New due to broken _PyObject_GC_TRACK Message-ID: <1543926918.35.0.788709270274.issue35408@psf.upfronthosting.co.za> New submission from Enric Tejedor 
Saavedra : Attached is a reproducer that calls PyCFunction_New. The reproducer runs normally with Python 3.6.5, but it crashes with Python 3.7.1. The reason seems to be that the _PyObject_GC_TRACK macro ends up being called and it is broken in Python3.7. A fix for that macro seems to have been committed to master: https://github.com/python/cpython/pull/10507 ---------- components: Interpreter Core files: reproducer.cpp messages: 331040 nosy: etejedor priority: normal severity: normal status: open title: Python3.7 crash in PyCFunction_New due to broken _PyObject_GC_TRACK type: crash versions: Python 3.7 Added file: https://bugs.python.org/file47971/reproducer.cpp _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 08:21:20 2018 From: report at bugs.python.org (Vincent Michel) Date: Tue, 04 Dec 2018 13:21:20 +0000 Subject: [New-bugs-announce] [issue35409] Async generator might re-throw GeneratorExit on aclose() Message-ID: <1543929680.9.0.788709270274.issue35409@psf.upfronthosting.co.za> New submission from Vincent Michel : As far as I can tell, this issue is different than: https://bugs.python.org/issue34730 I noticed `async_gen.aclose()` raises a GeneratorExit exception if the async generator finalization awaits and silence a failing unfinished future (see example.py). This seems to be related to a bug in `async_gen_athrow_throw`. In fact, `async_gen.aclose().throw(exc)` does not silence GeneratorExit exceptions. This behavior can be reproduced without asyncio (see test.py). Attached is a possible patch, although I'm not too comfortable messing with the python C internals. I can make a PR if necessary. 
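For contrast with the report above, the expected aclose() behavior on a well-behaved async generator can be sketched; this does not reproduce the bug, which needs the finally block to await (and silence) a failing future:

```python
import asyncio

async def agen():
    try:
        yield 1
    finally:
        # Synchronous cleanup: aclose() should finish without re-raising
        # GeneratorExit. Awaiting a failing future here is what triggers
        # the reported re-throw.
        pass

async def main():
    gen = agen()
    first = await gen.__anext__()  # advance to the first yield
    await gen.aclose()             # completes normally
    return first

print(asyncio.run(main()))  # 1
```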
---------- components: Interpreter Core files: example.py messages: 331043 nosy: vxgmichel priority: normal severity: normal status: open title: Async generator might re-throw GeneratorExit on aclose() type: behavior versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47972/example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 15:04:54 2018 From: report at bugs.python.org (Eliot Bixby) Date: Tue, 04 Dec 2018 20:04:54 +0000 Subject: [New-bugs-announce] [issue35410] copy.deepcopy does not respect metaclasses with __deepcopy__ implementations Message-ID: <1543953894.66.0.788709270274.issue35410@psf.upfronthosting.co.za> New submission from Eliot Bixby : __deepcopy__ implementations on metaclasses are ignored because deepcopy explicitly ignores class objects. It seems to me that more consistent behavior would be to use a null op as a fallback for class objects that do not have any of the relevant methods implemented (deepcopy, reduce, reduce_ex, etc) I've attached a PR that implements this. ---------- components: Library (Lib) messages: 331073 nosy: elibixby priority: normal pull_requests: 10144 severity: normal status: open title: copy.deepcopy does not respect metaclasses with __deepcopy__ implementations type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 4 17:08:09 2018 From: report at bugs.python.org (STINNER Victor) Date: Tue, 04 Dec 2018 22:08:09 +0000 Subject: [New-bugs-announce] [issue35411] FTP tests of test_urllib2net fail on Travis CI: 425 Security: Bad IP connecting. 
Message-ID: <1543961289.76.0.788709270274.issue35411@psf.upfronthosting.co.za> New submission from STINNER Victor : ERROR: test_ftp (test.test_urllib2net.OtherNetworkTests) (url='ftp://www.pythontest.net/README') ERROR: test_ftp_basic (test.test_urllib2net.TimeoutTest) ERROR: test_ftp_default_timeout (test.test_urllib2net.TimeoutTest) ERROR: test_ftp_no_timeout (test.test_urllib2net.TimeoutTest) ERROR: test_ftp_timeout (test.test_urllib2net.TimeoutTest) It seems like Jython has a similar issue: http://bugs.jython.org/issue2708 Logs from Travis CI job of https://github.com/python/cpython/pull/10898: https://travis-ci.org/python/cpython/jobs/463422586 Re-running test 'test_urllib2net' in verbose mode test_close (test.test_urllib2net.CloseSocketTest) ... ok test_custom_headers (test.test_urllib2net.OtherNetworkTests) ... ok test_file (test.test_urllib2net.OtherNetworkTests) ... ok test_ftp (test.test_urllib2net.OtherNetworkTests) ... test_redirect_url_withfrag (test.test_urllib2net.OtherNetworkTests) ... ok test_sites_no_connection_close (test.test_urllib2net.OtherNetworkTests) ... skipped 'XXX: http://www.imdb.com is gone' test_urlwithfrag (test.test_urllib2net.OtherNetworkTests) ... ok test_ftp_basic (test.test_urllib2net.TimeoutTest) ... ERROR test_ftp_default_timeout (test.test_urllib2net.TimeoutTest) ... /home/travis/build/python/cpython/Lib/urllib/request.py:222: ResourceWarning: unclosed return opener.open(url, data, timeout) /home/travis/build/python/cpython/Lib/urllib/request.py:222: ResourceWarning: unclosed return opener.open(url, data, timeout) /home/travis/build/python/cpython/Lib/urllib/request.py:222: ResourceWarning: unclosed return opener.open(url, data, timeout) ERROR test_ftp_no_timeout (test.test_urllib2net.TimeoutTest) ... ERROR test_ftp_timeout (test.test_urllib2net.TimeoutTest) ... ERROR test_http_basic (test.test_urllib2net.TimeoutTest) ... ok test_http_default_timeout (test.test_urllib2net.TimeoutTest) ... 
ok test_http_no_timeout (test.test_urllib2net.TimeoutTest) ... ok test_http_timeout (test.test_urllib2net.TimeoutTest) ... ok ====================================================================== ERROR: test_ftp (test.test_urllib2net.OtherNetworkTests) (url='ftp://www.pythontest.net/README') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2404, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) ftplib.error_temp: 425 Security: Bad IP connecting. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 219, in _test_urls f = urlopen(url, req, TIMEOUT) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 525, in open response = self._open(req, data) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 543, in _open '_open', req) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1551, in ftp_open raise exc.with_traceback(sys.exc_info()[2]) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2404, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) urllib.error.URLError: ====================================================================== ERROR: test_ftp_basic (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) 
File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) ftplib.error_temp: 425 Security: Bad IP connecting. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 296, in test_ftp_basic u = _urlopen_with_retry(self.FTP_HOST) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 525, in open response = self._open(req, data) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 543, in _open '_open', req) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1551, in ftp_open raise exc.with_traceback(sys.exc_info()[2]) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 
365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) urllib.error.URLError: ====================================================================== ERROR: test_ftp_default_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) ftplib.error_temp: 425 Security: Bad IP connecting. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 305, in test_ftp_default_timeout u = _urlopen_with_retry(self.FTP_HOST) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 525, in open response = self._open(req, data) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 543, in _open '_open', req) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1551, in ftp_open raise exc.with_traceback(sys.exc_info()[2]) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open fp, retrlen = fw.retrfile(file, type) File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile conn, retrlen = self.ftp.ntransfercmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd resp = self.sendcmd(cmd) File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd return self.getresp() File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp raise error_temp(resp) urllib.error.URLError: ====================================================================== ERROR: test_ftp_no_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback 
(most recent call last):
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open
    fp, retrlen = fw.retrfile(file, type)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd
    resp = self.sendcmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd
    return self.getresp()
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp
    raise error_temp(resp)
ftplib.error_temp: 425 Security: Bad IP connecting.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 316, in test_ftp_no_timeout
    u = _urlopen_with_retry(self.FTP_HOST, timeout=None)
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 27, in wrapped
    return _retry_thrice(func, exc, *args, **kwargs)
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice
    raise last_exc
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 19, in _retry_thrice
    return func(*args, **kwargs)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 543, in _open
    '_open', req)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1551, in ftp_open
    raise exc.with_traceback(sys.exc_info()[2])
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open
    fp, retrlen = fw.retrfile(file, type)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd
    resp = self.sendcmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd
    return self.getresp()
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp
    raise error_temp(resp)
urllib.error.URLError:

======================================================================
ERROR: test_ftp_timeout (test.test_urllib2net.TimeoutTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open
    fp, retrlen = fw.retrfile(file, type)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd
    resp = self.sendcmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd
    return self.getresp()
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp
    raise error_temp(resp)
ftplib.error_temp: 425 Security: Bad IP connecting.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 324, in test_ftp_timeout
    u = _urlopen_with_retry(self.FTP_HOST, timeout=60)
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 27, in wrapped
    return _retry_thrice(func, exc, *args, **kwargs)
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice
    raise last_exc
  File "/home/travis/build/python/cpython/Lib/test/test_urllib2net.py", line 19, in _retry_thrice
    return func(*args, **kwargs)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 543, in _open
    '_open', req)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1551, in ftp_open
    raise exc.with_traceback(sys.exc_info()[2])
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 1540, in ftp_open
    fp, retrlen = fw.retrfile(file, type)
  File "/home/travis/build/python/cpython/Lib/urllib/request.py", line 2425, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 365, in ntransfercmd
    resp = self.sendcmd(cmd)
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 273, in sendcmd
    return self.getresp()
  File "/home/travis/build/python/cpython/Lib/ftplib.py", line 244, in getresp
    raise error_temp(resp)
urllib.error.URLError:

----------------------------------------------------------------------
Ran 15 tests in 5.693s

FAILED (errors=5, skipped=1)

/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning:
unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()
/home/travis/build/python/cpython/Lib/test/support/__init__.py:1535: ResourceWarning: unclosed
  gc.collect()

test test_urllib2net failed
1 test failed again:
    test_urllib2net

----------
components: Tests
messages: 331078
nosy: vstinner
priority: normal
severity: normal
status: open
title: FTP tests of test_urllib2net fail on Travis CI: 425 Security: Bad IP connecting.
versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Dec 4 17:37:17 2018
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 04 Dec 2018 22:37:17 +0000
Subject: [New-bugs-announce] [issue35412] test_future4 ran no test
Message-ID: <1543963037.14.0.788709270274.issue35412@psf.upfronthosting.co.za>

New submission from STINNER Victor :

Since bpo-34279 has been fixed, regrtest now logs a message when a test runs no test. I noticed that test_future4 logs such a message:

...
0:05:23 load avg: 0.56 [152/412] test_future4
0:05:24 load avg: 0.56 [153/412] test_future5 -- test_future4 run no tests
...
2 tests run no tests: test_dtrace test_future4

I can reproduce the issue:

$ ./python -m test test_future4
(...)
test_future4 run no tests
(...)
Tests result: NO TEST RUN

The test has been added by:

commit 62e2c7e3dfffd8465a54b99fc6d3c2a60acab350
Author: Jeremy Hylton
Date: Wed Feb 28 17:48:06 2001 +0000

    Add regression test for future statements.
    This adds eight files, but seven are not tests in their own right;
    these files are mentioned in regrtest.

diff --git a/Lib/test/test_future4.py b/Lib/test/test_future4.py
new file mode 100644
index 0000000000..805263be89
--- /dev/null
+++ b/Lib/test/test_future4.py
@@ -0,0 +1,10 @@
+"""This is a test"""
+import __future__
+from __future__ import nested_scopes
+
+def f(x):
+    def g(y):
+        return x + y
+    return g
+
+print f(2)(4)

...

The test was removed by commit 3090694068670371cdbd5b1a3d3c5dbecc83835a. The file was recreated by:

commit fa50bad9578cf32e6adcaf52c3a58c7b6cd81e30
Author: Christian Heimes
Date: Wed Mar 26 22:55:31 2008 +0000

    I forgot to svn add the future test

...

I guess that it's related to:

commit 3c60833e1e53f6239825b44f76fa22172feb1790
Author: Christian Heimes
Date: Wed Mar 26 22:01:37 2008 +0000

    Patch #2477: Added from __future__ import unicode_literals
    The new PyParser_*Ex() functions are based on Neal's suggestion and initial patch.
    The new __future__ feature makes all '' and r'' unicode strings. b'' and br'' stay (byte) strings.

(Other candidates: commit 342212c52afd375d93f44f3ecda0914d77372f26 and commit 7f23d86107dfea69992322577c5033f2edbc3b4f.)
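The "run no tests" diagnosis follows directly from the file defining no TestCase subclasses. As a small illustrative sketch (the module name here is hypothetical), this is roughly what the unittest loader sees when pointed at a test_future4-style file containing only module-level code:

```python
import types
import unittest

# A stand-in for a test file that, like the 2001-era test_future4.py,
# contains only module-level statements and defines no TestCase
# subclasses and no test functions.
mod = types.ModuleType("test_future4_like")

# The loader finds no TestCase subclasses in the module, so the
# resulting suite is empty.
suite = unittest.TestLoader().loadTestsFromModule(mod)
print(suite.countTestCases())  # 0
```

An empty suite like this is what regrtest now reports as "run no tests".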
----------
messages: 331080
nosy: pablogsal, vstinner
priority: normal
severity: normal
status: open
title: test_future4 ran no test
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Dec 4 20:35:56 2018
From: report at bugs.python.org (STINNER Victor)
Date: Wed, 05 Dec 2018 01:35:56 +0000
Subject: [New-bugs-announce] [issue35413] test_multiprocessing_fork: test_del_pool() leaks dangling threads and processes on AMD64 FreeBSD CURRENT Shared 3.x
Message-ID: <1543973756.35.0.788709270274.issue35413@psf.upfronthosting.co.za>

New submission from STINNER Victor :

Previous issue fixing such a bug: bpo-33676.

https://buildbot.python.org/all/#/builders/168/builds/332

test_empty_string (test.test_multiprocessing_fork.WithThreadsTestPoll) ... ok
test_strings (test.test_multiprocessing_fork.WithThreadsTestPoll) ... ok
test_apply (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_async (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_async_timeout (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_context (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_del_pool (test.test_multiprocessing_fork.WithThreadsTestPool) ... Warning -- threading_cleanup() failed to cleanup 1 threads (count: 8, dangling: 9)
Dangling thread:
Dangling thread: <_MainThread(MainThread, started 34370793472)>
Dangling thread:
Dangling thread:
Dangling thread:
Dangling thread:
Dangling thread:
Dangling thread:
Dangling thread:
ok
test_empty_iterable (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_imap (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_imap_handle_iterable_exception (test.test_multiprocessing_fork.WithThreadsTestPool) ... ok
test_imap_unordered (test.test_multiprocessing_fork.WithThreadsTestPool) ...
ok

----------
components: Tests
messages: 331086
nosy: vstinner
priority: normal
severity: normal
status: open
title: test_multiprocessing_fork: test_del_pool() leaks dangling threads and processes on AMD64 FreeBSD CURRENT Shared 3.x
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 01:25:24 2018
From: report at bugs.python.org (Zackery Spytz)
Date: Wed, 05 Dec 2018 06:25:24 +0000
Subject: [New-bugs-announce] [issue35414] A reference counting bug in PyState_RemoveModule()
Message-ID: <1543991124.3.0.788709270274.issue35414@psf.upfronthosting.co.za>

New submission from Zackery Spytz :

There's a missing Py_INCREF(Py_None) in PyState_RemoveModule().

----------
components: Interpreter Core
messages: 331091
nosy: ZackerySpytz
priority: normal
severity: normal
status: open
title: A reference counting bug in PyState_RemoveModule()
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 03:41:12 2018
From: report at bugs.python.org (Dima Tisnek)
Date: Wed, 05 Dec 2018 08:41:12 +0000
Subject: [New-bugs-announce] [issue35415] fileno argument to socket.socket is not validated
Message-ID: <1543999272.31.0.788709270274.issue35415@psf.upfronthosting.co.za>

New submission from Dima Tisnek :

socket.socket gained a fileno= kwarg whose value is not checked if the address family and socket type are both provided. For example, the following is accepted:

>>> socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1234)
>>> socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=1234)
>>> socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=0.999)

This results in a socket object that will fail at runtime. One of the implications is that it's possible to "steal" a file descriptor, i.e.
create a socket for an fd that doesn't exist; then some other function/thread happens to create e.g. a socket with this specific fd, which can be "unexpectedly" used (or closed, or modified, e.g. its non-blocking flag changed) through the first socket object.

Additionally, if the shorthand is used, the exception raised in these cases has odd text; at least it was misleading for me:

>>> socket.socket(fileno=get_wrong_fd_from_somewhere())
[snip]
OSError: [Errno 9] Bad file descriptor: 'family'

I thought that I had a bug whereby a string was passed in instead of an int fd; ultimately I had to look in the cpython source code to understand what the 'family' meant.

I volunteer to submit a patch!

----------
messages: 331096
nosy: Dima.Tisnek
priority: normal
severity: normal
status: open
title: fileno argument to socket.socket is not validated
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 03:54:19 2018
From: report at bugs.python.org (=?utf-8?q?Micka=C3=ABl_Schoentgen?=)
Date: Wed, 05 Dec 2018 08:54:19 +0000
Subject: [New-bugs-announce] [issue35416] Fix potential resource warnings in distutils
Message-ID: <1544000059.73.0.788709270274.issue35416@psf.upfronthosting.co.za>

New submission from Mickaël Schoentgen :

I am looking to clean up potential ResourceWarnings in distutils.
The patch will provide two changes:
- ensure file descriptors are always closed where that is not currently the case
- uniform use of "with open(...)"

----------
components: Distutils
messages: 331097
nosy: Tiger-222, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: Fix potential resource warnings in distutils
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 05:10:19 2018
From: report at bugs.python.org (Jonathan Alush-Aben)
Date: Wed, 05 Dec 2018 10:10:19 +0000
Subject: [New-bugs-announce] [issue35417] Double parentheses in print function running 2to3 on an already correct call
Message-ID: <1544004619.56.0.788709270274.issue35417@psf.upfronthosting.co.za>

New submission from Jonathan Alush-Aben :

If 2to3 is run on a file with the following contents:

a="string"
print ("%s" % a)

the output is:

a="string"
print (("%s" % a))

although the input was already a valid call to print in Python 3.

----------
components: 2to3 (2.x to 3.x conversion tool)
messages: 331098
nosy: jondaa
priority: normal
severity: normal
status: open
title: Double parentheses in print function running 2to3 on an already correct call

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 07:51:48 2018
From: report at bugs.python.org (Cao Hongfu)
Date: Wed, 05 Dec 2018 12:51:48 +0000
Subject: [New-bugs-announce] [issue35418] python hung or stuck sometimes randomly on windows server 2008R2
Message-ID: <1544014308.03.0.788709270274.issue35418@psf.upfronthosting.co.za>

New submission from Cao Hongfu :

Recently, python frequently (but randomly) hung or stuck at initialization (when I click python.exe or use the python cmd prompt) on my server (running Windows Server 2008R2), but everything is fine on my Windows 7 PC. I tried reinstalling python, but that did not help (I also tried 3.6).
I tried process-explorer and found that the normal python process allocated about 12MB of memory but the stuck one only allocated 8MB.

Here is the stack information for the stuck python process (having 3 threads):

------------------thread1------------------
ntdll.dll!ZwWaitForSingleObject+0xa
ntdll.dll!RtlImageDirectoryEntryToData+0x118
ntdll.dll!RtlEnterCriticalSection+0xd1
ntdll.dll!EtwDeliverDataBlock+0x777
ntdll.dll!LdrLoadDll+0xed
!TlsGetValue+0x4756
!UuidCreate+0x1b00
!I_RpcBindingIsServerLocal+0x12899
!RegEnumKeyExW+0x13a
!RegEnumKeyExW+0xbe
!RpcBindingFree+0x320
!RpcAsyncRegisterInfo+0x10ff
!Ndr64AsyncClientCall+0x9da
!Ndr64AsyncClientCall+0xc9b
!NdrClientCall3+0xf5
!LsaOpenPolicy+0xb9
!LsaOpenPolicy+0x56
!LookupPrivilegeValueW+0x6f
!LookupPrivilegeValueA+0x84
!PyNamespace_New+0xd4
!PyCodec_LookupTextEncoding+0xb5
!PyObject_SetAttrId+0x21e
!PyMethodDef_RawFastCallDict+0x115
!PyObject_SetAttr+0x352
!PyEval_EvalFrameDefault+0x1182
!PyEval_EvalCodeWithName+0x1a0
!PyMethodDef_RawFastCallKeywords+0xc32
!PyEval_EvalFrameDefault+0x4b1
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x913
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x4b1
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x4b1
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x913
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x4b1
!PyMethodDef_RawFastCallKeywords+0xa77
!PyEval_EvalFrameDefault+0x913
!PyFunction_FastCallDict+0xdd
!PyObject_CallMethod+0xef
!PyObject_CallMethod+0xa2
!PyObject_CallMethod+0x3c
!PyTime_MulDiv+0x47
!Py_InitializeMainInterpreter+0x95
!PyMainInterpreterConfig_Read+0x309
!PyMapping_SetItemString+0x306
!PyBytes_AsString+0x142
!Py_Main+0x52
!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x1d

------------------thread2------------------
ntdll.dll!ZwWaitForSingleObject+0xa
ntdll.dll!RtlImageDirectoryEntryToData+0x118
ntdll.dll!RtlEnterCriticalSection+0xd1
!UuidCreate+0x1ae2
!NdrFullPointerQueryPointer+0x35d
!LsaLookupGetDomainInfo+0xb8
!RpcBindingFree+0x320
!RpcAsyncRegisterInfo+0x10ff
!Ndr64AsyncClientCall+0x9da
!Ndr64AsyncClientCall+0xc9b
!NdrClientCall3+0xf5
!LsaLookupOpenLocalPolicy+0x41
!LookupAccountNameLocalW+0xaf
!LookupAccountSidLocalW+0x25
!LookupAccountSidW+0x57
!MBCGlobal::get_proc_user_name+0x1f7
!MBCGlobal::init+0x240a
!HDirSnap::operator=+0xda
!LVPVTBase::to_file+0x46ef
ntdll.dll!RtlDeactivateActivationContextUnsafeFast+0x34e
ntdll.dll!EtwDeliverDataBlock+0xa44
ntdll.dll!LdrLoadDll+0xed
!TlsGetValue+0x4756
!PublicService+0x13ec
!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x1d

------------------thread3------------------
ntdll.dll!ZwWaitForSingleObject+0xa
ntdll.dll!RtlImageDirectoryEntryToData+0x118
ntdll.dll!RtlEnterCriticalSection+0xd1
ntdll.dll!LdrQueryModuleServiceTags+0x13f
ntdll.dll!CsrIdentifyAlertableThread+0x9d
ntdll.dll!EtwSendNotification+0x16d
ntdll.dll!RtlQueryProcessDebugInformation+0x371
ntdll.dll!EtwDeliverDataBlock+0xf00
!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x1d

Here is the stack info for the normal python process (having 2 threads):

------------------thread1------------------
ntdll.dll!ZwRequestWaitReplyPort+0xa
kernel32.dll!GetConsoleMode+0xf8
kernel32.dll!VerifyConsoleIoHandle+0x281
kernel32.dll!ReadConsoleW+0xbc
python37.dll!PyOS_Readline+0x4f4
python37.dll!PyOS_Readline+0x333
python37.dll!PyOS_Readline+0xfa
python37.dll!PyErr_NoMemory+0xc228
python37.dll!PyUnicode_AsUnicode+0x553
python37.dll!PyUnicode_AsUnicode+0x9c
python37.dll!PyParser_ParseFileObject+0x86
python37.dll!PyParser_ASTFromFileObject+0x82
python37.dll!PyRun_InteractiveOneObject+0x24a
python37.dll!PyRun_InteractiveLoopFlags+0xf6
python37.dll!PyRun_AnyFileExFlags+0x45
python37.dll!Py_UnixMain+0x50b
python37.dll!Py_UnixMain+0x5b3
python37.dll!PyErr_NoMemory+0x195a4
python37.dll!PyBytes_AsString+0x14f
python37.dll!Py_Main+0x52
python.exe+0x1258
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x1d
------------------thread2------------------
ntdll.dll!NtWaitForMultipleObjects+0xa
ntdll.dll!RtlIsCriticalSectionLockedByThread+0xd4d
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x1d

One of my friends says that this may be an issue with https://support.microsoft.com/en-us/help/2545627/a-multithreaded-application-might-crash-in-windows-7-or-in-windows-ser

Thx.

----------
components: Windows
messages: 331105
nosy: Cao Hongfu, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: python hung or stuck sometimes randomly on windows server 2008R2
type: crash
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 07:55:10 2018
From: report at bugs.python.org (Stan)
Date: Wed, 05 Dec 2018 12:55:10 +0000
Subject: [New-bugs-announce] [issue35419] Thread.is_alive while running Process.is_alive causes either premature termination or never-terminating.
Message-ID: <1544014510.18.0.788709270274.issue35419@psf.upfronthosting.co.za>

New submission from Stan :

Checking thread.is_alive() while the thread is checking Process.is_alive() seemingly causes undefined behavior. The attached POC is expected to print "ThreadN.data == 1999" for N in range(0, 20), with some repeats. However, the integers are spread all over the place. Moreover, sometimes one or more of the threads never terminate, resulting in a technically infinite number of "ThreadN.data == ###" prints.

In python2.7.15 I never observed a thread lock (only early terminations), but in python3.4.8 I did. You may have to adjust the max_count variable to get a higher success rate of thread locking.
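The attached bug_test.py is not reproduced here; the following is a rough, hypothetical sketch (all names are illustrative) of the pattern being described: worker threads repeatedly poll a Process's is_alive() while the main thread concurrently polls each Thread's is_alive():

```python
import multiprocessing
import threading

def child():
    # Short-lived child process; it exits almost immediately.
    pass

def worker(out, idx, max_count=200):
    # Worker thread: start a process and poll Process.is_alive()
    # while counting up to max_count.
    proc = multiprocessing.Process(target=child)
    proc.start()
    data = 0
    while data < max_count:
        proc.is_alive()   # the cross-check the report exercises
        data += 1
    proc.join()
    out[idx] = data

def main():
    out = {}
    threads = [threading.Thread(target=worker, args=(out, i)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        # Main thread concurrently polls Thread.is_alive().
        while t.is_alive():
            pass
        t.join()
    return sorted(out.values())

if __name__ == "__main__":
    # On a correctly behaving interpreter every worker reaches
    # max_count, i.e. [200, 200]; the report describes early exits
    # and hangs instead.
    print(main())
```

This sketch is deterministic on a correct interpreter; the report's point is that on the affected versions the counts came out wrong or the threads never terminated.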
I got about a 40% chance of `python3 bug_test.py` never finishing on an Intel(R) Core(TM) i7-4610M CPU @ 3.00GHz.

----------
files: bug_test.py
messages: 331106
nosy: Hexorg
priority: normal
severity: normal
status: open
title: Thread.is_alive while running Process.is_alive causes either premature termination or never-terminating.
type: behavior
versions: Python 2.7, Python 3.4
Added file: https://bugs.python.org/file47975/bug_test.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 14:16:39 2018
From: report at bugs.python.org (mattip)
Date: Wed, 05 Dec 2018 19:16:39 +0000
Subject: [New-bugs-announce] [issue35420] how to migrate a c-extension module to one that supports subinterpreters?
Message-ID: <1544037399.3.0.788709270274.issue35420@psf.upfronthosting.co.za>

New submission from mattip :

NumPy does not currently support subinterpreters; it has global state that is not cleaned up when releasing the module. I could not find a description of the steps I need to take to modernize the C-extension module to be able to be used under a subinterpreter. It would be nice to describe this in the Python documentation, or does such documentation exist?

----------
assignee: docs at python
components: Documentation
messages: 331142
nosy: docs at python, eric.snow, mattip
priority: normal
severity: normal
status: open
title: how to migrate a c-extension module to one that supports subinterpreters?
type: enhancement

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 14:36:51 2018
From: report at bugs.python.org (Lingaraj Gowdar)
Date: Wed, 05 Dec 2018 19:36:51 +0000
Subject: [New-bugs-announce] [issue35421] Expected result is not clear in case of list.append(list)
Message-ID: <1544038611.2.0.788709270274.issue35421@psf.upfronthosting.co.za>

New submission from Lingaraj Gowdar :

Currently the output of the append below cannot be used for a practical purpose. This issue is to clarify the expected result for this case of append.

>>> a=[1,2]
>>> a.append(a)
>>> a
[1, 2, [...]]
>>>

----------
assignee: terry.reedy
components: IDLE
messages: 331148
nosy: Lingaraj Gowdar, terry.reedy
priority: normal
severity: normal
status: open
title: Expected result is not clear in case of list.append(list)
type: behavior
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 16:24:32 2018
From: report at bugs.python.org (=?utf-8?q?C=C3=A9dric_Van_Rompay?=)
Date: Wed, 05 Dec 2018 21:24:32 +0000
Subject: [New-bugs-announce] [issue35422] misleading error message from ssl.get_server_certificate() when bad port
Message-ID: <1544045072.3.0.788709270274.issue35422@psf.upfronthosting.co.za>

New submission from Cédric Van Rompay :

When calling ssl.get_server_certificate() with a bad port number (I used 80 when I should have been using 443), the error raised is a bit misleading:

>>> import ssl
>>> ssl.get_server_certificate(('gitlab.com',80))
[...]
SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:847)

"SSL: wrong version number" seems to indicate that there is a mismatch between the SSL versions supported by the client and the ones supported by the server, when here I guess the problem would be better described as "there is no SSL available at this address+port".
----------
assignee: christian.heimes
components: SSL
messages: 331171
nosy: cedricvanrompay, christian.heimes
priority: normal
severity: normal
status: open
title: misleading error message from ssl.get_server_certificate() when bad port
type: behavior
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 19:12:39 2018
From: report at bugs.python.org (Eric Snow)
Date: Thu, 06 Dec 2018 00:12:39 +0000
Subject: [New-bugs-announce] [issue35423] Signal handling machinery still relies on "pending calls".
Message-ID: <1544055159.6.0.788709270274.issue35423@psf.upfronthosting.co.za>

New submission from Eric Snow :

For a while now the signal handling machinery has piggy-backed on ceval's "pending calls" machinery (e.g. Py_AddPendingCall). This is a bit confusing. It also increases the risk that unrelated changes to the pending calls code break signal handling.

----------
assignee: eric.snow
messages: 331196
nosy: eric.snow
priority: normal
severity: normal
stage: needs patch
status: open
title: Signal handling machinery still relies on "pending calls".
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 5 20:00:34 2018
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 06 Dec 2018 01:00:34 +0000
Subject: [New-bugs-announce] [issue35424] multiprocessing.Pool: emit ResourceWarning
Message-ID: <1544058034.55.0.788709270274.issue35424@psf.upfronthosting.co.za>

New submission from STINNER Victor :

For two years I have been frequently fixing "dangling thread" and "dangling process" warnings on buildbots. These bugs are really hard to reproduce. They usually require access to a specific buildbot, simulating a specific workload, and getting the timing right to trigger the warning.
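For context, such dangling-worker reports typically trace back to pools that are never explicitly shut down. A minimal sketch (illustrative code, not part of any patch) of the cleanup whose absence leaves the dangling threads and processes behind:

```python
import multiprocessing

def demo():
    # A pool spawns worker processes plus several handler threads.
    pool = multiprocessing.Pool(2)
    try:
        return pool.map(abs, [-3, -1, 2])
    finally:
        # Without close()/join() (or using the pool as a context
        # manager), those workers linger until garbage collection.
        pool.close()
        pool.join()

if __name__ == "__main__":
    print(demo())  # [3, 1, 2]
```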
I propose to emit a ResourceWarning in the multiprocessing.Pool destructor if the pool has not been cleaned up properly. I'm not sure in which cases a warning should be emitted. The attached PR is a WIP implementation.

----------
components: Tests
messages: 331201
nosy: pablogsal, vstinner
priority: normal
severity: normal
status: open
title: multiprocessing.Pool: emit ResourceWarning
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 6 07:28:28 2018
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 06 Dec 2018 12:28:28 +0000
Subject: [New-bugs-announce] [issue35425] test_eintr fails randomly on AMD64 FreeBSD 10-STABLE Non-Debug 3.7: TypeError: 'int' object is not callable
Message-ID: <1544099308.26.0.788709270274.issue35425@psf.upfronthosting.co.za>

New submission from STINNER Victor :

I don't understand the error; it doesn't make sense. It *seems* like the faulthandler.cancel_dump_traceback_later attribute has been set to an int, but I don't see how that could be possible. Or does the call raise the TypeError? But the call can be summarized as:

Py_CLEAR(thread.file);

and I don't see how Py_CLEAR() can trigger a TypeError.
https://buildbot.python.org/all/#/builders/170/builds/175

ERROR: test_sigwaitinfo (__main__.SignalEINTRTest)
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/eintrdata/eintr_tester.py", line 71, in tearDown
    faulthandler.cancel_dump_traceback_later()
TypeError: 'int' object is not callable

----------
components: Tests
messages: 331232
nosy: pablogsal, vstinner
priority: normal
severity: normal
status: open
title: test_eintr fails randomly on AMD64 FreeBSD 10-STABLE Non-Debug 3.7: TypeError: 'int' object is not callable
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 6 07:36:00 2018
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 06 Dec 2018 12:36:00 +0000
Subject: [New-bugs-announce] [issue35426] test_signal.test_interprocess_signal() race condition
Message-ID: <1544099760.1.0.788709270274.issue35426@psf.upfronthosting.co.za>

New submission from STINNER Victor :

test_signal.test_interprocess_signal() has a race condition:

with self.subprocess_send_signal(pid, "SIGUSR1") as child:  # here
    self.wait_signal(child, 'SIGUSR1', SIGUSR1Exception)

The test only catches SIGUSR1Exception inside wait_signal(), but the signal can be sent during the subprocess_send_signal() call. assertRaises(SIGUSR1Exception) should surround these two lines instead. wait_signal() shouldn't handle the signal. Or wait_signal() should call subprocess_send_signal(). It seems like Python 2.7 has the proper design.
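The scoping point can be shown in a standalone sketch (POSIX-only; all names here are illustrative, not the test's actual helpers): when the handler may fire during the "send" step as well as the "wait" step, a try block that covers both steps still catches it.

```python
import os
import signal

class SIGUSR1Exception(Exception):
    pass

def handler(signum, frame):
    raise SIGUSR1Exception

signal.signal(signal.SIGUSR1, handler)

caught = False
try:
    # Models subprocess_send_signal(): the handler may already fire
    # right here, before any "wait" step is reached.
    os.kill(os.getpid(), signal.SIGUSR1)
    # Models wait_signal(): with the wide try block, an "early" signal
    # is still converted into the expected exception.
    for _ in range(1000):
        pass
except SIGUSR1Exception:
    caught = True

print(caught)
```

A narrow scope that only wraps the second step would let an early delivery escape as a test error, which is exactly the buildbot failure mode above.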
It might be a regression introduced by myself in:

commit 32eb840a42ec0e131daac48d43aa35290e72571e
Author: Victor Stinner
Date: Tue Mar 15 11:12:35 2016 +0100

    Issue #26566: Rewrite test_signal.InterProcessSignalTests

    * Add Lib/test/signalinterproctester.py
    * Don't disable the garbage collector anymore
    * Don't use os.fork() with a subprocess to not inherit existing signal handlers or threads: start from a fresh process
    * Don't use UNIX kill command to send a signal but Python os.kill()
    * Use a timeout of 10 seconds to wait for the signal instead of 1 second
    * Always use signal.pause(), instead of time.wait(1), to wait for a signal
    * Use context manager on subprocess.Popen
    * remove code to retry on EINTR: it's no more needed since the PEP 475
    * remove unused function exit_subprocess()
    * Cleanup the code

FAIL: test_interprocess_signal (test.test_signal.PosixTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/test_signal.py", line 62, in test_interprocess_signal
    assert_python_ok(script)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/support/script_helper.py", line 157, in assert_python_ok
    return _assert_python(True, *args, **env_vars)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/support/script_helper.py", line 143, in _assert_python
    res.fail(cmd_line)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/support/script_helper.py", line 84, in fail
    err))
AssertionError: Process return code is 1
command line: ['/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/python', '-X', 'faulthandler', '-I', '/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/signalinterproctester.py']
stdout:
---
---
stderr:
---
E/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/subprocess.py:858: ResourceWarning: subprocess 64567 is still
running
  ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback

======================================================================
ERROR: test_interprocess_signal (__main__.InterProcessSignalTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/signalinterproctester.py", line 68, in test_interprocess_signal
    with self.subprocess_send_signal(pid, "SIGUSR1") as child:
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/signalinterproctester.py", line 50, in subprocess_send_signal
    return subprocess.Popen(args)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/subprocess.py", line 1476, in _execute_child
    part = os.read(errpipe_read, 50000)
  File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/signalinterproctester.py", line 22, in sigusr1_handler
    raise SIGUSR1Exception
SIGUSR1Exception

----------------------------------------------------------------------
Ran 1 test in 0.223s

FAILED (errors=1)
---

----------------------------------------------------------------------
Ran 43 tests in 31.872s

FAILED (failures=1, skipped=2)
test test_signal failed

----------
components: Tests
messages: 331233
nosy: vstinner
priority: normal
severity: normal
status: open
title: test_signal.test_interprocess_signal() race condition
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 6 11:39:34 2018
From: report at bugs.python.org (Mark Dickinson)
Date: Thu, 06 Dec 2018 16:39:34 +0000
Subject: [New-bugs-announce] [issue35427] logging UnicodeDecodeError from undecodable
strftime output
Message-ID: <1544114374.69.0.788709270274.issue35427@psf.upfronthosting.co.za>

New submission from Mark Dickinson :

We're seeing UnicodeDecodeErrors on Windows / Python 2.7 when using logging on a Japanese customer machine. The cause turns out to be in the log record formatting, where unicode fields are combined with non-decodable bytestrings coming from strftime.

More details: we were using the following formatter:

_LOG_FORMATTER = logging.Formatter(
    "%(asctime)s %(levelname)s:%(name)s:%(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%Z",
)

In the logging internals, that `datefmt` gets passed to `time.strftime`, which on the machine in question produced a non-ASCII bytestring (i.e., type `str`). When combined with other Unicode strings in the log record, this gave the `UnicodeDecodeError`.

I'm unfortunately failing to reproduce this directly on my own macOS / UK locale machine, but it's documented that `time.strftime` returns a value encoded according to the current locale. In this particular case, the output we were getting from the `time.strftime` call looked like:

"2018-12-06T23:57:14\x93\x8c\x8b\x9e (\x95W\x8f\x80\x8e\x9e)"

which, assuming an encoding of cp932, decodes to something plausible:

>>> s.decode("cp932")
u'2018-12-06T23:57:14\u6771\u4eac (\u6a19\u6e96\u6642)'
>>> print(s.decode("cp932"))
2018-12-06T23:57:14東京 (標準時)

It looks as though the logging module should be explicitly decoding the strftime output before doing formatting, using for example what's recommended in the strftime documentation [1]:

strftime().decode(locale.getlocale()[1])

Code links: this is the line that's producing non-decodable bytes:

https://github.com/python/cpython/blob/49cedc51a68b4cd2525c14ab02bd1a483d8be389/Lib/logging/__init__.py#L425

...
and this is the formatting operation that then ends up raising UnicodeDecodeError as a result of those: https://github.com/python/cpython/blob/49cedc51a68b4cd2525c14ab02bd1a483d8be389/Lib/logging/__init__.py#L469 This isn't an issue on Python 3, and I was unable to reproduce it on my non-Windows machine; that particular form of strftime output may well be specific to Windows (or possibly even specific to Japanese flavours of Windows). [1] https://docs.python.org/2/library/time.html#time.strftime ---------- components: Library (Lib) messages: 331238 nosy: mark.dickinson priority: normal severity: normal status: open title: logging UnicodeDecodeError from undecodable strftime output type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 13:08:42 2018 From: report at bugs.python.org (EZ) Date: Thu, 06 Dec 2018 18:08:42 +0000 Subject: [New-bugs-announce] [issue35428] xml.etree.ElementTree.tostring violates W3 standards allowing encoding='unicode' without error Message-ID: <1544119722.12.0.788709270274.issue35428@psf.upfronthosting.co.za> New submission from EZ : The documentation[0] for 3.x of xml.etree.ElementTree.tostring is quite clear: > Use encoding="unicode" to generate a Unicode string. See also the creation of the problem: https://bugs.python.org/issue10942 This is a violation of W3 standards, referenced by the ElementTree documentation[1] claiming it must conform to these standards, which state: ...it is a fatal error for an entity including an encoding declaration to be presented to the XML processor in an encoding other than that named in the declaration.... Encoding for 'unicode' does not appear in the named declarations (https://www.iana.org/assignments/character-sets/character-sets.xhtml) referenced by the same documentation[1]. 
Handling of a fatal error must, in part: Once a fatal error is detected, however, the processor MUST NOT continue normal processing (i.e., it MUST NOT continue to pass character data and information about the document's logical structure to the application in the normal way) [0] https://docs.python.org/3.2/library/xml.etree.elementtree.html [1] The encoding string included in XML output should conform to the appropriate standards. For example, "UTF-8" is valid, but "UTF8" is not. See http://www.w3.org/TR/2006/REC-xml11-20060816/#NT-EncodingDecl and http://www.iana.org/assignments/character-sets. ---------- components: XML messages: 331242 nosy: Zim priority: normal severity: normal status: open title: xml.etree.ElementTree.tostring violates W3 standards allowing encoding='unicode' without error type: behavior versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 15:01:58 2018 From: report at bugs.python.org (Roman Yurchak) Date: Thu, 06 Dec 2018 20:01:58 +0000 Subject: [New-bugs-announce] [issue35429] Incorrect use of raise NotImplemented Message-ID: <1544126518.18.0.788709270274.issue35429@psf.upfronthosting.co.za> New submission from Roman Yurchak : In two places in stdlib, `raise NotImplemented` is used instead of `raise NotImplementedError`.
The former is not valid and produces:

```
>>> raise NotImplemented('message')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NotImplementedType' object is not callable
```

---------- components: Library (Lib) messages: 331244 nosy: rth priority: normal severity: normal status: open title: Incorrect use of raise NotImplemented versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 15:27:58 2018 From: report at bugs.python.org (Roman Yurchak) Date: Thu, 06 Dec 2018 20:27:58 +0000 Subject: [New-bugs-announce] [issue35430] Lib/argparse.py uses `is` for string comparison Message-ID: <1544128078.84.0.788709270274.issue35430@psf.upfronthosting.co.za> New submission from Roman Yurchak : Lib/argparse.py uses `is` for string comparison,

```
 221:    if self.heading is not SUPPRESS and self.heading is not None:
 247:    if text is not SUPPRESS and text is not None:
 251:    if usage is not SUPPRESS:
 256:    if action.help is not SUPPRESS:
 290:    if part and part is not SUPPRESS])
 679:    if action.default is not SUPPRESS:
1130:    if self.dest is not SUPPRESS:
1766:    if action.dest is not SUPPRESS:
1768:    if action.default is not SUPPRESS:
1851:    if argument_values is not SUPPRESS:
2026:    if action.help is not SUPPRESS]
```

where `SUPPRESS = '==SUPPRESS=='`. Unless I'm missing something this can produce false negatives if the variable that we compare against is a slice from another string. Using equality is probably safer in any case. Detected with LGTM.com analysis.
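The pitfall described above is easy to demonstrate. A minimal sketch (building the comparison string at runtime models a value that was sliced or assembled from user input rather than taken from the module constant):

```python
from argparse import SUPPRESS  # the '==SUPPRESS==' sentinel string

# A string equal to SUPPRESS but constructed at runtime is a distinct
# object, so an identity check disagrees with an equality check.
candidate = ''.join(['==SUPPRESS', '=='])

print(candidate == SUPPRESS)  # True: equality compares contents
print(candidate is SUPPRESS)  # False in CPython: identity compares objects
```

This is why the sentinel checks in argparse only work as long as every caller passes the exact `SUPPRESS` object through, and why `==` would be the safer comparison.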
---------- components: Library (Lib) messages: 331246 nosy: rth priority: normal severity: normal status: open title: Lib/argparse.py uses `is` for string comparison versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 16:18:02 2018 From: report at bugs.python.org (kellerfuchs) Date: Thu, 06 Dec 2018 21:18:02 +0000 Subject: [New-bugs-announce] [issue35431] The math module should provide a function for computing binomial coefficients Message-ID: <1544131082.09.0.788709270274.issue35431@psf.upfronthosting.co.za> New submission from kellerfuchs : A recurring pain point, for me and for many others who use Python for mathematical computations, is that the standard library does not provide a function for computing binomial coefficients. I would like to suggest adding a function, in the math module, with the following signature: binomial(n: Integral, k: Integral) -> Integral A simple native Python implementation would be:

from functools import reduce
from math import factorial
from numbers import Integral

def binomial(n: Integral, k: Integral) -> Integral:
    if k < 0 or n < 0:
        raise ValueError("math.binomial takes non-negative parameters.")
    k = min(k, n-k)
    num, den = 1, 1
    for i in range(k):
        num = num * (n - i)
        den = den * (i + 1)
    return num//den

As far as I can tell, all of the math module is implemented in C, so this should be done in C too, but the implemented behaviour should be equivalent. I will submit a Github pull request once I have a ready-to-review patch. Not starting a PEP, per [PEP 1]: > Small enhancements or patches often don't need a PEP and can be injected into the Python development workflow with a patch submission to the Python issue tracker.
[PEP 1]: https://www.python.org/dev/peps/pep-0001/#id36 ---------- messages: 331251 nosy: kellerfuchs priority: normal severity: normal status: open title: The math module should provide a function for computing binomial coefficients type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 16:54:23 2018 From: report at bugs.python.org (Bruno Chanal) Date: Thu, 06 Dec 2018 21:54:23 +0000 Subject: [New-bugs-announce] [issue35432] str.format and string.Formatter bug with French (and other) locale Message-ID: <1544133264.0.0.788709270274.issue35432@psf.upfronthosting.co.za> New submission from Bruno Chanal : The short story: Small numbers are not displayed properly when using a French (language) locale or similar, and formatting output with str.format or string.Formatter(). The problem probably extends to other locales. Long story:

---
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import locale
>>> locale.setlocale(locale.LC_ALL, '')
'fr_CA.UTF-8'
>>> print('{:n}'.format(10))  # Garbled output
>>> print('{:n}'.format(10000))  # OK
10 000
>>> # Note: narrow non-break space used as thousands separator
...
pass
>>> locale.format_string('%d', 10, grouping=True)  # OK
'10'
>>> locale.format_string('%d', 10123)  # OK
'10123'
>>> locale.format_string('%d', 10123, grouping=True)  # OK thousands separator \u202f
'10\u202f123'
>>> import string
>>> print(string.Formatter().format('{:n}', 10))  # Same problem with Formatter
AB
>>> print(string.Formatter().format('{:n}', 10000))
10 000

Locale-aware functions implementing the {:n} formatting code, such as str.format and string.Formatter, generate garbled output with small numbers under a French locale. However, locale.format_string('%d', numeric_value) produces valid strings. In other words, it's a workaround for the time being... The problem seems to originate from a new version of Ubuntu: I ran the same program about 18 months ago and didn't notice any problem. My 0.02 $ worth of analysis: the output from str.format is some random and changing value with small numbers. The behavior is reminiscent of invalid memory reads in C functions, e.g., a mismatch of parameters in function calls, or similar. The value is not consistent. It feels like format does not expect and deal properly with multi-byte Unicode characters as part of numbers. The space character is a NARROW NON-BREAK SPACE in most Ubuntu French locales (and quite a few others), however. The problem shows up in Python 3.6 and 3.7. This might also be a security issue...
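The workaround mentioned in the report can be wrapped up in a small helper. A sketch (it falls back to the 'C' locale when the French locale is unavailable, so the thousands separator only appears where the locale actually defines one):

```python
import locale

def format_int(value):
    """Format an integer with the current locale's thousands grouping,
    going through locale.format_string rather than the buggy {:n} path."""
    return locale.format_string('%d', value, grouping=True)

# Use the French locale when present; fr_CA.UTF-8 may not exist everywhere.
try:
    locale.setlocale(locale.LC_ALL, 'fr_CA.UTF-8')
except locale.Error:
    locale.setlocale(locale.LC_ALL, 'C')

print(format_int(10))     # '10' in any locale (no grouping below 1000)
print(format_int(10123))  # '10 123' under fr_CA (U+202F separator), '10123' under C
```

Unlike `'{:n}'.format(...)`, this path produced correct strings in the report above, in both the small- and large-number cases.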
---------- components: Interpreter Core messages: 331254 nosy: canuck7 priority: normal severity: normal status: open title: str.format and string.Formatter bug with French (and other) locale type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 20:03:10 2018 From: report at bugs.python.org (Jeremy Kloth) Date: Fri, 07 Dec 2018 01:03:10 +0000 Subject: [New-bugs-announce] [issue35433] Correctly detect installed SDK versions Message-ID: <1544144590.98.0.788709270274.issue35433@psf.upfronthosting.co.za> New submission from Jeremy Kloth : In the process of eliminating compiler warnings on my buildbot, I needed to update VS2015 to the latest toolset (VS2015 Update 3). This in turn now causes an error about not having the required version of the Windows SDK installed. It seems that the detection logic for that uses a hard-coded list which may not be up-to-date (and possibly incorrect for some installs). Referenced PR fixes this. ---------- components: Build, Windows messages: 331258 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Correctly detect installed SDK versions versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 6 23:46:28 2018 From: report at bugs.python.org (Josh Rosenberg) Date: Fri, 07 Dec 2018 04:46:28 +0000 Subject: [New-bugs-announce] [issue35434] Wrong bpo linked in What's New in 3.8 Message-ID: <1544157988.47.0.788709270274.issue35434@psf.upfronthosting.co.za> New submission from Josh Rosenberg : https://docs.python.org/3.8/whatsnew/3.8.html#optimizations begins with: shutil.copyfile(), shutil.copy(), shutil.copy2(), shutil.copytree() and shutil.move() use platform-specific "fast-copy"
syscalls on Linux, macOS and Solaris in order to copy the file more efficiently. ... more explanation ... (Contributed by Giampaolo Rodolà in bpo-25427.) That's all correct, except bpo-25427 is about removing the pyvenv script; it should be referencing bpo-33671. ---------- assignee: docs at python components: Documentation keywords: easy messages: 331264 nosy: docs at python, giampaolo.rodola, josh.r priority: low severity: normal status: open title: Wrong bpo linked in What's New in 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 7 01:30:58 2018 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Wirtel?=) Date: Fri, 07 Dec 2018 06:30:58 +0000 Subject: [New-bugs-announce] [issue35435] Documentation of 3.3 is available Message-ID: <1544164258.79.0.788709270274.issue35435@psf.upfronthosting.co.za> New submission from Stéphane Wirtel : Today, I was looking for the doc of unittest.mock and the result from DuckDuckGo was this link: https://docs.python.org/3.3/library/unittest.mock-examples.html In the devguide, we have stopped support for this version, and in the bug tracker 3.3 is no longer available. Maybe we could remove the doc from the server.
---------- assignee: docs at python components: Documentation messages: 331271 nosy: docs at python, matrixise, mdk priority: normal severity: normal status: open title: Documentation of 3.3 is available _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 7 01:54:27 2018 From: report at bugs.python.org (Zackery Spytz) Date: Fri, 07 Dec 2018 06:54:27 +0000 Subject: [New-bugs-announce] [issue35436] Add missing PyErr_NoMemory() calls Message-ID: <1544165667.15.0.788709270274.issue35436@psf.upfronthosting.co.za> Change by Zackery Spytz : ---------- components: Extension Modules, Interpreter Core nosy: ZackerySpytz priority: normal severity: normal status: open title: Add missing PyErr_NoMemory() calls type: behavior versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 7 04:57:18 2018 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 07 Dec 2018 09:57:18 +0000 Subject: [New-bugs-announce] [issue35437] Almost all Windows buildbots are failing to compile Message-ID: <1544176638.38.0.788709270274.issue35437@psf.upfronthosting.co.za> New submission from Pablo Galindo Salgado : Almost all Windows buildbots are failing to compile Python:

https://buildbot.python.org/all/#/builders/130/builds/525
https://buildbot.python.org/all/#/builders/113/builds/825
https://buildbot.python.org/all/#/builders/121/builds/782
https://buildbot.python.org/all/#/builders/58/builds/1680
https://buildbot.python.org/all/#/builders/17/builds/494
...

I suspect that is due to this commit: 468a15a and its backports.
---------- components: Interpreter Core, Windows messages: 331283 nosy: pablogsal, paul.moore, steve.dower, tim.golden, zach.ware priority: release blocker severity: normal status: open title: Almost all Windows buildbots are failing to compile versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 7 18:25:59 2018 From: report at bugs.python.org (Eddie Elizondo) Date: Fri, 07 Dec 2018 23:25:59 +0000 Subject: [New-bugs-announce] [issue35438] Extension modules using non-API functions Message-ID: <1544225159.96.0.788709270274.issue35438@psf.upfronthosting.co.za> New submission from Eddie Elizondo : Three extension modules: _testcapimodule.c, posixmodule.c, and mathmodule.c are using `_PyObject_LookupSpecial`, which is not part of the public API. These should instead use `PyObject_GetAttrString` or `PyType_GetSlot`. ---------- components: Library (Lib) messages: 331364 nosy: eelizondo priority: normal severity: normal status: open title: Extension modules using non-API functions type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 7 19:26:46 2018 From: report at bugs.python.org (Arturo Inzunza) Date: Sat, 08 Dec 2018 00:26:46 +0000 Subject: [New-bugs-announce] [issue35439] New class instance not initializing variables of type list Message-ID: <1544228806.94.0.788709270274.issue35439@psf.upfronthosting.co.za> New submission from Arturo Inzunza : List type variables in a class are not reset on new instances of the class. Example:

class Klazz:
    lst = []
    def __init__(self, va):
        print(self.lst)
        self.lst.append(va)

k = Klazz(1)
[] -> This is correct as the lst value is empty on class instantiation
k2 = Klazz(2)
[1] -> This is wrong, a totally new instance of the class retains the value of a previous class instance lst variable
k3 = Klazz(3)
[1, 2] -> And so on...
new instances all share the same list ---------- messages: 331370 nosy: Arturo Inzunza priority: normal severity: normal status: open title: New class instance not initializing variables of type list versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 8 04:05:57 2018 From: report at bugs.python.org (Kamaal Khan) Date: Sat, 08 Dec 2018 09:05:57 +0000 Subject: [New-bugs-announce] [issue35440] Setup failed 0x80072f7d - Unspecified error Message-ID: <1544259957.98.0.788709270274.issue35440@psf.upfronthosting.co.za> New submission from Kamaal Khan : I've been trying to install version 3.7.1 64-bit but it keeps on giving me that error. Tried fixing it by installing KB2999226, but to no avail. Been running the install file as admin too. Using Windows 7. Install log is attached. ---------- components: Windows files: Python 3.7.1 (64-bit) log.txt messages: 331375 nosy: DesignEngineer, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Setup failed 0x80072f7d - Unspecified error type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file47981/Python 3.7.1 (64-bit) log.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 8 05:04:58 2018 From: report at bugs.python.org (Zackery Spytz) Date: Sat, 08 Dec 2018 10:04:58 +0000 Subject: [New-bugs-announce] [issue35441] Dead (and buggy) code due to mishandling of PyList_SetItem() errors Message-ID: <1544263498.57.0.788709270274.issue35441@psf.upfronthosting.co.za> Change by Zackery Spytz : ---------- components: Extension Modules nosy: ZackerySpytz priority: normal severity: normal status: open title: Dead (and buggy) code due to mishandling of PyList_SetItem() errors type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker 
_______________________________________ From report at bugs.python.org Sat Dec 8 16:58:49 2018 From: report at bugs.python.org (Victor Porton) Date: Sat, 08 Dec 2018 21:58:49 +0000 Subject: [New-bugs-announce] [issue35442] Chain of several subcommands in argparse Message-ID: <1544306329.79.0.788709270274.issue35442@psf.upfronthosting.co.za> New submission from Victor Porton : We should consider some way to implement argparse functionality asked here: https://stackoverflow.com/q/53686523/856090 It is unclear how exactly to do this. This message is a call to discuss what should be the information format and API. The awful thing is that I may need to write my own command line parser, as current argparse seems to be unable to provide this functionality. I think, I will implement this for my program sooner or later but the idea to write my own analogue of argparse somehow terrifies me. ---------- components: Library (Lib) messages: 331393 nosy: porton priority: normal severity: normal status: open title: Chain of several subcommands in argparse type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 04:33:16 2018 From: report at bugs.python.org (Muhammed Alkan) Date: Sun, 09 Dec 2018 09:33:16 +0000 Subject: [New-bugs-announce] [issue35443] Add Tail Call Optimization Message-ID: <1544347996.26.0.788709270274.issue35443@psf.upfronthosting.co.za> New submission from Muhammed Alkan : I see nothing wrong with adding Tail Call Optimization to Python. 
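Until (and unless) the interpreter ever performs this optimization, tail calls can be eliminated in user code with a trampoline. A minimal sketch (the names are illustrative, not a proposed API): the tail-recursive function returns a zero-argument thunk instead of calling itself, and a driver loop keeps invoking thunks until a plain value comes back, so the call stack stays flat.

```python
def trampoline(fn, *args):
    """Run fn; while the result is callable, treat it as a deferred
    tail call and invoke it, keeping the stack at constant depth."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def factorial(n, acc=1):
    # The tail call is returned as a thunk rather than made directly.
    if n <= 1:
        return acc
    return lambda: factorial(n - 1, acc * n)

print(trampoline(factorial, 5))  # 120
# Works far past sys.getrecursionlimit(), since no frame stack builds up:
print(trampoline(factorial, 5000) > 0)  # True
```

The usual counter-argument (and a reason past proposals were rejected) is that eliminating tail calls in the interpreter would destroy the tracebacks Python users expect; the trampoline keeps that trade-off explicit and local.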
---------- messages: 331420 nosy: midnio priority: normal severity: normal status: open title: Add Tail Call Optimization type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 06:01:30 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 09 Dec 2018 11:01:30 +0000 Subject: [New-bugs-announce] [issue35444] Unify and optimize the helper for getting a builtin object Message-ID: <1544353290.83.0.788709270274.issue35444@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : _PyIter_GetBuiltin() was introduced in issue14288 (31668b8f7a3efc7b17511bb08525b28e8ff5f23a). This was used for getting references to the builtins "iter" and "reversed". It was renamed to _PyObject_GetBuiltin() in a701388de1135241b5a8e4c970e06c0e83a66dc0. There is other code that gets references to the builtin "getattr" using PyEval_GetBuiltins(). It is more efficient, but contains bugs. The proposed PR unifies getting references to builtins:

* The prefix _PyObject_ is changed to _PyEval_, since this function relates not to the object type but to the evaluation environment.
* It now uses the private _Py_Identifier API instead of a raw C string. This saves time by omitting the creation of a Unicode object on every call.
* It now uses the fast PyEval_GetBuiltins() instead of the slower PyImport_Import().
* Fixed error handling in the code that used PyEval_GetBuiltins() before. It no longer swallows unexpected exceptions, no longer returns an error without setting an exception, and no longer causes raising a SystemError.
An example of an error in the current code:

>>> import builtins
>>> del builtins.getattr
>>> int.bit_length.__reduce__()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SystemError: NULL object passed to Py_BuildValue

---------- components: Interpreter Core messages: 331424 nosy: kristjan.jonsson, pitrou, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Unify and optimize the helper for getting a builtin object versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 07:34:29 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 09 Dec 2018 12:34:29 +0000 Subject: [New-bugs-announce] [issue35445] Do not ignore errors when create posix.environ Message-ID: <1544358869.69.0.788709270274.issue35445@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Currently most errors while creating posix.environ are ignored, except an error of creating an empty dict. The initial revision 85a5fbbdfea617f6cc8fae82c9e8c2b5c424436d contained the comment "XXX This part ignores errors". Later changes removed "XXX" from the comment and added explicit error clearing. Later the POSIX code was duplicated for Windows. It looks to me that that comment did not declare intentional behavior, but just described the existing code, and was left as a reminder for implementing error handling. The proposed PR implements proper error handling in this code.
---------- components: Extension Modules messages: 331427 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Do not ignore errors when create posix.environ type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 07:40:06 2018 From: report at bugs.python.org (Alistsair Lenhard) Date: Sun, 09 Dec 2018 12:40:06 +0000 Subject: [New-bugs-announce] [issue35446] incorrect example Message-ID: <1544359206.53.0.788709270274.issue35446@psf.upfronthosting.co.za> New submission from Alistsair Lenhard : under: https://docs.python.org/3/tutorial/errors.html Originally it says: "Note that if the except clauses were reversed (with except B first), it would have printed B, B, B -- the first matching except clause is triggered." It should read: "Note that if the except clauses were reversed (with except B first), it would have printed D, D, D -- the first matching except clause is triggered." As D is the first expression in the print statement.
So if the expression is changed to "except B:"

class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

for cls in [B, C, D]:
    try:
        raise cls()
    except B:
        print("D")
    except C:
        print("C")
    except D:
        print("B")

Result is:
D
D
D

---------- assignee: docs at python components: Documentation messages: 331428 nosy: Alistair, docs at python priority: normal severity: normal status: open title: incorrect example versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 10:23:18 2018 From: report at bugs.python.org (Shishmarev Pavel) Date: Sun, 09 Dec 2018 15:23:18 +0000 Subject: [New-bugs-announce] [issue35447] Redundant try-except block in urllib Message-ID: <1544368998.3.0.788709270274.issue35447@psf.upfronthosting.co.za> New submission from Shishmarev Pavel : https://github.com/python/cpython/blob/master/Lib/urllib/parse.py#L875 It's redundant to raise and then catch an exception. ---------- components: Library (Lib) messages: 331436 nosy: PashaWNN priority: normal severity: normal status: open title: Redundant try-except block in urllib type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 11:35:30 2018 From: report at bugs.python.org (Adrian Wielgosik) Date: Sun, 09 Dec 2018 16:35:30 +0000 Subject: [New-bugs-announce] [issue35448] ConfigParser .read() - handling of nonexistent files Message-ID: <1544373330.85.0.788709270274.issue35448@psf.upfronthosting.co.za> New submission from Adrian Wielgosik : Documentation of ConfigParser says: > If a file named in filenames cannot be opened, that file will be ignored.
This is designed so that you can specify an iterable of potential configuration file locations (for example, the current directory, the user's home directory, and some system-wide directory), and all existing configuration files in the iterable will be read. While this is a useful property, it can also be a footgun. The first read() example in the Quick Start section contains just a single file read: >>> config.read('example.ini') I would expect that this basic usage is very popular. If the file doesn't exist, the normal usage pattern fails in a confusing way:

from configparser import ConfigParser
config = ConfigParser()
config.read('config.txt')
value = config.getint('section', 'option')
---> configparser.NoSectionError: No section: 'section'

In my opinion, this error isn't very obvious to understand and debug, unless you have read that piece of the .read() documentation. This behavior did also bite me even more, with another usage pattern I've found in a project I maintain:

> config.read('global.txt')
> config.read('local.txt')

Here, both files are expected to exist, with the latter one extending or updating configuration from the first file. If one of the files doesn't exist (e.g. a mistake during deployment), there's no obvious error, but the program will be configured in a different way than intended. Now, I'm aware that all of this can be avoided by simply using `read_file()`:

> with open('file.txt') as f:
>     config.read_file(f)

But again, `.read()` is the one usually mentioned first in both the official documentation and most independent guides, so it's easy to get wrong. Due to this, I propose adding an extra parameter to .read(): read(filenames, encoding=None, check_exist=False) that, when manually set to True, will throw an exception if any of the input files doesn't exist; and to use this parameter by default in the Quick Start section of the ConfigParser documentation. If this is a reasonable idea, I could try and make a PR.
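In the meantime, the existing return value of read() already makes a strict wrapper possible, since it lists only the files that were successfully parsed. A sketch of the behavior proposed above (the helper name and the check are illustrative, not an existing API):

```python
from configparser import ConfigParser

def read_strict(config, filenames, encoding=None):
    """Like ConfigParser.read(), but raise if any named file was not read."""
    if isinstance(filenames, str):
        filenames = [filenames]
    # read() returns the subset of filenames that were successfully parsed.
    read_ok = config.read(filenames, encoding=encoding)
    missing = set(filenames) - set(read_ok)
    if missing:
        raise FileNotFoundError(
            "config file(s) not found: %s" % ", ".join(sorted(missing)))
    return read_ok
```

With such a helper, the 'global.txt'/'local.txt' deployment mistake described above fails loudly at read time instead of silently configuring the program differently.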
For comparison, the `toml` Python library has the following behavior:

- if the argument is a single filename and the file doesn't exist, it throws
- if the argument is a list of filenames and none exist, it throws
- if the argument is a list of filenames and at least one exists, it works, but prints a warning for each nonexistent file.

For the record, it seems like this issue was also mentioned in https://bugs.python.org/issue490399 ---------- components: Library (Lib) messages: 331439 nosy: Adrian Wielgosik priority: normal severity: normal status: open title: ConfigParser .read() - handling of nonexistent files type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 12:41:42 2018 From: report at bugs.python.org (Stefan Seefeld) Date: Sun, 09 Dec 2018 17:41:42 +0000 Subject: [New-bugs-announce] [issue35449] documenting objects Message-ID: <1544377302.47.0.788709270274.issue35449@psf.upfronthosting.co.za> New submission from Stefan Seefeld : On multiple occasions I have wanted to add documentation not only to Python classes and functions, but also instance variables. This seems to involve (at least) two orthogonal questions: 1) what is the proper syntax to associate documentation (docstrings?) to objects? 2) what changes need to be applied to Python's infrastructure (e.g., the help system) to support it? I have attempted to work around 1) in my custom code by explicitly setting an object's `__doc__` attribute. However, calling `help()` on such an object would simply ignore that attribute, and instead list the documentation associated with the instance type. Am I missing something here, i.e. am I approaching the problem the wrong way, or am I the first to want to use object-specific documentation?
---------- messages: 331443 nosy: stefan priority: normal severity: normal status: open title: documenting objects type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 9 13:05:37 2018 From: report at bugs.python.org (Marcin) Date: Sun, 09 Dec 2018 18:05:37 +0000 Subject: [New-bugs-announce] [issue35450] venv module doesn't create a copy of python binary by default Message-ID: <1544378737.46.0.788709270274.issue35450@psf.upfronthosting.co.za> New submission from Marcin : Hello, from documentation: https://docs.python.org/3/library/venv.html "python3 -m venv /path/to/new/virtual/environment Running this command creates the target directory (creating any parent directories that don't exist already) and places a pyvenv.cfg file in it with a home key pointing to the Python installation from which the command was run. It also creates a bin (or Scripts on Windows) subdirectory containing **a copy** of the python binary (or binaries, in the case of Windows)." This is not true. In my case it creates symlinks to the python binary by default. This is quite different. Upgrading the system's python version broke my virtual environment because I believed I had a static copy of the python binary in my virtual environment.
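For reference, both behaviors are available; symlinking is simply the default on POSIX. The `--copies` command-line flag, or `symlinks=False` on the underlying `venv.EnvBuilder` API, requests real copies. A sketch (the target path is illustrative):

```python
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), 'demo-env')

# Roughly equivalent to: python3 -m venv --copies demo-env
venv.EnvBuilder(symlinks=False, with_pip=False).create(target)

# With symlinks=False the interpreter under bin/ is a real file,
# not a symlink back to the base installation (Scripts\ on Windows).
python_bin = os.path.join(target, 'bin', 'python')
print(os.path.islink(python_bin))  # False on POSIX
```

A copied interpreter survives the base Python being upgraded or removed on disk, at the cost of extra space and of not picking up interpreter bug fixes, which is the trade-off behind the symlink default.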
---------- assignee: docs at python components: Documentation messages: 331445 nosy: docs at python, mkkot priority: normal severity: normal status: open title: venv module doesn't create a copy of python binary by default versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 10 04:18:29 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 10 Dec 2018 09:18:29 +0000 Subject: [New-bugs-announce] [issue35451] Incorrect reference counting for sys.warnoptions and sys._xoptions Message-ID: <1544433509.38.0.788709270274.issue35451@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Borrowed references are incorrectly decrefed in get_warnoptions() and get_xoptions() in Python/sysmodule.c. The bug was introduced in issue30860. ---------- components: Interpreter Core messages: 331478 nosy: eric.snow, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Incorrect reference counting for sys.warnoptions and sys._xoptions type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 10 05:39:44 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 10 Dec 2018 10:39:44 +0000 Subject: [New-bugs-announce] [issue35452] Make PySys_HasWarnOptions() never raising an exception Message-ID: <1544438384.96.0.788709270274.issue35452@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : While "warnings" was a static variable in Python/sysmodule.c, it was guaranteed that PySys_HasWarnOptions() never raises an exception. But since it became a value of the sys dict (see issue30860), it can have an arbitrary type, and PySys_HasWarnOptions() can raise an exception (while returning 0). The proposed PR makes it never raise an exception again.
---------- components: Interpreter Core messages: 331488 nosy: eric.snow, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Make PySys_HasWarnOptions() never raising an exception type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Dec 10 05:54:42 2018 From: report at bugs.python.org (Cristian Ciupitu) Date: Mon, 10 Dec 2018 10:54:42 +0000 Subject: [New-bugs-announce] [issue35453] pathlib.Path: glob and rglob should accept PathLike patterns Message-ID: <1544439282.17.0.788709270274.issue35453@psf.upfronthosting.co.za>

New submission from Cristian Ciupitu : pathlib.Path.glob and pathlib.Path.rglob don't work with os.PathLike patterns. Short example:

    from pathlib import Path, PurePath

    # fails
    tuple(Path('/etc').glob(PurePath('passwd')))     # TypeError
    tuple(Path('/etc').rglob(PurePath('passwd')))    # TypeError
    tuple(Path('C:\\').glob(PurePath('Windows')))    # AttributeError
    tuple(Path('C:\\').rglob(PurePath('Windows')))   # AttributeError

    # works
    from os import fspath
    tuple(Path('/etc').glob(fspath(PurePath('passwd'))))
    tuple(Path('/etc').rglob(fspath(PurePath('passwd'))))
    tuple(Path('C:\\').glob(fspath(PurePath('Windows'))))
    tuple(Path('C:\\').rglob(fspath(PurePath('Windows'))))

---------- components: Library (Lib) messages: 331491 nosy: ciupicri priority: normal severity: normal status: open title: pathlib.Path: glob and rglob should accept PathLike patterns versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Dec 10 07:30:05 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 10 Dec 2018 12:30:05 +0000 Subject: [New-bugs-announce] [issue35454] Fix miscellaneous issues in error handling Message-ID: <1544445005.88.0.788709270274.issue35454@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka : The
proposed patch fixes miscellaneous issues in error handling.

---------- components: Extension Modules, Interpreter Core messages: 331504 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Fix miscellaneous issues in error handling type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Dec 10 09:32:21 2018 From: report at bugs.python.org (Jakub Kulik) Date: Mon, 10 Dec 2018 14:32:21 +0000 Subject: [New-bugs-announce] [issue35455] Solaris thread_time doesn't work with current implementation Message-ID: <1544452341.93.0.788709270274.issue35455@psf.upfronthosting.co.za>

New submission from Jakub Kulik : The implementation of time.thread_time() doesn't work on Solaris because the clock_id CLOCK_THREAD_CPUTIME_ID is not usable (it is defined, but clock_gettime returns an EINVAL error). Solaris, however, has the function gethrvtime() which can substitute for this functionality. I attached a possible patch which does work during tests, and I further tested it with some basic scripts.

---------- components: Extension Modules files: thread_time.diff keywords: patch messages: 331509 nosy: kulikjak priority: normal severity: normal status: open title: Solaris thread_time doesn't work with current implementation versions: Python 3.7 Added file: https://bugs.python.org/file47984/thread_time.diff _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Dec 10 23:40:29 2018 From: report at bugs.python.org (Yahya Abou Imran) Date: Tue, 11 Dec 2018 04:40:29 +0000 Subject: [New-bugs-announce] [issue35456] asyncio.Task.set_result() and set_exception() missing docstrings (and Liskov sub.
principle) Message-ID: <1544503229.89.0.788709270274.issue35456@psf.upfronthosting.co.za>

New submission from Yahya Abou Imran : In asyncio.Task help:

 | set_exception(self, exception, /)
 |     Mark the future done and set an exception.
 |
 |     If the future is already done when this method is called, raises
 |     InvalidStateError.
 |
 | set_result(self, result, /)
 |     Mark the future done and set its result.
 |
 |     If the future is already done when this method is called, raises
 |     InvalidStateError.

These docstrings are inherited from asyncio.Future. But in fact they are wrong, since https://github.com/python/cpython/blob/4824385fec0a1de99b4183f995a3e4923771bf64/Lib/asyncio/tasks.py#L161 has:

    def set_result(self, result):
        raise RuntimeError('Task does not support set_result operation')

    def set_exception(self, exception):
        raise RuntimeError('Task does not support set_exception operation')

Just adding another docstring is not a good solution - at least for me - because the problem is in fact deeper: this proves by itself that a Task is not a Future, or shouldn't be, because it breaks the Liskov substitution principle. We could have both Future and Task inheriting from some base class like PendingOperation which would contain all the methods of Future except these two setters. One problem to deal with might be those calls to super().set_result/exception() in Task._step(): https://github.com/python/cpython/blob/4824385fec0a1de99b4183f995a3e4923771bf64/Lib/asyncio/tasks.py#L254

        except StopIteration as exc:
            if self._must_cancel:
                # Task is cancelled right before coro stops.
                self._must_cancel = False
                super().set_exception(exceptions.CancelledError())
            else:
                super().set_result(exc.value)
        except exceptions.CancelledError:
            super().cancel()  # I.e., Future.cancel(self).
        except Exception as exc:
            super().set_exception(exc)
        except BaseException as exc:
            super().set_exception(exc)
            raise

One way to deal with that would be to let a Task have a Future. "Prefer composition over inheritance" as they say.
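The substitution problem can be demonstrated in a few lines (a sketch, not the proposed redesign): the same call that works on a plain Future is refused by a Task at runtime.

```python
import asyncio

# Task inherits set_result() from Future, but rejects the call.
async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.set_result(42)           # fine on a plain Future
    task = asyncio.ensure_future(asyncio.sleep(0))
    try:
        task.set_result(42)      # same method, refused by Task
    except RuntimeError as exc:
        print(exc)               # Task does not support set_result operation
    await task
    print(fut.result())          # 42

asyncio.run(main())
```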
I want to work on a PR for this if nobody goes against it...

PS: I really don't like it when some people say that Python core developers are known to have poor knowledge of OOP principles. So I really don't like letting something like this sit in the standard library...

---------- messages: 331570 nosy: yahya-abou-imran priority: normal severity: normal status: open title: asyncio.Task.set_result() and set_exception() missing docstrings (and Liskov sub. principle) type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 04:30:47 2018 From: report at bugs.python.org (larsfuse) Date: Tue, 11 Dec 2018 09:30:47 +0000 Subject: [New-bugs-announce] [issue35457] robotparser reads empty robots.txt file as "all denied" Message-ID: <1544520647.95.0.788709270274.issue35457@psf.upfronthosting.co.za>

New submission from larsfuse : The standard (http://www.robotstxt.org/robotstxt.html) says:

> To allow all robots complete access:
> User-agent: *
> Disallow:
> (or just create an empty "/robots.txt" file, or don't use one at all)

Here I give python an empty file:

$ curl http://10.223.68.186/robots.txt
$

Code:

    rp = robotparser.RobotFileParser()
    print(robotsurl)
    rp.set_url(robotsurl)
    rp.read()
    print("fetch /", rp.can_fetch(useragent="*", url="/"))
    print("fetch /admin", rp.can_fetch(useragent="*", url="/admin"))

Result:

$ ./test.py
http://10.223.68.186/robots.txt
('fetch /', False)
('fetch /admin', False)

So robotparser thinks the site is blocked.
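The expectation from the standard can be checked without the network by feeding the parser an empty sequence of lines. A sketch using the Python 3 name of the module, urllib.robotparser (the report above is against 2.7, where read() appears to leave the parser in its initial deny-all state for an empty file):

```python
import urllib.robotparser

# An empty robots.txt contains no rules, which the standard says means
# "all robots have complete access".
rp = urllib.robotparser.RobotFileParser()
rp.parse([])  # empty file: no lines at all
print(rp.can_fetch("*", "/"))       # True
print(rp.can_fetch("*", "/admin"))  # True
```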
---------- components: Library (Lib) messages: 331595 nosy: larsfuse priority: normal severity: normal status: open title: robotparser reads empty robots.txt file as "all denied" type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 05:37:38 2018 From: report at bugs.python.org (STINNER Victor) Date: Tue, 11 Dec 2018 10:37:38 +0000 Subject: [New-bugs-announce] [issue35458] test_shutil.test_disk_usage() randomly fails when tests are run in parallel Message-ID: <1544524658.21.0.788709270274.issue35458@psf.upfronthosting.co.za>

New submission from STINNER Victor : Extract of the test:

    usage = shutil.disk_usage(os.path.dirname(__file__))
    self.assertEqual(usage, shutil.disk_usage(__file__))

The test fails if another process creates or removes data on the disk partition. IMHO "self.assertEqual(usage, shutil.disk_usage(__file__))" must be removed; it cannot be reliable without mocking os.statvfs() / nt._getdiskusage(). Even if tests are run sequentially, the test can fail if a program creates a file between the two lines of code; the test cannot be reliable.
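The race can be sketched as follows: each call takes a fresh snapshot of the partition, so the used/free fields can differ between two calls whenever any process writes to the disk in between; only the total capacity is stable.

```python
import shutil

# Two snapshots of the same partition, taken back to back.
u1 = shutil.disk_usage(".")
u2 = shutil.disk_usage(".")
print(u1.total == u2.total)  # True: capacity does not change...
# ...but u1.used == u2.used (and hence u1 == u2) can fail under
# concurrent disk activity, which is exactly the buildbot failure.
```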
https://buildbot.python.org/all/#/builders/85/builds/1882

FAIL: test_disk_usage (test.test_shutil.TestShutil)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/shager/cpython-buildarea/3.x.edelsohn-fedora-ppc64le/build/Lib/test/test_shutil.py", line 1366, in test_disk_usage
    self.assertEqual(usage, shutil.disk_usage(__file__))
AssertionError: usage(total=1925696024576, used=1793793806336, free=131902218240) != usage(total=1925696024576, used=1793793818624, free=131902205952)

---------- components: Tests messages: 331601 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_shutil.test_disk_usage() randomly fails when tests are run in parallel versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 05:54:17 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 11 Dec 2018 10:54:17 +0000 Subject: [New-bugs-announce] [issue35459] Use PyDict_GetItemWithError() with PyDict_GetItem() Message-ID: <1544525657.6.0.788709270274.issue35459@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka : There is an issue with using PyDict_GetItem(). Since it silences all exceptions, it can return an incorrect result when an exception like MemoryError or KeyboardInterrupt was raised in the user's __hash__() or __eq__(). In addition, PyDict_GetItemString() and _PyDict_GetItemId() swallow a MemoryError raised when they fail to allocate a temporary string object. Finally, PyDict_GetItemWithError() is a tiny bit faster than PyDict_GetItem(), because it avoids checking the exception state in the successful case. The proposed PR replaces most calls of PyDict_GetItem(), PyDict_GetItemString() and _PyDict_GetItemId() with calls of PyDict_GetItemWithError(), _PyDict_GetItemStringWithError() and _PyDict_GetItemIdWithError().
---------- components: Extension Modules, Interpreter Core messages: 331604 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Use PyDict_GetItemWithError() with PyDict_GetItem() versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 06:00:24 2018 From: report at bugs.python.org (Ronald Oussoren) Date: Tue, 11 Dec 2018 11:00:24 +0000 Subject: [New-bugs-announce] [issue35460] Add PyDict_GetItemStringWithError Message-ID: <1544526024.8.0.788709270274.issue35460@psf.upfronthosting.co.za>

New submission from Ronald Oussoren : PyDict_GetItemWithError is a variant of PyDict_GetItem that doesn't swallow unrelated exceptions. While converting a project to use this API I noticed that there is no similar public variant of PyDict_GetItemString. It would be nice to have PyDict_GetItemStringWithError as a public API to make it easier to convert existing code to the better API.

---------- components: Interpreter Core messages: 331607 nosy: ronaldoussoren priority: normal severity: normal status: open title: Add PyDict_GetItemStringWithError type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 08:27:28 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 11 Dec 2018 13:27:28 +0000 Subject: [New-bugs-announce] [issue35461] Document C API functions which swallow exceptions Message-ID: <1544534848.15.0.788709270274.issue35461@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka : C API functions like PyDict_GetItem() and PyObject_HasAttr() suppress all errors that may occur, including MemoryError and KeyboardInterrupt. They can return an incorrect result when the memory is exhausted or the user presses Ctrl-C.
The proposed PR documents such functions and suggests the C API functions which do not swallow unrelated exceptions. A previous attempt to document this (for PyDict_GetItem() only) was in issue20615.

---------- assignee: docs at python components: Documentation messages: 331620 nosy: docs at python, rhettinger, ronaldoussoren, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Document C API functions which swallow exceptions type: enhancement versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 08:56:02 2018 From: report at bugs.python.org (STINNER Victor) Date: Tue, 11 Dec 2018 13:56:02 +0000 Subject: [New-bugs-announce] [issue35462] test_imaplib.test_enable_UTF8_True_append() failed on AMD64 FreeBSD 10-STABLE Non-Debug 3.7 Message-ID: <1544536562.99.0.788709270274.issue35462@psf.upfronthosting.co.za>

New submission from STINNER Victor : AMD64 FreeBSD 10-STABLE Non-Debug 3.7: https://buildbot.python.org/all/#/builders/170/builds/200

Note: this buildbot is *very* slow.

test_enable_UTF8_True_append (test.test_imaplib.NewIMAPTests) ...
SENT: b'* OK IMAP4rev1'
GOT: b'PJHE0 CAPABILITY'
SENT: b'* CAPABILITY IMAP4rev1 ENABLE UTF8=ACCEPT'
SENT: b'PJHE0 OK CAPABILITY completed'
ERROR (...)
ERROR: test_enable_UTF8_True_append (test.test_imaplib.NewIMAPTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/test/test_imaplib.py", line 297, in test_enable_UTF8_True_append code, _ = client.authenticate('MYAUTH', lambda x: b'fake') File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/imaplib.py", line 428, in authenticate typ, dat = self._simple_command('AUTHENTICATE', mech) File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/imaplib.py", line 1196, in _simple_command return self._command_complete(name, self._command(name, *args)) File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/imaplib.py", line 989, in _command while self._get_response(): File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/imaplib.py", line 1047, in _get_response resp = self._get_line() File "/usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/imaplib.py", line 1151, in _get_line raise self.abort('socket error: EOF') imaplib.IMAP4.abort: socket error: EOF See also bpo-20118 ("test_imaplib test_linetoolong fails on 2.7 in SSL test on some buildbots"). 
---------- components: Tests messages: 331626 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_imaplib.test_enable_UTF8_True_append() failed on AMD64 FreeBSD 10-STABLE Non-Debug 3.7 versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 09:47:32 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Tue, 11 Dec 2018 14:47:32 +0000 Subject: [New-bugs-announce] [issue35463] mock uses incorrect signature for partial and partialmethod with autospec Message-ID: <1544539652.54.0.788709270274.issue35463@psf.upfronthosting.co.za>

New submission from Karthikeyan Singaravelan : This is a bug report for https://bugs.python.org/issue17185#msg331149 that I was asked to raise as a separate issue.

1. When we call create_autospec it calls _get_signature_object that gets the signature for the given parameter. With functools.partial it returns a partial object, and hence while getting the signature it returns the signature for the constructor of partial instead of the underlying function passed to functools.partial. I think a check needs to be added to make sure not to use func.__init__ when it's a partial object.

2. When we call create_autospec on a class that has a partialmethod, the self parameter is not skipped in the signature, and thus it creates a signature with self, causing an error. The fix would be to handle partialmethod also in _must_skip, which determines whether to skip self or not.

Sample reproducer:

    from functools import partial, partialmethod
    from unittest.mock import create_autospec
    import inspect

    def foo(a, b):
        pass

    p = partial(foo, 1)
    m = create_autospec(p)
    m(1, 2, 3)  # passes since the signature is set to (*args, **kwargs), the signature of the functools.partial constructor.
This should throw TypeError under autospec.

    class A:

        def f(self, a, b):
            print(a, b)

        g = partialmethod(f, 1)

    m = create_autospec(A)
    m().g(1, 2)  # passes since the signature is set as (self, b) and self is not skipped in _must_skip, thus self=1, b=2.

This should throw TypeError under autospec since the valid call is m().g(2).

---------- components: Library (Lib) messages: 331631 nosy: cjw296, mariocj89, michael.foord, pablogsal, xtreak priority: normal severity: normal status: open title: mock uses incorrect signature for partial and partialmethod with autospec type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 09:52:03 2018 From: report at bugs.python.org (Or) Date: Tue, 11 Dec 2018 14:52:03 +0000 Subject: [New-bugs-announce] [issue35464] json.dumps very unclear exception Message-ID: <1544539923.61.0.788709270274.issue35464@psf.upfronthosting.co.za>

New submission from Or : When dumping a value coming from numpy.random.choice([True,False]) the exception raised is very unclear.

    json.dumps(result)
  File "/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")

which prints "True is not JSON serializable" - but it should actually print " is not JSON serializable".
---------- components: Library (Lib) messages: 331632 nosy: orshemy priority: normal severity: normal status: open title: json.dumps very unclear exception type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 14:21:33 2018 From: report at bugs.python.org (=?utf-8?b?SHJ2b2plIE5pa8WhacSH?=) Date: Tue, 11 Dec 2018 19:21:33 +0000 Subject: [New-bugs-announce] [issue35465] Document add_signal_handler Message-ID: <1544556093.28.0.788709270274.issue35465@psf.upfronthosting.co.za>

New submission from Hrvoje Nikšić : In https://stackoverflow.com/q/53704709/1600898 a StackOverflow user asked how the add_signal_handler event loop method differs from the signal.signal normally used by Python code. The add_signal_handler documentation is quite brief - if we exclude the parts that explain the exceptions raised and how to pass keyword arguments to the callback, the meat is this sentence:

    Set callback as the handler for the signum signal.

It is only after looking at the source, and understanding asyncio, that one comes to the conclusion that the idea is to run the handler along with other event loop callbacks and coroutines, at the time when it is actually safe to invoke asyncio code. I think this deserves to be mentioned explicitly, for example:

    Set callback as the handler for the signum signal. The callback will be
    invoked in the thread that runs the event loop, along with other queued
    callbacks and runnable coroutines. Unlike signal handlers registered
    using signal.signal(), a callback registered with this function is
    allowed to interact with the event loop.
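The behavior the proposed wording describes can be seen in a short sketch (Unix-only; add_signal_handler is not available on the Windows event loops). The handler runs inside the event loop, so touching asyncio objects from it is safe:

```python
import asyncio
import os
import signal

async def main():
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()
    # stop.set() touches asyncio state; with a plain signal.signal()
    # handler this would not be safe, but add_signal_handler() defers
    # the callback to the loop itself.
    loop.add_signal_handler(signal.SIGUSR1, stop.set)
    loop.call_soon(os.kill, os.getpid(), signal.SIGUSR1)
    await stop.wait()
    print("signal handled inside the event loop")

asyncio.run(main())
```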
---------- assignee: docs at python components: Documentation messages: 331645 nosy: docs at python, hniksic priority: normal severity: normal status: open title: Document add_signal_handler versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 17:10:27 2018 From: report at bugs.python.org (Eric Snow) Date: Tue, 11 Dec 2018 22:10:27 +0000 Subject: [New-bugs-announce] [issue35466] Use a linked list for the ceval pending calls. Message-ID: <1544566227.12.0.788709270274.issue35466@psf.upfronthosting.co.za>

New submission from Eric Snow : Currently the list of pending calls (see Include/internal/pycore_ceval.h) is implemented as a circular buffer. A linked list would be easier to understand and modify. It also allows for removing the restriction on the number of pending calls.

---------- assignee: eric.snow components: Interpreter Core messages: 331655 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Use a linked list for the ceval pending calls. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 11 21:32:04 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 12 Dec 2018 02:32:04 +0000 Subject: [New-bugs-announce] [issue35467] IDLE: unrequested pasting into Shell after restart Message-ID: <1544581924.2.0.788709270274.issue35467@psf.upfronthosting.co.za>

New submission from Terry J. Reedy : IDLE very occasionally (frequency much less than .01), and AFAIK haphazardly, pastes previous shell output after I enter something at the prompt after a restart. Not fatal but definitely annoying. When it happened today, I decided to open this issue to start accumulating information that might point at where to start.
tem3.py (content likely not relevant):

    import inspect
    class A: pass
    print(inspect.getsource(A))
    print(__name__)

Shell copy:

"""
...
OSError: could not find class definition
>>>
======================== RESTART: F:\Python\a\tem3.py ========================
class A: pass
__main__
>>> 1/0======================== RESTART: F:\Python\a\tem3.py ========================
class A: pass
SyntaxError: invalid syntax
>>> 1/0
Traceback (most recent call last):
...
"""

The paste, after '1/0', is the restart line and the first two lines of output (but not the last two). It mixes text from IDLE and from the program, so it is not an echo from the run process. It is colored as if typed in: 'class' and 'pass' are keyword colored. Then I believe I hit ENTER and got the paste instead of the exception. I hit Enter after the paste to get the SyntaxError and a clean prompt. Then I reentered 1/0. I did more or less the same thing about 5 times without a repeat of the problem.

Possible factors: exception before restart (probably not relevant); restart, prompt, and entry (I believe these are essential elements); running a file (I seldom restart otherwise); hitting return.

Included content: restart line (I am pretty sure pasted text does not always include this); output from before the restart (ever?); output from after the restart (if always, must have run a file).

---

Raymond, I believe you have seen this on Mac. Tal or Sheryl, how about linux? Anyone, more details on other examples are needed to know what is constant and what is incidental.
---------- messages: 331668 nosy: cheryl.sabella, rhettinger, taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: unrequested pasting into Shell after restart type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 01:23:26 2018 From: report at bugs.python.org (Matthias Klose) Date: Wed, 12 Dec 2018 06:23:26 +0000 Subject: [New-bugs-announce] [issue35468] [3.6/3.7] idlelib/help.html mentions 3.8alpha0 docs Message-ID: <1544595806.56.0.788709270274.issue35468@psf.upfronthosting.co.za>

New submission from Matthias Klose : [3.6/3.7] idlelib/help.html mentions 3.8alpha0 docs: seen in the 3.6.8 and 3.7.2 release candidates.

---------- assignee: terry.reedy components: IDLE keywords: 3.6regression, 3.7regression messages: 331671 nosy: doko, ned.deily, terry.reedy priority: normal severity: normal status: open title: [3.6/3.7] idlelib/help.html mentions 3.8alpha0 docs versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 05:43:18 2018 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Dec 2018 10:43:18 +0000 Subject: [New-bugs-announce] [issue35469] [2.7] time.asctime() regression Message-ID: <1544611398.55.0.788709270274.issue35469@psf.upfronthosting.co.za>

New submission from STINNER Victor : It seems like bpo-31339 introduced a regression with commit eeadf5fc231163ec97a8010754d9c995c7c14876 to fix a security issue. Copy of bencordova's comment from GitHub: https://github.com/python/cpython/pull/3293#issuecomment-446378058

I'm new at commenting on this project so apologies if this is not the appropriate place to do so.
From what I can see (upgrading from python 2.7.13->2.7.15), the string format on line 648 of Modules/timemodule.c causes a different output from time.ctime() and time.asctime(t) for "days of the month < 10":

    "%s %s%3d %.2d:%.2d:%.2d %d"

The "%3d" in this change removes the leading zero on the tm_mday but maintains a leading space. Just looking for feedback on what the intention was and if this is a bug.

python 2.7.13:

    >>> import time
    >>> t = time.strptime("6 Dec 18", "%d %b %y")
    >>> time.asctime(t)
    'Thu Dec 06 00:00:00 2018'

python 2.7.15:

    >>> import time
    >>> t = time.strptime("6 Dec 18", "%d %b %y")
    >>> time.asctime(t)
    'Thu Dec  6 00:00:00 2018'

Note, the string with this change includes two spaces between "Dec" and "6" which also looks awkward.

Original Post: https://github.com/python/cpython/commit/eeadf5fc231163ec97a8010754d9c995c7c14876#r31642310

---------- components: Library (Lib) messages: 331687 nosy: vstinner priority: normal severity: normal status: open title: [2.7] time.asctime() regression versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 07:42:37 2018 From: report at bugs.python.org (Zackery Spytz) Date: Wed, 12 Dec 2018 12:42:37 +0000 Subject: [New-bugs-announce] [issue35470] A deadly decref in _PyImport_FindExtensionObjectEx() Message-ID: <1544618557.73.0.788709270274.issue35470@psf.upfronthosting.co.za>

New submission from Zackery Spytz : In _PyImport_FindExtensionObjectEx(), "mod" shouldn't be decrefed if _PyState_AddModule() fails.
---------- components: Interpreter Core messages: 331693 nosy: ZackerySpytz priority: normal severity: normal status: open title: A deadly decref in _PyImport_FindExtensionObjectEx() versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 10:22:57 2018 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Dec 2018 15:22:57 +0000 Subject: [New-bugs-announce] [issue35471] Remove macpath module Message-ID: <1544628177.37.0.788709270274.issue35471@psf.upfronthosting.co.za>

New submission from STINNER Victor : The module 'macpath' has been deprecated in Python 3.7 by bpo-9850 and scheduled for removal in Python 3.8. The attached PR removes the module.

---------- components: Library (Lib) messages: 331699 nosy: vstinner priority: normal severity: normal status: open title: Remove macpath module versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 11:30:41 2018 From: report at bugs.python.org (Matthias Klose) Date: Wed, 12 Dec 2018 16:30:41 +0000 Subject: [New-bugs-announce] [issue35472] python 3.7.2 rc1 bumped the build requirements for no reason Message-ID: <1544632241.86.0.788709270274.issue35472@psf.upfronthosting.co.za>

New submission from Matthias Klose : python 3.7.2 rc1 bumped the build requirements apparently for no reason, now requiring sphinx 1.7 instead of 1.6.x as before. This is a major pain if you want to provide the build including the documentation on a stable release. Pretty please can we avoid such version bumps on the branches? Plus, the documentation seems to build fine with 1.6.7.
---------- assignee: docs at python components: Documentation keywords: 3.7regression messages: 331705 nosy: docs at python, doko, ned.deily priority: release blocker severity: normal status: open title: python 3.7.2 rc1 bumped the build requirements for no reason versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 12:50:27 2018 From: report at bugs.python.org (jamie schnaitter) Date: Wed, 12 Dec 2018 17:50:27 +0000 Subject: [New-bugs-announce] [issue35473] Intel compiler (icc) does not fully support C11 Features, including atomics Message-ID: <1544637027.92.0.788709270274.issue35473@psf.upfronthosting.co.za>

New submission from jamie schnaitter : I am currently trying to build 3.6.7 and 3.7.1 using Intel 2019 and it is failing because Intel's implementation of C11, in particular stdatomic, is incomplete. I receive many errors similar to the following, when it cannot find 'atomic_uintptr_t', which is not included in Intel's implementation:

```
In file included from ./Include/Python.h(56),
                 from ./Modules/_io/bufferedio.c(11):
./Include/pyatomic.h(33): error: identifier "atomic_uintptr_t" is undefined
      atomic_uintptr_t _value;
```

The current check in configure.ac is insufficient, as it only checks to see that the header and library exist and that it contains 'atomic_int'. The configure.ac should be changed to either check for all the atomic types it uses (or at least atomic_uintptr_t) or, when `--with-icc` is enabled, it should set 'HAVE_STD_ATOMIC' to 0/false.
---------- components: Build, Installation, Library (Lib), ctypes messages: 331713 nosy: jamie schnaitter priority: normal severity: normal status: open title: Intel compiler (icc) does not fully support C11 Features, including atomics type: compile error versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 12 14:06:57 2018 From: report at bugs.python.org (Ryan McCampbell) Date: Wed, 12 Dec 2018 19:06:57 +0000 Subject: [New-bugs-announce] [issue35474] mimetypes.guess_all_extensions potentially mutates list Message-ID: <1544641617.77.0.788709270274.issue35474@psf.upfronthosting.co.za>

New submission from Ryan McCampbell : The mimetypes.guess_all_extensions function is defined as:

    def guess_all_extensions(self, type, strict=True):
        type = type.lower()
        extensions = self.types_map_inv[True].get(type, [])
        if not strict:
            for ext in self.types_map_inv[False].get(type, []):
                if ext not in extensions:
                    extensions.append(ext)
        return extensions

If any mime type exists in both the strict and non-strict types_map_inv and it is called with strict=False, then it will modify the strict list in-place, which affects future calls even with strict=True. While this doesn't manifest as an error for me because the dictionaries are non-overlapping, it is a potential error; it is also vulnerable to people accidentally modifying the returned list. The list should be copied after the first lookup.
---------- components: Library (Lib) messages: 331715 nosy: rmccampbell7 priority: normal severity: normal status: open title: mimetypes.guess_all_extensions potentially mutates list type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 14:11:41 2018 From: report at bugs.python.org (Eric Snow) Date: Wed, 12 Dec 2018 19:11:41 +0000 Subject: [New-bugs-announce] [issue35475] Docs do not show PyImport_AddModuleObject() returns a borrowed reference. Message-ID: <1544641901.4.0.788709270274.issue35475@psf.upfronthosting.co.za> New submission from Eric Snow : In the C-API documentation the entry for PyImport_AddModuleObject[1] does not indicate that it returns a borrowed reference. [1] https://docs.python.org/3/c-api/import.html#c.PyImport_AddModuleObject ---------- assignee: docs at python components: Documentation keywords: easy messages: 331716 nosy: docs at python, eric.snow priority: normal severity: normal stage: needs patch status: open title: Docs do not show PyImport_AddModuleObject() returns a borrowed reference. type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 14:21:19 2018 From: report at bugs.python.org (Eric Snow) Date: Wed, 12 Dec 2018 19:21:19 +0000 Subject: [New-bugs-announce] [issue35476] _imp_create_dynamic_impl() does not clear error. Message-ID: <1544642479.77.0.788709270274.issue35476@psf.upfronthosting.co.za> New submission from Eric Snow : In _imp_create_dynamic_impl() [1] the case where _PyImport_FindExtensionObject() returns NULL may leave an error set. Either the error should be raised (like _imp_create_builtin() does) or it should be cleared (via PyErr_Clear()). 
---------- components: Interpreter Core messages: 331717 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: _imp_create_dynamic_impl() does not clear error. type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 18:08:12 2018 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Dec 2018 23:08:12 +0000 Subject: [New-bugs-announce] [issue35477] multiprocessing.Pool.__enter__() should raise an exception if called twice Message-ID: <1544656092.37.0.788709270274.issue35477@psf.upfronthosting.co.za> New submission from STINNER Victor :

On a file, "with file:" fails if it's used a second time:

---
fp = open('/etc/issue')
with fp:
    print("first")
with fp:
    print("second")
---

fails with "ValueError: I/O operation on closed file", because file.__enter__() raises this exception if the file is closed. I propose to have the same behavior on multiprocessing.Pool.__enter__() to detect when the multiprocessing API is misused. Anyway, after the first "with pool:" block, the pool becomes unusable to schedule new tasks: apply() raises ValueError("Pool not running") in that case, for example.
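The proposed guard can be sketched on a simplified stand-in class (FakePool is hypothetical; the real Pool tracks its state differently):

```python
# Simplified stand-in for multiprocessing.Pool illustrating the proposal:
# __enter__() refuses to run once the pool has left the RUN state.
class FakePool:
    def __init__(self):
        self._state = "RUN"

    def __enter__(self):
        if self._state != "RUN":
            raise ValueError("Pool not running")
        return self

    def __exit__(self, *exc_info):
        self._state = "CLOSE"

pool = FakePool()
with pool:
    pass

try:
    with pool:  # second use: should fail, like a closed file does
        pass
except ValueError as exc:
    print(exc)  # Pool not running
```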
---------- components: Library (Lib) messages: 331719 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing.Pool.__enter__() should raise an exception if called twice versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 19:30:51 2018 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Dec 2018 00:30:51 +0000 Subject: [New-bugs-announce] [issue35478] multiprocessing: ApplyResult.get() hangs if the pool is terminated Message-ID: <1544661051.76.0.788709270274.issue35478@psf.upfronthosting.co.za> New submission from STINNER Victor :

The following code hangs:

---
import multiprocessing, time

pool = multiprocessing.Pool(1)
result = pool.apply_async(time.sleep, (1.0,))
pool.terminate()
result.get()
---

pool.terminate() terminates workers before time.sleep(1.0) completes, but the pool doesn't mark result as completed with an error. Would it be possible to mark all pending tasks as failed? For example, "raise" a RuntimeError("pool terminated before task completed"). ---------- components: Library (Lib) messages: 331724 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: multiprocessing: ApplyResult.get() hangs if the pool is terminated versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 19:36:54 2018 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Dec 2018 00:36:54 +0000 Subject: [New-bugs-announce] [issue35479] multiprocessing.Pool.join() always takes at least 100 ms Message-ID: <1544661414.71.0.788709270274.issue35479@psf.upfronthosting.co.za> New submission from STINNER Victor : The join() method of multiprocessing.Pool calls self._worker_handler.join(): it's a thread running _handle_workers().
The core of this thread function is:

    while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
        pool._maintain_pool()
        time.sleep(0.1)

I understand that the delay of 100 ms is used to check regularly whether the stop condition has changed. This sleep causes a mandatory delay of 100 ms on Pool.join(). ---------- components: Library (Lib) messages: 331726 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing.Pool.join() always takes at least 100 ms versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 12 20:44:37 2018 From: report at bugs.python.org (Victor Porton) Date: Thu, 13 Dec 2018 01:44:37 +0000 Subject: [New-bugs-announce] [issue35480] argparse: add a full fledged parser as a subparser Message-ID: <1544665477.58.0.788709270274.issue35480@psf.upfronthosting.co.za> New submission from Victor Porton :

Subparsers are added like:

    subparsers.add_parser('checkout', aliases=['co'])

But I want to use a parser BOTH as a subparser and as a full-fledged parser. It is because my program should understand both of the following command line options:

    boiler chain -t http://www.w3.org/1999/xhtml -W inverseofsum

and

    boiler pipe 'chain -t http://www.w3.org/1999/xhtml -W inverseofsum + transformation http://example.com/ns1'

I split it (at +) into several lists of arguments as explained in https://stackoverflow.com/a/53750697/856090 So I need `chain` both as a subparser and as a standalone parser of `-t http://www.w3.org/1999/xhtml -W inverseofsum`. So, the feature which I want:

    subparsers.add_parser('checkout', aliases=['co'], parser=...)

where ... is a reference to a parser object.
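For what it's worth, argparse's existing parents= mechanism may approximate this: the same argument definitions can back both a standalone parser and a subparser. A sketch, reusing the option names from the report:

```python
import argparse

# Shared definitions; add_help=False so they can be embedded via parents=.
shared = argparse.ArgumentParser(add_help=False)
shared.add_argument('-t')
shared.add_argument('-W')

# Standalone parser reusing the shared definitions.
standalone = argparse.ArgumentParser(parents=[shared])

# The same definitions backing a 'chain' subcommand.
top = argparse.ArgumentParser()
subparsers = top.add_subparsers(dest='cmd')
subparsers.add_parser('chain', parents=[shared])

print(standalone.parse_args(['-t', 'http://www.w3.org/1999/xhtml']))
print(top.parse_args(['chain', '-W', 'inverseofsum']))
```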
---------- components: Library (Lib) messages: 331730 nosy: porton priority: normal severity: normal status: open title: argparse: add a full fledged parser as a subparser type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 03:54:45 2018 From: report at bugs.python.org (youngchaos) Date: Thu, 13 Dec 2018 08:54:45 +0000 Subject: [New-bugs-announce] [issue35481] Run Tasks cannot Concurrent Message-ID: <1544691285.22.0.788709270274.issue35481@psf.upfronthosting.co.za> Change by youngchaos : ---------- components: asyncio nosy: asvetlov, youngchaos, yselivanov priority: normal severity: normal status: open title: Run Tasks cannot Concurrent type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 09:34:02 2018 From: report at bugs.python.org (Ma Lin) Date: Thu, 13 Dec 2018 14:34:02 +0000 Subject: [New-bugs-announce] [issue35482] python372rc1.chm is ill Message-ID: <1544711642.5.0.788709270274.issue35482@psf.upfronthosting.co.za> New submission from Ma Lin : python372rc1.chm can't be opened, it seems the compiling is not successful. 
Compared to python371.chm, the file size is reduced a lot:

    python371.chm     8,534,435 bytes
    python372rc1.chm  5,766,102 bytes

Some files in the chm are missing, see attached pictures: 371_chm_files.png 372rc1_chm_files.png After switching "Display compile progress" to Yes in the .hhp file, the output is abnormal, see attached pictures: 371_compile_progress.png 372rc1_compile_progress.png ---------- assignee: docs at python components: Documentation files: 371_chm_files.png messages: 331761 nosy: Ma Lin, docs at python priority: normal severity: normal status: open title: python372rc1.chm is ill versions: Python 3.7 Added file: https://bugs.python.org/file47988/371_chm_files.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 10:22:39 2018 From: report at bugs.python.org (Michael Brandl) Date: Thu, 13 Dec 2018 15:22:39 +0000 Subject: [New-bugs-announce] [issue35483] tarfile.extractall on existing symlink in Ubuntu overwrites target file, not symlink, unlike GNU tar Message-ID: <1544714559.52.0.788709270274.issue35483@psf.upfronthosting.co.za> New submission from Michael Brandl :

In Ubuntu 16.04, with Python 3.5, as well as custom-built 3.6 and 3.7.1: Given a file foo.txt (with content "foo") and a symlink myLink to it, packed in a tar, and a file bar.txt (with content "bar") with a symlink myLink to it, packed in another tar, unpacking the two tars into the same folder (first foo.tar, then bar.tar) leads to the following behavior. In GNU tar, the directory will contain:

    foo.txt (content "foo")
    bar.txt (content "bar")
    myLink -> bar.txt

Using Python's tarfile however, the result of calling tarfile.extractall on the two tars will give:

    foo.txt (content "bar")
    bar.txt (content "bar")
    myLink -> foo.txt

Repro:

1. Unpack the attached symLinkBugRepro.tar.gz into a new folder
2. run > bash repoSymlink.bash (does exactly what is described above)
3.
if the last two lines of the output are "bar" and "bar" (instead of "foo" and "bar"), then the content of foo.txt has been overwritten.

Note that this is related to issues like:

    https://bugs.python.org/issue23228
    https://bugs.python.org/issue1167128
    https://bugs.python.org/issue19974
    https://bugs.python.org/issue10761

None of these issues target the issue at hand, however. The problem lies in line 2201 of https://github.com/python/cpython/blob/master/Lib/tarfile.py: the assumption is that any exception only comes from the OS not supporting symlinks. But here, the exception comes from the symlink already existing, which should be caught separately. The correct behavior is then NOT to extract the member, but rather to overwrite the symlink (as GNU tar does). ---------- components: Library (Lib) files: symLinkBugRepro.tar.gz messages: 331762 nosy: michael.brandl at aid-driving.eu priority: normal severity: normal status: open title: tarfile.extractall on existing symlink in Ubuntu overwrites target file, not symlink, unlike GNU tar type: behavior versions: Python 3.5, Python 3.6, Python 3.7 Added file: https://bugs.python.org/file47992/symLinkBugRepro.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 10:38:08 2018 From: report at bugs.python.org (Jakub Kulik) Date: Thu, 13 Dec 2018 15:38:08 +0000 Subject: [New-bugs-announce] [issue35484] Segmentation fault due to faulthandler on Solaris Message-ID: <1544715488.9.0.788709270274.issue35484@psf.upfronthosting.co.za> New submission from Jakub Kulik : When running tests on Solaris (amd64) I noticed that one test in test_faulthandler.py fails with a segmentation fault. I have attached a program reproducing this issue and its core stack trace. The program runs to completion (*Ending* is printed as well) and segfaults somewhere in sys.exit() during deallocation.
The problem doesn't appear when the chain variable is set to False or if faulthandler is unregistered. Also, this bug appears only with --enable-optimizations on, but probably it just doesn't manifest itself in a non-optimized build (or on sparc). I am not sure how to diagnose it further, so I am at least reporting this issue. ---------- components: Extension Modules, Interpreter Core files: crash_stack.txt messages: 331763 nosy: kulikjak priority: normal severity: normal status: open title: Segmentation fault due to faulthandler on Solaris type: crash versions: Python 3.7 Added file: https://bugs.python.org/file47993/crash_stack.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 15:22:39 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 13 Dec 2018 20:22:39 +0000 Subject: [New-bugs-announce] [issue35485] Mac: tkinter windows turn black while resized Message-ID: <1544732559.26.0.788709270274.issue35485@psf.upfronthosting.co.za> New submission from Terry J. Reedy :

I updated to Mojave and did *not* switch to the new dark theme. I installed 64-bit 3.7.2rc1. Start IDLE.

    import tkinter
    root = tkinter.Tk()

A black-on-white Tk window appears. Move the mouse until the resize arrow appears. Left click. Move the mouse. The window turns uniformly black. Ugh. Cannot see the contents that I am trying to resize. Release the button. The background returns to white. The same thing happens with a Text widget added, and hence with IDLE windows and such dialogs as can be resized.
---------- components: Tkinter, macOS messages: 331769 nosy: ned.deily, ronaldoussoren, terry.reedy, wordtech priority: release blocker severity: normal stage: needs patch status: open title: Mac: tkinter windows turn black while resized type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 18:14:15 2018 From: report at bugs.python.org (David Wilson) Date: Thu, 13 Dec 2018 23:14:15 +0000 Subject: [New-bugs-announce] [issue35486] subprocess module breaks backwards compatibility with older import hooks Message-ID: <1544742855.44.0.788709270274.issue35486@psf.upfronthosting.co.za> New submission from David Wilson :

The subprocess package since 880d42a3b24 / September 2018 has begun using this idiom:

    try:
        import _foo
    except ModuleNotFoundError:
        bar

However, ModuleNotFoundError may not be thrown by older import hook implementations, since that subclass was only introduced in Python 3.6 -- and so the test above will fail. PEP-302 continues to document ImportError as the appropriate exception that should be raised. https://mitogen.readthedocs.io/en/stable/ is one such import hook that lazily loads packages over the network when they aren't available locally. Current Python subprocess master breaks with Mitogen because when it discovers the master cannot satisfy the import, it throws ImportError. The new exception subtype was introduced in https://bugs.python.org/issue15767 , however very little in the way of rationale was included, and so it's unclear to me what need the new subtype is addressing, whether this is a problem with the subprocess module or the subtype as a whole, or indeed whether any of this should be considered a bug.
It seems clear that some kind of regression is in the process of occurring during a minor release, and it also seems clear the new subtype will potentially spawn a whole variety of similar new regressions. I will be updating Mitogen to throw the new subtype if it is available, but I felt it was important to report the regression to see what others think. ---------- components: Library (Lib) messages: 331774 nosy: dw priority: normal severity: normal status: open title: subprocess module breaks backwards compatibility with older import hooks versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 13 22:43:32 2018 From: report at bugs.python.org (F. Eugene Aumson) Date: Fri, 14 Dec 2018 03:43:32 +0000 Subject: [New-bugs-announce] [issue35487] setup()'s package_data not following directory symlink Message-ID: <1544759012.02.0.788709270274.issue35487@psf.upfronthosting.co.za> New submission from F. Eugene Aumson : I have a package using setup.py. I want the package to include some data files, which are located in my source directory. Building my package in my 3.7.0 environment includes the data files as expected. Building the same exact package in my 3.7.1 environment does NOT include the data files. Attached are two logs demonstrating the `pip install` output. Both were produced with this command: `pip uninstall 0x-json-schemas --yes >pip.log && pip install .[dev] --verbose --verbose --verbose >> pip.log` Also attached is my setup.py script. Also worth noting is that the directory that contains my data files (src/zero_ex/json_schemas/schemas) is a symlink, which I've verified is resolving properly in both environments. And, when I replace the symlink with a real folder, containing the same files, then everything works as expected. So I surmise that the following of symlinks is what's broken here. 
---------- components: Distutils files: package_data_symlink_bug.zip messages: 331778 nosy: F. Eugene Aumson, dstufft, eric.araujo priority: normal severity: normal status: open title: setup()'s package_data not following directory symlink versions: Python 3.7 Added file: https://bugs.python.org/file47995/package_data_symlink_bug.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 01:40:23 2018 From: report at bugs.python.org (anthony shaw) Date: Fri, 14 Dec 2018 06:40:23 +0000 Subject: [New-bugs-announce] [issue35488] pathlib Path.match does not behave as described Message-ID: <1544769623.98.0.788709270274.issue35488@psf.upfronthosting.co.za> New submission from anthony shaw : The documentation for pathlib PurePath.match(needle) says it accepts a "glob-style pattern". For an absolute path with a recursive pattern (**) it doesn't match correctly for more than 1 directory level. All of the assertions in the attached file should pass. The issue I've seen is in the attached file. I'm using Python 3.7.1 and have also tested this against Python 3.6.6 with the pathlib module on PyPI.
Absolute path glob'ing with a recursive pattern works as expected:

    entries = pathlib.Path('/var').glob('/var/**/*.log')

Once this issue is confirmed, I would be happy to test & contribute a fix. ---------- components: Library (Lib) files: test_pathlib.py messages: 331782 nosy: anthony shaw priority: normal severity: normal status: open title: pathlib Path.match does not behave as described versions: Python 3.7 Added file: https://bugs.python.org/file47996/test_pathlib.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 03:53:53 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 14 Dec 2018 08:53:53 +0000 Subject: [New-bugs-announce] [issue35489] Argument Clinic should use "const Py_UNICODE *" for the Py_UNICODE converter Message-ID: <1544777633.51.0.788709270274.issue35489@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Format units "Z" and "Z#" in PyArg_Parse* return a pointer to immutable internal data. Therefore the result of the Py_UNICODE converter in Argument Clinic should have type "const Py_UNICODE *". This would help to catch the bug reported in issue31446.
---------- assignee: serhiy.storchaka components: Argument Clinic messages: 331787 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Argument Clinic should use "const Py_UNICODE *" for the Py_UNICODE converter type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 04:39:36 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 14 Dec 2018 09:39:36 +0000 Subject: [New-bugs-announce] [issue35490] Remove the DecodeFSDefault return converter in Argument Clinic Message-ID: <1544780376.65.0.788709270274.issue35490@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The DecodeFSDefault return converter is used only in one function, os.ttyname(). It could also be used in os.ctermid(), but in any case there are too few use cases for it, because it is very uncommon to return a bare pointer to a static C string. Since it is so uncommon and using this return converter does not add much to readability, I think that it is better to remove it. ---------- components: Argument Clinic messages: 331790 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Remove the DecodeFSDefault return converter in Argument Clinic _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 06:10:02 2018 From: report at bugs.python.org (STINNER Victor) Date: Fri, 14 Dec 2018 11:10:02 +0000 Subject: [New-bugs-announce] [issue35491] multiprocessing: enhance repr() Message-ID: <1544785802.31.0.788709270274.issue35491@psf.upfronthosting.co.za> New submission from STINNER Victor : multiprocessing.Pool has no __repr__() method, multiprocessing.BaseProcess.__repr__() doesn't contain the pid. I propose to enhance repr() in the multiprocessing module to ease debug.
    commit 2b417fba25f036c2d6139875e389d80e4286ad75 (HEAD -> master, upstream/master)
    Author: Victor Stinner
    Date: Fri Dec 14 11:13:18 2018 +0100

        Add multiprocessing.Pool.__repr__() (GH-11137)

        * Add multiprocessing.Pool.__repr__() to ease debug
        * RUN, CLOSE and TERMINATE constants values are now strings rather than integer to ease debug

---------- components: Library (Lib) messages: 331795 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing: enhance repr() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 06:22:35 2018 From: report at bugs.python.org (Jules Lasne) Date: Fri, 14 Dec 2018 11:22:35 +0000 Subject: [New-bugs-announce] [issue35492] Missing colon on func statement in library/sys doc Message-ID: <1544786555.59.0.788709270274.issue35492@psf.upfronthosting.co.za> New submission from Jules Lasne : As you can see here, a `:` is missing for the href to be created. https://docs.python.org/3/library/sys.html#sys.get_coroutine_origin_tracking_depth
It causes latency if workers exit frequently and the pool has to execute a large number of tasks. Worst case:

---
import multiprocessing
import time

CONCURRENCY = 1
NTASK = 100

def noop():
    pass

with multiprocessing.Pool(CONCURRENCY, maxtasksperchild=1) as pool:
    start_time = time.monotonic()
    results = [pool.apply_async(noop, ()) for _ in range(NTASK)]
    for result in results:
        result.get()
    dt = time.monotonic() - start_time
    pool.terminate()
    pool.join()
print("Total: %.1f sec" % dt)
---

Output:

---
Total: 10.2 sec
---

The worst case is a pool of 1 process, where each worker only executes a single task and the task does nothing (to minimize task execution time): the latency is 100 ms per task, which means 10 seconds for 100 tasks. Using the SIGCHLD signal to be notified when a worker completes would make it possible to avoid polling: reduce the latency and reduce CPU usage (the thread doesn't have to be awakened every 100 ms anymore). ---------- components: Library (Lib) messages: 331797 nosy: davin, pablogsal, pitrou, vstinner priority: normal severity: normal status: open title: multiprocessing.Pool._worker_handler(): use SIGCHLD to be notified on worker exit versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 08:34:17 2018 From: report at bugs.python.org (Sebastian Linke) Date: Fri, 14 Dec 2018 13:34:17 +0000 Subject: [New-bugs-announce] [issue35494] Inaccurate error message for f-string Message-ID: <1544794457.33.0.788709270274.issue35494@psf.upfronthosting.co.za> New submission from Sebastian Linke :

    Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:05:16) [MSC v.1915 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> spam = 'spam'
    >>> f'{spam[0}'
      File "", line 1
    SyntaxError: f-string: expecting '}'

The error message seems wrong because a "]" is missing rather than a "}".
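The REPL session above can be reproduced programmatically; the exact SyntaxError wording varies between Python versions (the f-string parser was later rewritten), so this sketch only relies on the exception type:

```python
# Compile the same two lines the report types into the REPL; the broken
# f-string makes compile() raise SyntaxError.
try:
    compile("spam = 'spam'\nf'{spam[0}'", "<example>", "exec")
except SyntaxError as exc:
    print(exc.msg)
```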
---------- components: Interpreter Core messages: 331827 nosy: seblin priority: normal severity: normal status: open title: Inaccurate error message for f-string versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 10:59:40 2018 From: report at bugs.python.org (Ryan Govostes) Date: Fri, 14 Dec 2018 15:59:40 +0000 Subject: [New-bugs-announce] [issue35495] argparse does not honor default argument for nargs=argparse.REMAINDER argument Message-ID: <1544803180.32.0.788709270274.issue35495@psf.upfronthosting.co.za> New submission from Ryan Govostes :

    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('things', nargs=argparse.REMAINDER, default=['nothing'])
    parser.parse_args([])
    >>> Namespace(things=[])

Since there were no unparsed arguments remaining, the `default` setting for `things` should have been honored. However it silently ignores this setting. If there's a reason why this wouldn't be desirable, it should raise an exception that the options aren't compatible. ---------- components: Library (Lib) messages: 331837 nosy: rgov priority: normal severity: normal status: open title: argparse does not honor default argument for nargs=argparse.REMAINDER argument type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 11:06:16 2018 From: report at bugs.python.org (Steve Newcomb) Date: Fri, 14 Dec 2018 16:06:16 +0000 Subject: [New-bugs-announce] [issue35496] left-to-right violation in match order Message-ID: <1544803576.39.0.788709270274.issue35496@psf.upfronthosting.co.za>
See attached script, which is self-explanatory. ---------- files: left-to-right_violation_in_python3_re_match.py messages: 331838 nosy: steve.newcomb priority: normal severity: normal status: open title: left-to-right violation in match order type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47997/left-to-right_violation_in_python3_re_match.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 11:19:12 2018 From: report at bugs.python.org (Manjusaka) Date: Fri, 14 Dec 2018 16:19:12 +0000 Subject: [New-bugs-announce] [issue35497] Libary select docs enhance Message-ID: <1544804352.43.0.788709270274.issue35497@psf.upfronthosting.co.za> New submission from Manjusaka : Since Python 3.7, Python adds a mask variable named EPOLLEXCLUSIVE for select.epoll. The mask variable is supported by the Linux Kernel since Kernel 4.5. So we can add a tip in this part of Python docs to notice the people the case. 
---------- components: Library (Lib) messages: 331840 nosy: Manjusaka priority: normal severity: normal status: open title: Libary select docs enhance type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 11:55:17 2018 From: report at bugs.python.org (Joshua Cannon) Date: Fri, 14 Dec 2018 16:55:17 +0000 Subject: [New-bugs-announce] [issue35498] Parents objects in pathlib.Path don't support slices as __getitem__ arguments Message-ID: <1544806517.4.0.788709270274.issue35498@psf.upfronthosting.co.za> New submission from Joshua Cannon : I would expect the following to work: ``` >>> import pathlib >>> pathlib.Path.cwd().parents[0:1] Traceback (most recent call last): File "", line 1, in File "...\Python36\lib\pathlib.py", line 593, in __getitem__ if idx < 0 or idx >= len(self): TypeError: '<' not supported between instances of 'slice' and 'int' ``` Since pathlib documents `parents` as a sequence-type, and slicing a sequence is pretty standard behavior. 
---------- components: Library (Lib) messages: 331841 nosy: thejcannon priority: normal severity: normal status: open title: Parents objects in pathlib.Path don't support slices as __getitem__ arguments type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 13:25:57 2018 From: report at bugs.python.org (STINNER Victor) Date: Fri, 14 Dec 2018 18:25:57 +0000 Subject: [New-bugs-announce] [issue35499] "make profile-opt" overrides CFLAGS_NODIST Message-ID: <1544811957.28.0.788709270274.issue35499@psf.upfronthosting.co.za> New submission from STINNER Victor : Makefile.pre.in contains the rule: build_all_generate_profile: $(MAKE) @DEF_MAKE_RULE@ CFLAGS_NODIST="$(CFLAGS) $(PGO_PROF_GEN_FLAG)" LDFLAGS="$(LDFLAGS) $(PGO_PROF_GEN_FLAG)" LIBS="$(LIBS)" I'm not sure that CFLAGS_NODIST="$(CFLAGS) $(PGO_PROF_GEN_FLAG)" is correct: it overrides user $CFLAGS_NODIST variable. I suggest to replace it with CFLAGS_NODIST="$(CFLAGS_NODIST) $(PGO_PROF_GEN_FLAG)": add $(PGO_PROF_GEN_FLAG) to CFLAGS_NODIST, don't copy $CFLAGS to $CFLAGS_NODIST (and add $(PGO_PROF_GEN_FLAG)). The code comes from bpo-23390: commit 2f90aa63666308e7a9b2d0a89110e0be445a393a Author: Gregory P. Smith Date: Wed Feb 4 02:11:56 2015 -0800 Fixes issue23390: make profile-opt causes -fprofile-generate and related flags to end up in distutils CFLAGS. (...) build_all_generate_profile: - $(MAKE) all CFLAGS="$(CFLAGS) -fprofile-generate" LIBS="$(LIBS) -lgcov" + $(MAKE) all CFLAGS_NODIST="$(CFLAGS) -fprofile-generate" LDFLAGS="-fprofile-generate" LIBS="$(LIBS) -lgcov" (...) CFLAGS_NODIST has been added by bpo-21121: commit acb8c5234302f8057b331abaafb2cc8697daf58f Author: Benjamin Peterson Date: Sat Aug 9 20:01:49 2014 -0700 add -Werror=declaration-after-statement only to stdlib extension modules (closes #21121) Patch from Stefan Krah. 
This issue is related to bpo-35257: "Avoid leaking linker flags into distutils: add PY_LDFLAGS_NODIST". ---------- components: Build messages: 331847 nosy: vstinner priority: normal severity: normal status: open title: "make profile-opt" overrides CFLAGS_NODIST versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 13:45:46 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 14 Dec 2018 18:45:46 +0000 Subject: [New-bugs-announce] [issue35500] Align expected and actual calls on mock.assert_called_with error message Message-ID: <1544813146.35.0.788709270274.issue35500@psf.upfronthosting.co.za> New submission from Karthikeyan Singaravelan : Currently, assert_called_with has expected calls list in the same line with AssertionError that causes the visualizing the difference to be hard. It will be great if Expected call occurs on the next line so that the diff is improved. The change has to be made at https://github.com/python/cpython/blob/f8e9bd568adf85c1e4aea1dda542a96b027797e2/Lib/unittest/mock.py#L749 . 
from unittest import mock m = mock.Mock() m(1, 2) m.assert_called_with(2, 3) Current output : Traceback (most recent call last): File "/tmp/bar.py", line 5, in m.assert_called_with(2, 3) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/unittest/mock.py", line 820, in assert_called_with raise AssertionError(_error_message()) from cause AssertionError: Expected call: mock(2, 3) Actual call: mock(1, 2) Proposed output : Traceback (most recent call last): File "/tmp/bar.py", line 5, in m.assert_called_with(2, 3) File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/unittest/mock.py", line 827, in assert_called_with raise AssertionError(_error_message()) from cause AssertionError: Expected call: mock(2, 3) Actual call: mock(1, 2) Some more alignment with the call list starting in the same column AssertionError: Expected call: mock(2, 3) Actual call: mock(1, 2) Originally reported in the GitHub repo at https://github.com/testing-cabal/mock/issues/424 . PR for this was closed since GitHub is used only for backporting (https://github.com/testing-cabal/mock/pull/425). I thought to report it here for discussion. Currently call list output is as per proposed output. AssertionError: Calls not found. Expected: [call(1, 2, 3)] Actual: [call(1, 2)]. 
---------- components: Library (Lib) messages: 331849 nosy: cjw296, mariocj89, michael.foord, xtreak priority: normal severity: normal status: open title: Align expected and actual calls on mock.assert_called_with error message type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 13:47:02 2018 From: report at bugs.python.org (STINNER Victor) Date: Fri, 14 Dec 2018 18:47:02 +0000 Subject: [New-bugs-announce] [issue35501] "make coverage" should not leak coverage flags to third party C extensions Message-ID: <1544813222.33.0.788709270274.issue35501@psf.upfronthosting.co.za> New submission from STINNER Victor : "make coverage" modifies CFLAGS and LIBS, Makefile.pre.in:

coverage:
	@echo "Building with support for coverage checking:"
	$(MAKE) clean profile-removal
	$(MAKE) @DEF_MAKE_RULE@ CFLAGS="$(CFLAGS) -O0 -pg -fprofile-arcs -ftest-coverage" LIBS="$(LIBS) -lgcov"

CFLAGS_NODIST should be used instead here. I'm not sure about LIBS: do we need LIBS_NODIST, as we have CFLAGS_NODIST? LIBS_NODIST would be used for Python and C extensions of the stdlib, but not for third-party C extensions: not used by distutils. See also bpo-35257: "Avoid leaking linker flags into distutils: add PY_LDFLAGS_NODIST".
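A fix along the suggested lines would move the extra flags into the *_NODIST variables so distutils never sees them; a rough sketch of the amended rule (LIBS_NODIST is the hypothetical variable the message asks about, which does not exist yet):

```make
coverage:
	@echo "Building with support for coverage checking:"
	$(MAKE) clean profile-removal
	$(MAKE) @DEF_MAKE_RULE@ \
		CFLAGS_NODIST="$(CFLAGS_NODIST) -O0 -pg -fprofile-arcs -ftest-coverage" \
		LIBS_NODIST="$(LIBS_NODIST) -lgcov"
```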
---------- components: Build messages: 331850 nosy: vstinner priority: normal severity: normal status: open title: "make coverage" should not leak coverage flags to third party C extensions versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 14:59:00 2018 From: report at bugs.python.org (Jess Johnson) Date: Fri, 14 Dec 2018 19:59:00 +0000 Subject: [New-bugs-announce] [issue35502] Memory leak in xml.etree.ElementTree.iterparse Message-ID: <1544817540.85.0.788709270274.issue35502@psf.upfronthosting.co.za> New submission from Jess Johnson : When given XML that would raise a ParseError, but parsing is stopped before the ParseError is raised, xml.etree.ElementTree.iterparse leaks memory. Example:

import gc
from io import StringIO
import xml.etree.ElementTree as etree
import objgraph

def parse_xml():
    xml = """ """
    parser = etree.iterparse(StringIO(initial_value=xml))
    for _, elem in parser:
        if elem.tag == 'LEVEL1':
            break

def run():
    parse_xml()
    gc.collect()
    uncollected_elems = objgraph.by_type('Element')
    print(uncollected_elems)
    objgraph.show_backrefs(uncollected_elems, max_depth=15)

if __name__ == "__main__":
    run()

Output: []

Also see this gist which has an image showing the objects that are retained in memory: https://gist.github.com/grokcode/f89d5c5f1831c6bc373be6494f843de3 ---------- components: XML messages: 331861 nosy: jess.j priority: normal severity: normal status: open title: Memory leak in xml.etree.ElementTree.iterparse type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 16:16:22 2018 From: report at bugs.python.org (Benjamin Ward) Date: Fri, 14 Dec 2018 21:16:22 +0000 Subject: [New-bugs-announce] [issue35503] os.path.islink() works with cygwin installation but not python.org Message-ID:
<1544822182.17.0.788709270274.issue35503@psf.upfronthosting.co.za> New submission from Benjamin Ward : I have python.org's Python27 installed on my laptop. In my Documents/tmp folder/directory I created three "directories" (see below) and performed os.path.islink() on all three. In cmd window (note: dir output and prompt have been shortened):

*\Documents\tmp>dir
 Volume in drive C is OS
 Volume Serial Number is B2BB-F7DA
 Directory of *\Documents\tmp
...
12/14/2018 12:37 PM isDir
12/14/2018 12:37 PM isDirJunction [C:\Users\benjward\Documents\tmp\isDir]
12/14/2018 12:39 PM isDirSymbolicLink [isDir]
...

*** using system installed python.org python 2.7

*\Documents\tmp>where python
C:\Python27\python.exe

python
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.islink('isDir')
False
>>> os.path.islink('isDirJunction')
False
>>> os.path.islink('isDirSymlink')
False
>>>

*** but ..., using cygwin64 installation of python 2.7

*\Documents\tmp>C:\cygwin64\bin\python2.7
Python 2.7.14 (default, Oct 31 2017, 21:12:13) [GCC 6.4.0] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.islink('isDir')
False
>>> os.path.islink('isDirJunction')
True
>>> os.path.islink('isDirSymlink')
True
>>>

The latter result is what I was expecting. Granted, my cygwin python is 2.7.14 and system installation is 2.7.15, but is it likely that a capability was lost?
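For comparison, the expected semantics can be seen on POSIX with a short Python 3 sketch (junctions are a Windows-only concept and are not covered here; on Windows the result additionally depends on how the interpreter's lstat() is implemented, which is the difference the report above is seeing):

```python
import os
import tempfile

# os.path.islink() is True only for a real symbolic link, never for the
# plain directory it points to.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "isDir")
    link = os.path.join(d, "isDirSymlink")
    os.mkdir(target)
    os.symlink(target, link)
    target_is_link = os.path.islink(target)  # plain directory
    link_is_link = os.path.islink(link)      # real symlink
```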
---------- messages: 331871 nosy: bward priority: normal severity: normal status: open title: os.path.islink() works with cygwin installation but not python.org type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 17:22:24 2018 From: report at bugs.python.org (Bachsau) Date: Fri, 14 Dec 2018 22:22:24 +0000 Subject: [New-bugs-announce] [issue35504] `del OSError().characters_written` raises SystemError Message-ID: <1544826144.81.0.788709270274.issue35504@psf.upfronthosting.co.za> New submission from Bachsau : `del OSError().characters_written` raises `SystemError`: "null argument to internal routine" I don't know why anyone would try this in production code, but since the documentation says that every `SystemError` should be reported, I'm doing that. My suggestion would be to make that attribute behave like the other attributes of `OSError`, e.g. defaulting to `None` and returning to that value on deletion.
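The attribute's normal behavior can be seen in a minimal sketch; the SystemError described above occurs only when deleting the attribute before it was ever set:

```python
# characters_written is a lazily-set attribute (normally used by
# BlockingIOError), but it can be assigned on any OSError instance.
e = OSError()
e.characters_written = 5
written = e.characters_written
# On the affected 3.7 builds, `del OSError().characters_written`
# (deleting before any assignment) raised SystemError rather than a
# plain AttributeError.
```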
---------- components: Interpreter Core messages: 331876 nosy: Bachsau priority: normal severity: normal status: open title: `del OSError().characters_written` raises SystemError versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 19:20:22 2018 From: report at bugs.python.org (Petr Stupka) Date: Sat, 15 Dec 2018 00:20:22 +0000 Subject: [New-bugs-announce] [issue35505] Test test_imaplib fail in test_imap4_host_default_value Message-ID: <1544833222.75.0.788709270274.issue35505@psf.upfronthosting.co.za> New submission from Petr Stupka : OS: CentOS Linux release 7.6.1810 (Core) ====================================================================== FAIL: test_imap4_host_default_value (test.test_imaplib.TestImaplib) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/stupka/build/Python-3.7.1/Lib/test/test_imaplib.py", line 83, in test_imap4_host_default_value imaplib.IMAP4() AssertionError: OSError not raised Stderr: /home/stupka/build/Python-3.7.1/Lib/socket.py:660: ResourceWarning: unclosed self._sock = None ---------------------------------------------------------------------- This test fails when there is (in my case) dovecot running and listening on ::1, port 143 - expected exception is not raised. With dovecot stopped and nothing listening on port 143 the test passes. It should probably check if there is something listening on ::1:143 and if so then the test can be skipped? 
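The suggested skip could be implemented with a small probe along these lines (a hypothetical helper, not part of the actual test suite):

```python
import socket

def something_listening(host, port):
    # Hypothetical probe: True if a TCP connection to (host, port)
    # succeeds, meaning a real server (e.g. a local dovecot) is already
    # bound there and the "connection refused" assumption does not hold.
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False
```

A test like test_imap4_host_default_value could then skip itself when something_listening('::1', 143) returns True.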
---------- components: Tests messages: 331881 nosy: Petr Stupka priority: normal severity: normal status: open title: Test test_imaplib fail in test_imap4_host_default_value type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 20:38:24 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 15 Dec 2018 01:38:24 +0000 Subject: [New-bugs-announce] [issue35506] Doc: fix keyword `as` link from `import` and `try` Message-ID: <1544837904.07.0.788709270274.issue35506@psf.upfronthosting.co.za> New submission from Cheryl Sabella : In the documentation, using the :keyword:`as` role links to the `as` defined for the `with` statement, which could be confusing when it was used in the `import` or `try` section of the docs. https://docs.python.org/3/reference/simple_stmts.html#the-import-statement ---------- assignee: docs at python components: Documentation messages: 331883 nosy: cheryl.sabella, docs at python priority: normal severity: normal status: open title: Doc: fix keyword `as` link from `import` and `try` type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 14 23:44:16 2018 From: report at bugs.python.org (sh37211) Date: Sat, 15 Dec 2018 04:44:16 +0000 Subject: [New-bugs-announce] [issue35507] multiprocessing: seg fault when creating RawArray from numpy ctypes Message-ID: <1544849056.09.0.788709270274.issue35507@psf.upfronthosting.co.za> New submission from sh37211 : After creating this post on StackOverflow... https://stackoverflow.com/questions/53757856/segmentation-fault-when-creating-multiprocessing-array ...it was suggested by one of the respondents that I file a bug report. 
The following code produces segmentation faults on certain OSes (Linux: Ubuntu 16.04, 18.04 and Debian) but not others (Mac 10.13.4):

import numpy as np
from multiprocessing import sharedctypes

a = np.ctypeslib.as_ctypes(np.zeros((224,224,3)))
b = sharedctypes.RawArray(a._type_, a)

The segmentation fault occurs upon the creation of the multiprocessing.sharedctypes.RawArray. As a workaround, one can declare an intermediate variable, e.g. "a2", and write

a = np.zeros((224,224,3))
a2 = np.ctypeslib.as_ctypes(a)
b = sharedctypes.RawArray(a2._type_, a2)

User kabanus seemed to think it was more likely to be an error with multiprocessing than with numpy. Using Anaconda python distribution, Python 3.5 and 3.6. Have not tried 3.7 or 3.8 yet. ---------- components: Library (Lib), ctypes messages: 331888 nosy: sh37211 priority: normal severity: normal status: open title: multiprocessing: seg fault when creating RawArray from numpy ctypes type: crash versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 15 04:02:55 2018 From: report at bugs.python.org (Neil Booth) Date: Sat, 15 Dec 2018 09:02:55 +0000 Subject: [New-bugs-announce] [issue35508] array.index should take optional start and stop indices like for lists Message-ID: <1544864575.4.0.788709270274.issue35508@psf.upfronthosting.co.za> New submission from Neil Booth : list.index has signature: index(value, [start, [stop]]) array.index from the array module should provide the same facility ---------- components: Library (Lib) messages: 331891 nosy: kyuupichan priority: normal severity: normal status: open title: array.index should take optional start and stop indices like for lists type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 15 07:07:48 2018 From: report at bugs.python.org (Chih-Hsuan
Yen) Date: Sat, 15 Dec 2018 12:07:48 +0000 Subject: [New-bugs-announce] [issue35509] Unable to inherit from logging.Formatter Message-ID: <1544875668.42.0.788709270274.issue35509@psf.upfronthosting.co.za> New submission from Chih-Hsuan Yen : The following script runs fine on Python 3.7.1 but not on master (f5107dfd42).

import logging

class Foo(logging.Formatter):
    def __init__(self):
        super().__init__(self)

Foo()

The output is:

Traceback (most recent call last):
  File "t.py", line 9, in
    Foo()
  File "t.py", line 6, in __init__
    super().__init__(self)
  File "/usr/lib/python3.8/logging/__init__.py", line 589, in __init__
    self._style.validate()
  File "/usr/lib/python3.8/logging/__init__.py", line 441, in validate
    if not self.validation_pattern.search(self._fmt):
TypeError: expected string or bytes-like object

Most likely there's something wrong in the newly-added validation step (issue34844). /cc the primary reviewer of the aforementioned patch. ---------- components: Library (Lib) messages: 331900 nosy: vinay.sajip, yan12125 priority: normal severity: normal status: open title: Unable to inherit from logging.Formatter type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 15 22:22:21 2018 From: report at bugs.python.org (Satrajit S Ghosh) Date: Sun, 16 Dec 2018 03:22:21 +0000 Subject: [New-bugs-announce] [issue35510] pickling derived dataclasses Message-ID: <1544930541.77.0.788709270274.issue35510@psf.upfronthosting.co.za> New submission from Satrajit S Ghosh : I'm not sure if this is intended behavior or an error. I'm creating dataclasses dynamically and trying to pickle those classes or objects containing instances of those classes. This was resulting in an error, so I trimmed it down to this example.
``` import pickle as pk import dataclasses as dc @dc.dataclass class A: pass pk.dumps(A) # --> this is fine B = dc.make_dataclass('B', [], bases=(A,)) pk.dumps(B) # --> results in an error # PicklingError: Can't pickle : attribute lookup B on types failed ``` is this expected behavior? and if so, is there a way to create a dynamic dataclass that pickles? ---------- components: Library (Lib) messages: 331914 nosy: satra priority: normal severity: normal status: open title: pickling derived dataclasses type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 16 01:48:23 2018 From: report at bugs.python.org (bombs) Date: Sun, 16 Dec 2018 06:48:23 +0000 Subject: [New-bugs-announce] [issue35511] Some methods of profile.Profile are not supported but the docs doesn't mention it. Message-ID: <1544942903.87.0.788709270274.issue35511@psf.upfronthosting.co.za> New submission from bombs : Currently enable, disable methods are only supported by Profile class of cProfile module, not profile module. But the docs doesn't give this information. I think we should, at least mention it, in the docs. ---------- assignee: docs at python components: Documentation messages: 331917 nosy: asvetlov, bluewhale8202, docs at python priority: normal severity: normal status: open title: Some methods of profile.Profile are not supported but the docs doesn't mention it. versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 16 15:43:27 2018 From: report at bugs.python.org (Jason R. Coombs) Date: Sun, 16 Dec 2018 20:43:27 +0000 Subject: [New-bugs-announce] [issue35512] patch.dict resolves in_dict eagerly (should be late resolved) Message-ID: <1544993007.41.0.788709270274.issue35512@psf.upfronthosting.co.za> New submission from Jason R. 
Coombs : Originally [reported in testing-cabal/mock#405](https://github.com/testing-cabal/mock/issues/405), I believe I've discovered an inconsistency that manifests as a flaw: `patch` and `patch.object` allow the target to be specified as string referring to the target object and this object is resolved at the time the patch effected, not when the patch is declared. `patch.dict` contrarily seems to resolve the dict eagerly, when the patch is declared. Observe with this pytest: ``` import mock target = dict(a=1) @mock.patch.dict('test_patch_dict.target', dict(b=2)) def test_after_patch(): assert target == dict(a=2, b=2) target = dict(a=2) ``` Here's the output: ``` $ rwt mock pytest -- -m pytest test_patch_dict.py Collecting mock Using cached mock-2.0.0-py2.py3-none-any.whl Collecting pbr>=0.11 (from mock) Using cached pbr-3.0.0-py2.py3-none-any.whl Collecting six>=1.9 (from mock) Using cached six-1.10.0-py2.py3-none-any.whl Installing collected packages: pbr, six, mock Successfully installed mock-2.0.0 pbr-3.0.0 six-1.10.0 ====================================== test session starts ======================================= platform darwin -- Python 3.6.1, pytest-3.0.5, py-1.4.33, pluggy-0.4.0 rootdir: /Users/jaraco, inifile: collected 1 items test_patch_dict.py F ============================================ FAILURES ============================================ ________________________________________ test_after_patch ________________________________________ @mock.patch.dict('test_patch_dict.target', dict(b=2)) def test_after_patch(): > assert target == dict(a=2, b=2) E assert {'a': 2} == {'a': 2, 'b': 2} E Omitting 1 identical items, use -v to show E Right contains more items: E {'b': 2} E Use -v to get the full diff test_patch_dict.py:8: AssertionError ==================================== 1 failed in 0.05 seconds ==================================== ``` The target is unpatched because `test_patch_dict.target` was resolved during decoration rather than during test 
run. Removing the initial assignment of `target = dict(a=1)`, the failure is thus: ``` ______________________________ ERROR collecting test_patch_dict.py _______________________________ ImportError while importing test module '/Users/jaraco/test_patch_dict.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/rwt-pcm3552g/mock/mock.py:1197: in _dot_lookup return getattr(thing, comp) E AttributeError: module 'test_patch_dict' has no attribute 'target' During handling of the above exception, another exception occurred: :942: in _find_and_load_unlocked ??? E AttributeError: module 'test_patch_dict' has no attribute '__path__' During handling of the above exception, another exception occurred: test_patch_dict.py:4: in @mock.patch.dict('test_patch_dict.target', dict(b=2)) /var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/rwt-pcm3552g/mock/mock.py:1708: in __init__ in_dict = _importer(in_dict) /var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/rwt-pcm3552g/mock/mock.py:1210: in _importer thing = _dot_lookup(thing, comp, import_path) /var/folders/c6/v7hnmq453xb6p2dbz1gqc6rr0000gn/T/rwt-pcm3552g/mock/mock.py:1199: in _dot_lookup __import__(import_path) E ModuleNotFoundError: No module named 'test_patch_dict.target'; 'test_patch_dict' is not a package !!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!! ==================================== 1 error in 0.41 seconds ===================================== ``` Is there any reason `patch.dict` doesn't have a similar deferred resolution behavior as its sister methods? 
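For contrast, when the dict object itself (rather than a dotted string) is passed, there is no resolution step at all, so no eager-vs-late question arises; a minimal sketch with the stdlib unittest.mock:

```python
from unittest import mock

target = {"a": 1}

# Passing the mapping object directly: patch.dict modifies that exact
# object in place inside the context and restores it on exit.
with mock.patch.dict(target, {"b": 2}):
    patched = dict(target)   # snapshot inside the context
restored = dict(target)      # snapshot after exit
```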
---------- components: Library (Lib) messages: 331937 nosy: jason.coombs priority: normal severity: normal status: open title: patch.dict resolves in_dict eagerly (should be late resolved) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 16 17:05:49 2018 From: report at bugs.python.org (STINNER Victor) Date: Sun, 16 Dec 2018 22:05:49 +0000 Subject: [New-bugs-announce] [issue35513] Lib/test/lock_tests.py should not use time.time(), but time.monotonic() Message-ID: <1544997949.57.0.788709270274.issue35513@psf.upfronthosting.co.za> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/145/builds/956

Unhandled exception in thread started by .task at 0x111775cb0>
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/lock_tests.py", line 41, in task
    f()
  File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/lock_tests.py", line 591, in f
    self.assertTimeout(dt, 0.1)
  File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/lock_tests.py", line 80, in assertTimeout
    self.assertGreaterEqual(actual, expected * 0.6)
  File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/case.py", line 1283, in assertGreaterEqual
    self.fail(self._formatMessage(msg, standardMsg))
  File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/case.py", line 719, in fail
    raise self.failureException(msg)
AssertionError: -0.24049997329711914 not greater than or equal to 0.06

test_waitfor_timeout (test.test_threading.ConditionTests) ... FAIL

test_waitfor_timeout():

    ...
    dt = time.time()
    result = cond.wait_for(lambda : state==4, timeout=0.1)
    dt = time.time() - dt
    self.assertFalse(result)
    self.assertTimeout(dt, 0.1)
    ...

with:

    def assertTimeout(self, actual, expected):
        ...
        self.assertGreaterEqual(actual, expected * 0.6)
        ...

It seems like time.time() went backward on the buildbot.
The test must use time.monotonic() to measure time difference. Attached PR fix the issue. ---------- components: Tests messages: 331939 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: Lib/test/lock_tests.py should not use time.time(), but time.monotonic() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 16 23:45:31 2018 From: report at bugs.python.org (bombs) Date: Mon, 17 Dec 2018 04:45:31 +0000 Subject: [New-bugs-announce] [issue35514] Docs on reference count detail. enhancement. Message-ID: <1545021931.75.0.788709270274.issue35514@psf.upfronthosting.co.za> New submission from bombs : https://docs.python.org/3/c-api/intro.html#reference-count-details When I read that section of the docs first time, I found it hard to grasp what transferring of ownership is, which is an important and repeating concept throughout the docs. Some explanations were confusing. For example, > When a function passes ownership of a reference on to its caller, the > caller is said to receive a new reference This part tries to explain what is to receive a new reference, in terms of passing ownership, when readers have no ideas of what transferring of ownership is. I think it is kind of a circular definition fallacy. I think this section should've explained transferring of ownership, a high level concept, in terms of reference count changes, which are concrete operations. (original version) When a function passes ownership of a reference on to its caller, the caller is said to receive a new reference. When no ownership is transferred, the caller is said to borrow the reference. Nothing needs to be done for a borrowed reference. Conversely, when a calling function passes in a reference to an object, there are two possibilities: the function steals a reference to the object, or it does not. 
Stealing a reference means that when you pass a reference to a function, that function assumes that it now owns that reference, and you are not responsible for it any longer. (revision) When a function returns an object and effectively increases the reference count of it, the function is said to give ownership of a new reference to its caller. When a function returns an object without changing the reference count of it, the caller is said to borrow the reference. Nothing needs to be done for a borrowed reference. Conversely, if a function decreases the reference count of an object, it is said to steal the ownership of the reference from its owner. Stealing a reference means that when you pass a reference to a stealing function, that function assumes that it now owns that reference, and you are not responsible for it any longer. ---------- assignee: docs at python components: Documentation messages: 331946 nosy: bluewhale8202, docs at python priority: normal severity: normal status: open title: Docs on reference count detail. enhancement. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 04:04:28 2018 From: report at bugs.python.org (Arnaud) Date: Mon, 17 Dec 2018 09:04:28 +0000 Subject: [New-bugs-announce] [issue35515] Matrix operator star creates a false matrix Message-ID: <1545037468.26.0.788709270274.issue35515@psf.upfronthosting.co.za> New submission from Arnaud : It seems that the definition of a matrix like this: a=[[0,0]]*4 does not create the matrix correctly: it creates four references to the same inner list.
After the matrix creation, a print(a) gives the following result: [[0, 0], [0, 0], [0, 0], [0, 0]] which looks normal print(type(a)) and print(type(a[1]) give also the correct result: list But when we try to change a matrix element: a[2][0]=1 print(a) gives a false result: [[1, 0], [1, 0], [1, 0], [1, 0]] When the matrix definition is done like this: a=[[0, 0], [0, 0], [0, 0], [0, 0]] the behavior is "as expected" a[2][0]=1 print(a) gives the correct result: [[0, 0], [0, 0], [1, 0], [0, 0]] ---------- components: Interpreter Core files: python_bugreport.py messages: 331955 nosy: xda at abalgo.com priority: normal severity: normal status: open title: Matrix operator star creates a false matrix type: behavior versions: Python 2.7, Python 3.5, Python 3.6 Added file: https://bugs.python.org/file48001/python_bugreport.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 04:31:27 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Dec 2018 09:31:27 +0000 Subject: [New-bugs-announce] [issue35516] platform.system_alias(): add macOS support Message-ID: <1545039087.64.0.788709270274.issue35516@psf.upfronthosting.co.za> New submission from STINNER Victor : platform.system_alias() documentation: Returns ``(system, release, version)`` aliased to common marketing names used for some systems. It also does some reordering of the information in some cases where it would otherwise cause confusion. IMHO "macOS" and macOS release are more appropriate than darwin and darwin release for platform.system_alias(). I propose to make a similar change in system_alias() than the platform.mac_ver() change made in bpo-35344. 
---------- components: Library (Lib) messages: 331960 nosy: vstinner priority: normal severity: normal status: open title: platform.system_alias(): add macOS support versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 05:31:38 2018 From: report at bugs.python.org (Manjusaka) Date: Mon, 17 Dec 2018 10:31:38 +0000 Subject: [New-bugs-announce] [issue35517] enhance for selector.EpollSelector Message-ID: <1545042698.49.0.788709270274.issue35517@psf.upfronthosting.co.za> New submission from Manjusaka : Add a keyword argument for selector.EpollSelector with a default value. This can help people use EPOLLEXCLUSIVE (available since Python 3.7 and Linux kernel 4.5) to avoid the thundering herd effect, like this:

def register(self, fileobj, events, data=None, exclusive=False):
    key = super().register(fileobj, events, data)
    epoll_events = 0
    if events & EVENT_READ:
        epoll_events |= select.EPOLLIN
    if events & EVENT_WRITE:
        epoll_events |= select.EPOLLOUT
    try:
        if exclusive and hasattr(select, "EPOLLEXCLUSIVE"):
            epoll_events |= select.EPOLLEXCLUSIVE
        self._epoll.register(key.fd, epoll_events)
    except BaseException:
        super().unregister(fileobj)
        raise
    return key

---------- components: Library (Lib) messages: 331969 nosy: Manjusaka priority: normal severity: normal status: open title: enhance for selector.EpollSelector type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 06:08:35 2018 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Dec 2018 11:08:35 +0000 Subject: [New-bugs-announce] [issue35518] test_timeout uses blackhole.snakebite.net domain which doesn't exist anymore Message-ID: <1545044915.94.0.788709270274.issue35518@psf.upfronthosting.co.za> New submission from STINNER Victor : snakebite.net and blackhole.snakebite.net domains don't exist anymore, but
test_timeout uses it:

    def testConnectTimeout(self):
        # Testing connect timeout is tricky: we need to have IP connectivity
        # to a host that silently drops our packets. We can't simulate this
        # from Python because it's a function of the underlying TCP/IP stack.
        # So, the following Snakebite host has been defined:
        blackhole = resolve_address('blackhole.snakebite.net', 56666)
        (...)

If I recall properly, snakebite.net was a service provided by Trent Nelson to test CPython on various operating systems (Solaris, IRIX, HP-UX, etc.). It seems like the service is now completely down. Article from 2009: https://conferences.oreilly.com/oscon/oscon2009/public/schedule/detail/8268 "Snakebite is a culmination of ten months of secretive work, seven trips to Michigan State University, six blown fuses and about $60,000. The end result? A network of around 37-ish servers of all different shapes and sizes, specifically geared towards the development needs of open source projects. Get the inside scoop from Snakebite's Founder, Trent Nelson, and MSU Director Dr. Titus Brown."
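The failure mode here is DNS, not timing: with the snakebite domain gone, the resolution step fails before the connect-timeout logic is ever exercised. A simplified stand-in for that step (hypothetical; not the test's exact helper):

```python
import socket

def resolve_address(host, port):
    # Simplified sketch: return the first resolved sockaddr for
    # (host, port). For a domain that no longer exists, such as
    # blackhole.snakebite.net, this raises socket.gaierror.
    return socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4]
```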
---------- components: Tests messages: 331978 nosy: vstinner priority: normal severity: normal status: open title: test_timeout uses blackhole.snakebite.net domain which doesn't exist anymore versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 06:35:16 2018 From: report at bugs.python.org (Vajrasky Kok) Date: Mon, 17 Dec 2018 11:35:16 +0000 Subject: [New-bugs-announce] [issue35519] Can not run test without test module for tests which import random module Message-ID: <1545046516.28.0.788709270274.issue35519@psf.upfronthosting.co.za> New submission from Vajrasky Kok : $ git clone git at github.com:python/cpython.git cpython2 $ cd cpython2 $ ./configure --with-pydebug $ make -j $ ./python Lib/test/test_xmlrpc.py Traceback (most recent call last): File "Lib/test/test_xmlrpc.py", line 8, in import xmlrpc.client as xmlrpclib File "/opt/Code/python/cpython2/Lib/xmlrpc/client.py", line 136, in import http.client File "/opt/Code/python/cpython2/Lib/http/client.py", line 71, in import email.parser File "/opt/Code/python/cpython2/Lib/email/parser.py", line 12, in from email.feedparser import FeedParser, BytesFeedParser File "/opt/Code/python/cpython2/Lib/email/feedparser.py", line 27, in from email._policybase import compat32 File "/opt/Code/python/cpython2/Lib/email/_policybase.py", line 9, in from email.utils import _has_surrogates File "/opt/Code/python/cpython2/Lib/email/utils.py", line 28, in import random File "/opt/Code/python/cpython2/Lib/random.py", line 47, in import bisect as _bisect File "/opt/Code/python/cpython2/Lib/test/bisect.py", line 27, in import tempfile File "/opt/Code/python/cpython2/Lib/tempfile.py", line 45, in from random import Random as _Random ImportError: cannot import name 'Random' from 'random' (/opt/Code/python/cpython2/Lib/random.py) I know about running test this way: $ ./python -m test -v test_xmlrpc And it works. 
I am just wondering whether I should be able to run test this way: ./python Lib/test/test_blabla.py? Because running other tests without test module works, for example: ./python Lib/test/test_ast.py. Only test which imports random module fails. ---------- components: Tests messages: 331992 nosy: vajrasky priority: normal severity: normal status: open title: Can not run test without test module for tests which import random module versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 06:48:50 2018 From: report at bugs.python.org (Jakub Kulik) Date: Mon, 17 Dec 2018 11:48:50 +0000 Subject: [New-bugs-announce] [issue35520] Python won't build with dtrace enabled on some systems. Message-ID: <1545047330.52.0.788709270274.issue35520@psf.upfronthosting.co.za> New submission from Jakub Kulik : Python won't build on Solaris with dtrace support enabled. Solaris is one of those systems where it is necessary to generate dtrace object files with dtrace -G. While this need is included in python configure and Makefiles, it doesn't work correctly. First, configure tests -G support on file with not completely valid content of just BEGIN inside. Valid should have BEGIN{}. This is not a problem for systems that don't require dtrace object files as this test should fail for them anyway, however it incorrectly detects those like Solaris. And second, Makefile is not ready for dtrace as the DTRACE_DEPS variable doesn't include all the necessary files. ---------- components: Build messages: 331997 nosy: kulikjak priority: normal severity: normal status: open title: Python won't build with dtrace enabled on some systems. 
type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 17 19:45:28 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Tue, 18 Dec 2018 00:45:28 +0000 Subject: [New-bugs-announce] [issue35521] IDLE: Add doc section for Code Context Message-ID: <1545093928.09.0.788709270274.issue35521@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Item D1 from #33610. D1: idle.rst subsection on Code Context ---------- assignee: terry.reedy components: IDLE messages: 332032 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Add doc section for Code Context type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 00:32:36 2018 From: report at bugs.python.org (Rohit Biswas) Date: Tue, 18 Dec 2018 05:32:36 +0000 Subject: [New-bugs-announce] [issue35522] os.stat().st_ctime and time.time() mismatch Message-ID: <1545111156.21.0.788709270274.issue35522@psf.upfronthosting.co.za> New submission from Rohit Biswas : Related Stack Overflow Question: https://stackoverflow.com/questions/53810984/mismatch-between-file-creation-time-and-current-time-in-python ---------- components: Library (Lib) messages: 332040 nosy: belopolsky, rbiswas143 priority: normal severity: normal status: open title: os.stat().st_ctime and time.time() mismatch type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 07:04:01 2018 From: report at bugs.python.org (STINNER Victor) Date: Tue, 18 Dec 2018 12:04:01 +0000 Subject: [New-bugs-announce] [issue35523] Remove ctypes old workaround: creating the first instance of a callback Message-ID:
<1545134641.38.0.788709270274.issue35523@psf.upfronthosting.co.za> New submission from STINNER Victor : ctypes._reset_cache() contains the following code: # XXX for whatever reasons, creating the first instance of a callback # function is needed for the unittests on Win64 to succeed. This MAY # be a compiler bug, since the problem occurs only when _ctypes is # compiled with the MS SDK compiler. Or an uninitialized variable? CFUNCTYPE(c_int)(lambda: None) This code has been added 11 years ago: commit 674e9389e9fdadd622829f4833367ac3b38475b5 Author: Thomas Heller Date: Fri Aug 31 13:06:44 2007 +0000 Add a workaround for a strange bug on win64, when _ctypes is compiled with the SDK compiler. This should fix the failing Lib\ctypes\test\test_as_parameter.py test. diff --git a/Lib/ctypes/__init__.py b/Lib/ctypes/__init__.py index cdf6d36e47..f55d194b8f 100644 --- a/Lib/ctypes/__init__.py +++ b/Lib/ctypes/__init__.py @@ -535,3 +535,9 @@ for kind in [c_ushort, c_uint, c_ulong, c_ulonglong]: elif sizeof(kind) == 4: c_uint32 = kind elif sizeof(kind) == 8: c_uint64 = kind del(kind) + +# XXX for whatever reasons, creating the first instance of a callback +# function is needed for the unittests on Win64 to succeed. This MAY +# be a compiler bug, since the problem occurs only when _ctypes is +# compiled with the MS SDK compiler. Or an uninitialized variable? 
+CFUNCTYPE(c_int)(lambda: None) -- This call is removed from Fedora package by the following patch: https://src.fedoraproject.org/rpms/python3/blob/master/f/00155-avoid-ctypes-thunks.patch Extract of Fedora python3.spec: # 00155 # # Avoid allocating thunks in ctypes unless absolutely necessary, to avoid # generating SELinux denials on "import ctypes" and "import uuid" when # embedding Python within httpd # See https://bugzilla.redhat.com/show_bug.cgi?id=814391 Patch155: 00155-avoid-ctypes-thunks.patch The patch has been added 6 years ago in Fedora: commit 8a28107df1670a03a12cf6a7787160f103d8d8c8 Author: David Malcolm Date: Fri Apr 20 15:28:39 2012 -0400 3.2.3-4: avoid allocating thunks in ctypes unless absolutely necessary (patch 155; rhbz#814391) * Fri Apr 20 2012 David Malcolm - 3.2.3-4 - avoid allocating thunks in ctypes unless absolutely necessary, to avoid generating SELinux denials on "import ctypes" and "import uuid" when embedding Python within httpd (patch 155; rhbz#814391) https://src.fedoraproject.org/rpms/python3/c/8a28107df1670a03a12cf6a7787160f103d8d8c8?branch=master -- I don't understand the purpose of the workaround and ctypes is working well on Fedora. I propose to also remove the workaround in the master branch. In case of doubt, I prefer to keep the workaround in Python 3.7. Attached PR removes the workaround. 
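For context, the workaround being removed does nothing more than instantiate one callback object, which allocates a C-callable "thunk"; it is that allocation at import time that triggered the SELinux denials mentioned in the Fedora patch. A minimal sketch of the same pattern, using a callable that returns a value so the effect is visible (illustrative only, not the removed line itself):

```python
from ctypes import CFUNCTYPE, c_int

# Create a C-callable wrapper ("thunk") around a Python callable.
# The removed workaround does exactly this, but with `lambda: None`
# and discards the result.
callback = CFUNCTYPE(c_int)(lambda: 42)

# Invoking the callback goes through the C calling convention and back.
result = callback()
```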
---------- components: Library (Lib) messages: 332054 nosy: vstinner priority: normal severity: normal status: open title: Remove ctypes old workaround: creating the first instance of a callback versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 07:15:02 2018 From: report at bugs.python.org (Jules Lasne) Date: Tue, 18 Dec 2018 12:15:02 +0000 Subject: [New-bugs-announce] [issue35524] using/windows launcher image might be deprecated Message-ID: <1545135302.84.0.788709270274.issue35524@psf.upfronthosting.co.za> New submission from Jules Lasne : https://docs.python.org/3/_images/win_installer.png This image corresponds to the Python 3.5 installer. Would it be useful to get a new screenshot of the latest installer? I can do it if needed. ---------- assignee: docs at python components: Documentation messages: 332055 nosy: docs at python, seluj78 priority: normal severity: normal status: open title: using/windows launcher image might be deprecated type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 10:28:06 2018 From: report at bugs.python.org (Colin McPhail) Date: Tue, 18 Dec 2018 15:28:06 +0000 Subject: [New-bugs-announce] [issue35525] Incorrect keyword name in NNTP.starttls() documentation Message-ID: <1545146886.62.0.788709270274.issue35525@psf.upfronthosting.co.za> New submission from Colin McPhail : The library documentation for nntplib.NNTP.starttls() says that it takes a keyword parameter called ssl_context. The source code referenced via the link at the top of the nntplib documentation shows the keyword is actually called context.
The result is a TypeError if the documented name is used: TypeError: starttls() got an unexpected keyword argument 'ssl_context' ---------- assignee: docs at python components: Documentation messages: 332066 nosy: cmcp22, docs at python priority: normal severity: normal status: open title: Incorrect keyword name in NNTP.starttls() documentation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 10:56:04 2018 From: report at bugs.python.org (ChrisRands) Date: Tue, 18 Dec 2018 15:56:04 +0000 Subject: [New-bugs-announce] [issue35526] __future__.barry_as_FLUFL documented as mandatory for Python 3.9 Message-ID: <1545148564.43.0.788709270274.issue35526@psf.upfronthosting.co.za> New submission from ChrisRands : A festive bug report:

>>> from __future__ import barry_as_FLUFL
>>> barry_as_FLUFL.mandatory
(3, 9, 0, 'alpha', 0)

So barry_as_FLUFL is documented to become mandatory for Python 3.9. Note that mandatory here means that the feature becomes permanent without the __future__ import and cannot be switched off. In this case, this means the '!=' operator becomes a SyntaxError, with obvious consequences for existing Python code. Now of course this is just an Easter egg, but given that 3.9 is surely on the horizon now, isn't it time to modify the joke, or maybe I'm missing the point and the joke is on me?
---------- messages: 332068 nosy: ChrisRands priority: normal severity: normal status: open title: __future__.barry_as_FLUFL documented as mandatory for Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 16:13:53 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 18 Dec 2018 21:13:53 +0000 Subject: [New-bugs-announce] [issue35527] Make fields selectively immutable in dataclasses Message-ID: <1545167633.41.0.788709270274.issue35527@psf.upfronthosting.co.za> New submission from Raymond Hettinger : The unsafe_hash option is unsafe only because it doesn't afford mutability protections. This can be mitigated with selective immutability.

@dataclass
class Person:
    ssn: int = field(immutable=True)
    birth_city: str = field(immutable=True)
    name: str       # A person can change their name
    address: str    # A person can move
    age: int        # An age can change

This would generate something like this:

def __setattr__(self, attr, value):
    if attr in {'ssn', 'birth_city'} and hasattr(self, attr):
        raise TypeError(
            f'{attr!r} is not settable after initialization')
    return object.__setattr__(self, attr, value)

A number of APIs are possible -- the important thing is to be able to selectively block updates to particular fields (particularly those used in hashing and ordering).
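The field(immutable=True) API in the proposal above does not exist in dataclasses today; the following is a hand-written sketch of the behavior the generated __setattr__ would provide. The field names come from the proposal; the sample values are made up for illustration.

```python
from dataclasses import dataclass

_IMMUTABLE_FIELDS = {'ssn', 'birth_city'}

@dataclass
class Person:
    ssn: int
    birth_city: str
    name: str

    def __setattr__(self, attr, value):
        # Allow the first assignment (made by __init__), block later ones.
        if attr in _IMMUTABLE_FIELDS and hasattr(self, attr):
            raise TypeError(f'{attr!r} is not settable after initialization')
        object.__setattr__(self, attr, value)

p = Person(123456789, 'Springfield', 'Homer')
p.name = 'Max'                # mutable field: allowed
try:
    p.ssn = 987654321         # immutable field: blocked after __init__
except TypeError as exc:
    blocked = str(exc)
```

Note that @dataclass keeps a user-defined __setattr__ as-is (it only injects one itself for frozen=True), so the first assignments made in the generated __init__ pass the hasattr() check and succeed.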
---------- assignee: eric.smith components: Library (Lib) messages: 332080 nosy: eric.smith, rhettinger priority: normal severity: normal status: open title: Make fields selectively immutable in dataclasses type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 17:34:32 2018 From: report at bugs.python.org (jfbu) Date: Tue, 18 Dec 2018 22:34:32 +0000 Subject: [New-bugs-announce] [issue35528] [DOC] [LaTeX] Sphinx 2.0 uses GNU FreeFont as default for xelatex Message-ID: <1545172472.78.0.788709270274.issue35528@psf.upfronthosting.co.za> New submission from jfbu : Not sure if any issue at all, but as said in title, starting with Sphinx 2.0 (Spring 2019), XeLaTeX will be configured to use GNU FreeFont by default (see https://github.com/sphinx-doc/sphinx/blob/master/CHANGES), and this means a new dependency (for documentation builds on Ubuntu, package fonts-freefont-otf; for builds on Fedora 29 it is texlive-gnu-freefont). Indeed currently CPython PDFs are built using ``xelatex``. ---------- assignee: docs at python components: Documentation messages: 332092 nosy: docs at python, jfbu priority: normal severity: normal status: open title: [DOC] [LaTeX] Sphinx 2.0 uses GNU FreeFont as default for xelatex _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 18:44:22 2018 From: report at bugs.python.org (Zackery Spytz) Date: Tue, 18 Dec 2018 23:44:22 +0000 Subject: [New-bugs-announce] [issue35529] A reference counting bug in ctypes Message-ID: <1545176662.95.0.788709270274.issue35529@psf.upfronthosting.co.za> New submission from Zackery Spytz : In PyCFuncPtr_FromDll(), "dll" will leak if an error occurs in _validate_paramflags() or GenericPyCData_new().
---------- components: Extension Modules, ctypes messages: 332101 nosy: ZackerySpytz priority: normal severity: normal status: open title: A reference counting bug in ctypes type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 19:09:09 2018 From: report at bugs.python.org (Victor Porton) Date: Wed, 19 Dec 2018 00:09:09 +0000 Subject: [New-bugs-announce] [issue35530] Counter-intuitive logging API Message-ID: <1545178149.93.0.788709270274.issue35530@psf.upfronthosting.co.za> New submission from Victor Porton : The following script:

#!/usr/bin/env python3
import logging
logger = logging.getLogger(name='main')
logger.setLevel(logging.INFO)
logger.error('XXX')
logging.error('ZZZ')
logger.error('XXX')

outputs

XXX
ERROR:root:ZZZ
ERROR:main:XXX

That is counter-intuitive: two identical logger.error('XXX') calls should output the same string, not two different strings "XXX" and "ERROR:main:XXX". Please discuss how to make Python behave as a user could expect. ---------- components: Library (Lib) messages: 332103 nosy: porton priority: normal severity: normal status: open title: Counter-intuitive logging API type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 18 20:49:24 2018 From: report at bugs.python.org (Justin) Date: Wed, 19 Dec 2018 01:49:24 +0000 Subject: [New-bugs-announce] [issue35531] xml.etree.ElementTree Element.find bug, fails to find tag Message-ID: <1545184164.65.0.788709270274.issue35531@psf.upfronthosting.co.za> New submission from Justin : When the following text is loaded into an ElementTree Element, the find method is unable to find one of the elements without a namespace assigned to it.
```
import xml.etree.ElementTree as ElementTree

xml_text = """ a:ActionNotSupportedThe message with Action \'\' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). """

xml = ElementTree.fromstring(xml_text)
ele = xml.find('faultstring')
ele == None  # True
```
---------- components: XML messages: 332106 nosy: spacether priority: normal severity: normal status: open title: xml.etree.ElementTree Element.find bug, fails to find tag type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 04:06:30 2018 From: report at bugs.python.org (=?utf-8?b?6IOh5aaC6Zuq?=) Date: Wed, 19 Dec 2018 09:06:30 +0000 Subject: [New-bugs-announce] [issue35532] numpy-stl library problem, class stl.base.BaseMesh lacks function 'is_closed()' Message-ID: <1545210390.66.0.788709270274.issue35532@psf.upfronthosting.co.za> Change by 胡如雪 : ---------- components: Library (Lib) nosy: 胡如雪
priority: normal severity: normal status: open title: numpy-stl library problem, class stl.base.BaseMesh lacks function 'is_closed()' type: resource usage versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 06:47:10 2018 From: report at bugs.python.org (Philip Rowlands) Date: Wed, 19 Dec 2018 11:47:10 +0000 Subject: [New-bugs-announce] [issue35533] argparse standard error usage for exit / error Message-ID: <1545220030.92.0.788709270274.issue35533@psf.upfronthosting.co.za> New submission from Philip Rowlands : Because error() mentions standard error and exit() does not, I assumed exit() did not use stderr, but it does. Please mention standard error in the description of exit(). Relevant code at: https://github.com/python/cpython/blob/3.7/Lib/argparse.py#L2482 ---------- assignee: docs at python components: Documentation messages: 332128 nosy: docs at python, philiprowlands priority: normal severity: normal status: open title: argparse standard error usage for exit / error type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 06:53:03 2018 From: report at bugs.python.org (Marcin Gozdalik) Date: Wed, 19 Dec 2018 11:53:03 +0000 Subject: [New-bugs-announce] [issue35534] SIGSEGV in stackdepth_walk Message-ID: <1545220383.87.0.788709270274.issue35534@psf.upfronthosting.co.za> New submission from Marcin Gozdalik : When running /usr/bin/python /usr/bin/pip install --upgrade "pip < 10" the interpreter crashed in stackdepth_walk. I've seen this crash multiple times, especially in our custom-compiled CPythons. Here it's reproduced with stock Ubuntu Xenial Python. It looks like it happens much more often on AMD Ryzens although it happens also on Intel CPUs. The Ryzen is otherwise stable. 
Sys details: Python 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609] on linux2 Package python-minimal 2.7.12-1~16.04 from Ubuntu Xenial ---------- components: Interpreter Core files: core.pip.8270.1545144472.xz messages: 332129 nosy: gozdal priority: normal severity: normal status: open title: SIGSEGV in stackdepth_walk type: crash versions: Python 2.7, Python 3.6 Added file: https://bugs.python.org/file48006/core.pip.8270.1545144472.xz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 07:54:20 2018 From: report at bugs.python.org (Paul Keating) Date: Wed, 19 Dec 2018 12:54:20 +0000 Subject: [New-bugs-announce] [issue35535] time.strptime() unexpectedly gives the same result for %U and %W for 2018 Message-ID: <1545224060.23.0.788709270274.issue35535@psf.upfronthosting.co.za> New submission from Paul Keating : This was originally reported on StackOverflow (53829118) and I believe the poster has found a genuine issue. He reported a problem converting from Python 2.3 to Python 2.7 in which strptime() produced a different result for %U in the two versions. For lack of an old enough copy of Python, I can not reproduce the Python 2.3 result, which he reports as follows: Python 2.3.4 ------------ >>> dw='51 0 18' # 51 week number, 0 for Sunday and 18 for year 2018 >>> date=time.strptime(dw,"%U %w %y") >>> print date (2018, 12, 16, 0, 0, 0, 6, 350, -1) # 2018 12 16 [Remark: This output looks like Python 2.1 to me, but the issue is not the datatype of the result but the value of the result.] 
Python 2.7.5 ------------ >>> dw='51 0 18' # 51 week number, 0 for Sunday and 18 for year 2018 >>> date=time.strptime(dw,"%U %w %y") >>> print date time.struct_time(tm_year=2018, tm_mon=12, tm_mday=23, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=6, tm_yday=357, tm_isdst=-1) The point here is that the day of the month has shifted from 16 December to 23 December, and I believe that 16 December is correct. In ISO week numbers, week 51 in 2018 runs from Monday 17 to Sunday 23 December. So the Python 2.7.5 result is correct for ISO week numbers. Only, ISO week numbers are provided by directive %W. %U is supposed to work with the week numbering system common (as I understand it) in North America, where (according to Wikipedia) week 1 begins on a Sunday, and contains both 1 January and the first Saturday of the year. While I am not familiar with that system, Excel 2016 is, and it reports =WEEKNUM(DATE(2018,12,16)) as 51 =ISOWEEKNUM(DATE(2018,12,16)) as 50 But if I do the following in Python (2.6, 2.7 or 3.5) I get the week numbers reported as the same: >>> dw='51 0 18' # 51 week number, 0 for Sunday and 18 for year 2018 >>> time.strptime(dw,"%U %w %y") == time.strptime(dw,"%W %w %y") True [Should be False] So directives %U and %W are producing equal results for this date, and further checking shows that the same unexpected equality appears for all Sundays in 2018. And I get the same unexpected equality for the Sunday of the 51st week of years 2063, 2057, 2052, 2046, 2035, 2027, 2007, 2001. It seems to recur when 1 January of a given year is a Monday. Now, it may be going too far to say that Excel is right and the Python standard library is wrong. It is clear that the algorithms are just systematically different. On the other hand, it appears that Python 2.3 did it the way that Excel does, and that raises the question of why Python does it differently now.
A bit of searching reveals that people who complain that Excel's WEEKNUM function is wrong are generally just unaware that there are competing systems. So this difference is not in the same category as Excel's numbering of days before 1 March 1900. ---------- components: Library (Lib) messages: 332135 nosy: Paul Keating priority: normal severity: normal status: open title: time.strptime() unexpectedly gives the same result for %U and %W for 2018 type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 09:54:20 2018 From: report at bugs.python.org (=?utf-8?b?0J3QuNC60LjRgtCwINCh0LjRgNCz0LjQtdC90LrQvg==?=) Date: Wed, 19 Dec 2018 14:54:20 +0000 Subject: [New-bugs-announce] [issue35536] Calling built-in locals() and globals() in C++ leads to SystemError Message-ID: <1545231260.91.0.788709270274.issue35536@psf.upfronthosting.co.za> New submission from Никита Сиргиенко : System: Distributor ID: Ubuntu Description: Ubuntu 18.04.1 LTS Release: 18.04 Codename: bionic Arch: x86_64 Compiler: g++ (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0 Python versions: Python 3.6.7-1 Python 2.7.15rc1 Description: This C++ code:

PyObject* pBuiltin = PyImport_ImportModule("builtins");
PyObject* py_globals_fun = PyObject_GetAttrString(pBuiltin, "locals");
PyObject* globals = PyObject_CallObject(py_globals_fun, NULL);

produces "SystemError: frame does not exist". For the function "globals" the output is "SystemError: returned NULL without setting an error". For Python 2 this code produces similar errors (the error descriptions differ slightly). Other functions with arguments, like "abs", work fine. And calling functions with optional arguments, like "int" and "float", works with this code (with a null parameter) without problems. ---------- components: Library (Lib) messages: 332144 nosy: Никита Сиргиенко
priority: normal severity: normal status: open title: Calling built-in locals() and globals() in C++ leads to SystemError type: behavior versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 11:47:59 2018 From: report at bugs.python.org (Joannah Nanjekye) Date: Wed, 19 Dec 2018 16:47:59 +0000 Subject: [New-bugs-announce] [issue35537] use os.posix_spawn in subprocess Message-ID: <1545238079.48.0.788709270274.issue35537@psf.upfronthosting.co.za> New submission from Joannah Nanjekye : On Linux, posix_spawn() uses vfork() instead of fork() which blocks the parent process. The child process executes exec() early and so we don't pay the price of duplicating the memory pages (the thing which tracks memory pages of a process). On macOS, posix_spawn() is a system call, so the kernel is free to use fast-paths to optimize it as they want. posix_spawn() is faster but it's also safer: it allows us to do a lot of "actions" before exec(), before executing the new program. For example, you can close files and control signals. Again, on macOS, these actions are "atomic" since it's a system call. On Linux, glibc uses a very good implementation which has no race condition. Currently, Python uses a C extension _posixsubprocess which more or less reimplements posix_spawn(): close file descriptors, make some file descriptors inheritable or not, etc. It is very tricky to write correct code: code run around fork() is very fragile. In theory, the POSIX specification only allows to use "async-signal-safe" functions after fork()... So it would be great to avoid _posixsubprocess whenever possible for (1) speed (2) correctness. 
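For readers who have not used it, os.posix_spawn() is already exposed at the Python level in 3.8. A minimal sketch of spawning a child through it (POSIX only; the child command here is just an illustration):

```python
import os
import sys

# Spawn a child process without an explicit fork()+exec() pair.
pid = os.posix_spawn(
    sys.executable,                                       # program to execute
    [sys.executable, "-c", "print('hello from child')"],  # argv
    os.environ,                                           # child environment
)

# Reap the child; its exit status tells us whether it ran cleanly.
_, status = os.waitpid(pid, 0)
```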
---------- components: Library (Lib) messages: 332151 nosy: nanjekyejoannah, vstinner priority: normal pull_requests: 10472 severity: normal status: open title: use os.posix_spawn in subprocess type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 12:39:35 2018 From: report at bugs.python.org (Devika Sondhi) Date: Wed, 19 Dec 2018 17:39:35 +0000 Subject: [New-bugs-announce] [issue35538] splitext does not seem to handle filepath ending in . Message-ID: <1545241175.47.0.788709270274.issue35538@psf.upfronthosting.co.za> New submission from Devika Sondhi : posixpath.splitext('.blah.') returns ('.blah', '.') while the expectation was to return an empty extension at the end. ---------- messages: 332157 nosy: Devika Sondhi priority: normal severity: normal status: open title: splitext does not seem to handle filepath ending in . versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 15:35:52 2018 From: report at bugs.python.org (=?utf-8?b?SHJ2b2plIE5pa8WhacSH?=) Date: Wed, 19 Dec 2018 20:35:52 +0000 Subject: [New-bugs-announce] [issue35539] Cannot properly close terminated process Message-ID: <1545251752.03.0.788709270274.issue35539@psf.upfronthosting.co.za> New submission from Hrvoje Nikšić : It seems impossible to correctly close() an asyncio Process on which terminate() has been invoked. Take the following coroutine:

async def test():
    proc = await asyncio.create_subprocess_shell(
        "sleep 1", stdout=asyncio.subprocess.PIPE)
    proc.terminate()
    await proc.wait()

After running it with asyncio.run(), Python prints a warning about an "Event loop is closed" exception ignored in BaseSubprocessTransport.__del__.
The code does wait for the process to exit, and neither proc nor proc.stdout has a close() method, so the warning seems spurious. Commenting out proc.terminate() makes the program finish without an exception (but then it waits for a full second, of course). Runnable example attached. ---------- components: asyncio files: terminate.py messages: 332165 nosy: asvetlov, hniksic, yselivanov priority: normal severity: normal status: open title: Cannot properly close terminated process versions: Python 3.7 Added file: https://bugs.python.org/file48008/terminate.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 19 17:06:44 2018 From: report at bugs.python.org (Will T) Date: Wed, 19 Dec 2018 22:06:44 +0000 Subject: [New-bugs-announce] [issue35540] dataclasses.asdict breaks with defaultdict fields Message-ID: <1545257204.19.0.788709270274.issue35540@psf.upfronthosting.co.za> New submission from Will T : _asdict_inner attempts to manually recursively deepcopy dicts by calling type(obj) with a generator of transformed key/value tuples @ https://github.com/python/cpython/blob/b2f642ccd2f65d2f3bf77bbaa103dd2bc2733734/Lib/dataclasses.py#L1080 .
defaultdicts are dicts so this runs but unlike other dicts their first arg has to be a callable or None:

import collections
import dataclasses as dc

@dc.dataclass()
class C:
    d: dict

c = C(collections.defaultdict(lambda: 3, {}))
d = dc.asdict(c)
assert isinstance(d['d'], collections.defaultdict)
assert d['d']['a'] == 3

=>

Traceback (most recent call last):
  File "boom.py", line 9, in <module>
    d = dc.asdict(c)
  File "/Users/spinlock/.pyenv/versions/3.7.1/lib/python3.7/dataclasses.py", line 1019, in asdict
    return _asdict_inner(obj, dict_factory)
  File "/Users/spinlock/.pyenv/versions/3.7.1/lib/python3.7/dataclasses.py", line 1026, in _asdict_inner
    value = _asdict_inner(getattr(obj, f.name), dict_factory)
  File "/Users/spinlock/.pyenv/versions/3.7.1/lib/python3.7/dataclasses.py", line 1058, in _asdict_inner
    for k, v in obj.items())
TypeError: first argument must be callable or None

I understand that it isn't this bit of code's job to support every dict (and list etc.) subclass under the sun but given defaultdict is stdlib it's imo worth supporting explicitly.
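The failure above can be reduced to a short reproduction of the root cause, plus the variant that would work; this is a sketch of the pattern described in the report, not the dataclasses code itself:

```python
import collections

d = collections.defaultdict(lambda: 3, {'a': 1})

# What the copy step effectively does: type(obj)(<iterable of pairs>).
# For a plain dict this works, but defaultdict's first positional
# argument must be the default_factory (a callable or None), so this
# raises TypeError.
try:
    type(d)((k, v) for k, v in d.items())
    error = None
except TypeError as exc:
    error = str(exc)

# A copy that preserves the subclass must pass the factory through:
d2 = collections.defaultdict(d.default_factory,
                             ((k, v) for k, v in d.items()))
```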
---------- components: Library (Lib) messages: 332166 nosy: wrmsr priority: normal severity: normal status: open title: dataclasses.asdict breaks with defaultdict fields versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 01:06:01 2018 From: report at bugs.python.org (Armandas) Date: Thu, 20 Dec 2018 06:06:01 +0000 Subject: [New-bugs-announce] [issue35541] CLI error when .python_history contains unicode characters Message-ID: <1545285961.64.0.788709270274.issue35541@psf.upfronthosting.co.za> New submission from Armandas : OS: Windows 10 Python version: Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:05:16) [MSC v.1915 32 bit (Intel)] on win32 Traceback:

Failed calling sys.__interactivehook__
Traceback (most recent call last):
  File "C:\Users\owner\AppData\Local\Programs\Python\Python37-32\lib\site.py", line 439, in register_readline
    readline.read_history_file(history)
  File "C:\Users\owner\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyreadline\rlmain.py", line 165, in read_history_file
    self.mode._history.read_history_file(filename)
  File "C:\Users\owner\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pyreadline\lineeditor\history.py", line 82, in read_history_file
    for line in open(filename, 'r'):
  File "C:\Users\owner\AppData\Local\Programs\Python\Python37-32\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2: character maps to <undefined>

How to reproduce: On a Windows machine, add the following line to your .python_history: "?".isalpha() I believe the issue stems from the fact that the history file is opened with the "default" encoding, which on Windows is cp1252.
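The decode failure can be reproduced without a history file. The exact character from the report is unknown (it is mangled to "?" above), so 'ā' below is a stand-in whose UTF-8 encoding contains the 0x81 byte seen in the traceback:

```python
# UTF-8 bytes of a line like the one in .python_history.
raw = '"ā".isalpha()'.encode('utf-8')   # 'ā' (U+0101) encodes as b'\xc4\x81'

# Decoding with the matching codec works...
text = raw.decode('utf-8')

# ...but cp1252 has no character assigned at 0x81, so the Windows
# default codec fails exactly as in the traceback above.
failed = False
try:
    raw.decode('cp1252')
except UnicodeDecodeError:
    failed = True
```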
---------- components: Library (Lib) messages: 332179 nosy: armandas priority: normal severity: normal status: open title: CLI error when .python_history contains unicode characters type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 01:53:17 2018 From: report at bugs.python.org (shuoz) Date: Thu, 20 Dec 2018 06:53:17 +0000 Subject: [New-bugs-announce] [issue35542] stack exhaustion in 3.6.7 Message-ID: <1545288797.27.0.788709270274.issue35542@psf.upfronthosting.co.za> New submission from shuoz : Stack exhaustion in 3.6.7. In Python 3.6.7, raising the recursion limit to 20000 and recursing that deep exhausts the stack and causes a segmentation fault. But this doesn't happen in Python 2.7.

```
import sys
sys.setrecursionlimit(20000)
def f():
    f()
f()
```

---------- components: 2to3 (2.x to 3.x conversion tool) messages: 332183 nosy: shuoz priority: normal severity: normal status: open title: stack exhaustion in 3.6.7 type: security versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 03:26:30 2018 From: report at bugs.python.org (Sagar) Date: Thu, 20 Dec 2018 08:26:30 +0000 Subject: [New-bugs-announce] [issue35543] re.sub is only replacing max. of 2 string found by regexp. Message-ID: <1545294390.75.0.788709270274.issue35543@psf.upfronthosting.co.za> New submission from Sagar : Below are the logs: >>> dat = '"10GE" "4x" "AMPC" "B3" "BUILTIN" "DOWN" "LU" "SFP+" "ether" "xe" "DOWN" "MPC" "BUILTIN"' >>> type = re.subn(r'\"BUILTIN\"|\"B\d\"|\"I\d\"|\"LU\"|\"Trinity\"|\"Trio\"|\"DOWN\"|\"UNKNOWN\"|' ...
r'^AND$|\"Q\"|\"MPC\"|\"EA\d\"|\"3D\"', '', dat, re.I) >>> type ('"10GE" "4x" "AMPC" "DOWN" "LU" "SFP+" "ether" "xe" "DOWN" "MPC" "BUILTIN"', 2) >>> dat = '"10GE" "4x" "AMPC" "DOWN" "LU" "SFP+" "ether" "xe" "DOWN" "MPC" "BUILTIN"' >>> type = re.subn(r'\"BUILTIN\"|\"B\d\"|\"I\d\"|\"LU\"|\"Trinity\"|\"Trio\"|\"DOWN\"|\"UNKNOWN\"|' ... r'^AND$|\"Q\"|\"MPC\"|\"EA\d\"|\"3D\"', '', dat, re.I) >>> type ('"10GE" "4x" "AMPC" "SFP+" "ether" "xe" "DOWN" "MPC" "BUILTIN"', 2) >>> dat = '"10GE" "4x" "AMPC" "SFP+" "ether" "xe" "DOWN" "MPC" "BUILTIN"' >>> type = re.subn(r'\"BUILTIN\"|\"B\d\"|\"I\d\"|\"LU\"|\"Trinity\"|\"Trio\"|\"DOWN\"|\"UNKNOWN\"|' ... r'^AND$|\"Q\"|\"MPC\"|\"EA\d\"|\"3D\"', '', dat, re.I) >>> type ('"10GE" "4x" "AMPC" "SFP+" "ether" "xe" "BUILTIN"', 2) >>> dat = '"10GE" "4x" "AMPC" "SFP+" "ether" "xe" "BUILTIN"' >>> type = re.subn(r'\"BUILTIN\"|\"B\d\"|\"I\d\"|\"LU\"|\"Trinity\"|\"Trio\"|\"DOWN\"|\"UNKNOWN\"|' ... r'^AND$|\"Q\"|\"MPC\"|\"EA\d\"|\"3D\"', '', dat, re.I) >>> type ('"10GE" "4x" "AMPC" "SFP+" "ether" "xe" ', 1) >>> ---------- components: Library (Lib) messages: 332198 nosy: saga priority: normal severity: normal status: open title: re.sub is only replacing max. of 2 string found by regexp. type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 04:18:56 2018 From: report at bugs.python.org (radiocane) Date: Thu, 20 Dec 2018 09:18:56 +0000 Subject: [New-bugs-announce] [issue35544] unicode.encode docstring says return value can be unicode Message-ID: <1545297536.15.0.788709270274.issue35544@psf.upfronthosting.co.za> New submission from radiocane : In Python 2.7.15rc1 the docstring for unicode.encode starts with: "S.encode([encoding[,errors]]) -> string or unicode" But if this answer https://stackoverflow.com/a/449281/5397695 is correct, then unicode.encode will never return a unicode object. Am I right? 
---------- components: Unicode messages: 332203 nosy: ezio.melotti, radiocane, vstinner priority: normal severity: normal status: open title: unicode.encode docstring says return value can be unicode versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 05:56:50 2018 From: report at bugs.python.org (=?utf-8?b?0JzQsNC60YHQuNC8INCQ0YDQuNGB0YLQvtCy?=) Date: Thu, 20 Dec 2018 10:56:50 +0000 Subject: [New-bugs-announce] [issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses Message-ID: <1545303410.27.0.788709270274.issue35545@psf.upfronthosting.co.za> New submission from Максим Аристов : loop.create_connection doesn't handle ipv6 RFC4007 addresses right since 3.7 TEST CASE # Set up listener on link-local address fe80::1%lo sudo ip a add dev lo fe80::1 # 3.6 handles everything fine socat file:/dev/null tcp6-listen:12345,REUSEADDR & python3.6 -c 'import asyncio;loop=asyncio.get_event_loop();loop.run_until_complete(loop.create_connection(lambda:asyncio.Protocol(),host="fe80::1%lo",port="12345"))' # 3.7 and later fails socat file:/dev/null tcp6-listen:12345,REUSEADDR & python3.7 -c 'import asyncio;loop=asyncio.get_event_loop();loop.run_until_complete(loop.create_connection(lambda:asyncio.Protocol(),host="fe80::1%lo",port="12345"))' Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.7/asyncio/base_events.py", line 576, in run_until_complete return future.result() File "/usr/lib/python3.7/asyncio/base_events.py", line 951, in create_connection raise exceptions[0] File "/usr/lib/python3.7/asyncio/base_events.py", line 938, in create_connection await self.sock_connect(sock, address) File "/usr/lib/python3.7/asyncio/selector_events.py", line 475, in sock_connect return await fut File "/usr/lib/python3.7/asyncio/selector_events.py", line 480, in _sock_connect sock.connect(address) OSError: [Errno 22] Invalid argument 
CAUSE Upon asyncio.base_events.create_connection _ensure_resolved is called twice, first time here: https://github.com/python/cpython/blob/3.7/Lib/asyncio/base_events.py#L908 then here through sock_connect: https://github.com/python/cpython/blob/3.7/Lib/asyncio/base_events.py#L946 https://github.com/python/cpython/blob/3.7/Lib/asyncio/selector_events.py#L458 _ensure_resolved calls getaddrinfo, but in 3.7 implementation changed: % python3.6 -c 'import socket;print(socket.getaddrinfo("fe80::1%lo",12345)[0][4])' ('fe80::1%lo', 12345, 0, 1) % python3.7 -c 'import socket;print(socket.getaddrinfo("fe80::1%lo",12345)[0][4])' ('fe80::1', 12345, 0, 1) _ensure_connect only considers host and port parts of the address tuple: https://github.com/python/cpython/blob/3.7/Lib/asyncio/base_events.py#L1272 In case of 3.7 first call to _ensure_resolved returns ('fe80::1', 12345, 0, 1) then second call returns ('fe80::1', 12345, 0, 0) Notice that scope is now completely lost and is set to 0, thus actual call to socket.connect is wrong In case of 3.6 both first and second call to _ensure_resolved return ('fe80::1%lo', 12345, 0, 1) because in 3.6 case scope info is preserved in address and second call can derive correct address tuple ---------- components: asyncio messages: 332211 nosy: asvetlov, yselivanov, Максим Аристов 
priority: normal severity: normal status: open title: asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 09:41:00 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 20 Dec 2018 14:41:00 +0000 Subject: [New-bugs-announce] [issue35546] String formatting produces incorrect result with left-aligned zero-padded format Message-ID: <1545316860.16.0.788709270274.issue35546@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Compare printf-style string formatting and new-style string formatting. >>> '%-020d' % 42 '42 ' >>> format(42, '<020') '42000000000000000000' >>> format(42, '<020d') '42000000000000000000' >>> '%-020x' % 42 '2a ' >>> format(42, '<020x') '2a000000000000000000' >>> '%-020g' % 1.2e-8 '1.2e-08 ' >>> format(1.2e-8, '<020') '1.2e-080000000000000' >>> format(1.2e-8, '<020g') '1.2e-080000000000000' >>> format(1.2e-8, '<020e') '1.200000e-0800000000' New-style string formatting produces the result that looks like a correctly formatted number, but it represents incorrect number. I think that zero padding should not be allowed for left-aligned format for numbers (except the 'f' format). Zero padding is already disallowed for complex numbers. 
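The two systems can be compared side by side; a small sketch of the difference the report describes (the printf-style '-' flag suppresses zero padding, while the format() mini-language treats the '0' as a fill character that lands after the digits):

```python
# printf-style: the '-' (left align) flag overrides '0' (zero pad),
# so the field is padded with spaces
assert '%-020d' % 42 == '42' + ' ' * 18

# new-style with an explicit space fill gives the same unambiguous result
assert format(42, '<20d') == '42' + ' ' * 18

# the case the report objects to: '0' becomes the fill character and is
# appended after the digits, so the text reads as a different number
assert format(42, '<020d') == '42' + '0' * 18
```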
---------- components: Interpreter Core messages: 332231 nosy: eric.smith, serhiy.storchaka priority: normal severity: normal status: open title: String formatting produces incorrect result with left-aligned zero-padded format _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 20 12:54:07 2018 From: report at bugs.python.org (Martijn Pieters) Date: Thu, 20 Dec 2018 17:54:07 +0000 Subject: [New-bugs-announce] [issue35547] email.parser / email.policy does not correctly handle multiple RFC2047 encoded-word tokens across RFC5322 folded headers Message-ID: <1545328447.76.0.788709270274.issue35547@psf.upfronthosting.co.za> New submission from Martijn Pieters : The From header in the following email headers is not correctly decoded; both the subject and from headers contain UTF-8 encoded data encoded with RFC2047 encoded-words, in both cases a multi-byte UTF-8 codepoint has been split between the two encoded-word tokens: >>> msgdata = '''\ From: =?utf-8?b?4ZuX4Zqr4Zqx4ZuP4ZuB4ZuD4Zq+4ZuI4ZuB4ZuW4ZuP4ZuW4Zo=?= =?utf-8?b?seGbiw==?= Subject: =?utf-8?b?c8qHdcSxb2THnXBvyZQgOC3ihLLiiqXiiKkgx53Kh8qOcS3E?= =?utf-8?b?scqHyoNuya8gyaXKh8Sxyo0gx53Gg8mQc3PHncmvIMqHc8edyocgybnHncaDdW/Kgw==?= ''' >>> from io import StringIO >>> from email.parser import Parser >>> from email import policy >>> msg = Parser(policy=policy.default).parse(StringIO(msgdata)) >>> print(msg['Subject']) # correct s?u?od?po? 8-??? ???q-???n? ???? ???ss?? ?s?? ???uo? >>> print(msg['From']) # incorrect ????????????? ?? Note the two FFFD placeholders in the From line. The issue is that the raw value of the From and Subject contain the folding space at the start of the continuation lines: >>> for name, value in msg.raw_items(): ... if name in {'Subject', 'From'}: ... print(name, repr(value)) ... 
>From '=?utf-8?b?4ZuX4Zqr4Zqx4ZuP4ZuB4ZuD4Zq+4ZuI4ZuB4ZuW4ZuP4ZuW4Zo=?=\n =?utf-8?b?seGbiw==?= ' Subject '=?utf-8?b?c8qHdcSxb2THnXBvyZQgOC3ihLLiiqXiiKkgx53Kh8qOcS3E?=\n =?utf-8?b?scqHyoNuya8gyaXKh8Sxyo0gx53Gg8mQc3PHncmvIMqHc8edyocgybnHncaDdW/Kgw==?=' For the Subject header, _header_value_parser.get_unstructured is used, which *expects* there to be spaces between encoded words; it inserts EWWhiteSpaceTerminal tokens in between which are turned into empty strings. But for the From header, AddressHeader parser does not, the space at the start of the line is retained, and the surrogate escapes at the end of one encoded-word and the start start of the next encoded-word never ajoin, so the later handling of turning surrogates back into proper data fails. Since unstructured header parsing doesn't mind if a space is missing between encoded-word atoms, the work-around is to explicitly remove the space at the start of every line; this can be done in a custom policy: import re from email.policy import EmailPolicy class UnfoldingHeaderEmailPolicy(EmailPolicy): def header_fetch_parse(self, name, value): # remove any leading whitespace from header lines # before further processing value = re.sub(r'(?<=[\n\r])([\t ])', '', value) return super().header_fetch_parse(name, value) custom_policy = UnfoldingHeaderEmailPolicy() after which the From header comes out without placeholders: >>> msg = Parser(policy=custom_policy).parse(StringIO(msgdata)) >>> msg['from'] '?????????????? ' >>> msg['subject'] 's?u?od?po? 8-??? ???q-???n? ???? ???ss?? ?s?? ???uo?' 
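The failure mechanism can be sketched without the email package at all; this is an illustrative reconstruction (using '€' rather than the runes above) of why the retained folding space between the two encoded-word payloads matters:

```python
# '€' (U+20AC) is three UTF-8 bytes; split them across two encoded-word payloads
raw = '\u20ac'.encode('utf-8')
half1, half2 = raw[:2], raw[2:]

# Decoding each payload separately yields lone surrogate escapes
h1 = half1.decode('utf-8', 'surrogateescape')
h2 = half2.decode('utf-8', 'surrogateescape')

# Adjacent halves can still be repaired by a bytes round trip...
assert (h1 + h2).encode('utf-8', 'surrogateescape').decode('utf-8') == '\u20ac'

# ...but an intervening space (the retained folding whitespace) breaks it
try:
    (h1 + ' ' + h2).encode('utf-8', 'surrogateescape').decode('utf-8')
    broken = False
except UnicodeDecodeError:
    broken = True
assert broken
```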
This issue was found by way of https://stackoverflow.com/q/53868584/100297 ---------- messages: 332243 nosy: mjpieters priority: normal severity: normal status: open title: email.parser / email.policy does not correctly handle multiple RFC2047 encoded-word tokens across RFC5322 folded headers _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 01:18:19 2018 From: report at bugs.python.org (Ilya Kulakov) Date: Fri, 21 Dec 2018 06:18:19 +0000 Subject: [New-bugs-announce] [issue35548] memoryview needlessly (?) requires represented object to be hashable Message-ID: <1545373099.64.0.788709270274.issue35548@psf.upfronthosting.co.za> New submission from Ilya Kulakov : Implementation of memoryview's hashing method [1] imposes the following constraints in order to be hashable (per documentation): > One-dimensional memoryviews of hashable (read-only) types with formats 'B', 'b' or 'c' are also hashable. The hash is defined as hash(m) == hash(m.tobytes()): However it's not clear why original type needs to be hashable given that memoryview deals with 1-dimensional read-only bytes representation of an object. Not only it requires the developer to make an extra copy of C-bytes, but also calls __hash__ of a represented object without using the result other than to detect an error. My particular use case involves a memory view of a readonly numpy's ndarray. My view satisfies the following constraints: >>> print(data.format, data.readonly, data.shape, data.c_contiguous) b True (25,) True But nevertheless the hashing fails because ndarray itself is not hashable. Stefan Krah wrote [2]: > Note that memory_hash() raises an error if the exporter *itself* is not hashable, so it only hashes immutable objects by design. But while __hash__ indeed tells that object is (supposed to be) immutable, there is no requirement for all immutable objects to have __hash__. 
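The constraint can be reproduced without NumPy; the sketch below uses memoryview.toreadonly() (available since Python 3.8) to build a read-only, 1-D, 'B'-format view whose exporter (a bytearray) is unhashable:

```python
# Read-only view over a mutable, unhashable exporter (stands in for the
# report's read-only ndarray); requires Python 3.8+ for toreadonly()
view = memoryview(bytearray(b'spam')).toreadonly()
assert view.readonly and view.format == 'B' and view.ndim == 1

try:
    hash(view)
    raised = False
except TypeError:   # the bytearray exporter is unhashable, so hashing fails
    raised = True
assert raised

# With a hashable exporter the documented identity hash(m) == hash(m.tobytes()) holds
assert hash(memoryview(b'spam')) == hash(b'spam')
```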
Both threads I have found ([3], [4]) are quite lengthy and show certain discord in opinions regarding the issue. Perhaps after 6 years since the release of the feature the view on the problem has changed? 1: https://github.com/python/cpython/blob/d1e717588728a23d576c4ead775f7dbd68149696/Objects/memoryobject.c#L2829-L2876 2: https://bugs.python.org/issue15814#msg169510 3: https://bugs.python.org/issue15573 4: https://bugs.python.org/issue15573 ---------- components: Library (Lib) messages: 332280 nosy: Kentzo, skrah priority: normal severity: normal status: open title: memoryview needlessly (?) requires represented object to be hashable versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 04:47:37 2018 From: report at bugs.python.org (Roman Inflianskas) Date: Fri, 21 Dec 2018 09:47:37 +0000 Subject: [New-bugs-announce] [issue35549] Add partial_match: bool = False argument to unicodedata.lookup Message-ID: <1545385657.75.0.788709270274.issue35549@psf.upfronthosting.co.za> New submission from Roman Inflianskas : I propose to add partial_match: bool = False argument to unicodedata.lookup so that the programmer could search Unicode symbols using partial_names. 
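A rough pure-Python sketch of the proposed behavior; the helper name lookup_partial and the scan over a bounded codepoint range are illustrative assumptions, not the proposed API (a real implementation would search the full name database):

```python
import unicodedata

def lookup_partial(fragment, limit=0x3000):
    """Yield (char, name) pairs whose Unicode name contains `fragment`.

    Hypothetical sketch only: scans codepoints below `limit` so the
    example stays fast; the real feature would cover all of Unicode.
    """
    fragment = fragment.upper()
    for cp in range(limit):
        name = unicodedata.name(chr(cp), '')
        if fragment in name:
            yield chr(cp), name

matches = dict(lookup_partial('SNOWMAN'))
assert '\u2603' in matches   # U+2603 SNOWMAN is found by partial name
```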
---------- components: Unicode messages: 332283 nosy: ezio.melotti, rominf, vstinner priority: normal severity: normal status: open title: Add partial_match: bool = False argument to unicodedata.lookup type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 05:08:03 2018 From: report at bugs.python.org (Jakub Kulik) Date: Fri, 21 Dec 2018 10:08:03 +0000 Subject: [New-bugs-announce] [issue35550] Some define guards for Solaris are wrong Message-ID: <1545386883.42.0.788709270274.issue35550@psf.upfronthosting.co.za> New submission from Jakub Kulik : Python source code uses on several places ifdef sun or defined(sun) without the underscores, which is not standard compliant and shouldn't be used. Our recent Solaris python build ended up skipping these sections resulting in some obvious problems. Defines should check for __sun instead. (link: http://nadeausoftware.com/articles/2012/01/c_c_tip_how_use_compiler_predefined_macros_detect_operating_system#Solaris) ---------- components: Build messages: 332284 nosy: kulikjak priority: normal severity: normal status: open title: Some define guards for Solaris are wrong versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 05:08:12 2018 From: report at bugs.python.org (BLKSerene) Date: Fri, 21 Dec 2018 10:08:12 +0000 Subject: [New-bugs-announce] [issue35551] Encoding and alias issues Message-ID: <1545386892.27.0.788709270274.issue35551@psf.upfronthosting.co.za> New submission from BLKSerene : There're some minor issues about encodings supported by Python. 1. "tis260" is the alias for "tactis", where "tis260" might be a typo, which should be tis620. And "tactis" is not a supported encoding by Python (and I can't find any information about this encoding on Google). 2. 
"mac_latin2" and "mac_centeuro" refer to the same encoding (the decoding tables are identical), but they are provided as two encodings in different names ("maccentraleurope" is an alias for "mac_latin2", but "mac_centeuro" isn't). 3. The same problem for "latin_1" and "iso8859_1" ("iso_8859_1" is an alias for "latin_1", but "iso8859_1" isn't). ---------- components: Unicode messages: 332285 nosy: blkserene, ezio.melotti, vstinner priority: normal severity: normal status: open title: Encoding and alias issues type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 06:15:59 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 21 Dec 2018 11:15:59 +0000 Subject: [New-bugs-announce] [issue35552] Do not read memory past the specified limit in PyUnicode_FromFormat() and PyBytes_FromFormat() Message-ID: <1545390959.35.0.788709270274.issue35552@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Format characters %s and %V in PyUnicode_FromFormat() and %s PyBytes_FromFormat() allow to limit the number of bytes read from the argument. For example PyUnicode_FromFormat("must be string, not '%.50s'", obj->ob_type->tp_name) will use not more than 50 bytes from obj->ob_type->tp_name for creating a message. But while the number of bytes used for creating the resulting Unicode or bytes object is limited, the current implementation can read past this limit. It uses strlen() for searching the first null byte, and bounds the result to the specified limit. If the input is not null terminated, this can cause a crash. The proposed PR makes the code never reading past the specified limit. 
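The precision field being discussed behaves like printf's: it truncates, it never pads. The same truncation semantics are visible with Python-level printf-style formatting (the C bug is only that the byte scan ran past the limit before truncating):

```python
# '%.50s'-style precision caps how much of the argument is used
assert '%.5s' % 'abcdefghij' == 'abcde'   # truncated to 5 characters
assert '%.50s' % 'short' == 'short'       # shorter input is left untouched
```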
---------- components: Interpreter Core messages: 332289 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Do not read memory past the specified limit in PyUnicode_FromFormat() and PyBytes_FromFormat() type: crash versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 18:13:23 2018 From: report at bugs.python.org (Ernest W. Durbin III) Date: Fri, 21 Dec 2018 23:13:23 +0000 Subject: [New-bugs-announce] [issue35554] Test Message-ID: New submission from Ernest W. Durbin III : Testing mailgateway ---------- messages: 332307 nosy: EWDurbin priority: normal severity: normal status: open title: Test _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 18:43:14 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Fri, 21 Dec 2018 23:43:14 +0000 Subject: [New-bugs-announce] [issue35555] IDLE: Gray out Code Context on non-editor windows Message-ID: <1545435794.26.0.98272194251.issue35555@roundup.psfhosted.org> New submission from Cheryl Sabella : M3 from #33610. Gray out menu entry when not applicable. 
---------- assignee: terry.reedy components: IDLE messages: 332311 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Gray out Code Context on non-editor windows type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 19:06:17 2018 From: report at bugs.python.org (Brett Cannon) Date: Sat, 22 Dec 2018 00:06:17 +0000 Subject: [New-bugs-announce] [issue35556] See if frozen modules can use relative imports Message-ID: <1545437177.52.0.98272194251.issue35556@roundup.psfhosted.org> New submission from Brett Cannon : https://gregoryszorc.com/blog/2018/12/18/distributing-standalone-python-applications/ claims it doesn't work. ---------- components: Library (Lib) messages: 332314 nosy: brett.cannon priority: low severity: normal stage: test needed status: open title: See if frozen modules can use relative imports type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 21:49:42 2018 From: report at bugs.python.org (Dylan Houlihan) Date: Sat, 22 Dec 2018 02:49:42 +0000 Subject: [New-bugs-announce] [issue35557] Allow lowercase hexadecimal characters in base64.b16decode() Message-ID: <1545446982.53.0.98272194251.issue35557@roundup.psfhosted.org> New submission from Dylan Houlihan : Currently, the `base64` method `b16decode` does not decode a hexadecimal string with lowercase characters by default. To do so requires passing `casefold=True` as the second argument. I propose a change to the `b16decode` method to allow it to accept hexadecimal strings containing lowercase characters without requiring the `casefold` argument. The revision itself is straightforward. We simply have to amend the regular expression to match the lowercase characters a-f in addition to A-F. 
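The current behavior described above can be demonstrated directly (a sketch of today's API, not the proposed change):

```python
import base64
import binascii

assert base64.b16decode('8DEF') == b'\x8d\xef'                 # uppercase: fine
assert base64.b16decode('8def', casefold=True) == b'\x8d\xef'  # opt-in lowercase

try:
    base64.b16decode('8def')          # lowercase is rejected by default
    raised = False
except binascii.Error:
    raised = True
assert raised

# The underlying binascii helper accepts both cases by default
assert binascii.unhexlify('8def') == b'\x8d\xef'
```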
Likewise the corresponding tests in Lib/base64.py also need to be changed to account for the lack of a second argument. Therefore there are two files total which need to be refactored. In my view, there are several compelling reasons for this change: 1. There is a nontrivial performance improvement. I've already made the changes on my own test branch[1] and I see a mean decoding performance improvement of approximately 9.4% (tested by taking the average of 1,000,000 hexadecimal string encodings). The testing details are included in a file attached to this issue. 2. Hexadecimal strings are case insensitive, i.e. 8DEF is equivalent to 8def. This is the particularly motivating reason why I've written the patch - there have been many times when I've been momentarily confounded by a hexadecimal string that won't decode only to realize I'm yet again passing in a lowercase string. 3. The behavior of the underlying method in `binascii`, `unhexlify`, accepts both uppercase and lowercase characters by default without requiring a second parameter. From the perspective of code hygiene and language consistency, I think it's both more practical and more elegant for the language to behave in the same, predictable manner (particularly because `base64.py` simply calls `binascii.c` under the hood). Additionally, the `binascii` method `hexlify` actually outputs strings in lowercase encoding, meaning that any use of both `binascii` and `base64` in the same application will have to consistently do a `casefold` conversion if output from `binascii.hexlify` is fed back as input to `base64.b16decode` for some reason. There are two arguments against this patch, as far as I can see it: 1. 
In the relevant IETF reference documentation (RFC3548[2], referenced directly in the `b16decode` docstring; and RFC4648[3] with supersedes it), under Security Considerations the author Simon Josefsson claims that there exists a potential side channel security issue intrinsic to accepting case insensitive hexadecimal strings in a decoding function. While I'm not dismissing this out of hand, I personally do not find the claimed vulnerability compelling, and Josefsson does not clarify a real world attack scenario or threat model. I think it's important we challenge this assumption in light of the potential nontrivial improvements to both language consistency and performance. I would be very interested in hearing a real threat model here that would practically exist outside of a very contrived scenario. Moreover if this is such a security issue, why is the behavior already evident in `binascii.unhexlify`? 2. The other reason may be that there's simply no reason to make such a change. An argument can be put forward that a developer won't frequently have to deal with this issue because the opposite method, `b16encode`, produces hexadecimal strings with uppercase characters. However, in my experience hexadecimal strings with lowercase characters are extremely common in situations where developers haven't produced the strings themselves in the language. As I mentioned, I have already written the changes on my own patch branch. I'll open a pull request once this issue has been created and reference the issue in the pull request on GitHub. References: 1. https://github.com/djhoulihan/cpython/tree/base64_case_sensitivity 2. https://tools.ietf.org/html/rfc3548 3. 
https://tools.ietf.org/html/rfc4648 ---------- components: Library (Lib) files: testing_data.txt messages: 332319 nosy: djhoulihan priority: normal severity: normal status: open title: Allow lowercase hexadecimal characters in base64.b16decode() type: performance versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48013/testing_data.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 21 22:39:44 2018 From: report at bugs.python.org (Nils Lindemann) Date: Sat, 22 Dec 2018 03:39:44 +0000 Subject: [New-bugs-announce] [issue35558] venv: running activate.bat gives ' parameter format not correct - 65001' Message-ID: <1545449984.07.0.98272194251.issue35558@roundup.psfhosted.org> New submission from Nils Lindemann : Windows 7, Python 3.7.1:260ec2c36a after doing C:\python\python -m venv C:\myvenv and then C:\>myvenv\Scripts\activate.bat it prints parameter format not correct - 65001 However, it activates the venv - the prompt shows (myvenv) C:\> and C:\myvenv\Scripts; gets prepended to PATH. When I comment out for /f "tokens=2 delims=:" %%a in ('"%SystemRoot%\System32\chcp.com"') do ( set "_OLD_CODEPAGE=%%a" ) in the activate.bat, then the message won't show up. 
related: https://stackoverflow.com/questions/51358202/python-3-7-activate-venv-error-parameter-format-not-correct-65001-windows ---------- messages: 332320 nosy: Nils-Hero priority: normal severity: normal status: open title: venv: running activate.bat gives ' parameter format not correct - 65001' type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 02:29:08 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 22 Dec 2018 07:29:08 +0000 Subject: [New-bugs-announce] [issue35559] Optimize base64.b16decode to use compiled regex Message-ID: <1545463748.16.0.98272194251.issue35559@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I came across this as a result of issue35557 and thought to make a new issue to keep the discussion separate. Currently the b16decode function uses a regex with re.search that can be compiled at the module level as a static variable to give up to 30% improvement when executed on Python 3.7. I am proposing a PR for this change since it looks safe to me. 
$ python3 -m perf compare_to default.json optimized.json --table +--------------------+---------+------------------------------+ | Benchmark | default | optimized | +====================+=========+==============================+ | b16decode | 2.97 us | 2.03 us: 1.46x faster (-32%) | +--------------------+---------+------------------------------+ | b16decode_casefold | 3.18 us | 2.19 us: 1.45x faster (-31%) | +--------------------+---------+------------------------------+ Benchmark script : import perf import re import binascii import base64 _B16DECODE_PAT = re.compile(b'[^0-9A-F]') def b16decode_re_compiled_search(s, casefold=False): s = base64._bytes_from_decode_data(s) if casefold: s = s.upper() if _B16DECODE_PAT.search(s): raise binascii.Error('Non-base16 digit found') return binascii.unhexlify(s) if __name__ == "__main__": hex_data = "806903d098eb50957b1b376385f233bb3a5d54f54191c8536aefee21fc9ba3ca" hex_data_upper = hex_data.upper() assert base64.b16decode(hex_data_upper) == b16decode_re_compiled_search(hex_data_upper) assert base64.b16decode(hex_data, casefold=True) == b16decode_re_compiled_search(hex_data, casefold=True) runner = perf.Runner() if True: # toggle to False for default.json runner.timeit(name="b16decode", stmt="b16decode_re_compiled_search(hex_data_upper)", setup="from __main__ import b16decode_re_compiled_search, hex_data, hex_data_upper") runner.timeit(name="b16decode_casefold", stmt="b16decode_re_compiled_search(hex_data, casefold=True)", setup="from __main__ import b16decode_re_compiled_search, hex_data, hex_data_upper") else: runner.timeit(name="b16decode", stmt="base64.b16decode(hex_data_upper)", setup="from __main__ import hex_data, hex_data_upper; import base64") runner.timeit(name="b16decode_casefold", stmt="base64.b16decode(hex_data, casefold=True)", setup="from __main__ import hex_data, hex_data_upper; import base64") ---------- assignee: xtreak components: Library (Lib) messages: 332330 nosy: djhoulihan, serhiy.storchaka, xtreak 
priority: normal severity: normal status: open title: Optimize base64.b16decode to use compiled regex type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 05:20:42 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 22 Dec 2018 10:20:42 +0000 Subject: [New-bugs-announce] [issue35560] format(float(123), "00") causes segfault in debug builds Message-ID: <1545474042.87.0.98272194251.issue35560@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I was looking into format issues and came across msg99839 . The example causes segfault in master, 3.7 and 3.6 branches. This used to pass in 3.7 and 3.6. I searched for open issues and cannot come across an issue for this. I guess this is caused due to issue33954 which adds an assert as I can see from the segfault. Compiling in release mode works fine but debug build fails. Are asserts removed in release builds? $ python3.7 Python 3.7.1rc2 (v3.7.1rc2:6c06ef7dc3, Oct 13 2018, 05:10:29) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> format(float(123), "00") '123.0' Master master $ ./python.exe Python 3.8.0a0 (heads/35559:c1b4b0f616, Dec 22 2018, 15:00:08) [Clang 7.0.2 (clang-700.1.81)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> format(float(123), "00") Assertion failed: (0 <= min_width), function _PyUnicode_InsertThousandsGrouping, file Objects/unicodeobject.c, line 9394. Python 3.6 cpython git:(5241ecff16) ./python.exe Python 3.6.8rc1+ (remotes/upstream/3.6:5241ecff16, Dec 22 2018, 15:05:57) [GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> format(float(123), "00") Assertion failed: (0 <= min_width), function _PyUnicode_InsertThousandsGrouping, file Objects/unicodeobject.c, line 9486. [1] 33859 abort ./python.exe Python 3.7 cpython git:(c046d6b618) ./python.exe Python 3.7.2rc1+ (remotes/upstream/3.7:c046d6b618, Dec 22 2018, 15:07:24) [Clang 7.0.2 (clang-700.1.81)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> format(float(123), "00") Assertion failed: (0 <= min_width), function _PyUnicode_InsertThousandsGrouping, file Objects/unicodeobject.c, line 9369. [1] 42710 abort ./python.exe Latest master, 3.6 and 3.7 branch has this bug in debug mode with this being last Python 3.6 bug fix release. Commenting out the assert line gives me the correct result but I have limited knowledge of the C code and I guess release builds remove asserts where it cannot be reproduced? I am tagging issue33954 devs who might have a better understanding of this and there might be limited bandwidth for someone to look into this along with other cases since it's holiday season. 
# Release mode works fine ./python.exe -c 'print(format(float(123), "00"))' 123.0 ---------- messages: 332342 nosy: eric.smith, serhiy.storchaka, vstinner, xtreak priority: normal severity: normal status: open title: format(float(123), "00") causes segfault in debug builds type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 10:22:42 2018 From: report at bugs.python.org (Nikolaus Rath) Date: Sat, 22 Dec 2018 15:22:42 +0000 Subject: [New-bugs-announce] [issue35561] Valgrind reports Syscall param epoll_ctl(event) points to uninitialised byte(s) Message-ID: <1545492162.72.0.0770528567349.issue35561@roundup.psfhosted.org> New submission from Nikolaus Rath : With current git master, configured --with-valgrind --with-pydebug, I get: ==31074== Command: /home/nikratio/clones/cpython/python /home/nikratio/in-progress/pyfuse3/test/../examples/hello.py /tmp/pytest-of-nikratio/pytest-11/test_hello_hello_py_0 ==31074== ==31074== Syscall param epoll_ctl(event) points to uninitialised byte(s) ==31074== at 0x584906A: epoll_ctl (syscall-template.S:84) ==31074== by 0xBDAA493: pyepoll_internal_ctl (selectmodule.c:1392) ==31074== by 0xBDAA59F: select_epoll_register_impl (selectmodule.c:1438) ==31074== by 0xBDAA5F8: select_epoll_register (selectmodule.c.h:599) ==31074== by 0x174E15: _PyMethodDef_RawFastCallKeywords (call.c:658) ==31074== by 0x300BCA: _PyMethodDescr_FastCallKeywords (descrobject.c:290) ==31074== by 0x21FC05: call_function (ceval.c:4611) ==31074== by 0x22B5E7: _PyEval_EvalFrameDefault (ceval.c:3183) ==31074== by 0x2206FF: PyEval_EvalFrameEx (ceval.c:533) ==31074== by 0x173B61: function_code_fastcall (call.c:285) ==31074== by 0x174737: _PyFunction_FastCallKeywords (call.c:410) ==31074== by 0x21FDF4: call_function (ceval.c:4634) ==31074== Address 0xffeffeb4c is on thread 1's stack ==31074== in frame #1, created by pyepoll_internal_ctl 
(selectmodule.c:1379) To reproduce: $ python-dev -m pip install --user pyfuse3 # for dependencies $ git clone https://github.com/libfuse/pyfuse3.git $ valgrind --trace-children=yes "--trace-children-skip=*mount*" python-dev -m pytest test/ pyfuse3 provides a C extension module, but I believe the problem is in the interpreter core as the stacktrace does not include anything from the extension. ---------- components: Interpreter Core messages: 332348 nosy: nikratio priority: normal severity: normal status: open title: Valgrind reports Syscall param epoll_ctl(event) points to uninitialised byte(s) type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 12:07:08 2018 From: report at bugs.python.org (Amir Aslan Haghrah) Date: Sat, 22 Dec 2018 17:07:08 +0000 Subject: [New-bugs-announce] [issue35562] Issue in sizeof() function Message-ID: <1545498428.94.0.0770528567349.issue35562@roundup.psfhosted.org> New submission from Amir Aslan Haghrah : If you define a structure which contains a 'c_int' and a 'c_double' member, then running the sizeof() function on it gives 16 as the result, as follows: --------------------------------------------- from ctypes import c_int from ctypes import c_double from ctypes import sizeof from ctypes import Structure from struct import Struct class issueInSizeof(Structure): _fields_ = [('KEY', c_int), ('VALUE', c_double)] print(sizeof(issueInSizeof)) --------------------------------------------- output: 16 --------------------------------------------- If you then add another 'c_int' to your structure and run sizeof() again as follows, it returns 16 too. 
---------------------------------------------
from ctypes import c_int
from ctypes import c_double
from ctypes import sizeof
from ctypes import Structure

class issueInSizeof(Structure):
    _fields_ = [('Index', c_int),
                ('KEY', c_int),
                ('VALUE', c_double)]

print(sizeof(issueInSizeof))
---------------------------------------------
output: 16
---------------------------------------------

If Python assumes the size of 'c_int' is 4, then it should return 12 in the first run. Also, if it assumes the size of 'c_int' is 8, then it should return 24 in the second run. Thanks in advance. ---------- components: ctypes messages: 332355 nosy: Amir Aslan Haghrah priority: normal severity: normal status: open title: Issue in sizeof() function type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 13:33:28 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 22 Dec 2018 18:33:28 +0000 Subject: [New-bugs-announce] [issue35563] Doc: warnings.rst - add links to references Message-ID: <1545503608.31.0.0770528567349.issue35563@roundup.psfhosted.org> New submission from Cheryl Sabella : In the docs for the warnings module, there is some text referencing other areas of the documentation that would be more helpful as links.
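For the ctypes sizeof() report (issue35562) above: both results are consistent with C struct alignment rules, which ctypes deliberately mirrors. A sketch demonstrating this; the sizes assume a typical platform where c_int is 4 bytes and c_double requires 8-byte alignment:

```python
from ctypes import Structure, c_int, c_double, sizeof

class IntDouble(Structure):
    _fields_ = [('KEY', c_int), ('VALUE', c_double)]

class IntIntDouble(Structure):
    _fields_ = [('Index', c_int), ('KEY', c_int), ('VALUE', c_double)]

class IntDoublePacked(Structure):
    _pack_ = 1   # disable padding, like #pragma pack(1)
    _fields_ = [('KEY', c_int), ('VALUE', c_double)]

print(sizeof(IntDouble))        # 4 (int) + 4 (padding) + 8 (double) = 16
print(sizeof(IntIntDouble))     # 4 + 4 + 8 = 16; the second int fills the hole
print(sizeof(IntDoublePacked))  # 4 + 8 = 12 once padding is disabled
```

So c_int is 4 bytes in both runs; the 4 bytes that "disappear" in the second struct were padding in the first.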
---------- assignee: docs at python components: Documentation messages: 332362 nosy: cheryl.sabella, docs at python priority: normal severity: normal status: open title: Doc: warnings.rst - add links to references versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 16:29:58 2018 From: report at bugs.python.org (jfbu) Date: Sat, 22 Dec 2018 21:29:58 +0000 Subject: [New-bugs-announce] [issue35564] [DOC] Sphinx 2.0 will require master_doc variable set in conf.py Message-ID: <1545514198.61.0.0770528567349.issue35564@roundup.psfhosted.org> New submission from jfbu : When building the CPython doc with the master branch of the Sphinx dev repo (the future Sphinx 2.0), one gets this warning: WARNING: Since v2.0, Sphinx uses "index" as master_doc by default. Please add "master_doc = 'contents'" to your conf.py. The fix will be to do as Sphinx says :) ---------- assignee: docs at python components: Documentation messages: 332371 nosy: docs at python, jfbu priority: normal severity: normal status: open title: [DOC] Sphinx 2.0 will require master_doc variable set in conf.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 20:19:21 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 23 Dec 2018 01:19:21 +0000 Subject: [New-bugs-announce] [issue35565] Add detail to an assertion failure message in wsgiref Message-ID: <1545527961.69.0.0770528567349.issue35565@roundup.psfhosted.org> New submission from Raymond Hettinger : On line 236 in Lib/wsgiref/handlers.py, we get the assertion message, "Hop-by-hop headers not allowed". That message should show the *name* and *value* that triggered the failure. Otherwise, it is difficult to know which header caused the problem (in my case, it was Connection: close).
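A sketch of what the suggested message could look like. This is not the actual handlers.py patch; wsgiref.util.is_hop_by_hop is the real helper used for this check, while check_header is a stand-in for the surrounding code:

```python
from wsgiref.util import is_hop_by_hop

def check_header(name, value):
    # Sketch of the suggested message: include the offending name and value.
    assert not is_hop_by_hop(name), (
        "Hop-by-hop header not allowed: %s: %s" % (name, value))

check_header('Content-Type', 'text/plain')   # passes silently
try:
    check_header('Connection', 'close')
except AssertionError as exc:
    print(exc)
```

With the name and value in the message, a failure immediately points at the offending header instead of forcing the user to hunt through their response headers.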
---------- assignee: cheryl.sabella components: Library (Lib) keywords: easy messages: 332378 nosy: cheryl.sabella, rhettinger priority: normal severity: normal status: open title: Add detail to an assertion failure message in wsgiref type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 20:23:12 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sun, 23 Dec 2018 01:23:12 +0000 Subject: [New-bugs-announce] [issue35566] DOC: Add links to annotation glossary term Message-ID: <1545528192.75.0.0770528567349.issue35566@roundup.psfhosted.org> New submission from Cheryl Sabella : Add links to the glossary term when `annotation` is used. ---------- assignee: docs at python components: Documentation messages: 332379 nosy: cheryl.sabella, docs at python priority: normal severity: normal status: open title: DOC: Add links to annotation glossary term type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 20:24:22 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 23 Dec 2018 01:24:22 +0000 Subject: [New-bugs-announce] [issue35567] Convert membership test from dict-of-constants to a set Message-ID: <1545528262.75.0.0770528567349.issue35567@roundup.psfhosted.org> New submission from Raymond Hettinger : On line 164 in Lib/wsgiref/util.py, there is a dictionary called *_hoppish* that should be a set object.
---------- assignee: cheryl.sabella components: Library (Lib) keywords: easy messages: 332380 nosy: cheryl.sabella, rhettinger priority: normal severity: normal status: open title: Convert membership test from dict-of-constants to a set versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 22 22:55:56 2018 From: report at bugs.python.org (Nathaniel Smith) Date: Sun, 23 Dec 2018 03:55:56 +0000 Subject: [New-bugs-announce] [issue35568] Expose the C raise() function in the signal module, for use on Windows Message-ID: <1545537356.71.0.0770528567349.issue35568@roundup.psfhosted.org> New submission from Nathaniel Smith : Suppose we want to test how a program responds to control-C. We'll want to write a test that delivers a SIGINT to our process at a time of our choosing, and then checks what happens. For example, asyncio and Trio both have tests like this, and Trio even does a similar thing at run-time to avoid dropping signals in an edge case [1]. So, how can we deliver a signal to our process? On POSIX platforms, you can use `os.kill(os.getpid(), signal.SIGINT)`, and that works fine. But on Windows, things are much more complicated: https://github.com/python/cpython/pull/11274#issuecomment-449543725 The simplest solution is to use the raise() function. On POSIX, raise(signum) is just a shorthand for kill(getpid(), signum). But, that's only on POSIX. On Windows, kill() doesn't even exist... but raise() does. In fact raise() is specified in C89, so *every* C runtime has to provide raise(), no matter what OS it runs on. So, you might think, that's ok, if we need to generate synthetic signals on Windows then we'll just use ctypes/cffi to access raise(). *But*, Windows has multiple C runtime libraries (for example: regular and debug), and you have to load raise() from the same library that Python is linked against. 
And I don't know of any way for a Python script to figure out which runtime it's linked against. (I know how to detect whether the interpreter is configured in debug mode, but that's not necessarily the same as being linked against the debug CRT.) So on the one platform where you really need to use raise(), there's AFAICT no reliable way to get access to it. This would all be much simpler if the signal module wrapped the raise() function, so that we could just do 'signal.raise_(signal.SIGINT)'. We should do that. ------- [1] Specifically, consider the following case (I'll use asyncio terminology for simplicity): (1) the user calls loop.add_signal_handler(...) to register a custom signal handler. (2) a signal arrives, and is written to the wakeup pipe. (3) but, before the loop reads the wakeup pipe, the user calls loop.remove_signal_handler(...) to remove the custom handler and restore the original signal settings. (4) now the loop reads the wakeup pipe, and discovers that a signal has arrived, that it no longer knows how to handle. Now what? In this case trio uses raise() to redeliver the signal, so that the new signal handler has a chance to run. 
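The POSIX half of what Nathaniel describes can be sketched with ctypes; this is the easy case, since the report's whole point is that selecting the matching CRT on Windows is the hard part. CDLL(None) here assumes a POSIX system where libc symbols are visible in the running process. (Python 3.8 did eventually gain signal.raise_signal() from this report.)

```python
import ctypes
import signal

received = []
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))

# On POSIX, raise(signum) is equivalent to kill(getpid(), signum).
# CDLL(None) loads symbols from the running process, which includes libc.
libc = ctypes.CDLL(None)
getattr(libc, "raise")(signal.SIGUSR1)   # "raise" is a keyword, hence getattr

# The Python-level handler runs at the next bytecode checkpoint.
for _ in range(1_000_000):
    if received:
        break

print(received == [signal.SIGUSR1])
```

On Windows none of this is reliable, because ctypes may load a different CRT than the one Python is linked against, which is exactly the gap the proposed signal-module wrapper closes.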
---------- messages: 332382 nosy: njs, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Expose the C raise() function in the signal module, for use on Windows type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 07:00:47 2018 From: report at bugs.python.org (chrysn) Date: Sun, 23 Dec 2018 12:00:47 +0000 Subject: [New-bugs-announce] [issue35569] OSX: Enable IPV6_RECVPKTINFO Message-ID: <1545566447.14.0.0770528567349.issue35569@roundup.psfhosted.org> New submission from chrysn : Python builds on MacOS do not expose the IPV6_RECVPKTINFO flag specified in [RFC3542], which is required for UDP protocols that need control over their servers' sending ports, like [CoAP]. While I don't own Apple hardware and thus can't test it, the [nginx] code indicates that this API is available on OSX and is just gated behind `-D__APPLE_USE_RFC_3542`. Searching the web for that define indicates that other interpreted languages and applications use the flag as well (PHP, Ruby; PowerDNS, nmap, libcoap). Please consider enabling this on future releases of Python on OSX.
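A quick probe for whether a given build exposes the flag; on Linux builds the constant is generally present, while on macOS builds it is the one gated behind the define above:

```python
import socket

# RFC 3542 ancillary-data constants; whether they exist on the socket
# module depends on how CPython was built for the platform.
for name in ("IPV6_RECVPKTINFO", "IPV6_PKTINFO"):
    print(name, hasattr(socket, name))
```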
[RFC3542]: https://tools.ietf.org/html/rfc3542 [CoAP]: https://github.com/chrysn/aiocoap/issues/69 [nginx]: http://hg.nginx.org/nginx/rev/9fb994513776 ---------- components: IO messages: 332389 nosy: chrysn priority: normal severity: normal status: open title: OSX: Enable IPV6_RECVPKTINFO type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 07:58:37 2018 From: report at bugs.python.org (Hanno Boeck) Date: Sun, 23 Dec 2018 12:58:37 +0000 Subject: [New-bugs-announce] [issue35570] 2to3 creates code using deprecated imp module Message-ID: <1545569917.75.0.0770528567349.issue35570@roundup.psfhosted.org> New submission from Hanno Boeck : 2to3 (in python 3.6.6) will rewrite the reload function to use the imp module. However according to [1] "Deprecated since version 3.4: The imp package is pending deprecation in favor of importlib." Also running the code with warnings enabled will show a deprecation warning. 
Example, take this minimal script:

#!/usr/bin/python
import sys
reload(sys)

Running 2to3 on it ends up with:

#!/usr/bin/python
import sys
import imp
imp.reload(sys)

$ PYTHONWARNINGS=d python3 foo.py
foo.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp

[1] https://docs.python.org/3/library/imp.html ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 332390 nosy: hanno priority: normal severity: normal status: open title: 2to3 creates code using deprecated imp module versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 13:05:07 2018 From: report at bugs.python.org (Stefan Volz) Date: Sun, 23 Dec 2018 18:05:07 +0000 Subject: [New-bugs-announce] [issue35571] Parallel Timeout Class Message-ID: <1545588307.5.0.0770528567349.issue35571@roundup.psfhosted.org> New submission from Stefan Volz : Hello, I'm currently writing my finals project using Python and needed a feature that threading.Timer could nearly but not quite fulfill: execute a function after a given time *with arguments provided and have the timer resettable*. So I did it myself, and today I had the idea that it may be a good addition to the standard library. The class is attached; at the bottom is a small test.
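The attached Timeout.py is not reproduced in the archive, but a resettable timer in the spirit of the proposal can be sketched on top of threading.Timer. The class and names here are hypothetical, not the submitted code:

```python
import threading

class ResettableTimer:
    """Call func(*args, **kwargs) after `interval` seconds.

    reset() cancels the pending timer and restarts the countdown.
    """

    def __init__(self, interval, func, *args, **kwargs):
        self.interval = interval
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self._timer = None

    def start(self):
        self._timer = threading.Timer(
            self.interval, self.func, self.args, self.kwargs)
        self._timer.start()

    def reset(self):
        if self._timer is not None:
            self._timer.cancel()
        self.start()

    def cancel(self):
        if self._timer is not None:
            self._timer.cancel()

done = threading.Event()
t = ResettableTimer(0.05, done.set)
t.start()
t.reset()        # restart the countdown; the callback still fires once
done.wait(2)
print(done.is_set())
```

Spawning a fresh threading.Timer on each reset is the simple design; a single long-lived thread waiting on an Event with a movable deadline would avoid the per-reset thread cost.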
---------- components: Extension Modules files: Timeout.py messages: 332392 nosy: Stefan Volz priority: normal severity: normal status: open title: Parallel Timeout Class type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48014/Timeout.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 22:05:29 2018 From: report at bugs.python.org (Solomon Ucko) Date: Mon, 24 Dec 2018 03:05:29 +0000 Subject: [New-bugs-announce] [issue35572] Logging module cleanup Message-ID: <1545620729.24.0.0770528567349.issue35572@roundup.psfhosted.org> New submission from Solomon Ucko : The logging module should be changed to use snake_case (as opposed to camelCase). Also, logging.basicConfig should list keyword arguments and defaults in the argument list, as opposed to using `**kwargs` and `dict.pop` (for readability and improved inspection capabilities). These should both be relatively easy changes to make. The case conversion should leave the camelCase versions deprecated but kept for backwards compatibility (as in the operator module).
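A backward-compatible flavor of the renaming could add snake_case aliases next to the camelCase originals. A hypothetical sketch; these aliases are not part of the stdlib:

```python
import logging

# Hypothetical snake_case aliases alongside the existing camelCase API.
logging.get_logger = logging.getLogger
logging.Logger.set_level = logging.Logger.setLevel
logging.Logger.add_handler = logging.Logger.addHandler

log = logging.get_logger("demo")
log.set_level(logging.DEBUG)
print(log.level == logging.DEBUG)
```

Because the aliases are plain attribute bindings to the same functions, old camelCase callers and new snake_case callers share one implementation during any deprecation period.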
---------- components: Extension Modules messages: 332401 nosy: Solomon Ucko priority: normal severity: normal status: open title: Logging module cleanup type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 22:53:06 2018 From: report at bugs.python.org (Divya Rani) Date: Mon, 24 Dec 2018 03:53:06 +0000 Subject: [New-bugs-announce] [issue35573] is_HDN is returns false positive and false negative value for two test cases respectively Message-ID: <1545623586.83.0.0770528567349.issue35573@roundup.psfhosted.org> Change by Divya Rani : ---------- components: Library (Lib) nosy: Divya Rani priority: normal severity: normal status: open title: is_HDN is returns false positive and false negative value for two test cases respectively type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 22:53:26 2018 From: report at bugs.python.org (Solomon Ucko) Date: Mon, 24 Dec 2018 03:53:26 +0000 Subject: [New-bugs-announce] [issue35574] Coconut support Message-ID: <1545623606.14.0.0770528567349.issue35574@roundup.psfhosted.org> New submission from Solomon Ucko : Any chance we could integrate [Coconut](http://coconut-lang.org/) into Python? Any sane Python code should work with Coconut, and Coconut allows making code *so* much more readable. IMO, the reason not many people use Coconut is that they haven't heard of it and because of its lack of IDE support. (I would use it if it had more IDE support than just syntax highlighting.) The reason it has so little IDE support is that not many people use it. Making it a part of Python would alleviate these concerns.
---------- messages: 332402 nosy: Solomon Ucko, benjamin.peterson, brett.cannon, yselivanov priority: normal severity: normal status: open title: Coconut support type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 22:54:19 2018 From: report at bugs.python.org (Solomon Ucko) Date: Mon, 24 Dec 2018 03:54:19 +0000 Subject: [New-bugs-announce] [issue35575] Improved range syntax Message-ID: <1545623659.43.0.0770528567349.issue35575@roundup.psfhosted.org> New submission from Solomon Ucko : 3 independent but related proposals. (#4 requires #3.) The main issue for #2 and #4 is the readability of a mix of square and round brackets, especially within nested brackets. This would be less of an issue with [Coconut support](https://bugs.python.org/issue35574).

#1. Range inclusive/exclusive keyword arguments (mostly backward compatible)

Inclusive/exclusive options for range as keyword arguments (defaulting to `inc_start=True, inc_stop=False`). Code that processes range objects will ignore these unless using `in` tests. The semantics would be as follows:

```python
class range:
    ...
    def __iter__(self):
        if self.inc_start:
            yield self.start
        i = self.start + self.step
        while i < self.stop if self.step > 0 else i > self.stop:
            yield i
            i += self.step
        if self.inc_stop and i == self.stop:
            yield i
```

This would allow for control over the slightly controversial decision of inclusivity/exclusivity for ranges on a case-by-case basis. Any existing code that creates ranges would not be impacted.

#2. Range syntax (fully backward compatible)

Maybe `(start:stop)`, `(start:stop]`, `[start:stop)` and `[start:stop]` could be used to represent ranges? (`(` = exclusive; `[` = inclusive.) Step notation would also be legal. (E.g. `(start:stop:step)`.) This would allow for a concise, familiar notation for ranges.

#3.
Slice inclusive/exclusive keyword arguments (mostly backward compatible) This is analogous to #1, except with `slice` instead of `range`. #4. Slice inclusive/exclusive syntax (would require a __future__ in Python 3) As opposed to forcing half-open intervals, a mix of round parentheses and square brackets could be allowed to be used for slices, analogously to #2. Since square brackets with a colon currently represent half-open intervals, this would have to require a __future__ import in Python 3. This could become the default in Python 4. ---------- components: Interpreter Core messages: 332403 nosy: Solomon Ucko, benjamin.peterson, brett.cannon, yselivanov priority: normal severity: normal status: open title: Improved range syntax type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 23 23:52:32 2018 From: report at bugs.python.org (Divya Rani) Date: Mon, 24 Dec 2018 04:52:32 +0000 Subject: [New-bugs-announce] [issue35576] function splitextTest does not return expected value Message-ID: <1545627152.16.0.0770528567349.issue35576@roundup.psfhosted.org> New submission from Divya Rani : 1. For input ".blah." output is "." 2. For input "..." output is "..." results produced by the function are wrong according to the test suite provided by guava. 1. https://github.com/google/guava/blob/1e072a7922a0b3f7b45b9f53405a233834175177/guava-tests/test/com/google/common/io/FilesTest.java#L644 2. 
https://github.com/google/guava/blob/1e072a7922a0b3f7b45b9f53405a233834175177/guava-tests/test/com/google/common/io/FilesTest.java#L628 ---------- components: Library (Lib) messages: 332407 nosy: Divya Rani priority: normal severity: normal status: open title: function splitextTest does not return expected value type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 24 10:19:00 2018 From: report at bugs.python.org (Adnan Umer) Date: Mon, 24 Dec 2018 15:19:00 +0000 Subject: [New-bugs-announce] [issue35577] side_effect mocked method lose reference to instance Message-ID: <1545664740.73.0.712150888896.issue35577@roundup.psfhosted.org> New submission from Adnan Umer : When a method/bound function is mocked and a side_effect is supplied to it, the side_effect function doesn't get a reference to the instance. Suppose we have something like this:

class SomeClass:
    def do_something(self, x):
        pass

def some_function(x):
    obj = SomeClass()
    y = obj.do_something(x)
    return y

And the test for some_function will be:

def do_something_side_effect(x):
    return x

def test_some_function():
    with mock.patch("SomeClass.do_something") as do_something_mock:
        do_something_mock.side_effect = do_something_side_effect
        assert some_function(1)

Here the do_something_side_effect function will not have access to the SomeClass instance.
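For the report above, mock's autospeccing already covers this case: with autospec=True the patched method keeps the signature of the original, and self is passed through to the side_effect. A sketch using patch.object on a locally defined class (which sidesteps the need for an importable dotted path):

```python
from unittest import mock

class SomeClass:
    def do_something(self, x):
        return None

def some_function(x):
    obj = SomeClass()
    return obj.do_something(x)

seen = []

def side_effect(self, x):   # with autospec=True, self is passed in
    seen.append(self)
    return x

with mock.patch.object(SomeClass, "do_something", autospec=True,
                       side_effect=side_effect):
    result = some_function(1)

print(result)                           # 1
print(isinstance(seen[0], SomeClass))   # True
```

Because the side_effect returns a value (not mock.DEFAULT), that value is also what the mocked call returns, so the production code sees `1` as before.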
---------- components: Library (Lib) messages: 332463 nosy: Adnan Umer priority: normal severity: normal status: open title: side_effect mocked method lose reference to instance type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 24 11:41:25 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 24 Dec 2018 16:41:25 +0000 Subject: [New-bugs-announce] [issue35578] Add test for Argument Clinic converters Message-ID: <1545669685.38.0.712150888896.issue35578@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently Argument Clinic converters are tested by running Argument Clinic on the CPython source tree. If it generates incorrect code, then it contains bugs. But not all combinations of standard converters and options are used in the stdlib. The programming interface of Argument Clinic is complex, and it is hard to write tests for testing only specific functionality. The simplest way of testing Argument Clinic is to write a C file containing declarations and generated code for all test cases, although this does not allow testing error cases. The proposed PR adds Lib/test/clinic_test.c which contains tests for all standard converters. It will be extended in bpo-20180 (PR #9828) and bpo-23867.
---------- components: Argument Clinic, Tests messages: 332493 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Add test for Argument Clinic converters type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 24 11:44:00 2018 From: report at bugs.python.org (Antoine Wecxsteen) Date: Mon, 24 Dec 2018 16:44:00 +0000 Subject: [New-bugs-announce] [issue35579] Typo in in asyncio-task documentation Message-ID: <1545669840.38.0.712150888896.issue35579@roundup.psfhosted.org> New submission from Antoine Wecxsteen : I believe there is a typo in the library/asyncio-task documentation https://docs.python.org/3.8/library/asyncio-task.html#scheduling-from-other-threads "Unlike other asyncio functions this functions requires the loop argument to be passed explicitly." It should be "this function", without "s". ---------- assignee: docs at python components: Documentation, asyncio messages: 332495 nosy: Antoine Wecxsteen, asvetlov, docs at python, eric.araujo, ezio.melotti, mdk, willingc, yselivanov priority: normal severity: normal status: open title: Typo in in asyncio-task documentation versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 24 18:41:44 2018 From: report at bugs.python.org (Jeff Robbins) Date: Mon, 24 Dec 2018 23:41:44 +0000 Subject: [New-bugs-announce] [issue35580] Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default? Message-ID: <1545694904.74.0.712150888896.issue35580@roundup.psfhosted.org> New submission from Jeff Robbins : By default, the __init__ function of IocpProactor in windows_events.py calls CreateIoCompletionPort with a 4th argument of 0xffffffff, yet MSDN doesn't document this as a valid argument. 
https://docs.microsoft.com/en-us/windows/desktop/fileio/createiocompletionport It looks like the 4th arg (NumberOfConcurrentThreads) is meant to be either a positive integer or 0. 0 is a special value meaning "If this parameter is zero, the system allows as many concurrently running threads as there are processors in the system." Why does asyncio use 0xffffffff instead as the default value? ---------- components: asyncio messages: 332498 nosy: asvetlov, jeffr at livedata.com, yselivanov priority: normal severity: normal status: open title: Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default? versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 25 07:52:37 2018 From: report at bugs.python.org (Sebastian Rittau) Date: Tue, 25 Dec 2018 12:52:37 +0000 Subject: [New-bugs-announce] [issue35581] Document @typing.type_check_only Message-ID: <1545742357.2.0.712150888896.issue35581@roundup.psfhosted.org> New submission from Sebastian Rittau : Document @typing.type_check_only per https://github.com/python/typing/issues/597. ---------- assignee: docs at python components: Documentation messages: 332508 nosy: docs at python, srittau priority: normal severity: normal status: open title: Document @typing.type_check_only versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 25 09:40:52 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 25 Dec 2018 14:40:52 +0000 Subject: [New-bugs-announce] [issue35582] Argument Clinic: inline parsing code for functions with only positional parameters Message-ID: <1545748852.85.0.712150888896.issue35582@roundup.psfhosted.org> New submission from Serhiy Storchaka : This is a continuation of issue23867. 
The proposed PR makes Argument Clinic inline the parsing code for functions with only positional parameters, i.e. functions that currently use PyArg_ParseTuple() and _PyArg_ParseStack(). This saves the time spent parsing format strings and calling a few levels of functions. It can also save C stack, because of the smaller number of nested (and potentially recursive) calls, the smaller number of variables, and getting rid of a stack-allocated array for "objects" which would need to be deallocated or cleaned up if overall parsing fails. PyArg_ParseTuple() and _PyArg_ParseStack() will still be used if there are parameters for which an inlined converter is not supported. Unsupported converters are the deprecated Py_UNICODE API ("u", "Z"), encoded strings ("es", "et"), obsolete string/bytes converters ("y", "s#", "z#"), and some custom converters (DWORD, HANDLE, pid_t, intptr_t). ---------- components: Argument Clinic messages: 332510 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Argument Clinic: inline parsing code for functions with only positional parameters type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 25 14:39:03 2018 From: report at bugs.python.org (Gagan) Date: Tue, 25 Dec 2018 19:39:03 +0000 Subject: [New-bugs-announce] [issue35583] python 3.7.x interpreter segmentation fault (3.6.x/2.7.x compile fine) Message-ID: <1545766743.84.0.712150888896.issue35583@roundup.psfhosted.org> New submission from Gagan : Hello everyone. I am currently trying to compile Python 3.7.x on a MIPS(el, little endian; 32 bit) platform, and I am having issues producing a functioning interpreter to continue the compilation. I have no issue compiling either 2.7.x or 3.6.x versions on this machine, and I am using 3.6.7.
here is the dump from the installation: -------- root at DD-WRT:/mnt/work/Python-3.7.2# make platform LD_LIBRARY_PATH=/mnt/work/Python-3.7.2:/lib:/usr/lib:/usr/local/lib:/jffs/lib:/jffs/usr/lib:/jffs/usr/local/lib:/mmc/lib:/mmc/usr/lib:/opt/lib:/opt/usr/lib ./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi /bin/sh: line 5: 399 Segmentation fault LD_LIBRARY_PATH=/mnt/work/Python-3.7.2:/lib:/usr/lib:/usr/local/lib:/jffs/lib:/jffs/usr/lib:/jffs/usr/local/lib:/mmc/lib:/mmc/usr/lib:/opt/lib:/opt/usr/lib ./python -E -S -m sysconfig --generate-posix-vars generate-posix-vars failed make: *** [Makefile:604: pybuilddir.txt] Error 1 ----------- and here is the valgrind output.: ----- root at DD-WRT:/mnt/work/Python-3.7.2# valgrind ./python -E -S -m sysconfig --generate-posix-vars ==1246== Memcheck, a memory error detector ==1246== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==1246== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info ==1246== Command: ./python -E -S -m sysconfig --generate-posix-vars ==1246== ==1246== Conditional jump or move depends on uninitialised value(s) ==1246== at 0x401FAF4: ??? (in /lib/ld-2.28.so) ==1246== by 0x400942C: ??? (in /lib/ld-2.28.so) ==1246== ==1246== Conditional jump or move depends on uninitialised value(s) ==1246== at 0x4020104: ??? (in /lib/ld-2.28.so) ==1246== by 0x4020024: ??? (in /lib/ld-2.28.so) ==1246== ==1246== Conditional jump or move depends on uninitialised value(s) ==1246== at 0x4007E94: ??? (in /lib/ld-2.28.so) ==1246== by 0x4007DC4: ??? (in /lib/ld-2.28.so) ==1246== ==1246== Conditional jump or move depends on uninitialised value(s) ==1246== at 0x4020104: ??? (in /lib/ld-2.28.so) ==1246== by 0x400D950: ??? 
(in /lib/ld-2.28.so)
==1246== Could not find platform dependent libraries Consider setting $PYTHONHOME to [:]
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D15A8C: pymain_wstrdup (main.c:501)
==1246==    by 0x4D15A8C: pymain_init_cmdline_argv (main.c:561)
==1246==    by 0x4D15A8C: pymain_read_conf (main.c:2024)
==1246==    by 0x4D15A8C: pymain_cmdline_impl (main.c:2642)
==1246==    by 0x4D15A8C: pymain_cmdline (main.c:2707)
==1246==    by 0x4D15A8C: pymain_init (main.c:2748)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D0D688: copy_wstrlist (main.c:1237)
==1246==    by 0x4D171E0: pymain_init_core_argv (main.c:1263)
==1246==    by 0x4D171E0: pymain_read_conf_impl (main.c:1955)
==1246==    by 0x4D171E0: pymain_read_conf (main.c:2028)
==1246==    by 0x4D171E0: pymain_cmdline_impl (main.c:2642)
==1246==    by 0x4D171E0: pymain_cmdline (main.c:2707)
==1246==    by 0x4D171E0: pymain_init (main.c:2748)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D0D688: copy_wstrlist (main.c:1237)
==1246==    by 0x4D17B78: pymain_cmdline_impl (main.c:2663)
==1246==    by 0x4D17B78: pymain_cmdline (main.c:2707)
==1246==    by 0x4D17B78: pymain_init (main.c:2748)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D11C3C: _PyCoreConfig_Copy (main.c:2415)
==1246==    by 0x4C9D630: _Py_InitializeCore (pylifecycle.c:847)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D0D688: copy_wstrlist (main.c:1237)
==1246==    by 0x4D11C9C: _PyCoreConfig_Copy (main.c:2417)
==1246==    by 0x4C9D630: _Py_InitializeCore (pylifecycle.c:847)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Conditional jump or move depends on uninitialised value(s)
==1246==    at 0x4858AEC: wcslen (vg_replace_strmem.c:1856)
==1246==    by 0x4A51CEC: _PyMem_RawWcsdup (obmalloc.c:569)
==1246==    by 0x4D11C3C: _PyCoreConfig_Copy (main.c:2415)
==1246==    by 0x4C9C910: _Py_InitializeCore_impl (pylifecycle.c:711)
==1246==    by 0x4C9D820: _Py_InitializeCore (pylifecycle.c:859)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==    by 0x4D1AE04: _Py_UnixMain (main.c:2822)
==1246==    by 0x400D88: main (in /tmp/mnt/work/Python-3.7.2/python)
==1246==
==1246== Invalid read of size 4
==1246==    at 0x4A4A864: address_in_range (obmalloc.c:1338)
==1246==    by 0x4A4A864: pymalloc_free.isra.0 (obmalloc.c:1610)
==1246==    by 0x4A4BBF8: _PyObject_Free (obmalloc.c:1815)
==1246==    by 0x4A4B160: PyObject_Free (obmalloc.c:640)
==1246==    by 0x4A08E04: dictresize (dictobject.c:1196)
==1246==    by 0x4A0C66C: insertion_resize (dictobject.c:994)
==1246==    by 0x4A0C66C: insertdict (dictobject.c:1038)
==1246==    by 0x4A0D084: PyDict_SetItem (dictobject.c:1463)
==1246==    by 0x4A946A0: add_operators (typeobject.c:7428)
==1246==    by 0x4A946A0: PyType_Ready (typeobject.c:5188)
==1246==    by 0x4A3D954: _Py_ReadyTypes (object.c:1713)
==1246==    by 0x4C9CA20: _Py_InitializeCore_impl (pylifecycle.c:733)
==1246==    by 0x4C9D820: _Py_InitializeCore (pylifecycle.c:859)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246==    by 0x4D18BA0: pymain_main (main.c:2782)
==1246==  Address 0x58f3010 is 8 bytes before a block of size 1,168 alloc'd
==1246==    at 0x484C740: malloc (vg_replace_malloc.c:299)
==1246==    by 0x4A4BFA8: PyMem_RawMalloc (obmalloc.c:503)
==1246==    by 0x4A4FF80: _PyObject_Malloc (obmalloc.c:1560)
==1246==    by 0x4A4B484: PyObject_Malloc (obmalloc.c:616)
==1246==    by 0x4A07B34: new_keys_object (dictobject.c:534)
==1246==    by 0x4A085C8: dictresize (dictobject.c:1141)
==1246==    by 0x4A0C66C: insertion_resize (dictobject.c:994)
==1246==    by 0x4A0C66C: insertdict (dictobject.c:1038)
==1246==    by 0x4A0D084: PyDict_SetItem (dictobject.c:1463)
==1246==    by 0x4A946A0: add_operators (typeobject.c:7428)
==1246==    by 0x4A946A0: PyType_Ready (typeobject.c:5188)
==1246==    by 0x4A3D920: _Py_ReadyTypes (object.c:1710)
==1246==    by 0x4C9CA20: _Py_InitializeCore_impl (pylifecycle.c:733)
==1246==    by 0x4C9D820: _Py_InitializeCore (pylifecycle.c:859)
==1246==
==1246== Invalid read of size 4
==1246==    at 0x4C51A9C: PyErr_SetObject (errors.c:89)
==1246==    by 0x4C52394: PyErr_FormatV (errors.c:837)
==1246==    by 0x4C525B0: PyErr_Format (errors.c:852)
==1246==    by 0x4AB6E8C: find_maxchar_surrogates (unicodeobject.c:1637)
==1246==    by 0x4AE3D74: PyUnicode_FromWideChar (unicodeobject.c:2045)
==1246==    by 0x4AE5CD0: unicode_decode_locale (unicodeobject.c:3610)
==1246==    by 0x4B1C3BC: PyUnicode_DecodeFSDefaultAndSize (unicodeobject.c:3658)
==1246==    by 0x4CED434: PyThread_GetInfo (thread.c:216)
==1246==    by 0x4CE1C04: _PySys_BeginInit (sysmodule.c:2414)
==1246==    by 0x4C9CCE8: _Py_InitializeCore_impl (pylifecycle.c:753)
==1246==    by 0x4C9D820: _Py_InitializeCore (pylifecycle.c:859)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246==  Address 0x54 is not stack'd, malloc'd or (recently) free'd
==1246==
==1246==
==1246== Process terminating with default action of signal 11 (SIGSEGV)
==1246==  Access not within mapped region at address 0x54
==1246==    at 0x4C51A9C: PyErr_SetObject (errors.c:89)
==1246==    by 0x4C52394: PyErr_FormatV (errors.c:837)
==1246==    by 0x4C525B0: PyErr_Format (errors.c:852)
==1246==    by 0x4AB6E8C: find_maxchar_surrogates (unicodeobject.c:1637)
==1246==    by 0x4AE3D74: PyUnicode_FromWideChar (unicodeobject.c:2045)
==1246==    by 0x4AE5CD0: unicode_decode_locale (unicodeobject.c:3610)
==1246==    by 0x4B1C3BC: PyUnicode_DecodeFSDefaultAndSize (unicodeobject.c:3658)
==1246==    by 0x4CED434: PyThread_GetInfo (thread.c:216)
==1246==    by 0x4CE1C04: _PySys_BeginInit (sysmodule.c:2414)
==1246==    by 0x4C9CCE8: _Py_InitializeCore_impl (pylifecycle.c:753)
==1246==    by 0x4C9D820: _Py_InitializeCore (pylifecycle.c:859)
==1246==    by 0x4D150D4: pymain_init (main.c:2760)
==1246== If you believe this happened as a result of a stack
==1246== overflow in your program's main thread (unlikely but
==1246== possible), you can try to increase the size of the
==1246== main thread stack using the --main-stacksize= flag.
==1246== The main thread stack size used in this run was 8388608.
==1246==
==1246== HEAP SUMMARY:
==1246==     in use at exit: 40,028 bytes in 100 blocks
==1246==   total heap usage: 197 allocs, 97 frees, 60,589 bytes allocated
==1246==
==1246== LEAK SUMMARY:
==1246==    definitely lost: 0 bytes in 0 blocks
==1246==    indirectly lost: 0 bytes in 0 blocks
==1246==      possibly lost: 0 bytes in 0 blocks
==1246==    still reachable: 40,028 bytes in 100 blocks
==1246==         suppressed: 0 bytes in 0 blocks
==1246== Rerun with --leak-check=full to see details of leaked memory
==1246==
==1246== For counts of detected and suppressed errors, rerun with: -v
==1246== Use --track-origins=yes to see where uninitialised values come from
==1246== ERROR SUMMARY: 45 errors from 12 contexts (suppressed: 0 from 0)
Segmentation fault
------
It seems to me there is an issue with the new Modules/getpath.c, Objects/pathconfig.c, and Modules/main.c compared to the 3.6.x versions? Any help would be appreciated! Thank you.
---------- components: Interpreter Core messages: 332514 nosy: broly priority: normal severity: normal status: open title: python 3.7.x interpreter segmentation fault (3.6.x/2.7.x compile fine) type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Dec 25 17:10:48 2018 From: report at bugs.python.org (Julien Palard) Date: Tue, 25 Dec 2018 22:10:48 +0000 Subject: [New-bugs-announce] [issue35584] Wrong statement about ^ in howto/regex.rst Message-ID: <1545775848.12.0.712150888896.issue35584@roundup.psfhosted.org>
New submission from Julien Palard :
In howto/regex.rst I read:
> '^' outside a character class will simply match the '^' character.
This looks wrong: '^' is the "begin anchor", a metacharacter that typically won't match a literal '^'. I propose to simply remove the statement, if nobody finds a better idea.
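A quick interactive check (added here for illustration; not part of the original report) confirms this reading — outside a character class, '^' anchors at the start of the string rather than matching a literal caret, which must be escaped instead:

```python
import re

# '^' outside a character class is a zero-width anchor at the start:
assert re.search(r"^", "a^b").start() == 0
# It does not match the literal caret at index 1; an escaped r'\^' does:
assert re.search(r"\^", "a^b").start() == 1
# Anchored patterns only match at the beginning of the string:
assert re.search(r"^b", "a^b") is None
# Inside a character class, a leading '^' negates the set instead:
assert re.findall(r"[^a]", "a^b") == ["^", "b"]
```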
---------- assignee: docs at python components: Documentation messages: 332520 nosy: Vaibhav Gupta, docs at python, mdk priority: normal severity: normal status: open title: Wrong statement about ^ in howto/regex.rst versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Dec 25 19:26:24 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Wed, 26 Dec 2018 00:26:24 +0000 Subject: [New-bugs-announce] [issue35585] Speedup Enum lookup Message-ID: <1545783984.81.0.712150888896.issue35585@roundup.psfhosted.org>
New submission from Andrew Svetlov :
Constructing enums by-value (e.g. http.HTTPStatus(200)) performs two dict lookups:

    if value in cls._value2member_map_:
        return cls._value2member_map_[value]

Changing the code to just return cls._value2member_map_[value] and catch KeyError can speed up the fast path a little.
---------- components: Library (Lib) messages: 332524 nosy: asvetlov priority: normal severity: normal status: open title: Speedup Enum lookup versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Dec 25 22:59:43 2018 From: report at bugs.python.org (kbengine) Date: Wed, 26 Dec 2018 03:59:43 +0000 Subject: [New-bugs-announce] [issue35586] Open pyexpat compilation, Make shows error (missing separator) Message-ID: <1545796783.34.0.712150888896.issue35586@roundup.psfhosted.org>
New submission from kbengine :
Python 3.7.2. My compilation steps:

1. Modify Modules/Setup.dist, uncomment pyexpat:
   pyexpat expat/xmlparse.c expat/xmlrole.c expat/xmltok.c pyexpat.c -I$(srcdir)/Modules/expat -DHAVE_EXPAT_CONFIG_H -DXML_POOR_ENTROPY=1 -DUSE_PYEXPAT_CAPI
2. ./configure
3. make

Makefile:272: *** missing separator. Stop.
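A hedged guess at the cause (an assumption, not a confirmed diagnosis of this report): the Modules/Setup.dist module lines are propagated into the generated Makefile, and GNU make reports "missing separator" for a line it cannot parse — typically an entry that was wrapped across lines without trailing backslashes, or indented with spaces where none are expected. The uncommented pyexpat entry must remain one logical line, e.g.:

```makefile
# Modules/Setup.dist -- keep the pyexpat entry as one logical line;
# if it is wrapped, every continued line needs a trailing backslash:
pyexpat expat/xmlparse.c expat/xmlrole.c expat/xmltok.c pyexpat.c \
    -I$(srcdir)/Modules/expat -DHAVE_EXPAT_CONFIG_H \
    -DXML_POOR_ENTROPY=1 -DUSE_PYEXPAT_CAPI
```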
---------------------------------------------------- ---------- components: Extension Modules messages: 332529 nosy: kbengine priority: normal severity: normal status: open title: Open pyexpat compilation, Make shows error (missing separator) type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Dec 25 23:21:34 2018 From: report at bugs.python.org (embed372) Date: Wed, 26 Dec 2018 04:21:34 +0000 Subject: [New-bugs-announce] [issue35587] Python 3.7.2 Embed - zipimport.ZipImportError Message-ID: <1545798094.3.0.712150888896.issue35587@roundup.psfhosted.org>
New submission from embed372 :
[zip file]
=-----------------------------------------------------------
python37.zip
.
# Uncomment to run site.main() automatically
#import site
------------------------------------------------------------
Fatal Python error: initfsencoding: unable to load the file system codec
zipimport.ZipImportError: can't find module 'encodings'
Current thread 0x0000228c (most recent call first):

[directory: extracted from python37.zip]
=-----------------------------------------------------------
python37
.
# Uncomment to run site.main() automatically
#import site
------------------------------------------------------------
works well ...

[duplication of vcruntime140.dll in Python 3.7.2 embed]
------------------------------------------------------------
python37._pth
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
python3.dll
python37.dll
sqlite3.dll
vcruntime140.dll [*]
vcruntime140.dll [*] <-- _distutils_findvs.pyd (?)
python.exe pythonw.exe _asyncio.pyd _bz2.pyd _contextvars.pyd _ctypes.pyd _decimal.pyd _elementtree.pyd _hashlib.pyd _lzma.pyd _msi.pyd _multiprocessing.pyd _overlapped.pyd _queue.pyd _socket.pyd _sqlite3.pyd _ssl.pyd pyexpat.pyd select.pyd unicodedata.pyd winsound.pyd python37.zip ------------------------------------------------------------ ---------- components: Build messages: 332530 nosy: embed372 priority: normal severity: normal status: open title: Python 3.7.2 Embed - zipimport.ZipImportError type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 26 04:39:32 2018 From: report at bugs.python.org (Stefan Behnel) Date: Wed, 26 Dec 2018 09:39:32 +0000 Subject: [New-bugs-announce] [issue35588] Speed up mod/divmod for Fraction type Message-ID: <1545817172.57.0.712150888896.issue35588@roundup.psfhosted.org> New submission from Stefan Behnel : Spelling out the numerator/denominator calculation in the __mod__ special method, and actually implementing __divmod__, speeds up both operations by 2-3x. This is due to avoiding repeated Fraction instantiation and normalisation, as well as less arithmetic operations. 
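The spelled-out calculation can be sketched as follows (an illustrative reimplementation for exposition, not the actual fractions.py patch): for a = na/da and b = nb/db, divmod(a, b) reduces to a single integer divmod of na*db by da*nb, with the remainder renormalised over da*db.

```python
from fractions import Fraction

def fraction_divmod(a, b):
    """divmod for Fractions via a single integer divmod (illustrative sketch)."""
    na, da = a.numerator, a.denominator
    nb, db = b.numerator, b.denominator
    # a/b == (na*db) / (da*nb); Fraction keeps denominators positive,
    # so floor division and remainder signs come out right.
    div, n_mod = divmod(na * db, da * nb)
    return div, Fraction(n_mod, da * db)

# Matches the built-in behaviour on the value used in the timings:
assert fraction_divmod(Fraction(-7, 3), Fraction(3, 2)) == divmod(Fraction(-7, 3), Fraction(3, 2))
```

Only one Fraction is instantiated and normalised (for the remainder), which is where most of the saving comes from.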
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'a%b'
50000 loops, best of 5: 9.53 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'a%3'
50000 loops, best of 5: 6.61 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'divmod(a, b)'
20000 loops, best of 5: 14.1 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'divmod(a, 3)'
20000 loops, best of 5: 10.2 usec per loop

$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'a%b'
100000 loops, best of 5: 2.96 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'a%3'
100000 loops, best of 5: 2.78 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'divmod(a, b)'
100000 loops, best of 5: 3.93 usec per loop
$ ./python -m timeit -s 'from fractions import Fraction as F; a = F(-7, 3); b = F(3, 2)' 'divmod(a, 3)'
50000 loops, best of 5: 3.82 usec per loop
---------- components: Library (Lib) messages: 332533 nosy: scoder priority: normal severity: normal status: open title: Speed up mod/divmod for Fraction type type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Dec 26 05:04:33 2018 From: report at bugs.python.org (Huazuo Gao) Date: Wed, 26 Dec 2018 10:04:33 +0000 Subject: [New-bugs-announce] [issue35589] BaseSelectorEventLoop.sock_sendall() performance regression: extra copy of data Message-ID: <1545818673.36.0.712150888896.issue35589@roundup.psfhosted.org>
New submission from Huazuo Gao :
Prior to PR 10419, sock_sendall does not make a copy of the data.
PR 10419 introduced an extra copy, which may cause problems for code that sends a huge chunk of data simultaneously to many peers. The relevant change is: https://github.com/python/cpython/pull/10419/files#diff-2d64b02252335b37396e00e56fa66984R443
Below is a test that shows the regression between 3.7.1 and 3.8-dev:
---
import asyncio
import socket
import os
from subprocess import check_output

loop = asyncio.get_event_loop()

def mem_usage():
    pid = str(os.getpid())
    print(check_output(['ps', '-o', 'rss,comm'], text=True))

async def main():
    data = bytearray(10*10**6)
    data = memoryview(data)
    tasks = []
    for i in range(100):
        s1, s2 = socket.socketpair()
        s1.setblocking(False)
        s2.setblocking(False)
        tasks.append(loop.create_task(loop.sock_sendall(s1, data)))
        tasks.append(loop.create_task(loop.sock_recv(s2, 1)))
    await asyncio.sleep(0.1)
    mem_usage()
    for t in tasks:
        t.cancel()
    await asyncio.wait(tasks)

loop.run_until_complete(main())
---
result (RSS in KiB):
3.7.1: 24724
3.8-dev: 979184
---------- components: asyncio messages: 332534 nosy: Huazuo Gao, asvetlov, yselivanov priority: normal severity: normal status: open title: BaseSelectorEventLoop.sock_sendall() performance regression: extra copy of data type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Dec 26 06:00:53 2018 From: report at bugs.python.org (jso2460) Date: Wed, 26 Dec 2018 11:00:53 +0000 Subject: [New-bugs-announce] [issue35590] logging.handlers.SysLogHandler with STREAM connects in constructor without timeout Message-ID: <1545822053.48.0.712150888896.issue35590@roundup.psfhosted.org>
New submission from jso2460 :
logging.handlers.SysLogHandler in __init__ contains the following code, where the socket is created and then connected right away. This seems to provide no way to specify a connection timeout for the socket being created.
    sock = socket.socket(af, socktype, proto)
    if socktype == socket.SOCK_STREAM:
        sock.connect(sa)

I believe adding an argument to specify an optional timeout would be appreciated, i.e., optionally calling sock.settimeout(..), something like:

    sock = socket.socket(af, socktype, proto)
    if timeout:
        sock.settimeout(timeout)
    if socktype == socket.SOCK_STREAM:
        sock.connect(sa)
---------- messages: 332536 nosy: jso2460 priority: normal severity: normal status: open title: logging.handlers.SysLogHandler with STREAM connects in constructor without timeout versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Dec 26 13:50:50 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Wed, 26 Dec 2018 18:50:50 +0000 Subject: [New-bugs-announce] [issue35591] IDLE: Traceback on Find Selection Message-ID: <1545850250.73.0.720523781121.issue35591@roundup.psfhosted.org>
New submission from Cheryl Sabella :
This probably isn't a traceback that's likely to happen, but I wanted to document it since I was able to recreate it. To recreate: in a new shell, do Select All, then Find Selection.
Exception in Tkinter callback
Traceback (most recent call last):
  File "N:\projects\cpython\lib\tkinter\__init__.py", line 1883, in __call__
    return self.func(*args)
  File "N:\projects\cpython\lib\idlelib\editor.py", line 644, in find_selection_event
    search.find_selection(self.text)
  File "N:\projects\cpython\lib\idlelib\search.py", line 25, in find_selection
    return _setup(text).find_selection(text)
  File "N:\projects\cpython\lib\idlelib\search.py", line 72, in find_selection
    return self.find_again(text)
  File "N:\projects\cpython\lib\idlelib\search.py", line 65, in find_again
    self.bell()
AttributeError: 'SearchDialog' object has no attribute 'bell'
---------- assignee: terry.reedy components: IDLE messages: 332559 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Traceback on Find Selection type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Dec 26 17:21:56 2018 From: report at bugs.python.org (Gunasekar Rajendran) Date: Wed, 26 Dec 2018 22:21:56 +0000 Subject: [New-bugs-announce] [issue35592] Not able to use Python 3.7.2 due to SSL issue Message-ID: <1545862916.5.0.100281497043.issue35592@roundup.psfhosted.org>
New submission from Gunasekar Rajendran :
I am trying to run Python code in Visual Studio Code and get the below error while trying to connect to a MySQL db:

Python installation has no SSL support
---------- assignee: christian.heimes components: SSL files: dbconnect.py messages: 332564 nosy: Gunasekar Rajendran, christian.heimes priority: normal severity: normal status: open title: Not able to use Python 3.7.2 due to SSL issue type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48015/dbconnect.py _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 08:36:19 2018 From: report at bugs.python.org (Emmanuel Arias) Date: Thu, 27 Dec 2018 13:36:19 +0000 Subject: [New-bugs-announce] [issue35593] Register standard browser: Chrome Message-ID: <1545917779.17.0.514318907905.issue35593@roundup.psfhosted.org>
New submission from Emmanuel Arias :
Hi! This issue is open to discuss the PR: https://github.com/python/cpython/pull/11327
This PR proposes adding "chrome" to webbrowser.register_standard_browsers for Windows. IMO this is a reasonable new feature simply because Chrome is commonly used.
---------- components: Library (Lib) messages: 332586 nosy: eamanu priority: normal severity: normal status: open title: Register standard browser: Chrome type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 10:28:33 2018 From: report at bugs.python.org (Daugeras) Date: Thu, 27 Dec 2018 15:28:33 +0000 Subject: [New-bugs-announce] [issue35594] Python script generating Segmentation Fault Message-ID: <1545924513.31.0.604345478264.issue35594@roundup.psfhosted.org>
New submission from Daugeras :
A Python script generates a segmentation fault and I cannot find the source of the problem. How does one debug a segfault simply in Python? Are there recommended coding practices to avoid segmentation faults?
I wrote a script (1600 lines) to systematically download CSV files from a source and format the collected data. The script works very well for 100-200 files, but it systematically crashes with a segmentation fault message after a while.
- The crash always happens at the same spot in the script, with no understandable cause.
- I run it on Mac OS X, but the crash also happens on Ubuntu Linux and Debian 9.
- If I run the Pandas routines that crash during my script on single files, they work properly. The crash only happens when I loop the script 100-200 times.
- I checked every variable content and the constructors (__init__) and they seem to be fine.
Code is too long to be pasted, but available on demand. Expected result should be execution to the end. Instead, it crashes after 100-200 iterations.
---------- messages: 332592 nosy: Daugeras priority: normal severity: normal status: open title: Python script generating Segmentation Fault type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 10:50:04 2018 From: report at bugs.python.org (Scott Arciszewski) Date: Thu, 27 Dec 2018 15:50:04 +0000 Subject: [New-bugs-announce] [issue35595] Add sys flag to always show full paths in stack traces (instead of relative paths) Message-ID: <1545925804.0.0.366688236093.issue35595@roundup.psfhosted.org>
New submission from Scott Arciszewski :
I have a wsgi script writing to a log file. The contents look like this (truncated):

  File "build/bdist.linux-x86_64/egg/trac/ticket/query.py", line 284, in _count
    % sql, args)[0][0]
  File "build/bdist.linux-x86_64/egg/trac/db/api.py", line 122, in execute
    return db.execute(query, params)
  File "build/bdist.linux-x86_64/egg/trac/db/util.py", line 128, in execute
    cursor.execute(query, params if params is not None else [])

When confronted with this logfile, I have no idea where build/bdist.linux-x86_64 lives. Rather than hoping a well-timed lsof is adequate to catch the actual script path, I'd like to be able to set a sys.flag to always log the real, full path of the .py script either instead of, or alongside, the file path.
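Pending such a flag, a workaround can be sketched in user code (the helper name is hypothetical, and os.path.abspath only recovers the real location when the process working directory matches the one the module was loaded from):

```python
import os
import sys
import traceback

def format_tb_full_paths(exc_type, exc, tb):
    """Format a traceback with each frame's filename made absolute (sketch)."""
    stack = traceback.extract_tb(tb)
    for frame in stack:
        # FrameSummary.filename is a plain attribute, so it can be rewritten
        # before the summary is formatted.
        frame.filename = os.path.abspath(frame.filename)
    lines = ["Traceback (most recent call last):\n"]
    lines += traceback.format_list(stack)
    lines += traceback.format_exception_only(exc_type, exc)
    return "".join(lines)

# Possible installation point for a wsgi wrapper (illustrative):
# sys.excepthook = lambda *args: sys.stderr.write(format_tb_full_paths(*args))
```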
---------- messages: 332593 nosy: Scott Arciszewski priority: normal severity: normal status: open title: Add sys flag to always show full paths in stack traces (instead of relative paths) versions: Python 2.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 11:18:27 2018 From: report at bugs.python.org (hyu) Date: Thu, 27 Dec 2018 16:18:27 +0000 Subject: [New-bugs-announce] [issue35596] Fatal Python error: initfsencoding: unable to load the file system codec zipimport.ZipImportError: can't find module 'encodings' Message-ID: <1545927507.21.0.851075424973.issue35596@roundup.psfhosted.org>
New submission from hyu :
>python
Fatal Python error: initfsencoding: unable to load the file system codec
zipimport.ZipImportError: can't find module 'encodings'

There are two vcruntime140.dll with no binary diff.

   Date      Time    Attr         Size   Compressed  Name
------------------- ----- ------------ ------------  ----------------
2018-12-10 22:06:34 .....        80128        45532  vcruntime140.dll
...
2018-12-10 22:06:34 .....        80128        45532  vcruntime140.dll

Repeated downloads. Checked both versions:
https://www.python.org/ftp/python/3.7.2/python-3.7.2-embed-amd64.zip
https://www.python.org/ftp/python/3.7.2/python-3.7.2-embed-win32.zip
Searched and read release and doc. Checked bugs since yesterday.
---------- messages: 332595 nosy: hyu priority: normal severity: normal status: open title: Fatal Python error: initfsencoding: unable to load the file system codec zipimport.ZipImportError: can't find module 'encodings' versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 11:19:48 2018 From: report at bugs.python.org (Fady shehata) Date: Thu, 27 Dec 2018 16:19:48 +0000 Subject: [New-bugs-announce] [issue35597] Bug in Python's compiler Message-ID: <1545927588.67.0.841657320947.issue35597@roundup.psfhosted.org>
New submission from Fady shehata :
This code is completely right, and its trace is right and gives the correct result, but your compiler gives me an incorrect result. If we input 1010 it must give 10; the compiler gives ten, but uncollected, like 11111122. And if we input 111, its output by your compiler is 1123.
---------- components: Windows files: Capture.PNG messages: 332596 nosy: Fady shehata, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Bug in Python's compiler type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file48017/Capture.PNG _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 15:27:57 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Thu, 27 Dec 2018 20:27:57 +0000 Subject: [New-bugs-announce] [issue35598] IDLE: Modernize config_key module Message-ID: <1545942477.09.0.499754382121.issue35598@roundup.psfhosted.org>
New submission from Cheryl Sabella :
* Apply PEP8 naming convention.
* Add additional tests to get coverage (close?) to 100%.
* Update to more meaningful names.
* Switch to ttk widgets and revise imports.
* Split toplevel class into a window class and frame class(es).
---------- assignee: terry.reedy components: IDLE messages: 332614 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Modernize config_key module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 20:22:04 2018 From: report at bugs.python.org (Jeff Robbins) Date: Fri, 28 Dec 2018 01:22:04 +0000 Subject: [New-bugs-announce] [issue35599] asyncio windows_events.py IocpProactor bug Message-ID: <1545960124.09.0.284140631848.issue35599@roundup.psfhosted.org>
New submission from Jeff Robbins :
The close() method of IocpProactor in windows_events.py contains this code:

    while self._cache:
        if not self._poll(1):
            logger.debug('taking long time to close proactor')

The bug is that self._poll() has *no* return statements in it, and so returns None no matter what. This makes the "if not" part confusing, at best. At worst, it might reflect a disconnect with the author's intent. I added a bit more logging and re-ran my test:

    while self._cache:
        logger.debug('before self._poll(1)')
        if not self._poll(1):
            logger.debug('taking long time to close proactor')
            logger.debug(f'{self._cache}')

logger output:

20:16:30.247 (D) MainThread asyncio: before self._poll(1)
20:16:30.248 (D) MainThread asyncio: taking long time to close proactor
20:16:30.249 (D) MainThread asyncio: {}

Obviously 1 millisecond isn't "taking a long time to close proactor". Also of interest, the _cache is now empty. I think the intent of the author must have been to check whether the call to ._poll() cleared out any possible pending futures, or waited the full 1 second. Since ._poll() doesn't return any value to differentiate whether it waited the full wait period or not, the "if" is wrong, and, I think, the intent of the author isn't met by this code.
But, separate from speculating on "intent", the debug output of "taking a long time to close proactor" seems wrong, and the .close() code seems disconnected from the implementation of ._poll() in the same class IocpProactor in windows_events.py.
---------- components: asyncio messages: 332632 nosy: asvetlov, jeffr at livedata.com, yselivanov priority: normal severity: normal status: open title: asyncio windows_events.py IocpProactor bug versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 21:47:05 2018 From: report at bugs.python.org (Dima Tisnek) Date: Fri, 28 Dec 2018 02:47:05 +0000 Subject: [New-bugs-announce] [issue35600] Expose siphash Message-ID: <1545965225.21.0.451953823903.issue35600@roundup.psfhosted.org>
New submission from Dima Tisnek :
Just recently, I found myself rolling my own simple hash for strings. (The task was to distribute tests across executors, stably across invocations, with no external input and no security requirement.) In the old days I'd just use `hash(some_variable)`, but of course now I cannot. `hashlib.sha*` seemed too complex, and I ended up with something like `sum(map(ord, str(some_variable)))`. How much easier this would be if the `siphash` implementation that CPython uses internally were available to me!
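Until something like that is exposed, a stable cross-invocation bucketing helper can be built on hashlib (a sketch of the use case described above; blake2b is just one convenient choice and not the proposed siphash API):

```python
import hashlib

def stable_bucket(key: str, buckets: int) -> int:
    """Map a string to a bucket, stable across runs (unlike built-in hash(),
    which is randomized by PYTHONHASHSEED)."""
    digest = hashlib.blake2b(key.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big") % buckets

# The same test name always lands on the same executor:
assert stable_bucket("test_foo", 4) == stable_bucket("test_foo", 4)
```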
---------- components: Extension Modules messages: 332633 nosy: Dima.Tisnek priority: normal severity: normal status: open title: Expose siphash type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Dec 27 22:45:59 2018 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 28 Dec 2018 03:45:59 +0000 Subject: [New-bugs-announce] [issue35601] Race condition in test_signal_handling_args x86-64 High Sierra 3.75 Message-ID: <1545968759.07.0.590219498678.issue35601@roundup.psfhosted.org>
New submission from Pablo Galindo Salgado :
There is a race condition in test_signal_handling_args (test.test_asyncio.test_events.KqueueEventLoopTests) on macOS: https://buildbot.python.org/all/#/builders/147/builds/546/steps/4/logs/stdio

======================================================================
FAIL: test_signal_handling_args (test.test_asyncio.test_events.KqueueEventLoopTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.7.billenstein-sierra/build/Lib/test/test_asyncio/test_events.py", line 595, in test_signal_handling_args
    self.assertEqual(caught, 1)
AssertionError: 0 != 1

It seems that SIGALRM is never received in the 0.5 seconds of timeout:

    def test_signal_handling_args(self):
        some_args = (42,)
        caught = 0

        def my_handler(*args):
            nonlocal caught
            caught += 1
            self.assertEqual(args, some_args)

        self.loop.add_signal_handler(signal.SIGALRM, my_handler, *some_args)
        signal.setitimer(signal.ITIMER_REAL, 0.1, 0)  # Send SIGALRM once.
        self.loop.call_later(0.5, self.loop.stop)
        self.loop.run_forever()
        self.assertEqual(caught, 1)

Maybe we should set up a much bigger timeout and make the handler stop the event loop.
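The suggested rework can be sketched as a standalone script (illustrative names, Unix-only; the handler completes a future and the wait is merely bounded generously instead of racing a fixed 0.5 s stop):

```python
import asyncio
import signal

async def catch_one_alarm():
    loop = asyncio.get_running_loop()
    caught = 0
    fired = loop.create_future()

    def handler():
        nonlocal caught
        caught += 1
        fired.set_result(None)

    loop.add_signal_handler(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, 0.05, 0)  # send SIGALRM once
    # Finish as soon as the handler runs; the large timeout is only a
    # safety bound, not something the test normally waits for.
    await asyncio.wait_for(fired, timeout=60)
    loop.remove_signal_handler(signal.SIGALRM)
    return caught
```

Run with `asyncio.run(catch_one_alarm())`; a slow or loaded machine no longer causes a spurious failure, only a (very unlikely) 60 s timeout.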
---------- components: Tests, asyncio, macOS messages: 332637 nosy: asvetlov, ned.deily, pablogsal, ronaldoussoren, yselivanov priority: normal severity: normal status: open title: Race condition in test_signal_handling_args x86-64 High Sierra 3.75 type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 27 23:21:50 2018 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 28 Dec 2018 04:21:50 +0000 Subject: [New-bugs-announce] [issue35602] cleanup code may fail in test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests Message-ID: <1545970910.26.0.567308733358.issue35602@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : According to this buildbot: https://buildbot.python.org/all/#/builders/170/builds/218/steps/4/logs/stdio there is some cleanup failure in test_sock_sendfile_os_error_first_call: test_sock_sendfile_os_error_first_call (test.test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests) ... /usr/home/buildbot/python/3.7.koobs-freebsd10.nondebug/build/Lib/asyncio/selector_events.py:655: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=10> source=self) ResourceWarning: Enable tracemalloc to get the object allocation traceback ok The code that is supposed to clean up the resource is: def cleanup(): if proto.transport is not None: # can be None if the task was cancelled before # connection_made callback proto.transport.close() self.run_loop(proto.wait_closed()) apparently, proto.transport may be None and then it fails to be closed even if the test succeeds (I assume because the condition in the comment happens or something else) and then the transport is not properly closed. 
---------- components: Tests, asyncio messages: 332642 nosy: asvetlov, pablogsal, yselivanov priority: normal severity: normal status: open title: cleanup code may fail in test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 04:18:00 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 28 Dec 2018 09:18:00 +0000 Subject: [New-bugs-announce] [issue35603] table header in output of difflib.HtmlDiff.make_table is not escaped and can be rendered as code in the browser Message-ID: <1545988680.87.0.068090288545.issue35603@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : HtmlDiff.make_table takes fromdesc and todesc that are not escaped causing problems while rendering html when they contain tags like fromdesc="", todesc="". There is no validation for them to be filenames so they could be arbitrary strings. Since contents of the table are escaped I think it's good to escape headers too since they might lead to the browser to execute the headers as code and potential XSS. I don't think it's worthy of adding security type so I am adding behavior. Feel free to change the type if needed. I could see no test failures on applying my patch and I will push a PR with a test. Current output : ( and are not escaped in the output) $ python3 -c 'import difflib; print(difflib.HtmlDiff().make_table([" hello "], [" hello "], fromdesc="", todesc=""))'


t1<a> hello </a>t1<b> hello </b>
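The fix suggested here is essentially to pass the descriptions through the same kind of escaping already applied to the table body. The stdlib's `html.escape` illustrates the transformation (shown only as an illustration; difflib uses its own internal replacement table, and the header values below are hypothetical stand-ins for the elided ones above):

```python
from html import escape

# hypothetical header strings standing in for the unescaped fromdesc/todesc
fromdesc = 't1<a> hello </a>'
todesc = 't1<b> hello </b>'

print(escape(fromdesc))  # -> t1&lt;a&gt; hello &lt;/a&gt;
print(escape(todesc))    # -> t1&lt;b&gt; hello &lt;/b&gt;
```

Once the headers are escaped like this before interpolation into the table template, the browser renders them as literal text instead of markup.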
---------- components: Library (Lib) messages: 332648 nosy: xtreak priority: normal severity: normal status: open title: table header in output of difflib.HtmlDiff.make_table is not escaped and can be rendered as code in the browser type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 05:20:21 2018 From: report at bugs.python.org (manchun kumar) Date: Fri, 28 Dec 2018 10:20:21 +0000 Subject: [New-bugs-announce] [issue35604] Is python used more than Java Nowadays? Message-ID: <1545992421.85.0.893262731106.issue35604@roundup.psfhosted.org> New submission from manchun kumar : Whether we should choose Python or Java! Which one is easy or which is used more often? These questions are natural if you belong to this industry where everyone is talking about this. Programmers community endlessly debate about these two languages and the discussion about which language is best seems endless. But yes, Python has become the talk of the town and used by so many programmers. I think it's totally worth looking differences and similarities they share, advantages, disadvantages, ideal use cases and other factors. Using Java or Python actually depends on the experience and interest of developer like experience with respect to coding style, language, application-development requirements, etc. I have consistently observed both of them are equally important but yes, in today's condition, it is great to say that I know Python like a Pro. Python adds a lot of value to your profile. This is all due to the huge demand for Data mining, big data, machine learning, IOT, Artificial intelligence, etc. The reason for Python's preference is the widely spread scientific community, academic institutions and other efficient sources that have availed thousands of different ways to learn python with ease. This also makes people bag good job opportunities with Python.
Thanks to the tons of libraries! https://www.janbasktraining.com/blog/python-programming-tutorial/ Businesses, organizations as well as the people have very well accepted that Python is easy to use and can be efficiently used for tricky tasks like writing a mobile app, making high-end games as well as writing web server without a hitch. ---------- assignee: docs at python components: Documentation messages: 332651 nosy: docs at python, manchun priority: normal severity: normal status: open title: Is python used more than Java Nowadays? type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 12:00:36 2018 From: report at bugs.python.org (Anthony Sottile) Date: Fri, 28 Dec 2018 17:00:36 +0000 Subject: [New-bugs-announce] [issue35605] backported patch requires new sphinx, minimum sphinx version was not bumped Message-ID: <1546016436.32.0.942702382011.issue35605@roundup.psfhosted.org> New submission from Anthony Sottile : Noticed this while packaging 3.6.8 for deadsnakes (ubuntu ppa) This patch: https://github.com/python/cpython/pull/11251 Requires a version of sphinx where `sphinx.util.logging.getLogger` is available. 
It appears that the first version in which that was available was 1.6: https://github.com/sphinx-doc/sphinx/commit/6d4e6454093953943e79d4db6efeb17390870e62#diff-db360b033c6011189d978db1a4b7dcb7 For example, on ubuntu xenial (16.04) the newest packaged version of python3-sphinx available is 1.3.6 (released 2016-02) which satisfies the "minimum version": https://github.com/python/cpython/blob/3c6b436a57893dd1fae4e072768f41a199076252/Doc/conf.py#L36-L37 I hacked around it in this case by just using `logging.getLogger`: https://github.com/deadsnakes/python3.6/commit/9ba2234f35087a4bf67e3aecf2bd8dd0e3f67186 I'm not sure what the right answer is here; bumping the minimum version will make it _harder_ for packagers -- though I understand continuing to support old (2 years ago) things can be cumbersome. ---------- assignee: docs at python components: Build, Documentation messages: 332665 nosy: Anthony Sottile, docs at python priority: normal severity: normal status: open title: backported patch requires new sphinx, minimum sphinx version was not bumped versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 14:29:12 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 28 Dec 2018 19:29:12 +0000 Subject: [New-bugs-announce] [issue35606] Add prod() function to the math module Message-ID: <1546025352.75.0.868813504694.issue35606@roundup.psfhosted.org> New submission from Raymond Hettinger : Back in 2007, a user suggested a built-in prod() function with an API similar to the built-in sum() function. The proposal was rejected because it wasn't needed often enough to justify a builtin function. See https://bugs.python.org/issue1093 Though prod() doesn't meet the threshold for a builtin, it would be reasonable to add this to the math module (or an imath module).
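For reference, the proposed semantics can be sketched in pure Python with functools.reduce (a sketch only; an eventual math-module version would presumably be implemented in C):

```python
from functools import reduce
import operator

def prod(iterable, *, start=1):
    """Return start multiplied by every item of iterable (start for an empty one)."""
    return reduce(operator.mul, iterable, start)

print(prod([1, 2, 3, 4]))  # -> 24
print(prod([]))            # -> 1
```

Like sum(), it takes a start value, here defaulting to the multiplicative identity so that the empty product is 1.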
Personally, I've wanted and written this function on several occasions (for applications such as multiplying probabilities). On stack overflow, it has been a popular question with recurring interest. See https://stackoverflow.com/questions/7948291/ and https://stackoverflow.com/questions/595374 ---------- components: Library (Lib) messages: 332676 nosy: aleax, mark.dickinson, rhettinger, tim.peters priority: normal severity: normal status: open title: Add prod() function to the math module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 21:50:03 2018 From: report at bugs.python.org (=?utf-8?b?55m956iz5bmz?=) Date: Sat, 29 Dec 2018 02:50:03 +0000 Subject: [New-bugs-announce] [issue35607] python3 multiprocessing queue deadlock when use thread and process at same time Message-ID: <1546051803.5.0.180583804125.issue35607@roundup.psfhosted.org> Change by 白稳平 : ---------- components: Library (Lib) files: ??????.txt nosy: 白稳平 priority: normal severity: normal status: open title: python3 multiprocessing queue deadlock when use thread and process at same time type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48020/??????.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 21:53:42 2018 From: report at bugs.python.org (=?utf-8?b?55m956iz5bmz?=) Date: Sat, 29 Dec 2018 02:53:42 +0000 Subject: [New-bugs-announce] [issue35608] python3 multiprocessing queue deadlock when use thread and process at same time Message-ID: <1546052022.11.0.920360901938.issue35608@roundup.psfhosted.org> New submission from 白稳平
: I used multiple processes to handle a CPU-intensive task. I have a thread reading data from stdin and putting it into input_queue, a thread getting data from output_queue and writing it to stdout, and multiple processes getting data from the input queue, handling it, and putting the results into output_queue. But it sometimes blocks forever. I suspect it is because of inappropriate use of the multiprocessing Queue, but I don't know how to solve it; can anyone help me? My code is as follows:

import multiprocessing
import sys
import threading
import time
from multiprocessing import Queue

def write_to_stdout(result_queue: Queue):
    """write queue data to stdout"""
    while True:
        data = result_queue.get()
        if data is StopIteration:
            break
        sys.stdout.write(data)
        sys.stdout.flush()

def read_from_stdin(queue):
    """read data from stdin, put it in queue for process handling"""
    try:
        for line in sys.stdin:
            queue.put(line)
    finally:
        queue.put(StopIteration)

def process_func(input_queue, result_queue):
    """get data from input_queue, handle it, put result into result_queue"""
    try:
        while True:
            data = input_queue.get()
            if data is StopIteration:
                break
            # cpu intensive task, use time.sleep instead
            # result = compute_something(data)
            time.sleep(0.1)
            result_queue.put(data)
    finally:
        # ensure every process ends
        input_queue.put(StopIteration)

if __name__ == '__main__':
    # queue for reading from stdin
    input_queue = Queue(1000)
    # queue for writing to stdout
    result_queue = Queue(1000)
    # thread reading data from stdin
    input_thread = threading.Thread(target=read_from_stdin, args=(input_queue,))
    input_thread.start()
    # thread writing data to stdout
    output_thread = threading.Thread(target=write_to_stdout, args=(result_queue,))
    output_thread.start()
    processes = []
    cpu_count = multiprocessing.cpu_count()
    # start multiple processes to handle the cpu intensive task
    for i in range(cpu_count):
        proc = multiprocessing.Process(target=process_func,
                                       args=(input_queue, result_queue))
        proc.start()
        processes.append(proc)
    # join the input thread
    input_thread.join()
    # join all task processes
    for proc in processes:
        proc.join()
    # ensure the output thread ends
    result_queue.put(StopIteration)
    # join the output thread
    output_thread.join()

test environment: python3.6.5, ubuntu16.04 ---------- components: Library (Lib) messages: 332691 nosy: davin, pitrou, 白稳平 priority: normal severity: normal status: open title: python3 multiprocessing queue deadlock when use thread and process at same time versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 28 22:05:51 2018 From: report at bugs.python.org (Emmanuel Arias) Date: Sat, 29 Dec 2018 03:05:51 +0000 Subject: [New-bugs-announce] [issue35609] Improve of abc.py docstring Message-ID: <1546052751.93.0.333388893142.issue35609@roundup.psfhosted.org> New submission from Emmanuel Arias : Hi! I prepared a little improvement: I added some sample usage, some clarifications, and deleted some unnecessary whitespace. Patch attached. Regards ---------- assignee: docs at python components: Documentation files: 0001-improve-abc.py-docstring.patch keywords: patch messages: 332693 nosy: docs at python, eamanu priority: normal severity: normal status: open title: Improve of abc.py docstring type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48021/0001-improve-abc.py-docstring.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 00:02:31 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Sat, 29 Dec 2018 05:02:31 +0000 Subject: [New-bugs-announce] [issue35610] IDLE: replace use of EditorWindow.context_use_ps1 Message-ID: <1546059751.61.0.784436952304.issue35610@roundup.psfhosted.org> New submission from Terry J. Reedy : Attribute .context_use_ps1 is False in EditorWindow and Outwin, True in PyShell. It is used to switch code paths in multiple classes.
It is equal to isinstance(self/editwin, PyShell) (which requires an import). It has the same truth value as attribute .prompt_last_line, which is '' except in PyShell. This more informative attribute was added in #31858 to consolidate all PS1 handling in PyShell. A PR for #34055 proposed to remove the setting of .context_use_ps1 and replace the uses with .prompt_last_line. I will change the title after I submit this. I am not yet sure if this is the change I want to make. ---------- assignee: terry.reedy components: IDLE messages: 332700 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal stage: patch review status: open title: IDLE: replace use of EditorWindow.context_use_ps1 type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 01:00:15 2018 From: report at bugs.python.org (David Haney) Date: Sat, 29 Dec 2018 06:00:15 +0000 Subject: [New-bugs-announce] [issue35611] open doesn't call IncrementalEncoder with final=True Message-ID: <1546063215.14.0.206805691744.issue35611@roundup.psfhosted.org> New submission from David Haney : The implementation of open relies on a codec's IncrementalEncoder; however, it never calls `encode` with final=True. This appears to violate the documentation for IncrementalEncoder.encode, which states that the last call to encode _must_ set final=True. The attached test case demonstrates this behavior. A codec "delayed" is implemented that holds the last encoded string until the next call to `encode`, at which point it returns the encoded string. When final=True, both the previous and current string are returned. When `codecs.iterencode` is used to encode a sequence of strings, the encode function is called for each element in the sequence, with final=False. encode is then called a final time with an empty string and final=True.
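The iterencode behaviour described here is easy to confirm with a small recording codec (a sketch; the 'recording' codec name and RecordingEncoder class are made up for this demonstration):

```python
import codecs

calls = []

class RecordingEncoder(codecs.IncrementalEncoder):
    def encode(self, input, final=False):
        calls.append(final)          # record the final flag of every call
        return input.encode('ascii')

def _search(name):
    # codec search function returning our recording codec for one name only
    if name == 'recording':
        return codecs.CodecInfo(
            encode=codecs.ascii_encode,
            decode=codecs.ascii_decode,
            incrementalencoder=RecordingEncoder,
            name='recording',
        )
    return None

codecs.register(_search)

out = b''.join(codecs.iterencode(['spam', 'eggs'], 'recording'))
# one call per chunk with final=False, then one trailing call with final=True
print(out, calls)  # -> b'spameggs' [False, False, True]
```

An analogous experiment with open(..., encoding='recording') shows the write() calls arriving with final=False and no final=True call on close(), which is the behaviour this report describes.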
When `open` is used to open a file stream for the encoding, each call to `write` calls `encode` with final=False, however it never calls `encode` with final=True, and it doesn't appear there's an API for forcing it to occur (for instance `flush` and `close` do not). ---------- components: IO files: test.py messages: 332701 nosy: haney priority: normal severity: normal status: open title: open doesn't call IncrementalEncoder with final=True type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48022/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 06:48:17 2018 From: report at bugs.python.org (Devika Sondhi) Date: Sat, 29 Dec 2018 11:48:17 +0000 Subject: [New-bugs-announce] [issue35612] Text wrap over text containing tab character Message-ID: <1546084097.18.0.541982298623.issue35612@roundup.psfhosted.org> New submission from Devika Sondhi : textwrap.wrap does not seem to preserve tab character ('\t') in the text if it is not separated from other characters by a space. Example: >>> textwrap.wrap("Here is\tone line of text that is going to be wrapped after 20 columns.",20) ['Here is one line of', 'text that is going', 'to be wrapped after', '20 columns.'] The tab is missing from the above output. However, for text with \t separated by space, the behavior is as expected (shown below). 
>>> textwrap.wrap("Here is \t one line of text that is going to be wrapped after 20 columns.",20) ['Here is one', 'line of text that is', 'going to be wrapped', 'after 20 columns.'] ---------- components: Tests messages: 332712 nosy: Devika Sondhi priority: normal severity: normal status: open title: Text wrap over text containing tab character versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 08:33:58 2018 From: report at bugs.python.org (Devika Sondhi) Date: Sat, 29 Dec 2018 13:33:58 +0000 Subject: [New-bugs-announce] [issue35613] Escaping string containing invalid characters as per XML Message-ID: <1546090438.84.0.909772702632.issue35613@roundup.psfhosted.org> New submission from Devika Sondhi : As per XML 1.0 and 1.1 specs, the null character is treated as invalid in an XML doc. (https://en.wikipedia.org/wiki/Valid_characters_in_XML) Shouldn't invalid xml characters be omitted while escaping? 
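One way such omission could be implemented is to strip the control characters that XML 1.0 forbids before escaping (a sketch; escape_valid is a hypothetical helper, not part of xml.sax.saxutils):

```python
import re
from xml.sax.saxutils import escape

# XML 1.0 forbids all C0 controls except tab (0x09), LF (0x0A) and CR (0x0D);
# surrogates and U+FFFE/U+FFFF are also invalid but omitted here for brevity.
_INVALID_XML = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def escape_valid(data):
    """Escape &, < and > after dropping characters invalid in XML 1.0."""
    return escape(_INVALID_XML.sub('', data))

print(escape_valid('a\u0000\u0001\u0008\u000b\u000c\u000e\u001fb'))  # -> ab
```

Whether saxutils should do this by default, or raise instead, is exactly the question this report poses.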
The current behavior (tested on Python 3.7) is as follows:

>>> from xml.sax.saxutils import escape
>>> escape("a\u0000\u0001\u0008\u000b\u000c\u000e\u001fb")
'a\x00\x01\x08\x0b\x0c\x0e\x1fb'

---------- messages: 332716 nosy: Devika Sondhi priority: normal severity: normal status: open title: Escaping string containing invalid characters as per XML versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 10:21:50 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 29 Dec 2018 15:21:50 +0000 Subject: [New-bugs-announce] [issue35614] Broken help() on metaclasses Message-ID: <1546096910.12.0.853369060665.issue35614@roundup.psfhosted.org> New submission from Serhiy Storchaka :

$ ./python -m pydoc abc
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Lib/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/serhiy/py/cpython/Lib/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2765, in <module>
    cli()
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2727, in cli
    help.help(arg)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1967, in help
    elif request: doc(request, 'Help on %s:', output=self._output)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1690, in doc
    pager(render_doc(thing, title, forceload))
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1683, in render_doc
    return title % desc + '\n\n' + renderer.document(object, name)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 385, in document
    if inspect.ismodule(object): return self.docmodule(*args)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1182, in docmodule
    contents.append(self.document(value, key, name))
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 386, in document
    if inspect.isclass(object): return self.docclass(*args)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1258, in docclass
    (str(cls.__name__) for cls in object.__subclasses__()
TypeError: descriptor '__subclasses__' of 'type' object needs an argument

$ ./python -m pydoc enum
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Lib/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/serhiy/py/cpython/Lib/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2765, in <module>
    cli()
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 2727, in cli
    help.help(arg)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1967, in help
    elif request: doc(request, 'Help on %s:', output=self._output)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1690, in doc
    pager(render_doc(thing, title, forceload))
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1683, in render_doc
    return title % desc + '\n\n' + renderer.document(object, name)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 385, in document
    if inspect.ismodule(object): return self.docmodule(*args)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1182, in docmodule
    contents.append(self.document(value, key, name))
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 386, in document
    if inspect.isclass(object): return self.docclass(*args)
  File "/home/serhiy/py/cpython/Lib/pydoc.py", line 1258, in docclass
    (str(cls.__name__) for cls in object.__subclasses__()
TypeError: descriptor '__subclasses__' of 'type' object needs an argument

"object" is a metaclass (abc.ABCMeta or enum.EnumMeta) in the tracebacks above. The regression was introduced in issue8525.
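The failure reproduces outside pydoc. On an ordinary class, `__subclasses__` is bound through the metaclass (`type`); on a metaclass, the same name is found through the class's own MRO (which contains `type`) and comes back unbound, so calling it without an argument raises the TypeError seen in the tracebacks:

```python
import abc

# ordinary class: resolved through the metaclass, so it arrives bound
assert isinstance(int.__subclasses__(), list)

# metaclass: found unbound through ABCMeta's own MRO (ABCMeta -> type -> object)
try:
    abc.ABCMeta.__subclasses__()
except TypeError as exc:
    print('TypeError:', exc)

# binding explicitly works, which is roughly what a fixed pydoc has to do
print(type.__subclasses__(abc.ABCMeta))
```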
---------- components: Library (Lib) messages: 332720 nosy: CuriousLearner, belopolsky, eric.araujo, ncoghlan, serhiy.storchaka priority: normal severity: normal status: open title: Broken help() on metaclasses type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 15:45:37 2018 From: report at bugs.python.org (Fish Wang) Date: Sat, 29 Dec 2018 20:45:37 +0000 Subject: [New-bugs-announce] [issue35615] "RuntimeError: Dictionary changed size during iteration" when copying a WeakValueDictionary Message-ID: <1546116337.91.0.423915840639.issue35615@roundup.psfhosted.org> New submission from Fish Wang : I came across this issue recently when developing a multi-threaded PySide2 (Qt) application. When I'm calling .copy() on a WeakValueDictionary, there is a high chance that my application crashes with the following stack backtrace:

------
Traceback (most recent call last):
  File "F:\angr\angr-management\angrmanagement\ui\widgets\qdisasm_graph.py", line 239, in mouseDoubleClickEvent
    block.on_mouse_doubleclicked(event.button(), self._to_graph_pos(event.pos()))
  File "F:\angr\angr-management\angrmanagement\ui\widgets\qblock.py", line 130, in on_mouse_doubleclicked
    obj.on_mouse_doubleclicked(button, pos)
  File "F:\angr\angr-management\angrmanagement\ui\widgets\qinstruction.py", line 128, in on_mouse_doubleclicked
    op.on_mouse_doubleclicked(button, pos)
  File "F:\angr\angr-management\angrmanagement\ui\widgets\qoperand.py", line 162, in on_mouse_doubleclicked
    self.disasm_view.jump_to(self._branch_target, src_ins_addr=self.insn.addr)
  File "F:\angr\angr-management\angrmanagement\ui\views\disassembly_view.py", line 258, in jump_to
    self._jump_to(addr)
  File "F:\angr\angr-management\angrmanagement\ui\views\disassembly_view.py", line 372, in _jump_to
    self._display_function(function)
  File "F:\angr\angr-management\angrmanagement\ui\views\disassembly_view.py", line 343, in _display_function
    vr = self.workspace.instance.project.analyses.VariableRecoveryFast(the_func)
  File "f:\angr\angr\angr\analyses\analysis.py", line 109, in __call__
    oself.__init__(*args, **kwargs)
  File "f:\angr\angr\angr\analyses\variable_recovery\variable_recovery_fast.py", line 618, in __init__
    self._analyze()
  File "f:\angr\angr\angr\analyses\forward_analysis.py", line 557, in _analyze
    self._analysis_core_graph()
  File "f:\angr\angr\angr\analyses\forward_analysis.py", line 580, in _analysis_core_graph
    changed, output_state = self._run_on_node(n, job_state)
  File "f:\angr\angr\angr\analyses\variable_recovery\variable_recovery_fast.py", line 712, in _run_on_node
    input_state = prev_state.merge(input_state, successor=node.addr)
  File "f:\angr\angr\angr\analyses\variable_recovery\variable_recovery_fast.py", line 488, in merge
    merged_register_region = self.register_region.copy().replace(replacements).merge(other.register_region,
  File "f:\angr\angr\angr\keyed_region.py", line 159, in copy
    kr._object_mapping = self._object_mapping.copy()
  File "D:\My Program Files\Python37\lib\weakref.py", line 174, in copy
    for key, wr in self.data.items():
RuntimeError: dictionary changed size during iteration
------

I went ahead and read the related methods in Lib\weakref.py, and it seems to me that the WeakValueDictionary.copy() method is missing the protection of an _IterationGuard: it is iterating through self.data.items(), which might have entries removed because of GC during the iteration. It seems that this crash can be fixed by wrapping the iteration with `with _IterationGuard(self):`. It worked for me in my tests. If my above analysis is correct, the following methods all require protection of _IterationGuard (which are currently missing):

- WeakValueDictionary.copy()
- WeakValueDictionary.__deepcopy__()
- WeakKeyDictionary.copy()
- WeakKeyDictionary.__deepcopy__()

Please let me know if this is a legitimate issue, in which case I will be happy to provide a patch. Thanks.
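The underlying failure mode is ordinary dict-iteration invalidation: a weakref callback fired by the garbage collector can delete an entry from self.data while copy() iterates over it. The same error can be triggered deterministically with a plain dict:

```python
d = {1: 'a', 2: 'b'}
try:
    for key, _ in d.items():
        del d[key]  # stands in for a GC-triggered weakref callback
except RuntimeError as exc:
    print(exc)  # -> dictionary changed size during iteration
```

_IterationGuard avoids this by making the weak dictionaries defer removals into a pending list while any iteration is in progress, applying them once the guard exits.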
---------- components: Library (Lib) messages: 332734 nosy: Fish Wang priority: normal severity: normal status: open title: "RuntimeError: Dictionary changed size during iteration" when copying a WeakValueDictionary type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 29 23:52:22 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 30 Dec 2018 04:52:22 +0000 Subject: [New-bugs-announce] [issue35616] Change references to '4.0'. Message-ID: <1546145542.58.0.745728606206.issue35616@roundup.psfhosted.org> New submission from Terry J. Reedy : https://docs.python.org/3/c-api/unicode.html#deprecated-py-unicode-apis says "Deprecated since version 3.3, will be removed in version 4.0." (I am aware that the quote above was written before we decided that '3.9' should be followed by '3.10' rather than '4.0' to avoid giving mis-impressions.) There is currently no plan for a '4.0' and part of the reason is that it stirs up unnecessary negative feeling in people. For example: https://stackoverflow.com/questions/53899931/why-does-an-empty-string-in-python-sometimes-take-up-49-bytes-and-sometimes-51 The second most upvoted comment (9) is "seeing a reference to a "[Python] 4.0" is giving me anxiety..." -- Mike Caron (11000+ reputation). We, as well as they, don't need this. When '4.0' was used in an asyncio deprecation, it was changed. Let us do the same elsewhere. ---------- assignee: docs at python components: Documentation messages: 332745 nosy: docs at python, terry.reedy, vstinner priority: normal severity: normal stage: needs patch status: open title: Change references to '4.0'.
versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 30 04:32:17 2018 From: report at bugs.python.org (Simon Fagerholm) Date: Sun, 30 Dec 2018 09:32:17 +0000 Subject: [New-bugs-announce] [issue35617] unittest discover does not work with implicit namespaces Message-ID: <1546162337.0.0.646769357962.issue35617@roundup.psfhosted.org> New submission from Simon Fagerholm : When "python -m unittest discover" is run in a folder that is an implicit namespace package with the structure as below, no tests are discovered. The condition that the tests must be importable from the top level directory is fulfilled and has been tested by importing the tests from the top level. I did some investigating and have a PR underway that seems to fix it. Example project structure is:

.
├── requirements.txt
├── main.py
└── tests
    ├── unit
    │   └── test_thing1.py
    ├── integration.py
    └── test_integration_thing1.py

---------- components: Library (Lib) messages: 332748 nosy: Simon Fagerholm priority: normal severity: normal status: open title: unittest discover does not work with implicit namespaces type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 30 10:37:18 2018 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 30 Dec 2018 15:37:18 +0000 Subject: [New-bugs-announce] [issue35618] Allow users to set suffix list in cookiejar policy Message-ID: <1546184238.41.0.9488315767.issue35618@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : cookiejar has a fixed set of public suffixes [0] on which cookies cannot be set when strict_domain is enabled. rfc6265 recommends rejecting cookies being set directly on domains which are public suffixes. The current list was last updated at issue1483395 (2006).
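The kind of API being requested could look roughly like the sketch below; SuffixListPolicy and its suffixes argument are hypothetical names for illustration, not a proposed interface:

```python
from http.cookiejar import DefaultCookiePolicy

class SuffixListPolicy(DefaultCookiePolicy):
    """Reject cookies whose Domain attribute is a caller-supplied public suffix."""

    def __init__(self, suffixes, **kwargs):
        super().__init__(**kwargs)
        # normalize ".co.uk" / "CO.UK" style entries to bare lowercase suffixes
        self.suffixes = {s.lstrip('.').lower() for s in suffixes}

    def set_ok(self, cookie, request):
        if cookie.domain.lstrip('.').lower() in self.suffixes:
            return False  # per rfc6265: ignore cookies set directly on a suffix
        return super().set_ok(cookie, request)
```

A caller would load an up-to-date list (for example the publicsuffix.org data file) and pass it in, with Python falling back to a bundled default set.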
Given the proliferation of public suffixes and new ones released by IANA, it's not feasible for Python to always be updated with this list. It would be good if the suffix list could be supplied when constructing the cookiejar policy, so that users can supply updated entries and Python can default to the current set, which might be updated with more common ones. An outdated list lets someone set a cookie on a public suffix, which is then sent along with all requests to domains under that suffix, causing problems. The algorithm also assumes suffixes to be of two parts like .co.uk, which is not the case today and can be improved. But that requires more work and increases the scope of the ticket. The current list is hardcoded as part of the code and is not available for extension at https://github.com/python/cpython/blob/3f5fc70c6213008243e7d605f7d8a2d8f94cf919/Lib/http/cookiejar.py#L1020 . The default policy can be extended to override this, but I think it's good to allow users to set this and to document a place, if any, where users can find updated lists. rfc6265 recommends http://publicsuffix.org/ which has a data file. Other popular implementations like Go [1] and okhttp (Java) [2] follow a similar approach where users can specify a suffix list and resort to defaults. [0] https://en.wikipedia.org/wiki/Public_Suffix_List [1] https://godoc.org/golang.org/x/net/publicsuffix [2] https://github.com/square/okhttp/blob/81d702c62d92d7dbd83c1daf620a4588b7d8e785/okhttp/src/main/java/okhttp3/internal/publicsuffix/PublicSuffixDatabase.java#L36

https://tools.ietf.org/html/rfc6265#section-5.3:

    If the user agent is configured to reject "public suffixes" and
    the domain-attribute is a public suffix:

        If the domain-attribute is identical to the canonicalized
        request-host:
            Let the domain-attribute be the empty string.
        Otherwise:
            Ignore the cookie entirely and abort these steps.
NOTE: A "public suffix" is a domain that is controlled by a public registry, such as "com", "co.uk", and "pvt.k12.wy.us". This step is essential for preventing attacker.com from disrupting the integrity of example.com by setting a cookie with a Domain attribute of "com". Unfortunately, the set of public suffixes (also known as "registry controlled domains") changes over time. If feasible, user agents SHOULD use an up-to-date public suffix list, such as the one maintained by the Mozilla project at . ---------- components: Library (Lib) messages: 332752 nosy: xtreak priority: normal severity: normal status: open title: Allow users to set suffix list in cookiejar policy type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 30 12:48:44 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 30 Dec 2018 17:48:44 +0000 Subject: [New-bugs-announce] [issue35619] Support custom data descriptors in pydoc Message-ID: <1546192124.51.0.0883005560054.issue35619@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently pydoc supports only a limited set of data descriptors: builtin member and getset descriptors (this covers slot descriptors and structseq member descriptors) and properties. But it does not fully support custom data descriptors. For example, after implementing accelerators for namedtuple fields access in issue32492, if P = namedtuple('P', 'x y'), help(P.x) will output the help for the _tuplegetter class instead of the P.x member. The proposed PR replaces checks for particular types of data descriptors with a general check. It also performs some refactoring and adds a bunch of tests.
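The "general check" mentioned here presumably boils down to something like inspect.isdatadescriptor, which recognizes any object defining `__get__` together with `__set__` or `__delete__`. The Field class below is a made-up stand-in for a custom descriptor such as namedtuple's _tuplegetter:

```python
import inspect

class Field:
    """A custom read-only data descriptor, in the spirit of _tuplegetter."""
    def __init__(self, index, doc=None):
        self.index = index
        self.__doc__ = doc
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj[self.index]
    def __set__(self, obj, value):
        raise AttributeError("can't set attribute")

f = Field(0, 'Alias for field number 0')
print(inspect.isdatadescriptor(f))  # -> True
print(isinstance(f, property))      # -> False, so a property-only check misses it
```

Checking isdatadescriptor instead of isinstance against a fixed list of descriptor types is what lets pydoc document such attributes instead of their class.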
---------- components: Library (Lib) messages: 332753 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Support custom data descriptors in pydoc type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 30 14:47:17 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 30 Dec 2018 19:47:17 +0000 Subject: [New-bugs-announce] [issue35620] asyncio test failure on appveyor Message-ID: <1546199237.74.0.757790156934.issue35620@roundup.psfhosted.org> New submission from Terry J. Reedy : https://ci.appveyor.com/project/python/cpython/builds/21296354?fullLog=true Blocked merge. Passed on Azure Pipeline +- same time. Appveyor re-run passed. Please disable or weaken the false positive tests. I will propose a different solution elsewhere, perhaps pydev. ---------- components: Tests, asyncio messages: 332757 nosy: asvetlov, pablogsal, terry.reedy, yselivanov priority: normal severity: normal status: open title: asyncio test failure on appveyor type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 30 17:59:38 2018 From: report at bugs.python.org (Stephan Hohe) Date: Sun, 30 Dec 2018 22:59:38 +0000 Subject: [New-bugs-announce] [issue35621] asyncio.create_subprocess_exec() only works with main event loop Message-ID: <1546210778.19.0.43686311162.issue35621@roundup.psfhosted.org> New submission from Stephan Hohe : `asyncio.create_subprocess_exec()` accepts a `loop` parameter, but doesn't use it to watch the child process. Instead it uses `get_event_loop_policy().get_child_watcher()`, which doesn't know about `loop` but tries to use the current default event loop.
This fails if there is no current event loop or if that loop isn't running:

```
import asyncio

async def action(loop):
    proc = await asyncio.create_subprocess_exec('echo', loop=loop)
    await proc.wait()

loop = asyncio.new_event_loop()
loop.run_until_complete(action(loop))
loop.close()
```

This crashes because the main event loop never was created:

Traceback (most recent call last):
  File "sample.py", line 8, in <module>
    loop.run_until_complete(action(loop))
  File "/home/sth/devel/cpython.vanilla/Lib/asyncio/base_events.py", line 589, in run_until_complete
    return future.result()
  File "sample.py", line 4, in action
    proc = await asyncio.create_subprocess_exec('echo', loop=loop)
  File "/home/sth/devel/cpython.vanilla/Lib/asyncio/subprocess.py", line 213, in create_subprocess_exec
    transport, protocol = await loop.subprocess_exec(
  File "/home/sth/devel/cpython.vanilla/Lib/asyncio/base_events.py", line 1542, in subprocess_exec
    transport = await self._make_subprocess_transport(
  File "/home/sth/devel/cpython.vanilla/Lib/asyncio/unix_events.py", line 193, in _make_subprocess_transport
    watcher.add_child_handler(transp.get_pid(),
  File "/home/sth/devel/cpython.vanilla/Lib/asyncio/unix_events.py", line 924, in add_child_handler
    raise RuntimeError(
RuntimeError: Cannot add child handler, the child watcher does not have a loop attached

If we do have a current event loop, for example by calling `asyncio.get_event_loop()` before creating our own loop, then we don't get an error, but the program hangs indefinitely since that loop isn't running.

Expected behavior would be that the loop given to create_subprocess_exec() is used to watch the child process.
----------
components: asyncio
messages: 332771
nosy: asvetlov, sth, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.create_subprocess_exec() only works with main event loop
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Sun Dec 30 17:59:50 2018
From: report at bugs.python.org (=?utf-8?q?Michael_B=C3=BCsch?=)
Date: Sun, 30 Dec 2018 22:59:50 +0000
Subject: [New-bugs-announce] [issue35622] Add support for Linux SCHED_DEADLINE
Message-ID: <1546210790.69.0.579405254492.issue35622@roundup.psfhosted.org>

New submission from Michael Büsch :

Are there plans to support Linux SCHED_DEADLINE in the os module? If not, would changes to add such support be welcome? Support for SCHED_DEADLINE would also need support for sched_setattr/sched_getattr.

----------
components: Library (Lib)
messages: 332772
nosy: mb_
priority: normal
severity: normal
status: open
title: Add support for Linux SCHED_DEADLINE
type: enhancement

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Sun Dec 30 20:16:28 2018
From: report at bugs.python.org (Stephan Hohe)
Date: Mon, 31 Dec 2018 01:16:28 +0000
Subject: [New-bugs-announce] [issue35623] Segfault in test_bigmem.ListTest.test_sort
Message-ID: <1546218988.78.0.27697670498.issue35623@roundup.psfhosted.org>

New submission from Stephan Hohe :

When running test_bigmem with -M 30G, the interpreter crashes in list_sort_impl() in Objects/listobject.c:2290 due to an integer overflow in `i`.
----------
components: Interpreter Core
messages: 332780
nosy: sth
priority: normal
severity: normal
status: open
title: Segfault in test_bigmem.ListTest.test_sort
type: crash
versions: Python 3.7, Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 03:12:16 2018
From: report at bugs.python.org (Oded Engel)
Date: Mon, 31 Dec 2018 08:12:16 +0000
Subject: [New-bugs-announce] [issue35624] Shelve sync issues while using Gevent
Message-ID: <1546243936.53.0.108056450415.issue35624@roundup.psfhosted.org>

New submission from Oded Engel :

The shelve method sync does not work when using gevent threading. writeback was set to True and flag was set to 'c'. The only way to get the db synced is by closing and reopening the db.

----------
components: Library (Lib)
messages: 332807
nosy: Oded Engel
priority: normal
severity: normal
status: open
title: Shelve sync issues while using Gevent
versions: Python 3.6

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 05:48:00 2018
From: report at bugs.python.org (bzip2)
Date: Mon, 31 Dec 2018 10:48:00 +0000
Subject: [New-bugs-announce] [issue35625] documentation of list, set & dict comprehension make no mention of buggy class scope behavior
Message-ID: <1546253280.49.0.920078772343.issue35625@roundup.psfhosted.org>

New submission from bzip2 :

The sections on list, set and dict comprehensions in the tutorial on data structures (ref. 1) state repeatedly that they are equivalent to for loops, but do not mention that this is not true in classes. In fact, the example used for nested list comprehensions (section 5.1.4) will work in a function, but not in a class. Similarly, there seems to be no mention of this scope "limitation" in the tutorial on classes (ref.
2), despite a section on scopes and namespaces (section 9.2) and another that mentions list comprehensions (section 9.10). The scope "limitation" is mentioned at the end of a section on resolution of names on a page about the execution model in the reference guide (ref. 3), and of course in various forums, where people may perhaps eventually find them after wasting time trying to figure out what they've done wrong.

If comprehensions are "equivalent" to for loops only under certain conditions (in a class, but only in a class, only one variable from outside the comprehension is accessible in the comprehension, and it must be the outermost iterable), they are not equivalent and should not be described as such. This "limitation" should be mentioned prominently wherever comprehensions are described, since both classes and comprehensions are presumably common constructs. When people read "is equivalent to" without a qualifier, they assume "is always equivalent to".

Returning to section 9.10 in ref. 2, the unique_words example is misleading because it strongly implies that nested for loops in a comprehension should work in a class. Since that's only true in some cases, the example should be qualified. More broadly, because that tutorial is about classes, the relevance of the last three sections should be revisited.

As an aside, I agree with the developers who consider this scope "limitation" a bug and not (paraphrasing) "just how the language works", since the exact same two lines of code, which depend on no other variables or functions, work in a function or module but not in a class.

1. https://docs.python.org/3/tutorial/datastructures.html
2. https://docs.python.org/3/tutorial/classes.html
3.
https://docs.python.org/3/reference/executionmodel.html

----------
assignee: docs at python
components: Documentation
messages: 332808
nosy: bzip2, docs at python
priority: normal
severity: normal
status: open
title: documentation of list, set & dict comprehension make no mention of buggy class scope behavior
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 05:53:23 2018
From: report at bugs.python.org (Eduardo Orochena)
Date: Mon, 31 Dec 2018 10:53:23 +0000
Subject: [New-bugs-announce] [issue35626] Python dictreader KeyError issue
Message-ID: <1546253603.09.0.372827657102.issue35626@roundup.psfhosted.org>

New submission from Eduardo Orochena :

def load_file(filename):
    with open(filename, 'r', encoding='utf-8') as fin:
        header = fin.readline()
        print('Found ' + header)
        reader = csv.DictReader(fin)
        for row in reader:
            print(type(row), row)
            print('Beds {} '.format(row['beds']))

This results in a KeyError exception, whilst

open_f = open(filename, 'r', encoding='utf-8')
read_it = csv.DictReader(open_f)
for i in read_it:
    print('Beds {}'.format(i['beds']))

behaves as expected

----------
components: Build
messages: 332810
nosy: eorochena
priority: normal
severity: normal
status: open
title: Python dictreader KeyError issue
type: crash
versions: Python 3.7

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 06:22:12 2018
From: report at bugs.python.org (June Kim)
Date: Mon, 31 Dec 2018 11:22:12 +0000
Subject: [New-bugs-announce] [issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1
Message-ID: <1546255332.76.0.703203586878.issue35627@roundup.psfhosted.org>

Change by June Kim :

----------
components: Library (Lib)
nosy: June Kim
priority: normal
severity: normal
status: open
title:
multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1
type: behavior
versions: Python 3.7

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 08:14:02 2018
From: report at bugs.python.org (s-ball)
Date: Mon, 31 Dec 2018 13:14:02 +0000
Subject: [New-bugs-announce] [issue35628] Allow lazy loading of translations in gettext.
Message-ID: <1546262042.35.0.847076847051.issue35628@roundup.psfhosted.org>

New submission from s-ball :

When working on i18n, I realized that msgfmt.py did not generate any hash table. One step further, I realized that gettext.py would not have used it: it unconditionally loads the whole translation files and contains the following TODO message:

TODO:
- Lazy loading of .mo files. Currently the entire catalog is loaded into memory, but that's probably bad for large translated programs. Instead, the lexical sort of original strings in GNU .mo files should be exploited to do binary searches and lazy initializations. Or you might want to use the undocumented double-hash algorithm for .mo files with hash tables, but you'll need to study the GNU gettext code to do this.

I have studied the code, and found that it should not be too complex to implement it in pure Python. I have posted a message on python-ideas about it and here are my conclusions:

Features:
========

The gettext module should be allowed to lazily load the catalogs from .mo files. This lazy loading should be optional and make use of the hash tables from .mo files when they are present, or revert to a binary search. The translation strings should be cached for better performance.
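The lazy lookup with an LRU cache described above might be sketched roughly as follows (this is not the proposed PR: the `lookup` callable stands in for the .mo hash-table or binary search, and all names here are illustrative):

```python
from collections import OrderedDict

class LazyCatalog:
    """Illustrative sketch: translations are searched on demand and the
    results are kept in an LRU cache of at most `maxsize` entries."""

    def __init__(self, lookup, maxsize=100):
        self._lookup = lookup        # key -> translation, or None if absent
        self._cache = OrderedDict()  # insertion order tracks recency
        self._maxsize = maxsize

    def get(self, key, default=None):
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as most recently used
            return self._cache[key]
        value = self._lookup(key)            # on-demand catalog search
        if value is None:
            return default
        self._cache[key] = value
        if len(self._cache) > self._maxsize:
            self._cache.popitem(last=False)  # evict least recently used
        return value

# Toy catalog standing in for a parsed .mo file.
catalog = LazyCatalog({'hello': 'bonjour', 'world': 'monde'}.get, maxsize=1)
print(catalog.get('hello'))               # bonjour
print(catalog.get('world'))               # monde ('hello' is evicted)
print(catalog.get('missing', 'missing'))  # falls back to the default
```

Missing strings deliberately fall through to the default rather than being cached, matching the fallback behaviour of gettext.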
API changes:
============

Three functions from the gettext module will have two new optional parameters, named caching and keepopen:

gettext.bindtextdomain(domain, localedir=None)
would become
gettext.bindtextdomain(domain, localedir=None, caching=None, keepopen=False)

gettext.translation(domain, localedir=None, languages=None, class_=None, fallback=False, codeset=None)
would become
gettext.translation(domain, localedir=None, languages=None, class_=None, fallback=False, codeset=None, caching=None, keepopen=False)

gettext.install(domain, localedir=None, codeset=None, names=None)
would become
gettext.install(domain, localedir=None, codeset=None, names=None, caching=None, keepopen=False)

The new caching parameter could receive the following values:

caching=None: revert to the previous eager loading of the full catalog. It will be the default, so that existing applications see no change.

caching=1: lazy loading with an unlimited cache

caching=n, where n is a non-negative (>= 0) integer value: lazy loading with an LRU cache limited to n strings

The keepopen parameter would be a boolean:

keepopen=False (default): the .mo file is only opened before loading a translation string and closed immediately after - it is also opened once when the GNUTranslations class is initialized to load the file description.

keepopen=True: the .mo file is kept open during the lifetime of the GNUTranslations object. This parameter is ignored and not used if caching is None.

Implementation:
==============

The current GNUTranslations class loads the content of the .mo file to build a dictionary where the original strings are the keys and the translated strings are the values. Plural forms use special processing: the key is a 2-tuple (singular original string, order), and the value is the corresponding translated string - order=0 is normally for the singular translated string. The proposed implementation would simply replace this dictionary with a special mapping subclass when caching is not None.
That subclass would use the same keys as the original dictionary and would:
- first search in its cache
- if not found in the cache and the hashtable has a non-zero size, search for the original string by hash
- if not found in the cache and the hashtable has a zero size, search for the original string with a binary search algorithm
- if a string is found, feed the LRU cache, discarding the oldest entry (or entries) if necessary

That should allow implementing the new feature with minimal refactoring of the gettext module. But I also propose to change msgfmt.py to build the hash table. IMHO, that function should live in the standard library, probably as a submodule of gettext, to allow various Python projects (pybabel, django) to use it directly instead of developing their own.

I will probably submit a PR in a while, but it will require some time to propose a full implementation with correct test coverage.

----------
components: Library (Lib)
messages: 332815
nosy: s-ball
priority: normal
severity: normal
status: open
title: Allow lazy loading of translations in gettext.
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 12:28:33 2018
From: report at bugs.python.org (Anthony Sottile)
Date: Mon, 31 Dec 2018 17:28:33 +0000
Subject: [New-bugs-announce] [issue35629] hang and/or leaked processes with multiprocessing.Pool(...).imap(...)
Message-ID: <1546277313.94.0.653504981022.issue35629@roundup.psfhosted.org>

New submission from Anthony Sottile :

This simple program causes a hang / leaked processes (easiest to run in an interactive shell):

import multiprocessing
tuple(multiprocessing.Pool(4).imap(print, (1, 2, 3)))

$ python3.6
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> tuple(multiprocessing.Pool(4).imap(print, (1, 2, 3)))
1
2
3
<<>>
^CProcess ForkPoolWorker-1:
Traceback (most recent call last):
Process ForkPoolWorker-2:
Process ForkPoolWorker-3:
Process ForkPoolWorker-4:
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 746, in next
    item = self._items.popleft()
IndexError: pop from an empty deque

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 750, in next
    self._cond.wait(timeout)
  File "/usr/lib/python3.6/threading.py", line 295, in wait
    waiter.acquire()
KeyboardInterrupt

$ python3.7
Python 3.7.2 (default, Dec 25 2018, 03:50:46)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> tuple(multiprocessing.Pool(4).imap(print, (1, 2, 3)))
1
2
3
(None, None, None)
>>> KeyboardInterrupt
Process ForkPoolWorker-3:
Process ForkPoolWorker-1:
Process ForkPoolWorker-2:
Process ForkPoolWorker-4:
>>>
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/pool.py", line 110, in worker
    task = get()
  File "/usr/lib/python3.7/multiprocessing/queues.py", line 351, in get
    with self._rlock:
  File "/usr/lib/python3.7/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.7/multiprocessing/pool.py", line 110, in worker
    task = get()
  File "/usr/lib/python3.7/multiprocessing/queues.py", line 351, in get
    with self._rlock:
  File "/usr/lib/python3.7/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.7/multiprocessing/pool.py", line 110, in worker
    task = get()
  File "/usr/lib/python3.7/multiprocessing/queues.py", line 351, in get
    with self._rlock:
  File "/usr/lib/python3.7/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.7/multiprocessing/pool.py", line 110, in worker
    task = get()
  File "/usr/lib/python3.7/multiprocessing/queues.py", line 352, in get
    res = self._reader.recv_bytes()
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
KeyboardInterrupt

(python3.8 shows the same behaviour as python3.7)

$ ./python --version --version
Python 3.8.0a0 (heads/master:ede0b6fae2, Dec 31 2018, 09:19:17)
[GCC 7.3.0]

python2.7 also has similar behaviour. I'm told this more reliably hangs on windows, though I don't have windows on hand.
I've "fixed" my code to explicitly open / close the pool:

with contextlib.closing(multiprocessing.Pool(jobs)) as pool:
    tuple(pool.imap(...))

I suspect a refcounting / gc bug

----------
components: Library (Lib)
messages: 332825
nosy: Anthony Sottile
priority: normal
severity: normal
status: open
title: hang and/or leaked processes with multiprocessing.Pool(...).imap(...)
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org Mon Dec 31 15:53:11 2018
From: report at bugs.python.org (Suriyaa Sundararuban)
Date: Mon, 31 Dec 2018 20:53:11 +0000
Subject: [New-bugs-announce] [issue35630] Missing code tag for "python3" in README.rst
Message-ID: <1546289591.73.0.0479486789763.issue35630@roundup.psfhosted.org>

New submission from Suriyaa Sundararuban :

Currently there is no code tag for "python3" in the sentence "This will install Python as python3." (Location: https://github.com/python/cpython#build-instructions). I'm working on this small improvement.

----------
assignee: docs at python
components: Documentation
messages: 332832
nosy: docs at python, suriyaa
priority: normal
severity: normal
status: open
title: Missing code tag for "python3" in README.rst
versions: Python 3.8

_______________________________________
Python tracker

_______________________________________