From report at bugs.python.org Tue Jan 1 02:35:43 2019 From: report at bugs.python.org (=?utf-8?q?Ville_Skytt=C3=A4?=) Date: Tue, 01 Jan 2019 07:35:43 +0000 Subject: [New-bugs-announce] [issue35631] Improve typing docs wrt abstract/concrete collection types Message-ID: <1546328143.85.0.754432945533.issue35631@roundup.psfhosted.org> New submission from Ville Skyttä : The typing docs for List include a note to use generic collection types, but list AbstractSet and Mapping, which aren't generally replacements for a List. It would be better to remove those types from the List note and add corresponding ones to Dict and Set, which are currently lacking it. Additionally, some examples in the typing docs violate the stated preference above, using Lists and Dicts as parameters. ---------- assignee: docs at python components: Documentation messages: 332842 nosy: docs at python, scop priority: normal severity: normal status: open title: Improve typing docs wrt abstract/concrete collection types type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 1 13:02:50 2019 From: report at bugs.python.org (thautwarm) Date: Tue, 01 Jan 2019 18:02:50 +0000 Subject: [New-bugs-announce] [issue35632] support unparse for Suite ast Message-ID: <1546365770.39.0.971588675244.issue35632@roundup.psfhosted.org> New submission from thautwarm : Although `Suite` is not an actual AST node used in CPython, it's quite useful when performing some code analysis. `Suite` is a sequence of statements which can be used to represent a block whose context inherits from the outside block's. Also, the documentation says it's useful in Jython.
I wonder if we could support `unparse` for Suite through making a tiny modification to https://github.com/python/cpython/blob/master/Tools/parser/unparse.py def _Suite(self, tree): for stmt in tree.body: self.dispatch(stmt) ---------- components: Demos and Tools messages: 332845 nosy: thautwarm priority: normal severity: normal status: open title: support unparse for Suite ast type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 1 13:09:08 2019 From: report at bugs.python.org (Michael Felt) Date: Tue, 01 Jan 2019 18:09:08 +0000 Subject: [New-bugs-announce] [issue35633] test_eintr fails on AIX since fcntl functions were modified Message-ID: <1546366148.3.0.0239151726659.issue35633@roundup.psfhosted.org> New submission from Michael Felt : test_eintr fails on AIX since fcntl functions were modified In issue35189 the fnctl() module was modified so that the EINTR interruption should be retried automatically. On AIX the test for flock() passes, but the test for lockf() fails: ====================================================================== > ERROR: test_lockf (__main__.FNTLEINTRTest) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/data/prj/python/git/python3-3.8/Lib/test/eintrdata/eintr_tester.py", line 522, in test_lockf > self._lock(fcntl.lockf, "lockf") > File "/data/prj/python/git/python3-3.8/Lib/test/eintrdata/eintr_tester.py", line 507, in _lock > lock_func(f, fcntl.LOCK_EX | fcntl.LOCK_NB) > PermissionError: [Errno 13] Permission denied > Researching... 
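For reference, the flock()/lockf() pattern that the test exercises can be sketched in isolation. This is an illustrative sketch, not the actual eintr_tester.py code; lockf() takes fcntl-style record locks, which need a descriptor open for writing when an exclusive lock is requested, and whether the call below fails depends on the platform:

```python
import fcntl
import tempfile

# Minimal sketch of the locking pattern from the failing test (not the
# actual eintr_tester.py code). lockf() uses fcntl record locks, so the
# file must be writable for LOCK_EX; on AIX the report shows this call
# failing with EACCES, surfaced as PermissionError.
with tempfile.NamedTemporaryFile() as f:   # opened "w+b", so writable
    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive, non-blocking
    fcntl.lockf(f, fcntl.LOCK_UN)                  # release the lock
    locked_ok = True
print("lockf round-trip ok:", locked_ok)
```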
---------- components: IO, Tests messages: 332846 nosy: Michael.Felt priority: normal severity: normal status: open title: test_eintr fails on AIX since fcntl functions were modified type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 1 20:04:33 2019 From: report at bugs.python.org (iceboy) Date: Wed, 02 Jan 2019 01:04:33 +0000 Subject: [New-bugs-announce] [issue35634] kwargs regression when there are multiple entries with the same key Message-ID: <1546391073.15.0.462051858565.issue35634@roundup.psfhosted.org> New submission from iceboy : Using the multidict package on pypi to illustrate the problem. Python 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import multidict >>> d = multidict.CIMultiDict([('a', 1), ('a', 2)]) >>> def foo(**kwargs): pass ... >>> foo(**d) >>> foo(**{}, **d) Python 3.6.7 (default, Oct 21 2018, 08:08:16) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import multidict >>> d = multidict.CIMultiDict([('a', 1), ('a', 2)]) >>> def foo(**kwargs): pass ... >>> foo(**d) >>> foo(**{}, **d) Traceback (most recent call last): File "", line 1, in TypeError: foo() got multiple values for keyword argument 'a' (1) foo(**d) (2) foo(**{}, **d) (1) works fine in both versions but (2) only works in Python 3.5 but raises TypeError in Python 3.6. This should be a regression. We should either make both expressions work or raises error. 
---------- messages: 332849 nosy: iceboy priority: normal severity: normal status: open title: kwargs regression when there are multiple entries with the same key versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 1 21:07:20 2019 From: report at bugs.python.org (Stefan Seefeld) Date: Wed, 02 Jan 2019 02:07:20 +0000 Subject: [New-bugs-announce] [issue35635] asyncio.create_subprocess_exec() only works in main thread Message-ID: <1546394840.15.0.0864584726941.issue35635@roundup.psfhosted.org> New submission from Stefan Seefeld : This is an addendum to issue35621: To be able to call `asyncio.create_subprocess_exec()` from another thread, A separate event loop needs to be created. To make the child watcher aware of this new loop, I have to call `asyncio.get_child_watcher().attach_loop(loop)`. However, in the current implementation this call needs to be made by the main thread (or else the `signal` module will complain as handlers may only be registered in the main thread). So, to work around the above limitations, the following workflow needs to be used: 1) create a new loop in the main thread 2) attach it to the child watcher 3) spawn a worker thread 4) set the previously created event loop as default loop After that, I can run `asyncio.create_subprocess_exec()` in the worker thread. However, I suppose the worker thread will be the only thread able to call that function, given the child watcher's limitation to a single loop. Am I missing something ? Given the complexity of this, I would expect this to be better documented in the sections explaining how `asyncio.subprocess` and `threading` interact. 
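The four-step workflow above can be sketched as follows (Unix only; `asyncio.get_child_watcher()` is looked up defensively here, since newer Python versions deprecate and eventually remove child watchers, and the call is only needed on versions that still have them):

```python
import asyncio
import threading

# Sketch of the described workaround; step numbers match the report.
loop = asyncio.new_event_loop()                 # 1) new loop, in the main thread

get_watcher = getattr(asyncio, "get_child_watcher", None)
if get_watcher is not None:                     # 2) attach it to the child watcher
    get_watcher().attach_loop(loop)

results = {}

def worker():
    asyncio.set_event_loop(loop)                # 4) make it this thread's default loop
    async def run():
        proc = await asyncio.create_subprocess_exec("true")
        return await proc.wait()
    results["rc"] = loop.run_until_complete(run())

t = threading.Thread(target=worker)             # 3) spawn the worker thread
t.start()
t.join()
print("exit code:", results["rc"])
```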
---------- components: asyncio messages: 332855 nosy: asvetlov, stefan, yselivanov priority: normal severity: normal status: open title: asyncio.create_subprocess_exec() only works in main thread type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 02:20:57 2019 From: report at bugs.python.org (Ma Lin) Date: Wed, 02 Jan 2019 07:20:57 +0000 Subject: [New-bugs-announce] [issue35636] remove redundant code in unicode_hash(PyObject *self) Message-ID: <1546413657.32.0.171614827801.issue35636@roundup.psfhosted.org> New submission from Ma Lin : Please see the PR ---------- messages: 332857 nosy: Ma Lin priority: normal severity: normal status: open title: remove redundant code in unicode_hash(PyObject *self) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 06:14:25 2019 From: report at bugs.python.org (Yash Aggarwal) Date: Wed, 02 Jan 2019 11:14:25 +0000 Subject: [New-bugs-announce] [issue35637] Factorial should be able to evaluate float arguments Message-ID: <1546427665.46.0.211230468351.issue35637@roundup.psfhosted.org> New submission from Yash Aggarwal : Factorial as of now accepts only integers or integral floats. I want to suggest extending the definition of factorial to accept all positive real numbers, to be more consistent with the general definition of factorial that uses the gamma function. What I am proposing is: 1. for integer values, the function should work as it does now and return an integer result. 2. for float input, both integer- and non-integer-valued, the returned value should be a floating point number. 3. the input domain should be extended to all real numbers except negative integers. Such a generalized function would feel more mathematically consistent.
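The proposal can be sketched with the identity Γ(x + 1) = x!. Here `factorial_f` is a hypothetical name for illustration, not an actual math-module API:

```python
import math

# Sketch of the proposed behavior: exact integer results for ints,
# floats for float input, gamma for non-integer reals.
def factorial_f(x):
    if isinstance(x, int):
        return math.factorial(x)              # 1) exact integer result
    if isinstance(x, float) and x.is_integer():
        return float(math.factorial(int(x)))  # 2) float in, float out
    return math.gamma(x + 1)                  # 3) gamma for other reals

print(factorial_f(5), factorial_f(5.0), factorial_f(0.5))
```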
---------- components: Library (Lib) messages: 332862 nosy: FR4NKESTI3N priority: normal severity: normal status: open title: Factorial should be able to evaluate float arguments type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 06:25:01 2019 From: report at bugs.python.org (steelman) Date: Wed, 02 Jan 2019 11:25:01 +0000 Subject: [New-bugs-announce] [issue35638] Introduce fixed point locale aware format type for floating point numbers Message-ID: <1546428301.89.0.74748590824.issue35638@roundup.psfhosted.org> New submission from steelman : It is currently impossible to format floating point numbers with an arbitrary number of decimal digits AND the decimal point matching locale settings. For example, no current format allows displaying numbers ranging from 1 to 1000 with exactly two decimal digits. ---------- components: Library (Lib) messages: 332863 nosy: steelman priority: normal severity: normal status: open title: Introduce fixed point locale aware format type for floating point numbers type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 07:03:43 2019 From: report at bugs.python.org (Erdem Uney) Date: Wed, 02 Jan 2019 12:03:43 +0000 Subject: [New-bugs-announce] [issue35639] Lowercasing Unicode Characters Message-ID: <1546430623.48.0.462352175173.issue35639@roundup.psfhosted.org> New submission from Erdem Uney : assert 'ŞİŞLİ'.lower() == 'şişli' Lowercasing the capital İ (with a dot on it - \u0130) adds the combining character \u0307 after the i, and if there is a following character it places that dot (\u0307) over that character. The behavior is different in Python 2.7.10, where it adds the dot on top of 'i'.
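The behavior described above can be reproduced directly. Note that per the Unicode SpecialCasing data, U+0130 has the full lowercase mapping U+0069 U+0307 (two code points), while U+0069 alone is only the simple mapping:

```python
import unicodedata

# Reproduce the report: lowercasing U+0130 yields two code points.
lowered = "\u0130".lower()
print([unicodedata.name(c) for c in lowered])
# ['LATIN SMALL LETTER I', 'COMBINING DOT ABOVE']
```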
According to the Unicode specification, character \u0130 should be converted to character \u0069. ---------- messages: 332865 nosy: kingofsevens priority: normal severity: normal status: open title: Lowercasing Unicode Characters type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 07:53:41 2019 From: report at bugs.python.org (Emmanuel Arias) Date: Wed, 02 Jan 2019 12:53:41 +0000 Subject: [New-bugs-announce] [issue35640] Allow passing PathLike arguments to SimpleHTTPRequestHandler Message-ID: <1546433621.79.0.480952807407.issue35640@roundup.psfhosted.org> New submission from Emmanuel Arias : Hi, A PR was opened: https://github.com/python/cpython/pull/11398. This PR seems interesting in the sense that it allows passing path-like arguments to SimpleHTTPRequestHandler. Regards ---------- components: Library (Lib) messages: 332873 nosy: eamanu priority: normal pull_requests: 10792 severity: normal status: open title: Allow passing PathLike arguments to SimpleHTTPRequestHandler type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 11:43:39 2019 From: report at bugs.python.org (Tal Einat) Date: Wed, 02 Jan 2019 16:43:39 +0000 Subject: [New-bugs-announce] [issue35641] IDLE: calltips not properly formatted for functions without doc-strings Message-ID: <1546447419.98.0.0355519553361.issue35641@roundup.psfhosted.org> New submission from Tal Einat : IDLE usually wraps call-tips to 85 characters. However, for functions without a doc-string, this formatting is skipped. This is an issue for functions with long signatures, e.g. due to having many arguments or due to having default values with long repr-s. This appears to be caused by line 170 in Lib/idlelib/calltip.py being indented one level too much.
(see: https://github.com/python/cpython/blob/87e59ac11ee074b0dc1bc864c74fac0660b27f6e/Lib/idlelib/calltip.py) Thanks to Dan Snider for the original report in msg332881 on issue #35196. Example: >>> def foo(s='a'*100): pass >>> print(get_argspec(foo)) (s='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa') >>> def bar(s='a'*100): """doc-string""" pass >>> print(get_argspec(bar)) (s='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa') doc-string ---------- messages: 332882 nosy: bup, taleinat priority: normal severity: normal status: open title: IDLE: calltips not properly formatted for functions without doc-strings _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 13:05:51 2019 From: report at bugs.python.org (Gregory Szorc) Date: Wed, 02 Jan 2019 18:05:51 +0000 Subject: [New-bugs-announce] [issue35642] _asynciomodule.c compiled in both pythoncore.vcxproj and _asyncio.vcxproj Message-ID: <1546452351.49.0.287032771837.issue35642@roundup.psfhosted.org> New submission from Gregory Szorc : The _asynciomodule.c source file is compiled as part of both pythoncore.vcxproj (providing pythonXY.dll) and _asyncio.vcxproj (providing _asyncio.pyd). PC\config.c doesn't reference PyInit__asyncio. I'm inclined to believe that _asynciomodule.c being built as part of pythoncore.vcxproj is a mistake. If all goes according to plan, I will contribute my first CPython patch with a fix shortly... 
---------- components: Build messages: 332887 nosy: Gregory.Szorc priority: normal severity: normal status: open title: _asynciomodule.c compiled in both pythoncore.vcxproj and _asyncio.vcxproj type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 14:01:47 2019 From: report at bugs.python.org (=?utf-8?q?Micka=C3=ABl_Schoentgen?=) Date: Wed, 02 Jan 2019 19:01:47 +0000 Subject: [New-bugs-announce] [issue35643] SyntaxWarning: invalid escape sequence in Modules/_sha3/cleanup.py Message-ID: <1546455707.05.0.304016401963.issue35643@roundup.psfhosted.org> New submission from Mickaël Schoentgen : This warning is emitted on Modules/_sha3/cleanup.py, line 11: SyntaxWarning: invalid escape sequence \ CPP2 = re.compile("\ //(.*)") ---------- components: Extension Modules messages: 332888 nosy: Tiger-222 priority: normal severity: normal status: open title: SyntaxWarning: invalid escape sequence in Modules/_sha3/cleanup.py type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 15:30:12 2019 From: report at bugs.python.org (Ray Donnelly) Date: Wed, 02 Jan 2019 20:30:12 +0000 Subject: [New-bugs-announce] [issue35644] venv doesn't do what it claims to do (appears not to work at all?) Message-ID: <1546461012.71.0.574979571732.issue35644@roundup.psfhosted.org> New submission from Ray Donnelly : Happy New Year! I'm not sure if this is a misunderstanding on my part, a docs bug or a code bug.
At https://docs.python.org/3/library/venv.html we see: "The solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages" and "This will create the tutorial-env directory if it doesn't exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files." However, when testing with https://www.python.org/ftp/python/3.7.2/python-3.7.2-amd64.exe I see no Python interpreter (nor DLL) in my venv directory: ``` python.exe -m venv %TEMP%\venv %TEMP%\venv\Scripts\activate.bat dir %TEMP%\venv ``` gives: ``` Directory of C:\Users\RDONNE~1\AppData\Local\Temp\venv 02/01/2019 19:38 . 02/01/2019 19:38 .. 02/01/2019 19:38 Include 02/01/2019 19:38 Lib 02/01/2019 19:38 121 pyvenv.cfg 02/01/2019 19:38 Scripts 1 File(s) 121 bytes 5 Dir(s) 912,281,780,224 bytes free ``` pyvenv.cfg contains: ``` home = C:\Users\rdonnelly\AppData\Local\Programs\Python\Python37 include-system-site-packages = false version = 3.7.2 ``` Further to this, after activating, I do not see the `venv` directory in `sys.path`: ``` python -c "import sys; print(sys.path)" ['', 'C:\\Users\\rdonnelly\\AppData\\Local\\Programs\\Python\\Python37\\python37.zip', 'C:\\Users\\rdonnelly\\AppData\\Local\\Programs\\Python\\Python37\\DLLs', 'C:\\Users\\rdonnelly\\AppData\\Local\\Programs\\Python\\Python37\\lib', 'C:\\Users\\rdonnelly\\AppData\\Local\\Programs\\Python\\Python37', 'C:\\Users\\rdonnelly\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages'] ``` From past experience, the old `virtualenv` project would copy the interpreter and DLL across. Any help here would be appreciated!
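One way to see what a venv actually records, when run with the venv's own interpreter: sys.prefix points into the venv while sys.base_prefix points at the base installation named by the `home` key in pyvenv.cfg; the interpreter is located through that key rather than necessarily being copied into the venv root:

```python
import sys

# Inside an activated venv, sys.prefix differs from sys.base_prefix;
# outside one they are equal.
in_venv = sys.prefix != sys.base_prefix
print("running inside a venv:", in_venv)
print("base installation:", sys.base_prefix)
```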
---------- components: Windows messages: 332892 nosy: Ray Donnelly, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: venv doesn't do what it claims to do (appears not to work at all?) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 2 23:57:40 2019 From: report at bugs.python.org (Siva) Date: Thu, 03 Jan 2019 04:57:40 +0000 Subject: [New-bugs-announce] [issue35645] Alarm usage Message-ID: <1546491460.13.0.00460604445854.issue35645@roundup.psfhosted.org> New submission from Siva : '\a' at the command line gives '\x07' in response. Tried '\a' in a calculator program, but the response gives me 'enter a valid data' and a small box, but no alarm. Is there any way to rectify this? If so, please let me know. print('enter a value from the below list\n') a = input('enter a value + , - ') if a!= '+' and a!= '-' : print ('enter a valid data \a') elif a == '+': b = eval(input('enter first value')) c=eval(input('enter 2nd value')) add = b+c print (b,'+',c,'=',add) elif a== '-': b = eval(input('enter first value')) c=eval(input('enter 2nd value')) sub=b-c print (b,'-',c,'=',sub) ---------- components: Regular Expressions messages: 332907 nosy: ezio.melotti, mrabarnett, shivsidhi priority: normal severity: normal status: open title: Alarm usage type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 02:37:43 2019 From: report at bugs.python.org (Deepak Joshi) Date: Thu, 03 Jan 2019 07:37:43 +0000 Subject: [New-bugs-announce] [issue35646] Subprocess.Popen('python -v', stdout=PIPE, stderr=PIPE, Shell=True) gives output in stderr Message-ID: <1546501063.03.0.591969937784.issue35646@roundup.psfhosted.org> New submission from Deepak Joshi : Subprocess.Popen('python -v',stdout=PIPE,stderr=PIPE,Shell=True) Produces
output in stderr instead of stdout. For others: pip --version or git --version output is in stdout and is expected. ---------- components: Windows, ctypes messages: 332915 nosy: Deepak Joshi, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Subprocess.Popen('python -v',stdout=PIPE,stderr=PIPE,Shell=True) gives output in stderr type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 02:59:56 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Thu, 03 Jan 2019 07:59:56 +0000 Subject: [New-bugs-announce] [issue35647] Cookie path check returns incorrect results Message-ID: <1546502396.67.0.243403156352.issue35647@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I came across the issue during https://bugs.python.org/issue35121#msg332583 . I think this can be dealt as a separate issue not blocking the original report. I am classifying it as security but can be reclassified as a bug fix given the section on weak confidentiality in RFC 6265. I have a fix implemented at https://github.com/tirkarthi/cpython/tree/bpo35121-cookie-path. Report : I have come across another behavior change between path checks while using the cookie jar implementation available in Python. This is related to incorrect cookie validation but with respect to path. I observed the following difference : 1. Make a request to "/" that sets a cookie with "path=/any" 2. Make a request to "/any" and the set cookie is passed since the path matches 3. Make a request to "/anybad" and cookie with "path=/any" is also passed too. On using golang stdlib implementation of cookiejar the cookie "path=/any" is not passed when a request to "/anybad" is made. So I checked with RFC 6265 where the path match check is defined in section-5.1.4 . 
RFC 6265 also obsoletes RFC 2965 upon which cookiejar is based I hope since original implementation of cookiejar is from 2004 and RFC 6265 was standardized later. So I think it's good to enforce RFC 6265 since RFC 2965 is obsolete at least in Python 3.8 unless this is considered as a security issue. I think this is a security issue. The current implementation can potentially cause cookies to be sent to incorrect paths in the same domain that share the same prefix. This is a behavior change with more strict checks but I could see no tests failing with RFC 6265 implementation too. RFC 2965 also gives a loose definition of path-match without mentioning about / check in the paths based on which Python implementation is based as a simple prefix match. > For two strings that represent paths, P1 and P2, P1 path-matches P2 > if P2 is a prefix of P1 (including the case where P1 and P2 string- > compare equal). Thus, the string /tec/waldo path-matches /tec. RFC 6265 path-match definition : https://tools.ietf.org/html/rfc6265#section-5.1.4 A request-path path-matches a given cookie-path if at least one of the following conditions holds: o The cookie-path and the request-path are identical. o The cookie-path is a prefix of the request-path, and the last character of the cookie-path is %x2F ("/"). o The cookie-path is a prefix of the request-path, and the first character of the request-path that is not included in the cookie- path is a %x2F ("/") character. The current implementation in cookiejar is as below : def path_return_ok(self, path, request): _debug("- checking cookie path=%s", path) req_path = request_path(request) if not req_path.startswith(path): _debug(" %s does not path-match %s", req_path, path) return False return True Translating the RFC 6265 steps (a literal translation of go implementation) would have something like below and no tests fail on master. 
So the python implementation goes in line with the RFC not passing cookies of "path=/any" to /anybody def path_return_ok(self, path, request): req_path = request_path(request) if req_path == path: return True elif req_path.startswith(path) and ((path.endswith("/") or req_path[len(path)] == "/")): return True return False The golang implementation is as below which is a literal translation of RFC 6265 steps at https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/net/http/cookiejar/jar.go#L130 // pathMatch implements "path-match" according to RFC 6265 section 5.1.4. func (e *entry) pathMatch(requestPath string) bool { if requestPath == e.Path { return true } if strings.HasPrefix(requestPath, e.Path) { if e.Path[len(e.Path)-1] == '/' { return true // The "/any/" matches "/any/path" case. } else if requestPath[len(e.Path)] == '/' { return true // The "/any" matches "/any/path" case. } } return false } RFC 6265 on weak confidentiality (https://tools.ietf.org/html/rfc6265#section-8.5) Cookies do not always provide isolation by path. Although the network-level protocol does not send cookies stored for one path to another, some user agents expose cookies via non-HTTP APIs, such as HTML's document.cookie API. Because some of these user agents (e.g., web browsers) do not isolate resources received from different paths, a resource retrieved from one path might be able to access cookies stored for another path. 
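The quoted RFC 6265 rules can be checked in isolation with a standalone re-implementation (for illustration only; the real change would go in cookiejar's path_return_ok):

```python
# Standalone "path-match" per RFC 6265 section 5.1.4.
def path_match(cookie_path, request_path):
    if request_path == cookie_path:
        return True
    if request_path.startswith(cookie_path):
        if cookie_path.endswith("/"):
            return True   # "/any/" matches "/any/path"
        if request_path[len(cookie_path)] == "/":
            return True   # "/any" matches "/any/path"
    return False

print(path_match("/any", "/any"))       # identical paths
print(path_match("/any", "/any/path"))  # "/" right after the prefix
print(path_match("/any", "/anybad"))    # prefix but no "/": no match
```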
---------- components: Library (Lib) messages: 332919 nosy: martin.panter, ned.deily, orsenthil, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: Cookie path check returns incorrect results type: security versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 04:06:12 2019 From: report at bugs.python.org (flokX) Date: Thu, 03 Jan 2019 09:06:12 +0000 Subject: [New-bugs-announce] [issue35648] Add use_srcentry parameter to shutil.copytree() Message-ID: <1546506372.52.0.724433917377.issue35648@roundup.psfhosted.org> New submission from flokX : Currently it is decided whether to use the srcentry in the copy_function by checking if the copy_function is copy() or copy2(). This will fail if the copy_function is a modified copy() or copy2() function. To control whether the copy_function gets a srcentry or srcname parameter, a use_srcentry parameter was added. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 332923 nosy: docs at python, flokX priority: normal severity: normal status: open title: Add use_srcentry parameter to shutil.copytree() type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 11:41:48 2019 From: report at bugs.python.org (skorpeo) Date: Thu, 03 Jan 2019 16:41:48 +0000 Subject: [New-bugs-announce] [issue35649] http.client doesn't close. Infinite loop Message-ID: <1546533708.33.0.81545579375.issue35649@roundup.psfhosted.org> New submission from skorpeo : When testing the example from https://docs.python.org/3/library/http.client.html, specifically the chunked example (i.e. `while not r1.closed`), it results in an infinite loop. I believe this is because the function _close_conn() at line 398 should call self.close().
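A bounded version of the documented reading loop avoids the problem by reading until read() returns b"" instead of polling r1.closed; the throwaway local server below exists only to make the example self-contained:

```python
import http.client
import http.server
import threading

# Throwaway local server so the reading loop can run offline.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
resp = conn.getresponse()
chunks = []
while True:
    data = resp.read(4)   # read in small pieces, as the docs' example does
    if not data:          # b"" signals the end of the response
        break
    chunks.append(data)
received = b"".join(chunks)
print(received)
conn.close()
server.shutdown()
```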
---------- messages: 332934 nosy: skorpeo priority: normal severity: normal status: open title: http.client doesn't close. Infinite loop versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 11:53:56 2019 From: report at bugs.python.org (Anthony Sottile) Date: Thu, 03 Jan 2019 16:53:56 +0000 Subject: [New-bugs-announce] [issue35650] cygwin treats X and X.exe as the same file Message-ID: <1546534436.45.0.760844199857.issue35650@roundup.psfhosted.org> New submission from Anthony Sottile : >>> with open('f.exe', 'w') as f: ... f.write('hi') ... >>> with open('f') as f: ... print(f.read()) ... hi `os.path.exists(...)` and others treat them as the same file as well. It seems the only reliable way to write both files is: 1. write to f.exe 2. write to f.bak 3. move f.bak to f (`os.rename`) ---------- components: Windows messages: 332935 nosy: Anthony Sottile, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: cygwin treats X and X.exe as the same file type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 12:34:26 2019 From: report at bugs.python.org (Mark Amery) Date: Thu, 03 Jan 2019 17:34:26 +0000 Subject: [New-bugs-announce] [issue35651] PEP 257 (active) references PEP 258 (rejected) as if it were active Message-ID: <1546536866.6.0.633154963858.issue35651@roundup.psfhosted.org> New submission from Mark Amery : PEP 257 says: > Please see PEP 258, "Docutils Design Specification" [2], for a detailed description of attribute and additional docstrings. But PEP 258 is rejected. It doesn't seem coherent that an active PEP can defer some of its details to a rejected PEP - and indeed it makes me unsure how much of the surrounding commentary in PEP 257 to treat as active. e.g. 
should I treat the entire concepts of "attribute docstrings" and "additional docstrings" as rejected, given the rejection of PEP 258, or are they still part of the current spec, given that they're referenced in PEP 257 prior to any mention of PEP 258? It's currently completely unclear. ---------- assignee: docs at python components: Documentation messages: 332940 nosy: ExplodingCabbage, docs at python priority: normal severity: normal status: open title: PEP 257 (active) references PEP 258 (rejected) as if it were active versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 13:57:45 2019 From: report at bugs.python.org (flokX) Date: Thu, 03 Jan 2019 18:57:45 +0000 Subject: [New-bugs-announce] [issue35652] Add use_srcentry parameter to shutil.copytree() II Message-ID: <1546541865.13.0.115152005965.issue35652@roundup.psfhosted.org> New submission from flokX : Currently it is decided if to use the srcentry in the copy_function by checking if the copy_function is copy() or copy2(). This will fail if the copy_function is a modified copy() or copy2() function. To control if the copy_function gets a srcentry or srcname parameter, added the use_srcentry parameter. 
Successor of https://bugs.python.org/issue35648 ---------- components: Library (Lib) messages: 332941 nosy: flokX priority: normal severity: normal status: open title: Add use_srcentry parameter to shutil.copytree() II type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 16:29:10 2019 From: report at bugs.python.org (adiba) Date: Thu, 03 Jan 2019 21:29:10 +0000 Subject: [New-bugs-announce] [issue35653] All regular expression match groups are the empty string Message-ID: <1546550950.61.0.624475109311.issue35653@roundup.psfhosted.org> New submission from adiba : This is the regular expression: ^(?:(\d*)(\D*))*$ This is the test string: 42AZ This is the expectation for the match groups: ('42', 'AZ') This is the actual return value: ('', '') https://gist.github.com/adiba/791ba943a1102994d43171dc98aaecd0 ---------- components: Regular Expressions messages: 332948 nosy: adiba, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: All regular expression match groups are the empty string type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 3 16:35:41 2019 From: report at bugs.python.org (Martijn Pieters) Date: Thu, 03 Jan 2019 21:35:41 +0000 Subject: [New-bugs-announce] [issue35654] Remove 'guarantee' that sorting only relies on __lt__ from sorting howto Message-ID: <1546551341.08.0.0908752668479.issue35654@roundup.psfhosted.org> New submission from Martijn Pieters : Currently, the sorting HOWTO at https://docs.python.org/3/howto/sorting.html#odd-and-ends contains the text: > The sort routines are guaranteed to use __lt__() when making comparisons between two objects. 
So, it is easy to add a standard sort order to a class by defining an __lt__() method Nowhere else in the Python documentation is this guarantee made, however. That sort currently uses __lt__ only is, in my opinion, an implementation detail. The above advice also goes against the advice PEP 8 gives: > When implementing ordering operations with rich comparisons, it is best to implement all six operations (__eq__, __ne__, __lt__, __le__, __gt__, __ge__) rather than relying on other code to only exercise a particular comparison. > > To minimize the effort involved, the functools.total_ordering() decorator provides a tool to generate missing comparison methods. The 'guarantee' seems to have been copied verbatim from the Wiki version of the HOWTO in https://github.com/python/cpython/commit/0fe095e87f727f4a19b6cbfd718d51935a888740, where that part of the Wiki page was added by an anonymous user in revision 44 to the page: https://wiki.python.org/moin/HowTo/Sorting?action=diff&rev1=43&rev2=44 Can this be removed from the HOWTO? ---------- assignee: docs at python components: Documentation messages: 332949 nosy: docs at python, mjpieters, rhettinger priority: normal severity: normal status: open title: Remove 'guarantee' that sorting only relies on __lt__ from sorting howto versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 06:02:42 2019 From: report at bugs.python.org (Juan) Date: Fri, 04 Jan 2019 11:02:42 +0000 Subject: [New-bugs-announce] [issue35655] documentation have wrong info about fibo module examples Message-ID: <1546599762.04.0.11226790108.issue35655@roundup.psfhosted.org> New submission from Juan : The below sections in modules documentation have wrong information about fibo module: 6. Modules 6.1. More on Modules 6.1.1. Executing modules as scripts 6.3. The dir() Function The name of module is Fibo not fibo and the attributes are fab,fab2 not fib,fib2. 
[root@archlinux ~]# python2 --version
Python 2.7.15
[root@archlinux ~]# pip2 --version
pip 18.1 from /usr/lib/python2.7/site-packages/pip (python 2.7)
[root@archlinux ~]# pip2 install fibo
Collecting fibo
Using cached https://files.pythonhosted.org/packages/24/50/e74bd48bbef1040afb01b107e6cfbc3c1e991be24c10c40a37e335383e54/Fibo-1.0.0.tar.gz
Installing collected packages: fibo
Running setup.py install for fibo ... done
Successfully installed fibo-1.0.0
[root@archlinux ~]# pip2 list modules |grep -i fibo
Fibo 1.0.0
[root@archlinux ~]# python2
Python 2.7.15 (default, Jun 27 2018, 13:05:28) [GCC 8.1.1 20180531] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import fibo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named fibo
>>> import Fibo
>>> Fibo.fib(10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'fib'
>>> Fibo.fib2(10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'fib2'
>>> Fibo.fab(10)
1 1 2 3 5 8 13 21 34 55
>>> Fibo.fab2(10)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> Fibo.__name__
'Fibo'
>>> dir(Fibo)
['Fab', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'fab', 'fab2', 'fab4']
---------- assignee: docs at python components: Documentation messages: 332967 nosy: docs at python, eric.araujo, ezio.melotti, juanbaio10, mdk, willingc priority: normal severity: normal status: open title: documentation have wrong info about fibo module examples type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 06:04:51 2019 From: report at bugs.python.org (Petter S) Date: Fri, 04 Jan 2019 11:04:51 +0000 Subject: [New-bugs-announce] [issue35656] More matchers in unittest.mock Message-ID:
<1546599891.86.0.0577370914139.issue35656@roundup.psfhosted.org> New submission from Petter S : The ``ANY`` object in ``unittest.mock`` is also pretty useful when verifying dicts in tests: self.assertEqual(result, { "message": "Hi!", "code": 0, "id": mock.ANY }) Then it does not matter what the (presumably randomly generated) id is. For the same use cases, objects like ``APPROXIMATE`` (for approximate floating-point matching) and ``MATCHES`` (taking a boolean lambda) would be pretty useful, I think. ---------- components: Library (Lib) messages: 332968 nosy: Petter S priority: normal severity: normal status: open title: More matchers in unittest.mock type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 06:52:47 2019 From: report at bugs.python.org (Huazuo Gao) Date: Fri, 04 Jan 2019 11:52:47 +0000 Subject: [New-bugs-announce] [issue35657] multiprocessing.Process.join() ignores timeout if child process use os.exec*() Message-ID: <1546602767.92.0.99491105947.issue35657@roundup.psfhosted.org> New submission from Huazuo Gao : import os import time from multiprocessing import Process p = Process(target=lambda:os.execlp('bash', 'bash', '-c', 'sleep 1.5')) t0 = time.time() p.start() p.join(0.1) print(time.time() - t0) --- Python 3.5 - 3.8 take 1.5 sec to finish Python 2.7 take 0.1 sec to finish ---------- components: Library (Lib) messages: 332970 nosy: Huazuo Gao priority: normal severity: normal status: open title: multiprocessing.Process.join() ignores timeout if child process use os.exec*() type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 07:17:38 2019 From: report at bugs.python.org (Bart van den Donk) Date: Fri, 04 Jan 2019 12:17:38 +0000 Subject: [New-bugs-announce] [issue35658] 
Reggie_Linear_Regression_Solution.ipynb divide by 10 diff with multiply by .1 Message-ID: <1546604258.86.0.0685397461869.issue35658@roundup.psfhosted.org> New submission from Bart van den Donk :

possible_ms1 = [i*.1 for i in range(-100, 101, 1)] #your list comprehension here
print(possible_ms1)
possible_ms2 = [i/10 for i in range(-100, 101, 1)] #your list comprehension here
print(possible_ms2)

Multiply by .1 gives dirty results. Divide by 10 gives clean results. ---------- components: Demos and Tools, Regular Expressions messages: 332973 nosy: Bart van den Donk, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Reggie_Linear_Regression_Solution.ipynb divide by 10 diff with multiply by .1 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 18:40:05 2019 From: report at bugs.python.org (Wanja Chresta) Date: Fri, 04 Jan 2019 23:40:05 +0000 Subject: [New-bugs-announce] [issue35659] Add heapremove() function to heapq Message-ID: <1546645205.64.0.740247003371.issue35659@roundup.psfhosted.org> New submission from Wanja Chresta : Heap Queues are extremely useful data structures. They can, for example, be used to implement Dijkstra's algorithm for finding the shortest paths between nodes in a graph in O(edge * log vertices) time instead of O(edge * vertices) without heaps. One operation such implementations need, though, is the possibility to modify an element in the heap (and thus having to reorder it afterwards) in O(log n) time. One can model such an operation by removing a specific element from the heap and then adding the modified element. So far, heapq only allows removing the first element through heappop; this is not what we need. Instead, we would want to support a heapremove function that removes an arbitrary element in the heap (if it exists) and raises ValueError if the value is not present.
list.remove cannot be used, since it needs O(n) time. heapremove can be easily implemented by using bisect.bisect_left since heap is always sorted:

def heapremove(heap, x):
    i = bisect.bisect_left(heap, x)
    if heap[i] == x:
        del heap[i]
    else:
        raise ValueError

cf. the remove method in https://docs.oracle.com/javase/7/docs/api/java/util/PriorityQueue.html ---------- components: Library (Lib) messages: 333024 nosy: Wanja Chresta priority: normal severity: normal status: open title: Add heapremove() function to heapq versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 19:13:36 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 05 Jan 2019 00:13:36 +0000 Subject: [New-bugs-announce] [issue35660] IDLE: Remove import * from window.py Message-ID: <1546647216.45.0.186888273611.issue35660@roundup.psfhosted.org> New submission from Cheryl Sabella : Remove use of `from tkinter import *` from windows.py. ---------- assignee: terry.reedy components: IDLE messages: 333028 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Remove import * from window.py type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 19:58:02 2019 From: report at bugs.python.org (Brett Cannon) Date: Sat, 05 Jan 2019 00:58:02 +0000 Subject: [New-bugs-announce] [issue35661] Store the venv prompt in pyvenv.cfg Message-ID: <1546649882.91.0.811189740909.issue35661@roundup.psfhosted.org> New submission from Brett Cannon : When creating the pyvenv.cfg file, the prompt setting should be stored there so that tools can introspect on it (e.g. VS Code could read the value to tell users the name of the venv they have selected in the status bar).
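A tool could introspect such a stored prompt with a small parser along these lines. This is only a sketch: the venv_prompt helper name and the key-scanning logic are illustrative, not an existing API, and it assumes the simple "key = value" lines that venv writes:

```python
from pathlib import Path

def venv_prompt(venv_dir):
    """Return the 'prompt' value from a venv's pyvenv.cfg, or None.

    Sketch only: assumes the flat 'key = value' format written by venv;
    a real tool may want more robust parsing.
    """
    cfg = Path(venv_dir) / "pyvenv.cfg"
    if not cfg.exists():
        return None
    for line in cfg.read_text().splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() == "prompt":
            return value.strip()
    return None
```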
---------- assignee: brett.cannon components: Library (Lib) messages: 333030 nosy: brett.cannon, vinay.sajip priority: normal severity: normal stage: test needed status: open title: Store the venv prompt in pyvenv.cfg type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 4 22:25:25 2019 From: report at bugs.python.org (Jeff Robbins) Date: Sat, 05 Jan 2019 03:25:25 +0000 Subject: [New-bugs-announce] [issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug Message-ID: <1546658725.61.0.827924851828.issue35662@roundup.psfhosted.org> New submission from Jeff Robbins : Python 3.x defaults to using emulated condition variables on Windows. I tested a build with native Windows condition variables (#define _PY_EMULATED_WIN_CV 0), and found a serious issue. The problem is in condvar.h, in this routine: /* This implementation makes no distinction about timeouts. Signal * 2 to indicate that we don't know. */ Py_LOCAL_INLINE(int) PyCOND_TIMEDWAIT(PyCOND_T *cv, PyMUTEX_T *cs, long long us) { return SleepConditionVariableSRW(cv, cs, (DWORD)(us/1000), 0) ? 2 : -1; } The issue is that `SleepConditionVariableSRW` returns FALSE in the timeout case. PyCOND_TIMEDWAIT returns -1 in that case. But... COND_TIMED_WAIT, which calls PyCOND_TIMEDWAIT, in ceval_gil.h, fatals(!) on a negative return value #define COND_TIMED_WAIT(cond, mut, microseconds, timeout_result) \ { \ int r = PyCOND_TIMEDWAIT(&(cond), &(mut), (microseconds)); \ if (r < 0) \ Py_FatalError("PyCOND_WAIT(" #cond ") failed"); \ I'd like to suggest that we use the documented behavior of the OS API call already being used (SleepConditionVariableSRW https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-sleepconditionvariablesrw) and return 0 on regular success and 1 on timeout, like in the _POSIX_THREADS case. """ Return Value If the function succeeds, the return value is nonzero. 
If the function fails, the return value is zero. To get extended error information, call GetLastError. If the timeout expires the function returns FALSE and GetLastError returns ERROR_TIMEOUT. """ I've tested this rewrite -- the main difference is in the FALSE case, check GetLastError() for ERROR_TIMEOUT and then *do not* treat this as a fatal error. /* * PyCOND_TIMEDWAIT, in addition to returning negative on error, * thus returns 0 on regular success, 1 on timeout */ Py_LOCAL_INLINE(int) PyCOND_TIMEDWAIT(PyCOND_T *cv, PyMUTEX_T *cs, long long us) { BOOL result = SleepConditionVariableSRW(cv, cs, (DWORD)(us / 1000), 0); if (result) return 0; if (GetLastError() == ERROR_TIMEOUT) return 1; return -1; } I've attached the test I ran to reproduce the crash. ---------- components: Windows files: thread_test2.py messages: 333036 nosy: jeffr at livedata.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows #define _PY_EMULATED_WIN_CV 0 bug type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48024/thread_test2.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 00:03:52 2019 From: report at bugs.python.org (Tanmay Jain) Date: Sat, 05 Jan 2019 05:03:52 +0000 Subject: [New-bugs-announce] [issue35663] webbrowser.py firefox bug [python3, windows 10] Message-ID: <1546664632.95.0.705177998493.issue35663@roundup.psfhosted.org> New submission from Tanmay Jain : https://docs.python.org/3/library/webbrowser.html#webbrowser.controller.open browser_controller = webbrowser.get() result = browser_controller.open(url)# <-- return False even though firefox successfully opens url # expected behavior when url is opened successfully in browser it should return True # like it return True for chrome and edge. 
---------- components: Library (Lib), Tkinter, Windows messages: 333039 nosy: codextj, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: webbrowser.py firefox bug [python3, windows 10] type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 00:17:56 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 05 Jan 2019 05:17:56 +0000 Subject: [New-bugs-announce] [issue35664] Optimize itemgetter() Message-ID: <1546665476.73.0.183501211971.issue35664@roundup.psfhosted.org> New submission from Raymond Hettinger : Improve performance by 33% by optimizing argument handling and by adding a fast path for the common case of a single non-negative integer index into a tuple (which is the typical use case in the standard library). ---------- components: Library (Lib) messages: 333041 nosy: rhettinger priority: normal severity: normal status: open title: Optimize itemgetter() type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 07:24:45 2019 From: report at bugs.python.org (=?utf-8?q?Vladimir_Peri=C4=87?=) Date: Sat, 05 Jan 2019 12:24:45 +0000 Subject: [New-bugs-announce] [issue35665] Function ssl.create_default_context raises exception on Windows 10 when called with ssl.Purpose.SERVER_AUTH) attribute Message-ID: <1546691085.37.0.66333867377.issue35665@roundup.psfhosted.org> New submission from Vladimir Perić : In Python 3.7.1 on Windows 10 ssl library function call ssl.create_default_context(ssl.Purpose.SERVER_AUTH) raises an ssl error: File "C:\Python37\lib\ssl.py", line 471, in _load_windows_store_certs self.load_verify_locations(cadata=certs) ssl.SSLError: nested asn1 error (_ssl.c:3926) In Python 3.6.4 the same function call raises no error.
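A minimal reproduction sketch of the call in question; on an unaffected machine it simply returns a verifying SSLContext, while on the reported Windows 10 setup the same call raised ssl.SSLError from _load_windows_store_certs:

```python
import ssl

# The call from the report above. Outside the affected Windows
# certificate store it just builds a client-side verified context.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
print(type(ctx).__name__)                     # SSLContext
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True for SERVER_AUTH
```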
---------- assignee: christian.heimes components: SSL messages: 333054 nosy: christian.heimes, pervlad priority: normal severity: normal status: open title: Function ssl.create_default_context raises exception on Windows 10 when called with ssl.Purpose.SERVER_AUTH) attribute type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 11:08:00 2019 From: report at bugs.python.org (Carl Bordum Hansen) Date: Sat, 05 Jan 2019 16:08:00 +0000 Subject: [New-bugs-announce] [issue35666] Update design FAQ about assignment expression Message-ID: <1546704480.61.0.245493701505.issue35666@roundup.psfhosted.org> New submission from Carl Bordum Hansen : Hi there, In ``Doc/faq/design.rst`` there is an explanation of why Python does not have assignment in expressions. This is dated since PEP 572 / Python 3.8. Online version: https://docs.python.org/3/faq/design.html#why-can-t-i-use-an-assignment-in-an-expression I suggest updating it to the attached file. `git diff`: ``` diff --git a/Doc/faq/design.rst b/Doc/faq/design.rst index e2d63a0323..e61284611d 100644 --- a/Doc/faq/design.rst +++ b/Doc/faq/design.rst @@ -149,7 +149,15 @@ to tell Python which namespace to use. Why can't I use an assignment in an expression? ----------------------------------------------- -Many people used to C or Perl complain that they want to use this C idiom: +In Python 3.8 and newer, you can use assignment in an expression with the +``:=`` operator (as described in :pep:`572`):: + + while line := f.readline(): + ... # do something with line + +For more than 25 years it was not possible to do assignments in expressions in +Python. Naturally, many people used to C or Perl would complain that they want +to use this C idiom: .. 
code-block:: c @@ -157,7 +165,7 @@ Many people used to C or Perl complain that they want to use this C idiom: // do something with line } -where in Python you're forced to write this:: +where in Python you would be forced to write this:: while True: line = f.readline() @@ -165,8 +173,10 @@ where in Python you're forced to write this:: break ... # do something with line -The reason for not allowing assignment in Python expressions is a common, -hard-to-find bug in those other languages, caused by this construct: +The reason different operators are used for assignment and assignment in +expressions (``=`` and ``:=``, respectively), and why Python didn't allow +assignment in expressions for a long time is a common, hard-to-find bug in +those other languages, caused by this construct: .. code-block:: c @@ -180,11 +190,6 @@ hard-to-find bug in those other languages, caused by this construct: The error is a simple typo: ``x = 0``, which assigns 0 to the variable ``x``, was written while the comparison ``x == 0`` is certainly what was intended. -Many alternatives have been proposed. Most are hacks that save some typing but -use arbitrary or cryptic syntax or keywords, and fail the simple criterion for -language change proposals: it should intuitively suggest the proper meaning to a -human reader who has not yet been introduced to the construct. 
- An interesting phenomenon is that most experienced Python programmers recognize the ``while True`` idiom and don't seem to be missing the assignment in expression construct much; it's only newcomers who express a strong desire to ``` ---------- assignee: docs at python components: Documentation files: design.rst messages: 333063 nosy: carlbordum, docs at python priority: normal severity: normal status: open title: Update design FAQ about assignment expression type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48025/design.rst _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 16:34:29 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 05 Jan 2019 21:34:29 +0000 Subject: [New-bugs-announce] [issue35667] activate for venv containing apostrophe doesn't work in powershell Message-ID: <1546724069.06.0.00436819935865.issue35667@roundup.psfhosted.org> New submission from Cheryl Sabella : On Windows 10, when I try to activate a venv in powershell where the name contains an apostrophe, I get the following error ("don't" is the name of the venv): PS N:\projects\cpython\don't\Scripts> .\activate.ps1 At N:\projects\cpython\don't\Scripts\Activate.ps1:42 char:28 + function global:prompt { + ~ Missing closing '}' in statement block or type definition. At N:\projects\cpython\don't\Scripts\Activate.ps1:37 char:40 + if (! $env:VIRTUAL_ENV_DISABLE_PROMPT) { + ~ Missing closing '}' in statement block or type definition. At N:\projects\cpython\don't\Scripts\Activate.ps1:43 char:61 + Write-Host -NoNewline -ForegroundColor Green '(don't) ' + ~ Unexpected token ')' in expression or statement. At N:\projects\cpython\don't\Scripts\Activate.ps1:43 char:63 + Write-Host -NoNewline -ForegroundColor Green '(don't) ' + ~ The string is missing the terminator: '. 
+ CategoryInfo : ParserError: (:) [], ParseException + FullyQualifiedErrorId : MissingEndCurlyBrace This works OK in Command Prompt. ---------- components: Library (Lib) messages: 333075 nosy: cheryl.sabella priority: normal severity: normal status: open title: activate for venv containing apostrophe doesn't work in powershell type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 5 16:44:20 2019 From: report at bugs.python.org (anthony shaw) Date: Sat, 05 Jan 2019 21:44:20 +0000 Subject: [New-bugs-announce] [issue35668] low test coverage for idlelib Message-ID: <1546724660.21.0.411261013357.issue35668@roundup.psfhosted.org> New submission from anthony shaw : idlelib is one of the lesser-tested libraries in cpython: https://codecov.io/gh/python/cpython/tree/master/Lib/idlelib Raising this issue and also volunteering to extend the test module to get coverage across major behaviours and functions that are missing tests. ---------- assignee: terry.reedy components: IDLE, Library (Lib) messages: 333077 nosy: anthony shaw, terry.reedy priority: normal severity: normal status: open title: low test coverage for idlelib type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 04:08:11 2019 From: report at bugs.python.org (Yigit Can) Date: Sun, 06 Jan 2019 09:08:11 +0000 Subject: [New-bugs-announce] [issue35669] tar symlink Message-ID: <1546765691.35.0.523415727187.issue35669@roundup.psfhosted.org> New submission from Yigit Can : ##Summary: A TAR file can escape the Python working directory with symlink. #Steps to reproduce: 1- Create a directory in Desktop (for example : testbolum) 2- Enter the path with "cd" command. 3- Create a symbolic link with "ln" command ( ln -s ../ symlink ). 
4- Create a test file with "touch" command (touch ../testfile) 5- Create a tar file with "tar" command line tool ( tar -czvf proofofconcept.tar symlink/ symlink/testfile) 6- Delete "symlink" with "rm" command 7- Delete "../testfile" with "rm" command 8- Run "extract_tar.py" You can see "testfile" in the "../" path Proof of concept: "status_python.mp4" ##Status on ptar: Apply the steps to reproduce for "ptar". ptar warns the user. You can see "status_on_ptarsymlink_file.mp4". ##Status on tar: Apply the steps to reproduce for "tar". tar warns the user. You can see "status_on_tarsymlink_file.mp4". #Note for Step 3: You can set another path, for example ( ln -s /user/test/area/ symlink) Python should check the symbolic link. The user may not be aware of this. This issue may also cause the software service to run in macOS. ##Proof of concept files: http://yigittestman.000webhostapp.com/ta/ ##Impact: when the user extracts the tar file, the file will be written to a location of the attacker's choosing. This issue may also cause the software service to mount in macOS. ---------- components: Library (Lib), Windows, macOS messages: 333094 nosy: Yilmaz, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: tar symlink type: security versions: Python 2.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 05:26:47 2019 From: report at bugs.python.org (Creation Elemental) Date: Sun, 06 Jan 2019 10:26:47 +0000 Subject: [New-bugs-announce] [issue35670] os functions return '??' for unicode characters in paths on windows Message-ID: <1546770407.32.0.613838233923.issue35670@roundup.psfhosted.org> New submission from Creation Elemental : I have a few files that contain emojis in their names, and also a folder that has such. Commands like `os.getcwd`, `os.listdir`, `os.path.realpath`, etc. will cause this to happen.
However, this is only, as far as I can tell, happening on pure windows distributions. This does not happen in the cygwin64 version I have, nor does it happen in python3. For example, say you have a folder simply called '?'. If you run python inside of it and run `os.getcwd()` you will simply get `'??'` as the result. This breaks MANY of my programs that depend on knowing exactly where they are, and knowing the contents of a directory to pass to other functions. ---------- components: Windows messages: 333100 nosy: Creation Elemental, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os functions return '??' for unicode characters in paths on windows type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 07:13:12 2019 From: report at bugs.python.org (Markus Elfring) Date: Sun, 06 Jan 2019 12:13:12 +0000 Subject: [New-bugs-announce] [issue35671] reserved identifier violation Message-ID: <1546776792.69.0.991179182294.issue35671@roundup.psfhosted.org> New submission from Markus Elfring : I would like to point out that identifiers like "__DYNAMIC_ANNOTATIONS_H__" and "_Py_memory_order" do not fit the expected naming convention of the C++ language standard. https://www.securecoding.cert.org/confluence/display/cplusplus/DCL51-CPP.+Do+not+declare+or+define+a+reserved+identifier Would you like to adjust your selection for unique names?
* https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Include/dynamic_annotations.h#L56 * https://github.com/python/cpython/blob/130893debfd97c70e3a89d9ba49892f53e6b9d79/Include/internal/pycore_atomic.h#L36 ---------- components: Interpreter Core messages: 333105 nosy: elfring priority: normal severity: normal status: open title: reserved identifier violation type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 07:34:02 2019 From: report at bugs.python.org (Jorge Teran) Date: Sun, 06 Jan 2019 12:34:02 +0000 Subject: [New-bugs-announce] [issue35672] Error on divide Message-ID: <1546778042.38.0.180444315145.issue35672@roundup.psfhosted.org> New submission from Jorge Teran : The following code produces an error in the division in Python 3.5 and 3.7; it works in Python 2.7.

import math
import sys
x=int(1000112004278059472142857)
y1=int(1000003)
y2=int(1000033)
y3=int(1000037)
y4=int(1000039)
print (int(y1*y2*y3*y4))
print (x) #this product equals x Correct
print (int(y2*y3*y4))
n=int(x / y1)
print (n) #n is an incorrect answer
#works in python 2.7
#Gives an incorrect answer in python 3.6.7, 3.7.1

---------- components: Interpreter Core messages: 333107 nosy: Jorge Teran priority: normal severity: normal status: open title: Error on divide versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 08:23:23 2019 From: report at bugs.python.org (Ronald Oussoren) Date: Sun, 06 Jan 2019 13:23:23 +0000 Subject: [New-bugs-announce] [issue35673] Loader for namespace packages Message-ID: <1546781003.0.0.391060889435.issue35673@roundup.psfhosted.org> New submission from Ronald Oussoren : The documentation for importlib.machinery.ModuleSpec says that the attribute "loader" should be None for namespace packages (see ) In reality the
loader for namespace packages is an instance of a private class, and that class does not conform to the importlib.abc.Loader ABC. To reproduce: * Create and empty directory "namespace" * (Optionally) create an empty "module.py" in that directory * Start a python shell and follow along: Python 3.7.2 (v3.7.2:9a3ffc0492, Dec 24 2018, 02:44:43) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import namespace >>> namespace.__loader__ <_frozen_importlib_external._NamespaceLoader object at 0x104c7bdd8> >>> import importlib.abc >>> isinstance(namespace.__loader__, importlib.abc.Loader) False >>> import importlib.util >>> importlib.util.find_spec('namespace') ModuleSpec(name='namespace', loader=<_frozen_importlib_external._NamespaceLoader object at 0x104c7bdd8>, submodule_search_locations=_NamespacePath(['/Users/ronald/Projects/pyobjc-hg/modulegraph2/namespace'])) Note how "namespace" has an attribute named "__loader__" that is not None, and the same is true for the ModuleSpec found using importlib.util.find_spec. The loader does not claim to conform to any Loader ABC (but provides all methods required for conformance to the InspectLoader ABC) I'm not sure if this should be two issues: 1) Documentation doesn't match behaviour 2) The loader for namespace packages isn't registered with the relevant ABCs P.S. the latter is also true for zipimport.zipimporter. 
---------- components: Library (Lib) messages: 333111 nosy: brett.cannon, ronaldoussoren priority: normal severity: normal status: open title: Loader for namespace packages type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 19:46:08 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 07 Jan 2019 00:46:08 +0000 Subject: [New-bugs-announce] [issue35674] Expose os.posix_spawnp() Message-ID: <1546821968.26.0.975322975658.issue35674@roundup.psfhosted.org> New submission from STINNER Victor : bpo-20104 exposed os.posix_spawn(), but not os.posix_spawnp(). os.posix_spawnp() would be useful to support executable with no directory. See bpo-35537 "use os.posix_spawn in subprocess". I'm not sure what is the best API: * Add os.posix_spawnp()? duplicate the documentation and some parts of the C code (but share most of the C code) * Add a new optional parameter to os.posix_spawn()? Ideas of names: 'use_path' or 'search_executable'. Internally, the glibc uses SPAWN_XFLAGS_USE_PATH flag to distinguish posix_spawn() and posix_spawnp(). execvp() uses the PATH environment variable, or use confstr(_CS_PATH) if PATH is not set. I guess that posix_spawnp() also uses confstr(_CS_PATH) if PATH is not set. Currently, my favorite option is to add a new optional 'use_path' parameter to the existing os.posix_spawn() function. ---------- components: Interpreter Core messages: 333128 nosy: pablogsal, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Expose os.posix_spawnp() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 22:56:05 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 07 Jan 2019 03:56:05 +0000 Subject: [New-bugs-announce] [issue35675] IDLE: Refactor config_key module. 
Message-ID: <1546833365.65.0.740704370404.issue35675@roundup.psfhosted.org> New submission from Terry J. Reedy : Continuation of #35598. Cheryl said there (rearranged): "PR11427 refactors the main frame from the window. My main goal in splitting them was more for readability rather than for being able to add it to a Tabbed window. As a follow up to this refactor, I hope to split the Basic and Advanced frames into their own tabs, mostly to clean up the create_widgets and to organize the supporting functions." A pair of tabs on a ttk Notebook would make this more like configdialog. ---------- messages: 333134 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal stage: patch review status: open title: IDLE: Refactor config_key module. type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 6 23:17:47 2019 From: report at bugs.python.org (=?utf-8?b?15DXldeo15kg16jXldeT15HXqNeS?=) Date: Mon, 07 Jan 2019 04:17:47 +0000 Subject: [New-bugs-announce] [issue35676] class TestCase: docs are not correct Message-ID: <1546834667.39.0.848001961358.issue35676@roundup.psfhosted.org> New submission from אורי רודברג : I think some functions of `class TestCase` are not documented correctly in the docs. For example in https://docs.python.org/3.5/library/unittest.html and also https://docs.python.org/3.6/library/unittest.html and https://docs.python.org/3.7/library/unittest.html. Some of the functions which are not documented correctly: assertListEqual assertSetEqual assertDictEqual assertIsNone And many other functions. You can see some more details on https://github.com/python/typeshed/issues/2716. ---------- assignee: docs at python components: Documentation messages: 333137 nosy: docs at python, אורי רודברג
priority: normal severity: normal status: open title: class TestCase: docs are not correct versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 04:17:58 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 07 Jan 2019 09:17:58 +0000 Subject: [New-bugs-announce] [issue35677] Do not automount in stat() by default Message-ID: <1546852678.92.0.232634094535.issue35677@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is a subtle difference between implementations of os.stat() that use system calls stat() and fstatat(). On Linux, fstatat() by default automounts the terminal ("basename") component of pathname if it is a directory that is an automount point. The Linux-specific AT_NO_AUTOMOUNT flag should be set to prevent automounting. Both stat() and lstat() act as though AT_NO_AUTOMOUNT was set. Therefore os.stat() should set AT_NO_AUTOMOUNT if it is defined to simulate the behavior of stat() and lstat(). A new keyword parameter can be added to os.stat() in a new Python release to control this behavior. There is the same issue with DirEntry.stat(). ---------- components: Extension Modules messages: 333143 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Do not automount in stat() by default type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 07:47:53 2019 From: report at bugs.python.org (MaximilianSP) Date: Mon, 07 Jan 2019 12:47:53 +0000 Subject: [New-bugs-announce] [issue35678] Issue with execute_child in startupinfo Message-ID: <1546865273.42.0.642343749188.issue35678@roundup.psfhosted.org> New submission from MaximilianSP : Whenever startupinfo is called, Python crashes on my computer. I have added the file showing the error traceback.
I have seen a few bug reports related to startupinfo on Windows. I am not sure what the issue is and I hope you can help me. I am running the Python code via Spyder. My apologies if this is not the forum for these kinds of questions. ---------- components: Windows files: Win Error 87_ in execute child startupinfo.png messages: 333147 nosy: MaximilianSP, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Issue with execute_child in startupinfo type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48027/Win Error 87_ in execute child startupinfo.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 08:55:50 2019 From: report at bugs.python.org (Hernot) Date: Mon, 07 Jan 2019 13:55:50 +0000 Subject: [New-bugs-announce] [issue35679] pdb restart hooks Message-ID: <1546869350.93.0.529177618496.issue35679@roundup.psfhosted.org> New submission from Hernot : I like the PDB debugger; it is quite a powerful tool, despite a few downsides. One is that cleanup code, e.g. registered by the debugged script or module, is not executed on restart. A crude hack is to check whether pdb is invoked via python -m pdb using inspect, and to decorate the pdb._runscript and pdb._runmodule methods with versions that call the registered cleanup methods before returning to main. A cleaner approach would be for pdb either to intercept atexit calls, recording any method which is registered by a call to atexit.register, or to provide its own atexit method to register methods which pdb should call to revert to the clean environment expected by the script or module at startup. Open to any discussion; examples will follow as necessary.
---------- components: Demos and Tools, Library (Lib) messages: 333150 nosy: Hernot priority: normal severity: normal status: open title: pdb restart hooks type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 11:41:05 2019 From: report at bugs.python.org (Charalampos Stratakis) Date: Mon, 07 Jan 2019 16:41:05 +0000 Subject: [New-bugs-announce] [issue35680] [2.7] Coverity scan: Passing freed pointer "name" as an argument to "Py_BuildValue" in _bsddb module. Message-ID: <1546879265.59.0.976013250068.issue35680@roundup.psfhosted.org> New submission from Charalampos Stratakis : Results from a recent static analysis scan for python2: Error: USE_AFTER_FREE (CWE-825): Python-2.7.15/Modules/_bsddb.c:6697: freed_arg: "free" frees "name". Python-2.7.15/Modules/_bsddb.c:6715: pass_freed_arg: Passing freed pointer "name" as an argument to "Py_BuildValue". 6713| RETURN_IF_ERR(); /* Maybe the size is not the problem */ 6714| 6715|-> retval = Py_BuildValue("s", name); 6716| free(name); 6717| return retval; Attaching a draft patch. ---------- components: Extension Modules files: bsddb_fix.patch keywords: patch messages: 333176 nosy: cstratak priority: normal severity: normal status: open title: [2.7] Coverity scan: Passing freed pointer "name" as an argument to "Py_BuildValue" in _bsddb module. 
versions: Python 2.7 Added file: https://bugs.python.org/file48028/bsddb_fix.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 18:09:03 2019 From: report at bugs.python.org (Tatz Sekine) Date: Mon, 07 Jan 2019 23:09:03 +0000 Subject: [New-bugs-announce] [issue35681] urllib.request.HTTPPasswordMgr.add_password requires more information for HTTPPasswordMgrWithDefaultRealm Message-ID: <1546902543.15.0.573868900421.issue35681@roundup.psfhosted.org> New submission from Tatz Sekine : TL;DR HTTPPasswordMgrWithDefaultRealm.add_password() doesn't have proper documentation. All known versions of the urllib.request (or urllib2 in Python 2.x) documentation have the same issue. Details: The HTTPPasswordMgrWithDefaultRealm object doesn't have its own documentation. Instead, HTTPPasswordMgr's doc has the information for both objects: https://docs.python.org/3.8/library/urllib.request.html?highlight=httppasswordmgr#http-password-mgr Both objects have just 2 functions: add_password() and find_user_password(). The doc for find_user_password() has an explanation of how the two objects differ, while the one for add_password() doesn't. One of the missing explanations for HTTPPasswordMgrWithDefaultRealm.add_password() is the value of realm. The document now says "realm, user and passwd must be strings.", but realm could be None for HTTPPasswordMgrWithDefaultRealm, to set a default realm-less password. That's the typical use case of HTTPPasswordMgrWithDefaultRealm.add_password(), or rather, that's exactly what the urllib HOWTO mentions in https://docs.python.org/3.8/howto/urllib2.html?highlight=httppasswordmgrwithdefaultrealm Conclusion: So, the documentation of HTTPPasswordMgr.add_password() should have an additional explanation of realm, which could be None for HTTPPasswordMgrWithDefaultRealm. Or, the HTTPPasswordMgrWithDefaultRealm objects could have a section independent from HTTPPasswordMgr, to clarify their usage.
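The realm-less default described above can be shown with a short, self-contained snippet (the URI and credentials here are made up; no network access happens):

```python
import urllib.request

# With HTTPPasswordMgrWithDefaultRealm, realm=None installs a default
# that matches any realm for the given URI.
mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, "http://example.com/", "user", "secret")
print(mgr.find_user_password("Some Realm", "http://example.com/"))
# -> ('user', 'secret')

# A plain HTTPPasswordMgr has no such fallback for an unknown realm.
plain = urllib.request.HTTPPasswordMgr()
plain.add_password("Some Realm", "http://example.com/", "user", "secret")
print(plain.find_user_password("Other Realm", "http://example.com/"))
# -> (None, None)
```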
Why does this matter? Typeshed (https://github.com/python/typeshed) depends on the doc. I (or somebody else in the company) will report it separately. ---------- assignee: docs at python components: Documentation messages: 333189 nosy: docs at python, tsekine priority: normal severity: normal status: open title: urllib.request.HTTPPasswordMgr.add_password requires more information for HTTPPasswordMgrWithDefaultRealm type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 19:50:48 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 08 Jan 2019 00:50:48 +0000 Subject: [New-bugs-announce] [issue35682] asyncio: bug in _ProactorBasePipeTransport._force_close() Message-ID: <1546908648.7.0.754492801962.issue35682@roundup.psfhosted.org> New submission from STINNER Victor : Running ProactorEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving() logs a bug in _force_close(): see logs below. Extract of _force_close():

    def _force_close(self, exc):
        if self._empty_waiter is not None:
            if exc is None:
                self._empty_waiter.set_result(None)
            else:
                self._empty_waiter.set_exception(exc)
        ...

Problem: _empty_waiter can already be done. For example, it can be created directly as done:

    def _make_empty_waiter(self):
        ...
        self._empty_waiter = self._loop.create_future()
        if self._write_fut is None:
            self._empty_waiter.set_result(None)
        return self._empty_waiter

The attached PR fixes _force_close(): do nothing if _empty_waiter is already done. The regression comes from the following change:

commit a19fb3c6aaa7632410d1d9dcb395d7101d124da4
Author: Andrew Svetlov
Date: Sun Feb 25 19:32:14 2018 +0300

    bpo-32622: Native sendfile on windows (#5565)

    * Support sendfile on Windows Proactor event loop naively.
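The failure mode is easy to reproduce in isolation. The sketch below is not the transport code itself; it only shows that calling set_exception() on an already-completed future raises InvalidStateError, which is why the fix checks done() first.

```python
import asyncio

loop = asyncio.new_event_loop()
caught = None
try:
    waiter = loop.create_future()
    waiter.set_result(None)   # like _make_empty_waiter() with no pending write
    try:
        waiter.set_exception(ConnectionResetError())
    except asyncio.InvalidStateError as exc:
        caught = type(exc).__name__
    # The guard from the fix: only touch the waiter if it is not done yet.
    if not waiter.done():
        waiter.set_exception(ConnectionResetError())
finally:
    loop.close()

print(caught)  # -> InvalidStateError
```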
Logs: vstinner at WIN C:\vstinner\python\master>python -X dev -m test test_asyncio -m test.test_asyncio.test_sendfile.ProactorEventLoopTests.test_sendfile_close_peer_in_the_middle_of_receiving Running Debug|x64 interpreter... Run tests sequentially 0:00:00 [1/1] test_asyncio Exception in callback _ProactorReadPipeTransport._loop_reading(<_OverlappedF...events.py:452>) handle: ) created at C:\vstinner\python\master\lib\asyncio\windows_events.py:82> source_traceback: Object created at (most recent call last): File "C:\vstinner\python\master\lib\test\test_asyncio\test_sendfile.py", line 125, in run_loop return self.loop.run_until_complete(coro) File "C:\vstinner\python\master\lib\asyncio\base_events.py", line 576, in run_until_complete self.run_forever() File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 315, in run_forever super().run_forever() File "C:\vstinner\python\master\lib\asyncio\base_events.py", line 544, in run_forever self._run_once() File "C:\vstinner\python\master\lib\asyncio\base_events.py", line 1729, in _run_once event_list = self._selector.select(timeout) File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 421, in select self._poll(timeout) File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 750, in _poll f.set_exception(e) File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 82, in set_exception super().set_exception(exception) Traceback (most recent call last): File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 444, in finish_recv return ov.getresult() OSError: [WinError 64] The specified network name is no longer available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\vstinner\python\master\lib\asyncio\proactor_events.py", line 256, in _loop_reading data = fut.result() File "C:\vstinner\python\master\lib\asyncio\windows_events.py", line 748, in _poll value = callback(transferred, key, ov) File 
"C:\vstinner\python\master\lib\asyncio\windows_events.py", line 448, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 64] The specified network name is no longer available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\vstinner\python\master\lib\asyncio\events.py", line 81, in _run self._context.run(self._callback, *self._args) File "C:\vstinner\python\master\lib\asyncio\proactor_events.py", line 283, in _loop_reading self._force_close(exc) File "C:\vstinner\python\master\lib\asyncio\proactor_events.py", line 118, in _force_close self._empty_waiter.set_exception(exc) asyncio.exceptions.InvalidStateError: invalid state == Tests result: SUCCESS == 1 test OK. Total duration: 531 ms Tests result: SUCCESS ---------- components: Windows, asyncio messages: 333192 nosy: asvetlov, paul.moore, steve.dower, tim.golden, vstinner, yselivanov, zach.ware priority: normal severity: normal status: open title: asyncio: bug in _ProactorBasePipeTransport._force_close() versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 7 23:19:11 2019 From: report at bugs.python.org (Steve Dower) Date: Tue, 08 Jan 2019 04:19:11 +0000 Subject: [New-bugs-announce] [issue35683] Enable manylinux1 builds on Pipelines Message-ID: <1546921151.97.0.533946172892.issue35683@roundup.psfhosted.org> New submission from Steve Dower : Azure Pipelines can now support container jobs: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=vsts&tabs=yaml I experimented with enabling a manylinux1 build a while back, which should now be able to use identical steps to the POSIX build. With the new syntax, we can enable CI (and perhaps PR?) 
builds using the snippet below:

- job: ManyLinux1_CI_Tests
  displayName: ManyLinux1 CI Tests
  dependsOn: Prebuild
  condition: |
    and(
      and(
        succeeded(),
        eq(variables['manylinux'], 'true')
      ),
      eq(dependencies.Prebuild.outputs['tests.run'], 'true')
    )
  resources:
    containers:
    - container: manylinux1
      image: dockcross:manylinux-x64
  pool:
    vmImage: ubuntu-16.04
  container: manylinux1
  variables:
    testRunTitle: '$(build.sourceBranchName)-manylinux1'
    testRunPlatform: manylinux1
  steps:
  - template: ./posix-steps.yml

I don't have time right now to test this change, but someone else might. It's certainly going to be easier for someone to test it by adding this to the PR build first (or set up a build on your own Pipelines instance). Maybe there are other more relevant containers we should be testing in? ---------- components: Cross-Build messages: 333209 nosy: Alex.Willmer, barry, steve.dower, zach.ware priority: normal severity: normal status: open title: Enable manylinux1 builds on Pipelines versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 05:28:52 2019 From: report at bugs.python.org (Nathaniel Smith) Date: Tue, 08 Jan 2019 10:28:52 +0000 Subject: [New-bugs-announce] [issue35684] Windows "embedded" Python downloads are malformed Message-ID: <1546943332.69.0.924867771355.issue35684@roundup.psfhosted.org> New submission from Nathaniel Smith :

~$ unzip -l /tmp/python-3.7.2-embed-amd64.zip
Archive:  /tmp/python-3.7.2-embed-amd64.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
    99856  2018-12-23 23:10   python.exe
    98320  2018-12-23 23:10   pythonw.exe
  3780624  2018-12-23 23:09   python37.dll
    58896  2018-12-23 23:10   python3.dll
    85232  2018-12-10 22:06   vcruntime140.dll
   200208  2018-12-23 23:10   pyexpat.pyd
    26640  2018-12-23 23:10   select.pyd
  1073168  2018-12-23 23:10   unicodedata.pyd
    28688  2018-12-23 23:10   winsound.pyd
    71184  2018-12-23 23:10   _asyncio.pyd
    89104  2018-12-23 23:10   _bz2.pyd
    22544  2018-12-23 23:10   _contextvars.pyd
   133136  2018-12-23 23:10   _ctypes.pyd
   267792  2018-12-23 23:10   _decimal.pyd
   209424  2018-12-23 23:10   _elementtree.pyd
    38928  2018-12-23 23:10   _hashlib.pyd
   257040  2018-12-23 23:10   _lzma.pyd
    39440  2018-12-23 23:10   _msi.pyd
    29200  2018-12-23 23:10   _multiprocessing.pyd
    44048  2018-12-23 23:10   _overlapped.pyd
    27664  2018-12-23 23:10   _queue.pyd
    75792  2018-12-23 23:10   _socket.pyd
    85520  2018-12-23 23:10   _sqlite3.pyd
   123408  2018-12-23 23:10   _ssl.pyd
  2480296  2018-12-23 22:20   libcrypto-1_1-x64.dll
   523944  2018-12-23 22:20   libssl-1_1-x64.dll
  1190416  2018-12-23 23:10   sqlite3.dll
    85232  2018-12-10 22:06   vcruntime140.dll
  2386539  2018-12-23 23:14   python37.zip
       79  2018-12-23 23:14   python37._pth
---------                     -------
 13632362                     30 files

Notice that "vcruntime140.dll" appears twice on this list, once near the top and once near the bottom. If we try to unpack this using the powershell Expand-Archive command, it fails with: Failed to create file 'D:\a\1\s\python-dir\vcruntime140.dll' while expanding the archive file 'D:\a\1\s\python.zip' contents as the file 'D:\a\1\s\python-dir\vcruntime140.dll' already exists. Use the -Force parameter if you want to overwrite the existing directory 'D:\a\1\s\python-dir\vcruntime140.dll' contents when expanding the archive file.
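A quick way to spot such duplicate entries with the stdlib (demonstrated here on a small in-memory archive rather than the actual embeddable download):

```python
import collections
import io
import warnings
import zipfile

# Build a toy zip that, like the report above, contains the same name twice.
buf = io.BytesIO()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # zipfile warns about duplicate names
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("python.exe", b"...")
        zf.writestr("vcruntime140.dll", b"first copy")
        zf.writestr("vcruntime140.dll", b"second copy")

with zipfile.ZipFile(buf) as zf:
    counts = collections.Counter(zf.namelist())

dupes = sorted(name for name, n in counts.items() if n > 1)
print(dupes)  # -> ['vcruntime140.dll']
```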
Probably it would be better to only include one copy of vcruntime140.dll :-) ---------- assignee: steve.dower messages: 333217 nosy: njs, steve.dower priority: normal severity: normal status: open title: Windows "embedded" Python downloads are malformed _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 07:37:57 2019 From: report at bugs.python.org (Emmanuel Arias) Date: Tue, 08 Jan 2019 12:37:57 +0000 Subject: [New-bugs-announce] [issue35685] Add samples on patch.dict of the use of decorator and in class Message-ID: <1546951077.39.0.591357222806.issue35685@roundup.psfhosted.org> New submission from Emmanuel Arias : Hi! I created this PR to add samples of the use of patch.dict with a decorator on a method and on a class. I think that is a good improvement because the doc mentions the use of patch.dict with a decorator on a method and on a class but doesn't show any samples. Another question: why does the unittest.mock documentation use `assert *something* == *something*` instead of assertEqual (and the others)? IMHO that would be better for the unittest docs.
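For reference, samples along the lines the report asks for might look like this (the dictionary and names here are invented for illustration):

```python
from unittest import mock

config = {'key': 'value'}

# patch.dict as a decorator: the dict is patched for the duration of the
# call and restored afterwards.
@mock.patch.dict(config, {'extra': 'added'})
def use_patched():
    assert config == {'key': 'value', 'extra': 'added'}

use_patched()
assert config == {'key': 'value'}   # restored after the call

# patch.dict as a context manager behaves the same way; patch.dict can
# also decorate a class, patching the dict around each test method.
with mock.patch.dict(config, {'key': 'changed'}):
    assert config['key'] == 'changed'
assert config['key'] == 'value'
```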
Regards ---------- assignee: docs at python components: Documentation messages: 333226 nosy: docs at python, eamanu, michael.foord priority: normal severity: normal status: open title: Add samples on patch.dict of the use of decorator and in class type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 12:03:20 2019 From: report at bugs.python.org (Thomas Waldmann) Date: Tue, 08 Jan 2019 17:03:20 +0000 Subject: [New-bugs-announce] [issue35686] memoryview contextmanager causing strange crash Message-ID: <1546967000.66.0.594887800159.issue35686@roundup.psfhosted.org> New submission from Thomas Waldmann : See there: https://github.com/borgbackup/borg/pull/4247 I did the first changeset after seeing some strange exception popping up while it was handling another exception - which I assumed was related to memoryview.release not being called in the original code. So it was clear to me that we should use the CM there. So I added that (first changeset), and that made the code always fail (see the first travis-ci link). Then I removed the CM again and replaced it with a functionally equivalent try/finally (second changeset) - that worked. So, the question is whether there is some issue in CPython's memoryview contextmanager code that makes it fail in such a strange way. ---------- messages: 333236 nosy: Thomas.Waldmann priority: normal severity: normal status: open title: memoryview contextmanager causing strange crash type: crash versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 16:57:48 2019 From: report at bugs.python.org (Addons Zz) Date: Tue, 08 Jan 2019 21:57:48 +0000 Subject: [New-bugs-announce] [issue35687] The unittest module diff is missing/forgetting/not putting newline before + and ?
for some inputs Message-ID: <1546984668.06.0.145226274602.issue35687@roundup.psfhosted.org> New submission from Addons Zz : Create this program and run with `Python 3.6.3`:

```python
import unittest


class StdErrUnitTests(unittest.TestCase):

    def test_function_name(self):
        expected = "testing.main_unit_tests.test_dictionaryBasicLogging:416 - dictionary\n" \
                   "testing.main_unit_tests.test_dictionaryBasicLogging:417 - dictionary {1: 'defined_chunk'}"
        actual = "15:49:35:912.348986 - testing.main_unit_tests - dictionary\n" \
                 "15:49:35:918.879986 - testing.main_unit_tests - dictionary {1: 'defined_chunk'}"
        self.assertEqual(expected, actual)


if __name__ == '__main__':
    unittest.main()
```

### Actual output

```diff
F
======================================================================
FAIL: test_function_name (__main__.StdErrUnitTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\bug_assert_unittests.py", line 13, in test_function_name
    self.assertEqual(expected, actual)
AssertionError: "testing.main_unit_tests.test_dictionaryBa[114 chars]nk'}" != "15:49:35:912.348986 - testing.main_unit_t[94 chars]nk'}"
- testing.main_unit_tests.test_dictionaryBasicLogging:416 - dictionary
- testing.main_unit_tests.test_dictionaryBasicLogging:417 - dictionary {1: 'defined_chunk'}+ 15:49:35:912.348986 - testing.main_unit_tests - dictionary
+ 15:49:35:918.879986 - testing.main_unit_tests - dictionary {1: 'defined_chunk'}
----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
```

### Expected output

```diff
F
======================================================================
FAIL: test_function_name (__main__.StdErrUnitTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\bug_assert_unittests.py", line 13, in test_function_name
    self.assertEqual(expected, actual)
AssertionError: "testing.main_unit_tests.test_dictionaryBa[114 chars]nk'}" != "15:49:35:912.348986 - testing.main_unit_t[94 chars]nk'}"
- testing.main_unit_tests.test_dictionaryBasicLogging:416 - dictionary
- testing.main_unit_tests.test_dictionaryBasicLogging:417 - dictionary {1: 'defined_chunk'}
+ 15:49:35:912.348986 - testing.main_unit_tests - dictionary
+ 15:49:35:918.879986 - testing.main_unit_tests - dictionary {1: 'defined_chunk'}
----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
```

### Differences between actual and expected output

```diff
@@ -7,7 +7,8 @@
     self.assertEqual(expected, actual)
 AssertionError: "testing.main_unit_tests.test_dictionaryBa[114 chars]nk'}" != "15:49:35:912.348986 - testing.main_unit_t[94 chars]nk'}"
 - testing.main_unit_tests.test_dictionaryBasicLogging:416 - dictionary
-- testing.main_unit_tests.test_dictionaryBasicLogging:417 - dictionary {1: 'defined_chunk'}+ 15:49:35:912.348986 - testing.main_unit_tests - dictionary
+- testing.main_unit_tests.test_dictionaryBasicLogging:417 - dictionary {1: 'defined_chunk'}
++ 15:49:35:912.348986 - testing.main_unit_tests - dictionary
 + 15:49:35:918.879986 - testing.main_unit_tests - dictionary {1: 'defined_chunk'}
```

This kind of bug frequently happens. In this case, it is not putting a new line before the `+`. In other cases, it is not putting a new line before the `?`. ---------- components: Tests, Windows messages: 333258 nosy: addons_zz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: The unittest module diff is missing/forgetting/not putting newline before + and ?
for some inputs type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 17:45:20 2019 From: report at bugs.python.org (mattip) Date: Tue, 08 Jan 2019 22:45:20 +0000 Subject: [New-bugs-announce] [issue35688] "pip install --user numpy" fails on Python from the Windows Store Message-ID: <1546987520.17.0.731862977401.issue35688@roundup.psfhosted.org> New submission from mattip : After enabling Insider and installing Python 3.7 from the Windows Store, I open a cmd window and do `pip install --user numpy`, which runs to completion. But I cannot `import numpy`. The NumPy `multiarray` c-extension module in the `numpy/core` directory depends on an `OpenBLAS` DLL that is installed into the `numpy/.libs` directory. But even after adding that directory to the `PATH` before running python (and checking with `depends.exe` that the `multiarray` c-extension module is now not missing any dependencies) I still cannot `import numpy`. See also NumPy issue https://github.com/numpy/numpy/issues/12667 ---------- components: Windows messages: 333262 nosy: brett.cannon, mattip, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: "pip install --user numpy" fails on Python from the Windows Store versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 17:50:52 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Tue, 08 Jan 2019 22:50:52 +0000 Subject: [New-bugs-announce] [issue35689] IDLE: Docstrings and test for colorizer Message-ID: <1546987852.48.0.500936785043.issue35689@roundup.psfhosted.org> New submission from Cheryl Sabella : Add docstrings and unittests for colorizer.py.
---------- assignee: terry.reedy components: IDLE messages: 333263 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Docstrings and test for colorizer type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 19:07:08 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 09 Jan 2019 00:07:08 +0000 Subject: [New-bugs-announce] [issue35690] IDLE: Fix and test debugger. Message-ID: <1546992428.61.0.963144948304.issue35690@roundup.psfhosted.org> New submission from Terry J. Reedy : Move PR 11451 from #35668. * Remove use of blank comments to make blank lines. * Greatly expand test_debugger. * Fix a couple of issues discovered while writing tests. (It is possible that one of these caused one of the reported debugger bugs, but we don't need to determine that now.) ---------- messages: 333267 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Fix and test debugger. 
type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 20:39:49 2019 From: report at bugs.python.org (Wencan Deng) Date: Wed, 09 Jan 2019 01:39:49 +0000 Subject: [New-bugs-announce] [issue35691] cpython3.7.2 make test failed Message-ID: <1546997989.54.0.929715073405.issue35691@roundup.psfhosted.org> New submission from Wencan Deng : os: Linux skynet 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 GNU/Linux gcc: gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516 FAIL: test_startup_imports (test.test_site.StartupImportTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/wencan/Downloads/Python-3.7.2/Lib/test/test_site.py", line 532, in test_startup_imports self.assertFalse(modules.intersection(collection_mods), stderr) AssertionError: {'collections', 'operator', 'functools', 'types', 'keyword', 'reprlib', 'heapq'} is not false : import _frozen_importlib # frozen import _imp # builtin import '_thread' # import '_warnings' # import '_weakref' # # installing zipimport hook import 'zipimport' # # installed zipimport hook import '_frozen_importlib_external' # import '_io' # import 'marshal' # import 'posix' # import _thread # previously loaded ('_thread') import '_thread' # import _weakref # previously loaded ('_weakref') import '_weakref' # # /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/__init__.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__init__.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/__init__.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/codecs.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/codecs.py # code object from 
'/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/codecs.cpython-37.pyc' import '_codecs' # import 'codecs' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f07a940> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/aliases.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/aliases.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/aliases.cpython-37.pyc' import 'encodings.aliases' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f095358> import 'encodings' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f07a3c8> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/utf_8.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/utf_8.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/utf_8.cpython-37.pyc' import 'encodings.utf_8' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f099080> import '_signal' # # /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/latin_1.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/latin_1.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/encodings/__pycache__/latin_1.cpython-37.pyc' import 'encodings.latin_1' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f099b38> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/io.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/io.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/io.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/abc.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/abc.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/abc.cpython-37.pyc' import '_abc' # import 'abc' # 
<_frozen_importlib_external.SourceFileLoader object at 0x7fac3f0a4160> import 'io' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f099d68> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_bootlocale.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/_bootlocale.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_bootlocale.cpython-37.pyc' import '_locale' # import '_bootlocale' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f0a4780> Python 3.7.2 (default, Jan 9 2019, 09:31:17) [GCC 6.3.0 20170516] on linux Type "help", "copyright", "credits" or "license" for more information. # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/site.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/site.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/site.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/os.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/os.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/os.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/stat.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/stat.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/stat.cpython-37.pyc' import '_stat' # import 'stat' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f042128> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/posixpath.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/posixpath.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/posixpath.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/genericpath.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/genericpath.py # code object from 
'/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/genericpath.cpython-37.pyc' import 'genericpath' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f046be0> import 'posixpath' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f042860> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_collections_abc.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/_collections_abc.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_collections_abc.cpython-37.pyc' import '_collections_abc' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f050278> import 'os' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f0afda0> # /home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_sitebuiltins.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/build/../Lib/_sitebuiltins.py # code object from '/home/wencan/Downloads/Python-3.7.2/build/../Lib/__pycache__/_sitebuiltins.cpython-37.pyc' import '_sitebuiltins' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f034198> # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/types.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/types.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/types.cpython-37.pyc' import 'types' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f0080b8> # /home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/__init__.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/importlib/__init__.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/__init__.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/warnings.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/warnings.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/warnings.cpython-37.pyc' import 'warnings' # <_frozen_importlib_external.SourceFileLoader 
object at 0x7fac3f008f98> import 'importlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f008b70> # /home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/util.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/importlib/util.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/util.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/abc.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/importlib/abc.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/abc.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/machinery.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/importlib/machinery.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/importlib/__pycache__/machinery.cpython-37.pyc' import 'importlib.machinery' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f01cb70> import 'importlib.abc' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f01c320> # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/contextlib.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/contextlib.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/contextlib.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/Lib/collections/__pycache__/__init__.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/collections/__init__.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/collections/__pycache__/__init__.cpython-37.pyc' # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/operator.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/operator.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/operator.cpython-37.pyc' import '_operator' # import 'operator' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efd12b0> # 
/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/keyword.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/keyword.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/keyword.cpython-37.pyc' import 'keyword' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efdd358> # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/heapq.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/heapq.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/heapq.cpython-37.pyc' # extension module '_heapq' loaded from '/home/wencan/Downloads/Python-3.7.2/build/build/lib.linux-x86_64-3.7/_heapq.cpython-37m-x86_64-linux-gnu.so' # extension module '_heapq' executed from '/home/wencan/Downloads/Python-3.7.2/build/build/lib.linux-x86_64-3.7/_heapq.cpython-37m-x86_64-linux-gnu.so' import '_heapq' # <_frozen_importlib_external.ExtensionFileLoader object at 0x7fac3efe3278> import 'heapq' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efddcc0> import 'itertools' # # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/reprlib.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/reprlib.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/reprlib.cpython-37.pyc' import 'reprlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efe3358> import '_collections' # import 'collections' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efb4470> # /home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/functools.cpython-37.pyc matches /home/wencan/Downloads/Python-3.7.2/Lib/functools.py # code object from '/home/wencan/Downloads/Python-3.7.2/Lib/__pycache__/functools.cpython-37.pyc' import '_functools' # import 'functools' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3efb4860> import 'contextlib' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f01c9e8> import 'importlib.util' # 
<_frozen_importlib_external.SourceFileLoader object at 0x7fac3f011630> # possible namespace for /usr/local/lib/python3.7/site-packages/sphinxcontrib import 'site' # <_frozen_importlib_external.SourceFileLoader object at 0x7fac3f0adac8> ---------- components: Build messages: 333271 nosy: Wencan Deng priority: normal severity: normal status: open title: cpython3.7.2 make test failed type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 8 22:00:15 2019 From: report at bugs.python.org (Jordan Hueckstaedt) Date: Wed, 09 Jan 2019 03:00:15 +0000 Subject: [New-bugs-announce] [issue35692] pathlib.Path.exists() on non-existent drive raises WinError instead of returning False Message-ID: <1547002815.07.0.33405132487.issue35692@roundup.psfhosted.org> New submission from Jordan Hueckstaedt : Tested in 3.7.0 on windows 10. This looks related to https://bugs.python.org/issue22759 >>> import pathlib >>> pathlib.Path(r"E:\Whatever\blah.txt").exists() # This drive doesn't exist Traceback (most recent call last): File "", line 1, in File "C:\Users\jordanhu\AppData\Local\Continuum\anaconda2\envs\py3\lib\pathlib.py", line 1318, in exists self.stat() File "C:\Users\jordanhu\AppData\Local\Continuum\anaconda2\envs\py3\lib\pathlib.py", line 1140, in stat return self._accessor.stat(self) PermissionError: [WinError 21] The device is not ready: 'E:\\Whatever\\blah.txt' >>> pathlib.Path(r"C:\Whatever\blah.txt").exists() # This drive exists False ---------- messages: 333275 nosy: Jordan Hueckstaedt priority: normal severity: normal status: open title: pathlib.Path.exists() on non-existent drive raises WinError instead of returning False type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 00:01:34 2019 From: report at bugs.python.org (Jorge Ramos) Date: Wed, 
09 Jan 2019 05:01:34 +0000 Subject: [New-bugs-announce] [issue35693] test_httpservers fails Message-ID: <1547010094.25.0.275369600875.issue35693@roundup.psfhosted.org> New submission from Jorge Ramos : when running test_httpservers fails: 0:04:53 [171/407] test_httpservers E:\RepoGiT\3.6\lib\socket.py:144: ResourceWarning: unclosed _socket.socket.__init__(self, family, type, proto, fileno) E:\RepoGiT\3.6\lib\test\support\__init__.py:1542: ResourceWarning: unclosed gc.collect() test test_httpservers failed -- multiple errors occurred; run in verbose mode for details full run on attached file ---------- components: Tests, Windows files: run.txt messages: 333279 nosy: neyuru, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_httpservers fails versions: Python 3.6 Added file: https://bugs.python.org/file48034/run.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 00:08:13 2019 From: report at bugs.python.org (Jorge Ramos) Date: Wed, 09 Jan 2019 05:08:13 +0000 Subject: [New-bugs-announce] [issue35694] missing modules on test suite Message-ID: <1547010493.14.0.290956143834.issue35694@roundup.psfhosted.org> New submission from Jorge Ramos : when running some tests where skipped due to missing modules: 0:02:52 [ 86/407] test_crypt test_crypt skipped -- No module named '_crypt' 0:02:55 [ 93/407] test_dbm_gnu test_dbm_gnu skipped -- No module named '_gdbm' 0:02:55 [ 94/407] test_dbm_ndbm -- test_dbm_gnu skipped test_dbm_ndbm skipped -- No module named '_dbm' 0:05:25 [183/407/1] test_ioctl test_ioctl skipped -- No module named 'fcntl' 0:07:43 [224/407/1] test_nis test_nis skipped -- No module named 'nis' full verbose on attached file ---------- components: Tests, Windows files: run.txt messages: 333280 nosy: neyuru, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: missing modules on 
test suite versions: Python 3.6 Added file: https://bugs.python.org/file48035/run.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 00:13:54 2019 From: report at bugs.python.org (Jorge Ramos) Date: Wed, 09 Jan 2019 05:13:54 +0000 Subject: [New-bugs-announce] [issue35695] missing attributes Message-ID: <1547010834.19.0.61434721558.issue35695@roundup.psfhosted.org> New submission from Jorge Ramos : while running : 0:04:26 [136/407] test_fork1 test_fork1 skipped -- object has no attribute 'fork' 0:11:56 [384/407/1] test_wait4 -- test_wait3 skipped test_wait4 skipped -- object has no attribute 'fork' see attached file ---------- components: Tests, Windows files: run.txt messages: 333282 nosy: neyuru, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: missing attributes versions: Python 3.6 Added file: https://bugs.python.org/file48036/run.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 04:26:39 2019 From: report at bugs.python.org (Ma Lin) Date: Wed, 09 Jan 2019 09:26:39 +0000 Subject: [New-bugs-announce] [issue35696] remove unnecessary operation in long_compare() Message-ID: <1547025999.59.0.0318422017652.issue35696@roundup.psfhosted.org> New submission from Ma Lin : static int long_compare(PyLongObject *a, PyLongObject *b) { .... } This function in /Objects/longobject.c is used to compare two PyLongObject's value. We only need the sign, converting to -1 or +1 is not necessary. 
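The report's point can be illustrated at the Python level (a sketch of the idea, not the C code itself): the callers of long_compare() only ever test the sign of the result, so any negative/zero/positive value is sufficient and normalizing to exactly -1/0/+1 is wasted work.

```python
# Python-level sketch of why normalizing long_compare()'s result to
# exactly -1/0/+1 is unnecessary: callers only inspect the sign.
def apply_comparison(diff, op):
    """Map a raw comparison result (any negative/zero/positive int) to a bool."""
    return {
        '<':  diff < 0,
        '<=': diff <= 0,
        '==': diff == 0,
        '!=': diff != 0,
        '>':  diff > 0,
        '>=': diff >= 0,
    }[op]

# A raw digit difference like -5 behaves exactly like a normalized -1:
assert apply_comparison(-5, '<') == apply_comparison(-1, '<')
```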
---------- messages: 333293 nosy: Ma Lin priority: normal severity: normal status: open title: remove unnecessary operation in long_compare() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 06:31:48 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 09 Jan 2019 11:31:48 +0000 Subject: [New-bugs-announce] [issue35697] decimal: formatter error if LC_NUMERIC uses a different encoding than LC_CTYPE Message-ID: <1547033508.44.0.197270282125.issue35697@roundup.psfhosted.org> New submission from STINNER Victor : The decimal module fails to format a number with the "n" format type if the LC_NUMERIC locale uses a different encoding than the LC_CTYPE locale. Example with attached decimal_locale.py on Fedora 29 with Python 3.7.2: $ python3 decimal_locale.py LC_NUMERIC locale: uk_UA.koi8u decimal_point: ',' = ',' = U+002c thousands_sep: '\xa0' = '\xa0' = U+00a0 Traceback (most recent call last): File "/home/vstinner/decimal_locale.py", line 16, in text = format(num, "n") ValueError: invalid decimal point or unsupported combination of LC_CTYPE and LC_NUMERIC The attached PR modifies the _decimal module to support this corner case. Note: I already wrote PR 5191 last year, but I abandoned that PR in the meantime. -- Supporting a non-ASCII decimal point and thousands separator has a long history and a list of now-fixed issues: * bpo-7442 * bpo-13706 * bpo-25812 * bpo-28604 (LC_MONETARY) * bpo-31900 * bpo-33954 I even wrote an article about these bugs :-) https://github.com/python/cpython/pull/5191 Python 3.7.2 now supports different encodings for the LC_NUMERIC, LC_MONETARY and LC_CTYPE locales. format(int, "n") temporarily sets LC_CTYPE to LC_NUMERIC to decode decimal_point and thousands_sep from the correct encoding. The LC_CTYPE locale is only changed if it differs from the LC_NUMERIC locale and if the decimal point and/or thousands separator is non-ASCII. 
It's implemented in this function: int _Py_GetLocaleconvNumeric(struct lconv *lc, PyObject **decimal_point, PyObject **thousands_sep) This function is used by locale.localeconv() and format() (for the "n" type). I decided to fix the bug while I was fixing other locale bugs, because we have now got enough bug reports. Copy of my msg309980: """ > I would not consider this a bug in Python, but rather in the locale settings passed to setlocale(). For the past 10 years, I repeated to every single user I met that "Python 3 is right, your system setup is wrong". But that's a waste of time. People continue to associate Python 3 and Unicode with annoying bugs, because they don't understand how locales work. Instead of having to repeat to each user that "hum, maybe your config is wrong", I prefer to support this non-conventional setup and work as expected ("it just works"). With my latest implementation, setlocale() is only done when LC_CTYPE and LC_NUMERIC are different, which is the corner case that "shouldn't occur in practice". """ ---------- components: Library (Lib) files: decimal_locale.py messages: 333302 nosy: vstinner priority: normal severity: normal status: open title: decimal: formatter error if LC_NUMERIC uses a different encoding than LC_CTYPE versions: Python 3.8 Added file: https://bugs.python.org/file48038/decimal_locale.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 06:46:12 2019 From: report at bugs.python.org (Jonathan Fine) Date: Wed, 09 Jan 2019 11:46:12 +0000 Subject: [New-bugs-announce] [issue35698] Division by 2 in statistics.median Message-ID: <1547034372.58.0.0557058004142.issue35698@roundup.psfhosted.org> New submission from Jonathan Fine : When len(data) is even, median returns the average of the two middle values. 
This average is computed using i = n//2 return (data[i - 1] + data[i])/2 This results in the following behaviour: >>> from fractions import Fraction >>> from statistics import median >>> F1 = Fraction(1, 1) >>> median([1]) 1 >>> median([1, 1]) # Example 1. 1.0 >>> median([F1]) Fraction(1, 1) >>> median([F1, F1]) Fraction(1, 1) >>> median([2, 2, 1, F1]) # Example 2. Fraction(3, 2) >>> median([2, 2, F1, 1]) # Example 3. 1.5 Perhaps, when len(data) is even, it would be better to test the two middle values for equality. This would resolve Example 1. It would not help with Examples 2 and 3, which might not have a satisfactory solution. See also issue 33084. ---------- messages: 333305 nosy: jfine2358 priority: normal severity: normal status: open title: Division by 2 in statistics.median type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 07:53:53 2019 From: report at bugs.python.org (Marc Schlaich) Date: Wed, 09 Jan 2019 12:53:53 +0000 Subject: [New-bugs-announce] [issue35699] distutils cannot find Build Tools 2017 since 3.7.2 Message-ID: <1547038433.43.0.625372378947.issue35699@roundup.psfhosted.org> New submission from Marc Schlaich : vswhere.exe doesn't return Build Tools 2017 by default. This means Build Tools 2017 is not detected by distutils in 3.7.2 and you get the famous "Microsoft Visual C++ 14.0 is required" error. Please see https://github.com/Microsoft/vswhere/issues/125 for more details. The solution is to add "-products", "*", to the vswhere.exe call. This is a regression of https://bugs.python.org/issue35067. 
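For reference, a sketch of the amended query (the extra "-products", "*" pair is the fix the report proposes; the other flags and the component ID shown are the ones distutils uses to locate the VC++ toolset, reproduced here from memory rather than from the report):

```python
def vswhere_command(vswhere_path="vswhere.exe"):
    """Build a vswhere invocation that also matches Build Tools installs.

    Without "-products *", vswhere only reports full Visual Studio
    products, which is why Build Tools 2017 goes undetected.
    """
    return [
        vswhere_path,
        "-products", "*",          # the proposed fix: match every product line
        "-latest",
        "-requires", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
        "-property", "installationPath",
    ]

# On Windows one would run e.g.:
#   subprocess.check_output(vswhere_command(), encoding="mbcs")
```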
---------- components: Distutils messages: 333312 nosy: dstufft, eric.araujo, schlamar, steve.dower priority: normal severity: normal status: open title: distutils cannot find Build Tools 2017 since 3.7.2 type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 08:31:37 2019 From: report at bugs.python.org (Peter Vex) Date: Wed, 09 Jan 2019 13:31:37 +0000 Subject: [New-bugs-announce] [issue35700] Place, Pack and Grid should return the widget Message-ID: <1547040697.39.0.463281400773.issue35700@roundup.psfhosted.org> New submission from Peter Vex : When you want to simply place a widget on a window and also store the reference to that widget in a variable, you can't do that in one line, which is really unpleasant, because when you create a new widget these are usually the first things you want to do with it, and breaking it into two lines just makes things more complicated. 
For example, if you want to create 3 labels, place them next to each other and store their references: import tkinter as tk root = tk.Tk() # you can't do that: # here the variables are assigned None, since grid() returns 'nothing' label1 = tk.Label(root).grid(row=0, column=0) label2 = tk.Label(root).grid(row=0, column=1) label3 = tk.Label(root).grid(row=0, column=2) # actually, you must do this: label1 = tk.Label(root) label1.grid(row=0, column=0) label2 = tk.Label(root) label2.grid(row=0, column=1) label3 = tk.Label(root) label3.grid(row=0, column=2) ---------- components: Tkinter messages: 333318 nosy: Peter Vex priority: normal pull_requests: 10980 severity: normal status: open title: Place, Pack and Grid should return the widget type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 9 12:37:04 2019 From: report at bugs.python.org (Josh Rosenberg) Date: Wed, 09 Jan 2019 17:37:04 +0000 Subject: [New-bugs-announce] [issue35701] 3.8 needlessly breaks weak references for UUIDs Message-ID: <1547055424.22.0.868480715167.issue35701@roundup.psfhosted.org> New submission from Josh Rosenberg : I 100% agree with the aim of #30977 (reduce uuid.UUID() memory footprint), but it broke compatibility for any application that was weak referencing UUID instances (which seems a reasonable thing to do; a strong reference to a UUID can be stored in a single master container or passed through a processing pipeline, while also keying a WeakKeyDictionary with cached supplementary data). I specifically noticed this because I was about to do that very thing in a processing flow, then noticed UUIDs in 3.6 were a bit heavyweight, memory-wise, went to file a bug on memory usage to add __slots__, and discovered someone had already done it for me. Rather than break compatibility in 3.8, why not simply include '__weakref__' in the __slots__ listing? 
It would also remove the need for a What's New level description of the change, since the description informs people that: 1. Instances can no longer be weak-referenced (which adding __weakref__ would undo) 2. Instances can no longer add arbitrary attributes. (which was already the case in terms of documented API, programmatically enforced via a __setattr__ override, so it seems an unnecessary thing to highlight outside of Misc/NEWS) The cost of changing __slots__ from: __slots__ = ('int', 'is_safe') to: __slots__ = 'int', 'is_safe', '__weakref__' would only be 4-8 bytes (for 64-bit Python, the total cost of object + int would go from 100 to 108 bytes, still about half of the pre-__slots__ cost of 212 bytes), and would avoid breaking any code that might rely on being able to weak reference UUIDs. I've marked this as release blocker for the time being because if 3.8 actually releases with this change, it will cause back compat issues that might prevent people relying on UUID weak references from upgrading their code. 
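The suggested fix is easy to verify with a toy class (a sketch mirroring the proposal, not the actual uuid.UUID source):

```python
import weakref

class WithoutWeakref:
    # __slots__ without '__weakref__': instances cannot be weakly referenced
    __slots__ = ('int', 'is_safe')

class WithWeakref:
    # proposed layout: 4-8 extra bytes per instance, weak references work
    __slots__ = ('int', 'is_safe', '__weakref__')

obj = WithWeakref()
ref = weakref.ref(obj)          # succeeds
assert ref() is obj

try:
    weakref.ref(WithoutWeakref())   # TypeError: cannot create weak reference
except TypeError:
    pass
```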
---------- components: Library (Lib) keywords: 3.7regression, easy messages: 333338 nosy: Nir Soffer, josh.r, serhiy.storchaka, taleinat, vstinner, wbolster priority: release blocker severity: normal stage: needs patch status: open title: 3.8 needlessly breaks weak references for UUIDs type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 02:43:37 2019 From: report at bugs.python.org (Ricardo F) Date: Thu, 10 Jan 2019 07:43:37 +0000 Subject: [New-bugs-announce] [issue35702] clock_gettime: Add new identifier CLOCK_UPTIME_RAW for Darwin Message-ID: <1547106217.28.0.153121423636.issue35702@roundup.psfhosted.org> New submission from Ricardo F : Finally, since the release of OSX 10.12, the equivalent of the FreeBSD and OpenBSD "CLOCK_UPTIME" is available on Darwin under the name "CLOCK_UPTIME_RAW": CLOCK_UPTIME FreeBSD [1]: Starts at zero when the kernel boots and increments monotonically in SI seconds while the machine is running. CLOCK_UPTIME OpenBSD [2]: Time whose absolute value is the time the system has been running and not suspended, providing accurate uptime measurement, both absolute and interval. CLOCK_UPTIME_RAW Darwin [3]: Clock that increments monotonically, tracking the time since an arbitrary point, unaffected by frequency or time adjustments, and that does not increment while the system is asleep. It would be useful to have it available in the time module [4] for this platform. As the behaviour is equivalent, maybe it can be assigned to the existing time.CLOCK_UPTIME constant. 
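A portable sketch of how such a clock could be consumed once exposed (time.CLOCK_UPTIME_RAW is assumed to exist only on platforms that define it, per this proposal; elsewhere the sketch falls back to the standard monotonic clock):

```python
import time

def uptime_seconds():
    """Best-effort monotonic elapsed time in seconds."""
    # Only use CLOCK_UPTIME_RAW where the platform actually exposes it
    # (macOS >= 10.12 in this proposal); fall back portably otherwise.
    clock_id = getattr(time, "CLOCK_UPTIME_RAW", None)
    if clock_id is not None:
        return time.clock_gettime(clock_id)
    return time.monotonic()

first = uptime_seconds()
second = uptime_seconds()
assert second >= first    # the clock never goes backwards
```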
Thanks, [1] - https://www.freebsd.org/cgi/man.cgi?query=clock_gettime [2] - https://man.openbsd.org/clock_gettime.2 [3] - http://www.manpagez.com/man/3/clock_gettime_nsec_np/ [4] - https://docs.python.org/3/library/time.htm ---------- components: macOS messages: 333366 nosy: ned.deily, rfrail3, ronaldoussoren priority: normal severity: normal status: open title: clock_gettime: Add new identifier CLOCK_UPTIME_RAW for Darwin type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 03:38:15 2019 From: report at bugs.python.org (Yoong Hor Meng) Date: Thu, 10 Jan 2019 08:38:15 +0000 Subject: [New-bugs-announce] [issue35703] Underscores in numeric literals cannot be before or after decimal (.) Message-ID: <1547109495.23.0.369119620159.issue35703@roundup.psfhosted.org> New submission from Yoong Hor Meng : s = 1_234.567_8 print(float(s)) # It works s = 1_234._567 print(float(s)) # It does not work s = 1_234_.567 print(float(s)) # It does not work too ---------- components: Interpreter Core messages: 333368 nosy: yoonghm priority: normal severity: normal status: open title: Underscores in numeric literals cannot be before or after decimal (.) type: crash versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 04:12:51 2019 From: report at bugs.python.org (Michael Felt) Date: Thu, 10 Jan 2019 09:12:51 +0000 Subject: [New-bugs-announce] [issue35704] On AIX, test_unpack_archive_xztar fails with default MAXDATA settings Message-ID: <1547111571.31.0.315459583294.issue35704@roundup.psfhosted.org> New submission from Michael Felt : By default AIX builds 32-bit applications - and the combined .data, .bss and .stack areas share one memory segment of 256 Mbyte. 
This can be modified by either specifying a larger value for maxdata during linking (e.g., with LDFLAGS=-bmaxdata:0x40000000) or using the program ldedit (e.g., ldedit -b maxdata:0x40000000). The subtest test_shutil.test_unpack_archive_xztar fails with the default. The patch here looks at the MAXDATA value of the executable XCOFF headers and skips the test when AIX is 32-bit and MAXDATA < 0x20000000. This helps the result of AIX bots to be more accurate - as this so-called failure is not an issue with python itself. ---------- components: Tests messages: 333370 nosy: Michael.Felt priority: normal severity: normal status: open title: On AIX, test_unpack_archive_xztar fails with default MAXDATA settings type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 04:49:09 2019 From: report at bugs.python.org (ossdev) Date: Thu, 10 Jan 2019 09:49:09 +0000 Subject: [New-bugs-announce] [issue35705] libffi support is not there for windows on ARM64 Message-ID: <1547113749.84.0.713221399914.issue35705@roundup.psfhosted.org> Change by ossdev : ---------- components: ctypes nosy: ossdev07 priority: normal severity: normal status: open title: libffi support is not there for windows on ARM64 type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 09:04:59 2019 From: report at bugs.python.org (Dieter Weber) Date: Thu, 10 Jan 2019 14:04:59 +0000 Subject: [New-bugs-announce] [issue35706] Making an embedded Python interpreter use a venv is difficult Message-ID: <1547129099.62.0.924394155356.issue35706@roundup.psfhosted.org> New submission from Dieter Weber : Python virtual environments are awesome! Using venvs with an embedded Python interpreter has proven difficult, unfortunately. With conda environments it works. 
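From the Python side, a correctly activated venv is visible through the prefix attributes that Py_Initialize() would need to set up (a diagnostic sketch only, not a fix for the embedding problem):

```python
import sys

def describe_environment():
    """Report whether this interpreter is running inside a venv.

    In a venv, sys.prefix points at the environment while sys.base_prefix
    points at the base installation; in a plain installation (or when an
    embedded interpreter fails to pick up the venv) the two are equal.
    """
    return {
        "prefix": sys.prefix,
        "base_prefix": sys.base_prefix,
        "in_venv": sys.prefix != sys.base_prefix,
    }

info = describe_environment()
```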
See appended a sample file to reproduce the behavior. The core of the problem seems to be that a venv doesn't contain a full Python installation, and Py_Initialize() apparently doesn't support setting up the combination of venv directories and base installation correctly, i.e. setting sys.prefix and sys.base_prefix and potentially other values. Observed behavior when trying to use a venv: """ Initializing... Fatal Python error: Py_Initialize: unable to load the file system codec ModuleNotFoundError: No module named 'encodings' Current thread 0x00001e90 (most recent call first): """ Expected behavior: Setting Py_SetPythonHome() to a venv works and sets up all paths and prefixes correctly to use the venv, just like it does for a conda environment. ---------- files: Source.cpp messages: 333378 nosy: Dieter Weber priority: normal severity: normal status: open title: Making an embedded Python interpreter use a venv is difficult type: enhancement versions: Python 3.6 Added file: https://bugs.python.org/file48039/Source.cpp _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 10:58:39 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 10 Jan 2019 15:58:39 +0000 Subject: [New-bugs-announce] [issue35707] time.sleep() should support objects with __float__ Message-ID: <1547135919.8.0.0876776516097.issue35707@roundup.psfhosted.org> New submission from Jeroen Demeyer : This used to work correctly in Python 2: class Half(object): def __float__(self): return 0.5 import time time.sleep(Half()) With Python 3.6, one gets instead Traceback (most recent call last): File "test.py", line 6, in time.sleep(Half()) TypeError: an integer is required (got type Half) ---------- messages: 333391 nosy: jdemeyer priority: normal severity: normal status: open title: time.sleep() should support objects with __float__ versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ 
Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 11:08:02 2019 From: report at bugs.python.org (Nj Hsiong) Date: Thu, 10 Jan 2019 16:08:02 +0000 Subject: [New-bugs-announce] [issue35708] lib2to3 failed to convert as refactor's fixes not search.pyc files Message-ID: <1547136482.81.0.227627171679.issue35708@roundup.psfhosted.org> New submission from Nj Hsiong : python3's lib2to3 would fail in silence if python3 and its packages are installed as compiled .pyc files. Root cause is in Lib/lib2to3/refactor.py, the function get_all_fix_names only searches '.py' fix names. =========below is workaround========= --- a/Lib/lib2to3/refactor.py +++ b/Lib/lib2to3/refactor.py @@ -37,6 +37,12 @@ if remove_prefix: name = name[4:] fix_names.append(name[:-3]) + if name.startswith("fix_") and name.endswith(".pyc"): + if remove_prefix: + name = name[4:] + name = name[:-4] + if name not in fix_names: + fix_names.append(name) return fix_names ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 333393 nosy: njhsio priority: normal severity: normal status: open title: lib2to3 failed to convert as refactor's fixes not search.pyc files type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 11:47:37 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Jan 2019 16:47:37 +0000 Subject: [New-bugs-announce] [issue35709] test_ssl fails on Fedora 29: test_min_max_version() Message-ID: <1547138857.4.0.0864082014803.issue35709@roundup.psfhosted.org> New submission from STINNER Victor : test_ssl fails on Fedora 29: vstinner at apu$ ./python -m test test_ssl -m test_min_max_version -v == CPython 3.8.0a0 (heads/pytime_inf:aaea5b25d1, Jan 10 2019, 17:40:16) [GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] == Linux-4.19.13-300.fc29.x86_64-x86_64-with-glibc2.28 little-endian == cwd: 
/home/vstinner/prog/python/master/build/test_python_26069 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:00 load avg: 2.33 [1/1] test_ssl test_ssl: testing with 'OpenSSL 1.1.1 FIPS 11 Sep 2018' (1, 1, 1, 0, 15) under 'Linux-4.19.13-300.fc29.x86_64-x86_64-with-glibc2.28' HAS_SNI = True OP_ALL = 0x80000054 OP_NO_TLSv1_1 = 0x10000000 test_min_max_version (test.test_ssl.ContextTests) ... FAIL test_min_max_version (test.test_ssl.ThreadedTests) ... server: new connection from ('127.0.0.1', 35268) server: connection cipher is now ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256) server: selected protocol is now None server: new connection from ('127.0.0.1', 40390) server: connection cipher is now ('ECDHE-RSA-AES256-SHA', 'TLSv1.0', 256) server: selected protocol is now None server: new connection from ('127.0.0.1', 36674) server: bad connection attempt from ('127.0.0.1', 36674): Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", line 2150, in wrap_conn self.sslconn = self.server.context.wrap_socket( File "/home/vstinner/prog/python/master/Lib/ssl.py", line 405, in wrap_socket return self.sslsocket_class._create( File "/home/vstinner/prog/python/master/Lib/ssl.py", line 853, in _create self.do_handshake() File "/home/vstinner/prog/python/master/Lib/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1055) ok ====================================================================== FAIL: test_min_max_version (test.test_ssl.ContextTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", line 1069, in test_min_max_version self.assertEqual( AssertionError: != ---------------------------------------------------------------------- Ran 2 tests in 0.026s FAILED (failures=1) test test_ssl failed test_ssl 
failed == Tests result: FAILURE == 1 test failed: test_ssl Total duration: 269 ms Tests result: FAILURE vstinner at apu$ ./python -m test.pythoninfo|grep ^ssl ssl.HAS_SNI: True ssl.OPENSSL_VERSION: OpenSSL 1.1.1 FIPS 11 Sep 2018 ssl.OPENSSL_VERSION_INFO: (1, 1, 1, 0, 15) ssl.OP_ALL: 0x80000054 ssl.OP_NO_TLSv1_1: 0x10000000 ---------- components: Tests messages: 333402 nosy: vstinner priority: normal severity: normal status: open title: test_ssl fails on Fedora 29: test_min_max_version() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 12:11:11 2019 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Thu, 10 Jan 2019 17:11:11 +0000 Subject: [New-bugs-announce] [issue35710] Make dataclasses.field() accept another name for __init__ field's name Message-ID: <1547140271.38.0.846532339527.issue35710@roundup.psfhosted.org> New submission from Rémi Lapeyre : When creating a class, I sometimes wish to get this behavior: class MyClass: def __init__(self, param): self._param = param def __repr__(self): return f"MyClass(param={self._param})" Unless I'm making a mistake, this behavior is not currently possible with dataclasses. I propose to change: field(*, default=MISSING, default_factory=MISSING, repr=True, hash=None, init=True, compare=True, metadata=None) to: field(*, default=MISSING, default_factory=MISSING, repr=True, hash=None, init=True, compare=True, metadata=None, target=None) with target being used as the init parameter name for this field and in the repr. If this is accepted, I can post the patch to make this change. 
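Pending such a target parameter, the closest current idiom needs an InitVar plus hand-written storage and repr (a sketch of the workaround the proposal would replace; the names come from the example above):

```python
from dataclasses import dataclass, InitVar

@dataclass
class MyClass:
    # InitVar gives __init__ the public parameter name...
    param: InitVar[int]

    def __post_init__(self, param):
        # ...but storing under the private name is manual,
        self._param = param

    def __repr__(self):
        # ...and so is the repr, since _param is not a real field
        # (a user-defined __repr__ is not overwritten by @dataclass).
        return f"MyClass(param={self._param})"

m = MyClass(42)
assert m._param == 42
assert repr(m) == "MyClass(param=42)"
```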
---------- messages: 333409 nosy: remi.lapeyre priority: normal severity: normal status: open title: Make dataclasses.field() accept another name for __init__ field's name versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 15:57:59 2019 From: report at bugs.python.org (Samuel Freilich) Date: Thu, 10 Jan 2019 20:57:59 +0000 Subject: [New-bugs-announce] [issue35711] Print information about an unexpectedly pending error before crashing Message-ID: <1547153879.73.0.173283479509.issue35711@roundup.psfhosted.org> New submission from Samuel Freilich : _PyObject_FastCallDict and _PyObject_FastCallKeywords assert that there is no pending exception before calling functions that might otherwise clobber the exception state. However, that doesn't produce very clear output for debugging, since the assert failure doesn't say anything about what the pending exception actually was. It would be better to print the pending exception first. ---------- messages: 333418 nosy: Samuel Freilich priority: normal severity: normal status: open title: Print information about an unexpectedly pending error before crashing _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 10 16:55:06 2019 From: report at bugs.python.org (Josh Rosenberg) Date: Thu, 10 Jan 2019 21:55:06 +0000 Subject: [New-bugs-announce] [issue35712] Make NotImplemented unusable in boolean context Message-ID: <1547157306.18.0.725390794489.issue35712@roundup.psfhosted.org> New submission from Josh Rosenberg : I don't really expect this to go anywhere until Python 4 (*maybe* 3.9 after a deprecation period), but it seems like it would have been a good idea to make NotImplementedType's __bool__ explicitly raise a TypeError (rather than leaving it unset, so NotImplemented evaluates as truthy). 
Any correct use of NotImplemented per its documented intent would never evaluate it in a boolean context, but rather use identity testing, e.g. back in the Py2 days, the canonical __ne__ delegation to __eq__ for any class should be implemented as something like:

    def __ne__(self, other):
        equal = self.__eq__(other)
        return equal if equal is NotImplemented else not equal

Problem is, a lot of folks would make mistakes like doing:

    def __ne__(self, other):
        return not self.__eq__(other)

which silently returns False when __eq__ returns NotImplemented, rather than returning NotImplemented and allowing Python to check the mirrored operation. Similar issues arise when hand-writing the other rich comparison operators in terms of each other.

It seems like, given NotImplemented is a sentinel value that should never be evaluated in a boolean context, at some point it might be nice to explicitly prevent it, to avoid errors like this.

Main argument against it is that I don't know of any other type/object that explicitly makes itself unevaluable in a boolean context, so this could be surprising if someone uses NotImplemented as a sentinel unrelated to its intended purpose and suffers the problem.

----------
messages: 333421
nosy: josh.r
priority: normal
severity: normal
status: open
title: Make NotImplemented unusable in boolean context
type: behavior

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Thu Jan 10 17:35:31 2019
From: report at bugs.python.org (Tasy)
Date: Thu, 10 Jan 2019 22:35:31 +0000
Subject: [New-bugs-announce] [issue35713] Fatal Python error: _PySys_BeginInit:
 can't initialize sys module
Message-ID: <1547159731.63.0.590496502832.issue35713@roundup.psfhosted.org>

New submission from Tasy :

. . .
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $?
-ne 0 ; then \
        echo "generate-posix-vars failed" ; \
        rm -f ./pybuilddir.txt ; \
        exit 1 ; \
fi
Fatal Python error: _PySys_BeginInit: can't initialize sys module

Current thread 0x00002b4e5f9bf400 (most recent call first):
Aborted (core dumped)
generate-posix-vars failed
make[1]: *** [pybuilddir.txt] Error 1
make[1]: Leaving directory `/usr/local/mysoftware/Python-3.7.2/build'
make: *** [profile-opt] Error 2

----------
components: Build
messages: 333423
nosy: Tasy
priority: normal
severity: normal
status: open
title: Fatal Python error: _PySys_BeginInit: can't initialize sys module
type: compile error
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Thu Jan 10 17:42:10 2019
From: report at bugs.python.org (Dan Snider)
Date: Thu, 10 Jan 2019 22:42:10 +0000
Subject: [New-bugs-announce] [issue35714] Document that the null character
 '\0' terminates a struct format spec
Message-ID: <1547160130.27.0.885629506261.issue35714@roundup.psfhosted.org>

New submission from Dan Snider :

i.e.:

    >>> from struct import calcsize
    >>> calcsize('\144\u0064\000xf\U00000031000\60d\121\U00000051')
    16

I'm sure some people think it's obvious or even expect the null character to signal EOF, but it probably isn't obvious at all to those without experience in lower level languages. It actually seems like Python goes out of its way to make sure everything treats the null character no more special than the letter "H", which is good.

At first glance I'd think something like this was just another trivial quirk of the language and not bring it up, but because the documentation doesn't mention it I actually got stuck on something related for half an hour when unit testing some dynamically generated format specs. Without going into unnecessary detail, what happened was that a typo in another tangentially related part of the test was enabling the generation of a rogue null byte.
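Until the behavior is documented (or changed), callers can guard against it themselves; `checked_calcsize` below is a name invented for this sketch, not a stdlib function:

```python
# Reject format specs containing an embedded NUL before handing them to
# struct, so a rogue null byte fails loudly instead of silently
# truncating the format
import struct

def checked_calcsize(fmt: str) -> int:
    if '\0' in fmt:
        raise ValueError('embedded null character in struct format')
    return struct.calcsize(fmt)

assert checked_calcsize('dd') == 2 * struct.calcsize('d')
try:
    checked_calcsize('d\0q')    # the NUL would otherwise end parsing early
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
```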
I'm bad at those "find face in the crowd" puzzles and this was hardly different, being literally camouflaged within a 300 character format spec containing a random mixture of escaped and non-escaped source characters in the forms: \Uffffffff, \uffff, \777, \xff, \x00, + latin/ascii.

If I'm not the only one who sees this as a slightly bigger deal than poor documentation, the fix is trivial with an extra call to PyBytes_GET_SIZE when null is found. But just because I can't think of a use case in allowing the null character to precede other characters in the format string doesn't mean there isn't one, which is why only documentation is currently selected.

----------
assignee: docs at python
components: Documentation, Library (Lib)
messages: 333424
nosy: bup, docs at python
priority: normal
severity: normal
status: open
title: Document that the null character '\0' terminates a struct format spec
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 03:36:21 2019
From: report at bugs.python.org (David Chevell)
Date: Fri, 11 Jan 2019 08:36:21 +0000
Subject: [New-bugs-announce] [issue35715] ProcessPool workers hold onto return
 value of last task in memory
Message-ID: <1547195781.25.0.0195934881381.issue35715@roundup.psfhosted.org>

New submission from David Chevell :

ProcessPoolExecutor workers will hold onto the return value of their last task in memory until the next task is received. Since the return value has already been propagated to the parent process's `Future` or else effectively discarded, this is holding onto objects unnecessarily.
Simple case to reproduce:

    import concurrent.futures
    import time

    executor = concurrent.futures.ProcessPoolExecutor(max_workers=1)

    def big_val():
        return [{1: 1} for i in range(1, 1000000)]

    executor.submit(big_val)

    # Observe the memory usage of the process worker during the sleep interval
    time.sleep(10)

This should be easily fixed by having the worker explicitly `del r` after calling `_sendback_result`, as it already does this for `call_item`.

----------
components: Library (Lib)
messages: 333444
nosy: dchevell
priority: normal
severity: normal
status: open
title: ProcessPool workers hold onto return value of last task in memory
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 04:32:48 2019
From: report at bugs.python.org (Ricardo Fraile)
Date: Fri, 11 Jan 2019 09:32:48 +0000
Subject: [New-bugs-announce] [issue35716] CLOCK_MONOTONIC_RAW available on
 macOS
Message-ID: <1547199168.5.0.0198252527214.issue35716@roundup.psfhosted.org>

New submission from Ricardo Fraile :

Add macOS to the CLOCK_MONOTONIC_RAW description because it is already available since 10.12.
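For reference, code probing this clock has to do so defensively, since the `time` module only exposes the constant where the platform supports it (Linux, and per this report macOS >= 10.12); a hedged sketch:

```python
# Look up CLOCK_MONOTONIC_RAW defensively rather than assuming it exists
import time

clk = getattr(time, 'CLOCK_MONOTONIC_RAW', None)
if clk is not None:
    t0 = time.clock_gettime(clk)
    t1 = time.clock_gettime(clk)
    assert t1 >= t0  # the raw monotonic clock never goes backwards
```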
----------
assignee: docs at python
components: Documentation
files: 001.patch
keywords: patch
messages: 333445
nosy: docs at python, rfrail3, vstinner
priority: normal
severity: normal
status: open
title: CLOCK_MONOTONIC_RAW available on macOS
type: enhancement
Added file: https://bugs.python.org/file48041/001.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 10:08:13 2019
From: report at bugs.python.org (STINNER Victor)
Date: Fri, 11 Jan 2019 15:08:13 +0000
Subject: [New-bugs-announce] [issue35717] enum.Enum error on sys._getframe(2)
Message-ID: <1547219293.78.0.972109062344.issue35717@roundup.psfhosted.org>

New submission from STINNER Victor :

sys._getframe(2) fails in the following example:

    code = "from enum import Enum; Enum('Animal', 'ANT BEE CAT DOG')"
    code = compile(code, "", "exec")
    global_ns = {}
    local_ls = {}
    exec(code, global_ns, local_ls)

Error with Python 3.7.2 (Fedora 29):

    Traceback (most recent call last):
      File "x.py", line 5, in <module>
        exec(code, global_ns, local_ls)
      File "", line 1, in <module>
      File "/usr/lib64/python3.7/enum.py", line 311, in __call__
        return cls._create_(value, names, module=module, qualname=qualname, type=type, start=start)
      File "/usr/lib64/python3.7/enum.py", line 429, in _create_
        module = sys._getframe(2).f_globals['__name__']
    KeyError: '__name__'

----------
components: Library (Lib)
messages: 333474
nosy: barry, eli.bendersky, ethan.furman, vstinner
priority: normal
severity: normal
status: open
title: enum.Enum error on sys._getframe(2)
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 11:01:26 2019
From: report at bugs.python.org (Opher Shachar)
Date: Fri, 11 Jan 2019 16:01:26 +0000
Subject: [New-bugs-announce] [issue35718] Cannot initialize the "force"
 command-option
Message-ID: <1547222486.48.0.811316963234.issue35718@roundup.psfhosted.org>
New submission from Opher Shachar :

When creating a custom Command (or sub-classing one), we cannot initialize the "force" option to 1 in "initialize_options()", because Command.__init__ resets it to None after the call to self.initialize_options().

----------
components: Distutils
messages: 333481
nosy: Opher Shachar, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: Cannot initialize the "force" command-option
type: behavior
versions: Python 2.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 13:17:24 2019
From: report at bugs.python.org (Serhiy Storchaka)
Date: Fri, 11 Jan 2019 18:17:24 +0000
Subject: [New-bugs-announce] [issue35719] Optimize multi-argument math
 functions
Message-ID: <1547230644.92.0.865651994946.issue35719@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

The proposed PR makes the multi-argument functions in the math module -- atan2(), copysign(), remainder() and hypot() -- use the fast call convention and inline arguments tuple unpacking.
Results:

    $ ./python -m timeit -s "from math import atan2" "atan2(1.0, 1.0)"
    Unpatched:  5000000 loops, best of 5: 79.5 nsec per loop
    Patched:    5000000 loops, best of 5: 66.1 nsec per loop

    $ ./python -m timeit -s "from math import copysign" "copysign(1.0, 1.0)"
    Unpatched:  5000000 loops, best of 5: 90.3 nsec per loop
    Patched:   10000000 loops, best of 5: 35.9 nsec per loop

    $ ./python -m timeit -s "from math import remainder" "remainder(1.0, 1.0)"
    Unpatched:  5000000 loops, best of 5: 69.5 nsec per loop
    Patched:    5000000 loops, best of 5: 44.5 nsec per loop

    $ ./python -m timeit -s "from math import hypot" "hypot(1.0, 1.0)"
    Unpatched:  5000000 loops, best of 5: 63.6 nsec per loop
    Patched:    5000000 loops, best of 5: 47.4 nsec per loop

----------
components: Extension Modules
messages: 333497
nosy: mark.dickinson, rhettinger, serhiy.storchaka, stutzbach, vstinner
priority: normal
severity: normal
status: open
title: Optimize multi-argument math functions
type: performance
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 14:14:02 2019
From: report at bugs.python.org (Lucas Cimon)
Date: Fri, 11 Jan 2019 19:14:02 +0000
Subject: [New-bugs-announce] [issue35720] Memory leak in
 Modules/main.c:pymain_parse_cmdline_impl when using the CLI flag
Message-ID: <1547234042.68.0.95184796304.issue35720@roundup.psfhosted.org>

New submission from Lucas Cimon :

Hi. I think I have found a minor memory leak in Modules/main.c:pymain_parse_cmdline_impl. When the loop in the pymain_read_conf function in this same file calls pymain_init_cmdline_argv a 2nd time, the pymain->command buffer of wchar_t is overridden and the previously allocated memory is never freed.
I haven't written any code test to reproduce this, but it can be tested easily with gdb:

```
gdb -- bin/python3 -c pass
start
b Modules/main.c:587
b pymain_clear_pymain
c
c
```

You'll see that PyMem_RawMalloc is called twice without pymain->command ever being freed in pymain_clear_pymain. I have a patch coming as a PR on GitHub. I'd be glad to have your feedback on this issue and my proposal for a fix. Regards.

----------
messages: 333499
nosy: Lucas Cimon
priority: normal
severity: normal
status: open
title: Memory leak in Modules/main.c:pymain_parse_cmdline_impl when using the CLI flag

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 14:24:51 2019
From: report at bugs.python.org (Niklas Fiekas)
Date: Fri, 11 Jan 2019 19:24:51 +0000
Subject: [New-bugs-announce] [issue35721] _UnixSubprocessTransport leaks
 socket pair if Popen fails
Message-ID: <1547234691.74.0.652164571217.issue35721@roundup.psfhosted.org>

New submission from Niklas Fiekas :

Output of the attached test case:

    non-existing indeed
    subprocess-exec-test.py:11: ResourceWarning: unclosed
      print("non-existing indeed")
    ResourceWarning: Enable tracemalloc to get the object allocation traceback
    subprocess-exec-test.py:11: ResourceWarning: unclosed
      print("non-existing indeed")
    ResourceWarning: Enable tracemalloc to get the object allocation traceback
    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.007s

    OK

----------
components: asyncio
files: subprocess-exec-test.py
messages: 333501
nosy: asvetlov, niklasf, yselivanov
priority: normal
severity: normal
status: open
title: _UnixSubprocessTransport leaks socket pair if Popen fails
type: resource usage
versions: Python 3.8
Added file: https://bugs.python.org/file48043/subprocess-exec-test.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 14:32:29 2019
From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=)
Date: Fri, 11 Jan 2019 19:32:29 +0000
Subject: [New-bugs-announce] [issue35722] disable_existing_loggers does not
 apply to the root logger
Message-ID: <1547235149.9.0.274775552253.issue35722@roundup.psfhosted.org>

New submission from Géry :

In the logging package, the parameter disable_existing_loggers used in logging.config.dictConfig and logging.config.fileConfig does not apply to the root logger. More precisely, its disabled attribute remains unchanged (while it is set to True for non-root loggers). So it is either a bug or the documentation should be updated.
Illustration:

    import logging.config

    assert logging.getLogger().disabled is False
    assert logging.getLogger("foo").disabled is False
    logging.config.dictConfig({"version": 1})
    assert logging.getLogger().disabled is False
    assert logging.getLogger("foo").disabled is True

----------
components: Library (Lib)
messages: 333502
nosy: eric.araujo, ezio.melotti, maggyero, mdk, vinay.sajip, willingc
priority: normal
pull_requests: 11121
severity: normal
status: open
title: disable_existing_loggers does not apply to the root logger
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 15:34:17 2019
From: report at bugs.python.org (Paul Ganssle)
Date: Fri, 11 Jan 2019 20:34:17 +0000
Subject: [New-bugs-announce] [issue35723] Add "time zone index" cache to
 datetime objects
Message-ID: <1547238857.17.0.663234077571.issue35723@roundup.psfhosted.org>

New submission from Paul Ganssle :

When examining the performance characteristics of pytz, I realized that pytz's eager calculation of tzname, offset and DST gives it an implicit cache that makes it much faster for repeated queries to .utcoffset(), .dst() and/or .tzname(), though the eager calculation means that it's slower to create an aware datetime that never calculates those functions - see my blog post "pytz: The Fastest Footgun in the West" [1].

I do not think that datetime should move to eager calculations (for one thing it would be a pretty major change), but I did come up with a modest change that can make it possible to implement a pythonic time zone provider without taking the performance hit: introducing a small, optional, set-once cache for "time zone index" onto the datetime object.

The idea takes advantage of the fact that essentially all time zones can be described by a very small number of (offset, tzname, dst) combinations plus a function to describe which one applies.
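Concretely, such a provider might look like the following sketch. ToyZone, its two-row table, and its month-based DST rule are all invented here for illustration; this is not an existing API:

```python
# A tzinfo that keeps a small table of (utcoffset, tzname, dst) triples
# and a tzidx(dt) selector choosing which row applies to a datetime
from datetime import datetime, timedelta, tzinfo

class ToyZone(tzinfo):
    _table = [
        (timedelta(hours=-5), 'EST', timedelta(0)),        # index 0: standard time
        (timedelta(hours=-4), 'EDT', timedelta(hours=1)),  # index 1: DST
    ]

    def tzidx(self, dt):
        # toy rule: April through October counts as DST
        return 1 if 4 <= dt.month <= 10 else 0

    def utcoffset(self, dt):
        return self._table[self.tzidx(dt)][0]

    def tzname(self, dt):
        return self._table[self.tzidx(dt)][1]

    def dst(self, dt):
        return self._table[self.tzidx(dt)][2]

z = ToyZone()
assert datetime(2019, 1, 1, tzinfo=z).utcoffset() == timedelta(hours=-5)
assert datetime(2019, 7, 1, tzinfo=z).tzname() == 'EDT'
```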
Time zone implementations can store these offsets in one or more indexable containers and implement a `tzidx(self, dt)` method returning the relevant index as a function of the datetime. We would provide a per-datetime cache by implementing a datetime.tzidx(self) method, which would be a memoized call to `self.tzinfo.tzidx()`, like this (ignoring error handling - a more detailed implementation can be found in the PoC PR):

    def tzidx(self):
        if self._tzidx != 0xff:
            return self._tzidx

        tzidx = self.tzinfo.tzidx(self)
        if isinstance(tzidx, int) and 0 <= tzidx < 255:
            self._tzidx = tzidx

        return tzidx

And then `utcoffset(self, dt)`, `dst(self, dt)` and `tzname(self, dt)` could be implemented in terms of `dt.tzidx()`!

This interface would be completely opt-in, and `tzinfo.tzidx` would have no default implementation.

Note that I have used 0xff as the signal value here - this is because I propose that the `tzidx` cache be limited to *only* integers in the interval [0, 255), with 255 reserved as the "not set" value. It is exceedingly unlikely that a given time zone will have more than 255 distinct values in its index, and even if it does, this implementation gracefully falls back to "every call is a cache miss".

In my tests, using a single unsigned char for `tzidx` does not increase the size of the `PyDateTime` struct, because it's using a byte that is currently part of the alignment padding anyway. This same trick was used to minimize the impact of `fold`, and I figure it's better to be conservative and enforce 0 <= tzidx < 255, since we can only do it so many times.

The last thing I'd like to note is the problem of mutability - datetime objects are supposed to be immutable, and this cache value actually mutates the datetime struct! While it's true that the in-memory value of the datetime changes, the fundamental concept of immutability is retained, since this does not affect any of the qualities of the datetime observable via the public API.
In fact (and I hope this is not too much of a digression), it is already unfortunately true that datetimes are more mutable than they would seem, because nothing prevents `tzinfo` objects from returning different values on subsequent calls to the timezone-lookup functions. What's worse, datetime's hash implementation takes its UTC offset into account! In practice it's rare for a tzinfo to ever return a different value for utcoffset(dt), but one prominent example where this could be a problem is with something like `dateutil.tz.tzlocal`, which is a local timezone object written in terms of the `time` module's time zone information - which can change if the local timezone information changes over the course of a program's run.

This change does not necessarily fix that problem or start enforcing immutability of `utcoffset`, but it does encourage a design that is *less susceptible* to these problems, since even if the return value of `tzinfo.tzidx()` changes over time for some reason, that function would only be called once per datetime.

I have an initial PoC for this implemented, and I've tested it out with an accompanying implementation of `dateutil.tz` that makes use of it; it does indeed make things much faster. I look forward to your comments.

1.
https://blog.ganssle.io/articles/2018/03/pytz-fastest-footgun.html

----------
components: Library (Lib)
messages: 333503
nosy: p-ganssle
priority: normal
severity: normal
status: open
title: Add "time zone index" cache to datetime objects
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 16:04:47 2019
From: report at bugs.python.org (Eric Snow)
Date: Fri, 11 Jan 2019 21:04:47 +0000
Subject: [New-bugs-announce] [issue35724] Check for main interpreter when
 checking for "main" thread (for signal handling)
Message-ID: <1547240687.08.0.242313450509.issue35724@roundup.psfhosted.org>

New submission from Eric Snow :

The code in Modules/signalsmodule.c (as well as a few other places in the code) has a concept of a "main" thread. It's the OS thread where Py_Initialize() was called (and likely the process's original thread). For various good reasons, we ensure that signal handling happens relative to that ("main") thread.

The problem is that we track the OS thread (by ID), which multiple interpreters can share. What we really want is to track the original PyThreadState. Otherwise signal-handling could happen (or handlers get added) in the wrong interpreter.

Options:

1. track the PyThreadState pointer instead of the OS thread ID
2. check that the current interpreter is the main one, in every place we check for the main thread

From what I can tell, the simpler option is #2.
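The "main thread" restriction the report builds on is observable from pure Python: installing a handler from any other thread is rejected with a ValueError. A minimal demonstration:

```python
# signal.signal() may only be called from the main thread; from a worker
# thread it raises ValueError instead of installing the handler
import signal
import threading

errors = []

def install_handler():
    try:
        signal.signal(signal.SIGINT, signal.default_int_handler)
    except ValueError as exc:
        errors.append(str(exc))

t = threading.Thread(target=install_handler)
t.start()
t.join()

assert errors and 'main thread' in errors[0]
```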
----------
components: Interpreter Core
messages: 333506
nosy: eric.snow
priority: normal
severity: normal
stage: needs patch
status: open
title: Check for main interpreter when checking for "main" thread (for signal handling)
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Fri Jan 11 23:26:24 2019
From: report at bugs.python.org (Yoong Hor Meng)
Date: Sat, 12 Jan 2019 04:26:24 +0000
Subject: [New-bugs-announce] [issue35725] Using for...in.. generator-iterator
Message-ID: <1547267184.5.0.152784183529.issue35725@roundup.psfhosted.org>

New submission from Yoong Hor Meng :

    def f():
        print('-- Start --')
        yield 1
        print('-- Middle --')
        yield 2
        print('-- Finished --')
        yield 3

    gen = f()
    for x in gen:
        print('Another things ...')
        next(gen)

The output:

    -- Start --
    Another things ...
    -- Middle --
    -- Finished --
    Another things ...

I noticed that the generator function will execute whenever it is in the for...in loop. Is it expected? I do not see it documented anywhere. Thanks.

----------
components: Interpreter Core
messages: 333510
nosy: yoonghm
priority: normal
severity: normal
status: open
title: Using for...in.. generator-iterator
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sat Jan 12 08:55:13 2019
From: report at bugs.python.org (David Ruggles)
Date: Sat, 12 Jan 2019 13:55:13 +0000
Subject: [New-bugs-announce] [issue35726] QueueHandler formating affects other
 handlers
Message-ID: <1547301313.72.0.148950825212.issue35726@roundup.psfhosted.org>

New submission from David Ruggles :

ISSUE: if you add a formatter to QueueHandler, any subsequently added handlers will get the formatting added to QueueHandler.

CAUSE: as best as I can tell, the code here:

https://github.com/python/cpython/blob/d586ccb04f79863c819b212ec5b9d873964078e4/Lib/logging/handlers.py#L1380

is modifying the record object, so when it gets passed to the next handler here:

https://github.com/python/cpython/blob/d586ccb04f79863c819b212ec5b9d873964078e4/Lib/logging/__init__.py#L1656

it includes the formatting applied by the QueueHandler's formatter.

I worked around this issue by moving my formatter from the QueueHandler to the QueueListener. I've attached a simple example of the issue.

NOTE: I marked this as Python 3.7 because that's what I'm using, but I looked at GitHub and the code is in master, so I assume this affects 3.8 too.
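The formatting carry-over can be observed without any listener at all. In the sketch below (record contents are invented), the formatted text ends up baked into the queued record's msg; on newer Pythons prepare() copies the record first, but the queued copy still carries the formatting:

```python
# QueueHandler.prepare() folds the formatter's output into the record
# that gets enqueued, which is what later handlers would then see
import logging
import logging.handlers
import queue

q = queue.Queue()
qh = logging.handlers.QueueHandler(q)
qh.setFormatter(logging.Formatter('PREFIX %(message)s'))

record = logging.LogRecord('demo', logging.INFO, __file__, 1, 'hello', None, None)
qh.emit(record)

queued = q.get_nowait()
assert queued.getMessage() == 'PREFIX hello'  # formatting is now part of msg
```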
----------
components: Library (Lib)
files: queuehandler_bug.py
messages: 333526
nosy: David Ruggles
priority: normal
severity: normal
status: open
title: QueueHandler formating affects other handlers
versions: Python 3.7
Added file: https://bugs.python.org/file48044/queuehandler_bug.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sat Jan 12 14:33:37 2019
From: report at bugs.python.org (Christopher Hunt)
Date: Sat, 12 Jan 2019 19:33:37 +0000
Subject: [New-bugs-announce] [issue35727] sys.exit() in a
 multiprocessing.Process does not align with Python behavior
Message-ID: <1547321617.14.0.566390882886.issue35727@roundup.psfhosted.org>

New submission from Christopher Hunt :

When a function is executed by a multiprocessing.Process and uses sys.exit, the actual exit code reported by multiprocessing is different than would be expected given the Python interpreter behavior and documentation. For example, given:

    from functools import partial
    from multiprocessing import get_context
    import sys

    def run(ctx, fn):
        p = ctx.Process(target=fn)
        p.start()
        p.join()
        return p.exitcode

    if __name__ == '__main__':
        ctx = get_context('fork')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

        ctx = get_context('spawn')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

        ctx = get_context('forkserver')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

when executed results in:

    $ python exit.py
    2
    1
    1
    2
    1
    1
    2
    1
    1

but when Python itself is executed we see different behavior:

    $ for arg in 2 None ''; do python -c "import sys; sys.exit($arg)"; echo $?; done
    2
    0
    0

The documentation states:

    > sys.exit([arg])
    > ...
    > The optional argument arg can be an integer giving the exit status
    > (defaulting to zero), or another type of object.
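The interpreter's convention quoted above can be checked directly by spawning Python subprocesses:

```python
# sys.exit(2) -> status 2; sys.exit(None) and bare sys.exit() -> status 0
import subprocess
import sys

def exit_code(arg: str) -> int:
    # run `python -c "import sys; sys.exit(<arg>)"` and return its status
    return subprocess.call([sys.executable, '-c', f'import sys; sys.exit({arg})'])

assert exit_code('2') == 2      # integer arg is used as-is
assert exit_code('None') == 0   # None defaults to success
assert exit_code('') == 0       # bare sys.exit() likewise
```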
The relevant line in multiprocessing (https://github.com/python/cpython/blame/1cffd0eed313011c0c2bb071c8affeb4a7ed05c7/Lib/multiprocessing/process.py#L307) seems to be from the original pyprocessing module itself, and I could not locate an active site that maintains the repository to see if there was any justification for the behavior.

----------
components: Library (Lib)
files: multiprocessing-exitcode-3.7.1.patch
keywords: patch
messages: 333531
nosy: chrahunt
priority: normal
severity: normal
status: open
title: sys.exit() in a multiprocessing.Process does not align with Python behavior
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7
Added file: https://bugs.python.org/file48045/multiprocessing-exitcode-3.7.1.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sat Jan 12 16:33:32 2019
From: report at bugs.python.org (Terry J. Reedy)
Date: Sat, 12 Jan 2019 21:33:32 +0000
Subject: [New-bugs-announce] [issue35728] Tkinter font nametofont requires
 default root
Message-ID: <1547328812.09.0.281453810047.issue35728@roundup.psfhosted.org>

New submission from Terry J. Reedy :

font.Font.__init__, font.families, and font.names have a 'root=None' argument and start with:

    if not root:
        root = tkinter._default_root

But font.nametofont does not, and so it calls Font without passing a root argument:

    return Font(name=name, exists=True)

Font fails if there is no default root. There cannot be one if, as recommended, one disables it.

    import tkinter as tk
    from tkinter import font

    tk.NoDefaultRoot()
    root = tk.Tk()
    font.nametofont('TkFixedFont')
    # AttributeError: module 'tkinter' has no attribute '_default_root'

Proposed fix: add a 'root=None' parameter to nametofont (at the end, to not break code) and 'root=root' to the Font call.
----------
components: Tkinter
messages: 333532
nosy: serhiy.storchaka, terry.reedy
priority: normal
severity: normal
stage: needs patch
status: open
title: Tkinter font nametofont requires default root
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sun Jan 13 03:17:55 2019
From: report at bugs.python.org (Igor Nowicki)
Date: Sun, 13 Jan 2019 08:17:55 +0000
Subject: [New-bugs-announce] [issue35729] XML.etree bug
Message-ID: <1547367475.69.0.228799716496.issue35729@roundup.psfhosted.org>

New submission from Igor Nowicki :

Consider we have a big XML file that we can't load all into memory. We then use the `iterparse` function from the xml.etree.ElementTree module to parse it element by element. The problem is, this doesn't run smoothly: it starts outputting wrong data after loading 16 kb (16*1024, found after looking into the source code). Having a large number of children, we get the information that we have just a few.

To reproduce the problem, I created this example program. It makes a simple XML file of progressively bigger sizes and tracks how many children of the main objects are counted. For small objects we get the actual number, 100 children. For bigger and bigger sizes we get smaller numbers, going down to just a few.

----------
components: Library (Lib)
files: find_records.py
messages: 333549
nosy: Igor Nowicki
priority: normal
severity: normal
status: open
title: XML.etree bug
type: performance
versions: Python 3.6
Added file: https://bugs.python.org/file48046/find_records.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sun Jan 13 12:22:41 2019
From: report at bugs.python.org (Terry J. Reedy)
Date: Sun, 13 Jan 2019 17:22:41 +0000
Subject: [New-bugs-announce] [issue35730] IDLE: Fix squeezer test_reload.
Message-ID: <1547400161.39.0.565587662986.issue35730@roundup.psfhosted.org>

New submission from Terry J. Reedy :

PR 10454 for #35196 added, among other things, more tests to test_squeezer.py. SqueezerTest.test_reload initially worked on Mac and personal Windows machines. It failed on Cheryl Sabella's personal Ubuntu machine because doubling the nominal font size did not necessarily exactly double the reported pixel size of '0'. This was easily fixed by testing only that the size increased:

    self.assertGreater(squeezer.zero_char_width, orig_zero_char_width)

It failed on CI Linux and Windows machines because the pixel size did not increase at all. This was fixed for the CI machines by directly assigning a new font tuple to text['font'] instead of involving the idleConf machinery. However, after merging, it failed with the same error that previously occurred on the CI machines: AssertionError: 6 not greater than 6. The initial fix will be to disable the assertion.

----------
assignee: terry.reedy
components: IDLE
messages: 333558
nosy: taleinat, terry.reedy
priority: normal
severity: normal
stage: needs patch
status: open
title: IDLE: Fix squeezer test_reload.
type: behavior
versions: Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Sun Jan 13 14:21:01 2019
From: report at bugs.python.org (Arlen)
Date: Sun, 13 Jan 2019 19:21:01 +0000
Subject: [New-bugs-announce] [issue35731] Modify to support multiple urls in
 webbrowser.open
Message-ID: <1547407261.77.0.914864319887.issue35731@roundup.psfhosted.org>

New submission from Arlen :

Note: new to Python, please provide any feedback.

Currently webbrowser.open supports one url, and there is no fn for url batching. I am proposing modifying webbrowser.open to support something along these lines:

```
def open(*urls, new=0, autoraise=True):
    ...
    browser = get(name)
    actions = [browser.open(url, new, autoraise) for url in urls]
    ...

# usage
open('http://example.com', 'http://example2.com')
```
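A user-level batching helper is already possible without changing the stdlib. A sketch with a stub browser object (FakeBrowser and open_all are invented names), so nothing actually opens while exercising it:

```python
# A stand-in with the same open() shape as webbrowser's controller
# objects, recording calls instead of launching anything
class FakeBrowser:
    def __init__(self):
        self.opened = []

    def open(self, url, new=0, autoraise=True):
        # mimics the controllers' convention of returning True on success
        self.opened.append(url)
        return True

def open_all(browser, *urls, new=0, autoraise=True):
    # the proposed behavior, expressed as a wrapper over any controller
    return [browser.open(u, new=new, autoraise=autoraise) for u in urls]

b = FakeBrowser()
assert open_all(b, 'http://example.com', 'http://example2.com') == [True, True]
assert b.opened == ['http://example.com', 'http://example2.com']
```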
# usage open('http://example.com', 'http://example2.com') ``` ---------- components: Library (Lib) messages: 333563 nosy: arlenyu priority: normal severity: normal status: open title: Modify to support multiple urls in webbrowser.open type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 13 18:18:01 2019 From: report at bugs.python.org (Antoine Wecxsteen) Date: Sun, 13 Jan 2019 23:18:01 +0000 Subject: [New-bugs-announce] [issue35732] Typo in library/warnings documentation Message-ID: <1547421481.41.0.571803871053.issue35732@roundup.psfhosted.org> New submission from Antoine Wecxsteen : Hello, I believe there's a mistake in the documentation of library/warnings. https://docs.python.org/3.8/library/warnings.html#warnings.warn "This function raises an exception if the particular warning issued is changed into an error by the warnings filter see above." I think "see above" should be enclosed in brackets (or maybe completely removed as there is already a "(see above)" in the same text block). Regards. 
---------- assignee: docs at python components: Documentation messages: 333574 nosy: awecx, docs at python, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal status: open title: Typo in library/warnings documentation versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 13 21:31:19 2019 From: report at bugs.python.org (Anthony Sottile) Date: Mon, 14 Jan 2019 02:31:19 +0000 Subject: [New-bugs-announce] [issue35733] isinstance(ast.Constant(value=True), ast.Num) should be False Message-ID: <1547433079.8.0.602510928085.issue35733@roundup.psfhosted.org> New submission from Anthony Sottile : Noticing this in pyflakes https://github.com/PyCQA/pyflakes/pull/408 ---------- messages: 333579 nosy: Anthony Sottile priority: normal severity: normal status: open title: isinstance(ast.Constant(value=True), ast.Num) should be False versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 13 22:51:18 2019 From: report at bugs.python.org (ulin) Date: Mon, 14 Jan 2019 03:51:18 +0000 Subject: [New-bugs-announce] [issue35734] ipaddress's _BaseV4._is_valid_netmask fails to detect invalid netmask like 255.254.128.0 Message-ID: <1547437878.33.0.689203196624.issue35734@roundup.psfhosted.org> New submission from ulin : Netmasks like 255.0.0.0 and 255.128.0.0 are valid, but 255.254.128.0 is not; ipaddress._BaseV4._is_valid_netmask nevertheless fails to reject it. Tested in Python 3.6.7; since the code is unchanged, all versions after 3.6.7 are also affected.
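For illustration, a minimal standalone version of the contiguity check the reporter expects (a sketch, not the stdlib implementation; `is_contiguous_netmask` is a hypothetical helper name):

```python
def is_contiguous_netmask(mask: str) -> bool:
    # A netmask is valid only if its binary form is a run of 1-bits
    # followed by a run of 0-bits (e.g. 255.128.0.0 = 11111111 1000...).
    try:
        octets = [int(p) for p in mask.split(".")]
    except ValueError:
        return False
    if len(octets) != 4 or any(not 0 <= o <= 255 for o in octets):
        return False
    v = 0
    for o in octets:
        v = (v << 8) | o
    inv = ~v & 0xFFFFFFFF
    # A contiguous mask inverts to 2**k - 1, so inv & (inv + 1) must be 0.
    return inv & (inv + 1) == 0

print(is_contiguous_netmask("255.0.0.0"))      # True
print(is_contiguous_netmask("255.128.0.0"))    # True
print(is_contiguous_netmask("255.254.128.0"))  # False
```

255.254.128.0 fails because its 1-bits are not contiguous (a 0-bit is followed by another 1-bit), which is exactly the case the reporter says _is_valid_netmask accepts.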
---------- components: Library (Lib) files: test_ipaddress.py messages: 333581 nosy: ulindog priority: normal severity: normal status: open title: ipaddress's _BaseV4._is_valid_netmask fails to detect invalid netmask like 255.254.128.0 type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48047/test_ipaddress.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 14 02:25:42 2019 From: report at bugs.python.org (Michael Felt) Date: Mon, 14 Jan 2019 07:25:42 +0000 Subject: [New-bugs-announce] [issue35735] Current "make test" status for AIX Message-ID: <1547450742.92.0.354703288853.issue35735@roundup.psfhosted.org> New submission from Michael Felt : Hi all, as we get closer to having the current tests all patched I want to have a place to post new "failures" - since the BOT process is unable to report regressions before all tests are passing for a time. Initially, the tests run normally, and report two unexpected failures. However, the second pass does not complete - it is giving a segmentation error (repeated twice, starting third run shortly). i.e., the tests test_eintr and test_importlib have PRs that are waiting to be merged; the failures of test_os and test_xml_etree_c are new.
Tested: heads/master-dirty:5bb146aaea ./python ../git/python3-3.8/Tools/scripts/run_tests.py /data/prj/python/python3-3.8/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui == CPython 3.8.0a0 (heads/master-dirty:5bb146aaea, Jan 13 2019, 21:30:27) [C] == AIX-1-00C291F54C00-powerpc-32bit big-endian == cwd: /data/prj/python/python3-3.8/build/test_python_9109548 == CPU count: 8 == encodings: locale=ISO8859-1, FS=iso8859-1 Using random seed 9105159 Run tests in parallel using 10 child processes 0:00:01 [ 1/418] test_nis passed 0:00:01 [ 2/418] test_zipfile64 skipped (resource denied) test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run 0:00:01 [ 3/418] test_stringprep passed ... 0:10:23 [418/418/4] test_tools passed (4 min 31 sec) == Tests result: FAILURE == 390 tests OK. 4 tests failed: test_eintr test_importlib test_os test_xml_etree_c 24 tests skipped: test_dbm_gnu test_devpoll test_epoll test_gdb test_idle test_kqueue test_msilib test_ossaudiodev test_readline test_spwd test_sqlite test_startfile test_tcl test_tix test_tk test_ttk_guionly test_ttk_textonly test_turtle test_unicode_file test_unicode_file_functions test_winconsoleio test_winreg test_winsound test_zipfile64 Re-running failed tests in verbose mode ... Re-running test 'test_xml_etree_c' in verbose mode test_bpo_31728 (test.test_xml_etree_c.MiscTests) ... ok test_del_attribute (test.test_xml_etree_c.MiscTests) ... ok test_iterparse_leaks (test.test_xml_etree_c.MiscTests) ... ok test_length_overflow (test.test_xml_etree_c.MiscTests) ... skipped 'not enough memory: 2.0G minimum needed' test_parser_ref_cycle (test.test_xml_etree_c.MiscTests) ... ok test_setstate_leaks (test.test_xml_etree_c.MiscTests) ... ok test_trashcan (test.test_xml_etree_c.MiscTests) ... ok test_xmlpullparser_leaks (test.test_xml_etree_c.MiscTests) ... ok test_alias_working (test.test_xml_etree_c.TestAliasWorking) ... 
ok test_correct_import_cET (test.test_xml_etree_c.TestAcceleratorImported) ... ok test_correct_import_cET_alias (test.test_xml_etree_c.TestAcceleratorImported) ... ok test_parser_comes_from_C (test.test_xml_etree_c.TestAcceleratorImported) ... ok test_element (test.test_xml_etree_c.SizeofTest) ... ok test_element_with_attrib (test.test_xml_etree_c.SizeofTest) ... ok test_element_with_children (test.test_xml_etree_c.SizeofTest) ... ok ---------------------------------------------------------------------- Ran 15 tests in 0.871s OK (skipped=1) test_all (test.test_xml_etree.ModuleTest) ... ok test_sanity (test.test_xml_etree.ModuleTest) ... ok test_delslice (test.test_xml_etree.ElementSlicingTest) ... ok test_getslice_negative_steps (test.test_xml_etree.ElementSlicingTest) ... ok test_getslice_range (test.test_xml_etree.ElementSlicingTest) ... ok test_getslice_single_index (test.test_xml_etree.ElementSlicingTest) ... ok test_getslice_steps (test.test_xml_etree.ElementSlicingTest) ... ok test_setslice_negative_steps (test.test_xml_etree.ElementSlicingTest) ... ok test_setslice_range (test.test_xml_etree.ElementSlicingTest) ... ok test_setslice_single_index (test.test_xml_etree.ElementSlicingTest) ... ok test_setslice_steps (test.test_xml_etree.ElementSlicingTest) ... ok test_augmentation_type_errors (test.test_xml_etree.BasicElementTest) ... 
Fatal Python error: Segmentation fault Current thread 0x00000001 (most recent call first): File "/data/prj/python/git/python3-3.8/Lib/unittest/case.py", line 197 in handle File "/data/prj/python/git/python3-3.8/Lib/unittest/case.py", line 782 in assertRaises File "/data/prj/python/git/python3-3.8/Lib/test/test_xml_etree.py", line 1811 in test_augmentation_type_errors File "/data/prj/python/git/python3-3.8/Lib/unittest/case.py", line 642 in run File "/data/prj/python/git/python3-3.8/Lib/unittest/case.py", line 702 in __call__ File "/data/prj/python/git/python3-3.8/Lib/unittest/suite.py", line 122 in run File "/data/prj/python/git/python3-3.8/Lib/unittest/suite.py", line 84 in __call__ File "/data/prj/python/git/python3-3.8/Lib/unittest/suite.py", line 122 in run File "/data/prj/python/git/python3-3.8/Lib/unittest/suite.py", line 84 in __call__ File "/data/prj/python/git/python3-3.8/Lib/unittest/runner.py", line 176 in run File "/data/prj/python/git/python3-3.8/Lib/test/support/__init__.py", line 1935 in _run_suite File "/data/prj/python/git/python3-3.8/Lib/test/support/__init__.py", line 2031 in run_unittest File "/data/prj/python/git/python3-3.8/Lib/test/test_xml_etree.py", line 3213 in test_main File "/data/prj/python/git/python3-3.8/Lib/test/test_xml_etree_c.py", line 222 in test_main File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/runtest.py", line 182 in runtest_inner File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/runtest.py", line 137 in runtest File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/main.py", line 304 in rerun_failed_tests File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/main.py", line 619 in _main File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/main.py", line 582 in main File "/data/prj/python/git/python3-3.8/Lib/test/libregrtest/main.py", line 636 in main File "/data/prj/python/git/python3-3.8/Lib/test/__main__.py", line 2 in File "/data/prj/python/git/python3-3.8/Lib/runpy.py", line 85 in 
_run_code File "/data/prj/python/git/python3-3.8/Lib/runpy.py", line 192 in _run_module_as_main make: 1254-059 The signal code from the last command is 11. ---------- messages: 333588 nosy: Michael.Felt priority: normal severity: normal status: open title: Current "make test" status for AIX versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 14 08:28:08 2019 From: report at bugs.python.org (=?utf-8?q?Michael_Kr=C3=B6tlinger?=) Date: Mon, 14 Jan 2019 13:28:08 +0000 Subject: [New-bugs-announce] [issue35736] Missing component in table after getElementsByTagName("nn") Message-ID: <1547472488.22.0.593179518504.issue35736@roundup.psfhosted.org> New submission from Michael Krötlinger : After operations = xmltree.getElementsByTagName("operation") the table does not contain operations antragstypenErmitteln and mammographieIndikationenErmitteln ---------- files: EbsService.wsdl messages: 333621 nosy: MiKr41 priority: normal severity: normal status: open title: Missing component in table after getElementsByTagName("nn") type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48048/EbsService.wsdl _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 14 10:52:24 2019 From: report at bugs.python.org (Brett R) Date: Mon, 14 Jan 2019 15:52:24 +0000 Subject: [New-bugs-announce] [issue35737] crypt AuthenticationError introduced with new Linux kernel Message-ID: <1547481144.26.0.447593759473.issue35737@roundup.psfhosted.org> New submission from Brett R : We are seeing a crash apparently in crypt.py when invoked via SaltStack and have narrowed it down to some change in the Linux kernel introduced by this security update: https://access.redhat.com/errata/RHSA-2018:3083 Linux kernel 3.10.0-862.14.4.el7.x86_64 works fine Linux kernel 3.10.0-957.el7.x86_64 and later show this error:
2018-11-28T16:35:13.302740+00:00 ip-10-128-152-49 cloud-init: [INFO ] Executing state cmd.script for [setup-secondary-ips] 2018-11-28T16:35:13.494523+00:00 ip-10-128-152-49 cloud-init: [ERROR ] An exception occurred in this state: Traceback (most recent call last): 2018-11-28T16:35:13.497189+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/state.py", line 1889, in call 2018-11-28T16:35:13.500053+00:00 ip-10-128-152-49 cloud-init: **cdata['kwargs']) 2018-11-28T16:35:13.502780+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1839, in wrapper 2018-11-28T16:35:13.505822+00:00 ip-10-128-152-49 cloud-init: return f(*args, **kwargs) 2018-11-28T16:35:13.508537+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/states/cmd.py", line 1118, in script 2018-11-28T16:35:13.511297+00:00 ip-10-128-152-49 cloud-init: cmd_all = __salt__['cmd.script'](source, python_shell=True, **cmd_kwargs) 2018-11-28T16:35:13.514308+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/modules/cmdmod.py", line 2114, in script 2018-11-28T16:35:13.517107+00:00 ip-10-128-152-49 cloud-init: fn_ = __salt__['cp.cache_file'](source, saltenv) 2018-11-28T16:35:13.520171+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/modules/cp.py", line 474, in cache_file 2018-11-28T16:35:13.523112+00:00 ip-10-128-152-49 cloud-init: result = _client().cache_file(path, saltenv) 2018-11-28T16:35:13.526199+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/fileclient.py", line 188, in cache_file 2018-11-28T16:35:13.529055+00:00 ip-10-128-152-49 cloud-init: return self.get_url(path, '', True, saltenv, cachedir=cachedir) 2018-11-28T16:35:13.532046+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/fileclient.py", line 494, in get_url 2018-11-28T16:35:13.535280+00:00 ip-10-128-152-49 cloud-init: result = 
self.get_file(url, dest, makedirs, saltenv, cachedir=cachedir) 2018-11-28T16:35:13.538335+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/fileclient.py", line 1145, in get_file 2018-11-28T16:35:13.541621+00:00 ip-10-128-152-49 cloud-init: data = self.channel.send(load, raw=True) 2018-11-28T16:35:13.544750+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/utils/async.py", line 65, in wrap 2018-11-28T16:35:13.548071+00:00 ip-10-128-152-49 cloud-init: ret = self._block_future(ret) 2018-11-28T16:35:13.551304+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/utils/async.py", line 75, in _block_future 2018-11-28T16:35:13.554546+00:00 ip-10-128-152-49 cloud-init: return future.result() 2018-11-28T16:35:13.557950+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 214, in result 2018-11-28T16:35:13.561205+00:00 ip-10-128-152-49 cloud-init: raise_exc_info(self._exc_info) 2018-11-28T16:35:13.564478+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 876, in run 2018-11-28T16:35:13.568139+00:00 ip-10-128-152-49 cloud-init: yielded = self.gen.throw(*exc_info) 2018-11-28T16:35:13.571683+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/transport/zeromq.py", line 312, in send 2018-11-28T16:35:13.575103+00:00 ip-10-128-152-49 cloud-init: ret = yield self._crypted_transfer(load, tries=tries, timeout=timeout, raw=raw) 2018-11-28T16:35:13.578736+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 870, in run 2018-11-28T16:35:13.582255+00:00 ip-10-128-152-49 cloud-init: value = future.result() 2018-11-28T16:35:13.585869+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 214, in result 2018-11-28T16:35:13.589636+00:00 ip-10-128-152-49 cloud-init: 
raise_exc_info(self._exc_info) 2018-11-28T16:35:13.593537+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 876, in run 2018-11-28T16:35:13.597250+00:00 ip-10-128-152-49 cloud-init: yielded = self.gen.throw(*exc_info) 2018-11-28T16:35:13.604695+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/transport/zeromq.py", line 284, in _crypted_transfer 2018-11-28T16:35:13.608535+00:00 ip-10-128-152-49 cloud-init: ret = yield _do_transfer() 2018-11-28T16:35:13.612022+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 870, in run 2018-11-28T16:35:13.615530+00:00 ip-10-128-152-49 cloud-init: value = future.result() 2018-11-28T16:35:13.619175+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 214, in result 2018-11-28T16:35:13.622702+00:00 ip-10-128-152-49 cloud-init: raise_exc_info(self._exc_info) 2018-11-28T16:35:13.626336+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 879, in run 2018-11-28T16:35:13.629839+00:00 ip-10-128-152-49 cloud-init: yielded = self.gen.send(value) 2018-11-28T16:35:13.633372+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/transport/zeromq.py", line 271, in _do_transfer 2018-11-28T16:35:13.636931+00:00 ip-10-128-152-49 cloud-init: data = self.auth.crypticle.loads(data, raw) 2018-11-28T16:35:13.640794+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/crypt.py", line 1316, in loads 2018-11-28T16:35:13.644318+00:00 ip-10-128-152-49 cloud-init: data = self.decrypt(data) 2018-11-28T16:35:13.647997+00:00 ip-10-128-152-49 cloud-init: File "/usr/lib/python2.7/site-packages/salt/crypt.py", line 1296, in decrypt 2018-11-28T16:35:13.651578+00:00 ip-10-128-152-49 cloud-init: raise AuthenticationError('message authentication failed') 2018-11-28T16:35:13.655231+00:00 
ip-10-128-152-49 cloud-init: AuthenticationError: message authentication failed 2018-11-28T16:35:13.658805+00:00 ip-10-128-152-49 cloud-init: [INFO ] Completed state [setup-secondary-ips] at time 16:35:13.491356 duration_in_ms=196.894 This is very reproducible and we originally reported it here: https://github.com/saltstack/salt/issues/50673 but it does not appear to be related to SaltStack so I am trying this as the next place to file. Please advise what additional info may be needed. ---------- messages: 333628 nosy: icycle priority: normal severity: normal status: open title: crypt AuthenticationError introduced with new Linux kernel type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 14 13:56:00 2019 From: report at bugs.python.org (Jayanth Raman) Date: Mon, 14 Jan 2019 18:56:00 +0000 Subject: [New-bugs-announce] [issue35738] Update timeit documentation to reflect default repeat of three Message-ID: <1547492160.57.0.532136359252.issue35738@roundup.psfhosted.org> New submission from Jayanth Raman : In the Examples section of the timeit documentation, repeat() returns a list of size three. But the default is now five and the documentation should reflect that. Thanks. 
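The change the reporter refers to can be observed directly: since Python 3.7, timeit.repeat() (and Timer.repeat()) default to repeat=5, while the Examples section still shows a list of three results.

```python
import timeit

# With the default repeat, repeat() returns one timing per repetition.
results = timeit.repeat('pass', number=1000)
print(len(results))  # 5 on Python 3.7+; the old default was 3
```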
---------- assignee: docs at python components: Documentation messages: 333635 nosy: docs at python, jayanth priority: normal severity: normal status: open title: Update timeit documentation to reflect default repeat of three type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 00:26:47 2019 From: report at bugs.python.org (Jorge Ramos) Date: Tue, 15 Jan 2019 05:26:47 +0000 Subject: [New-bugs-announce] [issue35739] Enable verbose of tests during PGO build on amd64 platforms Message-ID: <1547530007.74.0.355457877892.issue35739@roundup.psfhosted.org> New submission from Jorge Ramos : It would be interesting to allow regrtests to output to command line during testing with PGO enabled. The default behavior is to not display output unless some fatal error occurs ("quiet" mode). Making this issue to create a pull request. ---------- components: Build, Windows messages: 333648 nosy: neyuru, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable verbose of tests during PGO build on amd64 platforms type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 02:41:18 2019 From: report at bugs.python.org (ossdev) Date: Tue, 15 Jan 2019 07:41:18 +0000 Subject: [New-bugs-announce] [issue35740] ssl version 1.1.1 need to be there in cpython-source-deps for windows ARM64 Message-ID: <1547538078.35.0.434984866585.issue35740@roundup.psfhosted.org> Change by ossdev : ---------- assignee: christian.heimes components: SSL nosy: christian.heimes, ossdev07 priority: normal severity: normal status: open title: ssl version 1.1.1 need to be there in cpython-source-deps for windows ARM64 type: enhancement versions: Python 3.7 _______________________________________ Python tracker 
_______________________________________ From report at bugs.python.org Tue Jan 15 04:32:07 2019 From: report at bugs.python.org (jianxu3) Date: Tue, 15 Jan 2019 09:32:07 +0000 Subject: [New-bugs-announce] [issue35741] unittest.skipUnless(time._STRUCT_TM_ITEMS == 11, "needs tm_zone support") doesn't work Message-ID: <1547544727.75.0.967204925198.issue35741@roundup.psfhosted.org> New submission from jianxu3 : Whether or not HAVE_STRUCT_TM_TM_ZONE is defined, _STRUCT_TM_ITEMS is always equal to 11. It is initialized in PyInit_time(void):

PyModule_AddIntConstant(m, "_STRUCT_TM_ITEMS", 11);

If I modify it like this:

#ifdef HAVE_STRUCT_TM_TM_ZONE
    PyModule_AddIntConstant(m, "_STRUCT_TM_ITEMS", 11);
#else
    PyModule_AddIntConstant(m, "_STRUCT_TM_ITEMS", 9);
#endif

then test_fields in test_structseq.py will fail:

def test_fields(self):
    self.assertEqual(t.n_fields, time._STRUCT_TM_ITEMS)

What I hope is that if HAVE_STRUCT_TM_TM_ZONE is not defined, test_localtime_timezone will be skipped. ---------- components: Tests messages: 333654 nosy: jianxu3 priority: normal severity: normal status: open title: unittest.skipUnless(time._STRUCT_TM_ITEMS == 11, "needs tm_zone support") doesn't work type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 04:43:57 2019 From: report at bugs.python.org (Chih-Hsuan Yen) Date: Tue, 15 Jan 2019 09:43:57 +0000 Subject: [New-bugs-announce] [issue35742] test_builtin fails after merging the fix for bpo-34756 Message-ID: <1547545437.79.0.55588181748.issue35742@roundup.psfhosted.org> New submission from Chih-Hsuan Yen : On git-master (32ebd8508d4807a7c85d2ed8e9c3b44ecd6de591) of CPython, 3 tests of test_builtin fail: ====================================================================== ERROR: test_envar_unimportable (test.test_builtin.TestBreakpoint) (envar='.') ----------------------------------------------------------------------
Traceback (most recent call last): File "/home/yen/Projects/cpython/Lib/test/test_builtin.py", line 1618, in test_envar_unimportable breakpoint() ValueError: Empty module name ====================================================================== ERROR: test_envar_unimportable (test.test_builtin.TestBreakpoint) (envar='.foo') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/yen/Projects/cpython/Lib/test/test_builtin.py", line 1618, in test_envar_unimportable breakpoint() ValueError: Empty module name ====================================================================== ERROR: test_envar_unimportable (test.test_builtin.TestBreakpoint) (envar='.int') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/yen/Projects/cpython/Lib/test/test_builtin.py", line 1618, in test_envar_unimportable breakpoint() ValueError: Empty module name ---------------------------------------------------------------------- If I revert 6fe9c446f8302553952f63fc6d96be4dfa48ceba, tests pass. This commit is from issue34756, so I add the author of that patch to the nosy list. 
Environment: Arch Linux x86_64 Steps to reproduce: $ ./configure $ make $ ./python -m test -v test_builtin ---------- components: Tests messages: 333655 nosy: serhiy.storchaka, yan12125 priority: normal severity: normal status: open title: test_builtin fails after merging the fix for bpo-34756 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 05:18:35 2019 From: report at bugs.python.org (Ori Avtalion) Date: Tue, 15 Jan 2019 10:18:35 +0000 Subject: [New-bugs-announce] [issue35743] Broken "Exception ignored in:" message on OSError's Message-ID: <1547547515.11.0.919853628992.issue35743@roundup.psfhosted.org> New submission from Ori Avtalion : When an OSError exception is raised in __del__, both Python 2 and 3 print the "Exception ignored" message, but Python 3 also prints a traceback. This is similar to issue 22836, which dealt with errors in __repr__ while inside __del__.
Test script:

import os

class Obj(object):
    def __init__(self):
        self.f = open('/dev/null')
        os.close(self.f.fileno())

    def __del__(self):
        self.f.close()

f = Obj()
del f

Output with Python 3.7.2:

Exception ignored in:
Traceback (most recent call last):
  File "/tmp/test.py", line 9, in __del__
    self.f.close()
OSError: [Errno 9] Bad file descriptor

---------- components: Interpreter Core messages: 333661 nosy: salty-horse priority: normal severity: normal status: open title: Broken "Exception ignored in:" message on OSError's versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 08:12:12 2019 From: report at bugs.python.org (Jay) Date: Tue, 15 Jan 2019 13:12:12 +0000 Subject: [New-bugs-announce] [issue35744] Problem in the documentation of numpy.random.randint in python 2.7 Message-ID: <1547557932.37.0.905069096511.issue35744@roundup.psfhosted.org> New submission from Jay : The official documentation for Python 2.7 mentions that numpy.random.randint(a,b) will return a random integer N such that a<=N<=b. But I have run the code and found that it never returns a value equal to b. So, what I did was run numpy.random.randint(0,1) 50 million times and finally print the sum. The output was 0. I don't know if this is a documentation or an implementation issue, but it needs to be looked at. I am attaching the code that I ran.
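An aside, not part of the report: NumPy itself documents numpy.random.randint(low, high) as half-open (high is excluded), matching the stdlib's random.randrange rather than random.randint, so the observed behaviour matches NumPy's own docs. A stdlib-only sketch of the distinction:

```python
import random

random.seed(0)
# randrange(0, 1) is half-open, like numpy.random.randint: only 0 can occur.
assert all(random.randrange(0, 1) == 0 for _ in range(10_000))
# random.randint(0, 1) is inclusive: both endpoints occur.
print(sorted({random.randint(0, 1) for _ in range(10_000)}))  # [0, 1]
```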
---------- assignee: docs at python components: Documentation files: sample.py messages: 333701 nosy: Jay, docs at python priority: normal severity: normal status: open title: Problem in the documentation of numpy.random.randint in python 2.7 type: behavior versions: Python 2.7 Added file: https://bugs.python.org/file48051/sample.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 10:33:11 2019 From: report at bugs.python.org (Windson Yang) Date: Tue, 15 Jan 2019 15:33:11 +0000 Subject: [New-bugs-announce] [issue35745] Add import statement in dataclass code snippet Message-ID: <1547566391.87.0.956827047672.issue35745@roundup.psfhosted.org> New submission from Windson Yang : Most of the examples in https://docs.python.org/3/library/dataclasses.html omit code like:

from dataclasses import dataclass, field
from typing import List

I think we should add these statements to the code snippets. ---------- assignee: docs at python components: Documentation messages: 333707 nosy: Windson Yang, docs at python priority: normal severity: normal status: open title: Add import statement in dataclass code snippet type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 11:24:28 2019 From: report at bugs.python.org (Cisco Talos) Date: Tue, 15 Jan 2019 16:24:28 +0000 Subject: [New-bugs-announce] [issue35746] TALOS-2018-0758 Denial of Service Message-ID: <1547569468.87.0.514647021744.issue35746@roundup.psfhosted.org> New submission from Cisco Talos : An exploitable denial-of-service vulnerability exists in the X509 certificate parser of Python.org Python 2.7.11 / 3.6.6. A specially crafted X509 certificate can cause a NULL pointer dereference, resulting in a denial of service.
An attacker can initiate or accept TLS connections using crafted certificates to trigger this vulnerability. ---------- files: TALOS-2019-0758.txt messages: 333709 nosy: Talos priority: normal severity: normal status: open title: TALOS-2018-0758 Denial of Service type: security versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48052/TALOS-2019-0758.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 15 12:54:33 2019 From: report at bugs.python.org (ido k) Date: Tue, 15 Jan 2019 17:54:33 +0000 Subject: [New-bugs-announce] [issue35747] Python threading event wait influenced by date change Message-ID: <1547574873.05.0.783688735174.issue35747@roundup.psfhosted.org> New submission from ido k : Happens on Ubuntu. Opening two threads - one thread alternates the system date, the second waits for 60 seconds - then joining both threads. The execution should take at least 60 seconds, yet it takes less than 15 seconds. Any workaround?
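One common workaround (a sketch, not from the report) is to enforce the timeout against time.monotonic(), which system date changes cannot move; `wait_for` below is a hypothetical helper:

```python
import threading
import time

def wait_for(event: threading.Event, timeout: float) -> bool:
    """Wait up to `timeout` seconds, measured on the monotonic clock."""
    deadline = time.monotonic() + timeout
    while not event.is_set():
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        # Wait in short slices so a wall-clock jump can shorten the wait
        # by at most one slice before the monotonic deadline is re-checked.
        event.wait(min(remaining, 0.5))
    return True

ev = threading.Event()
print(wait_for(ev, 0.2))  # False: the event was never set
ev.set()
print(wait_for(ev, 0.2))  # True
```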
---------- components: Library (Lib) files: wrong_wait_behaviour.py messages: 333718 nosy: ido k priority: normal severity: normal status: open title: Python threading event wait influenced by date change type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48056/wrong_wait_behaviour.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 02:45:25 2019 From: report at bugs.python.org (Neeraj Sonaniya) Date: Wed, 16 Jan 2019 07:45:25 +0000 Subject: [New-bugs-announce] [issue35748] urlparse library detecting wrong hostname leads to open redirect vulnerability Message-ID: <1547624725.28.0.16631607093.issue35748@roundup.psfhosted.org> New submission from Neeraj Sonaniya : Summary: It has been identified that `urlparse` in the `urllib.parse` module detects the wrong hostname, which could lead to a security issue known as an open redirect vulnerability. Steps to reproduce the issue: The following code reproduces it:

```
from urllib.parse import urlparse
x = 'http://www.google.com\@xxx.com'
y = urlparse(x)
print(y.hostname)
```

Output: xxx.com

The hostname of the above URL as actually rendered by browsers is 'https://www.google.com'. Tested in the following browsers (hostname detected as: https://www.google.com):

```
1. Chromium - Version 72.0.3626.7 - Developer Build
2. Firefox - 60.4.0esr (64-bit)
3. Internet Explorer - 11.0.9600.17843
4.
Safari - Version 12.0.2 (14606.3.4) ``` ---------- components: Library (Lib) files: Screenshot from 2019-01-16 12-47-22.png messages: 333750 nosy: nsonaniya2010, orsenthil priority: normal severity: normal status: open title: urlparse library detecting wrong hostname leads to open redirect vulnerability type: security versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48058/Screenshot from 2019-01-16 12-47-22.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 03:17:26 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Wed, 16 Jan 2019 08:17:26 +0000 Subject: [New-bugs-announce] [issue35749] Ignore exception if event loop wakeup pipe is full Message-ID: <1547626646.01.0.761930434303.issue35749@roundup.psfhosted.org> New submission from Andrew Svetlov : Asyncio uses a pipe to wakeup event loop in cases of 1. Signal handlers (set_wakeup_fd) 2. Calling asyncio code from another thread In both cases, it sends b'\0' to the pipe to wake up a loop. If the pipe is full OSError is raised. asyncio logs these exceptions in debug mode. The logging can be omitted because if the pipe is full the loop wakes up and drains the pipe anyway. 
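A minimal sketch of the self-pipe wakeup pattern described above (not asyncio's actual code; a POSIX platform is assumed): once the pipe is full, a wakeup byte is already queued, so the failed write is safe to swallow:

```python
import os

r, w = os.pipe()
os.set_blocking(w, False)

def wake() -> None:
    try:
        os.write(w, b"\0")
    except BlockingIOError:
        # Pipe is full: a wakeup byte is already pending, and the loop
        # will wake up and drain the pipe anyway.
        pass

# Overfill the pipe buffer; without the except clause this would raise.
for _ in range(200_000):
    wake()
print(os.read(r, 1))  # b'\x00'
```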
---------- messages: 333751 nosy: asvetlov priority: normal severity: normal status: open title: Ignore exception if event loop wakeup pipe is full _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 03:18:22 2019 From: report at bugs.python.org (Anil kunduru) Date: Wed, 16 Jan 2019 08:18:22 +0000 Subject: [New-bugs-announce] [issue35750] process finished with exit code -1073740940 (0xc0000374) Message-ID: <1547626702.61.0.971998033647.issue35750@roundup.psfhosted.org> New submission from Anil kunduru : The function contains a long loop; each iteration gets data from a different website using request.get and does some analysis, and the loop takes about one minute per 10 iterations. After completing some 100 to 400 iterations it exits with "process finished with exit code -1073740940 (0xc0000374)". I do not know what this unexpected behavior is! ---------- messages: 333752 nosy: kunduruanil priority: normal severity: normal status: open title: process finished with exit code -1073740940 (0xc0000374) type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 05:34:07 2019 From: report at bugs.python.org (=?utf-8?b?0KLQsNGA0LDRgdCwINCS0L7QudC90LDRgNC+0LLRgdGM0LrQuNC5?=) Date: Wed, 16 Jan 2019 10:34:07 +0000 Subject: [New-bugs-announce] [issue35751] traceback.clear_frames manages to deadlock a background task Message-ID: <1547634847.81.0.0050263541371.issue35751@roundup.psfhosted.org> New submission from Тарас Войнаровський : My use case: I have a background task, say called "coordination". In that task, I want to catch any errors and push those to the user waiting in the main task, and only continue running the background coroutine after the user manually resolves the exception.
Issue: When testing the behaviour with ``unittest.TestCase`` and using ``assertRaises`` to catch the exception, the background coroutine manages to just freeze. I have narrowed it down to ``traceback.clear_frames`` in ``assertRaises`` that causes a GeneratorExit in the background coroutine. I believe this issue is a duplicate of https://bugs.python.org/issue29211, but wanted to provide another actual use case where it can pop up. Also, even if the generator raises a GeneratorExit, why the background coroutine froze is still a mystery to me. A script to reproduce my case is attached. ---------- components: asyncio files: test_async_deadlock.py messages: 333759 nosy: asvetlov, yselivanov, Тарас Войнаровський priority: normal severity: normal status: open title: traceback.clear_frames manages to deadlock a background task type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48059/test_async_deadlock.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 11:51:18 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 16 Jan 2019 16:51:18 +0000 Subject: [New-bugs-announce] [issue35752] test_buffer fails on ppc64le: memoryview pack_single() is miscompiled Message-ID: <1547657478.14.0.363235930119.issue35752@roundup.psfhosted.org> New submission from STINNER Victor : The bug was first reported on Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=1540995 ====================================================================== FAIL: test_memoryview_struct_module (test.test_buffer.TestBufferProtocol) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_buffer.py", line 2540, in test_memoryview_struct_module self.assertEqual(m[1], nd[1]) AssertionError: -21.099998474121094 != -21.100000381469727 The problem is the conversion from C double
(64-bit float) to C float (32-bit float). There are 2 implementations: * Objects/memoryobject.c: pack_single() and unpack_single() * Modules/_struct.c: nu_float() and np_float() Attached ppc64_float32_bug.py is the simplified test case to trigger the bug. The result depends on the compiler optimization level: * gcc -O0: -21.100000381469727 == -21.100000381469727, OK * gcc -O1: -21.099998474121094 != -21.100000381469727, ERROR * (I guess that higher optimization levels also trigger the bug) The problem is that the pack_single() function is "miscompiled" (or "too optimized"). Adding "volatile" to PACK_SINGLE() prevents the unsafe compiler optimization and fixes the issue for me: try attached pack_single_volatile.patch. === -O1 assembler code with the bug === PACK_SINGLE(ptr, d, float); r30 = ptr (gdb) p $vs63.v2_double $17 = {0, -21.100000000000001} => 0x00000000100a1178 : stxsspx vs63,0,r30 (gdb) p /x (*ptr)@4 $10 = {0xcc, 0xcc, 0xa8, 0xc1} The first byte is 0xcc: WRONG. === -O1 assembler code without the bug (volatile) === r30 = ptr (gdb) p $f31 $1 = -21.100000000000001 => 0x00000000100a11e4 : frsp f31,f31 (gdb) p $f31 $2 = -21.100000381469727 0x00000000100a11e8 : stfs f31,152(r1) 0x00000000100a11ec : lwz r9,152(r1) (gdb) p /x $r9 $8 = 0xc1a8cccd 0x00000000100a11f0 : stw r9,0(r30) (gdb) p /x (*ptr)@4 $9 = {0xcd, 0xcc, 0xa8, 0xc1} 0x00000000100a11f4 : li r3,0 0x00000000100a11f8 : lfd f31,216(r1) 0x00000000100a11fc : ld r30,200(r1) The first byte is 0xcd: GOOD.
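The correctly rounded float32 value, and the exact byte pattern from the "GOOD" disassembly above, can be reproduced in pure Python with the struct module:

```python
import struct

x = -21.1
packed = struct.pack('<f', x)      # round to a 32-bit float, little-endian
f32, = struct.unpack('<f', packed)
print(packed.hex())  # cdcca8c1 -- the bytes {0xcd, 0xcc, 0xa8, 0xc1} above
print(f32)           # -21.100000381469727, not -21.099998474121094
```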
---------- components: Library (Lib) files: ppc64_float32_bug.py messages: 333774 nosy: mark.dickinson, skrah, vstinner priority: normal severity: normal status: open title: test_buffer fails on ppc64le: memoryview pack_single() is miscompiled versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48060/ppc64_float32_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 15:17:24 2019 From: report at bugs.python.org (David Antonini) Date: Wed, 16 Jan 2019 20:17:24 +0000 Subject: [New-bugs-announce] [issue35753] Importing call from unittest.mock directly causes ValueError Message-ID: <1547669844.88.0.953853980523.issue35753@roundup.psfhosted.org> New submission from David Antonini : Ok so that's a pretty odd bug. I already had from unittest.mock import patch, mock_open so I simply modified that import to include call, instead of doing mock.call in my test. Changing this to from unittest import mock and then mock.call fixed the error.
from unittest.mock import patch, mock_open, call mocked_print.assert_has_calls([ call("first print"), call("second print"), ]) I get: C:\Program Files (x86)\Python37-64\lib\doctest.py:932: in find self._find(tests, obj, name, module, source_lines, globs, {}) C:\Program Files (x86)\Python37-64\lib\doctest.py:991: in _find if ((inspect.isroutine(inspect.unwrap(val)) C:\Program Files (x86)\Python37-64\lib\inspect.py:515: in unwrap raise ValueError('wrapper loop when unwrapping {!r}'.format(f)) E ValueError: wrapper loop when unwrapping call collected 1 item / 1 errors But when I don't import call directly my test runs as expected: from unittest.mock import patch, mock_open from unittest import mock mocked_print.assert_has_calls([ mock.call(), mock.call(), ]) I have the same issue when using: assert mocked_print.call_args_list == [call("first print"), call("second print")] <- ValueError assert mocked_print.call_args_list == [mock.call("first print"), mock.call("second print")] <- Works as expected. ---------- components: Tests messages: 333786 nosy: toonarmycaptain priority: normal severity: normal status: open title: Importing call from unittest.mock directly causes ValueError type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 16:33:08 2019 From: report at bugs.python.org (jimbo1qaz_ via Gmail) Date: Wed, 16 Jan 2019 21:33:08 +0000 Subject: [New-bugs-announce] [issue35754] When writing/closing a closed Popen.stdin, I get OSError vs. BrokenPipeError randomly or depending on bufsize Message-ID: <1547674388.65.0.744708229654.issue35754@roundup.psfhosted.org> New submission from jimbo1qaz_ via Gmail : Windows 10 1709 x64, Python 3.7.1.
Minimal example and stack traces at https://gist.github.com/jimbo1qaz/75d7a40cac307f8239ce011fd90c86bf Essentially I create a subprocess.Popen, using a process (msys2 head.exe) which closes its stdin after some amount of input, then write nothing but b"\n"*1000 bytes to its stdin. If the bufsize is small (1000 bytes), I always get OSError: [Errno 22] Invalid argument If the bufsize is large (1 million bytes), I always get BrokenPipeError: [Errno 32] Broken pipe. (This happens whether I write 1 million newlines or 1000 at a time). Originally I created a ffmpeg->ffplay pipeline with a massive bufsize (around 1280*720*3 * 2 frames), then wrote 1280*720*3 bytes of video frames at a time. Closing ffplay's window usually created BrokenPipeError, but occasionally OSError. This was actually random. ------------ It seems that this is known to some extent, although I couldn't find any relevant issues on the bug tracker, and "having to catch 2 separate errors" isn't explained on the documentation. (Is it intended though undocumented behavior?) Popen._communicate() calls Popen._stdin_write(), but specifically ignores BrokenPipeError and OSError where exc.errno == errno.EINVAL == 22 (the 2 cases I encountered). But I don't call Popen.communicate() but instead write directly to stdin, since I have a loop that outputs 1 video frame at a time, and rely on pipe blocking to stop my application from running too far ahead of ffmpeg/ffplay. ------------ popen.stdin is a <_io.BufferedWriter name=3>. https://docs.python.org/3/library/io.html#io.BufferedIOBase.write >Write the given bytes-like object, b, and return the number of bytes written (always equal to the length of b in bytes, since if the write fails an OSError will be raised). Depending on the actual implementation, these bytes may be readily written to the underlying stream, or held in a buffer for performance and latency reasons. The page doesn't mention BrokenPipeError at all (Ctrl+F). 
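A defensive pattern for this situation mirrors what the report says Popen._communicate() already does internally: catch both BrokenPipeError and the EINVAL flavor of OSError. This is a sketch, not the stdlib's code; the helper name and return values are invented for illustration.

```python
import errno
import subprocess
import sys

def write_and_close(proc, data):
    """Write to proc.stdin and close it, tolerating a child that has
    already closed its end of the pipe -- the two error shapes the
    report describes."""
    try:
        proc.stdin.write(data)
        proc.stdin.close()
    except BrokenPipeError:
        return "broken-pipe"
    except OSError as exc:
        if exc.errno == errno.EINVAL:  # seen on Windows with small bufsize
            return "invalid-argument"
        raise
    return "ok"

# Demo: a child that exits immediately, then a large write to its stdin.
proc = subprocess.Popen([sys.executable, "-c", "pass"], stdin=subprocess.PIPE)
proc.wait()
print(write_and_close(proc, b"\n" * 1000000))
```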
So why do I *sometimes* get a BrokenPipeError (subclasses ConnectionError subclasses OSError) instead? ---------- messages: 333792 nosy: jimbo1qaz_ priority: normal severity: normal status: open title: When writing/closing a closed Popen.stdin, I get OSError vs. BrokenPipeError randomly or depending on bufsize versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 18:41:46 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 16 Jan 2019 23:41:46 +0000 Subject: [New-bugs-announce] [issue35755] Remove current directory from posixpath.defpath to enhance security Message-ID: <1547682106.56.0.245747157525.issue35755@roundup.psfhosted.org> New submission from STINNER Victor : Currently, posixpath.defpath is equal to: defpath = ':/bin:/usr/bin' It gives 3 directories: >>> posixpath.defpath.split(posixpath.pathsep) ['', '/bin', '/usr/bin'] where the empty string means "the current directory". Trying to locate an executable from the current directory can be security issue when an attacker tries to execute arbitrary command. The Linux exec(3) manual page contains an interesting note about the removal of the empty string from glibc 2.24 by accident: http://man7.org/linux/man-pages/man3/execvp.3.html NOTES The default search path (used when the environment does not contain the variable PATH) shows some variation across systems. It generally includes /bin and /usr/bin (in that order) and may also include the current working directory. On some other systems, the current working is included after /bin and /usr/bin, as an anti-Trojan-horse measure. The glibc implementation long followed the traditional default where the current working directory is included at the start of the search path. However, some code refactoring during the development of glibc 2.24 caused the current working directory to be dropped altogether from the default search path. 
This accidental behavior change is considered mildly beneficial, and won't be reverted. (...) Context of this issue: This discussion started from my PR 11579 which modifies the subprocess module to use posix_spawnp(): https://github.com/python/cpython/pull/11579#pullrequestreview-193261299 So I propose to replace defpath = ':/bin:/usr/bin' with defpath = '/bin:/usr/bin' which gives 2 directories: >>> '/bin:/usr/bin'.split(posixpath.pathsep) ['/bin', '/usr/bin'] This change would only affect os.get_exec_path(), and so indirectly the subprocess module (when the executable contains no directory), *when the PATH environment variable is not set*. ---------- components: Library (Lib) messages: 333801 nosy: christian.heimes, giampaolo.rodola, gregory.p.smith, vstinner priority: normal severity: normal status: open title: Remove current directory from posixpath.defpath to enhance security type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 18:42:09 2019 From: report at bugs.python.org (Bryan Koch) Date: Wed, 16 Jan 2019 23:42:09 +0000 Subject: [New-bugs-announce] [issue35756] Using `return value` in a generator function skips the returned value on for-loop iteration Message-ID: <1547682129.59.0.632977183775.issue35756@roundup.psfhosted.org> New submission from Bryan Koch : Using the new "`return value` is semantically equivalent to `raise StopIteration(value)`" syntax created in PEP-380 (https://legacy.python.org/dev/peps/pep-0380/#formal-semantics) causes the returned value to be skipped by standard methods of iteration. The PEP reads as if returning a value via StopIteration was meant to signal that the generator was finished and that StopIteration.value was the final value.
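The behavior being reported can be shown in a few lines. Under PEP 380 the returned value is not produced by iteration; it is delivered to a delegating `yield from` or carried on the StopIteration instance:

```python
def gen():
    yield 1
    yield 2
    return 99  # equivalent to raise StopIteration(99)

# A for-loop (or list()) stops at StopIteration and discards its value:
print(list(gen()))  # [1, 2]

# A delegating generator receives the value from `yield from`:
def outer():
    result = yield from gen()
    yield result

print(list(outer()))  # [1, 2, 99]

# ...or it can be read off the StopIteration instance directly:
g = gen()
next(g); next(g)
try:
    next(g)
except StopIteration as exc:
    print(exc.value)  # 99
```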
If StopIteration.value is meant to represent the final value, then the built-in for-loop should not skip it and the current implementation in 3.3, 3.4, 3.5, and 3.6 should be considered an oversight of the PEP and a bug (I don't have a version of 3.7 or 3.8 to test newer versions). Reproduction code is attached with comments/annotations. ---------- files: ex1.py messages: 333802 nosy: Bryan Koch priority: normal severity: normal status: open title: Using `return value` in a generator function skips the returned value on for-loop iteration type: behavior versions: Python 3.4, Python 3.5, Python 3.6 Added file: https://bugs.python.org/file48062/ex1.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 16 23:29:01 2019 From: report at bugs.python.org (Kirill Kolyshkin) Date: Thu, 17 Jan 2019 04:29:01 +0000 Subject: [New-bugs-announce] [issue35757] slow subprocess.Popen(..., close_fds=True) Message-ID: <1547699341.03.0.843639889929.issue35757@roundup.psfhosted.org> New submission from Kirill Kolyshkin : In case close_fds=True is passed to subprocess.Popen() or its users (subprocess.call() etc), it might spend some considerable time closing non-opened file descriptors, as demonstrated by the following snippet from strace: close(3) = -1 EBADF (Bad file descriptor) close(5) = -1 EBADF (Bad file descriptor) close(6) = -1 EBADF (Bad file descriptor) close(7) = -1 EBADF (Bad file descriptor) ... close(1021) = -1 EBADF (Bad file descriptor) close(1022) = -1 EBADF (Bad file descriptor) close(1023) = -1 EBADF (Bad file descriptor) This happens because the code in _close_fds() iterates from 3 up to MAX_FDS = os.sysconf("SC_OPEN_MAX"). 
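For comparison, the open descriptors can be enumerated directly instead of probed one by one; Python 3's _posixsubprocess takes this approach using the fd directory on Linux. A hedged sketch (Linux/procfs only; it returns None elsewhere):

```python
import os

def open_fds():
    """List the actually-open fds via /proc/self/fd (Linux), instead of
    issuing one close() syscall per possible fd up to SC_OPEN_MAX."""
    try:
        names = os.listdir('/proc/self/fd')
    except FileNotFoundError:
        return None  # no procfs (e.g. macOS, Windows)
    return sorted(int(name) for name in names)

fds = open_fds()
if fds is not None:
    print(len(fds), "open fds vs a probe limit of", os.sysconf('SC_OPEN_MAX'))
```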
Now, syscalls are cheap, but SC_OPEN_MAX (also known as RLIMIT_NOFILE or ulimit -n) can be quite high, for example: $ docker run --rm python python3 -c \ $'import os\nprint(os.sysconf("SC_OPEN_MAX"))' 1048576 This means a million syscalls before spawning a child process, which can result in a major delay, like 0.1s as measured on my fast and mostly idling laptop. Here is the comparison with python3 (which does not have this problem): $ docker run --rm python python3 -c $'import subprocess\nimport time\ns = time.time()\nsubprocess.check_call([\'/bin/true\'], close_fds=True)\nprint(time.time() - s)\n' 0.0009245872497558594 $ docker run --rm python python2 -c $'import subprocess\nimport time\ns = time.time()\nsubprocess.check_call([\'/bin/true\'], close_fds=True)\nprint(time.time() - s)\n' 0.0964419841766 ---------- components: Library (Lib) messages: 333819 nosy: Kirill Kolyshkin priority: normal pull_requests: 11269 severity: normal status: open title: slow subprocess.Popen(..., close_fds=True) type: performance versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 00:16:02 2019 From: report at bugs.python.org (Minmin Gong) Date: Thu, 17 Jan 2019 05:16:02 +0000 Subject: [New-bugs-announce] [issue35758] Disable x87 control word for MSVC ARM compiler Message-ID: <1547702162.77.0.571505556494.issue35758@roundup.psfhosted.org> New submission from Minmin Gong : MSVC defines _M_ARM for ARM targets, but ARM has no x87 control word, so the x87 code needs to be disabled to prevent compilation problems.
---------- messages: 333821 nosy: Minmin.Gong priority: normal pull_requests: 11270 severity: normal status: open title: Disable x87 control word for MSVC ARM compiler _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 07:34:49 2019 From: report at bugs.python.org (Thomas Krennwallner) Date: Thu, 17 Jan 2019 12:34:49 +0000 Subject: [New-bugs-announce] [issue35759] inspect module does not implement introspection API for asynchronous generators Message-ID: <1547728489.52.0.471681525673.issue35759@roundup.psfhosted.org> New submission from Thomas Krennwallner : The `inspect` module does not contain functions for determining the current state of asynchronous generators. That is, there is no introspection API for asynchronous generators that match the API for generators and coroutines: https://docs.python.org/3.8/library/inspect.html#current-state-of-generators-and-coroutines. I propose to add `inspect.getasyncgenstate` and `inspect.getasyncgenlocals` (following `inspect.isasyncgenfunction` and `inspect.isasyncgen`). % ./python Python 3.8.0a0 (heads/fix-issue-getasyncgenstate:a24deae1e2, Jan 17 2019, 11:44:45) [GCC 6.3.0 20170516] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import inspect >>> async def agen(): ... x = 1 ... yield x ... x += 1 ... yield x ... 
>>> ag = agen() >>> inspect.getasyncgenstate(ag) 'AGEN_CREATED' >>> inspect.getasyncgenlocals(ag) {} >>> ag.__anext__().__next__() Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: 1 >>> inspect.getasyncgenstate(ag) 'AGEN_SUSPENDED' >>> inspect.getasyncgenlocals(ag) {'x': 1} >>> ag.__anext__().__next__() Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: 2 >>> inspect.getasyncgenstate(ag) 'AGEN_SUSPENDED' >>> inspect.getasyncgenlocals(ag) {'x': 2} >>> ag.aclose().send(None) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration >>> inspect.getasyncgenstate(ag) 'AGEN_CLOSED' >>> inspect.getasyncgenlocals(ag) {} ---------- components: Library (Lib) files: 0001-inspect-add-introspection-API-for-asynchronous-gener.patch keywords: patch messages: 333861 nosy: tkren priority: normal severity: normal status: open title: inspect module does not implement introspection API for asynchronous generators type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48066/0001-inspect-add-introspection-API-for-asynchronous-gener.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 09:04:24 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 17 Jan 2019 14:04:24 +0000 Subject: [New-bugs-announce] [issue35760] test_asyncio: test_async_gen_asyncio_gc_aclose_09() race condition Message-ID: <1547733864.78.0.251061521649.issue35760@roundup.psfhosted.org> New submission from STINNER Victor : The test fails once on AMD64 Windows8.1 Non-Debug 3.x when the Python test suite is run in parallel, but passes if the test is run alone (when the system is more "idle").
https://buildbot.python.org/all/#/builders/12/builds/1898 ====================================================================== FAIL: test_async_gen_asyncio_gc_aclose_09 (test.test_asyncgen.AsyncGenAsyncioTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_asyncgen.py", line 684, in test_async_gen_asyncio_gc_aclose_09 self.assertEqual(DONE, 1) AssertionError: 0 != 1 I can reproduce the failure on a very busy Windows using 2 terminals: * python -m test -F -W -j4 test_asyncgen test_asyncgen test_asyncgen test_asyncgen * python -m test -j0 -r -u all The first command runs the test 4 times in parallel in a loop until it fails, the second command is just one way to stress the system. The test is based on time and so has a race condition depending on the exact timing: def test_async_gen_asyncio_gc_aclose_09(self): DONE = 0 async def gen(): nonlocal DONE try: while True: yield 1 finally: await asyncio.sleep(0.01) await asyncio.sleep(0.01) DONE = 1 async def run(): g = gen() await g.__anext__() await g.__anext__() del g await asyncio.sleep(0.1) self.loop.run_until_complete(run()) self.assertEqual(DONE, 1) ---------- components: Library (Lib), Tests messages: 333868 nosy: vstinner priority: normal severity: normal status: open title: test_asyncio: test_async_gen_asyncio_gc_aclose_09() race condition versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 09:23:09 2019 From: report at bugs.python.org (=?utf-8?q?Th=C3=A9ophile_Chevalier?=) Date: Thu, 17 Jan 2019 14:23:09 +0000 Subject: [New-bugs-announce] [issue35761] Allow dataclasses to be updated in place Message-ID: <1547734989.68.0.404239918534.issue35761@roundup.psfhosted.org> New submission from Théophile Chevalier : Calling dataclasses.replace(instance, **changes) returns a new object
of the same type. From my understanding there is, however, no method to update the fields of a dataclass in place from another one. I propose to add dataclasses.update(instance_to_update, other_instance, **changes). This would for instance allow one to change all fields of the current object in a sturdy way. In my case, I currently call obj.__dict__.update(other_obj.__dict__) to perform the operation, but I know it always has to be done pretty carefully. If this is accepted, I'm willing to post the change. ---------- messages: 333872 nosy: theophile priority: normal severity: normal status: open title: Allow dataclasses to be updated in place type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 10:22:48 2019 From: report at bugs.python.org (Samuel Bayer) Date: Thu, 17 Jan 2019 15:22:48 +0000 Subject: [New-bugs-announce] [issue35762] subprocess.Popen with universal_newlines and nonblocking streams fails with "can't concat NoneType to bytes" Message-ID: <1547738568.37.0.865745194211.issue35762@roundup.psfhosted.org> New submission from Samuel Bayer : This bug is probably related to issue 24560. This: >>> import subprocess, fcntl, os >>> p = subprocess.Popen(["python", "-c", 'import time; time.sleep(5)'], stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, universal_newlines= True) >>> fcntl.fcntl(p.stderr.fileno(), fcntl.F_SETFL, os.O_NONBLOCK | fcntl.fcntl(p.stderr.fileno(), fcntl.F_GETFL)) >>> p.stderr.read() causes this: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 321, in decode data = self.buffer + input TypeError: can't concat NoneType to bytes I'm assuming the problem is that the underlying unbuffered stream returns None and the incremental byte decoder that's induced by universal_newlines = True isn't expecting it.
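The underlying behavior in this last report can be shown without subprocess at all: a raw (unbuffered) file object on a nonblocking descriptor returns None when no data is available, which is presumably the value that reaches the incremental decoder:

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)
raw = os.fdopen(r, 'rb', buffering=0)  # raw FileIO, no buffering layer

print(raw.read(100))   # no data yet on a nonblocking fd -> None, not b''
os.write(w, b'hello')
print(raw.read(100))   # b'hello'
```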
---------- components: IO messages: 333883 nosy: sambayer priority: normal severity: normal status: open title: subprocess.Popen with universal_newlines and nonblocking streams fails with "can't concat NoneType to bytes" type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 15:01:28 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 17 Jan 2019 20:01:28 +0000 Subject: [New-bugs-announce] [issue35763] IDLE calltips: make positional note less obtrusive Message-ID: <1547755288.11.0.42128686161.issue35763@roundup.psfhosted.org> New submission from Terry J. Reedy : #19903 made calltip.getargspec use inspect.signature. The latter may include '/' following positional-only arguments. Slashes are possible for the growing number of C-coded functions processed with Argument Clinic. They appear in both help output and IDLE calltips, but not yet in the regular docs, let alone Python code. The result, for instance, is 'float([x])' in the docs and 'float(x=0, /)' in help() and calltips. Since '/' is effectively undocumented, especially in anything beginners would see, and since there have been questions from beginners as to its meaning, the following note is added to calltips on a new line followed by a blank line: ['/' marks preceding arguments as positional-only] The negative effect is somewhat obtrusively expanding what would typically be 2 lines to 4 in order to say something that hopefully becomes useless. Raymond's #16638 comment about big tips being distracting prompted me to consider some possible (non-exclusive) changes to reduce the impact. 0. Omit the blank line. We added the blank line to make it clearer that the comment is not part of the docstring. This can be done otherwise. 1. Change the font to something like smaller, red, italic characters.
Issue: the tip string is computed as a whole in the user execution process and inserted in the tip window in the IDLE process. 2. Shorten and move the comment and mark it with '#'. Most builtins have short signatures, so a short enough comment could be appended to the signature line as a comment. In combination with 0. (and 1., but not visible here), the float tip would shrink from the current float(x=0, /) ['/' marks preceding arguments as positional-only] Convert a string or number to a floating point number, if possible. back down to float(x=0, /) # / means positional-only Convert a string or number to a floating point number, if possible. 3. Limit the number of appearances in a particular session. The following should work. slash_comments = 3 ... if '/' in sig: if slash_comments: slash_comments -= 1 I think 3 would be about enough. I don't want to make it configurable. Issue: restarting the user execution process would restart the count in that process, where the addition is currently made. If the proposal to use '/' in the regular docs were ever accepted, I would remove the special calltip comment. ---------- assignee: terry.reedy components: IDLE messages: 333897 nosy: rhettinger, terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE calltips: make positional note less obtrusive type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 15:18:44 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 17 Jan 2019 20:18:44 +0000 Subject: [New-bugs-announce] [issue35764] IDLE: revise calltip doc Message-ID: <1547756324.8.0.250551276486.issue35764@roundup.psfhosted.org> New submission from Terry J. Reedy : Add cross-reference from Menu section entry. Document '/' for builtins. Check other details. (Also remove 'extension' from end of previous entry.) 
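As context for the '/' marker both IDLE issues discuss: it denotes positional-only parameters, which many C-coded builtins already enforce. float is one such case, and the meaning can be demonstrated directly (the name x below is just what the signature float(x=0, /) calls its parameter):

```python
# float's parameter has a default but cannot be passed by keyword --
# exactly what '/' in the signature float(x=0, /) conveys.
print(float())        # 0.0 -- the default applies
print(float("1.5"))   # 1.5 -- positional works

try:
    float(x="1.5")    # the keyword form is rejected
except TypeError as exc:
    print("TypeError:", exc)
```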
---------- assignee: terry.reedy components: IDLE messages: 333898 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: revise calltip doc type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 19:31:40 2019 From: report at bugs.python.org (Patrick Rice) Date: Fri, 18 Jan 2019 00:31:40 +0000 Subject: [New-bugs-announce] [issue35765] Document references object x but doesn't show it in the example Message-ID: <1547771500.83.0.912392206765.issue35765@roundup.psfhosted.org> New submission from Patrick Rice : https://docs.python.org/3.5/tutorial/inputoutput.html If you have an object x, you can view its JSON string representation with a simple line of code: >>> >>> import json >>> json.dumps([1, 'simple', 'list']) '[1, "simple", "list"]' ---------- assignee: docs at python components: Documentation messages: 333917 nosy: Patrick Rice, docs at python priority: normal severity: normal status: open title: Document references object x but doesn't show it in the example versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 19:33:01 2019 From: report at bugs.python.org (Guido van Rossum) Date: Fri, 18 Jan 2019 00:33:01 +0000 Subject: [New-bugs-announce] [issue35766] Merge typed_ast back into CPython Message-ID: <1547771581.08.0.389360423635.issue35766@roundup.psfhosted.org> New submission from Guido van Rossum : (This started at https://discuss.python.org/t/merge-typed-ast-back-into-cpython/377. It's somewhat related to https://bugs.python.org/issue33337.) I now have a thorough understanding of what typed_ast does, and I think it would be straightforward to port it upstream. We'd need to define two new tokens to represent `# type: ignore` and `# type: `, and tokenizer code to recognize these.
Then we need a new flag to be passed to the tokenizer (via the parser) that enables this behavior. We make a small number of changes to `Grammar` (inserting optional `TYPE_COMMENT` tokens) and to `Python.asdl` (adding fields to a few node types to hold the optional type comment), and a fair number of changes to `ast.c` to extract the type comments. We have similar patches for 3.6 and 3.7, so it's a simple matter of porting those patches to 3.8. By default, `ast.parse()` should not return type comments, since this would reject some perfectly good Python code (with something looking like a type comment in a place where the grammar doesn't allow it). But passing a new flag will cause the tokenizer to process type comments and the returned tree will contain them. I could produce a PR with this in a few days (having just gone over most of the process for porting typed_ast from 3.6 to 3.7). There's one more feature I'd like to lobby for: a feature_version flag that modifies the grammar slightly so it resembles an older version of Python (going back to 3.4). This is used in mypy to decouple the Python version you're running from the Python version for which you're checking compatibility (useful when checking code that will be deployed on a system with a different Python version installed). I imagine this would be useful to other linters as well, and the implementation is mostly manipulating whether `async` and `await` are keywords. But if there's pushback to this part I can live without it; the rest of the work is still useful. In the next few days I will produce a PR so people can see for themselves. In https://discuss.python.org/t/merge-typed-ast-back-into-cpython/377/17, Łukasz offered to merge my PR.
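For readers finding this later: the opt-in flag described here shipped in Python 3.8 as ast.parse(..., type_comments=True), alongside a feature_version parameter. A minimal sketch of the opt-in behavior:

```python
import ast

src = "x = 1  # type: int\n"

# Default: type comments are ignored, so ordinary code keeps parsing.
print(ast.parse(src).body[0].type_comment)                      # None

# Opt in: the Assign node carries the comment's text.
print(ast.parse(src, type_comments=True).body[0].type_comment)  # int
```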
---------- components: Library (Lib) messages: 333919 nosy: gvanrossum priority: normal severity: normal stage: needs patch status: open title: Merge typed_ast back into CPython type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 20:25:19 2019 From: report at bugs.python.org (Jason Fried) Date: Fri, 18 Jan 2019 01:25:19 +0000 Subject: [New-bugs-announce] [issue35767] unittest loader doesn't work with partial test functions Message-ID: <1547774719.89.0.274196426974.issue35767@roundup.psfhosted.org> New submission from Jason Fried : https://github.com/python/cpython/blob/3.7/Lib/unittest/loader.py#L232 fullName = '%s.%s' % (testCaseClass.__module__, testFunc.__qualname__) Instead we should probably replace testFunc.__qualname__ with attrname. I ran into this while running a test suite that built up test functions using partials and added them to the TestCase class with setattr. This works in 3.6.3. ---------- messages: 333926 nosy: fried, lisroach, lukasz.langa priority: normal severity: normal status: open title: unittest loader doesn't work with partial test functions versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 17 23:51:19 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 18 Jan 2019 04:51:19 +0000 Subject: [New-bugs-announce] [issue35768] IDLE: Auto measure font fixed pitch characteristics Message-ID: <1547787079.94.0.0809073179555.issue35768@roundup.psfhosted.org> New submission from Terry J. Reedy : The greatly expanded configdialog Font tab multi-alphabet sample reveals to some degree how well tk fills in BMP Unicode characters on a particular machine. It also lets users extend the sample.
The sample has 2 lines of 20 ascii characters each and lines of 20 non-Latin1, IPA, Greek, Cyrillic, Hebrew, and Arabic characters. The intention is to let one see if a font (as extended) is fixed-pitch for Ascii and if that property extends to any of those other European and Near East alphabets. On my machine, the number of fixed alphabets varies from 0, 1 (ascii), 2 (rest of Latin1) up to 7 (for Courier) (and some in between). On #35196, Raymond Hettinger asked whether fixed-pitch fonts could be detected. With the caveat that this property is not binary unless we restrict attention to Ascii, yes. Without measuring each character, we could check the ascii lines and then the others. We could then highlight the lines in the sample that pass. Before coding, we need to experiment a bit with the Font measuring method. Should we cache results in .idlerc? For all the fonts on my machine, the East Asian CJK characters are filled in with a fixed-pitch that is about 1.6 to 1.8 (not 2.0) times the Ascii fixed or average pitch. Raymond also suggested limiting the font list to those with fixed ascii. I think at least segregating fixed Ascii pitch fonts to make them easy to find is a great idea. Some details need to be thought about. ---------- assignee: terry.reedy components: IDLE messages: 333939 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Auto measure font fixed pitch characteristics type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 01:19:06 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 18 Jan 2019 06:19:06 +0000 Subject: [New-bugs-announce] [issue35769] IDLE: change new file name from "Untitled" to "untitled" Message-ID: <1547792346.64.0.506304768871.issue35769@roundup.psfhosted.org> New submission from Terry J. Reedy : Conform to PEP 8.
---------- assignee: terry.reedy components: IDLE messages: 333943 nosy: terry.reedy priority: normal severity: normal stage: commit review status: open title: IDLE: change new file name from ''Untitled" to "untitled" type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 01:42:07 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 18 Jan 2019 06:42:07 +0000 Subject: [New-bugs-announce] [issue35770] IDLE: python -m idlelib fails on master on Mac OS 10.10.4 Message-ID: <1547793727.58.0.008691556265.issue35770@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I used to launch IDLE from master using ./python.exe -m idlelib. It used to work but fails on master on Mac OS now. There seems to be some discussion about this on msg332672 after the commit c1b4b0f6160e1919394586f44b12538505fed300. Feel free to close this if it's a known issue. I haven't tested it on latest 3.7 but it seems the commit was also merged to 3.7 so I am just adding 3.8 as version. Mac OS version : 10.10.4 ? cpython git:(master) ?
git checkout c1b4b0f6160e1919394586f44b12538505fed300 Lib/idlelib && ./python.exe -m idlelib Traceback (most recent call last): File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/idlelib/__main__.py", line 7, in idlelib.pyshell.main() File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/idlelib/pyshell.py", line 1507, in main macosx.setupApp(root, flist) File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/idlelib/macosx.py", line 280, in setupApp overrideRootMenu(root, flist) File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/idlelib/macosx.py", line 181, in overrideRootMenu del mainmenu.menudefs[-2][1][0] IndexError: list assignment index out of range ? cpython git:(master) ? git checkout c1b4b0f6160e1919394586f44b12538505fed300~1 Lib/idlelib && ./python.exe -m idlelib # Works $ git show c1b4b0f6160e1919394586f44b12538505fed300 commit c1b4b0f6160e1919394586f44b12538505fed300 (bpo35557, 35559) Author: Cheryl Sabella Date: Sat Dec 22 01:25:45 2018 -0500 bpo-22703: IDLE: Improve Code Context and Zoom Height menu labels (GH-11214) The Code Context menu label now toggles between Show/Hide Code Context. The Zoom Height menu now toggles between Zoom/Restore Height. Zoom Height has moved from the Window menu to the Options menu. 
https://bugs.python.org/issue22703 ---------- assignee: terry.reedy components: IDLE messages: 333945 nosy: cheryl.sabella, taleinat, terry.reedy, xtreak priority: normal severity: normal status: open title: IDLE: python -m idlelib fails on master on Mac OS 10.10.4 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 01:51:07 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 18 Jan 2019 06:51:07 +0000 Subject: [New-bugs-announce] [issue35771] IDLE: Fix tooltip Hovertiptest failure Message-ID: <1547794267.4.0.582455332849.issue35771@roundup.psfhosted.org> New submission from Terry J. Reedy : In the buildbot testing for #35730, this test failed on X86 Windows 3.7 and passed on retest. I did not check the green bots, so there could be other fail and pass results. ====================================================================== FAIL: test_showtip_on_mouse_enter_hover_delay (idlelib.idle_test.test_tooltip.HovertipTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.7.bolen-windows7\build\lib\idlelib\idle_test\test_tooltip.py", line 112, in test_showtip_on_mouse_enter_hover_delay self.assertFalse(tooltip.tipwindow and tooltip.tipwindow.winfo_viewable()) AssertionError: 1 is not false We should at least look at this since it might someday fail twice in a row on 1 machine. 
---------- messages: 333946 nosy: taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Fix tooltip Hovertiptest failure type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 06:15:44 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 18 Jan 2019 11:15:44 +0000 Subject: [New-bugs-announce] [issue35772] test_tarfile fails on ppc64le when using tmpfs filesystem Message-ID: <1547810144.05.0.534595171139.issue35772@roundup.psfhosted.org> New submission from STINNER Victor : The following test_tarfile tests fail on ppc64 when using tmpfs filesystem (which is the case on RHEL package build server): * test_sparse_file_00 (test.test_tarfile.GNUReadTest) * test_sparse_file_01 (test.test_tarfile.GNUReadTest) * test_sparse_file_10 (test.test_tarfile.GNUReadTest) * test_sparse_file_old (test.test_tarfile.GNUReadTest) Example of failure: ====================================================================== FAIL: test_sparse_file_00 (test.test_tarfile.GNUReadTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.6/Lib/test/test_tarfile.py", line 964, in test_sparse_file_00 self._test_sparse_file("gnu/sparse-0.0") File "/builddir/build/BUILD/Python-3.6.6/Lib/test/test_tarfile.py", line 958, in _test_sparse_file self.assertLess(s.st_blocks * 512, s.st_size) AssertionError: 131072 not less than 86016 Bug first reported on RHEL8: https://bugzilla.redhat.com/show_bug.cgi?id=1639490 test_tarfile has a _fs_supports_holes() function to check if the filesystem supports sparse files with holes.
The function returns True on: * ext4 filesystem on x86_64 on my Fedora 29 (kernel 4.19) * XFS filesystem on ppc64le (kernel 4.18) * tmpfs filesystem on ppc64le (kernel 4.18) In short, it always returns True on x86_64 and ppc64le Linux kernels. Problem: in practice, "tmpfs filesystem on ppc64le (kernel 4.18)" doesn't fully support sparse files. -- Example from: https://bugzilla.redhat.com/show_bug.cgi?id=1639490#c5 # ls -lhs ~/sparse 48K -rw-r--r--. 1 root root 84K Jan 18 05:36 /root/sparse Copy a sparse file from XFS to tmpfs: cp --sparse=always and fallocate --dig fail to punch holes, the file always takes 128K on disk on tmpfs. # cp sparse /root/mytmp/sparse --sparse=always # ls -lhs /root/mytmp/sparse 128K -rw-r--r--. 1 root root 84K Jan 18 06:10 /root/mytmp/sparse # fallocate --dig /root/mytmp/sparse # ls -lhs /root/mytmp/sparse 128K -rw-r--r--. 1 root root 84K Jan 18 06:10 /root/mytmp/sparse Counter example on XFS, source and destination files use 48K on disk for 84K of data: # cp sparse sparse2 --sparse=always # ls -lhs sparse* 48K -rw-r--r--. 1 root root 84K Jan 18 05:36 sparse 48K -rw-r--r--. 1 root root 84K Jan 18 06:13 sparse2 -- The attached PR fixes the _fs_supports_holes() heuristic to properly return False on tmpfs on ppc64le.
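The heuristic itself is easy to restate in plain Python: create a hole by seeking past end-of-file, then compare what the filesystem actually allocated (st_blocks is counted in 512-byte units on POSIX) against the nominal size. This is a paraphrase of the idea, not the exact code in test_tarfile:

```python
import os
import tempfile

# Create a file whose first 4 MiB is a hole, then compare allocated blocks
# with the nominal st_size.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 4 * 1024 * 1024, os.SEEK_SET)  # seek past EOF -> hole
    os.write(fd, b'x')                          # one real byte at the end
    st = os.fstat(fd)
    blocks = getattr(st, 'st_blocks', None)     # not available on Windows
    if blocks is not None:
        # True on a hole-supporting fs; False when the fs fully allocates
        # the file, as on the tmpfs/ppc64le setup in the report.
        print('supports holes:', blocks * 512 < st.st_size)
finally:
    os.close(fd)
    os.unlink(path)
```

The fix then amounts to running a probe like this once and skipping the sparse-file tests when it reports False.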
---------- components: Tests messages: 333956 nosy: vstinner priority: normal severity: normal status: open title: test_tarfile fails on ppc64le when using tmpfs filesystem versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 06:57:58 2019 From: report at bugs.python.org (Michael Felt) Date: Fri, 18 Jan 2019 11:57:58 +0000 Subject: [New-bugs-announce] [issue35773] test_bdb fails on AIX bot (regression) Message-ID: <1547812678.96.0.834885366728.issue35773@roundup.psfhosted.org> New submission from Michael Felt : I see in the bot history that test_bdb is now failing on AIX https://buildbot.python.org/all/#/builders/161/builds/718/steps/4/logs/stdio == CPython 3.8.0a0 (heads/master:a37f52436f, Jan 15 2019, 22:53:01) [C] == AIX-1-00C291F54C00-powerpc-32bit big-endian == cwd: /home/buildbot/buildarea/3.x.aixtools-aix-power6/build/build/test_python_5177546 == CPU count: 8 == encodings: locale=ISO8859-15, FS=iso8859-15 FYI: it is not failing on the GCC based bot (mine is based on XLC). ---------- components: Tests messages: 333957 nosy: Michael.Felt priority: normal severity: normal status: open title: test_bdb fails on AIX bot (regression) type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 07:28:26 2019 From: report at bugs.python.org (Dhiraj) Date: Fri, 18 Jan 2019 12:28:26 +0000 Subject: [New-bugs-announce] [issue35774] ASAN, memory leak Message-ID: <1547814506.63.0.194209135829.issue35774@roundup.psfhosted.org> New submission from Dhiraj : Hi Team, I have compiled cpython via clang using ASAN and memory leak was observed. After successful build of python, 1. Run python 2. 
Ctrl + D ==21461==ERROR: LeakSanitizer: detected memory leaks Direct leak of 257790 byte(s) in 93 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 Direct leak of 1640 byte(s) in 3 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 #4 0x96cea4 in _PyObject_GC_Malloc /home/input0/Desktop/cpython/Modules/gcmodule.c:1908:12 #5 0x96cea4 in _PyObject_GC_NewVar /home/input0/Desktop/cpython/Modules/gcmodule.c:1937 Direct leak of 663 byte(s) in 1 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 #4 0x8b9dd8 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1362:20 #5 0x8b84a5 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1194:18 #6 0x8b9e09 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1365:22 #7 0x8bf86a in read_object /home/input0/Desktop/cpython/Python/marshal.c:1451:9 #8 0x8bf86a in marshal_loads_impl /home/input0/Desktop/cpython/Python/marshal.c:1763 #9 0x8bf86a in marshal_loads /home/input0/Desktop/cpython/Python/clinic/marshal.c.h:158 #10 0x564da7 in _PyMethodDef_RawFastCallKeywords /home/input0/Desktop/cpython/Objects/call.c Direct leak of 579 
byte(s) in 1 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 #4 0x8b9dd8 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1362:20 #5 0x8b84a5 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1194:18 #6 0x8b9e09 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1365:22 #7 0x8b84a5 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1194:18 #8 0x8b9e09 in r_object /home/input0/Desktop/cpython/Python/marshal.c:1365:22 #9 0x8b409d in PyMarshal_ReadObjectFromString /home/input0/Desktop/cpython/Python/marshal.c:1568:14 #10 0x8a0d81 in get_frozen_object /home/input0/Desktop/cpython/Python/import.c:1277:12 #11 0x8a0d81 in _imp_get_frozen_object_impl /home/input0/Desktop/cpython/Python/import.c:2036 #12 0x8a0d81 in _imp_get_frozen_object /home/input0/Desktop/cpython/Python/clinic/import.c.h:198 #13 0x5623eb in _PyCFunction_FastCallDict /home/input0/Desktop/cpython/Objects/call.c:584:14 #14 0x5623eb in PyCFunction_Call /home/input0/Desktop/cpython/Objects/call.c:789 Direct leak of 536 byte(s) in 1 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x6403b0 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x6403b0 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x6403b0 in pymalloc_realloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1869 #4 0x6403b0 in _PyObject_Realloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1888 #5 0x644ead in PyObject_Realloc /home/input0/Desktop/cpython/Objects/obmalloc.c:658:12 Indirect leak of 15640 byte(s) in 17 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 
0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 #4 0x675f9a in PyType_GenericAlloc /home/input0/Desktop/cpython/Objects/typeobject.c:975:15 Indirect leak of 7440 byte(s) in 7 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 Indirect leak of 2571 byte(s) in 2 object(s) allocated from: #0 0x4f1460 in malloc (/home/input0/Desktop/cpython/python+0x4f1460) #1 0x63fc59 in PyMem_RawMalloc /home/input0/Desktop/cpython/Objects/obmalloc.c:527:12 #2 0x63fc59 in _PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:1550 #3 0x644d77 in PyObject_Malloc /home/input0/Desktop/cpython/Objects/obmalloc.c:640:12 #4 0x687d07 in type_call /home/input0/Desktop/cpython/Objects/typeobject.c:934:11 SUMMARY: AddressSanitizer: 286859 byte(s) leaked in 125 allocation(s). ---------- messages: 333958 nosy: Dhiraj_Mishra priority: normal severity: normal status: open title: ASAN, memory leak type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 09:04:49 2019 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Fri, 18 Jan 2019 14:04:49 +0000 Subject: [New-bugs-announce] [issue35775] Add a general selection function to statistics Message-ID: <1547820289.72.0.450290663887.issue35775@roundup.psfhosted.org> New submission from Rémi Lapeyre : As discussed in #30999, the attached PR adds a general selection function to the statistics module.
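The kind of selection function issue35775 proposes is classically implemented as quickselect; the `select` name and signature below are only illustrative, not the PR's actual API:

```python
import random

def select(data, k):
    """Return the k-th smallest element of data (0-based), expected O(n)."""
    data = list(data)
    if not 0 <= k < len(data):
        raise ValueError('k out of range')
    while True:
        # Partition around a random pivot and recurse into one side only.
        pivot = data[random.randrange(len(data))]
        less = [x for x in data if x < pivot]
        equal_count = sum(1 for x in data if x == pivot)
        if k < len(less):
            data = less
        elif k < len(less) + equal_count:
            return pivot
        else:
            k -= len(less) + equal_count
            data = [x for x in data if x > pivot]

# e.g. the element sitting at the 0.25 quantile of a small collection:
values = [7, 1, 5, 3, 9, 2, 8]
print(select(values, int(0.25 * (len(values) - 1))))
```

Unlike sorting first, this touches each element a constant number of times on average, which is the point of exposing selection as its own primitive.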
This allows one to simply get the element at a given quantile of a collection. https://www.cs.rochester.edu/~gildea/csc282/slides/C09-median.pdf ---------- components: Library (Lib) messages: 333964 nosy: remi.lapeyre priority: normal severity: normal status: open title: Add a general selection function to statistics type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 11:15:36 2019 From: report at bugs.python.org (Nazar) Date: Fri, 18 Jan 2019 16:15:36 +0000 Subject: [New-bugs-announce] [issue35776] Virtualenv 16.2.0 Error Finding Pip Message-ID: <1547828136.9.0.603302624111.issue35776@roundup.psfhosted.org> New submission from Nazar : Issue resolved by downgrading virtualenv from 16.2.0 to 15.1.0. $ virtualenv -p python my_venv Already using interpreter /usr/bin/python New python executable in /home/your_user_name/my_venv/bin/python Cannot find a wheel for setuptools Cannot find a wheel for pip Installing setuptools, pip, wheel... Complete output from command /home/your_user_name/my_venv/bin/python - setuptools pip wheel: Traceback (most recent call last): File "", line 11, in ImportError: No module named pip ---------------------------------------- ...Installing setuptools, pip, wheel...done.
Traceback (most recent call last): File "/usr/bin/virtualenv", line 11, in load_entry_point('virtualenv==16.2.0', 'console_scripts', 'virtualenv')() File "build/bdist.cygwin--x86_64/egg/virtualenv.py", line 768, in main File "build/bdist.cygwin--x86_64/egg/virtualenv.py", line 1030, in create_environment File "build/bdist.cygwin--x86_64/egg/virtualenv.py", line 983, in install_wheel File "build/bdist.cygwin--x86_64/egg/virtualenv.py", line 861, in call_subprocess OSError: Command /home/your_user_name/my_venv/bin/python - setuptools pip wheel failed with error code 1 ---------- components: Installation messages: 333986 nosy: NazarTrilisky priority: normal severity: normal status: open title: Virtualenv 16.2.0 Error Finding Pip type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 16:26:40 2019 From: report at bugs.python.org (Jez Hill) Date: Fri, 18 Jan 2019 21:26:40 +0000 Subject: [New-bugs-announce] [issue35777] mismatched eval() and ast.literal_eval() behavior with unicode_literals Message-ID: <1547846800.05.0.589923940496.issue35777@roundup.psfhosted.org> New submission from Jez Hill : Following `from __future__ import unicode_literals` the expression `eval(" 'foo' ")` will return a `unicode` instance. However, using the same input, `ast.literal_eval(" 'foo' ")` will return a `str` instance. The caller's preference, that those plain single-quotes should denote a unicode literal, is respected by `eval()` but not by `ast.literal_eval()`. I propose that `ast.literal_eval()` be made sensitive to this preference, to bring it in line with `eval()`.
---------- messages: 334011 nosy: jez priority: normal severity: normal status: open title: mismatched eval() and ast.literal_eval() behavior with unicode_literals type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 18:29:15 2019 From: report at bugs.python.org (Oscar Esteban) Date: Fri, 18 Jan 2019 23:29:15 +0000 Subject: [New-bugs-announce] [issue35778] RF: ``pathlib.Path.checksum()`` member Message-ID: <1547854155.56.0.887499494247.issue35778@roundup.psfhosted.org> New submission from Oscar Esteban : Gauging the interest in a checksum calculation function built into Path objects: ``` >>> Path('somefile.img').checksum() '4976c36bacf922cbc5c811c9c288e61d' >>> Path('somefile.img').checksum(hash='md5') '4976c36bacf922cbc5c811c9c288e61d' >>> Path('somefile.img').checksum(hash='sha256') '12917abe21e1eb4ba3c704600db53a1ff1434b3259422b86bfd08afa8216e4aa' >>> Path.home().checksum() '798d3a5c2b679750a90e91b09cf93129' >>> Path.home().checksum(hash='sha256') 'b3e04961fd54818d93aac305db4a3dec51b9731808c19ea9c59460c841e2d145' # Do not checksum content, just the file's path, as for directories >>> Path('somefile.img').checksum(content=False) '3fb531e352cbc2e2103ab73ede40f2d6' ``` ---------- components: Library (Lib) messages: 334023 nosy: oesteban priority: normal severity: normal status: open title: RF: ``pathlib.Path.checksum()`` member type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 21:43:35 2019 From: report at bugs.python.org (Ma Lin) Date: Sat, 19 Jan 2019 02:43:35 +0000 Subject: [New-bugs-announce] [issue35779] Print friendly version message in REPL Message-ID: <1547865815.91.0.564607608514.issue35779@roundup.psfhosted.org> New submission from Ma Lin : The current version message in REPL is too complicated for
official release.

Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

tags/v3.7.2 git tag is superfluous. 9a3ffc0492 git commit id may scare junior users. 23:09:28 quite meaningless. win32 "I'm using a 64 bit Windows, why call it win32?" IMO, we can simply print this message for official release: Python 3.7.2 (64 bit, Dec 23 2018) on Windows How about this logic?

if version in git_tag and "dirty" not in commit_id:
    print_simplified_message()
else:
    print_current_message()

---------- messages: 334026 nosy: Ma Lin priority: normal severity: normal status: open title: Print friendly version message in REPL type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 23:19:38 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 19 Jan 2019 04:19:38 +0000 Subject: [New-bugs-announce] [issue35780] Recheck logic in the C version of the lru_cache() Message-ID: <1547871578.67.0.302934084247.issue35780@roundup.psfhosted.org> New submission from Raymond Hettinger : After the check for popresult==Py_None, there is the comment that was mostly copied from the Python version but doesn't match the actual code: /* Getting here means that this same key was added to the cache while the lock was released. Since the link update is already done, we need only return the computed result and update the count of misses. */ The cache.pop uses the old key (the one being evicted), so at this point in the code we have an extracted link containing the old key but the pop failed to find the dictionary reference to that link. It tells us nothing about whether the current new key has already been added to the cache or whether another thread added a different key. This code path doesn't add the new key.
Also, it leaves the self->full variable set to True even though we're now at least one link short of maxsize. The next test is for popresult == NULL. If I'm understanding it correctly, it means that an error occurred during lookup (possibly during the equality check). If so, then why is the link being moved to the front of the lru_cache -- it should have remained at the oldest position. The solution to this is to only extract the link after a successful pop rather than before. The final case runs code when the pop succeeded in finding the oldest link. The popresult is decreffed but not checked to make sure that it actually is the oldest link. Afterwards, _PyDict_SetItem_KnownHash() is called with the new key. Unlike the pure python code, it does not check to see if the new key has already been added by another thread. This can result in an orphaned link (a link not referred to by the cache dict). I think that is why popresult code can ever get to a state where it can return Py_None (it means that the cache structure is in an inconsistent state). I think the fix is to make the code more closely follow the pure python code. Verify that the new key hasn't been added by another thread during the user function call. Don't delete the old link until it has been successfully popped. A Py_None return from the pop should be regarded as a sign the structure is in an inconsistent state. The self->full variable needs to be reset if there are any code paths that delete links but don't add them back. Better yet, the extraction of a link should be immediately followed by repopulating it with new values and moving it to the front of the cache. That way, the cache structure will always remain in a consistent state and the number of links will be constant from start to finish. The current code likely doesn't fail in any spectacular way.
Instead, it will occasionally have unreferenced orphan links, will occasionally be marked as full when it is short one or more links (and never regaining the lost links), will occasionally not put the result of the newest function call into the cache, and will occasionally mark the oldest link as being the newest even though there wasn't a user function call to the corresponding old key. Minor nit: The decrefs should be done at the bottom of each code path instead of the top. This makes it a lot easier to verify that we aren't making arbitrary reentrant callbacks until the cache data structures have been put into a consistent state. Minor nit: The test "self->root.next != &self->root" may no longer be necessary if the above issues are fixed. We can only get to this wrapper when maxsize > 0, so self->full being true implies that there is at least one link in the chain, so self->root.next cannot point back to itself. Possibly the need for this test exists only because the cache is getting into an inconsistent state where it is marked as full but there aren't any extant links. Minor nit: "lru_cache_extricate_link" should be named "lru_cache_extract_link". The word "extricate" applies only when solving an error case; whereas, "extract" applies equally well to normal cases and error cases. The latter word more closely means "remove an object from a data structure" which is what was likely intended. Another minor nit: The code in lru_cache_append_link() is written in a way where the compiler has to handle an impossible case where "link->prev->next = link->next" changes the value of "link->next". The suspicion of aliased pointers causes the compiler to generate an unnecessary and redundant memory fetch.
The solution again is to more closely follow the pure python code: diff --git a/Modules/_functoolsmodule.c b/Modules/_functoolsmodule.c index 0fb4847af9..8cbd79ceaf 100644 --- a/Modules/_functoolsmodule.c +++ b/Modules/_functoolsmodule.c @@ -837,8 +837,10 @@ infinite_lru_cache_wrapper(lru_cache_object *self, PyObject *args, PyObject *kwd static void lru_cache_extricate_link(lru_list_elem *link) { - link->prev->next = link->next; - link->next->prev = link->prev; + lru_list_elem *link_prev = link->prev; + lru_list_elem *link_next = link->next; + link_prev->next = link->next; + link_next->prev = link->prev; } Clang assembly before: movq 16(%rax), %rcx # link->prev movq 24(%rax), %rdx # link->next movq %rdx, 24(%rcx) # link->prev->next = link->next; movq 24(%rax), %rdx # duplicate fetch of link->next movq %rcx, 16(%rdx) # link->next->prev = link->prev; Clang assembly after: movq 16(%rax), %rcx movq 24(%rax), %rdx movq %rdx, 24(%rcx) movq %rcx, 16(%rdx) Open question: Is there any part of the code that relies on the cache key being a tuple? If not, would it be reasonable to emulate the pure python code and return a scalar instead of a tuple when the tuple length is one and there are no keyword arguments or typing requirements? In other words, does f(1) need to have a key of (1,) instead of just 1? It would be nice to save a little space (for the enclosing tuple) and get a little speed (hash the object directly instead of hashing a tuple with just one object).
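A toy pure-Python rendition of the discipline argued for in issue35780 above — re-check the key after the user call made outside the lock, and repopulate an extracted link immediately so the link count stays constant. This is a sketch modeled on functools' pure-Python lru_cache, not the actual C code:

```python
from threading import RLock

class TinyLRU:
    def __init__(self, func, maxsize=2):
        self.func, self.maxsize = func, maxsize
        self.cache = {}                      # key -> link
        self.root = root = []                # sentinel of a circular list
        root[:] = [root, root, None, None]   # [prev, next, key, value]
        self.lock = RLock()
        self.full = False

    def __call__(self, *args):
        key = args
        with self.lock:
            link = self.cache.get(key)
            if link is not None:             # hit: move link to the front
                prev, nxt = link[0], link[1]
                prev[1], nxt[0] = nxt, prev
                last = self.root[0]
                last[1] = self.root[0] = link
                link[0], link[1] = last, self.root
                return link[3]
        value = self.func(*args)             # user call, lock released
        with self.lock:
            if key in self.cache:
                return value                 # another thread filled it: no-op
            if self.full:
                # Reuse the oldest link in place, then rotate the sentinel;
                # the structure stays consistent at every step.
                oldroot = self.root
                oldroot[2], oldroot[3] = key, value
                self.root = oldroot[1]
                oldkey, self.root[2], self.root[3] = self.root[2], None, None
                del self.cache[oldkey]
                self.cache[key] = oldroot
            else:
                last = self.root[0]
                link = [last, self.root, key, value]
                last[1] = self.root[0] = self.cache[key] = link
                self.full = len(self.cache) >= self.maxsize
            return value

f = TinyLRU(lambda x: x * x, maxsize=2)
print(f(2), f(3), f(2), f(4))  # the f(4) miss evicts key (3,)
```

The orphaned-link and stale-full bugs described above cannot occur here because no code path removes a link without immediately reusing it or updating both the dict and the full flag together.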
---------- assignee: serhiy.storchaka components: Extension Modules messages: 334029 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Recheck logic in the C version of the lru_cache() type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 18 23:50:52 2019 From: report at bugs.python.org (yuji38kwmt) Date: Sat, 19 Jan 2019 04:50:52 +0000 Subject: [New-bugs-announce] [issue35781] `logger.warn` method is used in "Logging HOWTO" documentation though `logger.warn` method is deprecated in Python 3.7 Message-ID: <1547873452.27.0.124013164829.issue35781@roundup.psfhosted.org> New submission from yuji38kwmt : ### Target Documentation https://docs.python.org/3/howto/logging.html#configuring-logging ### Actual Sample Code

```
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warn('warn message')
logger.error('error message')
logger.critical('critical message')
```

### Expected Sample Code

```
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
```

### Reference > There is an obsolete method warn which is functionally identical to warning. As warn is deprecated, please do not use it - use warning instead.
https://docs.python.org/3.7/library/logging.html#logging.Logger.warning ---------- assignee: docs at python components: Documentation messages: 334030 nosy: docs at python, yuji38kwmt priority: normal severity: normal status: open title: `logger.warn` method is used in "Logging HOWTO" documentation though `logger.warn` method is deprecated in Python 3.7 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 19 00:53:32 2019 From: report at bugs.python.org (Louie Lu) Date: Sat, 19 Jan 2019 05:53:32 +0000 Subject: [New-bugs-announce] [issue35782] Missing whitespace after comma in randrange raise error Message-ID: <1547877212.85.0.682089452337.issue35782@roundup.psfhosted.org> New submission from Louie Lu : In random.py:randrange File "/usr/lib/python3.7/random.py", line 200, in randrange raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width)) ValueError: empty range for randrange() (3,3, 0) should be "empty range for randrange() (3, 3, 0)" ---------- components: Library (Lib) messages: 334033 nosy: louielu priority: normal severity: normal status: open title: Missing whitespace after comma in randrange raise error versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 19 03:40:17 2019 From: report at bugs.python.org (=?utf-8?b?0JzQsNC60YEg0JLQtdGA0L3QtdGA?=) Date: Sat, 19 Jan 2019 08:40:17 +0000 Subject: [New-bugs-announce] [issue35783] incorrect example of fetching messages in imaplib documentation Message-ID: <1547887217.4.0.578077466695.issue35783@roundup.psfhosted.org> New submission from ???? ?????? 
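The message is easy to reproduce (exact wording varies across versions; 3.7 is the one missing the space after the first comma):

```python
import random

# Trigger the "empty range" ValueError discussed above and show its text.
try:
    random.randrange(3, 3)
    msg = None
except ValueError as exc:
    msg = str(exc)
print(msg)
```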
: An example of fetching messages from the mailbox given in "IMAP4 Example" section is incorrect: typ, data = M.fetch(num, '(RFC822)') print('Message %s\n%s\n' % (num, data[0][1])) "fetch" may return server data that was not requested (see "7.4.2. FETCH Response" section of RFC 3501). In that case "data[0][1]" won't return what user expects. This is a bad example, that many people repeat and advise to other developers: https://stackoverflow.com/questions/13210737/get-only-new-emails-imaplib-and-python https://gist.github.com/robulouski/7441883 https://stackoverflow.com/questions/51098962/check-if-email-inbox-is-empty-imaplib-python3 https://stackoverflow.com/questions/21116498/imaplib-not-getting-all-emails-in-folder https://stackoverflow.com/questions/2230037/how-to-fetch-an-email-body-using-imaplib-in-python I guess, this peculiarity should be clarified in the documentation. I offer to mark this fetching method is not safe and requests careful fetch result parsing. ---------- assignee: docs at python components: Documentation, email messages: 334048 nosy: barry, docs at python, r.david.murray, ???? ?????? 
priority: normal severity: normal status: open title: incorrect example of fetching messages in imaplib documentation type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 19 04:51:52 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=B6rn_Heissler?=) Date: Sat, 19 Jan 2019 09:51:52 +0000 Subject: [New-bugs-announce] [issue35784] document that hashlib.new takes kwargs Message-ID: <1547891512.78.0.551625258156.issue35784@roundup.psfhosted.org> New submission from J?rn Heissler : This code works: hashlib.new('blake2b', b'foo', digest_size=7) https://github.com/python/cpython/blob/master/Lib/hashlib.py#L7 documents the function as: new(name, data=b'', **kwargs) But the **kwargs argument is missing in https://docs.python.org/3/library/hashlib.html#hashlib.new and there aren't any examples either. ---------- assignee: docs at python components: Documentation messages: 334053 nosy: docs at python, joernheissler priority: normal severity: normal status: open title: document that hashlib.new takes kwargs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 19 12:38:06 2019 From: report at bugs.python.org (Eric Fahlgren) Date: Sat, 19 Jan 2019 17:38:06 +0000 Subject: [New-bugs-announce] [issue35785] argparse crashes in gettext when processing missing arguments Message-ID: <1547919486.3.0.425644467841.issue35785@roundup.psfhosted.org> New submission from Eric Fahlgren : When argparse is configured with an option that takes arguments, then the script is invoked with the switch but no arguments, a nonsensical exception is raised during gettext processing. 
In the 3.7.1 source, the error is at line 2077 of argparse.py, where 'action.nargs' is not an integer as expected by 'ngettext', but one of None, '*' or '?': default = ngettext('expected %s argument', 'expected %s arguments', action.nargs) % action.nargs msg = nargs_errors.get(action.nargs, default) Fix should be pretty trivial, swap the two lines and if 'get' produces None, only then compute the default. File "C:\Program Files\Python37\lib\argparse.py", line 1749, in parse_args args, argv = self.parse_known_args(args, namespace) File "C:\Program Files\Python37\lib\argparse.py", line 1781, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "C:\Program Files\Python37\lib\argparse.py", line 1987, in _parse_known_args start_index = consume_optional(start_index) File "C:\Program Files\Python37\lib\argparse.py", line 1917, in consume_optional arg_count = match_argument(action, selected_patterns) File "C:\Program Files\Python37\lib\argparse.py", line 2079, in _match_argument action.nargs) % action.nargs File "C:\Program Files\Python37\lib\gettext.py", line 631, in ngettext return dngettext(_current_domain, msgid1, msgid2, n) File "C:\Program Files\Python37\lib\gettext.py", line 610, in dngettext return t.ngettext(msgid1, msgid2, n) File "C:\Program Files\Python37\lib\gettext.py", line 462, in ngettext tmsg = self._catalog[(msgid1, self.plural(n))] File "<string>", line 4, in func File "C:\Program Files\Python37\lib\gettext.py", line 168, in _as_int (n.__class__.__name__,)) from None TypeError: Plural value must be an integer, got NoneType ---------- components: Library (Lib) messages: 334065 nosy: eric.fahlgren priority: normal severity: normal status: open title: argparse crashes in gettext when processing missing arguments versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 19 16:50:32 2019 From: report at bugs.python.org (Lorenzo Persichetti)
Date: Sat, 19 Jan 2019 21:50:32 +0000 Subject: [New-bugs-announce] [issue35786] get_lock() method is not present for Values created using multiprocessing.Manager() Message-ID: <1547934632.93.0.223661167179.issue35786@roundup.psfhosted.org> New submission from Lorenzo Persichetti : According to the documentation of the multiprocessing.Value() class available here https://docs.python.org/3.6/library/multiprocessing.html#multiprocessing.Value Operations like += which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do counter.value += 1 Assuming the associated lock is recursive (which it is by default) you can instead do with counter.get_lock(): counter.value += 1 What happens is that when running the following snippet import multiprocessing manager = multiprocessing.Manager() value = manager.Value('i', 0) value.get_lock() the result is AttributeError: 'ValueProxy' object has no attribute 'get_lock' ---------- assignee: docs at python components: Documentation, Library (Lib), Windows messages: 334070 nosy: Lorenzo Persichetti, docs at python, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: get_lock() method is not present for Values created using multiprocessing.Manager() versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 05:23:15 2019 From: report at bugs.python.org (Max) Date: Sun, 20 Jan 2019 10:23:15 +0000 Subject: [New-bugs-announce] [issue35787] shlex.split inserts extra item on backslash space space Message-ID: <1547979795.17.0.450145238356.issue35787@roundup.psfhosted.org> New submission from Max : I believe in both cases below, the output should be ['a', 'b']; the extra ' ' inserted in the list is incorrect: python3.6 Python 3.6.2 (default, Aug 4 2017, 14:35:04) [GCC 6.3.0 20170516] on linux Type "help", "copyright",
"credits" or "license" for more information. >>> import shlex >>> shlex.split('a \ b') ['a', ' b'] >>> shlex.split('a \ b') ['a', ' ', 'b'] >>> Doc reference: https://docs.python.org/3/library/shlex.html#parsing-rules > Non-quoted escape characters (e.g. '\') preserve the literal value of the next character that follows; I believe this implies that backslash space should be just space; and then two adjacent spaces should be used (just like a single space) as a separator between arguments. ---------- components: Library (Lib) messages: 334081 nosy: max priority: normal severity: normal status: open title: shlex.split inserts extra item on backslash space space versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 06:14:33 2019 From: report at bugs.python.org (Samuel Colvin) Date: Sun, 20 Jan 2019 11:14:33 +0000 Subject: [New-bugs-announce] [issue35788] smtpd.PureProxy and smtpd.MailmanProxy broken by extra kwargs, bytes and more Message-ID: <1547982873.85.0.188578370535.issue35788@roundup.psfhosted.org> New submission from Samuel Colvin : smtpd.PureProxy.process_message and smtpd.MailmanProxy.process_message are defined to not receive the extra kwargs which they're called with. They both also expect "data" to be str when it's actually bytes. Thus they're completed broken at the moment. I'd like to submit a PR to fix these two bugs. There are a number of other issues/potential improvements to smtpd which are not critical but I guess should be fixed: * no support for starttls * use of print(..., file=DEBUGSTREAM) instead of logger.debug * no type hints * PureProxy's forwarding doesn't try starttls Should I create a new issue(s) for these problems or is there some agreement that only actual bugs will be fixed in little-used modules like this? 
---------- components: email messages: 334083 nosy: barry, r.david.murray, samuelcolvin priority: normal severity: normal status: open title: smtpd.PureProxy and smtpd.MailmanProxy broken by extra kwargs, bytes and more type: crash versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 10:12:31 2019 From: report at bugs.python.org (Jim Carroll) Date: Sun, 20 Jan 2019 15:12:31 +0000 Subject: [New-bugs-announce] [issue35789] Typo in unittest.mock docs Message-ID: <1547997151.52.0.475948555823.issue35789@roundup.psfhosted.org> New submission from Jim Carroll : There is a typo in the unittest.mock documentation found at https://docs.python.org/3/library/unittest.mock.html. There are seven(7) instances of the word assret, where the author clearly intended assert. ---------- assignee: docs at python components: Documentation messages: 334087 nosy: docs at python, jamercee priority: normal severity: normal status: open title: Typo in unittest.mock docs type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 11:39:44 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sun, 20 Jan 2019 16:39:44 +0000 Subject: [New-bugs-announce] [issue35790] Correct a statement about sys.exc_info() values restoration Message-ID: <1548002384.27.0.286179066843.issue35790@roundup.psfhosted.org> New submission from Géry : In the documentation of the try statement (https://docs.python.org/3/reference/compound_stmts.html#the-try-statement), I think that the sentence: "sys.exc_info() values are restored to their previous values (before the call) when returning from a function that handled an exception."
should be replaced by this sentence: "sys.exc_info() values are restored to their previous values (before the call) when leaving an exception handler." as proven by this code which does not use any "function that handled an exception" and yet restores sys.exc_info() values: >>> try: ... raise ValueError ... except: ... try: ... raise TypeError ... except: ... print(sys.exc_info()) ... print(sys.exc_info()) ... (<class 'TypeError'>, TypeError(), <traceback object at 0x...>) (<class 'ValueError'>, ValueError(), <traceback object at 0x...>) ---------- assignee: docs at python components: Documentation messages: 334092 nosy: docs at python, eric.araujo, ezio.melotti, maggyero, mdk, willingc priority: normal severity: normal status: open title: Correct a statement about sys.exc_info() values restoration versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 12:38:23 2019 From: report at bugs.python.org (Ronald Oussoren) Date: Sun, 20 Jan 2019 17:38:23 +0000 Subject: [New-bugs-announce] [issue35791] Unexpected exception with importlib Message-ID: <1548005903.93.0.648072616722.issue35791@roundup.psfhosted.org> New submission from Ronald Oussoren : Using Python 3.7.2 on macOS 10.14 I get an unexpected exception when calling "importlib.util.find_spec('py')" after importing "py". find_spec works as expected before I import 'py'. See the repl session below: Python 3.7.2 (v3.7.2:9a3ffc0492, Dec 24 2018, 02:44:43) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.util >>> importlib.util.find_spec("py") ModuleSpec(name='py', loader=<_frozen_importlib_external.SourceFileLoader object at 0x10a953fd0>, origin='/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/py/__init__.py', submodule_search_locations=['/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/py']) >>> import py >>> importlib.util.find_spec("py") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/util.py", line 111, in find_spec raise ValueError('{}.__spec__ is not set'.format(name)) from None ValueError: py.__spec__ is not set This is with py version 1.7.0 installed (pip install py) ---------- components: Library (Lib) messages: 334094 nosy: brett.cannon, eric.snow, ncoghlan, ronaldoussoren priority: normal severity: normal status: open title: Unexpected exception with importlib type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 20 21:59:18 2019 From: report at bugs.python.org (Christopher Hunt) Date: Mon, 21 Jan 2019 02:59:18 +0000 Subject: [New-bugs-announce] [issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent Message-ID: <1548039558.68.0.986540752255.issue35792@roundup.psfhosted.org> New submission from Christopher Hunt : Currently AbstractEventLoop.run_in_executor is specified as a coroutine, while BaseEventLoop.run_in_executor is actually a non-coroutine that returns a Future object. The behavior of BaseEventLoop.run_in_executor would be significantly different if changed to align with the interface.
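The non-coroutine behavior described here is easy to observe; a minimal sketch using the 3.7+ API:

```python
# Sketch showing that loop.run_in_executor() is a plain method returning
# an asyncio.Future, not a coroutine: the call itself yields a Future
# immediately, which can then be awaited.
import asyncio

def blocking_work():
    return 42  # stands in for a CPU- or IO-bound callable

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, blocking_work)  # default executor
    assert asyncio.isfuture(fut)        # a Future, not a coroutine object
    assert not asyncio.iscoroutine(fut)
    return await fut

result = asyncio.run(main())
print(result)  # 42
```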
If run_in_executor is a coroutine then the provided func will not actually be scheduled until the coroutine is awaited, which conflicts with the statement in PEP 3156 that it "is equivalent to `wrap_future(executor.submit(callback, *args))`". There has already been an attempt in bpo-32327 to convert this function to a coroutine. We should change the interface specified in `AbstractEventLoop` to indicate that `run_in_executor` is not a coroutine, which should help ensure it does not get changed in the future without full consideration of the impacts. ---------- components: asyncio messages: 334109 nosy: asvetlov, chrahunt, yselivanov priority: normal severity: normal status: open title: Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 00:49:50 2019 From: report at bugs.python.org (Mingun Pak) Date: Mon, 21 Jan 2019 05:49:50 +0000 Subject: [New-bugs-announce] [issue35793] round() doesn't return the right value when I put 0.5 in it. Message-ID: <1548049790.29.0.46389192028.issue35793@roundup.psfhosted.org> New submission from Mingun Pak : It should be 1, but it returns 0. ---------- components: Library (Lib) files: Screen Shot 2019-01-20 at 9.40.52 PM.png messages: 334112 nosy: Mingun Pak priority: normal severity: normal status: open title: round() doesn't return the right value when I put 0.5 in it. 
versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48069/Screen Shot 2019-01-20 at 9.40.52 PM.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 05:53:44 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Mon, 21 Jan 2019 10:53:44 +0000 Subject: [New-bugs-announce] [issue35794] test_posix.py test failure Message-ID: <1548068024.55.0.742068832525.issue35794@roundup.psfhosted.org> New submission from Jeroen Demeyer : This test was recently added (PR 6332): def test_no_such_executable(self): no_such_executable = 'no_such_executable' try: pid = posix.posix_spawn(no_such_executable, [no_such_executable], os.environ) except FileNotFoundError as exc: self.assertEqual(exc.filename, no_such_executable) On my system, it fails with PermissionError: [Errno 13] Permission denied: 'no_such_executable' ---------- messages: 334123 nosy: jdemeyer priority: normal severity: normal status: open title: test_posix.py test failure _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 05:54:15 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 21 Jan 2019 10:54:15 +0000 Subject: [New-bugs-announce] [issue35795] test_pkgutil test_zipapp fail in AMD64 Windows7 SP1 3.x and AMD64 Windows7 SP1 3.7 buildbots Message-ID: <1548068055.8.0.250041418477.issue35795@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : test_pkgutil test_zipapp fail in AMD64 Windows7 SP1 3.x and AMD64 Windows7 SP1 3.7 buildbots: https://buildbot.python.org/all/#/builders/40/builds/1525 https://buildbot.python.org/all/#/builders/130/builds/636 ====================================================================== ERROR: test_create_archive_filter_exclude_dir (test.test_zipapp.ZipAppTest) 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\tempfile.py", line 806, in cleanup _shutil.rmtree(self.name) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 681, in rmtree return _rmtree_unsafe(path, onerror) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 569, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 567, in _rmtree_unsafe os.rmdir(path) OSError: [WinError 145] The directory is not empty: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmpe1ubc15t' ====================================================================== ERROR: test_create_archive_with_compression (test.test_zipapp.ZipAppTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\tempfile.py", line 806, in cleanup _shutil.rmtree(self.name) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 681, in rmtree return _rmtree_unsafe(path, onerror) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 569, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 567, in _rmtree_unsafe os.rmdir(path) OSError: [WinError 145] The directory is not empty: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmp5wemewtr' ====================================================================== ERROR: test_main_only_written_once (test.test_zipapp.ZipAppTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\tempfile.py", line 806, in cleanup _shutil.rmtree(self.name) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 681, in rmtree return 
_rmtree_unsafe(path, onerror) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 569, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 567, in _rmtree_unsafe os.rmdir(path) OSError: [WinError 145] The directory is not empty: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmp3am6ham9' ====================================================================== ERROR: test_main_written (test.test_zipapp.ZipAppTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\tempfile.py", line 806, in cleanup _shutil.rmtree(self.name) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 681, in rmtree return _rmtree_unsafe(path, onerror) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 569, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 567, in _rmtree_unsafe os.rmdir(path) OSError: [WinError 145] The directory is not empty: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmpyg_dqrb3' ---------------------------------------------------------------------- I have rebuilt older commits that succeeded and they now fail, which points to a failure in the builder itself.
These builders are on the same machine: kloth-win64 ---------- components: Tests messages: 334124 nosy: pablogsal priority: normal severity: normal status: open title: test_pkgutil test_zipapp fail in AMD64 Windows7 SP1 3.x and AMD64 Windows7 SP1 3.7 buildbots versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 06:32:19 2019 From: report at bugs.python.org (Mba) Date: Mon, 21 Jan 2019 11:32:19 +0000 Subject: [New-bugs-announce] [issue35796] time.localtime returns error for negative values Message-ID: <1548070339.94.0.0791021620364.issue35796@roundup.psfhosted.org> New submission from Mba : Steps to reproduce the bug: ``` >>> import sys >>> sys.version '3.6.7 (v3.6.7:6ec5cf24b7, Oct 20 2018, 13:35:33) [MSC v.1900 64 bit (AMD64)]' >>> import datetime >>> print(datetime.datetime.now().astimezone().tzinfo) datetime.timezone(datetime.timedelta(0, 3600), 'Central European Standard Time') >>> import time >>> time.localtime(0) time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=1, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=1, tm_isdst=0) >>> time.localtime(-1) Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: [Errno 22] Invalid argument ``` On Ubuntu it works fine: ``` >>> time.localtime(-1) time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=59, tm_sec=59, tm_wday=3, tm_yday=1, tm_isdst=0) ``` ---------- components: Windows messages: 334132 nosy: mba, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: time.localtime returns error for negative values type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 10:10:41 2019 From: report at bugs.python.org (Christian Ullrich) Date: Mon, 21 Jan 2019 15:10:41 +0000 Subject: [New-bugs-announce] [issue35797]
concurrent.futures.ProcessPoolExecutor does not work in venv on Windows Message-ID: <1548083441.54.0.942305079713.issue35797@roundup.psfhosted.org> New submission from Christian Ullrich : Using concurrent.futures.ProcessPoolExecutor on Windows fails immediately with a lot of exceptions of the "access denied", "file not found", and "invalid handle" varieties. Running the script that creates the ProcessPoolExecutor from the main system-wide installation works correctly. Due to Windows' infamous lack of fork(), ProcessPoolExecutor launches its worker processes by setting up an inheritable handle to a pipe and passing the handle on the command line. In a venv situation, it appears that the venv's python.exe internally launches the parent environment's python.exe and passes on its command line, but not its handle table. This sub-subprocess therefore does not have the original handle, and may have a different handle at the same index. Output of the ProcessPoolExecutor example program from the docs when run with the main installation: C:\Daten>py cft.py 112272535095293 is prime: True 112582705942171 is prime: True 112272535095293 is prime: True 115280095190773 is prime: True 115797848077099 is prime: True 1099726899285419 is prime: False Output when run using a venv: C:\Daten>pyv\v37\Scripts\python.exe cft.py Process SpawnProcess-4: Traceback (most recent call last): File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap self.run() File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 99, in run self._target(*self._args, **self._kwargs) File "C:\Program Files\Python37\lib\concurrent\futures\process.py", line 226, in _process_worker call_item = call_queue.get(block=True) File "C:\Program Files\Python37\lib\multiprocessing\queues.py", line 93, in get with self._rlock: File "C:\Program Files\Python37\lib\multiprocessing\synchronize.py", line 95, in __enter__ return self._semlock.__enter__() PermissionError: [WinError 5] 
Access is denied Process SpawnProcess-5: Traceback (most recent call last): File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap self.run() File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 99, in run self._target(*self._args, **self._kwargs) File "C:\Program Files\Python37\lib\concurrent\futures\process.py", line 226, in _process_worker call_item = call_queue.get(block=True) File "C:\Program Files\Python37\lib\multiprocessing\queues.py", line 93, in get with self._rlock: File "C:\Program Files\Python37\lib\multiprocessing\synchronize.py", line 95, in __enter__ return self._semlock.__enter__() PermissionError: [WinError 5] Access is denied Traceback (most recent call last): File "cft.py", line 28, in main() File "cft.py", line 24, in main for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)): File "C:\Program Files\Python37\lib\concurrent\futures\process.py", line 476, in _chain_from_iterable_of_lists for element in iterable: File "C:\Program Files\Python37\lib\concurrent\futures\_base.py", line 586, in result_iterator yield fs.pop().result() File "C:\Program Files\Python37\lib\concurrent\futures\_base.py", line 432, in result return self.__get_result() File "C:\Program Files\Python37\lib\concurrent\futures\_base.py", line 384, in __get_result raise self._exception concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. 
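For reference, the cft.py being run above is essentially the ProcessPoolExecutor example from the concurrent.futures docs, trimmed here to three primes:

```python
# Trimmed version of the concurrent.futures docs example referred to as
# cft.py above. The __main__ guard is required on Windows, where worker
# processes are started with the 'spawn' method and re-import this module.
import concurrent.futures
import math

PRIMES = [112272535095293, 112582705942171, 1099726899285419]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
```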
---------- assignee: docs at python components: Documentation, Library (Lib), Windows messages: 334142 nosy: chrullrich, docs at python, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: concurrent.futures.ProcessPoolExecutor does not work in venv on Windows type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 10:13:45 2019 From: report at bugs.python.org (Jakub Wilk) Date: Mon, 21 Jan 2019 15:13:45 +0000 Subject: [New-bugs-announce] [issue35798] duplicate SyntaxWarning: "is" with a literal Message-ID: <1548083625.41.0.59098075844.issue35798@roundup.psfhosted.org> New submission from Jakub Wilk : $ python3.8 -c 'if object() is 42: pass' <string>:1: SyntaxWarning: "is" with a literal. Did you mean "=="? <string>:1: SyntaxWarning: "is" with a literal. Did you mean "=="? I'd like only one copy of this warning, not two. Tested with git master (e9b185f2a493cc54f0d49eac44bf21e8d7de2990). ---------- components: Interpreter Core messages: 334143 nosy: jwilk, serhiy.storchaka priority: normal severity: normal status: open title: duplicate SyntaxWarning: "is" with a literal type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 12:25:15 2019 From: report at bugs.python.org (Samuel Colvin) Date: Mon, 21 Jan 2019 17:25:15 +0000 Subject: [New-bugs-announce] [issue35799] fix or remove smtpd.PureProxy Message-ID: <1548091515.27.0.738178577175.issue35799@roundup.psfhosted.org> New submission from Samuel Colvin : smtpd.PureProxy.process_message is defined to not receive the extra kwargs which it is called with. It also expects "data" to be str when it's actually bytes. PureProxy should either be removed or fixed. Personally, I think it should be fixed as the fix is pretty simple and PureProxy can be very useful.
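The shape of the proposed fix can be illustrated without smtpd itself (the module was later deprecated by PEP 594 and removed in Python 3.12); a hypothetical sketch:

```python
# Hypothetical sketch of the fix discussed above; FixedProxy merely
# stands in for smtpd.PureProxy, which is not imported here. The two
# changes: accept the extra keyword arguments (mail_options=,
# rcpt_options=) that SMTPChannel passes, and cope with `data` arriving
# as bytes (which it does unless decode_data=True was used).
class FixedProxy:
    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        if isinstance(data, bytes):
            data = data.decode('utf-8', errors='replace')
        lines = data.splitlines()
        return lines[0] if lines else ''

proxy = FixedProxy()
subject_line = proxy.process_message(
    ('127.0.0.1', 2525), 'from@example.com', ['to@example.com'],
    b'Subject: hello\r\n\r\nbody', mail_options=[], rcpt_options=[])
print(subject_line)  # Subject: hello
```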
Created from https://bugs.python.org/issue35788 Happy to create a PR if this is agreed. ---------- components: email messages: 334156 nosy: barry, r.david.murray, samuelcolvin priority: normal severity: normal status: open title: fix or remove smtpd.PureProxy type: crash versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 12:27:10 2019 From: report at bugs.python.org (Samuel Colvin) Date: Mon, 21 Jan 2019 17:27:10 +0000 Subject: [New-bugs-announce] [issue35800] remove smtpd.MailmanProxy Message-ID: <1548091630.84.0.140233087609.issue35800@roundup.psfhosted.org> New submission from Samuel Colvin : smtpd.MailmanProxy is completely broken, it takes the wrong arguments but also assumes the existence of a "Mailman" module which doesn't exist. It should be removed in 3.8 or 3.9. Created from https://bugs.python.org/issue35788 Happy to create a PR if this is agreed. ---------- components: email messages: 334157 nosy: barry, r.david.murray, samuelcolvin priority: normal severity: normal status: open title: remove smtpd.MailmanProxy versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 13:48:44 2019 From: report at bugs.python.org (Paul Watson) Date: Mon, 21 Jan 2019 18:48:44 +0000 Subject: [New-bugs-announce] [issue35801] venv in 3.7 references python3 executable Message-ID: <1548096524.26.0.673516861772.issue35801@roundup.psfhosted.org> New submission from Paul Watson : The documentation for venv in Python 3.7 references using `python3` to run venv. I do not find a `python3` executable in the kit. 
https://docs.python.org/3.7/library/venv.html#module-venv ---------- assignee: docs at python components: Documentation messages: 334165 nosy: Paul Watson, docs at python priority: normal severity: normal status: open title: venv in 3.7 references python3 executable type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 16:42:34 2019 From: report at bugs.python.org (Anthony Sottile) Date: Mon, 21 Jan 2019 21:42:34 +0000 Subject: [New-bugs-announce] [issue35802] os.stat / os.lstat always present, but code checks hastattr(os, 'stat') / hasattr(os, 'lstat') Message-ID: <1548106954.23.0.477810476155.issue35802@roundup.psfhosted.org> New submission from Anthony Sottile : Unless I'm reading incorrectly: https://github.com/python/cpython/blob/7a2368063f25746d4008a74aca0dc0b82f86ff7b/Modules/clinic/posixmodule.c.h#L30-L31 https://github.com/python/cpython/blob/7a2368063f25746d4008a74aca0dc0b82f86ff7b/Modules/clinic/posixmodule.c.h#L68-L69 ---------- components: Library (Lib) messages: 334182 nosy: Anthony Sottile priority: normal severity: normal status: open title: os.stat / os.lstat always present, but code checks hastattr(os, 'stat') / hasattr(os, 'lstat') versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 21 17:07:02 2019 From: report at bugs.python.org (Anthony Sottile) Date: Mon, 21 Jan 2019 22:07:02 +0000 Subject: [New-bugs-announce] [issue35803] Test and document that `dir=...` in tempfile may be PathLike Message-ID: <1548108422.42.0.0789815194403.issue35803@roundup.psfhosted.org> New submission from Anthony Sottile : This appears to be true in 3.6+ -- I'd like to add a test and documentation ensuring that going forward. 
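The behavior being documented can be checked directly with a small sketch:

```python
# Small check that tempfile accepts a path-like object for dir=
# (true on 3.6+, which is what the report wants tested and documented).
import pathlib
import tempfile

base = pathlib.Path(tempfile.mkdtemp())          # a Path, not a str
with tempfile.NamedTemporaryFile(dir=base) as tmp:
    created_inside = pathlib.Path(tmp.name).parent == base
base.rmdir()                                     # clean up the empty dir
print(created_inside)  # True
```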
Related: https://github.com/python/typeshed/issues/2749 ---------- assignee: docs at python components: Documentation, Tests messages: 334188 nosy: Anthony Sottile, docs at python priority: normal severity: normal status: open title: Test and document that `dir=...` in tempfile may be PathLike type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 22 04:48:08 2019 From: report at bugs.python.org (Anselm Kruis) Date: Tue, 22 Jan 2019 09:48:08 +0000 Subject: [New-bugs-announce] [issue35804] v3.6.8 _ctypes win32 compiled with pgo crash Message-ID: <1548150488.57.0.992942838687.issue35804@roundup.psfhosted.org> New submission from Anselm Kruis : During the QA for Stackless 3.6.8 I observed a crash in _ctypes compiled for win32 with PGO, that also exists with plain C-Python v3.6.8. I didn't check other versions yet. OS: Win7 (64bit) Compiler: Visual Studio 2017 professional 15.9.5 How to reproduce 1. Checkout v3.6.8 I'm using the git-bash $ git status HEAD detached at v3.6.8 nothing to commit, working tree clean 2. Clean the sandbox and compile. It is sufficient to use a short pgo-job, but the default pgo-job works a well (but takes much more time). $ cd PCbuild/ $ rm -rf obj win32 amd64 $ PYTHON=/e/Pythons/3.6.4-64/python.exe cmd //c build.bat --pgo-job "-m test --pgo test_ctypes" 2>&1 | tee build.log The file "build.log" is attached. Nothing conspicuous in it. 3. Run the test case ctypes.test.test_win32.WindowsTestCase.test_callconv_1 Sometimes the test passes, sometimes it fails with a Segmentation Fault. $ win32/python.exe -X faulthandler -m ctypes.test.test_win32 WindowsTestCase.test_callconv_1 . 
---------------------------------------------------------------------- Ran 1 test in 0.000s OK $ win32/python.exe -X faulthandler -m ctypes.test.test_win32 WindowsTestCase.test_callconv_1 Windows fatal exception: access violation Current thread 0x00001574 (most recent call first): File "C:\build\python36\lib\unittest\case.py", line 178 in handle File "C:\build\python36\lib\unittest\case.py", line 733 in assertRaises File "C:\build\python36\lib\ctypes\test\test_win32.py", line 20 in test_callconv_1 File "C:\build\python36\lib\unittest\case.py", line 605 in run File "C:\build\python36\lib\unittest\case.py", line 653 in __call__ File "C:\build\python36\lib\unittest\suite.py", line 122 in run File "C:\build\python36\lib\unittest\suite.py", line 84 in __call__ File "C:\build\python36\lib\unittest\suite.py", line 122 in run File "C:\build\python36\lib\unittest\suite.py", line 84 in __call__ File "C:\build\python36\lib\unittest\runner.py", line 176 in run File "C:\build\python36\lib\unittest\main.py", line 256 in runTests File "C:\build\python36\lib\unittest\main.py", line 95 in __init__ File "C:\build\python36\lib\ctypes\test\test_win32.py", line 165 in File "C:\build\python36\lib\runpy.py", line 85 in _run_code File "C:\build\python36\lib\runpy.py", line 193 in _run_module_as_main Segmentation fault 4. I observed another variant of the crash, if I run all tests in test_ctypes $ cmd //c rt.bat -q -v test_ctypes 2>&1 | tee test_ctypes.log The file "test_ctypes.log" is attached. Relevant content: test_callconv_1 (ctypes.test.test_win32.WindowsTestCase) ... XXX lineno: 124, opcode: 0 ERROR test_callconv_2 (ctypes.test.test_win32.WindowsTestCase) ... XXX lineno: 124, opcode: 0 ERROR test_variant_bool (ctypes.test.test_wintypes.WinTypesTest) ... 
test test_ctypes failed ok ====================================================================== ERROR: test_callconv_1 (ctypes.test.test_win32.WindowsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\build\python36\lib\ctypes\test\test_win32.py", line 27, in test_callconv_1 self.assertRaises(ValueError, IsWindow, 0, 0, 0) File "C:\build\python36\lib\unittest\case.py", line 733, in assertRaises return context.handle('assertRaises', args, kwargs) File "C:\build\python36\lib\unittest\case.py", line 157, in handle if not _is_subtype(self.expected, self._base_type): File "C:\build\python36\lib\unittest\case.py", line 124, in _is_subtype if isinstance(expected, tuple): SystemError: unknown opcode ====================================================================== ERROR: test_callconv_2 (ctypes.test.test_win32.WindowsTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\build\python36\lib\ctypes\test\test_win32.py", line 36, in test_callconv_2 self.assertRaises(ValueError, IsWindow, None) File "C:\build\python36\lib\unittest\case.py", line 733, in assertRaises return context.handle('assertRaises', args, kwargs) File "C:\build\python36\lib\unittest\case.py", line 157, in handle if not _is_subtype(self.expected, self._base_type): File "C:\build\python36\lib\unittest\case.py", line 124, in _is_subtype if isinstance(expected, tuple): SystemError: unknown opcode ---------------------------------------------------------------------- I had a quick look at _call_function_pointer() in Modules/_ctypes/callproc.c, but I didn't see anything obvious. A very speculative first guess is the calling convention of ffi_call() or a related function written in (inline) assembly. Work around: compile _ctypes for win32 without PGO. 
---------- components: ctypes files: build.log messages: 334202 nosy: anselm.kruis priority: normal severity: normal status: open title: v3.6.8 _ctypes win32 compiled with pgo crash type: crash versions: Python 3.6 Added file: https://bugs.python.org/file48071/build.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 22 07:56:02 2019 From: report at bugs.python.org (Martijn Pieters) Date: Tue, 22 Jan 2019 12:56:02 +0000 Subject: [New-bugs-announce] [issue35805] email package folds msg-id identifiers using RFC2047 encoded words where it must not Message-ID: <1548161762.46.0.975813554813.issue35805@roundup.psfhosted.org> New submission from Martijn Pieters : When encountering identifier headers such as Message-ID containing a msg-id token longer than 77 characters (including the <...> angle brackets), the email package folds that header using RFC 2047 encoded words, e.g.

    Message-ID: <154810422972.4.16142961424846318784 at aaf39fce-569e-473a-9453-6862595bd8da.prvt.dyno.rt.heroku.com>

becomes

    Message-ID: =?utf-8?q?=3C154810422972=2E4=2E16142961424846318784=40aaf39fce-?=
     =?utf-8?q?569e-473a-9453-6862595bd8da=2Eprvt=2Edyno=2Ert=2Eheroku=2Ecom=3E?=

The msg-id token here is this long because Heroku Dyno machines use a UUID in the FQDN, but Heroku is hardly the only source of such long msg-id tokens. Microsoft's Outlook.com / Office365 email servers balk at the RFC2047 encoded word use here and attempt to wrap the email in a TNEF winmail.dat attachment, then may fail at this under some conditions that I haven't quite worked out yet and deliver an error message to the recipient with the helpful message "554 5.6.0 Corrupt message content", or just deliver the ever unhelpful winmail.dat attachment to the unsuspecting recipient (I'm only noting these symptoms here for future searches).
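The folding described above can be observed directly; the following is a minimal sketch (the long msg-id below is modeled on the example, and the exact serialized form depends on the Python version — affected versions emit RFC2047 encoded words, fixed versions keep the msg-id intact on one long line):

```python
# Serialize a message whose Message-ID exceeds the default fold width
# and inspect how the default policy renders the header.
from email.message import EmailMessage

msg = EmailMessage()
msg['Message-ID'] = ('<154810422972.4.16142961424846318784@'
                     'aaf39fce-569e-473a-9453-6862595bd8da.prvt.dyno.rt.heroku.com>')
serialized = msg.as_string()
print(serialized)
```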
I encountered this issue with long Message-ID values generated by email.utils.make_msgid(), but this applies to all RFC 5322 section 3.6.4 Identification Fields headers, as well as the corresponding headers from RFC 822 section 4.6 (covered by section 4.5.4 in 5322). What is happening here is that the email._header_value_parser module has no handling for msg-id tokens *at all*, and email.headerregistry has no dedicated header class for identifier headers. So these headers are parsed as unstructured, and folded at will. RFC2047 section 5 on the other hand states that the msg-id token is strictly off-limits, and no RFC2047 encoding should be used to encode such elements. Because headers *can* exceed 78 characters (RFC 5322 section 2.1.1 states that "Each line of characters MUST be no more than 998 characters, and SHOULD be no more than 78 characters[.]") I think that RFC5322 msg-id tokens should simply not be folded, at all. The obsoleted RFC822 syntax for msg-id makes them equal to the addr-spec token, where the local-part (before the @) contains word tokens; those would be fair game, but then at least apply the RFC2047 encoded word replacement only to those word tokens. For now, I worked around the issue by using a custom policy that uses 998 as the maximum line length for identifier headers:

    from email.policy import EmailPolicy

    # Headers that contain msg-id values, RFC5322
    MSG_ID_HEADERS = {'message-id', 'in-reply-to', 'references', 'resent-msg-id'}

    class MsgIdExemptPolicy(EmailPolicy):
        def _fold(self, name, value, *args, **kwargs):
            if name.lower() in MSG_ID_HEADERS and self.max_line_length - len(name) - 2 < len(value):
                # RFC 5322, section 2.1.1: "Each line of characters MUST be no
                # more than 998 characters, and SHOULD be no more than 78
                # characters, excluding the CRLF.". To avoid msg-id tokens from
                # being folded by means of RFC2047, fold identifier lines to
                # the max length instead.
                return self.clone(max_line_length=998)._fold(name, value, *args, **kwargs)
            return super()._fold(name, value, *args, **kwargs)

This ignores the fact that In-Reply-To and References contain foldable whitespace in between each msg-id, but it at least lets us send email through smtp.office365.com again without confusing recipients. ---------- components: email messages: 334210 nosy: barry, mjpieters, r.david.murray priority: normal severity: normal status: open title: email package folds msg-id identifiers using RFC2047 encoded words where it must not versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 22 12:16:42 2019 From: report at bugs.python.org (Eric Snow) Date: Tue, 22 Jan 2019 17:16:42 +0000 Subject: [New-bugs-announce] [issue35806] typing module adds objects to sys.modules that don't look like modules Message-ID: <1548177402.52.0.471302677604.issue35806@roundup.psfhosted.org> New submission from Eric Snow : tl;dr Should all objects in sys.modules look like module objects? In #35791 Ronald described a problem with a "module" added to sys.modules that does not have all the attributes a module should have. He also mentioned a similar problem with typing.io [1]: BTW. Typing.io is a namespace added to sys.modules by the typing module that also does not have __spec__, and causes similar problems. I have a simple workaround for that on my side.
I've verified the missing module attributes (using 3.8):

    >>> old = sorted(sys.modules)
    >>> import typing
    >>> new = sorted(sys.modules)
    >>> assert sorted(set(old) - set(new)) == []
    >>> sorted(set(new) - set(old))
    ['_collections', '_functools', '_heapq', '_locale', '_operator', '_sre', 'collections', 'collections.abc', 'contextlib', 'copyreg', 'enum', 'functools', 'heapq', 'itertools', 'keyword', 'operator', 're', 'reprlib', 'sre_compile', 'sre_constants', 'sre_parse', 'types', 'typing', 'typing.io', 'typing.re']
    >>> [name for name in vars(sys.modules['typing.io']) if name.startswith('__')]
    ['__module__', '__doc__', '__all__', '__dict__', '__weakref__']
    >>> [name for name in vars(sys.modules['typing.re']) if name.startswith('__')]
    ['__module__', '__doc__', '__all__', '__dict__', '__weakref__']

Per the language reference [2], modules should have the following attributes: __name__, __loader__, __package__, __spec__. Modules imported from files also should have __file__ and __cached__. (For the sake of completeness, packages also should have a __path__ attribute.) As seen above, typing.io and typing.re don't have any of the import-related attributes. So, should those two "modules" have all those attributes added? I'm in favor of saying that every sys.modules entry must have all the appropriate import-related attributes (but doesn't have to be an actual module object). Otherwise tools (e.g. importlib.reload(), Ronald's) making that (arguably valid) assumption break. The right place for the change in the language reference is probably the "module cache" section. [3] The actual entry for sys.modules [4] is probably fine as-is.
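The verification above generalizes to a small audit helper; the sketch below (the function name is my own, and the typing.io / typing.re pseudo-modules only exist on affected versions) reports every sys.modules entry missing one of the attributes listed in the language reference:

```python
# Flag sys.modules entries that lack any of the import-related
# attributes modules are expected to carry.
import sys

REQUIRED = ('__name__', '__loader__', '__package__', '__spec__')

def incomplete_entries():
    # Snapshot the dict first: hasattr() checks must not race with imports.
    return {name: [a for a in REQUIRED if not hasattr(mod, a)]
            for name, mod in list(sys.modules.items())
            if any(not hasattr(mod, a) for a in REQUIRED)}

missing = incomplete_entries()
print(missing)  # empty on versions where every entry is a proper module
```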
[1] https://bugs.python.org/issue35791#msg334212 [2] https://docs.python.org/3/reference/import.html#module-spec [3] https://docs.python.org/3/reference/import.html#the-module-cache [4] https://docs.python.org/3/library/sys.html#sys.modules ---------- components: Library (Lib) messages: 334222 nosy: barry, brett.cannon, eric.snow, gvanrossum, ncoghlan, ronaldoussoren priority: normal severity: normal stage: test needed status: open title: typing module adds objects to sys.modules that don't look like modules type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 22 14:37:03 2019 From: report at bugs.python.org (Pradyun Gedam) Date: Tue, 22 Jan 2019 19:37:03 +0000 Subject: [New-bugs-announce] [issue35807] Update bundled pip to 19.0 Message-ID: <1548185823.13.0.177750416451.issue35807@roundup.psfhosted.org> New submission from Pradyun Gedam : In line with https://bugs.python.org/issue35277. Will also update setuptools while I do this. (if no one else gets to it, I'll file a PR tomorrow morning) ---------- components: Library (Lib) messages: 334230 nosy: dstufft, ncoghlan, paul.moore, pradyunsg priority: normal severity: normal status: open title: Update bundled pip to 19.0 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 00:32:17 2019 From: report at bugs.python.org (Guido van Rossum) Date: Wed, 23 Jan 2019 05:32:17 +0000 Subject: [New-bugs-announce] [issue35808] Let's retire pgen Message-ID: <1548221537.82.0.492786218381.issue35808@roundup.psfhosted.org> New submission from Guido van Rossum : Pgen is literally the oldest piece of technology in the CPython repo -- it was the first thing I wrote for Python over 29 years ago. 
It's not aged well, and building it requires various #if[n]def PGEN hacks in other parts of the code; it also depends more and more on CPython internals. There already is a replacement written in pure Python (Lib/lib2to3/pgen/), it just needs some glue to actually generate the graminit.[ch] files. Note that several other essential generation steps (everything listed for regen-all except regen-importlib and clinic) already depend on having a working Python interpreter around, so let's not worry about the bootstrapping process. ---------- components: Build messages: 334247 nosy: gvanrossum priority: low severity: normal stage: needs patch status: open title: Let's retire pgen versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 08:48:21 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Wed, 23 Jan 2019 13:48:21 +0000 Subject: [New-bugs-announce] [issue35809] test_concurrent_futures.ProcessPoolForkExecutorDeadlockTest fails intermittently on Travis and passes in verbose mode Message-ID: <1548251301.46.0.302427080079.issue35809@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I can see this test failing intermittently many times on Travis during the first run and to pass later during a verbose run hence the failure is not visible. I don't know the exact cause and haven't checked the buildbots. Search also didn't bring up anything so I thought to file a new issue for this. 
Stack trace : ====================================================================== FAIL: test_crash (test.test_concurrent_futures.ProcessPoolForkExecutorDeadlockTest) [crash at task unpickle] ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 958, in test_crash res.result(timeout=self.TIMEOUT) File "/home/travis/build/python/cpython/Lib/concurrent/futures/_base.py", line 438, in result raise TimeoutError() concurrent.futures._base.TimeoutError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 962, in test_crash self._fail_on_deadlock(executor) File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 910, in _fail_on_deadlock self.fail(f"Executor deadlock:\n\n{tb}") AssertionError: Executor deadlock: Thread 0x00002b5105ca3700 (most recent call first): File "/home/travis/build/python/cpython/Lib/threading.py", line 296 in wait File "/home/travis/build/python/cpython/Lib/multiprocessing/queues.py", line 227 in _feed File "/home/travis/build/python/cpython/Lib/threading.py", line 865 in run File "/home/travis/build/python/cpython/Lib/threading.py", line 917 in _bootstrap_inner File "/home/travis/build/python/cpython/Lib/threading.py", line 885 in _bootstrap Thread 0x00002b510584c700 (most recent call first): File "/home/travis/build/python/cpython/Lib/selectors.py", line 415 in select File "/home/travis/build/python/cpython/Lib/multiprocessing/connection.py", line 930 in wait File "/home/travis/build/python/cpython/Lib/concurrent/futures/process.py", line 354 in _queue_management_worker File "/home/travis/build/python/cpython/Lib/threading.py", line 865 in run File "/home/travis/build/python/cpython/Lib/threading.py", line 917 in _bootstrap_inner File 
"/home/travis/build/python/cpython/Lib/threading.py", line 885 in _bootstrap Current thread 0x00002b50fe39c9c0 (most recent call first): File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 901 in _fail_on_deadlock File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 962 in test_crash File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 642 in run File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 702 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/runner.py", line 176 in run File "/home/travis/build/python/cpython/Lib/test/support/__init__.py", line 1935 in _run_suite File "/home/travis/build/python/cpython/Lib/test/support/__init__.py", line 2031 in run_unittest File "/home/travis/build/python/cpython/Lib/test/test_concurrent_futures.py", line 1241 in test_main File "/home/travis/build/python/cpython/Lib/test/support/__init__.py", line 2163 in decorator File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest.py", line 182 in runtest_inner File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest.py", line 127 in runtest File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest_mp.py", line 68 in run_tests_worker File "/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 600 in _main File "/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 586 in main File 
"/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 640 in main File "/home/travis/build/python/cpython/Lib/test/regrtest.py", line 46 in _main File "/home/travis/build/python/cpython/Lib/test/regrtest.py", line 50 in File "/home/travis/build/python/cpython/Lib/runpy.py", line 85 in _run_code File "/home/travis/build/python/cpython/Lib/runpy.py", line 192 in _run_module_as_main Sample crash : https://travis-ci.org/python/cpython/jobs/483394585#L2781 ---------- components: Tests messages: 334255 nosy: bquinlan, pablogsal, pitrou, vstinner, xtreak priority: normal severity: normal status: open title: test_concurrent_futures.ProcessPoolForkExecutorDeadlockTest fails intermittently on Travis and passes in verbose mode versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 15:35:52 2019 From: report at bugs.python.org (Eddie Elizondo) Date: Wed, 23 Jan 2019 20:35:52 +0000 Subject: [New-bugs-announce] [issue35810] Object Initialization Bug with Heap-allocated Types Message-ID: <1548275752.94.0.463880453345.issue35810@roundup.psfhosted.org> New submission from Eddie Elizondo : Heap-allocated Types initializing instances through `PyObject_{,GC}_New{Var}` will *NOT* not have their refcnt increased. This was totally fine under the assumption that static types are immortal. However, heap-allocated types MUST participate in refcounting. Furthermore, their deallocation routine should also make sure to decrease their refcnt to provide the incref/decref pair. 
---------- components: Library (Lib) messages: 334271 nosy: eelizondo priority: normal severity: normal status: open title: Object Initialization Bug with Heap-allocated Types versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 19:48:16 2019 From: report at bugs.python.org (Eryk Sun) Date: Thu, 24 Jan 2019 00:48:16 +0000 Subject: [New-bugs-announce] [issue35811] py.exe should unset the __PYVENV_LAUNCHER__ environment variable Message-ID: <1548290896.4.0.108661661457.issue35811@roundup.psfhosted.org> New submission from Eryk Sun : In 3.7.2 on Windows, venv now uses a redirecting launcher 'script' for python[w].exe. This replaces (on Windows only) the setup requirement to copy or symlink the interpreter binaries (i.e. python[w].exe, python37.dll, and vcruntime140.dll). Apparently this is required to be able to install Python as a Windows Store app. I haven't experimented with it yet, so I'll just accept that as a given. I'm curious to find out whether using symlinks would still work, and even if it doesn't work for the app installation, whether we could still support that option for desktop installations. I've used `--symlinks` for a while now, since I grant the required privilege to the authenticated users group and thus don't have to elevate to create symlinks. (I know it's not so easy for many Windows users.) The new launcher reads pyvenv.cfg to locate and execute the real python.exe. Since the process image is no longer in the virtual environment's "Scripts" directory (which has various consequences), we need a way to communicate the launcher's path to make Python use the virtual environment. A -X command-line option could work, but then all packages and tools that spawn worker processes, such as multiprocessing, would need to be updated to look for this option in sys._xoptions and propagate it. 
Instead the launcher sets a special "__PYVENV_LAUNCHER__" environment variable. This is reasonable because processes are usually created with a copy of the parent's environment. Some environment variables may be added or modified, but it's rare for a child process to get a completely new environment. (One example of the latter would be creating a process that runs as a different user.) An oversight in the current ecosystem is that py.exe and the distlib entry-point launchers do not unset "__PYVENV_LAUNCHER__". Thus, when executing a script from a virtual environment (e.g. either directly via py.exe or via the .py file association), it will mistakenly be pinned into the virtual environment if it runs in Python 3.7. Similarly, pip.exe for an installed Python 3.7 will mistakenly install into the virtual environment. However, the latter is out of scope here since the entry-point launchers are in distlib. It's also a problem if we run the fully-qualified path for an installed Python 3.7, e.g. from shutil.which('python'). We can't automatically address this since it's exactly the reason "__PYVENV_LAUNCHER__" exists. We have to know to manually unset "__PYVENV_LAUNCHER__" in the environment that's passed to the child. This should be documented somewhere -- maybe in the venv docs. It's not a problem if a script runs unqualified "python.exe" since the final result is usually the same, but for different reasons. With the launcher, it's locked down by inheriting "__PYVENV_LAUNCHER__". With the previous design, the application directory was the virtual environment's "Scripts" directory. Unqualified "python.exe" was thus pinned for CreateProcessW, which checks the application directory first. It's also not a problem if we run sys.executable, since previously that was the virtual environment's executable. (In 3.7.2, sys.executable gets set to the virtual environment's launcher, but that breaks multiprocessing. See issue 35797.) 
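The manual workaround described above — unsetting "__PYVENV_LAUNCHER__" in the environment passed to the child — looks roughly like this sketch (the `print("ok")` child program is just a placeholder):

```python
# Spawn an installed interpreter with __PYVENV_LAUNCHER__ stripped so the
# child is not pinned to the current virtual environment.
import os
import subprocess
import sys

env = os.environ.copy()
env.pop('__PYVENV_LAUNCHER__', None)   # harmless if the variable is absent
result = subprocess.run([sys.executable, '-c', 'print("ok")'],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())
```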
---------- components: Windows messages: 334275 nosy: eryksun, paul.moore, steve.dower, tim.golden, vinay.sajip, zach.ware priority: normal severity: normal stage: test needed status: open title: py.exe should unset the __PYVENV_LAUNCHER__ environment variable type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 19:58:52 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Thu, 24 Jan 2019 00:58:52 +0000 Subject: [New-bugs-announce] [issue35812] Don't log an exception from the main coroutine in asyncio.run() Message-ID: <1548291532.01.0.358121522852.issue35812@roundup.psfhosted.org> New submission from Andrew Svetlov : We use `asyncio.run()` (well, a backported-to-python3.6 private copy) in our application. The problem is: when `asyncio.run(main_coro(args))` raises an exception, it is both raised to the caller and passed to `loop.call_exception_handler()` by `_cancel_all_tasks()` as an *unhandled exception*. I believe that the logging of unhandled exceptions is a very useful feature, but the logging should be skipped for the main coroutine passed to `asyncio.run()` because it is handled by outer code anyway. The fix is trivial; the attached file shows how to do it. But the fix needs tests also. If somebody wishes to pick up the issue and make it done -- it would be awesome. I will help with review and commit, sure. Another question is whether the change should land in 3.7. I think yes, but feedback from Yuri Selivanov is very welcome.
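The mechanism behind the double report can be seen in isolation (this is an illustration, not the attached utils.py): once a task's exception has been retrieved with `Task.exception()`, asyncio no longer treats it as unhandled when the loop shuts down, which is the behavior a fix for the main coroutine can rely on.

```python
# Retrieving a task's exception marks it as handled, so no
# "Task exception was never retrieved" report is emitted at shutdown.
import asyncio

async def boom():
    raise ValueError("boom")

async def main():
    task = asyncio.ensure_future(boom())
    await asyncio.sleep(0)        # give the task one loop iteration to fail
    return task.exception()       # retrieving marks it as handled

exc = asyncio.run(main())
print(type(exc).__name__)
```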
---------- components: asyncio files: utils.py keywords: easy messages: 334276 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Don't log an exception from the main coroutine in asyncio.run() versions: Python 3.8 Added file: https://bugs.python.org/file48074/utils.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 23 23:02:05 2019 From: report at bugs.python.org (Davin Potts) Date: Thu, 24 Jan 2019 04:02:05 +0000 Subject: [New-bugs-announce] [issue35813] shared memory construct to avoid need for serialization between processes Message-ID: <1548302525.41.0.348784694633.issue35813@roundup.psfhosted.org> New submission from Davin Potts : A facility for using shared memory would permit direct, zero-copy access to data across distinct processes (especially when created via multiprocessing) without the need for serialization, thus eliminating the primary performance bottleneck in the most common use cases for multiprocessing. Currently, multiprocessing communicates data from one process to another by first serializing it (by default via pickle) on the sender's end then de-serializing it on the receiver's end. Because distinct processes possess their own process memory space, no data in memory is common across processes and thus any information to be shared must be communicated over a socket/pipe/other mechanism. Serialization via tools like pickle is convenient especially when supporting processes on physically distinct hardware with potentially different architectures (which multiprocessing does also support). Such serialization is wasteful and potentially unnecessary when multiple multiprocessing.Process instances are running on the same machine. The cost of this serialization is believed to be a non-trivial drag on performance when using multiprocessing on multi-core and/or SMP machines. 
While not a new concept (System V Shared Memory has been around for quite some time), the proliferation of support for shared memory segments on modern operating systems (Windows, Linux, *BSDs, and more) provides a means for exposing a consistent interface and api to a shared memory construct usable across platforms despite technical differences in the underlying implementation details of POSIX shared memory versus Native Shared Memory (Windows). For further reading/reference: Tools such as the posix_ipc module have provided fairly mature apis around POSIX shared memory and seen use in other projects. The "shared-array", "shared_ndarray", and "sharedmem-numpy" packages all have interesting implementations for exposing NumPy arrays via shared memory segments. PostgreSQL has a consistent internal API for offering shared memory across Windows/Unix platforms based on System V, enabling use on NetBSD/OpenBSD before those platforms supported POSIX shared memory. At least initially, objects which support the buffer protocol can be most readily shared across processes via shared memory. From a design standpoint, the use of a Manager instance is likely recommended to enforce access rules in different processes via proxy objects as well as cleanup of shared memory segments once an object is no longer referenced. The documentation around multiprocessing's existing sharedctypes submodule (which uses a single memory segment through the heap submodule with its own memory management implementation to "malloc" space for allowed ctypes and then "free" that space when no longer used, recycling it for use again from the shared memory segment) will need to be updated to avoid confusion over concepts. Ultimately, the primary motivation is to provide a path for better parallel execution performance by eliminating the need to transmit data between distinct processes on a single system (not for use in distributed memory architectures). 
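For reference, this proposal eventually shipped as multiprocessing.shared_memory in Python 3.8; the zero-copy idea can be sketched in a single process (two handles attached to the same segment, no serialization involved):

```python
# Two SharedMemory handles attached to the same segment observe the same
# bytes through the buffer protocol, with no pickling or copying.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"                              # write via buffer protocol
    other = shared_memory.SharedMemory(name=shm.name)   # attach by name, no copy
    data = bytes(other.buf[:5])
    print(data)
    other.close()
finally:
    shm.close()
    shm.unlink()                                        # free the segment
```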
Secondary use cases have been suggested including a means for sharing data across concurrent Python interactive shells, potential use with subinterpreters, and other traditional uses for shared memory since the first introduction of System V Shared Memory onwards. ---------- assignee: davin components: Library (Lib) messages: 334278 nosy: davin, eric.snow, lukasz.langa, ned.deily, rhettinger, yselivanov priority: normal severity: normal status: open title: shared memory construct to avoid need for serialization between processes type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 00:45:39 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 24 Jan 2019 05:45:39 +0000 Subject: [New-bugs-announce] [issue35814] Syntax quirk with variable annotations Message-ID: <1548308739.94.0.144807070146.issue35814@roundup.psfhosted.org> New submission from Raymond Hettinger : Am not sure how much we care about this, but parentheses around tuples stop being optional when there is a variable annotation.
    >>> from typing import Tuple
    >>> t = 10, 'hello'                     # Parens not normally required
    >>> t: Tuple[int, str] = (10, 'hello')  # Annotated allows parens
    >>> t: Tuple[int, str] = 10, 'hello'    # Annotated w/o parens fails
    SyntaxError: invalid syntax

---------- components: Interpreter Core messages: 334280 nosy: rhettinger priority: low severity: normal status: open title: Syntax quirk with variable annotations type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 02:57:05 2019 From: report at bugs.python.org (Jazeps Basko) Date: Thu, 24 Jan 2019 07:57:05 +0000 Subject: [New-bugs-announce] [issue35815] Able to instantiate a subclass with abstract methods from __init_subclass__ of the ABC Message-ID: <1548316625.2.0.852101004839.issue35815@roundup.psfhosted.org> New submission from Jazeps Basko : I am creating and registering singleton instances of subclasses of ABC in the ABC's __init_subclass__ and I just noticed that I am able to instantiate even the classes which have abstract methods.
    import abc

    class Base(abc.ABC):
        def __init_subclass__(cls, **kwargs):
            instance = cls()
            print(f"Created instance of {cls} easily: {instance}")

        @abc.abstractmethod
        def do_something(self):
            pass

    class Derived(Base):
        pass

Actual Output: Created instance of <class '__main__.Derived'> easily: <__main__.Derived object at 0x10a6dd6a0> Expected Output: TypeError: Can't instantiate abstract class Derived with abstract methods do_something ---------- messages: 334284 nosy: jbasko priority: normal severity: normal status: open title: Able to instantiate a subclass with abstract methods from __init_subclass__ of the ABC type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 05:28:34 2019 From: report at bugs.python.org (=?utf-8?q?Andr=C3=A9_Lehmann?=) Date: Thu, 24 Jan 2019 10:28:34 +0000 Subject: [New-bugs-announce] [issue35816] csv.DictReader, skipinitialspace does not ignore tabs Message-ID: <1548325714.32.0.972508693176.issue35816@roundup.psfhosted.org> New submission from André Lehmann : When using the csv.DictReader a dialect can be given to change the behavior of interpretation of the csv file. The Dialect has an option "skipinitialspace" which shall ignore the whitespace after the delimiter according to the documentation (https://docs.python.org/3/library/csv.html). Unfortunately this works only for spaces but not for tabs, which are also whitespace. See the following code snippet applied to the attached file:

    with open("conf.csv", "r") as csvfile:
        csv.register_dialect("comma_and_ws", skipinitialspace=True)
        csv_dict_reader = csv.DictReader(csvfile, dialect="comma_and_ws")
        for line in csv_dict_reader:
            print(line)

The second line shall not contain "\t" chars.
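The report can be reproduced without the attached file by feeding an in-memory CSV to DictReader (the two-column data below is made up, but has the same shape: a space after the delimiter in the header, a tab after it in the row):

```python
# skipinitialspace only skips the space character after a delimiter;
# a tab after the delimiter is kept as part of the field.
import csv
import io

data = "a, b\n1,\t2\n"
reader = csv.DictReader(io.StringIO(data), skipinitialspace=True)
row = next(reader)
print(row)   # the value for "b" still starts with "\t"
```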
---------- files: conf.csv messages: 334289 nosy: andre.lehmann priority: normal severity: normal status: open title: csv.DictReader, skipinitialspace does not ignore tabs type: behavior versions: Python 3.5 Added file: https://bugs.python.org/file48075/conf.csv _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 06:04:12 2019 From: report at bugs.python.org (Audric) Date: Thu, 24 Jan 2019 11:04:12 +0000 Subject: [New-bugs-announce] [issue35817] IDLE 2.713 on debian 9.6 over WSL W10 IdentationError Message-ID: <1548327852.82.0.278103847412.issue35817@roundup.psfhosted.org> New submission from Audric : Hello, The screenshot attached is a clear repro. Environment: Surface Pro 3, Win 10 1803, Python 2.7.14, WSL Debian 9.6 with Python 2.7.13. Code:

    >>elements = []
    >>for i in range(0, 6):
    >>...elements.append(i)

-------------------------------

Working:

    >>print elements
    >>[0, 1, 2, 3, 4, 5]

Non working:

    File "<stdin>", line 2
        elements.append(i)
                         ^
    IndentationError: expected an indented block

---------- assignee: terry.reedy components: IDLE files: py27debian9wslw10indentationerror.PNG messages: 334293 nosy: audricd, terry.reedy priority: normal severity: normal status: open title: IDLE 2.713 on debian 9.6 over WSL W10 IdentationError type: behavior versions: Python 2.7 Added file: https://bugs.python.org/file48076/py27debian9wslw10indentationerror.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 06:15:08 2019 From: report at bugs.python.org (Andreas Schwab) Date: Thu, 24 Jan 2019 11:15:08 +0000 Subject: [New-bugs-announce] [issue35818] test_email: test_localtime_daylight_false_dst_true() fails if timezone database is missing Message-ID: <1548328508.2.0.57014982778.issue35818@roundup.psfhosted.org> New submission from Andreas Schwab : bpo-35317 solution is incomplete, the test needs to be skipped if the
timezone database is unavailable. ---------- messages: 334294 nosy: barry, miss-islington, p-ganssle, r.david.murray, schwab, vstinner, xtreak priority: normal severity: normal status: open title: test_email: test_localtime_daylight_false_dst_true() fails if timezone database is missing type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 12:29:50 2019 From: report at bugs.python.org (Abdallah) Date: Thu, 24 Jan 2019 17:29:50 +0000 Subject: [New-bugs-announce] [issue35819] Fatal Python error Message-ID: New submission from Abdallah : Hi, I have been having this problem for about two weeks and I can't do anything, so please give me some instructions so I can solve it.

Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'

Current thread 0x00003c7c (most recent call first):

Process finished with exit code -1073740791 (0xC0000409)

Thanks ---------- messages: 334308 nosy: abdallahadham priority: normal severity: normal status: open title: Fatal Python error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 15:01:29 2019 From: report at bugs.python.org (Tinu Tomson) Date: Thu, 24 Jan 2019 20:01:29 +0000 Subject: [New-bugs-announce] [issue35820] Inconsistent behavior when parsing IP address Message-ID: <1548360089.92.0.137181767325.issue35820@roundup.psfhosted.org> New submission from Tinu Tomson :

>>> ip = '23.00.021.002'
>>> ipaddress.IPv4Address(ip)

throws an error:

  File "/usr/lib/python3.4/ipaddress.py", line 1271, in __init__
    self._ip = self._ip_int_from_string(addr_str)
  File "/usr/lib/python3.4/ipaddress.py", line 1122, in _ip_int_from_string
    raise AddressValueError("%s in %r" % (exc, ip_str)) from None
ipaddress.AddressValueError: Ambiguous (octal/decimal) value in '021' not permitted in
'23.00.021.002'

>>> ip = '23.00.21.002'
>>> ipaddress.IPv4Address(ip)

parses correctly. ---------- components: Library (Lib) messages: 334319 nosy: tinutomson priority: normal severity: normal status: open title: Inconsistent behavior when parsing IP address _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 15:07:19 2019 From: report at bugs.python.org (Chris Jerdonek) Date: Thu, 24 Jan 2019 20:07:19 +0000 Subject: [New-bugs-announce] [issue35821] Clarify when logging events are propagated when propagate is true Message-ID: <1548360439.22.0.955979079327.issue35821@roundup.psfhosted.org> New submission from Chris Jerdonek : Currently, the logging docs are a bit ambiguous or at least not completely clear as to when events are propagated when Logger.propagate is true. The docs currently say [1]-- "If [the `propagate`] attribute evaluates to true, events logged to this logger will be passed to the handlers of higher level (ancestor) loggers, in addition to any handlers attached to this logger." But it's not clear if "logged to this logger" means (1) a log method like info() or error() was called on the logger, or (2) the event was passed to the logger's handlers (i.e. satisfied the logger's log level threshold and any filters). Empirically, I found that the meaning is (2).
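A minimal sketch supporting that reading (the logger name and stream here are made up for illustration):

```python
import io
import logging

# Ancestor (root) logger gets a handler that accepts everything.
stream = io.StringIO()
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler(stream))

child = logging.getLogger("child")  # propagate is True by default
child.setLevel(logging.WARNING)     # threshold on the child itself

child.info("dropped")   # fails the child's own level check: never propagated
child.warning("kept")   # passes the check: reaches the root's handler

print(stream.getvalue())  # only "kept" appears
```

The INFO event is rejected by the child's level check before propagation is ever considered, even though the root's handler would have accepted it — consistent with meaning (2).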
[1]: https://docs.python.org/3/library/logging.html#logging.Logger.propagate ---------- assignee: docs at python components: Documentation messages: 334320 nosy: chris.jerdonek, docs at python priority: normal severity: normal stage: needs patch status: open title: Clarify when logging events are propagated when propagate is true type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 19:08:52 2019 From: report at bugs.python.org (Igor Z) Date: Fri, 25 Jan 2019 00:08:52 +0000 Subject: [New-bugs-announce] [issue35822] _queue _queuemodule.c is missing inside the Setup file Message-ID: <1548374932.46.0.67529287669.issue35822@roundup.psfhosted.org> New submission from Igor Z : I had to manually install a new urllib3 module (zip archive) into Python 3.7 and got an error that the module "_queue" is missing. I did not find anything related to such a module inside the Setup file. Then I found this module on GitHub and manually added

_queue _queuemodule.c

and ran "make" once again to get a new "libpython3.7m.so.1.0" that I needed for the project. The problem was solved.
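After rebuilding, a quick sanity check is possible from Python itself (a sketch; note that queue.SimpleQueue silently falls back to a pure-Python class when the C accelerator is absent, so only the module name reveals which one is in use):

```python
import queue

# queue.SimpleQueue is backed by the C accelerator module _queue when it
# was built; otherwise the pure-Python fallback from queue.py is used.
q = queue.SimpleQueue()
q.put("hello")
print(q.get())
print(queue.SimpleQueue.__module__)  # '_queue' if the C module was built
```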
---------- assignee: docs at python components: Build, Documentation, Library (Lib) messages: 334329 nosy: Igor Z, docs at python priority: normal severity: normal status: open title: _queue _queuemodule.c is missing inside the Setup file type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 21:03:26 2019 From: report at bugs.python.org (Alexey Izbyshev) Date: Fri, 25 Jan 2019 02:03:26 +0000 Subject: [New-bugs-announce] [issue35823] Use vfork() in subprocess on Linux Message-ID: <1548381806.02.0.709569222975.issue35823@roundup.psfhosted.org> New submission from Alexey Izbyshev : This issue is to propose a (complementary) alternative to the usage of posix_spawn() in subprocess (see bpo-35537). As mentioned by Victor Stinner in msg332236, posix_spawn() has the potential of being faster and safer than the fork()/exec() approach. However, some of the currently available implementations of posix_spawn() have technical problems (this mostly summarizes discussions in bpo-35537):

* In glibc < 2.24 on Linux, posix_spawn() doesn't report errors to the parent properly, breaking existing subprocess behavior.
* In glibc >= 2.25 on Linux, posix_spawn() doesn't report errors to the parent in certain environments, such as QEMU user-mode emulation and Windows Subsystem for Linux.
* In FreeBSD, as of this writing, posix_spawn() doesn't block signals in the child process, so a signal handler executed between vfork() and execve() may change memory shared with the parent [1].

Regardless of implementation, posix_spawn() is also unsuitable for some subprocess use cases:

* posix_spawnp() can't be used directly to implement the file searching logic of subprocess because of different semantics, requiring workarounds.
* posix_spawn() has no standard way to specify the current working directory for the child.
* posix_spawn() has no way to close all file descriptors > 2 in the child, which is the *default* mode of operation of subprocess.Popen().

Maybe even more importantly, posix_spawn() will always, fundamentally, be less flexible than the fork()/exec() approach. Any additions will have to go through POSIX standardization or be unportable. Even if approved, a change will take years to get to actual users because of the requirement to update the C library, which may be more than a decade behind in enterprise Linux distros. This is in contrast to having an addition implemented in CPython. For example, a setrlimit() action for posix_spawn() is currently rejected in POSIX[2], despite being trivial to add. I'm interested in avoiding posix_spawn() problems on Linux while still delivering comparable performance and safety. To that end I've studied the implementations of posix_spawn() in glibc[3] and musl[4], which use a vfork()/execve()-like approach, and investigated the challenges of using vfork() safely on Linux (e.g. [5]) -- all of that for the purpose of using vfork()/exec() instead of fork()/exec() or posix_spawn() in subprocess where possible. The unique property of vfork() is that the child shares the address space (including heap and stack) as well as thread-local storage with the parent, which means that the child must be very careful not to surprise the parent by changing the shared resources under its feet. The parent is suspended until the child performs execve(), _exit() or dies in any other way. The safest way to use vfork() is if one has access to the C library internals and can do the following:

1) Disable thread cancellation before vfork() to ensure that the parent thread is not suddenly cancelled by another thread with pthread_cancel() while being in the middle of child creation.

2) Block all signals before vfork(). This ensures that no signal handlers are run in the child.
But the signal mask is preserved by execve(), so the child must restore the original signal mask. To do that safely, it must reset the dispositions of all non-ignored signals to the default, ensuring that no signal handlers are executed in the window between restoring the mask and execve(). Note that libc-internal signals should be blocked too, in particular, to avoid the "setxid problem"[5].

3) Use a separate stack for the child via clone(CLONE_VM|CLONE_VFORK), which has exactly the same semantics as vfork(), but allows the caller to provide a separate stack. This way potential compiler bugs arising from the fact that vfork() returns twice to the same stack frame are avoided.

4) Call only async-signal-safe functions in the child.

In an application, only (1) and (4) can be done easily. One can't disable internal libc signals for (2) without using syscall(), which requires knowledge of the kernel ABI for the particular architecture. clone(CLONE_VM) can't be used at least before glibc 2.24 because it corrupts the glibc pid/tid cache in the parent process[6,7]. (As may be guessed, this problem was solved by glibc developers when they implemented posix_spawn() via clone().) Even now, the overall message seems to be that clone() is a low-level function not intended to be used by applications. Even with the above, I still think that in the context of subprocess/CPython, sufficient vfork()-safety requirements are provided by the following. Despite being easy, (1) seems to be unnecessary: CPython never uses pthread_cancel() internally, so Python code can't do that. A non-Python thread in an embedding app could try, but cancellation, to my knowledge, is not supported by CPython in any case (there is no way for an app to clean up after the cancelled thread), so subprocess has no reason to care. For (2), we don't have to worry about the internal signal used for thread cancellation because of the above. The only other internal signal is used for setxid synchronization[5].
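The mask-save/restore pattern of step (2) has a rough Python-level analogue; a hedged sketch (this is only an illustration of the pattern, not the C-level implementation, and note that glibc's pthread_sigmask silently refuses to block its internal cancellation/setxid signals, which is exactly the limitation described above):

```python
import signal

# Save the current mask while blocking every signal the platform lets us
# block (SIGKILL/SIGSTOP cannot be blocked and are silently ignored).
old_mask = signal.pthread_sigmask(signal.SIG_SETMASK,
                                  range(1, signal.NSIG))
try:
    pass  # critical region: no Python-visible handlers can run here
finally:
    # Restore the caller's original mask, as the child must do after
    # resetting signal dispositions and before execve().
    signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)

# The mask is back to what it was before the critical region.
print(signal.pthread_sigmask(signal.SIG_BLOCK, []) == old_mask)
```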
The "setxid problem" is mitigated in Python because the spawning thread holds the GIL, so Python code can't call os.setuid() concurrently. Again, a non-Python thread could, but I argue that an application that spawns a child and calls setuid() in a non-synchronized manner is not worth supporting: the child will have "random" privileges depending on who wins the race, so this is hardly a good security practice. Even if such apps are considered worth supporting, we may limit the vfork()/exec() path to the non-embedded use case. For (3), with production-quality compilers, using vfork() should be OK. Both GCC and Clang recognize it and handle it in a special way (similar to setjmp(), which also has "returning twice" semantics). The supporting evidence is that Java has been using vfork() for ages, Go has migrated to vfork(), and, coincidentally, dotnet is doing it right now[8]. (4) is already done in _posixsubprocess on Linux. I've implemented a simple proof-of-concept that uses vfork() in subprocess on Linux by default in all cases except when preexec_fn is not None. It passes all tests on OpenSUSE (Linux 4.15, glibc 2.27) and Ubuntu 14.04 (Linux 4.4, glibc 2.19), but triggers spurious GCC warnings, probably due to a long-standing GCC bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=21161

I've also run a variant of subprocess_bench.py (by Victor Stinner from bpo-35537) with close_fds=False and restore_signals=False removed, on OpenSUSE:

$ env/bin/python -m perf compare_to fork.json vfork.json
Mean +- std dev: [fork] 154 ms +- 18 ms -> [vfork] 1.23 ms +- 0.04 ms: 125.52x faster (-99%)

Compared to posix_spawn, the results on the same machine are similar:

$ env/bin/python -m perf compare_to posix_spawn.json vfork.json
Mean +- std dev: [posix_spawn] 1.24 ms +- 0.04 ms -> [vfork] 1.22 ms +- 0.05 ms: 1.02x faster (-2%)

Note that my implementation should work even for QEMU user-mode (and probably WSL) because it doesn't rely on address space sharing.
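For reference, a rough sketch of the kind of micro-benchmark involved (this is not the actual subprocess_bench.py; it assumes a `true` binary on PATH and just times the spawn loop directly rather than using perf):

```python
import subprocess
import time

# Time repeated spawns of a trivial child through subprocess.run(),
# exercising whichever child-creation path the interpreter uses
# (fork()/exec() by default; vfork() or posix_spawn() when patched).
N = 20
start = time.perf_counter()
for _ in range(N):
    subprocess.run(["true"], close_fds=False, restore_signals=False)
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e3:.2f} ms per spawn")
```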
Things to do: * Decide whether pthread_setcancelstate() should be used. I'd be grateful for opinions from Python threading experts. * Decide whether "setxid problem"[5] is important enough to worry about. * Deal with GCC warnings. * Test in user-mode QEMU and WSL. [1] https://svnweb.freebsd.org/base/head/lib/libc/gen/posix_spawn.c?view=markup&pathrev=326193 [2] http://austingroupbugs.net/view.php?id=603 [3] https://sourceware.org/git/?p=glibc.git;a=history;f=sysdeps/unix/sysv/linux/spawni.c;h=353bcf5b333457d191320e358d35775a2e9b319b;hb=HEAD [4] http://git.musl-libc.org/cgit/musl/log/src/process/posix_spawn.c [5] https://ewontfix.com/7 [6] https://sourceware.org/bugzilla/show_bug.cgi?id=10311 [7] https://sourceware.org/bugzilla/show_bug.cgi?id=18862 [8] https://github.com/dotnet/corefx/pull/33289 ---------- components: Extension Modules messages: 334336 nosy: gregory.p.smith, izbyshev, pablogsal, vstinner priority: normal severity: normal status: open title: Use vfork() in subprocess on Linux type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 24 22:08:24 2019 From: report at bugs.python.org (MeiK) Date: Fri, 25 Jan 2019 03:08:24 +0000 Subject: [New-bugs-announce] [issue35824] http.cookies._CookiePattern modifying regular expressions Message-ID: <1548385704.31.0.124133284281.issue35824@roundup.psfhosted.org> Change by MeiK : ---------- components: Extension Modules nosy: MeiK priority: normal severity: normal status: open title: http.cookies._CookiePattern modifying regular expressions type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 05:31:17 2019 From: report at bugs.python.org (Kristof Niederholtmeyer) Date: Fri, 25 Jan 2019 10:31:17 +0000 Subject: [New-bugs-announce] [issue35825] Py_UNICODE_SIZE=4 fails to link on Windows Message-ID: 
<1548412277.74.0.133748802246.issue35825@roundup.psfhosted.org> New submission from Kristof Niederholtmeyer : When I change Py_UNICODE_SIZE from 2 (the default) to 4 in PC/pyconfig.h, msvc-14 gives the following error:

_winreg.obj : error LNK2001: unresolved external symbol PyUnicode_DecodeMBCS [e:\kristof\SDR\sdrsandbox\w64\msvc140\Python-2.7.15\PCbuild\pythoncore.vcxproj]
e:\kristof\SDR\sdrsandbox\w64\msvc140\Python-2.7.15\PCBuild\\amd64\python27_snps_vp.dll : fatal error LNK1120: 1 unresolved externals [e:\kristof\SDR\sdrsandbox\w64\msvc140\Python-2.7.15\PCbuild\pythoncore.vcxproj]

The problem appears to be that the missing function gets disabled in unicodeobject.c with

#if defined(MS_WINDOWS) && defined(HAVE_USABLE_WCHAR_T)
...
#endif

while _winreg.c does not check the availability of this function. Thanks, Kristof ---------- components: Build, Windows messages: 334348 nosy: kristof, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Py_UNICODE_SIZE=4 fails to link on Windows type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 05:44:12 2019 From: report at bugs.python.org (Kevin Mai-Hsuan Chia) Date: Fri, 25 Jan 2019 10:44:12 +0000 Subject: [New-bugs-announce] [issue35826] Typo in example for async with statement with condition Message-ID: <1548413052.91.0.387178410518.issue35826@roundup.psfhosted.org> New submission from Kevin Mai-Hsuan Chia : In the [example](https://docs.python.org/3.8/library/asyncio-sync.html#asyncio.Condition) of the code equivalent to the `async with` statement:

```python
cond = asyncio.Condition()

# ... later
await lock.acquire()
try:
    await cond.wait()
finally:
    lock.release()
```

`lock.acquire()` should be replaced by `cond.acquire()`, and `lock.release()` by `cond.release()`. So the resulting code snippet becomes:

```python
cond = asyncio.Condition()

# ... later
await cond.acquire()
try:
    await cond.wait()
finally:
    cond.release()
```

---------- assignee: docs at python components: Documentation messages: 334349 nosy: docs at python, mhchia priority: normal severity: normal status: open title: Typo in example for async with statement with condition type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 07:05:04 2019 From: report at bugs.python.org (Ori Avtalion) Date: Fri, 25 Jan 2019 12:05:04 +0000 Subject: [New-bugs-announce] [issue35827] C API dictionary views type checkers are not documented Message-ID: <1548417904.67.0.718119168094.issue35827@roundup.psfhosted.org> New submission from Ori Avtalion : dictobject.h defines several helpers to ease checking of dictionary view types. If they are meant to be part of the API, they should be documented.

PyDictKeys_Check
PyDictItems_Check
PyDictValues_Check
PyDictViewSet_Check

Should they be added to dict.rst, or a separate file? ---------- assignee: docs at python components: Documentation messages: 334355 nosy: docs at python, salty-horse priority: normal severity: normal status: open title: C API dictionary views type checkers are not documented type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 08:06:47 2019 From: report at bugs.python.org (Michael Felt) Date: Fri, 25 Jan 2019 13:06:47 +0000 Subject: [New-bugs-announce] [issue35828] test_multiprocessing_* tests - success versus fail varies over time Message-ID: <1548421607.34.0.921297585279.issue35828@roundup.psfhosted.org> New submission from Michael Felt : Last August I started running a bot for AIX using xlc_r as the compiler, rather than gcc that the other AIX bot uses.
Initially, I had no issues with the test_multiprocess* tests, but of late (last two+ months I am guessing) I have been having regular issues when the bot builds, but not when I would run the tests (all 418) or individually - when run manually. The last two weeks I have invested time - and have been repaid - in that I can now get a regular failure when running the tests. Your assistance is appreciated. I'll continue to work on this when I have time. Short version: This looks like there is a statement "crafted" to cause a crash: (dbx) where PyDict_GetItem(op = 0x2002ddc8, key = 0x30061e68), line 1320 in "dictobject.c" _PyDict_GetItemId(dp = 0xdbdbdbdb, key = 0xdbdbdbdb), line 3276 in "dictobject.c" ... This is based on bot run that failed (Python 3.8.0a0 (heads/master:0785889468)). The test message that comes back is: test_multiprocessing_fork failed Process Process-94: Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/process.py", line 302, in _bootstrap self.run() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/_test_multiprocessing.py", line 2847, in _putter manager.connect() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/managers.py", line 512, in connect conn = Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) 
ConnectionRefusedError: [Errno 79] Connection refused test test_multiprocessing_fork failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/test/_test_multiprocessing.py", line 2865, in test_rapid_restart queue = manager.get_queue() File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/managers.py", line 701, in temp token, exp = self._create(typeid, *args, **kwds) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/managers.py", line 584, in _create conn = self._Client(self._address, authkey=self._authkey) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 796, in XmlClient return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 502, in Client c = SocketClient(address) File "/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue/Lib/multiprocessing/connection.py", line 629, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 79] Connection refused I had a hard time finding anything - because I was looking for a permission issue in the Socket "domain", but what seems more likely is that the "server" thread/process is crashing with a segmentation fault and a "client" thread is getting "refused" because the server no longer exists and/or never got successfully started. I finally managed to capture a core dump and I hope that this will help you help me with getting deeper and closer to a resolution/understanding on what is going on. buildbot at x064:[/home/buildbot/buildarea/3.x.aixtools-aix-power6/issue]area/coredumps/core.12582914.25123239 < Type 'help' for help. warning: The core file is not a fullcore. Some info may not be available. [using memory image in /home/buildbot/buildarea/coredumps/core.12582914.25123239] reading symbolic information ... 
Segmentation fault in PyDict_GetItem at line 1320 in file "Objects/dictobject.c" 1320 if (!PyDict_Check(op)) (dbx) where PyDict_GetItem(op = 0x2002ddc8, key = 0x30061e68), line 1320 in "dictobject.c" >From other data about the program I expect the segmentation error is caused by the key value. Unless the program has done a mmap/shmap request for memory allocation (something not done by default) the address 0x30000000-0x3fffffff is not a valid address. Summary: This looks like there is a statement "crafted" to cause a crash: (dbx) where PyDict_GetItem(op = 0x2002ddc8, key = 0x30061e68), line 1320 in "dictobject.c" _PyDict_GetItemId(dp = 0xdbdbdbdb, key = 0xdbdbdbdb), line 3276 in "dictobject.c" ... Gory details: (dbx) where PyDict_GetItem(op = 0x2002ddc8, key = 0x30061e68), line 1320 in "dictobject.c" _PyDict_GetItemId(dp = 0xdbdbdbdb, key = 0xdbdbdbdb), line 3276 in "dictobject.c" unnamed block in _PyEval_EvalFrameDefault(f = 0x1000eb04, throwflag = 807503860), line 957 in "ceval.c" unnamed block in _PyEval_EvalFrameDefault(f = 0x1000eb04, throwflag = 807503860), line 957 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1000eb04, throwflag = 807503860), line 957 in "ceval.c" PyEval_EvalFrameEx(f = 0x100111d0, throwflag = 537350752), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x1009f4b0, globals = 0x202f476c, locals = (nil), args = (nil), argcount = 24, kwnames = 0x30211d88, kwargs = 0x202f4700, kwcount = -2013255516, kwstep = 2, defs = 0x302187f4, defcount = 1, kwdefs = (nil), closure = (nil), name = 0x300022e8, qualname = 0x30214928), line 3969 in "ceval.c" _PyFunction_FastCallDict(func = 0x1010ac50, args = 0x20021c28, nargs = 539970704, kwargs = 0x820028af), line 380 in "call.c" _PyObject_FastCallDict(callable = 0x10209a1c, args = 0x30a17028, nargs = 539969536, kwargs = 0x2820424f), line 100 in "call.c" _PyObject_Call_Prepend(callable = 0x100206ac, obj = 0x20077e84, args = 0x202f4880, kwargs = 0x20088734), line 906 in "call.c" slot_tp_init(self = 0x100de870, 
args = 0x00000074, kwds = 0x20126740), line 6638 in "typeobject.c" unnamed block in type_call(type = 0x100d9ed8, args = 0x20021c28, kwds = 0x202f4940), line 954 in "typeobject.c" type_call(type = 0x100d9ed8, args = 0x20021c28, kwds = 0x202f4940), line 954 in "typeobject.c" _PyObject_FastCallKeywords(callable = 0x30665450, stack = 0x3008c1b0, nargs = 805573048, kwnames = 0x103521ac), line 201 in "call.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 807520936), line 4658 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de870, throwflag = 813135108), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3021dbf8, globals = 0x2008d708, locals = 0xcfb98979, args = 0x30037e30, argcount = 807520936, kwnames = 0x3021a4f0, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = 0x30218a04, defcount = 1, kwdefs = (nil), closure = (nil), name = 0x3021caa8, qualname = 0x3021caa8), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x202f50b8, stack = 0x300d3ef8, nargs = 539971696, kwnames = 0x222022cf), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x20089a38, throwflag = 807373136), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = (nil), throwflag = 807476128), line 581 in "ceval.c" function_code_fastcall(co = 0x3021df78, args = 0x2008d708, nargs = 538317872, globals = 0x00000034), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x3091f5a0, stack = 0x30041b50, nargs = 815894632, kwnames = 0x30142e64), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 807366808), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100cc074, throwflag = 806932108), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x202f61bc, globals = 0x3021a570, locals = 0x202f5d10, args = 0x30037e30, argcount = 269083832, kwnames = 0x301f49f0, kwargs = 0x202f5d10, kwcount = 537429812, kwstep = 1, defs = 0x3022123c, defcount = 2, kwdefs = (nil), closure = (nil), name = 0x301f7098, qualname = 0x301f7098), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 
0x1018c538, stack = 0x30a158e8, nargs = 537037592, kwnames = 0x30142e64), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805546776), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100cc074, throwflag = 807787624), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x202f686c, globals = 0x301f4c70, locals = 0x202f63c0, args = 0x30037e30, argcount = 269083832, kwnames = 0x301edbf0, kwargs = 0x202f63c0, kwcount = 537429812, kwstep = 1, defs = 0x30218464, defcount = 1, kwdefs = (nil), closure = (nil), name = 0x3003ab18, qualname = 0x3003ab18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x1003470c, stack = 0x200058f8, nargs = 539976896, kwnames = 0x20088734), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100c3c64, throwflag = 807867476), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x301f19e0, throwflag = 807867476), line 581 in "ceval.c" function_code_fastcall(co = 0x301f5418, args = 0x2008d708, nargs = -796895527, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x1018c538, stack = 0x3096a9b8, nargs = 814795680, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x3091e488, throwflag = 805546776), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100cc074, throwflag = 810945964), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x202f758c, globals = 0x301edc70, locals = 0x202f70e0, args = 0x15e8c00d, argcount = 269083832, kwnames = 0x309ec970, kwargs = 0x202f70e0, kwcount = 537429812, kwstep = 1, defs = 0x301f3884, defcount = 1, kwdefs = (nil), closure = (nil), name = 0x3003ab18, qualname = 0x3003ab18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x3091f540, stack = 0x30005840, nargs = 815871240, kwnames = 0x42204842), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100c1d90, throwflag = 537410828), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 814835560), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, 
globals = 0x309ec970, locals = 0x202f7770, args = 0x20088734, argcount = 269346004, kwnames = 0x20021c28, kwargs = 0x202f7770, kwcount = 815881976, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x2007d04c, locals = 0x202f77e0, args = 0x200058f8, argcount = 269237348, kws = 0x300ba1a8, kwcount = 539981808, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = (nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x30a15f00, globals = 0x300ba1ac, locals = 0x202f7820), line 558 in "ceval.c" builtin_exec(module = 0x1000eb04, args = (nil), nargs = 539981952, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x100c6724, args = 0x300ba1ac, nargs = 0), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x100a0d3c, self = 0x103461d0, args = (nil), nargs = 815035112, kwargs = 0x309435f0), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x3007f8a8, args = 0x309ec9b0, nargs = 539982304, kwargs = 0x220048cf), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = (nil), kwargs = 0x300927d0), line 791 in "call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 815019488), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x100c1d90, stack = 0x200058f8, nargs = 539984160, kwnames = 0x4420228f), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 814745524), line 4655 in "ceval.c" 
PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 815712688), line 581 in "ceval.c" function_code_fastcall(co = 0x202f8bdc, args = 0x3096ecf0, nargs = 539985712, globals = 0x309007ac), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x3096ed20, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x00000049), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 815253484), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 815253488), line 581 in "ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x10010cf8, stack = 0x300cf030, nargs = 805768508, kwnames = 0x4420228f), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 814745132), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 814745132), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x00000030, stack = 0x202dc120, nargs = -260629964, kwnames = 0x202d8840), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x00000008, throwflag = 815253044), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 537009192), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = 0x200058f8, nargs = 539990624, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x3002dfa8, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 539990896, kwargs = 0x302dc830), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = "0-\307\3600-\307\260"), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... 
= 0x302beee8, 0x3003a7a0, 0x0, 0x4ea54, 0xdb, 0x30828de0), line 1250 in "call.c" import_find_and_load(abs_name = 0x309044f8), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x302dc428, globals = 0x309ecaf0, locals = 0x202f9d20, fromlist = 0x300bc190, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x302dc428, globals = 0x309ecaf0, locals = 0x202f9d20, fromlist = 0x300bc190, level = 269346004), line 1748 in "import.c" import_name(f = 0x100c5e54, name = 0x10338aa8, fromlist = 0x202f9d80, level = 0x20088734), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x10010cf8, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815072624), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x309ecaf0, locals = 0x202fa3b0, args = 0x20088734, argcount = 269346004, kwnames = 0x10338aa8, kwargs = 0x309e8e60, kwcount = 537434348, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x200058f8, locals = 0x202fa450, args = 0x20088734, argcount = 269246788, kws = (nil), kwcount = 539993120, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = (nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x00000003, globals = 0x300ba1ac, locals = (nil)), line 558 in "ceval.c" builtin_exec(module = 0x10337030, args = (nil), nargs = 539993344, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x00000003, args = 0x307bbe28, nargs = 805795944), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x3007df24, self = 0x103461d0, args = (nil), nargs = 815035480, kwargs = 0x30943478), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x00000004, args = 0x0000008d, nargs = 806068652, kwargs = 0x30089ad8), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = 0x300ba1a8, kwargs = 0x30031290), line 791 in 
"call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 815019112), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x100c1d90, stack = 0x200058f8, nargs = 539995488, kwnames = 0x4420228f), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 814744724), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 815713072), line 581 in "ceval.c" function_code_fastcall(co = 0x202fb81c, args = 0x3096e990, nargs = 539997040, globals = 0x3090048c), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x3096e900, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x0000004f), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 815252260), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 815252264), line 581 in "ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x10010cf8, stack = 0x300cf030, nargs = 805768508, kwnames = 0x4420228f), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 814744332), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 814744332), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x00000002, stack = (nil), nargs = 538317872, kwnames = 0x304e9d30), line 410 in "call.c" 
_PyEval_EvalFrameDefault(f = 0x00000006, throwflag = 815035836), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 537028216), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = 0x20161430, nargs = 540001952, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x30828de0, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540002224, kwargs = 0x20088734), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = "0N\235\2600N\235p"), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... = 0x304a86e8, 0x3003a7a0, 0x0, 0x4e71c, 0xdb, 0x30828de0), line 1250 in "call.c" import_find_and_load(abs_name = 0x3097dbf8), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x30286ee8, globals = 0x309ecc70, locals = 0x202fc960, fromlist = 0x300bc190, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x30286ee8, globals = 0x309ecc70, locals = 0x202fc960, fromlist = 0x300bc190, level = 269346004), line 1748 in "import.c" import_name(f = 0x100c5e54, name = 0x10338aa8, fromlist = 0x202fc9c0, level = 0x20088734), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x10010cf8, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815071504), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x309ecc70, locals = 0x202fcff0, args = 0x20088734, argcount = 269346004, kwnames = 0x10338aa8, kwargs = 0x309e8f78, kwcount = 537434348, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x200058f8, locals = 0x202fd090, args = 0x20088734, argcount = 269246788, kws = (nil), kwcount = 540004448, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = 
(nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x00000003, globals = 0x300ba1ac, locals = (nil)), line 558 in "ceval.c" builtin_exec(module = 0x10337030, args = (nil), nargs = 540004672, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x00000003, args = 0x307bbca8, nargs = 805795944), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x3007df24, self = 0x103461d0, args = (nil), nargs = 815036216, kwargs = 0x306a9bd0), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x00000004, args = 0x0000008d, nargs = 806068652, kwargs = 0x30089ad8), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = 0x300ba1a8, kwargs = 0x30031290), line 791 in "call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 812293056), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x100c1d90, stack = 0x200058f8, nargs = 540006816, kwnames = 0x4420228f), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 812375540), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 815713456), line 581 in "ceval.c" function_code_fastcall(co = 0x202fe45c, args = 0x3096e3c0, nargs = 540008368, globals = 0x306bddec), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x3096e270, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x0000004e), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 815251852), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 815251856), line 581 
in "ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x10010cf8, stack = 0x300cf030, nargs = 805768508, kwnames = 0x4420228f), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812375148), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 812375148), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x00000002, stack = (nil), nargs = 538317872, kwnames = 0x304e9d30), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x00000005, throwflag = 812342084), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 537028216), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = 0x20161430, nargs = 540013280, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x3082e5a0, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540013552, kwargs = 0x20088734), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = "0N\235\2600N\235p"), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... 
= 0x3091cfa0, 0x3003a7a0, 0x0, 0x4e40f, 0xdb, 0x3082e5a0), line 1250 in "call.c" import_find_and_load(abs_name = 0x30977e28), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x30097df0, globals = 0x309ecdf0, locals = 0x202ff5a0, fromlist = 0x442042c4, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x30097df0, globals = 0x309ecdf0, locals = 0x202ff5a0, fromlist = 0x442042c4, level = 269346004), line 1748 in "import.c" import_name(f = 0x1000eb04, name = 0x10338aa8, fromlist = 0x202ff600, level = 0x200898ec), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x10010cf8, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815011856), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x309ecdf0, locals = 0x202ffc30, args = 0x20088734, argcount = 269346004, kwnames = 0x10338aa8, kwargs = 0x309e8e60, kwcount = 537434348, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x200058f8, locals = 0x202ffcd0, args = 0x20088734, argcount = 269246788, kws = (nil), kwcount = 540015776, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = (nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x00000003, globals = 0x300ba1ac, locals = (nil)), line 558 in "ceval.c" builtin_exec(module = 0x10337030, args = (nil), nargs = 540016000, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x00000003, args = 0x307bbb28, nargs = 805795944), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x3007df24, self = 0x103461d0, args = (nil), nargs = 812148536, kwargs = 0x306a9a58), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x00000004, args = 0x0000008d, nargs = 806068652, kwargs = 0x30089ad8), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = 0x300ba1a8, kwargs = 0x30031290), line 791 in 
"call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 812292680), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x100c1d90, stack = 0x200058f8, nargs = 540018144, kwnames = 0x4420228f), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 812374340), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 815713840), line 581 in "ceval.c" function_code_fastcall(co = 0x2030109c, args = 0x30973f30, nargs = 540019696, globals = 0x306bd93c), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x30973f60, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x0000004a), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812358836), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 812358840), line 581 in "ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x10010cf8, stack = 0x300cf030, nargs = 805768508, kwnames = 0x4420228f), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812373948), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 812373948), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x100c6724, stack = 0x00000311, nargs = 540023088, kwnames = 0x6f636e65), line 410 in "call.c" 
_PyEval_EvalFrameDefault(f = 0x00000005, throwflag = 812195788), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 806149408), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = (nil), nargs = 540024608, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x3082e5a0, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540024880, kwargs = 0x20088734), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = "0^N^Z00^Mop"), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... = 0x3091cf58, 0x3003a7a0, 0x0, 0x4e09d, 0xdb, 0x3082e5a0), line 1250 in "call.c" import_find_and_load(abs_name = 0x30977338), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x3009d568, globals = 0x309ecf70, locals = 0x203021e0, fromlist = 0x482042c4, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x3009d568, globals = 0x309ecf70, locals = 0x203021e0, fromlist = 0x482042c4, level = 269346004), line 1748 in "import.c" import_name(f = 0x203022f0, name = 0x300652f8, fromlist = 0x20302250, level = 0x4220422f), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x10010cf8, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815201728), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x309ecf70, locals = 0x20302870, args = 0x20088734, argcount = 269346004, kwnames = 0x10338aa8, kwargs = 0x309e87a8, kwcount = 537434348, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x200058f8, locals = 0x20302910, args = 0x20088734, argcount = 269246788, kws = (nil), kwcount = 540027104, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = (nil)), line 
4000 in "ceval.c" PyEval_EvalCode(co = 0x00000003, globals = 0x300ba1ac, locals = (nil)), line 558 in "ceval.c" builtin_exec(module = 0x10337030, args = (nil), nargs = 540027328, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x00000003, args = 0x307bba28, nargs = 805795944), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x3007df24, self = 0x103461d0, args = (nil), nargs = 809333848, kwargs = 0x306bd188), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x00000004, args = 0x0000008d, nargs = 806068652, kwargs = 0x30089ad8), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = 0x300ba1a8, kwargs = 0x30031290), line 791 in "call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 812372344), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x102097b0, stack = 0x300b9770, nargs = 537028768, kwnames = 0x20088734), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 812356772), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 815714224), line 581 in "ceval.c" function_code_fastcall(co = 0x20303cdc, args = 0x30946720, nargs = 540031024, globals = 0x306b949c), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x309469c0, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812355980), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 812355984), line 581 in "ceval.c" 
function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x100c6084, stack = 0x00000001, nargs = 540032768, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812325596), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 812325596), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x000084b1, stack = 0x0000405d, nargs = 540164848, kwnames = 0x000084a1), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x00000001, throwflag = 812314220), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 537009192), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = 0x309b7d50, nargs = 814817192, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x100c30bc, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540036208, kwargs = 0x0000014d), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = warning: Unable to access address 0xdbdbdbdb from core (invalid char ptr (0xdbdbdbdb))), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... 
= 0x304e9fa8, 0x3003a7a0, 0x0, 0x4deb7, 0xcb, 0x306aee88), line 1250 in "call.c" import_find_and_load(abs_name = 0x3097abf8), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x30005840, globals = 0x306bc9f0, locals = 0x20304e20, fromlist = 0x20088734, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x30005840, globals = 0x306bc9f0, locals = 0x20304e20, fromlist = 0x20088734, level = 269346004), line 1748 in "import.c" import_name(f = 0x000083df, name = 0x20304f6c, fromlist = 0x20304e80, level = 0x42204842), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x100c1d90, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815247744), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x306bc9f0, locals = 0x203054b0, args = 0x20088734, argcount = 269346004, kwnames = 0x20021c28, kwargs = 0x203054b0, kwcount = 815697696, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x2007d04c, locals = 0x20305520, args = 0x200058f8, argcount = 269237348, kws = 0x300ba1a8, kwcount = 540038448, defs = 0x20088734, defcount = 0, kwdefs = (nil), closure = (nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x309e8f28, globals = 0x300ba1ac, locals = 0x20305560), line 558 in "ceval.c" builtin_exec(module = 0x1000eb04, args = (nil), nargs = 540038592, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x0000001a, args = 0x20077e84, nargs = 815031568), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x100a0d3c, self = 0x103461d0, args = (nil), nargs = 812326392, kwargs = 0x306a9d48), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x3007f8a8, args = 0x306bca70, nargs = 540038944, kwargs = 0x220048cc), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = (nil), kwargs = 0x300927d0), line 791 
in "call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 812293432), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x1018c538, stack = 0x300b9770, nargs = 540040960, kwnames = 0x20088734), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 812373140), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 812370544), line 581 in "ceval.c" function_code_fastcall(co = 0x2030691c, args = 0x30946510, nargs = 540042352, globals = 0x306bd48c), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x309466c0, stack = 0x30055ea8, nargs = 537028192, kwnames = 0x300553e8), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812357204), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 812357208), line 581 in "ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x100a0d3c, stack = 0x3002c958, nargs = 540044096, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 812372748), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 812372748), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x202d2e20, stack = 0x202d6130, nargs = 805955712, kwnames = 0x20161430), line 410 in 
"call.c" _PyEval_EvalFrameDefault(f = 0x00000007, throwflag = 812345196), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 814615216), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = (nil), nargs = 540047264, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x30055b68, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540047536, kwargs = 0x22de5185), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = "0k\312\2600k\312\360"), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... = 0x304a8ba8, 0x3003a7a0, 0x0, 0x4d0b7, 0xdb, 0x30653a40), line 1250 in "call.c" import_find_and_load(abs_name = 0x30960178), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x309a5418, globals = 0x306acf30, locals = 0x20307a60, fromlist = 0x20088734, level = 269346004), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x309a5418, globals = 0x306acf30, locals = 0x20307a60, fromlist = 0x20088734, level = 269346004), line 1748 in "import.c" import_name(f = 0x00008505, name = 0x20307bac, fromlist = 0x20307ac0, level = 0x42204242), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x100c1d90, throwflag = 537410828), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x00000001, throwflag = 815384032), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x30061e28, globals = 0x306acf30, locals = 0x203080f0, args = 0x20088734, argcount = 269346004, kwnames = 0x20021c28, kwargs = 0x203080f0, kwcount = 814950664, kwstep = 2, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = (nil), qualname = (nil)), line 3969 in "ceval.c" PyEval_EvalCodeEx(_co = 0x10109cb0, globals = 0x2007d04c, locals = 0x20308160, args = 0x200058f8, argcount = 269237348, kws = 0x300ba1a8, kwcount = 540049776, defs = 0x20088734, defcount = 0, kwdefs = (nil), 
closure = (nil)), line 4000 in "ceval.c" PyEval_EvalCode(co = 0x30932910, globals = 0x300ba1ac, locals = 0x203081a0), line 558 in "ceval.c" builtin_exec(module = 0x1000eb04, args = (nil), nargs = 540049920, ??), line 1040 in "bltinmodule.c" builtin_exec(module = 0x100c6724, args = 0x300ba1ac, nargs = 0), line 317 in "bltinmodule.c.h" _PyMethodDef_RawFastCallDict(method = 0x100a0d3c, self = 0x103461d0, args = (nil), nargs = 812195432, kwargs = 0x30798a58), line 532 in "call.c" _PyCFunction_FastCallDict(func = 0x3007f8a8, args = 0x306ac730, nargs = 540050272, kwargs = 0x220042cf), line 585 in "call.c" PyCFunction_Call(func = 0x100da54c, args = (nil), kwargs = 0x300927d0), line 791 in "call.c" do_call_core(func = 0x200af330, callargs = 0x300ba1a8, kwdict = 0x30031290), line 4681 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x1034b668, throwflag = 805419800), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x1009f9b0, throwflag = 813271624), line 581 in "ceval.c" _PyEval_EvalCodeWithName(_co = 0x3007bbf8, globals = 0x2008d708, locals = 0x055abdc8, args = 0x30037e30, argcount = 805547336, kwnames = 0x2008d708, kwargs = 0x20161430, kwcount = 0, kwstep = 1, defs = (nil), defcount = 0, kwdefs = (nil), closure = (nil), name = 0x3001bb18, qualname = 0x3001bb18), line 3969 in "ceval.c" _PyFunction_FastCallKeywords(func = 0x100c3dc4, stack = (nil), nargs = 540052288, kwnames = 0x30600d78), line 437 in "call.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 814590276), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30055ce8, throwflag = 812304176), line 581 in "ceval.c" function_code_fastcall(co = 0x2030955c, args = 0x308e0b70, nargs = 540053680, globals = 0x308da93c), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x308e0f30, stack = 0x30055ea8, nargs = 537028192, kwnames = 0xcbcbcbcb), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 815270276), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x3006e348, throwflag = 815270280), line 581 in 
"ceval.c" function_code_fastcall(co = 0x30062568, args = 0x2008d708, nargs = 135563570, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x100c6724, stack = 0x30831cf0, nargs = 540055440, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x1009f88c, throwflag = 814588684), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x1011900c, throwflag = 814588684), line 581 in "ceval.c" function_code_fastcall(co = 0x300625d8, args = 0x2008d708, nargs = 1019748152, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x30061e28, stack = 0x30998070, nargs = 540057104, kwnames = 0x306b2480), line 410 in "call.c" _PyEval_EvalFrameDefault(f = (nil), throwflag = 812195060), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100de860, throwflag = 812316976), line 581 in "ceval.c" function_code_fastcall(co = 0x1009f88c, args = 0xcbcbcbcb, nargs = 540058576, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x100a0d3c, args = (nil), nargs = 537556816, kwargs = (nil)), line 324 in "call.c" _PyObject_FastCallDict(callable = (nil), args = 0x2008bf70, nargs = 540058864, kwargs = 0x306ac8a0), line 100 in "call.c" object_vacall(callable = 0x100de658, vargs = ""), line 1200 in "call.c" _PyObject_CallMethodIdObjArgs(obj = 0x30061db0, name = 0x20039710, ... 
= 0x307a6a28, 0x3003a7a0, 0x0, 0x4b651, 0xcb, 0x30691ba8), line 1250 in "call.c" import_find_and_load(abs_name = 0x3061ee50), line 1652 in "import.c" unnamed block in PyImport_ImportModuleLevelObject(name = 0x30691bb0, globals = 0x20021c28, locals = 0x2030a6a0, fromlist = 0x880042c4, level = 269329740), line 1748 in "import.c" PyImport_ImportModuleLevelObject(name = 0x30691bb0, globals = 0x20021c28, locals = 0x2030a6a0, fromlist = 0x880042c4, level = 269329740), line 1748 in "import.c" import_name(f = 0x100c1d90, name = 0xffffffff, fromlist = 0x2030a710, level = 0x42002224), line 4836 in "ceval.c" _PyEval_EvalFrameDefault(f = 0x100de860, throwflag = 812215736), line 2722 in "ceval.c" PyEval_EvalFrameEx(f = 0x30344060, throwflag = 812151856), line 581 in "ceval.c" function_code_fastcall(co = 0x2030b1bc, args = 0x30621c00, nargs = 540060944, globals = 0x30696dac), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x100fe3d4, stack = 0x305608a4, nargs = 540061056, kwnames = 0x20088734), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x10109334, throwflag = 812316528), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x100fc8d0, throwflag = -889192245), line 581 in "ceval.c" function_code_fastcall(co = 0x30022868, args = 0x20077e84, nargs = 807677136, globals = (nil)), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x200009f0, args = 0x20077e84, nargs = 537028216, kwargs = 0x3024c82e), line 324 in "call.c" _PyObject_FastCallDict(callable = 0x00000002, args = 0x20077e84, nargs = 16, kwargs = (nil)), line 100 in "call.c" _PyObject_Call_Prepend(callable = 0x30245e00, obj = 0x306ac5b0, args = (nil), kwargs = 0x306acfb0), line 906 in "call.c" method_call(method = 0x1021f5ac, args = (nil), kwargs = 0x2030b570), line 304 in "classobject.c" PyObject_Call(callable = 0x100a2254, args = 0x20021c28, kwargs = 0x2030b570), line 247 in "call.c" do_call_core(func = 0x308ddf90, callargs = 0x30250ca0, kwdict = 0x20026660), line 4709 in "ceval.c" _PyEval_EvalFrameDefault(f = 
0x300bae00, throwflag = 812198764), line 3285 in "ceval.c" PyEval_EvalFrameEx(f = 0x30048aa8, throwflag = 812306352), line 581 in "ceval.c" function_code_fastcall(co = 0x2030c08c, args = 0x308dd090, nargs = 540064736, globals = 0x30037e30), line 285 in "call.c" _PyFunction_FastCallKeywords(func = (nil), stack = (nil), nargs = 0, kwnames = (nil)), line 410 in "call.c" _PyEval_EvalFrameDefault(f = 0x100c6144, throwflag = 812196632), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x30254658, throwflag = 812306352), line 581 in "ceval.c" function_code_fastcall(co = 0x2030c6fc, args = 0x308dd090, nargs = 540066384, globals = 0x20088734), line 285 in "call.c" _PyFunction_FastCallKeywords(func = 0x30825b80, stack = 0x200058f8, nargs = 540066496, kwnames = 0x308ddb08), line 410 in "call.c" _PyEval_EvalFrameDefault(f = (nil), throwflag = -875836469), line 4655 in "ceval.c" PyEval_EvalFrameEx(f = 0x10144cb8, throwflag = 812131072), line 581 in "ceval.c" function_code_fastcall(co = 0x202d2110, args = 0x202d2118, nargs = 1548420902, globals = 0x0005e716), line 285 in "call.c" _PyFunction_FastCallDict(func = 0x10010cf8, args = 0x2030c9d0, nargs = 1548420902, kwargs = 0x0005e715), line 324 in "call.c" _PyObject_FastCallDict(callable = 0xd0127034, args = 0xcbcbcbcb, nargs = 540068256, kwargs = 0xf0766408), line 100 in "call.c" _PyObject_Call_Prepend(callable = 0x1000f0f0, obj = 0x175a9b70, args = 0x2030ca20, kwargs = 0x20088734), line 906 in "call.c" method_call(method = 0x100fd6d4, args = 0x200058c8, kwargs = 0x2030ca70), line 304 in "classobject.c" PyObject_Call(callable = 0x20081d14, args = 0x3082ab90, kwargs = 0x2030cab0), line 247 in "call.c" t_bootstrap(boot_raw = 0xcbcbcbcb), line 994 in "_threadmodule.c" pythread_wrapper(arg = (nil)), line 174 in "thread_pthread.h" (dbx) ---------- messages: 334359 nosy: Michael.Felt priority: normal severity: normal status: open title: test_multiprocessing_* tests - success versus fail varies over time type: crash versions: Python 3.8 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 14:36:47 2019 From: report at bugs.python.org (rdb) Date: Fri, 25 Jan 2019 19:36:47 +0000 Subject: [New-bugs-announce] [issue35829] datetime: parse "Z" timezone suffix in fromisoformat() Message-ID: <1548445007.56.0.14925163397.issue35829@roundup.psfhosted.org> New submission from rdb : The fromisoformat() function added in 3.7 is a very welcome addition. But one quite noticeable absence was the inability to parse Z instead of +00:00 as the timezone suffix. Its absence is particularly noticeable given how ubiquitous use of Z is in ISO 8601 timestamps on the web; it is also part of the RFC 3339 subset. In particular, JavaScript produces it in its canonical ISO 8601 format and is therefore quite common in JSON APIs; this would be the only piece missing to parse ISO dates produced by JavaScript correctly. I realise that the function was not intended to be able to parse *all* timestamps. But given the triviality of this change, the ubiquity of this particular formatting feature, and the fact that this change is designed in particular for operability with the widely-used JavaScript date format, I don't think this is a slippery slope, and I would personally see no harm in accepting a 'Z' instead of a timezone. I am happy to follow up with a patch for this, but would first like confirmation that there is any chance that such a change would be accepted. Thanks for your consideration! 
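Until such support exists, the usual workaround is to normalize the suffix before parsing. This is only a sketch of that workaround (the helper name parse_iso_z is hypothetical, not part of any proposed patch):

```python
from datetime import datetime, timezone

def parse_iso_z(stamp):
    # Map a trailing "Z" (RFC 3339 "Zulu"/UTC designator) to the
    # "+00:00" offset syntax that fromisoformat() already accepts.
    if stamp.endswith("Z"):
        stamp = stamp[:-1] + "+00:00"
    return datetime.fromisoformat(stamp)

dt = parse_iso_z("2019-01-25T19:36:47Z")
print(dt.tzinfo)  # UTC
```

This relies only on fromisoformat() as added in Python 3.7; it does not handle the lowercase "z" that full ISO 8601 parsers also accept.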
---------- components: Library (Lib) messages: 334365 nosy: rdb priority: normal severity: normal status: open title: datetime: parse "Z" timezone suffix in fromisoformat() type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 19:55:33 2019 From: report at bugs.python.org (Stefan Seefeld) Date: Sat, 26 Jan 2019 00:55:33 +0000 Subject: [New-bugs-announce] [issue35830] building multiple (binary) packages from a single project Message-ID: <1548464133.32.0.379789250574.issue35830@roundup.psfhosted.org> New submission from Stefan Seefeld : I'm working on a project that I'd like to split into multiple separately installable components. The main component is a command-line tool without any external dependencies. Another component is a GUI frontend that adds some third-party dependencies. Therefore, I'd like to distribute the code in a single source package, but separate binary packages (so users can install only what they actually need). I couldn't find any obvious way to support such a scenario with either `distutils` or `setuptools`. Is there an easy solution to this? (I'm currently thinking of adding two `setup()` calls to my `setup.py` script. That would then call all commands twice, so I'd need to override the `sdist` command to only build a single (joint) source package.) Is there a better way to achieve what I want?
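One hedged sketch of the "single setup.py, multiple components" idea is to drive the script from an environment variable, so each invocation builds exactly one binary package. Everything here is an assumption for illustration (the component names, the COMPONENT variable, the dependency lists), not something distutils or setuptools prescribes:

```python
import os

# Hypothetical component table for a project like the one described:
# a dependency-free CLI plus a GUI frontend with extra requirements.
COMPONENTS = {
    "cli": {"name": "mytool", "packages": ["mytool"],
            "install_requires": []},
    "gui": {"name": "mytool-gui", "packages": ["mytool_gui"],
            "install_requires": ["mytool", "some-gui-toolkit"]},
}

def component_args(component):
    # Select the setup() keyword arguments for one component.
    return COMPONENTS[component]

# In setup.py one would then do something like:
#   from setuptools import setup
#   setup(**component_args(os.environ.get("COMPONENT", "cli")))
```

With such a layout, `COMPONENT=gui python setup.py bdist_wheel` would build only the GUI wheel, while `sdist` could be run once for the joint source archive.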
---------- assignee: docs at python components: Distutils, Documentation messages: 334381 nosy: docs at python, dstufft, eric.araujo, stefan priority: normal severity: normal status: open title: building multiple (binary) packages from a single project type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jan 25 23:06:00 2019 From: report at bugs.python.org (Mitchell L Model) Date: Sat, 26 Jan 2019 04:06:00 +0000 Subject: [New-bugs-announce] [issue35831] Format Spec example says limited to 3.1+ but works in 2.7 Message-ID: <1548475560.75.0.51760850853.issue35831@roundup.psfhosted.org> New submission from Mitchell L Model : https://docs.python.org/3/library/string.html#format-examples includes this line: '{}, {}, {}'.format('a', 'b', 'c') # 3.1+ only This does in fact work in 2.7. I don't see anything special about this -- seems an entirely straightforward format. ---------- messages: 334385 nosy: mlm priority: normal severity: normal status: open title: Format Spec example says limited to 3.1+ but works in 2.7 versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 26 07:21:36 2019 From: report at bugs.python.org (Stefano Bonalumi) Date: Sat, 26 Jan 2019 12:21:36 +0000 Subject: [New-bugs-announce] [issue35832] Installation error Message-ID: <1548505296.29.0.442399484637.issue35832@roundup.psfhosted.org> New submission from Stefano Bonalumi : Hi, I get the following installation error code when I try to install the Python 3.7.2 version on Windows 10. See the attached files. ---------- components: Installation files: Annotazione 2019-01-26 132044.jpg messages: 334390 nosy: Stefano Bonalumi priority: normal severity: normal status: open title: Installation error type: crash versions: Python 3.7 Added file:
https://bugs.python.org/file48078/Annotazione 2019-01-26 132044.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 26 09:52:50 2019 From: report at bugs.python.org (Dude Roast) Date: Sat, 26 Jan 2019 14:52:50 +0000 Subject: [New-bugs-announce] [issue35833] Backspace not working Message-ID: <1548514370.87.0.460753518023.issue35833@roundup.psfhosted.org> New submission from Dude Roast : Whenever I try to use Backspace (\b) it always prints square boxes instead of deleting the previous string. Code Down:-

    import pyautogui

    print("Press Ctrl+c to quit")
    try:
        while True:
            x, y = pyautogui.position()
            positionStr = "X: " + str(x).rjust(4) + " Y: " + str(y).rjust(4)
            print(positionStr, end='')
            print('\b'*len(positionStr), end='', flush=True)
    except KeyboardInterrupt:
        print("\nDone")

O/P:-

    Press Ctrl+c to quit
    X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261X: 317 Y: 261
    Done

---------- assignee: terry.reedy components: IDLE files: mousenow.py messages: 334394 nosy: Dude Roast, terry.reedy priority: normal severity: normal status: open title: Backspace not working type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48079/mousenow.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 26 12:00:18 2019 From: report at bugs.python.org (Lincoln Quirk) Date: Sat, 26 Jan 2019 17:00:18 +0000 Subject: [New-bugs-announce] [issue35834] get_type_hints exposes an instance of ForwardRef (internal class) in its result, with `from __future__ import annotations` enabled Message-ID: <1548522018.14.0.654103456004.issue35834@roundup.psfhosted.org> New submission from Lincoln Quirk : Consider this code:

```
from __future__ import annotations
import typing

class A:
    f: 'Undef'

hints = typing.get_type_hints(A)
``` Since Undef is not defined, I should get an exception when calling get_type_hints, something like "NameError: name 'Undef' is not defined". But instead, get_type_hints returns {'f': ForwardRef('Undef')}. If I remove the `from __future__ import annotations` line, get_type_hints correctly raises this exception. I think the behavior should be to raise an exception in both cases. ---------- messages: 334396 nosy: Lincoln Quirk priority: normal severity: normal status: open title: get_type_hints exposes an instance of ForwardRef (internal class) in its result, with `from __future__ import annotations` enabled type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jan 26 18:09:09 2019 From: report at bugs.python.org (jcrmatos) Date: Sat, 26 Jan 2019 23:09:09 +0000 Subject: [New-bugs-announce] [issue35835] There is no mention of breakpoint() in the pdb documentation Message-ID: <1548544149.44.0.0958996770934.issue35835@roundup.psfhosted.org> New submission from jcrmatos : In the Pdb documentation, found at https://docs.python.org/3.7/library/pdb.html?highlight=pdb#module-pdb there is no mention of breakpoint(). In my opinion, this text import pdb; pdb.set_trace() should be replaced with import pdb; pdb.set_trace() New in version 3.7: breakpoint() replaces the previous line. 
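For context on the pdb suggestion above: breakpoint() is only a thin dispatcher. Per PEP 553 it calls sys.breakpointhook(), which defaults to pdb.set_trace() and can be redirected, either via the PYTHONBREAKPOINT environment variable or directly, as this sketch shows:

```python
import sys

hits = []

def fake_debugger(*args, **kwargs):
    # Stand-in for pdb.set_trace(), recording that it was entered.
    hits.append((args, kwargs))

# breakpoint() forwards all of its arguments to sys.breakpointhook().
sys.breakpointhook = fake_debugger
breakpoint()
print(hits)  # [((), {})]
```

Setting PYTHONBREAKPOINT=0 disables breakpoint() entirely, which is part of why the docs suggestion above is worth cross-referencing from the pdb page.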
Thanks, JM ---------- messages: 334406 nosy: jcrmatos priority: normal severity: normal status: open title: There is no mention of breakpoint() in the pdb documentation type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 27 05:24:57 2019 From: report at bugs.python.org (jcrmatos) Date: Sun, 27 Jan 2019 10:24:57 +0000 Subject: [New-bugs-announce] [issue35836] ZeroDivisionError class should have a __name__ attr Message-ID: <1548584697.69.0.29003071368.issue35836@roundup.psfhosted.org> New submission from jcrmatos : Hello, When trying this

    try:
        1/0
    except Exception as exc:
        print(type(exc))     # returns <class 'ZeroDivisionError'>
        print(exc.__name__)  # raises AttributeError: 'ZeroDivisionError' object has no attribute '__name__'

I believe all classes should have a __name__ attr, correct? It would be nice to check all the other exceptions to see if any other is also missing the __name__ attr. Thanks, JM ---------- messages: 334416 nosy: jcrmatos priority: normal severity: normal status: open title: ZeroDivisionError class should have a __name__ attr versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 27 07:56:37 2019 From: report at bugs.python.org (Sjoerd) Date: Sun, 27 Jan 2019 12:56:37 +0000 Subject: [New-bugs-announce] [issue35837] smtpd PureProxy breaks on mail_options keyword argument Message-ID: <1548593797.17.0.823741388378.issue35837@roundup.psfhosted.org> New submission from Sjoerd : According to https://python.readthedocs.io/en/stable/whatsnew/3.5.html: The SMTPServer class now advertises the 8BITMIME extension (RFC 6152) if decode_data has been set True. If the client specifies BODY=8BITMIME on the MAIL command, it is passed to SMTPServer.process_message() via the mail_options keyword. (Contributed by Milan Oberkirch and R. David Murray in bpo-21795.)
This means that process_message gets a mail_options kwarg. However, the smtpd PureProxy and MailmanProxy don't take keyword arguments, which results in an exception. One way to trigger this is to run a debug mailserver and send a mail to it:

    $ python3 -m smtpd -n
    error: uncaptured python exception, closing channel <__main__.SMTPChannel connected ('::1', 52007, 0, 0) at 0x10e7eddd8> (<class 'TypeError'>:process_message() got an unexpected keyword argument 'mail_options' [/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncore.py|read|83] [/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncore.py|handle_read_event|422] [/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asynchat.py|handle_read|171] [/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtpd.py|found_terminator|386])

---------- components: Library (Lib) messages: 334424 nosy: Sjoerder, giampaolo.rodola, r.david.murray priority: normal severity: normal status: open title: smtpd PureProxy breaks on mail_options keyword argument versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jan 27 17:27:01 2019 From: report at bugs.python.org (Phil Kang) Date: Sun, 27 Jan 2019 22:27:01 +0000 Subject: [New-bugs-announce] [issue35838] ConfigParser calls optionxform twice when assigning dict Message-ID: <1548628021.95.0.834258344082.issue35838@roundup.psfhosted.org> New submission from Phil Kang : ConfigParser calls ConfigParser.optionxform twice for each key when assigning a dictionary to a section.
The following code:

    import configparser
    import io

    ini = configparser.ConfigParser()
    ini.optionxform = lambda x: '(' + x + ')'

    # Bugged
    ini['section A'] = {'key 1': 'value 1', 'key 2': 'value 2'}

    # Not bugged
    ini.add_section('section B')
    ini['section B']['key 3'] = 'value 3'
    ini['section B']['key 4'] = 'value 4'

    inifile = io.StringIO()
    ini.write(inifile)
    print(inifile.getvalue())

...results in an INI file that looks like:

    [section A]
    ((key 1)) = value 1
    ((key 2)) = value 2

    [section B]
    (key 3) = value 3
    (key 4) = value 4

Here, optionxform has been called twice on key 1 and key 2, resulting in the double parentheses. This also breaks conventional mapping access on the ConfigParser:

    print(ini['section A']['key 1'])    # Raises KeyError('key 1')
    print(ini['section A']['(key 1)'])  # OK

    # Raises ValueError: too many values to unpack (expected 2)
    for key, value in ini['section A']:
        print(key + ', ' + value)

---------- components: Library (Lib) messages: 334439 nosy: Phil Kang priority: normal severity: normal status: open title: ConfigParser calls optionxform twice when assigning dict type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 01:54:14 2019 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 28 Jan 2019 06:54:14 +0000 Subject: [New-bugs-announce] [issue35839] Suggestion: Ignore sys.modules entries with no __spec__ attribute in find_spec Message-ID: <1548658454.23.0.545832854699.issue35839@roundup.psfhosted.org> New submission from Nick Coghlan : (Alternate proposal inspired by the discussions in #35806 and #35791) Currently, a sys.modules entry with no __spec__ attribute will prevent importlib.util.find_spec() from locating the module, requiring the following workaround:

    import importlib.util
    import sys

    def find_spec_bypassing_module_cache(modname):
        _missing = object()
        module = sys.modules.pop(modname, _missing)
        try:
            spec = importlib.util.find_spec(modname)
        finally:
            if module is not _missing:
                sys.modules[modname] = module
        return spec

The big downside of that approach is that it requires mutation of global state in order to work. One of the easiest ways for this situation to be encountered is with code that replaces itself in sys.modules as a side effect of import, and doesn't bind __spec__ on the new object to the original module __spec__. While we could take the hard line that all modules doing that need to transfer the attribute in order to be properly compatible with find_spec, I think there's a more pragmatic path we can take by differentiating between "__spec__ attribute doesn't exist" and "__spec__ attribute exists, but is None". "__spec__ attribute doesn't exist" would be handled by find_spec as "Ignore the sys.modules entry entirely, and run the same search that would be run if the cache lookup had failed". This will then implicitly handle cases where a module replaces its own sys.modules entry. By contrast, "__spec__ attribute is set to None" would be a true negative cache entry that indicated "this is a synthetic module that cannot be directly introspected or reloaded". ---------- components: Library (Lib) messages: 334446 nosy: brett.cannon, eric.snow, ncoghlan, ronaldoussoren priority: normal severity: normal status: open title: Suggestion: Ignore sys.modules entries with no __spec__ attribute in find_spec type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 02:54:05 2019 From: report at bugs.python.org (Marc Schlaich) Date: Mon, 28 Jan 2019 07:54:05 +0000 Subject: [New-bugs-announce] [issue35840] Control flow inconsistency on closed asyncio stream Message-ID: <1548662045.39.0.770532095049.issue35840@roundup.psfhosted.org> New submission from Marc Schlaich : After closing a StreamWriter the `StreamReaderProtocol.connection_lost` on the other end is not getting called.
In this case the StreamReader is at EOF but calling write/drain does not raise any Exception (and sending data to Nirvana). I would expect that StreamWriter.is_closing returns True after the close and calling write/drain raises immediately and not just after the second call. Please see attached example. I see the same behavior with Proactor and Selector event loop on Windows. Maybe this is expected behavior. But in this case it is completely undocumented. Should there be a check for `StreamReader.at_eof` (and maybe `StreamReader.exception`) before writing to the StreamWriter? This might be related to bpo-34176. ---------- components: asyncio files: tcp_test.py messages: 334450 nosy: asvetlov, schlamar, yselivanov priority: normal severity: normal status: open title: Control flow inconsistency on closed asyncio stream type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48082/tcp_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 07:32:09 2019 From: report at bugs.python.org (Tommy Rowland) Date: Mon, 28 Jan 2019 12:32:09 +0000 Subject: [New-bugs-announce] [issue35841] Datetime strftime() does not return correct week numbers for 2019 Message-ID: <1548678729.34.0.0402959484409.issue35841@roundup.psfhosted.org> New submission from Tommy Rowland : This relates to the calculation of the week number from a given datetime, when calling the strftime method. If you call isocalendar() on the datetime.datetime object for the date "2018-12-31", the week number returned is 1, which is correct. This is the same when checking the week attribute for the pandas timestamp equivalent. However, when you call strftime on this object (either datetime or timestamp), passing the '%W' format string, it returns 53, and then returns 00 for the remainder of the week. It seems that the rest of the weeks in 2019 are out by 1 when returned using this function.
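Worth noting when reading the report above: isocalendar() and %W implement two different week-numbering schemes, so their values legitimately disagree around New Year rather than one being off by one. isocalendar() follows ISO 8601, while %W counts weeks from the first Monday of the calendar year; the ISO numbers are also exposed via the %G/%V format codes where the platform C library supports them. A small demonstration:

```python
from datetime import date

d = date(2018, 12, 31)         # a Monday
print(tuple(d.isocalendar()))  # (2019, 1, 1) -- ISO year, ISO week, ISO weekday
print(d.strftime('%W'))        # '53' -- weeks counted from the first Monday of 2018
```

Since 2018-01-01 was also a Monday, 2018-12-31 begins the 53rd Monday-based week of 2018, even though in ISO terms it is already week 1 of 2019.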
This issue seems to be present with the strptime function also. ---------- components: Extension Modules, Windows files: Python Datetime Issue.JPG messages: 334462 nosy: paul.moore, steve.dower, tim.golden, tr12, zach.ware priority: normal severity: normal status: open title: Datetime strftime() does not return correct week numbers for 2019 type: behavior versions: Python 2.7, Python 3.6 Added file: https://bugs.python.org/file48083/Python Datetime Issue.JPG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 08:38:04 2019 From: report at bugs.python.org (rongxin) Date: Mon, 28 Jan 2019 13:38:04 +0000 Subject: [New-bugs-announce] [issue35842] A potential bug about use of uninitialised variable Message-ID: <1548682684.86.0.506675719122.issue35842@roundup.psfhosted.org> New submission from rongxin : In the source file mmapmodule.c, the function mmap_subscript contains a potential bug about the use of an uninitialised variable.

mmapmodule.c:

    764 static PyObject *
    765 mmap_subscript(mmap_object *self, PyObject *item)
    766 {
        ...
        else if (PySlice_Check(item)) {
    782     Py_ssize_t start, stop, step, slicelen;
    783
    784     if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
    785         return NULL;
    786     }
    787     slicelen = PySlice_AdjustIndices(self->size, &start, &stop, step);
        ...

In Line 782 of the file mmapmodule.c, the variable stop is not initialised and will be passed to the function PySlice_Unpack as the third parameter. Inside the function, it is likely that stop is not initialised. Please see the following code.

sliceobject.c:

    196 int
    197 PySlice_Unpack(PyObject *_r,
    198                Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)
    199 {
        ...
    231     if (r->stop == Py_None) {
    232         *stop = *step < 0 ? PY_SSIZE_T_MIN : PY_SSIZE_T_MAX;
    233     }
    234     else {
    235         if (!_PyEval_SliceIndex(r->stop, stop)) return -1;
    236     }

The third parameter **stop** may be changed at line 232 or 235.
However, it is still likely that **stop** is not initialised at Line 235, where **stop** is passed as the second parameter. Note that, at Line 235, we only know r->stop != Py_None. The following is the code snippet of the function _PyEval_SliceIndex.

ceval.c:

    4718 int
    4719 _PyEval_SliceIndex(PyObject *v, Py_ssize_t *pi)
    4720 {
    4721     if (v != Py_None) {
    4722         Py_ssize_t x;
    4723         if (PyIndex_Check(v)) {
    4724             x = PyNumber_AsSsize_t(v, NULL);
    4725             if (x == -1 && PyErr_Occurred())
    4726                 return 0;
    4727         }
    4728         else {
    4729             PyErr_SetString(PyExc_TypeError,
    4730                             "slice indices must be integers or "
    4731                             "None or have an __index__ method");
    4732             return 0;
    4733         }
    4734         *pi = x;
    4735     }
    4736     return 1;
    4737 }

As we can see, it is likely that when the first parameter v is NULL, the function _PyEval_SliceIndex will return 1. In the caller function PySlice_Unpack, at Line 235, the condition **if (!_PyEval_SliceIndex(r->stop, stop))** is then not satisfied, and thus it will go to Line 238 which returns 0. In the caller function mmap_subscript in the file mmapmodule.c, at Line 784, since the return value is 0, the path condition **PySlice_Unpack(item, &start, &stop, &step) < 0** is not satisfied. It will continue to execute Line 787. The uninitialised variable **stop** again will be passed to the function PySlice_AdjustIndices as the third parameter. **stop** then will be dereferenced without initialisation. Please see the following.

sliceobject.c:

    241 Py_ssize_t
    242 PySlice_AdjustIndices(Py_ssize_t length,
    243                       Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t step)
        ...
    260     if (*stop < 0) {
    261         *stop += length;
        ...
---------- messages: 334466 nosy: wurongxin1987 priority: normal severity: normal status: open title: A potential bug about use of uninitialised variable type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 13:03:10 2019 From: report at bugs.python.org (Anthony Sottile) Date: Mon, 28 Jan 2019 18:03:10 +0000 Subject: [New-bugs-announce] [issue35843] importlib.util docs for namespace packages innaccurate Message-ID: <1548698590.94.0.426601552573.issue35843@roundup.psfhosted.org> New submission from Anthony Sottile : For instance:

    # `a` is an empty directory, a PEP 420 namespace package
    >>> import importlib.util
    >>> importlib.util.find_spec('a')
    ModuleSpec(name='a', loader=None, origin='namespace', submodule_search_locations=_NamespacePath(['/tmp/x/a']))

https://docs.python.org/3/library/importlib.html#importlib.machinery.ModuleSpec.origin

> ... Normally "origin" should be set, but it may be None (the default) which indicates it is unspecified (e.g. for namespace packages).

above the `origin` is `'namespace'`

https://docs.python.org/3/library/importlib.html#importlib.machinery.ModuleSpec.submodule_search_locations

> List of strings for where to find submodules, if a package (None otherwise).
However the `_NamespacePath` object above is not indexable:

    >>> x = importlib.util.find_spec('a').submodule_search_locations
    >>> x[0]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: '_NamespacePath' object does not support indexing

I can work around however with:

    >>> next(iter(x))
    '/tmp/x/a'

======================

so I guess a few things can/should come out of this:

- Document the `'namespace'` origin
- Document that `submodule_search_locations` is a Sized[str] instead
- Add `__getitem__` to `_NamespacePath` such that it implements the full `Sized` protocol

---------- assignee: docs at python components: Documentation messages: 334484 nosy: Anthony Sottile, docs at python priority: normal severity: normal status: open title: importlib.util docs for namespace packages innaccurate versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 15:28:52 2019 From: report at bugs.python.org (Samuel Grayson) Date: Mon, 28 Jan 2019 20:28:52 +0000 Subject: [New-bugs-announce] [issue35844] Calling `Multiprocessing.Queue.close()` too quickly causes intermittent failure (BrokenPipeError) Message-ID: <1548707332.73.0.0560406738777.issue35844@roundup.psfhosted.org> New submission from Samuel Grayson : If all processes try to close the Queue immediately after someone has written to it, this causes [an error][1] (see the link for more details). Uncommenting any of the `time.sleep`s makes it work consistently again.

    import multiprocessing
    import time
    import logging
    import multiprocessing.util

    multiprocessing.util.log_to_stderr(level=logging.DEBUG)

    queue = multiprocessing.Queue(maxsize=10)

    def worker(queue):
        queue.put('abcdefghijklmnop')

        # "Indicate that no more data will be put on this queue by the
        # current process." --Documentation
        # time.sleep(0.01)
        queue.close()

    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()

    # "Indicate that no more data will be put on this queue by the current
    # process." --Documentation
    # time.sleep(0.01)
    queue.close()
    proc.join()

Perhaps this is because I am not understanding the documentation correctly, but in that case I would contend this is a documentation bug.

    Traceback (most recent call last):
      File "/usr/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
        send_bytes(obj)
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
        self._send_bytes(m[offset:offset + size])
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
        self._send(header + buf)
      File "/usr/lib/python3.7/multiprocessing/connection.py", line 368, in _send
        n = write(self._handle, buf)
    BrokenPipeError: [Errno 32] Broken pipe

[1]: https://stackoverflow.com/q/51680479/1078199 ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 334490 nosy: charmonium, docs at python priority: normal severity: normal status: open title: Calling `Multiprocessing.Queue.close()` too quickly causes intermittent failure (BrokenPipeError) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jan 28 15:51:37 2019 From: report at bugs.python.org (Antoine Pitrou) Date: Mon, 28 Jan 2019 20:51:37 +0000 Subject: [New-bugs-announce] [issue35845] Can't read a F-contiguous memoryview in physical order Message-ID: <1548708697.36.0.695790780311.issue35845@roundup.psfhosted.org> New submission from Antoine Pitrou : This request is motivated in detail here: https://github.com/python/peps/pull/883#issuecomment-458290745 In short: in C, when you have a Py_buffer, you can directly read the memory in whatever order you want (including physical order). It is not possible in pure Python, though.
Somewhat unintuitively, memoryview.tobytes() as well as bytes(memoryview) read bytes in *logical* order, even though it flattens the dimensions and doesn't keep the original type. Logical order is different from physical order for Fortran-contiguous arrays. One possible way of alleviating this would be to offer a memoryview.transpose() method, similar to the Numpy transpose() method (see https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.transpose.html). One could also imagine a memoryview.to_c_contiguous() method. Or even: a memoryview.raw_memory() method, that would 1) flatten dimensions 2) cast to 'B' format 3) keep physical order. ---------- components: Interpreter Core messages: 334491 nosy: pitrou, skrah priority: normal severity: normal stage: needs patch status: open title: Can't read a F-contiguous memoryview in physical order type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 03:06:08 2019 From: report at bugs.python.org (Pascal Bugnion) Date: Tue, 29 Jan 2019 08:06:08 +0000 Subject: [New-bugs-announce] [issue35846] Incomplete documentation for re.sub Message-ID: <1548749168.19.0.209945761228.issue35846@roundup.psfhosted.org> New submission from Pascal Bugnion : The documentation for `re.sub` states that "Unknown escapes such as ``\&`` are left alone.". This is only true for escapes which are not ascii characters, as far as I can tell (c.f. source on https://github.com/python/cpython/blob/master/Lib/sre_parse.py#L1047). Would there be value in amending that documentation to either remove that sentence or to clarify it? If so, I'm happy to submit a PR on GitHub. 
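The re.sub distinction the report is after can be shown directly. As of Python 3.7, an unknown escape of an ASCII letter in the replacement string is an error, while an escape of punctuation such as \& really is left alone:

```python
import re

print(re.sub('x', r'\&', 'x'))   # \&  -- non-letter escape passed through unchanged

try:
    re.sub('x', r'\w', 'x')      # unknown letter escape in the replacement
except re.error as exc:
    print(exc)                   # bad escape \w at position 0
```

So the documented sentence is accurate for non-letter escapes; it is the letter escapes (which were only deprecated in 3.6) that now raise, which is presumably what the proposed clarification should say.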
---------- components: Regular Expressions messages: 334504 nosy: ezio.melotti, mrabarnett, pbugnion priority: normal severity: normal status: open title: Incomplete documentation for re.sub versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 03:35:39 2019 From: report at bugs.python.org (Andreas Schwab) Date: Tue, 29 Jan 2019 08:35:39 +0000 Subject: [New-bugs-announce] [issue35847] RISC-V needs CTYPES_PASS_BY_REF_HACK Message-ID: <1548750939.45.0.386681668204.issue35847@roundup.psfhosted.org> New submission from Andreas Schwab :

    ======================================================================
    FAIL: test_pass_by_value (ctypes.test.test_structures.StructureTestCase)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/abuild/rpmbuild/BUILD/Python-3.7.2/Lib/ctypes/test/test_structures.py", line 416, in test_pass_by_value
        self.assertEqual(s.first, 0xdeadbeef)
    AssertionError: 195948557 != 3735928559
    ----------------------------------------------------------------------

---------- components: ctypes messages: 334505 nosy: schwab priority: normal severity: normal status: open title: RISC-V needs CTYPES_PASS_BY_REF_HACK type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 04:18:40 2019 From: report at bugs.python.org (Steve Palmer) Date: Tue, 29 Jan 2019 09:18:40 +0000 Subject: [New-bugs-announce] [issue35848] readinto is not a method on io.TextIOBase Message-ID: <1548753520.89.0.970608711782.issue35848@roundup.psfhosted.org> New submission from Steve Palmer : class io.IOBase states "Even though IOBase does not declare read(), readinto(), or write() because their signatures will vary, implementations and clients should consider those methods part of the
interface. Also, implementations may raise a ValueError (or UnsupportedOperation) when operations they do not support are called." However, even though class io.TextIOBase is described as inheriting from io.IOBase, a call to the readinto method raises an AttributeError exception indicating there is no readinto attribute, inconsistent with the documentation. ---------- components: IO messages: 334507 nosy: steverpalmer priority: normal severity: normal status: open title: readinto is not a method on io.TextIOBase type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 07:52:11 2019 From: report at bugs.python.org (Addons Zz) Date: Tue, 29 Jan 2019 12:52:11 +0000 Subject: [New-bugs-announce] [issue35849] Added thousands separators to Lib/pstats.py final report Message-ID: <1548766331.46.0.0833286868767.issue35849@roundup.psfhosted.org> New submission from Addons Zz : Instead of doing:

```
10056.0 function calls in 0.006 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      1.0    0.002    0.002    0.006    0.006 benchmark_tests.py:121(logging_mod_log_debuglog_off)
   5000.0    0.002    0.000    0.004    0.000 F:\Python\lib\logging\__init__.py:1362(debug)
   5000.0    0.001    0.000    0.001    0.000 F:\Python\lib\logging\__init__.py:1620(isEnabledFor)
```

Do:

```
10,056.0 function calls in 0.006 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      1.0    0.002    0.002    0.006    0.006 benchmark_tests.py:121(logging_mod_log_debuglog_off)
  5,000.0    0.002    0.000    0.004    0.000 F:\Python\lib\logging\__init__.py:1362(debug)
  5,000.0    0.001    0.000    0.001    0.000 F:\Python\lib\logging\__init__.py:1620(isEnabledFor)
```

---------- components: Library (Lib) messages: 334513 nosy: addons_zz priority: normal severity: normal status: open title: Added thousands separators to Lib/pstats.py final report type: enhancement versions: Python 3.8
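The separators requested above would not need new machinery: Python's format specification mini-language already provides the ',' option, so the change amounts to switching the count formatting to something like the following sketch (fmt_calls is a hypothetical helper, not the actual pstats code, which uses plain printf-style formats):

```python
def fmt_calls(n):
    # Format a call count with thousands separators using the ',' option
    # of the format spec mini-language (PEP 378).
    return f"{n:,}"

print(fmt_calls(10056.0))  # 10,056.0
print(fmt_calls(5000))     # 5,000
```

The ',' option works for both ints and floats, so it covers the fractional call counts pstats prints for recursive calls as well.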
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 08:50:47 2019 From: report at bugs.python.org (Previn Kutty) Date: Tue, 29 Jan 2019 13:50:47 +0000 Subject: [New-bugs-announce] [issue35850] CKAN installation went on script error Message-ID: <1548769847.83.0.810844890416.issue35850@roundup.psfhosted.org> New submission from Previn Kutty : Command : paster make-config ckan development.ini was showing the following error (ckan) C:\applns\ckan-master>paster make-config ckan test.ini Distribution already installed: ckan 2.8.2 from c:\users\x179706\envs\ckan\lib\site-packages\ckan-2.8.2-py3.7.egg Traceback (most recent call last): File "C:\applns\Python37-32\Lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\applns\Python37-32\Lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\x179706\Envs\ckan\Scripts\paster.exe\__main__.py", line 9, in File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\command.py", line 102, in run invoke(command, command_name, options, args[1:]) File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\command.py", line 141, in invoke exit_code = runner.run(args) File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\appinstall.py", line 66, in run return super(AbstractInstallCommand, self).run(new_args) File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\command.py", line 236, in run result = self.command() File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\appinstall.py", line 293, in command self.distro, self.options.ep_group, self.options.ep_name) File "c:\users\x179706\envs\ckan\lib\site-packages\paste\script\appinstall.py", line 232, in get_installer 'paste.app_install', ep_name) File "c:\users\x179706\envs\ckan\lib\site-packages\pkg_resources\__init__.py", line 2728, in load_entry_point =============================================== Python version :
3.7.2 CKAN version : 2.8.2 -> https://github.com/ckan/ckan ---------- components: Library (Lib), Windows messages: 334517 nosy: Previn Kutty, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: CKAN installation went on script error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 09:40:07 2019 From: report at bugs.python.org (Roel Schroeven) Date: Tue, 29 Jan 2019 14:40:07 +0000 Subject: [New-bugs-announce] [issue35851] Make search result in online docs keep their position when search finishes Message-ID: <1548772807.63.0.497905321537.issue35851@roundup.psfhosted.org> New submission from Roel Schroeven : Search in the online documentation shows results while the search continues in the background, which is very nice. Only problem is: when the search finishes, a line with the text "Search finished, found x page(s) matching the search query." appears which pushes all the search results down a bit. When the result you were looking for was already displayed, you suddenly have to aim the mouse cursor at a new position, or it can happen that you accidentally open the wrong link because of the results not staying in their place. Is it possible to allocate space for the "Search finished, ..." line from the beginning, so that search results stay in the same place the whole time? 
---------- assignee: docs at python components: Documentation messages: 334525 nosy: docs at python, roelschroeven priority: normal severity: normal status: open title: Make search result in online docs keep their position when search finishes type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 11:27:01 2019 From: report at bugs.python.org (Addons Zz) Date: Tue, 29 Jan 2019 16:27:01 +0000 Subject: [New-bugs-announce] [issue35852] Fixed tests regenerating using CRLF when running it on Windows Message-ID: <1548779221.97.0.25499601133.issue35852@roundup.psfhosted.org> New submission from Addons Zz : When generating the file on Windows by running ``` python test/test_profile.py -r ``` The file has its line ending converted from LF to CRLF, creating noise on the git diff. ---------- components: Library (Lib), Tests messages: 334529 nosy: addons_zz priority: normal severity: normal status: open title: Fixed tests regenerating using CRLF when running it on Windows type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 17:08:38 2019 From: report at bugs.python.org (Tobias Pleyer) Date: Tue, 29 Jan 2019 22:08:38 +0000 Subject: [New-bugs-announce] [issue35853] Extend the functools module with more higher order function combinators Message-ID: <1548799718.5.0.603490291.issue35853@roundup.psfhosted.org> New submission from Tobias Pleyer : The partial function is a typical example of a higher order function: It takes a function as argument and returns a function. As the name of the functools module suggests its purpose is to provide tools for working with functions. 
This should, in my opinion, include a much bigger set of higher order function combinators as they are known from functional programming. As a start I suggest to add the following functions: identity: The identity function which returns its input compose: Create a function pipeline (threaded computation) sequence: Use compose without binding it to a variable ---------- components: Library (Lib) messages: 334536 nosy: tpleyer priority: normal severity: normal status: open title: Extend the functools module with more higher order function combinators type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 18:13:14 2019 From: report at bugs.python.org (Steve Dower) Date: Tue, 29 Jan 2019 23:13:14 +0000 Subject: [New-bugs-announce] [issue35854] EnvBuilder and venv symlinks do not work on Windows on 3.7.2 Message-ID: <1548803594.51.0.0945300904975.issue35854@roundup.psfhosted.org> New submission from Steve Dower : The change to pull the redirector executables as part of scripts was... perhaps too clever. There exists code out there that uses EnvBuilder to create environments _without_ copying scripts, which now results in an environment that does not include python.exe. It also undid symlink support, as scripts are never symlinked. I'll restore both. ---------- assignee: steve.dower components: Windows keywords: 3.7regression messages: 334537 nosy: eryksun, paul.moore, steve.dower, tim.golden, vinay.sajip, zach.ware priority: normal severity: normal status: open title: EnvBuilder and venv symlinks do not work on Windows on 3.7.2 versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 22:14:30 2019 From: report at bugs.python.org (Terry J. 
Reedy) Date: Wed, 30 Jan 2019 03:14:30 +0000 Subject: [New-bugs-announce] [issue35855] IDLE squeezer: improve unsqueezing and autosqueeze default Message-ID: <1548818070.19.0.784051079348.issue35855@roundup.psfhosted.org> New submission from Terry J. Reedy : This issue continues #35196, which fixed some bugs (inconsistencies) in auto-squeezing and sped the scan of output strings for possible auto-squeezing. This issue has two parts: 1. Make unsqueezing faster and easier for the user. Details discussed below. 2. Reconsider the parameters and protocol for auto-squeezing. 2a. Increase the default setting for the minimum number of lines to squeeze. The best setting depends on the result of part 1. 2b. Other changes? (Some of the work might be done on separate PRs on issues that are dependencies of this one.) If users ask for a big blob of text, they presumably want to read at least some of it, usually from the beginning, and possibly scan up and down. The most common example is output from help(object), defined in the pydoc and _sitebuiltins modules. Help first computes a single output string, whether 2 lines (list.append), 600 (itertools) or over 20000 (tkinter). For modules, one most likely wants to see the module docstring. Those for itertools are about 40 and 80 lines respectively and both list the functions/classes in the module. With standard interactive python on a text terminal/console, help output is run through a pager. If more than a full screen, output is paused with a prompt. I consider both 'more' on Windows and 'less' on Mac to be overall unsatisfactory, especially for naive beginners, and hope we can do overall better on IDLE. On Windows, the prompt is "-- More --", with no hint of what to do. One can experiment and discover that Enter means 'display one more line' and Space means 'display one more screenful'. Or one can guess to enter "more /h" at a console command prompt.
Either way, paging a thousand times to get the entire output for tkinter is not practical. As far as I can tell from the 'more /h' output, there is no 'dump remaining text' command other than 'p '. What is worse, a Windows console only holds the last N lines displayed, where N defaults to 300 and can be increased to at most 9999. So scrolling back is limited. This is terrible, especially at the default setting. On Mac Terminal, the pager is 'less', the prompt is ':', Space and Enter do the same, scrolling is only partially possible and weird, P goes to the top, a number N after space goes down N lines, to an 'END' prompt. As near as I could discover, less refuses to exit until one hits 'q', at which point the help text disappears. This is true even for the 2-line help for list.append. Terrible. On IDLE, without squeezer, the entire text is displayed followed by a fresh 'enter code' prompt ('>>>'). However, for multi-screen text, the user immediately sees only the last lines, not the first. This is bad, but at least there is a possibility of scrolling up and trying to find the beginning of the text, although this may take several seconds. When squeezer unsqueezes, it makes the first line, not the last line, visible. For a long enough output string, the easiest way to get to the first line to start reading, other than scrolling, is to squeeze and unsqueeze. (This applies to open_file.read() also.) Absent anything better, I now consider this the primary justification for auto-squeezing. The following should make triggering expansion of a squeeze label easier and faster, in terms of user actions, regardless of how the label came about. E1. Add 'Expand' at the top of the context menu. (I sometimes right click instead of double-clicking the squeeze label.) (And add 'Help' at the bottom, to display an explanation of Squeezer.) E2. Add a hot key to expand when the text cursor is on the line with the squeeze label.
After typing 'help(xyz)' and getting a squeeze label, I would prefer to hit and perhaps use navigation keys instead of immediately having to grab the mouse. E3. Stop the false (or at least out-dated) and confusing warning about many normal text lines causing problems. The 20000 lines of tkinter help is not an issue on either my years-old Windows desktop and slower years-old MacbookAir. I once printed over half a million lines, about 40 chars as I remember, on Windows IDLE without issue. Long lines are a different issue. 'a'*10000 is okay on Windows, but not on Mac. On the latter, after unsqueezing, and scrolling down to the new prompt, trying to scroll back up results in the OS twirly pause icon for a few seconds. The natural response of adding more key presses and mouse clicks trying to get a response probably made the experience worse. This reminds me of my previous Windows machine a decade ago. A line length warning needs data both on the machine and the max lines of the text. The latter might be gathered fast enough by checking line lengths in an after loop. Once the text is expanded, it could be more immediately useful. U1. If the squeeze label is near the bottom of the window, only the top few lines are made visible. Instead, put the top of the output at the top of the window to make as many lines as possible visible. This should be easy. U2. *Perhaps* we should put the text cursor at the beginning of the output, instead of leaving it at the next prompt, so navigation keys work. But this has tradeoffs and I think it should be left until some other stuff is done. Once we improve unsqueezing, we can consider auto-squeeze changes. A1. Increase the default max lines before auto-squeeze. Changing defaults is problematical because users' customizations apply to all versions a user runs. A new default on a new version will erase a matching custom value set on an older version. So new values should be unlikely as custom values.
Another issue is using the lines number to check line length. 10000 = 125 X 80. On my Windows machine, I think 123 (not 125!) might be okay. But definitely not on Mac. One could say we should not protect people from the consequence of foolish inputs, but letting IDLE freeze is part of what has led to 'IDLE is junk' comments (on Stackoverflow and pydev that I know of). A2. Decouple lines from length. Regardless of what I thought when reviewing squeezer, I think now that this is crucial. But try to determine an appropriate length by internal checks instead of adding another configuration value. Or make the coupling non-linear (maxlen = C*log(maxlines)?) Or see what adding horizontal scroll does. A3. (Suggestion from others) Print a screenful of an output block and then squeeze. The downside to me is two-fold: one will likely need to unsqueeze anyway; this prevents or complicates moving text to the clipboard or a text viewer. (The label item could keep track of how much was displayed, but then resizing is an issue.) Some other ideas. I1. Write a pager. No. I2. Add other ways to get back to the start of a text block. Add 'Previous Prompt' or 'Prompt Up' or ??? (and Prompt next/down) to the unsqueezed menu. Or add Control/Command PageUp/Down for Prev/Next Prompt to the fixed navigation keys. (or both.) These sequences are unused for my current keysets. config-keys.def should be checked. Some people would likely prefer these to autosqueeze for getting to the beginning of a blob. I3. Add 'Copy' and 'View' (and 'Help') to the unsqueezed context menu so one can skip the visible label step.
---------- assignee: terry.reedy components: IDLE messages: 334541 nosy: taleinat, terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE squeezer: improve unsqueezing and autosqueeze default type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jan 29 23:23:52 2019 From: report at bugs.python.org (Dima Tisnek) Date: Wed, 30 Jan 2019 04:23:52 +0000 Subject: [New-bugs-announce] [issue35856] bundled pip syntaxwarning Message-ID: <1548822232.71.0.928620315145.issue35856@roundup.psfhosted.org> New submission from Dima Tisnek : It seems that `pip` vendored/bundled with Python3.8 doesn't conform to 3.8 syntax: ? ~> /usr/local/bin/python3.8 -m ensurepip /.../tmp.../pip-18.1-py2.py3-none-any.whl/pip/_vendor/requests/status_codes.py:3: SyntaxWarning: invalid escape sequence \o /.../tmp.../pip-18.1-py2.py3-none-any.whl/pip/_vendor/requests/status_codes.py:3: SyntaxWarning: invalid escape sequence \o Looking in links: /.../tmp... 
Requirement already satisfied: setuptools in /usr/local/lib/python3.8/site-packages (40.6.2) Requirement already satisfied: pip in /usr/local/lib/python3.8/site-packages (18.1) Python 3.8.0a0 (bafa8487f77fa076de3a06755399daf81cb75598) built from source ---------- components: Library (Lib) messages: 334542 nosy: Dima.Tisnek priority: normal severity: normal status: open title: bundled pip syntaxwarning type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 00:03:26 2019 From: report at bugs.python.org (Steve Pryde) Date: Wed, 30 Jan 2019 05:03:26 +0000 Subject: [New-bugs-announce] [issue35857] Stacktrace shows lines from updated file on disk, not code actually running Message-ID: <1548824606.25.0.597748947115.issue35857@roundup.psfhosted.org> New submission from Steve Pryde : When python prints or returns a stacktrace, it displays the appropriate line from the *current file on disk*, which may have changed since the program was run. It should instead show the lines from the file as it was when the code was executed. Steps to reproduce: 1. Save the following code to a file and run it in a terminal. import time time.sleep(600) 2. While it is running, insert a line before "time.sleep(600)", type some random characters, and save the file. The "time.sleep" should now be on line 4, and some random characters on line 3. 3. Now go back to the terminal and press Ctrl+C (to generate a stacktrace). 
Expected output: ^CTraceback (most recent call last): File "py1.py", line 3, in time.sleep(600) KeyboardInterrupt Actual output: ^CTraceback (most recent call last): File "py1.py", line 3, in some random characters KeyboardInterrupt ---------- components: Interpreter Core messages: 334544 nosy: Steve Pryde priority: normal severity: normal status: open title: Stacktrace shows lines from updated file on disk, not code actually running type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 08:51:57 2019 From: report at bugs.python.org (jcrmatos) Date: Wed, 30 Jan 2019 13:51:57 +0000 Subject: [New-bugs-announce] [issue35858] Consider adding the option of running shell/console commands inside the REPL Message-ID: <1548856317.43.0.990558066571.issue35858@roundup.psfhosted.org> New submission from jcrmatos : Consider adding the option of running shell/console commands inside the REPL. Something like >>>!ls ---------- messages: 334556 nosy: jcrmatos priority: normal severity: normal status: open title: Consider adding the option of running shell/console commands inside the REPL type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 09:16:14 2019 From: report at bugs.python.org (James Davis) Date: Wed, 30 Jan 2019 14:16:14 +0000 Subject: [New-bugs-announce] [issue35859] Capture behavior depends on the order of an alternation Message-ID: <1548857774.3.0.1614070954.issue35859@roundup.psfhosted.org> New submission from James Davis : I have two regexes: /(a|ab)*?b/ and /(ab|a)*?b/. If I re.search the string "ab" for these regexes, I get inconsistent behavior. Specifically, /(a|ab)*?b/ matches with capture "a", while /(ab|a)*?b/ matches with an empty capture group. I am not actually sure which behavior is correct. 
Interpretation 1: The (ab|a) clause matches the a, satisfying the (ab|a)*? once, and the engine proceeds to the b and completes. The capture group ends up containing "a". Interpretation 2: The (ab|a) clause matches the a. Since the clause is marked with *, the engine repeats the attempt and finds nothing the second time. It proceeds to the b and completes. Because the second match attempt on (ab|a) found nothing, the capture group ends up empty. The behavior depends on both the order of (ab|a) vs. (a|ab), and the use of the non-greedy quantifier. I cannot see why changing the order of the alternation should have this effect. The change in behavior occurs in the built-in "re" module but not in the competing "regex" module. The behavior is consistent in both Python 2.7 and Python 3.5. I have not tested other versions. I have included the confusing-regex-behavior.py file for troubleshooting. Below is the behavior for matches on these and many variants. I find the following lines the most striking: Regex pattern matched? matched string captured content -------------------- -------------------- -------------------- -------------------- (ab|a)*?b True ab ('',) (ab|a)+?b True ab ('',) (ab|a){0,}?b True ab ('',) (ab|a){0,2}?b True ab ('',) (ab|a){0,1}?b True ab ('a',) (ab|a)*b True ab ('a',) (ab|a)+b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)+?b True ab ('a',) (08:58:48) jamie at woody ~ $ python3 /tmp/confusing-regex-behavior.py Behavior from re Regex pattern matched? 
matched string captured content -------------------- -------------------- -------------------- -------------------- (ab|a)*?b True ab ('',) (ab|a)+?b True ab ('',) (ab|a){0,}?b True ab ('',) (ab|a){0,2}?b True ab ('',) (ab|a)?b True ab ('a',) (ab|a)??b True ab ('a',) (ab|a)b True ab ('a',) (ab|a){0,1}?b True ab ('a',) (ab|a)*b True ab ('a',) (ab|a)+b True ab ('a',) (a|ab)*b True ab ('a',) (a|ab)+b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)+?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (bb|a)*?b True ab ('a',) ((?:ab|a)*?)b True ab ('a',) ((?:a|ab)*?)b True ab ('a',) Behavior from regex Regex pattern matched? matched string captured content -------------------- -------------------- -------------------- -------------------- (ab|a)*?b True ab ('a',) (ab|a)+?b True ab ('a',) (ab|a){0,}?b True ab ('a',) (ab|a){0,2}?b True ab ('a',) (ab|a)?b True ab ('a',) (ab|a)??b True ab ('a',) (ab|a)b True ab ('a',) (ab|a){0,1}?b True ab ('a',) (ab|a)*b True ab ('a',) (ab|a)+b True ab ('a',) (a|ab)*b True ab ('a',) (a|ab)+b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)+?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (a|ab)*?b True ab ('a',) (bb|a)*?b True ab ('a',) ((?:ab|a)*?)b True ab ('a',) ((?:a|ab)*?)b True ab ('a',) ---------- components: Regular Expressions files: confusing-regex-behavior.py messages: 334560 nosy: davisjam, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Capture behavior depends on the order of an alternation type: behavior versions: Python 2.7, Python 3.5 Added file: https://bugs.python.org/file48085/confusing-regex-behavior.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 11:06:56 2019 From: report at bugs.python.org (Bence Nagy) Date: Wed, 30 Jan 2019 16:06:56 +0000 Subject: [New-bugs-announce] [issue35860] 
ProcessPoolExecutor subprocesses crash & break pool when raising an exception with keyword-only args and an arg passed to Exception Message-ID: <1548864416.11.0.414806369535.issue35860@roundup.psfhosted.org> New submission from Bence Nagy : ProcessPoolExecutor's subprocesses normally transparently proxy exceptions raised within a child to the parent process. One special case I bumped into however causes a crash within the stdlib code responsible for communication. The special case is triggered when both of these are true: 1) The exception being raised uses `*` to mark arguments as keyword-only 2) The exception being raised sets a positional argument for Exception: `super().__init__("test")` I have attached a file which demonstrates what happens when only 1), only 2), and both 1) and 2) are true. Running the file with Python 3.7.2 will result in this output: ``` raised Works1('test') raised Works2() raised BrokenProcessPool('A process in the process pool was terminated abruptly while the future was running or pending.') ``` The expected result for the third call would be keeping the executor usable and printing this: ``` raised Breaks('test') ``` ---------- components: Library (Lib) files: ppe_crash.py messages: 334570 nosy: underyx priority: normal severity: normal status: open title: ProcessPoolExecutor subprocesses crash & break pool when raising an exception with keyword-only args and an arg passed to Exception type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48086/ppe_crash.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 14:05:31 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Wed, 30 Jan 2019 19:05:31 +0000 Subject: [New-bugs-announce] [issue35861] test_named_expressions raises SyntaxWarning Message-ID: <1548875131.95.0.280347289465.issue35861@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : 
SyntaxWarning was recently added for comparison using "is" over literals with issue34850. This is raised on master for a PEP 572 related test. The warning is emitted twice which is covered with bpo-35798 and I verified the patch. The fix for this issue would be to use == as noted in the warning. Emily, if you can confirm my report then I would like to triage this as an easy one since the fix is simple. # SyntaxWarning on master ? cpython git:(master) ./python.exe Lib/test/test_named_expressions.py Lib/test/test_named_expressions.py:168: SyntaxWarning: "is" with a literal. Did you mean "=="? if (match := 10) is 10: Lib/test/test_named_expressions.py:168: SyntaxWarning: "is" with a literal. Did you mean "=="? if (match := 10) is 10: ........................................................ ---------------------------------------------------------------------- Ran 56 tests in 0.010s OK Thanks ---------- components: Tests messages: 334587 nosy: emilyemorehouse, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: test_named_expressions raises SyntaxWarning type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 14:45:16 2019 From: report at bugs.python.org (=?utf-8?q?Tobias_D=C3=A4ullary?=) Date: Wed, 30 Jan 2019 19:45:16 +0000 Subject: [New-bugs-announce] [issue35862] Change the environment for a new process Message-ID: <1548877516.06.0.747468813802.issue35862@roundup.psfhosted.org> New submission from Tobias Däullary : There should be a possibility to change the environment of a process created with multiprocessing. For subprocess this is possible thanks to the "env" attribute. Elaboration: While it is trivial to change os.environ manually, in some cases this is not possible. For instance: creating a COM process on Windows; this process will always inherit the environment of the host process.
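For contrast, a sketch of the env parameter that subprocess already supports and that issue35862 asks multiprocessing to match; the MY_VAR name is invented for the example:

```python
import os
import subprocess
import sys

# Copy the parent environment, then override a single variable for the child.
child_env = dict(os.environ, MY_VAR="from-parent")

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MY_VAR'])"],
    env=child_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # from-parent
```

multiprocessing has no equivalent keyword; the child inherits os.environ at spawn time, which is why the report above resorts to spawning an intermediate Python process.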
A workaround is to spawn a python process with a different environment which then will provide this to the child COM process. ---------- components: Library (Lib) messages: 334591 nosy: r-or priority: normal severity: normal status: open title: Change the environment for a new process type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 16:23:57 2019 From: report at bugs.python.org (Jon Ribbens) Date: Wed, 30 Jan 2019 21:23:57 +0000 Subject: [New-bugs-announce] [issue35863] email.headers wraps headers badly Message-ID: <1548883437.25.0.0584466696793.issue35863@roundup.psfhosted.org> New submission from Jon Ribbens : email.headers can wrap headers by putting a FWS as the very first thing in the output: >>> from email.header import Header >>> Header("a" * 67, header_name="Content-ID").encode() '\n aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' i.e. it produces headers that look like this: Content-ID: blah It is unclear to me whether this is compliant with the spec, but there seems to be little reason to do this, and good reason not to in that at the very least Outlook does not understand such headers. (e.g. if you have an HTML email with an inline image referenced by Content-ID then Outlook will not find it if the Content-ID header is wrapped as above.) ---------- components: Library (Lib) messages: 334594 nosy: jribbens priority: normal severity: normal status: open title: email.headers wraps headers badly versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jan 30 21:20:11 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 31 Jan 2019 02:20:11 +0000 Subject: [New-bugs-announce] [issue35864] Replace OrderedDict with regular dict in namedtuple's _asdict() method. 
Message-ID: <1548901211.98.0.0872539907535.issue35864@roundup.psfhosted.org> New submission from Raymond Hettinger : Now that regular dicts are ordered and compact, it makes more sense for the _asdict() method to create a regular dict (as it did in its early days) rather than an OrderedDict. The regular dict is much smaller, much faster, and has a much cleaner looking repr. Historically we would go through a deprecation period for a possibly breaking change; however, it was considered more benefit to users and less disruptive to make the update directly. See the thread starting at: https://mail.python.org/pipermail/python-dev/2019-January/156150.html ---------- assignee: rhettinger components: Library (Lib) messages: 334602 nosy: rhettinger priority: normal severity: normal status: open title: Replace OrderedDict with regular dict in namedtuple's _asdict() method. versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 03:10:10 2019 From: report at bugs.python.org (INADA Naoki) Date: Thu, 31 Jan 2019 08:10:10 +0000 Subject: [New-bugs-announce] [issue35865] configparser document refers about random dict order Message-ID: <1548922210.28.0.356473813231.issue35865@roundup.psfhosted.org> New submission from INADA Naoki : GH-6819 (bpo-33504) changed OrderedDict to dict, and removed note about randomness of dict order in dict. But it is only for 3.8. Python 3.7 document should remove the note too. 
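Returning to the _asdict() change proposed in issue35864 above, the new behaviour is easy to check. A short sketch: on 3.8+ the result is a plain dict; the isinstance check passes on 3.7 as well, since OrderedDict subclasses dict, and field order is preserved either way:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
d = Point(x=1, y=2)._asdict()

# A plain dict on 3.8+ (an OrderedDict on 3.7), preserving field order.
print(d)
```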
---------- assignee: docs at python components: Documentation messages: 334609 nosy: docs at python, inada.naoki priority: normal severity: normal status: open title: configparser document refers about random dict order versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 05:05:11 2019 From: report at bugs.python.org (Jakub Wilk) Date: Thu, 31 Jan 2019 10:05:11 +0000 Subject: [New-bugs-announce] [issue35866] concurrent.futures deadlock Message-ID: <1548929111.09.0.517798780255.issue35866@roundup.psfhosted.org> New submission from Jakub Wilk : The attached test program hangs eventually (it may need a few thousand of iterations). Tested with Python v3.7.2 on Linux, amd64. ---------- components: Library (Lib) files: cf-deadlock.py messages: 334618 nosy: jwilk priority: normal severity: normal status: open title: concurrent.futures deadlock type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48090/cf-deadlock.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 08:16:37 2019 From: report at bugs.python.org (Sampsa Riikonen) Date: Thu, 31 Jan 2019 13:16:37 +0000 Subject: [New-bugs-announce] [issue35867] NameError is not caught at Task execution Message-ID: <1548940597.02.0.134680701167.issue35867@roundup.psfhosted.org> New submission from Sampsa Riikonen : - Create a cofunction that raises an Exception or an Error - Schedule that cofunction as a task - Exceptions are raised when the task is executed OK - However, Errors (i.e. NameError, AssertionError, etc.) are raised only at task garbage collection..! 
Please try this snippet: ``` import asyncio class HevonPaskaa: def __init__(self): pass async def goodfunc(self): await asyncio.sleep(3) print("Good function was called all right") print("While it was sleeping, hevonpaska must have been executed") async def hevonpaska(self): """When this cofunction is scheduled as a task: - The NameError is not raised immediately .. ! - BaseException is raised immediately OK """ raise NameError # WARNING: This is caught only when the program terminates # raise BaseException # WARNING: comment the previous line and uncomment this: this is caught immediately async def cofunc2(self): # # we'd like this to raise the NameError immediately: self.task = asyncio.get_event_loop().create_task(self.hevonpaska()) self.good_task = asyncio.get_event_loop().create_task(self.goodfunc()) # # this raises NameError immediately because the task is garbage collected: # self.task = None async def cofunc1(self): await self.cofunc2() print("\nwaitin' : where-t-f is the NameError hiding!?") await asyncio.sleep(6) print("Wait is over, let's exit\n") hv = HevonPaskaa() asyncio.get_event_loop().run_until_complete(hv.cofunc1()) ``` ---------- components: asyncio messages: 334625 nosy: Sampsa Riikonen, asvetlov, yselivanov priority: normal severity: normal status: open title: NameError is not caught at Task execution type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 08:35:34 2019 From: report at bugs.python.org (Oleh Khoma) Date: Thu, 31 Jan 2019 13:35:34 +0000 Subject: [New-bugs-announce] [issue35868] Support ALL_PROXY environment variable in urllib Message-ID: <1548941734.95.0.137679013052.issue35868@roundup.psfhosted.org> New submission from Oleh Khoma : Please, add support for ALL_PROXY environment variable to urllib. When this environment variable is found, add the same proxy for HTTP, HTTPS and FTP.
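A sketch of the requested ALL_PROXY behaviour, layered on top of urllib's existing getproxies(); the helper name and the choice of schemes are assumptions, not an existing API:

```python
import os
import urllib.request

def proxies_with_all_proxy(environ=None):
    """Return urllib's proxy map, letting ALL_PROXY fill any missing scheme."""
    if environ is None:
        environ = os.environ
    proxies = dict(urllib.request.getproxies())
    all_proxy = environ.get("ALL_PROXY") or environ.get("all_proxy")
    if all_proxy:
        for scheme in ("http", "https", "ftp"):
            # Scheme-specific settings (http_proxy etc.) keep priority.
            proxies.setdefault(scheme, all_proxy)
    return proxies

print(proxies_with_all_proxy({"ALL_PROXY": "http://proxy.example:3128"}))
```

This mirrors the convention of tools like curl, where ALL_PROXY is a fallback for any scheme without its own *_proxy variable.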
---------- components: Extension Modules messages: 334627 nosy: Oleh Khoma priority: normal severity: normal status: open title: Support ALL_PROXY environment variable in urllib type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 10:21:50 2019 From: report at bugs.python.org (Steve Palmer) Date: Thu, 31 Jan 2019 15:21:50 +0000 Subject: [New-bugs-announce] [issue35869] io.BufferReader.read() returns None Message-ID: <1548948110.23.0.999331035986.issue35869@roundup.psfhosted.org> New submission from Steve Palmer : class io.BufferedIOBase states "In addition, those methods [read(), readinto() and write()] can raise BlockingIOError if the underlying raw stream is in non-blocking mode and cannot take or give enough data; unlike their RawIOBase counterparts, they will never return None." However, class io.BufferedReader (inheriting from io.BufferedIOBase) *does* return None in this case. Admittedly, io.BufferedReader does say it is overriding the inherited method, but I'm surprised that the change in behaviour declared for buffered objects is reverted to the RawIOBase behaviour on a more specific class. The attached file (a little long - sorry) simulates a slow non-blocking raw file, which it wraps in a BufferedReader to test the behaviour defined in BufferedIOBase.
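The discrepancy can also be reproduced without the attached file; a minimal sketch (mine, not the reporter's read2.py) using an ordinary non-blocking pipe instead of a simulated slow raw file:

```python
import os

# A pipe gives us a real raw stream; put the read end in non-blocking mode.
r, w = os.pipe()
os.set_blocking(r, False)

# Opening the fd in binary mode with default buffering yields a BufferedReader.
reader = open(r, 'rb')

# No data has been written yet, so the raw stream cannot give any bytes.
# Per the BufferedIOBase text quoted above this "should" raise
# BlockingIOError, but BufferedReader.read() returns None instead:
first = reader.read()

# Once the raw stream has bytes, read() behaves normally.
os.write(w, b'data')
second = reader.read(4)

reader.close()
os.close(w)
```

Here `first` is `None`, matching the report, while `second` is `b'data'`.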
---------- files: read2.py messages: 334630 nosy: steverpalmer priority: normal severity: normal status: open title: io.BufferReader.read() returns None type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48091/read2.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 13:13:51 2019 From: report at bugs.python.org (Victor Porton) Date: Thu, 31 Jan 2019 18:13:51 +0000 Subject: [New-bugs-announce] [issue35870] readline() specification is unclear Message-ID: <1548958431.25.0.44536557418.issue35870@roundup.psfhosted.org> New submission from Victor Porton : https://docs.python.org/3/library/io.html does not say whether '\n' is appended to the return value of readline(). It is also unclear what happens when the last line read is not terminated by '\n'. It is also unclear what is returned when a text file (say with '\r\n' terminators) is read: is '\n', '\r\n', or nothing appended to the return value? ---------- assignee: docs at python components: Documentation messages: 334634 nosy: docs at python, porton priority: normal severity: normal status: open title: readline() specification is unclear versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jan 31 19:25:49 2019 From: report at bugs.python.org (Jayanth Raman) Date: Fri, 01 Feb 2019 00:25:49 +0000 Subject: [New-bugs-announce] [issue35871] Pdb NameError in generator param and list comprehension Message-ID: <1548980749.75.0.843445055452.issue35871@roundup.psfhosted.org> New submission from Jayanth Raman : I get a NameError for a variable used in a generator expression passed as a function argument, or in a list comprehension. See the example below. The variable is available to the program, but not to the interactive Pdb shell.
# Test file:

def main(nn=10):
    xx = list(range(nn))
    breakpoint()
    for ii in range(nn):
        num = sum(xx[jj] for jj in range(nn))
    print(f'xx={xx}')

if __name__ == '__main__':
    main()

$ python3
Python 3.7.2 (default, Jan 13 2019, 12:50:15)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ python3 /tmp/test.py
> /tmp/test.py(5)main()
-> for ii in range(nn):
(Pdb) n
> /tmp/test.py(6)main()
-> num = sum(xx[jj] for jj in range(nn))
(Pdb) sum(xx[jj] for jj in range(nn))
*** NameError: name 'xx' is not defined
(Pdb) [xx[jj] for jj in range(nn)]
*** NameError: name 'xx' is not defined
(Pdb) c
xx=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

FWIW python3 is a homebrew installation. I had the same issue with 3.7.0 as well (also homebrew): Python 3.7.0 (default, Sep 18 2018, 18:47:22) [Clang 9.1.0 (clang-902.0.39.2)] on darwin

---------- components: Interpreter Core messages: 334639 nosy: jayanth priority: normal severity: normal status: open title: Pdb NameError in generator param and list comprehension versions: Python 3.7 _______________________________________ Python tracker _______________________________________
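The likely mechanism (my reading, not stated in the report): Pdb evaluates a typed expression with the frame's f_locals as a plain locals mapping, while a generator expression or comprehension compiles to a nested code object whose free names are looked up as globals, so `xx` is never found there. The same failure can be reproduced with bare `eval()`:

```python
xx = list(range(10))

# One combined namespace: the genexp's lookup of xx succeeds.
ok = eval("sum(xx[jj] for jj in range(10))", {"xx": xx})

# Separate globals and locals, which is effectively what pdb uses: xx
# lives only in the locals mapping, but the nested genexp code object
# looks names up in globals, so the lookup fails with NameError.
try:
    eval("sum(xx[jj] for jj in range(10))", {}, {"xx": xx})
    reproduced = False
except NameError:
    reproduced = True
```

At the `(Pdb)` prompt, the `interact` command, which merges the frame's globals and locals into a single interactive namespace, is a common way around this.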