From report at bugs.python.org Sat Apr 1 00:37:19 2017 From: report at bugs.python.org (Michael Selik) Date: Sat, 01 Apr 2017 04:37:19 +0000 Subject: [New-bugs-announce] [issue29957] unnecessary LBYL for key contained in defaultdict, lib2to3/btm_matcher Message-ID: <1491021439.17.0.48753104649.issue29957@psf.upfronthosting.co.za> New submission from Michael Selik: Minor, but it looks like someone decided to use a defaultdict but forgot to remove the checks for whether a key exists. Creating a defaultdict(list): https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/lib2to3/btm_matcher.py#L100 Checking for the key, then initializing an empty list: https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/lib2to3/btm_matcher.py#L120 Again: https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/lib2to3/btm_matcher.py#L137 Because ``results`` is being returned, perhaps it'd be better to use a regular dict and dict.setdefault instead of a defaultdict. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 290955 nosy: selik priority: normal severity: normal status: open title: unnecessary LBYL for key contained in defaultdict, lib2to3/btm_matcher _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 1 02:38:51 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Apr 2017 06:38:51 +0000 Subject: [New-bugs-announce] [issue29958] Use add_mutually_exclusive_group(required=True) in zipfile and tarfile CLI Message-ID: <1491028731.32.0.981459327576.issue29958@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: In the comment to the patch on issue28115 SilentGhost suggested passing required=True to add_mutually_exclusive_group(). This makes the last "else" not needed. Proposed patch implements this idea for zipfile and tarfile. It also improves error handling when an empty tar archive name is passed, and adds a hyphen in "command-line". ---------- assignee: serhiy.storchaka components: Demos and Tools messages: 290966 nosy: SilentGhost, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use add_mutually_exclusive_group(required=True) in zipfile and tarfile CLI type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 1 03:38:49 2017 From: report at bugs.python.org (bo qu) Date: Sat, 01 Apr 2017 07:38:49 +0000 Subject: [New-bugs-announce] [issue29959] re.match failed to match left square brackets as the first char Message-ID: <1491032329.85.0.64760293799.issue29959@psf.upfronthosting.co.za> New submission from bo qu: If "[" is the first char in a string, then re.match can't match any pattern from the string, but re.findall works fine. Details as follows: [da at namenode log]$ python3 Python 3.4.3 (default, Jun 14 2015, 14:23:40) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import re >>> cyzd="[abc]" >>> cyzd '[abc]' >>> pattern="ab(.*)" >>> pattern 'ab(.*)' >>> match=re.match(pattern, cyzd) >>> match >>> pattern=r'ab(.*)' >>> re.findall(pattern, cyzd) ['c]'] ---------- components: Regular Expressions messages: 290968 nosy: bo qu, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.match failed to match left square brackets as the first char type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 1 06:46:26 2017 From: report at bugs.python.org (Bryan G. Olson) Date: Sat, 01 Apr 2017 10:46:26 +0000 Subject: [New-bugs-announce] [issue29960] _random.Random state corrupted on exception Message-ID: <1491043586.04.0.46814850863.issue29960@psf.upfronthosting.co.za> New submission from Bryan G. Olson: Demo: Run the Python library's test_random.py under the Python debugger and check the generator at the start of test_shuffle(): C:\bin\Python36>python -m pdb Lib\test\test_random.py > c:\bin\python36\lib\test\test_random.py(1)() -> import unittest (Pdb) break 61 Breakpoint 1 at c:\bin\python36\lib\test\test_random.py:61 (Pdb) continue .............................> c:\bin\python36\lib\test\test_random.py(61)test_shuffle() -> shuffle = self.gen.shuffle (Pdb) list 56 # randomness source is not available. 57 urandom_mock.side_effect = NotImplementedError 58 self.test_seedargs() 59 60 def test_shuffle(self): 61 B-> shuffle = self.gen.shuffle 62 lst = [] 63 shuffle(lst) 64 self.assertEqual(lst, []) 65 lst = [37] 66 shuffle(lst) (Pdb) p self.gen.getrandbits(31) 2137781566 (Pdb) p self.gen.getrandbits(31) 2137781566 (Pdb) p self.gen.getrandbits(31) 2137781566 (Pdb) p self.gen.getrandbits(31) 2137781566 (Pdb) p self.gen.getrandbits(31) 2137781566 That's not random. Diagnosis: The order in which test functions run is the lexicographic order of their names. Thus unittest ran test_setstate_middle_arg() before running test_shuffle(). test_setstate_middle_arg() did some failed calls to _random.Random.setstate(), which raised exceptions as planned, but also trashed the state of the generator. test_random.py continues to use the same instance of _random.Random after setstate() raises exceptions. The documentation for Random.setstate() does not specify what happens to the state of the generator if setstate() raises an exception. Fortunately the generator recommended for secure applications, SystemRandom, does not implement setstate(). Solution: The fix I prefer is a small change to random_setstate() in _randommodule.c, so that it does not change the state of the generator until the operation is sure to succeed. ---------- components: Library (Lib) messages: 290977 nosy: bryangeneolson priority: normal severity: normal status: open title: _random.Random state corrupted on exception type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 1 07:52:41 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Apr 2017 11:52:41 +0000 Subject: [New-bugs-announce] [issue29961] More compact sets and frozensets created from sets Message-ID: <1491047561.92.0.226119974337.issue29961@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: For now new set and frozenset objects can allocate 2 times larger table than necessary when are created from set or dict. 
For example, if the size n of the original set is a power of two, the resulting set will allocate a table of 4*n rather than 2*n. Up to 20% of new sets use twice as much memory as necessary. The proposed patch makes set_merge() allocate a table of size n*5/3 instead of n*2. This is the minimal size necessary for inserting all elements with a fill ratio <=60%. $ ./python -c 'N = 6000; from sys import getsizeof; s = [getsizeof(frozenset(set(range(n)))) for n in range(N)]; print( [(n, s[n]) for n in range(N) if not n or s[n] != s[n-1]] )' Unpatched: [(0, 112), (5, 240), (8, 368), (16, 624), (32, 1136), (64, 2160), (128, 4208), (256, 8304), (512, 16496), (1024, 32880), (2048, 65648), (4096, 131184)] Patched: [(0, 112), (5, 240), (9, 368), (19, 624), (38, 1136), (77, 2160), (153, 4208), (307, 8304), (614, 16496), (1229, 32880), (2457, 65648), (4915, 131184)] ---------- components: Interpreter Core messages: 290980 nosy: inada.naoki, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: More compact sets and frozensets created from sets type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 1 10:32:38 2017 From: report at bugs.python.org (Mark Dickinson) Date: Sat, 01 Apr 2017 14:32:38 +0000 Subject: [New-bugs-announce] [issue29962] Add math.remainder operation Message-ID: <1491057158.44.0.328286654069.issue29962@psf.upfronthosting.co.za> New submission from Mark Dickinson: IEEE 754, the C99 standard, the Decimal IBM standard and Java all support/specify a 'remainder-near' operation. Apart from being standard, this has a number of useful applications: 1. Argument reduction in numerical algorithms: it's common to want to reduce to a range [-modulus/2, modulus/2] rather than [0, modulus). 2. Particular case of the above: reduction of angles to lie in the range [-pi, pi] 3. Rounding a float x to the nearest multiple of y. This is a much-asked StackOverflow question, and the standard answer of y * round(x / y) risks introducing floating-point error and so can give incorrect results in corner cases. With a remainder operation, it's trivial to do this correctly: x - remainder(x, y) gives the closest representable float to the closest integer multiple of y to x. remainder(x, y) has some nice properties: it's *always* exactly representable (unlike x % y), it satisfies the symmetry remainder(-x, y) == -remainder(x, y), and it's periodic with period 2*y. I have a patch, and will make a PR shortly. ---------- components: Extension Modules messages: 290985 nosy: mark.dickinson priority: normal severity: normal stage: patch review status: open title: Add math.remainder operation type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 2 07:46:04 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sun, 02 Apr 2017 11:46:04 +0000 Subject: [New-bugs-announce] [issue29963] Remove obsolete declaration PyTokenizer_RestoreEncoding in tokenizer.h Message-ID: <1491133564.75.0.911637792932.issue29963@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: I couldn't trace exactly when it was removed from tokenizer.c, but the corresponding declaration in the header file survived. I'm not sure how to tag this small clean-up. 
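A quick way to double-check from a checkout root (just a grep, output not pasted here):

    $ grep -rn PyTokenizer_RestoreEncoding Parser/ Python/

If the definition really is gone, the declaration in Parser/tokenizer.h should be the only hit, and it can simply be dropped.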
---------- messages: 291033 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Remove obsolete declaration PyTokenizer_RestoreEncoding in tokenizer.h versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 2 16:19:38 2017 From: report at bugs.python.org (Paul Pinterits) Date: Sun, 02 Apr 2017 20:19:38 +0000 Subject: [New-bugs-announce] [issue29964] %z directive has no effect on the output of time.strptime Message-ID: <1491164378.66.0.775281160942.issue29964@psf.upfronthosting.co.za> New submission from Paul Pinterits: %z is listed as a supported directive in the python 3 documentation (https://docs.python.org/3.5/library/time.html#time.strftime), but it doesn't actually do anything: >>> from time import strptime >>> strptime('+0000', '%z') == strptime('+0200', '%z') True As far as I can tell, there aren't any footnotes saying that %z might not be supported on some platforms, like it was back in python 2. In case it matters, I'm using python 3.5.3 on linux. ---------- components: Library (Lib) messages: 291041 nosy: Paul Pinterits priority: normal severity: normal status: open title: %z directive has no effect on the output of time.strptime type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 2 23:44:12 2017 From: report at bugs.python.org (Michael Selik) Date: Mon, 03 Apr 2017 03:44:12 +0000 Subject: [New-bugs-announce] [issue29965] MatchObject __getitem__() should support slicing and len Message-ID: <1491191052.84.0.321633971824.issue29965@psf.upfronthosting.co.za> New submission from Michael Selik: Currently, slicing a MatchObject causes an IndexError and len() a TypeError. It's natural to expect slicing and len to work on objects of a finite length that index by natural numbers. ---------- messages: 291050 nosy: selik priority: normal severity: normal status: open title: MatchObject __getitem__() should support slicing and len _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 06:21:14 2017 From: report at bugs.python.org (Simon Percivall) Date: Mon, 03 Apr 2017 10:21:14 +0000 Subject: [New-bugs-announce] [issue29966] typing.get_type_hints doesn't really work for classes with ForwardRefs Message-ID: <1491214874.69.0.0315417893192.issue29966@psf.upfronthosting.co.za> New submission from Simon Percivall: For classes with ForwardRef annotations, typing.get_type_hints is unusable. As example, we have two files: a.py: class Base: a: 'A' class A: pass b.py: from a import Base class MyClass(Base): b: 'B' class B: pass >>> from typing import get_type_hints >>> from b import MyClass >>> get_type_hints(MyClass) # NameError What should globals/locals be here? 
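The only workaround I can see today is to assemble the namespace by hand, e.g. (a sketch, not a fix, and it assumes the caller knows every module that contributed annotations):

    import a, b
    from typing import get_type_hints

    hints = get_type_hints(b.MyClass, globalns={**vars(a), **vars(b)})

That resolves both 'A' and 'B', but it defeats the point of get_type_hints doing the lookup itself.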
---------- messages: 291058 nosy: simon.percivall priority: normal severity: normal status: open title: typing.get_type_hints doesn't really work for classes with ForwardRefs versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 06:29:43 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Apr 2017 10:29:43 +0000 Subject: [New-bugs-announce] [issue29967] "AMD64 FreeBSD 9.x 3.x" tries to rebuild Include/opcode.h, timestamp issue Message-ID: <1491215383.28.0.794511557475.issue29967@psf.upfronthosting.co.za> New submission from STINNER Victor: "make buildbottest" on "AMD64 FreeBSD 9.x 3.x" fails with: --- Cannot generate ./Include/opcode.h, python not found ! To skip re-generation of ./Include/opcode.h run or . Otherwise, set python in PATH and run configure or run . *** [./Include/opcode.h] Error code 1 --- http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.x%203.x/builds/123/steps/test/logs/stdio Python has a "make touch" command which uses "hg --config extensions.touch=Tools/hg/hgtouch.py touch -v", but CPython moved to Git. ---------- components: Build messages: 291059 nosy: haypo, koobs, zach.ware priority: normal severity: normal status: open title: "AMD64 FreeBSD 9.x 3.x" tries to rebuild Include/opcode.h, timestamp issue versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 08:49:25 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Mon, 03 Apr 2017 12:49:25 +0000 Subject: [New-bugs-announce] [issue29968] Document that no characters are allowed to proceed \ in explicit line joining Message-ID: <1491223765.12.0.65807116216.issue29968@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: After looking through the code, the tokenizer only allows a new line character to proceed `\` in explicit line joining [1]. The Devguide section on it [2] actually states many of the limitations of using `\` but not directly that nothing is allowed after it (it does have a remark on comments). Would it be a good idea to amend it to state that no characters are allowed after `\`? 
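A short demonstration of the current behaviour (quick check; the exact error wording may differ between versions):

    exec("x = 1 + \\\n2")    # backslash immediately followed by a newline: accepted
    exec("x = 1 + \\ \n2")   # one space after the backslash: SyntaxError
                             # ("unexpected character after line continuation character")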
[1]: https://github.com/python/cpython/blob/734125938d4653459593ebd28a0aec086efb1f27/Parser/tokenizer.c#L1847 [2]: https://docs.python.org/3/reference/lexical_analysis.html#explicit-line-joining ---------- assignee: docs at python components: Documentation messages: 291067 nosy: Jim Fasarakis-Hilliard, docs at python priority: normal severity: normal status: open title: Document that no characters are allowed to proceed \ in explicit line joining _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 09:48:45 2017 From: report at bugs.python.org (Angus Hollands) Date: Mon, 03 Apr 2017 13:48:45 +0000 Subject: [New-bugs-announce] [issue29969] Typo in decimal error message Message-ID: <1491227325.12.0.305742350956.issue29969@psf.upfronthosting.co.za> New submission from Angus Hollands: When passing an object that fails Py_FloatCheck, the error message raised reads "argument must be int of float", rather than "argument must be int or float" ---------- components: Extension Modules messages: 291070 nosy: Angus Hollands priority: normal severity: normal status: open title: Typo in decimal error message type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 10:10:15 2017 From: report at bugs.python.org (kyuupichan) Date: Mon, 03 Apr 2017 14:10:15 +0000 Subject: [New-bugs-announce] [issue29970] Severe open file leakage running asyncio SSL server Message-ID: <1491228615.05.0.224246839796.issue29970@psf.upfronthosting.co.za> New submission from kyuupichan: Original report at old repo here: https://github.com/python/asyncio/issues/483 There this is reported fixed by https://github.com/python/cpython/pull/480 I wish to report that whilst the above patch might have a small positive effect, it is far from solving the actual issue. Several users report eventual exhaustion of the open file resource running SSL asyncio servers. Here are graphs provided by a friend running my ElectrumX server software, first accepting SSL connections and the second accepting TCP connections only. Both of the servers were monkey-patched with the pull-480 fix above, so this is evidence it isn't solving the issue. http://imgur.com/a/cWnSu As you can see, the TCP server (which has far less connections; most users use SSL) has no leaked file handles, whereas the SSL server has over 300. This becomes an easy denial of service vector against asyncio servers. One way to trigger this (though I doubt it explains the numbers above) is simply to connect to the SSL server from telnet, and do nothing. asyncio doesn't time you out, the telnet session seems to sit there forever, and the open file resources are lost in the SSL handshake stage until the remote host kindly decides to disconnect. I suspect these resource issues all revolve around the SSL handshake process, certainly at the opening of a connection, but also perhaps when closing. As the application author I am not informed by asyncio of a potential connection until the initial handshake is complete, so I cannot do anything to close these phantom socket connections. I have to rely on asyncio to be properly handling DoS issues and it is not currently doing so robustly. 
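The same trigger is easy to script (host and port here are only placeholders for an affected server):

    import socket, time
    s = socket.create_connection(("sslserver.example", 50002))  # plain TCP connect to the SSL port
    time.sleep(3600)  # never start the TLS handshake; the server-side descriptor stays allocated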
---------- components: asyncio messages: 291071 nosy: kyuupichan, yselivanov priority: normal severity: normal status: open title: Severe open file leakage running asyncio SSL server type: resource usage versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 10:43:38 2017 From: report at bugs.python.org (Antoine Pitrou) Date: Mon, 03 Apr 2017 14:43:38 +0000 Subject: [New-bugs-announce] [issue29971] Lock.acquire() not interruptible on Windows Message-ID: <1491230618.78.0.644794311178.issue29971@psf.upfronthosting.co.za> New submission from Antoine Pitrou: On Windows, Lock.acquire() (and other synchronization primitives derived from it, such as queue.Queue) cannot be interrupted with Ctrl-C, which makes it difficult to interrupt a process waiting on such a primitive. Judging by the code in Python/_thread_nt.h, it should be relatively easy to add such support for the "legacy" semaphore-based implementation (by using WaitForMultipleObjects instead of WaitForSingleObject), but it would be much hairier for the new condition variable-based implementation. Of course, many other library calls are prone to this limitation (not being interruptible with Ctrl-C on Windows). See https://github.com/dask/dask/pull/2144#issuecomment-290556996 for original report. ---------- components: Library (Lib), Windows messages: 291072 nosy: kristjan.jonsson, paul.moore, pitrou, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Lock.acquire() not interruptible on Windows type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 11:14:20 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Apr 2017 15:14:20 +0000 Subject: [New-bugs-announce] [issue29972] Skip tests known to fail on AIX Message-ID: <1491232460.4.0.914311825782.issue29972@psf.upfronthosting.co.za> New submission from STINNER Victor: Extract of David Edelsohn's email: """ The testsuite failures on AIX are issues with the AIX kernel and C Library, often corner cases. I don't want to get into arguments about the POSIX standard. Some of the issues are actual conformance issues and some are different interpretations of the standard. Addressing the problems in AIX is a slow process. If the failing testcases are too annoying, I would recommend to skip the testcases. Despite the testsuite failures, Python builds and runs on AIX for the vast majority of users and applications. I don't see the benefit in dropping support for a platform that functions because it doesn't fully pass the testsuite. """ ref: https://mail.python.org/pipermail/python-dev/2017-April/147748.html I agree, so let's skip tests known to fail on AIX! 
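The mechanics would just be the usual platform guard on the affected test methods, something like:

    import sys, unittest

    class TestCorner(unittest.TestCase):
        @unittest.skipIf(sys.platform.startswith("aix"),
                         "known AIX kernel/libc corner case")
        def test_affected(self):
            ...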
---------- components: Tests messages: 291073 nosy: haypo priority: normal severity: normal status: open title: Skip tests known to fail on AIX versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 11:34:14 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Apr 2017 15:34:14 +0000 Subject: [New-bugs-announce] [issue29973] Travis CI docs broken: UnboundLocalError: local variable 'prefix' referenced before assignment Message-ID: <1491233654.37.0.108307382934.issue29973@psf.upfronthosting.co.za> New submission from STINNER Victor: https://travis-ci.org/python/cpython/jobs/218107336 CPython: master branch Sphinx 1.5.4 make[1]: Entering directory `/home/travis/build/python/cpython/Doc' ./venv/bin/python -m sphinx -b suspicious -d build/doctrees -D latex_elements.papersize= -q -W . build/suspicious Exception occurred: File "/home/travis/virtualenv/python3.6.0/lib/python3.6/site-packages/sphinx/domains/python.py", line 317, in before_content if prefix: UnboundLocalError: local variable 'prefix' referenced before assignment ---------- assignee: docs at python components: Documentation, Tests messages: 291075 nosy: docs at python, haypo priority: normal severity: normal status: open title: Travis CI docs broken: UnboundLocalError: local variable 'prefix' referenced before assignment versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 13:32:43 2017 From: report at bugs.python.org (Mathias Rav) Date: Mon, 03 Apr 2017 17:32:43 +0000 Subject: [New-bugs-announce] [issue29974] typing.TYPE_CHECKING doc example is incorrect Message-ID: <1491240763.76.0.717121375128.issue29974@psf.upfronthosting.co.za> New submission from Mathias Rav: The documentation of typing.TYPE_CHECKING has an example (introduced in issue #26141) that would lead to NameError at runtime. The example shows how to limit the import of "expensive_mod" to type checkers, but then goes on to use "expensive_mod.some_type" in a type annotation that is evaluated at runtime ("local_var: expensive_mod.some_type"). The use case of TYPE_CHECKING is probably meant for type annotations placed in comments, e.g. "local_var # type: expensive_mod.some_type". ---------- assignee: docs at python components: Documentation messages: 291085 nosy: docs at python, rav priority: normal severity: normal status: open title: typing.TYPE_CHECKING doc example is incorrect type: enhancement versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 3 22:36:41 2017 From: report at bugs.python.org (Namjun Kim) Date: Tue, 04 Apr 2017 02:36:41 +0000 Subject: [New-bugs-announce] [issue29975] Issue in extending documentation Message-ID: <1491273401.15.0.69165837027.issue29975@psf.upfronthosting.co.za> New submission from Namjun Kim: https://docs.python.org/3.7/extending/extending.html "Should it become a dangling pointer, C code which raises the exception could cause a core dump or other unintended side effects." The typo error in this sentence. "If it become a dangling pointer, C code which raises the exception could cause a core dump or other unintended side effects." fix the typo error. 
---------- assignee: docs at python components: Documentation messages: 291098 nosy: Namjun Kim, docs at python priority: normal severity: normal status: open title: Issue in extending documentation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 00:54:17 2017 From: report at bugs.python.org (Senthil Kumaran) Date: Tue, 04 Apr 2017 04:54:17 +0000 Subject: [New-bugs-announce] [issue29976] urllib.parse clarify what ' ' in schemes mean Message-ID: <1491281657.78.0.713830439385.issue29976@psf.upfronthosting.co.za> New submission from Senthil Kumaran: urllib.parse has the following information in this module. ``` # A classification of schemes ('' means apply by default) uses_relative = ['ftp', 'http', 'gopher', 'nntp', 'imap', 'wais', 'file', 'https', 'shttp', 'mms', 'prospero', 'rtsp', 'rtspu', '', 'sftp', 'svn', 'svn+ssh', 'ws', 'wss'] ``` Note the '' in the list. 1) First it needs to be first one for easy identification. 2) It needs to be clarified. '' means apply by default does not help the reader. ---------- assignee: orsenthil messages: 291100 nosy: orsenthil priority: normal severity: normal stage: needs patch status: open title: urllib.parse clarify what ' ' in schemes mean type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 03:00:42 2017 From: report at bugs.python.org (Robert Lujo) Date: Tue, 04 Apr 2017 07:00:42 +0000 Subject: [New-bugs-announce] [issue29977] re.sub stalls forever on an unmatched non-greedy case Message-ID: <1491289242.07.0.894132585391.issue29977@psf.upfronthosting.co.za> New submission from Robert Lujo: Hello, I assume I have hit some bug/misbehaviour in re module. I will provide you "working" example: import re RE_C_COMMENTS = re.compile(r"/\*(.|\s)*?\*/", re.MULTILINE|re.DOTALL|re.UNICODE) text = "Special section /* valves:\n\n\nsilicone\n\n\n\n\n\n\nHarness:\n\n\nmetal and plastic fibre\n\n\n\n\n\n\nInner frame:\n\n\nmultibutylene\n\n\n\n\n\n\nWeight:\n\n\n147 g\n\n\n\n\n\n\n\n\n\n\n\n\n\nSelection guide\n" and then this command takes forever: RE_C_COMMENTS.sub(" ", text, re.MULTILINE|re.DOTALL|re.UNICODE) and the same problem you can notice on first 90 chars, it takes 10s on my machine: RE_C_COMMENTS.sub(" ", text[:90], re.MULTILINE|re.DOTALL|re.UNICODE) Some clarification: I try to remove the C style comments from text with non-greedy regular expression, and in this case start of comment (/*) is found, and end of comment (*/) can not be found. Notice the multiline and other re options. 
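As far as I can tell the hang is catastrophic backtracking from the "(.|\s)*?" group: with DOTALL, "." and "\s" can match the same characters, so once there is no closing "*/" the engine retries an exponential number of ways to split the text between the two branches. A workaround sketch that keeps the same intent but uses a single repeated token returns quickly on the same input:

    RE_C_COMMENTS = re.compile(r"/\*.*?\*/", re.MULTILINE | re.DOTALL | re.UNICODE)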
Python versions used: '2.7.11 (default, Jan 22 2016, 16:30:50) \n[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]' / macOS 10.12.13 and: '2.7.12 (default, Nov 19 2016, 06:48:10) \n[GCC 5.4.0 20160609]' -> Linux 84-Ubuntu SMP Wed Feb 1 17:20:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux ---------- components: Regular Expressions messages: 291107 nosy: Robert Lujo, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.sub stalls forever on an unmatched non-greedy case type: performance versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 04:40:54 2017 From: report at bugs.python.org (Mariatta Wijaya) Date: Tue, 04 Apr 2017 08:40:54 +0000 Subject: [New-bugs-announce] [issue29978] Remove remove merge=union attribute for Misc/NEWS in 3.6 and 2.7 Message-ID: <1491295254.15.0.485735428902.issue29978@psf.upfronthosting.co.za> New submission from Mariatta Wijaya: In https://github.com/python/cpython/pull/212, merge=union was added to the .gitattributes, but was later removed in https://github.com/python/cpython/pull/460. Somehow this attribute made its way into 3.6 and 2.7. I will remove it. ---------- assignee: Mariatta messages: 291115 nosy: Mariatta priority: normal severity: normal stage: needs patch status: open title: Remove remove merge=union attribute for Misc/NEWS in 3.6 and 2.7 versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 04:58:51 2017 From: report at bugs.python.org (Pierre Quentel) Date: Tue, 04 Apr 2017 08:58:51 +0000 Subject: [New-bugs-announce] [issue29979] cgi.parse_multipart is not consistent with FieldStorage Message-ID: <1491296331.11.0.181533719685.issue29979@psf.upfronthosting.co.za> New submission from Pierre Quentel: In the cgi module, the parse_multipart() function duplicates code from FieldStorage, and the result is not compliant with that of FieldStorage for requests sent with multipart/form-data: for non-file fields, the value associated with a key is a list of *bytes* in parse_multipart() and a list of *strings* for FieldStorage (the bytes decoded with the argument "encoding" passed to FieldStorage()). I will propose a PR on the GitHub repo with a version of parse_multipart that uses FieldStorage and returns the same result (values as strings). The function will take an additional argument "encoding". ---------- components: Library (Lib) messages: 291117 nosy: quentel priority: normal severity: normal status: open title: cgi.parse_multipart is not consistent with FieldStorage type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 13:12:21 2017 From: report at bugs.python.org (R. David Murray) Date: Tue, 04 Apr 2017 17:12:21 +0000 Subject: [New-bugs-announce] [issue29980] OSError: multiple exceptions should preserve the exception type if it is common Message-ID: <1491325941.34.0.941861469291.issue29980@psf.upfronthosting.co.za> New submission from R. David Murray: create_connection will try multiple times to connect if there are multiple addresses returned by getaddrinfo. If all connections fail, it inspects the exceptions, and raises the first one if they are all equal. But since the addresses are often different (else why would we try multiple times?), the messages will usually be different. 
When the messages are different, the code raises an OSError with a list of the exceptions so the user can see them all. This, however, loses the information as to *what* kind of exception occurred (i.e. ConnectionRefusedError, etc.). I propose that if all of the exceptions raised are of the same subclass, that that subclass be raised with the multi-message list, rather than the base OSError. ---------- components: asyncio messages: 291126 nosy: r.david.murray, yselivanov priority: normal severity: normal status: open title: OSError: multiple exceptions should preserve the exception type if it is common _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 14:01:11 2017 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 04 Apr 2017 18:01:11 +0000 Subject: [New-bugs-announce] [issue29981] Update Index set, dict, and generator 'comprehensions' Message-ID: <1491328871.52.0.864348862414.issue29981@psf.upfronthosting.co.za> New submission from Terry J. Reedy: The index currently has comprehensions *list* with *list* linked to 6.2.5 List displays. I suggest: 1. Link *comprehensions* to 6.2.4. Displays for lists, sets and dictionaries 2. Add subentries *set*, *dict*, and *generator* linked to 2a. 6.2.6. Set displays 2b. 6.2.7. Dictionary displays 2c. 6.2.8. Generator expressions We don't *call* generator expressions 'generator comprehensions', but that is what they are syntactically and one looking for 'comprehensions' should be able to find them there. There is already *list* ... *comprehensions* ... *list comprehensions* with 'list' and 'list comprehensions' linked to glossary entries, while 'list, comprehensions' links to the same section as 'comprehensions, list'. 3. Add 'set/dictionary, comprehensions' sub-entries linked like 'list, comprehensions' 4. Add Glossary entries and links like 'list comprehensions' ---------- assignee: docs at python components: Documentation keywords: easy messages: 291129 nosy: docs at python, terry.reedy priority: normal severity: normal stage: needs patch status: open title: Update Index set, dict, and generator 'comprehensions' type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 14:26:18 2017 From: report at bugs.python.org (Max) Date: Tue, 04 Apr 2017 18:26:18 +0000 Subject: [New-bugs-announce] [issue29982] tempfile.TemporaryDirectory fails to delete itself Message-ID: <1491330378.49.0.274398190766.issue29982@psf.upfronthosting.co.za> New submission from Max: There's a known issue with `shutil.rmtree` on Windows, in that it fails intermittently. The issue is well known (https://mail.python.org/pipermail/python-dev/2013-September/128353.html), and the agreement is that it cannot be cleanly solved inside `shutil` and should instead be solved by the calling app. Specifically, Python devs themselves faced it in their test suite and solved it by retrying the delete. However, what to do about `tempfile.TemporaryDirectory`? Is it considered the calling app, and therefore should retry the delete when it calls `shutil.rmtree` in its `cleanup` method? I don't think `tempfile` is covered by the same argument that protects `shutil.rmtree`, namely that it's too messy to solve in the standard library. 
My rationale is that while it's very easy for the end user to retry `shutil.rmtree`, it's far more difficult to fix the problem with `tempfile.TemporaryDirectory` not deleting itself - how would the end user retry the `cleanup` method (which is called from `weakref.finalize`)? So perhaps the retry loop should be added to `cleanup`. ---------- components: Library (Lib) messages: 291130 nosy: max priority: normal severity: normal status: open title: tempfile.TemporaryDirectory fails to delete itself type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 14:31:12 2017 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 04 Apr 2017 18:31:12 +0000 Subject: [New-bugs-announce] [issue29983] Reference TOC: expand 'Atoms' and 'Primaries' Message-ID: <1491330672.66.0.700114037172.issue29983@psf.upfronthosting.co.za> New submission from Terry J. Reedy: Today there is a fairly long python-list thread about the difficulty of finding info on 'dict comprehensions' in the doc. Point 1 is missing index entries. I opened #29981 for this. Point 2 is something I have also noticed: the obscurity of 'Atoms' and 'Primaries' as titles of sections in the Expressions chapter. These are fairly esoteric Computer Science language theory terms. Compare these to beginner-friendly 'Binary arithmetic operators' and 'Comparisons'. My specific suggestions, subject to change: Atoms, including identifiers, literals, displays, and comprehensions Primaries: attributes, subscripts, slices, and calls ---------- assignee: docs at python components: Documentation messages: 291131 nosy: docs at python, terry.reedy priority: normal severity: normal stage: needs patch status: open title: Reference TOC: expand 'Atoms' and 'Primaries' type: enhancement versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 16:02:39 2017 From: report at bugs.python.org (Robert Day) Date: Tue, 04 Apr 2017 20:02:39 +0000 Subject: [New-bugs-announce] [issue29984] Improve test coverage for 'heapq' module Message-ID: <1491336159.04.0.480090056771.issue29984@psf.upfronthosting.co.za> New submission from Robert Day: It's currently at 97%: Name Stmts Miss Cover Missing -------------------------------------------- Lib/heapq.py 262 7 97% 187, 351-352, 375-376, 606-607 I'm submitting a GitHub PR to fix it. ---------- components: Tests messages: 291136 nosy: Robert Day priority: normal severity: normal status: open title: Improve test coverage for 'heapq' module versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 18:00:54 2017 From: report at bugs.python.org (Chris Jerdonek) Date: Tue, 04 Apr 2017 22:00:54 +0000 Subject: [New-bugs-announce] [issue29985] make install doesn't seem to support --quiet Message-ID: <1491343254.68.0.574187952476.issue29985@psf.upfronthosting.co.za> New submission from Chris Jerdonek: When installing from source, the --quiet option works with "configure" and a bare "make": $ ./configure --quiet $ make --quiet However, it doesn't seem to work when passed to "make install" (and "make altinstall", etc). I tried a number of variations like: $ make --quiet install $ make install --quiet etc. The install output is quite verbose, so it would be useful to support --quiet. 
This should still allow warnings, etc., through like it does for configure and bare make. ---------- components: Installation messages: 291143 nosy: chris.jerdonek priority: normal severity: normal status: open title: make install doesn't seem to support --quiet type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 18:51:53 2017 From: report at bugs.python.org (Devin Jeanpierre) Date: Tue, 04 Apr 2017 22:51:53 +0000 Subject: [New-bugs-announce] [issue29986] Documentation recommends raising TypeError from tp_richcompare Message-ID: <1491346313.99.0.131506773991.issue29986@psf.upfronthosting.co.za> New submission from Devin Jeanpierre: I am not sure when TypeError is the right choice. Definitely, most of the time I've seen it done, it causes trouble, and NotImplemented usually does something better. For example, see the work in https://bugs.python.org/issue8743 to get set to interoperate correctly with other set-like classes --- a problem caused by the use of TypeError instead of returning NotImplemented (e.g. https://hg.python.org/cpython/rev/3615cdb3b86d). This advice seems to conflict with the usual and expected behavior of objects from Python: e.g. object().__lt__(1) returns NotImplemented rather than raising TypeError, despite < not "making sense" for object. Similarly for file objects and other uncomparable classes. Even complex numbers only return NotImplemented! >>> 1j.__lt__(1j) NotImplemented If this note should be kept, this section could use a decent explanation of the difference between "undefined" (should return NotImplemented) and "nonsensical" (should apparently raise TypeError). Perhaps a reference to an example from the stdlib. ---------- assignee: docs at python components: Documentation messages: 291144 nosy: Devin Jeanpierre, docs at python priority: normal pull_requests: 1167 severity: normal status: open title: Documentation recommends raising TypeError from tp_richcompare _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 19:35:15 2017 From: report at bugs.python.org (Thomas Antony) Date: Tue, 04 Apr 2017 23:35:15 +0000 Subject: [New-bugs-announce] [issue29987] inspect.isgeneratorfunction not working with partial functions Message-ID: <1491348915.09.0.197335598745.issue29987@psf.upfronthosting.co.za> New submission from Thomas Antony: When inspect.isgeneratorfunction is called on the output of functools.partial, it returns False even if the original function was a generator function. Test case is attached. 
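Roughly the following (a reconstruction for illustration, not the attached testcode.py):

    import functools, inspect

    def gen(n):
        yield from range(n)

    partial_gen = functools.partial(gen, 3)
    print(inspect.isgeneratorfunction(gen))          # True
    print(inspect.isgeneratorfunction(partial_gen))  # False, even though partial_gen() returns a generator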
Tested in fresh conda environment running Python 3.6.1 ---------- files: testcode.py messages: 291147 nosy: Thomas Antony priority: normal severity: normal status: open title: inspect.isgeneratorfunction not working with partial functions versions: Python 3.5 Added file: http://bugs.python.org/file46776/testcode.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 19:42:58 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Tue, 04 Apr 2017 23:42:58 +0000 Subject: [New-bugs-announce] [issue29988] (async) with blocks and try/finally are not as KeyboardInterrupt-safe as one might like Message-ID: <1491349378.03.0.0671954460373.issue29988@psf.upfronthosting.co.za> New submission from Nathaniel Smith: You might hope the interpreter would enforce the invariant that for 'with' and 'async with' blocks, either '__(a)enter__' and '__(a)exit__' are both called, or else neither of them is called. But it turns out that this is not true once KeyboardInterrupt gets involved ? even if we write our (async) context manager in C, or otherwise guarantee that the actual '__(a)enter__' and '__(a)exit__' methods are immune to KeyboardInterrupt. The invariant is *almost* preserved for 'with' blocks: there's one instruction (SETUP_WITH) that atomically (wrt signals) calls '__enter__' and then enters the implicit 'try' block, and there's one instruction (WITH_CLEANUP_START) that atomically enters the implicit 'finally' block and then calls '__exit__'. But there's a gap between exiting the 'try' block and WITH_CLEANUP_START where a signal can arrive and cause us to exit without running the 'finally' block at all. In this disassembly, the POP_BLOCK at offset 7 is the end of the 'try' block; if a KeyboardInterrupt is raised between POP_BLOCK and WITH_CLEANUP_START, then it will propagate out without '__exit__' being run: In [2]: def f(): ...: with a: ...: pass ...: In [3]: dis.dis(f) 2 0 LOAD_GLOBAL 0 (a) 3 SETUP_WITH 5 (to 11) 6 POP_TOP 3 7 POP_BLOCK 8 LOAD_CONST 0 (None) >> 11 WITH_CLEANUP_START 12 WITH_CLEANUP_FINISH 13 END_FINALLY 14 LOAD_CONST 0 (None) 17 RETURN_VALUE For async context managers, the race condition is substantially worse, because the 'await' dance is inlined into the bytecode: In [4]: async def f(): ...: async with a: ...: pass ...: In [5]: dis.dis(f) 2 0 LOAD_GLOBAL 0 (a) 3 BEFORE_ASYNC_WITH 4 GET_AWAITABLE 5 LOAD_CONST 0 (None) 8 YIELD_FROM 9 SETUP_ASYNC_WITH 5 (to 17) 12 POP_TOP 3 13 POP_BLOCK 14 LOAD_CONST 0 (None) >> 17 WITH_CLEANUP_START 18 GET_AWAITABLE 19 LOAD_CONST 0 (None) 22 YIELD_FROM 23 WITH_CLEANUP_FINISH 24 END_FINALLY 25 LOAD_CONST 0 (None) 28 RETURN_VALUE Really the sequence from 3 BEFORE_ASYNC_WITH to 9 SETUP_ASYNC_WITH should be atomic wrt signal delivery, and from 13 POP_BLOCK to 22 YIELD_FROM likewise. This probably isn't the highest priority bug in practice, but I feel like it'd be nice if this kind of basic language invariant could be 100% guaranteed, not just 99% guaranteed :-). And the 'async with' race condition is plausible to hit in practice, because if I have an '__aenter__' that's otherwise protected from KeyboardInterrupt, then it can run for some time, and any control-C during that time will get noticed just before the WITH_CLEANUP_START, so e.g. 'async with lock: ...' might complete while still holding the lock. The traditional solution would be to define single "super-instructions" that do all of the work we want to be atomic. 
This would be pretty tricky here though, because WITH_CLEANUP_START is a jump target (so naively we'd need to jump into the "middle" of a hypothetical new super-instruction), and because the implementation of YIELD_FROM kind of assumes that it's a standalone instruction exposed directly in the bytecode. Probably there is some solution to these issues but some cleverness would be required. An alternative approach would be to keep the current bytecode, but somehow mark certain stretches of bytecode as bad places to run signal handlers. The eval loop's "check for signal handlers" code is run rarely, so we could afford to do relatively expensive things like check a lookaside table that says "no signal handlers when 13 < f_lasti <= 22". Or we could steal a bit in the opcode encoding or something. ---------- components: Interpreter Core messages: 291148 nosy: ncoghlan, njs, yselivanov priority: normal severity: normal status: open title: (async) with blocks and try/finally are not as KeyboardInterrupt-safe as one might like versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 22:38:53 2017 From: report at bugs.python.org (Raphael Gaschignard) Date: Wed, 05 Apr 2017 02:38:53 +0000 Subject: [New-bugs-announce] [issue29989] subprocess.Popen does not handle file-like objects without file descriptors Message-ID: <1491359933.76.0.94329407382.issue29989@psf.upfronthosting.co.za> New submission from Raphael Gaschignard: From the documentation of the io module: fileno() Return the underlying file descriptor (an integer) of the stream if it exists. An OSError is raised if the IO object does not use a file descriptor. However, when passing a file-like object without a file descriptor (that raises OSError when calling f.fileno()) to Popen (for stdout, for example), the raised exception is not handled properly. (However, on inspection of subprocess code, returning -1 will cause the code to handle this properly.) I'm not sure whether this is an issue in the io module documentation or in the subprocess code. The core issue seems to be in Popen.get_handles, which seems to expect that -1 is used to signal "no file descriptor available". ---------- messages: 291151 nosy: rtpg priority: normal severity: normal status: open title: subprocess.Popen does not handle file-like objects without file descriptors type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 4 23:50:03 2017 From: report at bugs.python.org (Ma Lin) Date: Wed, 05 Apr 2017 03:50:03 +0000 Subject: [New-bugs-announce] [issue29990] Range checking in GB18030 decoder Message-ID: <1491364203.61.0.947006759836.issue29990@psf.upfronthosting.co.za> New submission from Ma Lin: This issue is split from issue24117; that issue became a soup of small issues, so I'm going to close it. For a 4-byte GB18030 sequence, the legal range is: 0x81-0xFE for the 1st byte 0x30-0x39 for the 2nd byte 0x81-0xFE for the 3rd byte 0x30-0x39 for the 4th byte GB18030 standard: https://en.wikipedia.org/wiki/GB_18030 https://pan.baidu.com/share/link?shareid=2606985291&uk=3341026630 The current code forgets to check 0xFE for the 1st and 3rd byte. Therefore, there are 8630 illegal 4-byte sequences that can be decoded by the GB18030 codec; here is an example: # legal sequence b'\x81\x31\x81\x30' is decoded to U+060A, it's fine. 
uchar = b'\x81\x31\x81\x30'.decode('gb18030') print(hex(ord(uchar))) # illegal sequence 0x8130FF30 can be decoded to U+060A as well, this should not happen. uchar = b'\x81\x30\xFF\x30' .decode('gb18030') print(hex(ord(uchar))) ---------- components: Unicode messages: 291153 nosy: Ma Lin, ezio.melotti, haypo priority: normal severity: normal status: open title: Range checking in GB18030 decoder type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 04:04:42 2017 From: report at bugs.python.org (Paresh Verma) Date: Wed, 05 Apr 2017 08:04:42 +0000 Subject: [New-bugs-announce] [issue29991] http client marks valid multipart headers with defects. Message-ID: <1491379482.82.0.913101246746.issue29991@psf.upfronthosting.co.za> New submission from Paresh Verma: When http client parses a multipart response, it always taints the headers with defects. e.g. Use the attached file to start a simple http server, using current python exec, with commands: ```python .\example_bug.py server``` and run client with: ```python .\example_bug.py client``` which outputs: """[StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()]""" even though the multipart response is correct. This appears to be happening because http.client, when parsing headers of response doesn't specifies the headersonly option, which leads to email.feedparser to parse response body (but http.client only passes header lines for parsing in parse_headers method, and the request body isn't available to email.feedparser). The issue has been mentioned at: https://github.com/shazow/urllib3/issues/800 https://github.com/Azure/azure-storage-python/issues/167 The submitted PR partially fixes the problem: ```..\python.bat .\example_bug.py client``` which outputs """[MultipartInvariantViolationDefect()]""" ---------- components: Library (Lib) files: example_bug.py messages: 291165 nosy: pareshverma91 priority: normal pull_requests: 1172 severity: normal status: open title: http client marks valid multipart headers with defects. type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file46780/example_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 04:22:05 2017 From: report at bugs.python.org (Tobias Oberstein) Date: Wed, 05 Apr 2017 08:22:05 +0000 Subject: [New-bugs-announce] [issue29992] Expose parse_string in JSONDecoder Message-ID: <1491380525.56.0.628505740968.issue29992@psf.upfronthosting.co.za> New submission from Tobias Oberstein: Though the JSONDecoder already has all the hooks internally to allow for a custom parse_string (https://github.com/python/cpython/blob/master/Lib/json/decoder.py#L330), this currently is not exposed in the constructor JSONDecoder.__init__. It would be nice to expose it. 
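The shape of the workaround this currently forces is roughly the following (a sketch built on the private json.scanner helper; my_parse_string is only a placeholder for whatever custom hook is needed):

    import json
    from json import scanner

    def my_parse_string(s, end, strict=True):
        ...  # custom string handling, must return (obj, new_end)

    class CustomDecoder(json.JSONDecoder):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.parse_string = my_parse_string
            # rebuild the scanner so the replacement hook is actually used
            self.scan_once = scanner.py_make_scanner(self)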
Currently, I need to do hack it: https://gist.github.com/oberstet/fa8b8e04b8d532912bd616d9db65101a ---------- messages: 291167 nosy: oberstet priority: normal severity: normal status: open title: Expose parse_string in JSONDecoder type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 05:30:45 2017 From: report at bugs.python.org (sijian liang) Date: Wed, 05 Apr 2017 09:30:45 +0000 Subject: [New-bugs-announce] [issue29993] error of parsing encoded words in email of standard library Message-ID: <1491384645.5.0.116667122318.issue29993@psf.upfronthosting.co.za> New submission from sijian liang: This issue is fixed in python3 see https://github.com/python/cpython/commit/07ea53cb218812404cdbde820647ce6e4b2d0f8e ---------- components: email messages: 291171 nosy: barry, r.david.murray, sijian liang priority: normal severity: normal status: open title: error of parsing encoded words in email of standard library type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 06:36:22 2017 From: report at bugs.python.org (Brecht Machiels) Date: Wed, 05 Apr 2017 10:36:22 +0000 Subject: [New-bugs-announce] [issue29994] site.USER_SITE is None for Windows embeddable Python 3.6 Message-ID: <1491388582.7.0.185144058431.issue29994@psf.upfronthosting.co.za> New submission from Brecht Machiels: Previous versions of the embeddable Python: Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import site >>> site.USER_SITE 'C:\\Users\\Brecht\\AppData\\Roaming\\Python\\Python35\\site-packages' >>> Version 3.6.0 and 3.6.1, both win32 and amd64: Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32 >>> import site >>> site.USER_SITE >>> This causes problems when importing pip for example. ---------- components: Windows messages: 291174 nosy: brechtm, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: site.USER_SITE is None for Windows embeddable Python 3.6 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 10:17:51 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 05 Apr 2017 14:17:51 +0000 Subject: [New-bugs-announce] [issue29995] re.escape() escapes too much Message-ID: <1491401871.1.0.717928370576.issue29995@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: re.escape() escapes all the characters except ASCII letters, numbers and '_'. This is too excessive, makes escaping and compiling slower and makes the pattern less human-readable. Characters "!\"%&\',/:;<=>@_`~" as well as non-ASCII characters are always literal in a regular expression and don't need escaping. Proposed patch makes re.escape() escaping only minimal set of characters that can have special meaning in regular expressions. This includes special characters ".\\[]{}()*+?^$|", "-" (a range in a character set), "#" (starts a comment in verbose mode) and ASCII whitespaces (ignored in verbose mode). The null character no longer need a special escaping. The patch also increases the speed of re.escape() (even if it produces the same result). 
$ ./python -m perf timeit -s 'from re import escape; s = "()[]{}?*+-|^$\\.# \t\n\r\v\f"' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 42.2 us +- 0.8 us Patched: Median +- std dev: 11.4 us +- 0.1 us $ ./python -m perf timeit -s 'from re import escape; s = b"()[]{}?*+-|^$\\.# \t\n\r\v\f"' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 38.7 us +- 0.7 us Patched: Median +- std dev: 18.4 us +- 0.2 us $ ./python -m perf timeit -s 'from re import escape; s = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 40.3 us +- 0.5 us Patched: Median +- std dev: 33.1 us +- 0.6 us $ ./python -m perf timeit -s 'from re import escape; s = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 54.4 us +- 0.7 us Patched: Median +- std dev: 40.6 us +- 0.5 us $ ./python -m perf timeit -s 'from re import escape; s = "??????????????????????????????????????????????????????????????????"' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 156 us +- 3 us Patched: Median +- std dev: 43.5 us +- 0.5 us $ ./python -m perf timeit -s 'from re import escape; s = "??????????????????????????????????????????????????????????????????".encode()' -- --duplicate 100 'escape(s)' Unpatched: Median +- std dev: 200 us +- 4 us Patched: Median +- std dev: 77.0 us +- 0.6 us And the speed of compilation of the escaped string. $ ./python -m perf timeit -s 'from re import escape; from sre_compile import compile; s = "??????????????????????????????????????????????????????????????????"; p = escape(s)' -- --duplicate 100 'compile(p)' Unpatched: Median +- std dev: 1.96 ms +- 0.02 ms Patched: Median +- std dev: 1.16 ms +- 0.02 ms $ ./python -m perf timeit -s 'from re import escape; from sre_compile import compile; s = "??????????????????????????????????????????????????????????????????".encode(); p = escape(s)' -- --duplicate 100 'compile(p)' Unpatched: Median +- std dev: 3.69 ms +- 0.04 ms Patched: Median +- std dev: 2.13 ms +- 0.03 ms ---------- components: Library (Lib), Regular Expressions messages: 291177 nosy: ezio.melotti, mrabarnett, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: re.escape() escapes too much type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 12:29:16 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 05 Apr 2017 16:29:16 +0000 Subject: [New-bugs-announce] [issue29996] Use terminal width by default in pprint Message-ID: <1491409756.85.0.158082896832.issue29996@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: pprint() uses width=80 by default. But the default output stream is sys.stdout, which is often connected to a terminal, and terminals now usually have a larger width than 80 columns. The proposed patch changes the default value of the width parameter in pprint(). If the width is not specified and the output is a terminal, then the width of the terminal is used. 
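The intended behaviour is roughly this (an illustration of the idea only, not the actual patch):

    import shutil, sys

    def _default_width(stream=None):
        stream = sys.stdout if stream is None else stream
        try:
            if stream.isatty():
                return shutil.get_terminal_size().columns
        except (AttributeError, OSError, ValueError):
            pass
        return 80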
---------- components: Library (Lib) messages: 291187 nosy: fdrake, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use terminal width by default in pprint type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 14:46:35 2017 From: report at bugs.python.org (Arthur Goldberg) Date: Wed, 05 Apr 2017 18:46:35 +0000 Subject: [New-bugs-announce] [issue29997] Suggested changes for https://docs.python.org/3.6/extending/extending.html Message-ID: <1491417995.71.0.832434605196.issue29997@psf.upfronthosting.co.za> New submission from Arthur Goldberg: I've just taught myself how to write C extensions to Python with https://docs.python.org/3.6/extending/extending.html. I think it's quite good. Nevertheless, I've some suggested improvements. These all use the vi s/// replacement syntax. Ambiguous 'it': s/If the latter header file does not exist on your system, it declares the functions malloc(), free() and realloc() directly./If the latter header file does not exist on your system, Python.h declares the functions malloc(), free() and realloc() directly./ Unclear, as 'The C function' refers to the specific example, whereas 'always has' implies that this applies to all calls from Python to C: s/The C function always has two arguments, conventionally/A C function called by Python always has two arguments, conventionally/ In PyMODINIT_FUNC PyInit_spam(void) { PyObject *m; m = PyModule_Create(&spammodule); if (m == NULL) return NULL; SpamError = PyErr_NewException("spam.error", NULL, NULL); Py_INCREF(SpamError); PyModule_AddObject(m, "error", SpamError); return m; } remove m = PyModule_Create(&spammodule); if (m == NULL) return NULL; and replace it with ... because it won't compile because spammodule has not been described yet on the page. Self-contradictory: 'normally always' is an oxymoron. s/It should normally always be METH_VARARGS or METH_VARARGS | METH_KEYWORDS; a value of 0 means that an obsolete variant of PyArg_ParseTuple() is used./It should always be METH_VARARGS or METH_VARARGS | METH_KEYWORDS; however, legacy code may use 0, which indicates that an obsolete variant of PyArg_ParseTuple() is being used./ Incomplete: this comment doesn't contain a complete thought s/module documentation, may be NULL/pointer to a string containing the module's documentation, or NULL if none is provided/ Provide hyperlink: for user convenience, add a hyperlink to 'Modules/xxmodule.c' s/included in the Python source distribution as Modules/xxmodule.c/included in the Python source distribution as Modules/xxmodule.c/ Incomplete: It would be good to lead programmers towards the easiest approach. s/ If you use dynamic loading,/ If you can use dynamic loading, the the easiest approach is to use Python's distutils module to build your module. 
If you use dynamic loading,/ ---------- assignee: docs at python components: Documentation messages: 291192 nosy: ArthurGoldberg, docs at python priority: normal severity: normal status: open title: Suggested changes for https://docs.python.org/3.6/extending/extending.html type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 16:34:36 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 05 Apr 2017 20:34:36 +0000 Subject: [New-bugs-announce] [issue29998] Pickling and copying ImportError doesn't preserve name and path Message-ID: <1491424476.08.0.673236809486.issue29998@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Pickling and copying ImportError doesn't preserve name and path attributes. >>> import copy, pickle >>> e = ImportError('test', name='n', path='p') >>> e.name 'n' >>> e.path 'p' >>> e2 = pickle.loads(pickle.dumps(e, 4)) >>> e2.name >>> e2.path >>> e2 = copy.copy(e) >>> e2.name >>> e2.path Proposed patch fixes this. ---------- components: Interpreter Core messages: 291194 nosy: brett.cannon, eric.snow, ncoghlan, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Pickling and copying ImportError doesn't preserve name and path type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 18:29:11 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 05 Apr 2017 22:29:11 +0000 Subject: [New-bugs-announce] [issue29999] repr() of ImportError misses keyword arguments name and path Message-ID: <1491431351.79.0.0513888316468.issue29999@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The repr of standard exceptions usually looks as exception constructor used for creating that exception. But the repr of ImportError misses keyword arguments name and path. >>> ImportError('test', name='somename', path='somepath') ImportError('test',) Proposed patch make the repr of ImportError containing keyword arguments. >>> ImportError('test', name='somename', path='somepath') ImportError('test', name='somename', path='somepath') I don't know how to classify this issue and whether the patch should be backported. ---------- components: Interpreter Core messages: 291200 nosy: brett.cannon, eric.snow, ncoghlan, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: repr() of ImportError misses keyword arguments name and path versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 19:28:22 2017 From: report at bugs.python.org (Ellison Marks) Date: Wed, 05 Apr 2017 23:28:22 +0000 Subject: [New-bugs-announce] [issue30000] Inconsistency in the zlib module Message-ID: <1491434902.87.0.645672242128.issue30000@psf.upfronthosting.co.za> New submission from Ellison Marks: In the zlib module, three of the methods support the wbits parameter, those being zlib.compressobj, zlib.decompress and zlib.decompressobj. zlib.compress does not support the wbits parameter. Looking at the source for these functions, those that support the wbits parameter use the "advanced" version of the zlib functions, deflateInit2 and inflateInit2, whereas zlib.compress uses the "basic" function deflateInit. 
The effect of this is that while you can decode from zlib data with non-default wbits values in one call with zlib.decompress, you cannot encode to zlib data with non-default wbits in one call with zlib.compress. You need to take to extra step of creating a compression object with the appropriate values, then use that to compress the data. eg: zlib.compress(data) # can't use wbits here vs. compressor = zlib.compressobj(wbits=16+zlib.MAX_WBITS) compressor.compress(data) + compressor.flush() Some quick benchmarking shows little speed difference between the two implementations: $ python -m timeit -s 'import zlib' -s 'import random' -s 'import string' -s 's="".join(random.choice(string.printable) for _ in xrange(10000000))' 'zlib.compress(s)' 10 loops, best of 3: 356 msec per loop $ python -m timeit -s 'import zlib' -s 'import random' -s 'import string' -s 's="".join(random.choice(string.printable) for _ in xrange(10000000))' 'compressor=zlib.compressobj()' 'compressor.compress(s)+compressor.flush()' 10 loops, best of 3: 364 msec per loop so I can't see any downside of switching zlib.compress to the "advanced" implementation and exposing the extra parameters to python. ---------- components: Library (Lib) messages: 291201 nosy: Ellison Marks priority: normal severity: normal status: open title: Inconsistency in the zlib module type: enhancement versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 20:50:50 2017 From: report at bugs.python.org (AJ Jordan) Date: Thu, 06 Apr 2017 00:50:50 +0000 Subject: [New-bugs-announce] [issue30001] CPython contribution docs reference missing /issuetracker page Message-ID: <1491439850.02.0.465996386955.issue30001@psf.upfronthosting.co.za> New submission from AJ Jordan: https://cpython-devguide.readthedocs.io/pullrequest.html#licensing (and presumably other pages in this project) references https://cpython-devguide.readthedocs.io/issuetracker, but this page returns 404 Not Found. ---------- assignee: docs at python components: Documentation messages: 291202 nosy: docs at python, strugee priority: normal severity: normal status: open title: CPython contribution docs reference missing /issuetracker page _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 22:29:36 2017 From: report at bugs.python.org (Arthur Goldberg) Date: Thu, 06 Apr 2017 02:29:36 +0000 Subject: [New-bugs-announce] [issue30002] Minor change to https://docs.python.org/3.6/extending/building.html Message-ID: <1491445776.09.0.388961752083.issue30002@psf.upfronthosting.co.za> New submission from Arthur Goldberg: The core example on this page starts: from distutils.core import setup, Extension module1 = Extension('demo', sources = ['demo.c']) ... I suggest that 'sources = ['demo.c']' be changed to 'sources = ['demomodule.c']', because this would make the example consistent with https://docs.python.org/3.6/extending/extending.html which says: "Begin by creating a file spammodule.c. (Historically, if a module is called spam, the C file containing its implementation is called spammodule.c; ... )" This minor change may help encourage this standard practice. 
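For reference, a sketch of how the opening of that example would read with the suggested rename applied (the setup() arguments below are abridged and partly assumed, not quoted from the page):

from distutils.core import setup, Extension

# 'demomodule.c' follows the spammodule.c naming convention used in the
# extending tutorial; only the source file name changes.
module1 = Extension('demo', sources=['demomodule.c'])

setup(name='demo',
      version='1.0',
      description='This is a demo package',
      ext_modules=[module1])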
Arthur ---------- assignee: docs at python components: Documentation messages: 291203 nosy: ArthurGoldberg, docs at python priority: normal severity: normal status: open title: Minor change to https://docs.python.org/3.6/extending/building.html type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 5 23:42:17 2017 From: report at bugs.python.org (Ma Lin) Date: Thu, 06 Apr 2017 03:42:17 +0000 Subject: [New-bugs-announce] [issue30003] Remove hz codec Message-ID: <1491450137.17.0.519356226961.issue30003@psf.upfronthosting.co.za> New submission from Ma Lin: hz is a Simplified Chinese codec, available in Python since around 2004. However, hz encoder has a serious bug, it forgets to escape ~ >>> 'hi~'.encode('hz') b'hi~' # the correct output should be b'hi~~' As a result, we can't finish a roundtrip: >>> b'hi~'.decode('hz') Traceback (most recent call last): File "", line 1, in UnicodeDecodeError: 'hz' codec can't decode byte 0x7e in position 2: incomplete multibyte In these years, no one has reported this bug, so I think it's pretty safe to remove hz codec. FYI: HZ codec is a 7-bit wrapper for GB2312, was formerly commonly used in email and USENET postings. It was designed in 1989 by Fung Fung Lee, and subsequently codified in 1995 into RFC 1843. It was popular in USENET networks, which in the late 1980s and early 1990s, generally did not allow transmission of 8-bit characters or escape characters. https://en.wikipedia.org/wiki/HZ_(character_encoding) Does other languages have hz codec? Java 8: no [1] .NET: yes [2] PHP: yes [3] Perl: yes [4] [1] http://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html [2] https://msdn.microsoft.com/en-us/library/system.text.encoding(v=vs.110).aspx [3] http://php.net/manual/en/mbstring.supported-encodings.php [4] http://perldoc.perl.org/Encode/CN.html ---------- components: Unicode messages: 291207 nosy: Ma Lin, ezio.melotti, haypo, xiang.zhang priority: normal severity: normal status: open title: Remove hz codec type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 6 00:40:42 2017 From: report at bugs.python.org (Cristian Barbarosie) Date: Thu, 06 Apr 2017 04:40:42 +0000 Subject: [New-bugs-announce] [issue30004] in regex-howto, improve example on grouping Message-ID: <1491453642.54.0.802330198745.issue30004@psf.upfronthosting.co.za> New submission from Cristian Barbarosie: In the Regular Expression HOWTO https://docs.python.org/3.6/howto/regex.html#regex-howto the last example in the "Grouping" section has a bug. The code is supposed to find repeated words, but it catches false repetitions. 
>>> p = re.compile(r'(\b\w+)\s+\1') >>> p.search('Paris in the the spring').group() 'the the' >>> p.search('k is the thermal coefficient').group() 'the the' I propose adding a \b after \1, this solves the problem : >>> p = re.compile(r'(\b\w+)\s+\1\b') >>> p.search('Paris in the the spring').group() 'the the' >>> print p.search('k is the thermal coefficient') None ---------- assignee: docs at python components: Documentation messages: 291209 nosy: Cristian Barbarosie, docs at python priority: normal severity: normal status: open title: in regex-howto, improve example on grouping type: enhancement versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 6 01:59:35 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 06 Apr 2017 05:59:35 +0000 Subject: [New-bugs-announce] [issue30005] Pickling and copying exceptions doesn't preserve non-__dict__ attributes Message-ID: <1491458375.12.0.99955546052.issue30005@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Pickling and copying exceptions preserves only __dict__ attributes. This includes writeable internal fields initialized in constructor: >>> import pickle, copy >>> e = StopIteration(12) >>> e.value = 34 >>> e.value 34 >>> e2 = pickle.loads(pickle.dumps(e, 4)) >>> e2.value 12 >>> e2 = copy.copy(e) >>> e2.value 12 And __slots__: >>> class E(Exception): __slots__ = ('x', 'y') ... >>> e = E() >>> e.x = 12 >>> e.x 12 >>> e2 = pickle.loads(pickle.dumps(e, 4)) >>> e2.x Traceback (most recent call last): File "", line 1, in AttributeError: x >>> e2 = copy.copy(e) >>> e2.x Traceback (most recent call last): File "", line 1, in AttributeError: x __context__, __cause__ and __traceback__ are lost too (see issue29466). Issue26579 is similar, but resolving it will not resolve this issue since BaseException has its own __reduce__ and __setstate__ implementations. The solution of this issue will look similar to issue29998, but more complex and general. ---------- components: Interpreter Core messages: 291212 nosy: alexandre.vassalotti, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Pickling and copying exceptions doesn't preserve non-__dict__ attributes type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 6 08:41:11 2017 From: report at bugs.python.org (Thomas Moreau) Date: Thu, 06 Apr 2017 12:41:11 +0000 Subject: [New-bugs-announce] [issue30006] Deadlocks in `concurrent.futures.ProcessPoolExecutor` Message-ID: <1491482471.73.0.550300132711.issue30006@psf.upfronthosting.co.za> New submission from Thomas Moreau: The design of ProcessPoolExecutor contains some possible race conditions that may freeze the interpreter due to deadlocks. This is notably the case with pickling and unpickling errors for a submitted job and returned results. This makes it hard to reuse a launched executor. We propose in the joint PR to fix some of those situations to make the ProcessPoolExecutor more robust to failure in the different threads and worker. 
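One situation of this kind, sketched as a hypothetical reproducer (the class and function names are made up; on the affected versions the future for an item whose argument cannot be pickled may never complete, so the call below can block forever):

from concurrent.futures import ProcessPoolExecutor

class Unpicklable:
    # Serializing this object fails in the thread that feeds work to the
    # worker processes, and the pending work item is silently dropped.
    def __reduce__(self):
        raise TypeError("deliberately not picklable")

def work(x):
    return x

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(work, Unpicklable())
        print(future.result())   # may block forever instead of raising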
---------- components: Library (Lib) messages: 291224 nosy: tomMoral priority: normal pull_requests: 1180 severity: normal status: open title: Deadlocks in `concurrent.futures.ProcessPoolExecutor` type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 6 08:45:18 2017 From: report at bugs.python.org (Yared Gebre) Date: Thu, 06 Apr 2017 12:45:18 +0000 Subject: [New-bugs-announce] [issue30007] report bug Message-ID: New submission from Yared Gebre: Hello, I am using python 3.6 could you look at this bug. Thanks. /home/yared/anaconda3/lib/python3.6/site-packages/mlimages/util/file_api.py in add_ext_name(cls, path, ext_name) 63 @classmethod 64 def add_ext_name(cls, path, ext_name):---> 65 name, ext = os.path.splitext(os.path.basename(path)) 66 added = os.path.join(os.path.dirname(path), name + ext_name + ext) 67 return added /home/yared/anaconda3/lib/python3.6/posixpath.py in basename(p) 142 def basename(p): 143 """Returns the final component of a pathname"""--> 144 p = os.fspath(p) 145 sep = _get_sep(p) 146 i = p.rfind(sep) + 1 TypeError: expected str, bytes or os.PathLike object, not ImageProperty ---------- messages: 291225 nosy: Yaredoh priority: normal severity: normal status: open title: report bug _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 6 12:34:45 2017 From: report at bugs.python.org (Mike Gilbert) Date: Thu, 06 Apr 2017 16:34:45 +0000 Subject: [New-bugs-announce] [issue30008] OpenSSL 1.1.0 deprecated functions Message-ID: <1491496485.65.0.239768530197.issue30008@psf.upfronthosting.co.za> New submission from Mike Gilbert: Some effort was made to port Python to OpenSSL 1.1.0 (see issue 26470). However, the code still uses several deprecated functions, and fails to compile against OpenSSL 1.1.0 if these functions are disabled. This may be replicated by building OpenSSL with --api=1.1.0. This will disable all functions marked as deprecated. I have attached a build log from the cpython master branch. Downstream bug: https://bugs.gentoo.org/show_bug.cgi?id=592480 ---------- components: Library (Lib) files: build.log messages: 291236 nosy: floppymaster priority: normal severity: normal status: open title: OpenSSL 1.1.0 deprecated functions type: compile error versions: Python 3.7 Added file: http://bugs.python.org/file46782/build.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 00:02:05 2017 From: report at bugs.python.org (Tri Nguyen) Date: Fri, 07 Apr 2017 04:02:05 +0000 Subject: [New-bugs-announce] [issue30009] Integer conversion failure Message-ID: <1491537725.94.0.500502995113.issue30009@psf.upfronthosting.co.za> New submission from Tri Nguyen: This code below shows a situation when Python int() library would return a value of int(1.0) -> 0.0 ---------------CODE---------------------------- CHANGES = [1.00, 0.50, 0.25, 0.10, 0.05, 0.01] # This code was originally to solve the least number of changes needed. # However, in an attempt to solve this. A bug is found. def get_change(R): for change in CHANGES: # This division and int() is where failure is happening num = int(R / change) # This printing line shows the failure. 
print 'int(%s)\t = %s' % (R / change, num) R = R - num * change print 'R = %s' % R get_change(4.01) -------------OUTPUT---------------------- int(4.01) = 4 int(0.02) = 0 int(0.04) = 0 int(0.1) = 0 int(0.2) = 0 int(1.0) = 0 # This should be 1, right? R = 0.01 ---------- components: Library (Lib) files: int_bug.py messages: 291249 nosy: nvutri priority: normal severity: normal status: open title: Integer conversion failure type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file46783/int_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 03:40:16 2017 From: report at bugs.python.org (Samuli G) Date: Fri, 07 Apr 2017 07:40:16 +0000 Subject: [New-bugs-announce] [issue30010] Initial bytes to BytesIO cannot be seeked to Message-ID: <1491550816.13.0.724625022482.issue30010@psf.upfronthosting.co.za> New submission from Samuli G: The initial bytes provided for the BytesIO constructor are lost when the stream is written to. Seeking to offset zero, and then getting the value of the entire buffer results of getting only the bytes that have been appended by calling "write". ---------- components: IO files: bytesio_bug.py messages: 291254 nosy: Samuli G priority: normal severity: normal status: open title: Initial bytes to BytesIO cannot be seeked to type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file46784/bytesio_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 04:26:37 2017 From: report at bugs.python.org (Alessandro Vesely) Date: Fri, 07 Apr 2017 08:26:37 +0000 Subject: [New-bugs-announce] [issue30011] HTMLParser class is not thread safe Message-ID: <1491553597.55.0.784750933175.issue30011@psf.upfronthosting.co.za> New submission from Alessandro Vesely: SYMPTOM: When used in a multithreaded program, instances of a class derived from HTMLParser may convert an entity or leave it alone, in an apparently random fashion. CAUSE: The class has a static attribute, entitydefs, which, on first use, is initialized from None to a dictionary of entity definitions. Initialization is not atomic. Therefore, instances in concurrent threads assume that initialization is complete and catch a KeyError if the entity at hand hasn't been set yet. In that case, the entity is left alone as if it were invalid. WORKAROUND: class Dummy(HTMLParser): """this class is defined here so that we can initialize its base class""" def __init__(self): HTMLParser.__init__(self) # Initialize HTMLParser by loading htmlentitydefs dummy = Dummy() dummy.feed('') del dummy, Dummy ---------- components: Library (Lib) messages: 291256 nosy: ale2017 priority: normal severity: normal status: open title: HTMLParser class is not thread safe type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 05:29:47 2017 From: report at bugs.python.org (Peter) Date: Fri, 07 Apr 2017 09:29:47 +0000 Subject: [New-bugs-announce] [issue30012] gzip.open(filename, "rt") fails on Python 2.7.11 on win32, invalid mode rtb Message-ID: <1491557387.23.0.595235956819.issue30012@psf.upfronthosting.co.za> New submission from Peter: Under Python 2, gzip.open defaults to giving (non-unicode) strings. Under Python 3, gzip.open defaults to giving bytes. 
Therefore it was fixed to allow text mode be specified, see http://bugs.python.org/issue13989 In order to write Python 2 and 3 compatible code to get strings from gzip, I now use: >>> import gzip >>> handle = gzip.open(filename, "rt") In general mode="rt" works great, but I just found this fails under Windows XP running Python 2.7, example below using the following gzipped plain text file: https://github.com/biopython/biopython/blob/master/Doc/examples/ls_orchid.gbk.gz This works perfectly on Linux giving strings on both Python 2 and 3 - not I am printing with repr to confirm we have a string object: $ python2.7 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 2.7.10 (default, Sep 28 2015, 13:58:31) [GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] Also with a slightly newer Python 2.7, $ /mnt/apps/python/2.7/bin/python -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 2.7.13 (default, Mar 9 2017, 15:07:48) [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)] $ python3.5 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.5.0 (default, Sep 28 2015, 11:25:31) [GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] $ python3.4 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.4.3 (default, Aug 21 2015, 11:12:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] $ python3.3 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.3.0 (default, Nov 7 2012, 21:52:39) [GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] This works perfectly on macOS giving strings on both Python 2 and 3: $ python2.7 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 2.7.10 (default, Jul 30 2016, 19:40:32) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] $ python3.6 -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline())); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.6.0 (v3.6.0:41df79263a11, Dec 22 2016, 17:23:13) [GCC 4.2.1 (Apple Inc. 
build 5666) (dot 3)] This works perfectly on Python 3 running on Windows XP, C:\repositories\biopython\Doc\examples>c:\Python33\python.exe -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline()\ )); import sys; print(sys.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.3.5 (v3.3.5:62cf4e77f785, Mar 9 2014, 10:37:12) [MSC v.1600 32 bit (Intel)] C:\repositories\biopython\Doc\examples> C:\Python34\python.exe -c "import gzip; print(repr(gzip.open('ls_orchid.gbk.gz', 'rt').readline(\ ))); import sys; print(sy s.version)" 'LOCUS Z78533 740 bp DNA linear PLN 30-NOV-2006\n' 3.4.4 (v3.4.4:737efcadf5a6, Dec 20 2015, 19:28:18) [MSC v.1600 32 bit (Intel)] However, it fails on Windows XP running Python 2.7.11 and (after upgrading) Python 2.7.13 though: C:\repositories\biopython\Doc\examples>c:\Python27\python -c "import sys; print(sys.version); import gzip; print(repr(gzip.open('ls_orch\ id.gbk.gz', 'rt').readlines()))" 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] Traceback (most recent call last): File "", line 1, in File "c:\Python27\lib\gzip.py", line 34, in open return GzipFile(filename, mode, compresslevel) File "c:\Python27\lib\gzip.py", line 94, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') ValueError: Invalid mode ('rtb') Note that the strangely contradictory mode seems to be accepted by Python 2.7 under Linux or macOS: $ python Python 2.7.10 (default, Sep 28 2015, 13:58:31) [GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gzip >>> gzip.open('ls_orchid.gbk.gz', 'rt') >>> quit() $ python2.7 Python 2.7.10 (default, Jul 30 2016, 19:40:32) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import gzip >>> gzip.open('ls_orchid.gbk.gz', 'rt') >>> quit() ---------- components: Library (Lib) messages: 291259 nosy: maubp priority: normal severity: normal status: open title: gzip.open(filename, "rt") fails on Python 2.7.11 on win32, invalid mode rtb versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 05:56:44 2017 From: report at bugs.python.org (Louie Lu) Date: Fri, 07 Apr 2017 09:56:44 +0000 Subject: [New-bugs-announce] [issue30013] Compiling warning in Modules/posixmodule.c Message-ID: <1491559004.62.0.18619105038.issue30013@psf.upfronthosting.co.za> New submission from Louie Lu: Using gcc-6.3.1 20170306 on Linux 4.10.1, it gave the warning: gcc -pthread -c -Wno-unused-result -Wsign-compare -g -Og -Wall -Wstrict-prototypes -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -I. -I./Include -DPy_BUILD_CORE -o Python/pyctype.o Python/pyctype.c :./Modules/posixmodule.c: In function ?os_major_impl?: ./Modules/posixmodule.c:8584:13: warning: In the GNU C Library, "major" is defined by . For historical compatibility, it is currently defined by as well, but we plan to remove this soon. To use "major", include directly. If you did not intend to use a system-defined macro "major", you should undefine it after including . return major(device); ^~~~~~~~~~~~~ ./Modules/posixmodule.c: In function ?os_minor_impl?: ./Modules/posixmodule.c:8601:13: warning: In the GNU C Library, "minor" is defined by . 
For historical compatibility, it is currently defined by as well, but we plan to remove this soon. To use "minor", include directly. If you did not intend to use a system-defined macro "minor", you should undefine it after including . return minor(device); ^~~~~~~~~~~~~ ./Modules/posixmodule.c: In function ?os_makedev_impl?: ./Modules/posixmodule.c:8619:13: warning: In the GNU C Library, "makedev" is defined by . For historical compatibility, it is currently defined by as well, but we plan to remove this soon. To use "makedev", include directly. If you did not intend to use a system-defined macro "makedev", you should undefine it after including . return makedev(major, minor); ^~~~~~~~~~~~~~~~~~~~~ The problem introduce in glibc 2.25, going to deprecate the definition of 'major', 'minor', and 'makedev' by sys/types.h. And the autoconf didn't change the behavior of `AC_HEADER_MAJOR`, see: https://lists.gnu.org/archive/html/autoconf/2016-08/msg00014.html There is a workaround path for this in libvirt, which take from autoconf patch, see: https://www.redhat.com/archives/libvir-list/2016-September/msg00459.html ---------- components: Extension Modules messages: 291260 nosy: louielu priority: normal severity: normal status: open title: Compiling warning in Modules/posixmodule.c type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 06:40:07 2017 From: report at bugs.python.org (Giampaolo Rodola') Date: Fri, 07 Apr 2017 10:40:07 +0000 Subject: [New-bugs-announce] [issue30014] Speedup DefaultSelectors.modify() by 2x Message-ID: <1491561607.45.0.330964035773.issue30014@psf.upfronthosting.co.za> New submission from Giampaolo Rodola': Patch in attachment modifies DefaultSelector.modify() so that it uses the underlying selector's modify() method instead of unregister() and register() resulting in a 2x speedup. Without patch: ~/svn/cpython {master}$ ./python bench.py 0.006010770797729492 With patch: ~/svn/cpython {master}$ ./python bench.py 0.00330352783203125 ---------- files: selectors_modify.diff keywords: patch messages: 291261 nosy: giampaolo.rodola priority: normal severity: normal status: open title: Speedup DefaultSelectors.modify() by 2x Added file: http://bugs.python.org/file46787/selectors_modify.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 07:28:03 2017 From: report at bugs.python.org (=?utf-8?b?5p6X5bu65L2R?=) Date: Fri, 07 Apr 2017 11:28:03 +0000 Subject: [New-bugs-announce] [issue30015] Windows also treats full-width spaces as a delimiter when parsing arguments Message-ID: <1491564483.72.0.230621154457.issue30015@psf.upfronthosting.co.za> New submission from ???: Windows also treats full-width spaces as a delimiter when parsing command line arguments. Therefore, subprocess.run() and subprocess.Popen() also need to quote the arg in the sequence of arguments if there is any full-width spaces in it. Example: >> subprocess.run(['foo', 'half-width space', 'full-width?space']) should be executed as >> foo "half-width space" "full-width?space" Windows will treat it as 3 arguments but now it is incorrectly executed as >> foo "half-width space" full-width?space Windows will treat it as 4 arguments ---------- components: Library (Lib), Windows messages: 291262 nosy: paul.moore, steve.dower, tim.golden, zach.ware, ??? 
priority: normal severity: normal status: open title: Windows also treats full-width spaces as a delimiter when parsing arguments type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 07:52:24 2017 From: report at bugs.python.org (Jensen Taylor) Date: Fri, 07 Apr 2017 11:52:24 +0000 Subject: [New-bugs-announce] [issue30016] No sideways scrolling in IDLE Message-ID: <1491565944.11.0.294911632507.issue30016@psf.upfronthosting.co.za> New submission from Jensen Taylor: This has been a bug since 2.7 as far as I know. ---------- assignee: terry.reedy components: IDLE messages: 291263 nosy: Jensen Taylor, terry.reedy priority: normal severity: normal status: open title: No sideways scrolling in IDLE type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 10:21:06 2017 From: report at bugs.python.org (Jeremy Heiner) Date: Fri, 07 Apr 2017 14:21:06 +0000 Subject: [New-bugs-announce] [issue30017] zlib Message-ID: <1491574866.61.0.594608147469.issue30017@psf.upfronthosting.co.za> New submission from Jeremy Heiner: I had some statements inside a `with` statement to write data to an entry in a ZipFile. It worked great. I added a second `with` statement containing almost exactly the same statements. That still worked great. I refactored those common statements into a function and called that function from the two `with` statements... and got an exception: zlib.error: Error -2 while flushing: inconsistent stream state I can't figure out why it matters whether the writing happens in the `with` or in the function called by the `with`, but here's a trimmed-down version of the code that demonstrates the problem: -------------------------------------------------------------------------------- #!/usr/bin/env python import io, pprint, zipfile from zipfile import ZIP_DEFLATED def printLiteral( data, out ) : encoder = io.TextIOWrapper( out, encoding='utf-8', write_through=True ) pprint.pprint( data, stream=encoder ) data = { 'not' : 'much', 'just' : 'some K \N{RIGHTWARDS WHITE ARROW} V pairs' } with zipfile.ZipFile( 'zzz.zip', mode='w', compression=ZIP_DEFLATED ) as myzip : with myzip.open( 'this one works', 'w' ) as out : encoder = io.TextIOWrapper( out, encoding='utf-8', write_through=True ) pprint.pprint( data, stream=encoder ) with myzip.open( 'this one fails', 'w' ) as out : printLiteral( data, out ) print( 'printed but entry still open' ) print( 'entry has been closed but not file' ) print( 'zip file has been closed' ) -------------------------------------------------------------------------------- And here's the output on my Arch Linux 64bit with package `python 3.6.0-2`... A co-worker sees the same behavior on MacOS 10.11.6 Python 3.6.1 : -------------------------------------------------------------------------------- printed but entry still open Traceback (most recent call last): File "zzz.py", line 21, in print( 'printed but entry still open' ) File "/usr/lib/python3.6/zipfile.py", line 995, in close buf = self._compressor.flush() zlib.error: Error -2 while flushing: inconsistent stream state -------------------------------------------------------------------------------- I tried debugging this in PyDev but got lost. Turning off the compression makes the exception go away. 
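The symptom is consistent with the io.TextIOWrapper created inside the helper being garbage-collected when the helper returns: that closes (and flushes) the underlying zip entry once, and the enclosing with block then flushes it again on exit. Assuming that diagnosis, a sketch of a workaround is to detach the wrapper before leaving the helper:

import io
import pprint

def printLiteral(data, out):
    encoder = io.TextIOWrapper(out, encoding='utf-8', write_through=True)
    try:
        pprint.pprint(data, stream=encoder)
    finally:
        # Hand the underlying stream back to the caller, so that garbage
        # collection of the wrapper does not close 'out' a second time.
        encoder.detach()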
---------- messages: 291275 nosy: Jeremy Heiner priority: normal severity: normal status: open title: zlib type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 13:25:19 2017 From: report at bugs.python.org (Charles McEachern) Date: Fri, 07 Apr 2017 17:25:19 +0000 Subject: [New-bugs-announce] [issue30018] multiprocessing.Pool garbles call stack for __new__ Message-ID: <1491585919.25.0.878478189526.issue30018@psf.upfronthosting.co.za> New submission from Charles McEachern: I'm calling the constructor of Foo, a subclass of str. Expected output: Called Foo.__new__ with args = ('TIMESTAMP', 'INPUT0') TIMESTAMP OUTPUT0 When I make the call using a multiprocessing.pool.ThreadPool, it works fine. But when I make the call using a multiprocessing.Pool (using the apply or apply_async method), I get: Called Foo.__new__ with args = ('TIMESTAMP', 'INPUT0') Called Foo.__new__ with args = ('TIMESTAMP OUTPUT0',) Exception in thread Thread-3: ... ValueError: Bad Foo input: ('TIMESTAMP OUTPUT0',) That is, the object I just constructed seems to be getting shoved right back into the constructor. When I swap out the Foo class for the similar Goo class, which is not a str, and uses __init__ instead of __new__, I again see no problems: Called Goo.__init__ with args = ('TIMESTAMP', 'INPUT0') I see this in 2.7.9 as well as 3.4.5. Looks like it's present in 2.7.2 and 3.5.2 as well: https://github.com/charles-uno/python-new-pool-bug/issues/1 ---------- components: Library (Lib) files: newpool.py messages: 291278 nosy: Charles McEachern priority: normal severity: normal status: open title: multiprocessing.Pool garbles call stack for __new__ type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file46790/newpool.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 16:15:46 2017 From: report at bugs.python.org (David E. Franco G.) Date: Fri, 07 Apr 2017 20:15:46 +0000 Subject: [New-bugs-announce] [issue30019] IDLE got unexpexted bahavior when trying to use some characters Message-ID: <1491596146.33.0.970912299828.issue30019@psf.upfronthosting.co.za> New submission from David E. Franco G.: wandering for the internet I fount some unicode character in a random comment, and just for curiosity I wanted to use python (3.6.1) to see their value, so I copy those characters and paste them in IDLE, and in doing so it just close without warning or explanation. the character in question are: ? ? (chr(128299) and chr(128298)) then I put them in a script text = "? ?" print(text) and try to load it but instead it open a new empty scrip, again without apparent reason, which for some reason I can't close, I needed to kill the process for that. I try the same with the IDLE in python 2.7.13 for the first one I got Unsupported characters in input which at least is something, and changing the script a little # -*- coding: utf-8 -*- text = u"? ?" print(text) it work without problem and print correctly. Also opening the script in interactive mode (python -i myscript.py) it work as expected and I get their numbers (that I put above). So why is that? and please fix it. ---------- assignee: terry.reedy components: IDLE messages: 291289 nosy: David E. 
Franco G., terry.reedy priority: normal severity: normal status: open title: IDLE got unexpexted bahavior when trying to use some characters type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 7 23:04:51 2017 From: report at bugs.python.org (Isaac Morland) Date: Sat, 08 Apr 2017 03:04:51 +0000 Subject: [New-bugs-announce] [issue30020] Make attrgetter use namedtuple Message-ID: <1491620691.38.0.777081738207.issue30020@psf.upfronthosting.co.za> New submission from Isaac Morland: I would find it useful if the tuples returned by attrgetter functions were namedtuples. An initial look at the code for attrgetter suggests that this would be an easy change and should make little difference to performance. Giving a namedtuple where previously a tuple was returned seems unlikely to trigger bugs in existing code so I propose to simply change attrgetter rather than providing a parameter to specify whether or not to use the new behaviour. Patch will be forthcoming but comments appreciated. ---------- components: Library (Lib) messages: 291314 nosy: Isaac Morland priority: normal severity: normal status: open title: Make attrgetter use namedtuple type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 8 04:12:54 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 08 Apr 2017 08:12:54 +0000 Subject: [New-bugs-announce] [issue30021] Add examples for re.escape() Message-ID: <1491639174.85.0.00703016878323.issue30021@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch adds examples of using re.escape(). See also issue29995. ---------- assignee: docs at python components: Documentation, Regular Expressions messages: 291326 nosy: docs at python, ezio.melotti, mrabarnett, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add examples for re.escape() type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 8 06:05:39 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 08 Apr 2017 10:05:39 +0000 Subject: [New-bugs-announce] [issue30022] Get rid of using EnvironmentError and IOError Message-ID: <1491645939.23.0.623640815202.issue30022@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: EnvironmentError and IOError now are aliases to OSError. But some code still use them. Proposed patch replaces all uses of EnvironmentError and IOError (except tests and scripts) with OSError. This will make the code cleaner and more uniform. ---------- messages: 291332 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Get rid of using EnvironmentError and IOError type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 02:16:55 2017 From: report at bugs.python.org (Philip Lee) Date: Sun, 09 Apr 2017 06:16:55 +0000 Subject: [New-bugs-announce] [issue30023] Example code becomes invalid for "Why do lambdas defined in a loop with different values all return the same result?" 
Message-ID: <1491718615.9.0.544401247511.issue30023@psf.upfronthosting.co.za> New submission from Philip Lee: The example code here becomes invalid https://docs.python.org/3/faq/programming.html#why-do-lambdas-defined-in-a-loop-with-different-values-all-return-the-same-result >>> squares = [] >>> for x in range(5): squares.append(lambda: x**2) >>> squares [<function <lambda> at 0x01FB7A08>, <function <lambda> at 0x01F82390>, <function <lambda> at 0x01FBA3D8>, <function <lambda> at 0x01FBA420>, <function <lambda> at 0x01FBA468>] >>> The returned value is a list of lambda functions, not numbers ---------- assignee: docs at python components: Documentation messages: 291353 nosy: docs at python, iMath priority: normal severity: normal status: open title: Example code becomes invalid for "Why do lambdas defined in a loop with different values all return the same result?" type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 07:11:54 2017 From: report at bugs.python.org (Victor Varvariuc) Date: Sun, 09 Apr 2017 11:11:54 +0000 Subject: [New-bugs-announce] [issue30024] Treat `import a.b.c as m` as `m = sys.modules['a.b.c']` Message-ID: <1491736314.56.0.10945117979.issue30024@psf.upfronthosting.co.za> New submission from Victor Varvariuc: https://mail.python.org/pipermail/python-ideas/2017-April/045405.html Hi there. I asked a question on Stackoverflow: > (Pdb) import brain.utils.mail > (Pdb) import brain.utils.mail as mail_utils > *** AttributeError: module 'brain.utils' has no attribute 'mail' > > I always thought that import a.b.c as m is roughly equivalent to m = sys.modules['a.b.c']. Why AttributeError? Python 3.6 It was pointed out to me that this is a somewhat weird behavior of Python: > The statement is not quite true, as evidenced by the corner case you met, namely if the required modules already exist in sys.modules but are yet uninitialized. The import ... as requires that the module foo.bar is injected in foo namespace as the attribute bar, in addition to being in sys.modules, whereas the from ... import ... as looks for foo.bar in sys.modules. Why would `import a.b.c` work when `a.b.c` is not yet fully imported, but `import a.b.c as my_c` would not? I thought it would be vice versa. Using `import a.b.c as my_c` allows avoiding a number of circular import issues. Right now I have to use `from a.b import c as my_c` as a workaround, but this doesn't look right. The enhancement is to treat `import x.y.z as m` as `m = importlib.import_module('x.y.z')`. I don't see how this will break anything. ---------- components: Interpreter Core messages: 291376 nosy: Victor.Varvariuc priority: normal severity: normal status: open title: Treat `import a.b.c as m` as `m = sys.modules['a.b.c']` type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 08:45:18 2017 From: report at bugs.python.org (David Kirkby) Date: Sun, 09 Apr 2017 12:45:18 +0000 Subject: [New-bugs-announce] [issue30025] useful information Message-ID: <1773686669.20170409154508@onetel.net> New submission from David Kirkby: Hi friend!
I know you were looking for that kind of information, I guess I've just found it, here, take a look http://lexion-consultants.com/face.php?bcbd In haste, david.kirkby ---------- messages: 291378 nosy: drkirkby priority: normal severity: normal status: open title: useful information _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 12:23:03 2017 From: report at bugs.python.org (Max) Date: Sun, 09 Apr 2017 16:23:03 +0000 Subject: [New-bugs-announce] [issue30026] Hashable doesn't check for __eq__ Message-ID: <1491754983.55.0.46541300109.issue30026@psf.upfronthosting.co.za> New submission from Max: I think collections.abc.Hashable.__subclasshook__ should check __eq__ method in addition to __hash__ method. This helps detect classes that are unhashable due to: to __eq__ = None Of course, it still cannot detect: def __eq__: return NotImplemented but it's better than nothing. In addition, it's probably worth documenting that explicitly inheriting from Hashable has (correct but unexpected) effect of *suppressing* hashability that was already present: from collections.abc import Hashable class X: pass assert issubclass(X, Hashable) x = X() class X(Hashable): pass assert issubclass(X, Hashable) x = X() # Can't instantiate abstract class X with abstract methods ---------- assignee: docs at python components: Documentation, Interpreter Core messages: 291382 nosy: docs at python, max priority: normal severity: normal status: open title: Hashable doesn't check for __eq__ _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 15:36:58 2017 From: report at bugs.python.org (Arfrever Frehtes Taifersar Arahesis) Date: Sun, 09 Apr 2017 19:36:58 +0000 Subject: [New-bugs-announce] [issue30027] test_xml_etree and test_xml_etree_c fail due to AssertionError: unhandled warning DeprecationWarning Message-ID: <1491766618.38.0.276477637378.issue30027@psf.upfronthosting.co.za> New submission from Arfrever Frehtes Taifersar Arahesis: test_xml_etree.py and test_xml_etree_c.py on 2.7 branch fail since this commit: commit 68903b656d4e1011525a46cbd1338c6cbab83d6d Author: Serhiy Storchaka Date: Sun Apr 2 16:55:43 2017 +0300 bpo-15083: Convert ElementTree doctests to unittests. (#906) Output of test suite: [388/399] test_xdrlib [389/399] test_xml_etree /tmp/cpython/Lib/test/test_xml_etree.py:1674: DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x class MutatingElementPath(str): /tmp/cpython/Lib/test/test_xml_etree.py:1684: DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x class BadElementPath(str): test test_xml_etree crashed -- : unhandled warning DeprecationWarning('Overriding __eq__ blocks inheritance of __hash__ in 3.x',) [390/399/1] test_xml_etree_c test test_xml_etree_c crashed -- : unhandled warning DeprecationWarning('classic int division',) [391/399/2] test_xmllib ... 360 tests OK. 2 tests failed: test_xml_etree test_xml_etree_c Re-running failed tests in verbose mode Re-running test 'test_xml_etree' in verbose mode /tmp/cpython/Lib/test/test_xml_etree.py:1674: DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x class MutatingElementPath(str): /tmp/cpython/Lib/test/test_xml_etree.py:1684: DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x class BadElementPath(str): ... 
Ran 117 tests in 0.934s OK test test_xml_etree crashed -- : unhandled warning DeprecationWarning('Overriding __eq__ blocks inheritance of __hash__ in 3.x',) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 989, in runtest_inner File "/tmp/cpython/Lib/test/test_xml_etree.py", line 2671, in test_main support.run_unittest(*test_classes) File "/tmp/cpython/Lib/test/test_xml_etree.py", line 2630, in __exit__ self.checkwarnings.__exit__(*args) File "/tmp/cpython/Lib/contextlib.py", line 24, in __exit__ self.gen.next() File "/tmp/cpython/Lib/test/test_support.py", line 905, in _filterwarnings raise AssertionError("unhandled warning %r" % reraise[0]) AssertionError: unhandled warning DeprecationWarning('Overriding __eq__ blocks inheritance of __hash__ in 3.x',) Re-running test 'test_xml_etree_c' in verbose mode ... Ran 118 tests in 0.112s OK (skipped=19) test test_xml_etree_c crashed -- : unhandled warning DeprecationWarning('classic int division',) Traceback (most recent call last): File "./Lib/test/regrtest.py", line 989, in runtest_inner File "/tmp/cpython/Lib/test/test_xml_etree_c.py", line 64, in test_main test_xml_etree.test_main(module=cET) File "/tmp/cpython/Lib/test/test_xml_etree.py", line 2671, in test_main support.run_unittest(*test_classes) File "/tmp/cpython/Lib/test/test_xml_etree.py", line 2630, in __exit__ self.checkwarnings.__exit__(*args) File "/tmp/cpython/Lib/contextlib.py", line 24, in __exit__ self.gen.next() File "/tmp/cpython/Lib/test/test_support.py", line 905, in _filterwarnings raise AssertionError("unhandled warning %r" % reraise[0]) AssertionError: unhandled warning DeprecationWarning('classic int division',) 2 tests failed again: test_xml_etree test_xml_etree_c ---------- assignee: serhiy.storchaka components: Tests messages: 291392 nosy: Arfrever, eli.bendersky, ezio.melotti, serhiy.storchaka priority: normal severity: normal status: open title: test_xml_etree and test_xml_etree_c fail due to AssertionError: unhandled warning DeprecationWarning versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 16:00:25 2017 From: report at bugs.python.org (Anselm Kruis) Date: Sun, 09 Apr 2017 20:00:25 +0000 Subject: [New-bugs-announce] [issue30028] make test.support.temp_cwd() fork-safe Message-ID: <1491768025.09.0.83114690073.issue30028@psf.upfronthosting.co.za> New submission from Anselm Kruis: The context manager test.support.temp_cwd() creates a temporary directory and removes it on exit. The test runner test.regrtest uses this context manager. I observed an annoying behaviour of test.support.temp_cwd() on Linux/UNIX: if the code, that runs in the temp_cwd() context forks and if the forked child terminates (without calling exec), then the temporary directory will be removed twice: by the child and by the parent. This can cause errors in the parent, if the parent tries to access the no longer existing directory. I discovered this problem, when a test in test_multiprocessing_fork failed and the test directory for the complete test.regrtest-run got removed. Of course all other tests failed too. I propose to modify test.support.temp_cwd() to remove the created directory only, if the process id (os.getpid()) is unchanged. I'll create a pull request. 
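A minimal sketch of the proposed guard, using a simplified stand-in for test.support.temp_cwd() (the real helper does considerably more; the names here are illustrative only):

import contextlib
import os
import shutil

@contextlib.contextmanager
def temp_cwd_fork_safe(path):
    owner_pid = os.getpid()      # the process that creates the directory
    saved_cwd = os.getcwd()
    os.mkdir(path)
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(saved_cwd)
        # A forked child that exits inside the block reaches this clause
        # too; only the creating process removes the directory.
        if os.getpid() == owner_pid:
            shutil.rmtree(path)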
---------- components: Tests messages: 291396 nosy: anselm.kruis priority: normal severity: normal status: open title: make test.support.temp_cwd() fork-safe type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 9 19:26:16 2017 From: report at bugs.python.org (anthony shaw) Date: Sun, 09 Apr 2017 23:26:16 +0000 Subject: [New-bugs-announce] [issue30029] Compiler "'await' outside function" error message is unreachable Message-ID: <1491780376.9.0.908690557717.issue30029@psf.upfronthosting.co.za> New submission from anthony shaw: This is related to issue26188. Using await in a simple statement (outside of an async def method) raises SyntaxError with the unhelpful message "invalid syntax". It seems obvious once you've read PEP492 in detail, but I think that as more and more developers use async/await this will stump lots of people. I've been trying to pick apart where this constraint is raised to see whether I can help with a PR; I've been through the Grammar, the Parser and then the AST and Compiler. Looking at https://github.com/python/cpython/blob/master/Python/compile.c#L4307-L4319 I can see there are helpful error messages, but during the tokenizer phase it checks that the syntax cannot be used unless you're within an async def method https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Parser/tokenizer.c#L1574-L1583 I can't reproduce this in a REPL for 3.7.0a0, it never gets any further than the grammar or tokenizer phase. ---------- messages: 291398 nosy: anthonypjshaw priority: normal severity: normal status: open title: Compiler "'await' outside function" error message is unreachable versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 10 05:17:38 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 10 Apr 2017 09:17:38 +0000 Subject: [New-bugs-announce] [issue30030] Simplify _RandomNameSequence Message-ID: <1491815858.77.0.234884449004.issue30030@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: _RandomNameSequence was added in issue589982 when generator functions were a new optional feature. _RandomNameSequence is implemented as an iterator class. Proposed patch simplifies _RandomNameSequence by implementing it as a generator function. This is a private name; all uses of _RandomNameSequence() need only the support of the iterator protocol. ---------- components: Library (Lib) messages: 291425 nosy: haypo, pitrou, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Simplify _RandomNameSequence type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 10 09:06:37 2017 From: report at bugs.python.org (Pavlo Kapyshin) Date: Mon, 10 Apr 2017 13:06:37 +0000 Subject: [New-bugs-announce] [issue30031] Improve queens demo (use argparse and singular form) Message-ID: <1491829597.98.0.553566597714.issue30031@psf.upfronthosting.co.za> New submission from Pavlo Kapyshin: Currently Tools/demo/queens.py: - does manual sys.argv parsing - says "Found 1 solutions" I propose to: 1) use argparse; 2) if q.nfound == 1, use "solution". If you are ok with this, I'll make a pull request.
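A rough sketch of both suggestions (the option name, default and report() helper below are assumptions for illustration, not taken from the demo):

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="N queens solver demo")
    parser.add_argument("n", nargs="?", type=int, default=8,
                        help="board size (default: 8)")
    return parser.parse_args()

def report(nfound):
    # Use the singular form when exactly one solution was found.
    word = "solution" if nfound == 1 else "solutions"
    print("Found %d %s" % (nfound, word))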
---------- components: Demos and Tools messages: 291428 nosy: paka priority: normal severity: normal status: open title: Improve queens demo (use argparse and singular form) type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 10 13:41:26 2017 From: report at bugs.python.org (Jon Ribbens) Date: Mon, 10 Apr 2017 17:41:26 +0000 Subject: [New-bugs-announce] [issue30032] email module creates base64 output with incorrect line breaks Message-ID: <1491846086.02.0.299208621231.issue30032@psf.upfronthosting.co.za> New submission from Jon Ribbens: The email module, when creating base64-encoded text parts, does not process line breaks correctly - RFC 2045 s6.8 says that line breaks must be converted to CRLF before base64-encoding, and the email module is not doing this. >>> from email.mime.text import MIMEText >>> import base64 >>> m = MIMEText("hello\nthere", _charset="utf-8") >>> m.as_string() 'Content-Type: text/plain; charset="utf-8"\nMIME-Version: 1.0\nContent-Transfer-Encoding: base64\n\naGVsbG8KdGhlcmU=\n' >>> base64.b64decode("aGVsbG8KdGhlcmU=") b'hello\nthere' You might say that it is the application's job to convert the line endings before calling MIMEText(), but I think all application authors would be surprised by this. Certainly the MailMan authors would be, as they say this is a Python bug not a MailMan bug ;-) ---------- components: Library (Lib) messages: 291434 nosy: jribbens priority: normal severity: normal status: open title: email module creates base64 output with incorrect line breaks type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 10 13:45:40 2017 From: report at bugs.python.org (Jon Ribbens) Date: Mon, 10 Apr 2017 17:45:40 +0000 Subject: [New-bugs-announce] [issue30033] email module base64-encodes utf-8 text Message-ID: <1491846340.8.0.762117767462.issue30033@psf.upfronthosting.co.za> New submission from Jon Ribbens: The email module, when creating text parts using character encoding utf-8, base64-encodes the output even though this is often inappropriate (e.g. if it is a Western language it is almost never appropriate). >>> from email.mime.text import MIMEText >>> m = MIMEText("hello", _charset="utf-8") >>> m.as_string() 'Content-Type: text/plain; charset="utf-8"\nMIME-Version: 1.0\nContent-Transfer-Encoding: base64\n\naGVsbG8=\n' ---------- components: Library (Lib) messages: 291435 nosy: jribbens priority: normal severity: normal status: open title: email module base64-encodes utf-8 text type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 10 17:50:25 2017 From: report at bugs.python.org (Keith Erskine) Date: Mon, 10 Apr 2017 21:50:25 +0000 Subject: [New-bugs-announce] [issue30034] csv reader chokes on bad quoting in large files Message-ID: <1491861025.06.0.741376367599.issue30034@psf.upfronthosting.co.za> New submission from Keith Erskine: If a csv file has a quote character at the beginning of a field but no closing quote, the csv module will keep reading the file until the very end in an attempt to close out the field. 
It's true this situation occurs only when the quoting in a csv file is incorrect, but it would be extremely helpful if the csv reader could be told to stop reading each row of fields when it encounters a newline character, even if it is within a quoted field at the time. At the moment, with large files, the csv reader will typically error out in this situation once it reads the maximum size of a string. Furthermore, this is not an easy situation to trap with custom code. Here's an example of what I'm talking about. For a csv file with the following content: a,b,c d,"e,f g,h,i This code: import csv with open('file.txt') as f: reader = csv.reader(f) for row in reader: print(row) returns: ['a', 'b', 'c'] ['d', 'e,f\ng,h,i\n'] Note that the whole of the file after "e", including delimiters and newlines, has been added to the second field on the second line. This is correct csv behavior but is very unhelpful to me in this situation. On the grounds that most csv files do not have multiline values within them, perhaps a new dialect attribute called "multiline" could be added to the csv module, defaulting to True for backwards compatibility. It would indicate whether the csv file has any field values within it that span more than one line. If multiline is False, then the "parse_process_char" function in "_csv" would always close out a row of fields when it encounters a newline character. It might be best if this multiline attribute were taken into account only when "strict" is False. Right now, I do get badly-formatted files like this, and I cannot ask the source for a new file. I have to manually correct the file using a mixture of custom scripts and vi before the csv module will read it. It would be very helpful if csv would handle this directly. ---------- messages: 291453 nosy: keef604 priority: normal severity: normal status: open title: csv reader chokes on bad quoting in large files type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 00:34:45 2017 From: report at bugs.python.org (Sunyeop Lee) Date: Tue, 11 Apr 2017 04:34:45 +0000 Subject: [New-bugs-announce] [issue30035] [RFC] PyMemberDef.name should be const char * Message-ID: <1491885285.14.0.166738608305.issue30035@psf.upfronthosting.co.za> New submission from Sunyeop Lee: PyMemberDef from the Noddy example: https://docs.python.org/3/extending/newtypes.html static PyMemberDef Noddy_members[] = { {"first", T_OBJECT_EX, offsetof(Noddy, first), 0, "first name"}, {"last", T_OBJECT_EX, offsetof(Noddy, last), 0, "last name"}, {"number", T_INT, offsetof(Noddy, number), 0, "noddy number"}, {NULL} /* Sentinel */ }; When compiling the code with the PyMemberDef above with GCC, it compiles well. However, with G++, ISO C++11 complains (warns) that converting a string literal to 'char *' is deprecated [-Wc++11-compat-deprecated-writable-strings]. Should the example code be fixed, or should PyMemberDef be fixed? I think PyMemberDef.name should be const char * instead of char *.
Compiled with: g++ test.cpp -I/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/include/python3.6m -L/usr/local/opt/python3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/config-3.6m-darwin -lpython3.6m -fPIC -shared Compiler versions: $ gcc -v Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 8.1.0 (clang-802.0.38) Target: x86_64-apple-darwin16.5.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin $ g++ -v Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 8.1.0 (clang-802.0.38) Target: x86_64-apple-darwin16.5.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin ---------- components: Extension Modules files: test.cpp messages: 291459 nosy: Sunyeop Lee priority: normal severity: normal status: open title: [RFC] PyMemberDef.name should be const char * type: compile error versions: Python 3.6 Added file: http://bugs.python.org/file46795/test.cpp _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 03:45:41 2017 From: report at bugs.python.org (Dutcho) Date: Tue, 11 Apr 2017 07:45:41 +0000 Subject: [New-bugs-announce] [issue30036] The bugs website doesn't use httpS by default Message-ID: <1491896741.65.0.130091825674.issue30036@psf.upfronthosting.co.za> New submission from Dutcho: The footer of httpS://python.org links to httP://bugs.python.org, compromising user data for login and register options ---------- assignee: christian.heimes components: SSL messages: 291463 nosy: Dutcho, christian.heimes priority: normal severity: normal status: open title: The bugs website doesn't use httpS by default type: security versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 03:51:50 2017 From: report at bugs.python.org (Dutcho) Date: Tue, 11 Apr 2017 07:51:50 +0000 Subject: [New-bugs-announce] [issue30037] inspect documentation on code attributes incomplete Message-ID: <1491897110.71.0.340435065051.issue30037@psf.upfronthosting.co.za> New submission from Dutcho: The table at the top of the inspect documentation (https://docs.python.org/3/library/inspect.html#types-and-members) omits co_cellvars, co_freevars, and co_kwonlyargcount attributes of type code (note: the type's doc string does provide these attributes) ---------- assignee: docs at python components: Documentation messages: 291464 nosy: Dutcho, docs at python priority: normal severity: normal status: open title: inspect documentation on code attributes incomplete type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 05:11:23 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Tue, 11 Apr 2017 09:11:23 +0000 Subject: [New-bugs-announce] [issue30038] Race condition in how trip_signal writes to wakeup fd Message-ID: <1491901883.05.0.966298195873.issue30038@psf.upfronthosting.co.za> New submission from Nathaniel Smith: In trip_signal [1], the logic goes: 1) set the flag saying that this particular signal was tripped 2) write to the wakeup fd 3) set the global is_tripped flag saying "at least 
one signal was tripped", and do Py_AddPendingCall (which sets some global flags that the bytecode interpreter checks on every pass through the loop) So the problem here is that it's step (2) that wakes up the main thread to check for signals, but it's step (3) that actually arranges for the Python-level signal handler to run. (Step (1) turns out to be irrelevant, because no-one looks at the per-signal flags unless the global is_tripped flag is set. This might be why no-one noticed this bug through code inspection though ? I certainly missed it, despite explicitly checking for it several times!) The result is that the following sequence of events is possible: - signal arrives (e.g. SIGINT) - trip_signal writes to the wakeup fd - the main thread blocked in IO wait sees this, and wakes up - the main thread checks for signals, and doesn't find any - the main thread empties the wakeup fd - the main thread goes back to sleep - trip_signal sets the flags to request the Python-level signal handler be run - the main thread doesn't notice, because it's asleep It turns out that this is a real thing that can actually happen; it's causing an annoying intermittent failure in the trio testsuite on appveyor; and under the correct conditions I can reproduce it very reliably in my local Windows VM. See [2]. I think the fix is just to swap the order of steps (2) and (3), so we write to the wakeup fd last. Unfortunately I can't easily test this because I don't have a way to build CPython on Windows. But [2] has some IMHO pretty compelling evidence that this is what's happening. [1] https://github.com/python/cpython/blob/6fab78e9027f9ebd6414995580781b480433e595/Modules/signalmodule.c#L238-L291 [2] https://github.com/python-trio/trio/issues/119 ---------- messages: 291467 nosy: haypo, njs priority: normal severity: normal status: open title: Race condition in how trip_signal writes to wakeup fd versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 05:33:27 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Tue, 11 Apr 2017 09:33:27 +0000 Subject: [New-bugs-announce] [issue30039] Resuming a 'yield from' stack is broken if a signal arrives in the middle Message-ID: <1491903207.11.0.00986687800627.issue30039@psf.upfronthosting.co.za> New submission from Nathaniel Smith: If we have a chain of generators/coroutines that are 'yield from'ing each other, then resuming the stack works like: - call send() on the outermost generator - this enters _PyEval_EvalFrameDefault, which re-executes the YIELD_FROM opcode - which calls send() on the next generator - which enters _PyEval_EvalFrameDefault, which re-executes the YIELD_FROM opcode - ...etc. However, every time we enter _PyEval_EvalFrameDefault, the first thing we do is to check for pending signals, and if there are any then we run the signal handler. And if it raises an exception, then we immediately propagate that exception *instead* of starting to execute bytecode. This means that e.g. a SIGINT at the wrong moment can "break the chain" ? it can be raised in the middle of our yield from chain, with the bottom part of the stack abandoned for the garbage collector. The fix is pretty simple: there's already a special case in _PyEval_EvalFrameEx where it skips running signal handlers if the next opcode is SETUP_FINALLY. (I don't see how this accomplishes anything useful, but that's another story.) 
If we extend this check to also skip running signal handlers when the next opcode is YIELD_FROM, then that closes the hole -- now the exception can only be raised at the innermost stack frame. This shouldn't have any performance implications, because the opcode check happens inside the "slow path" after we've already determined that there's a pending signal or something similar for us to process; the vast majority of the time this isn't true. I'll post a PR in a few minutes that has a test case that demonstrates the problem and fails on current master, plus the fix. ---------- components: Interpreter Core messages: 291469 nosy: njs, yselivanov priority: normal severity: normal status: open title: Resuming a 'yield from' stack is broken if a signal arrives in the middle versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 05:39:20 2017 From: report at bugs.python.org (INADA Naoki) Date: Tue, 11 Apr 2017 09:39:20 +0000 Subject: [New-bugs-announce] [issue30040] new empty dict can be smaller Message-ID: <1491903560.12.0.959603376514.issue30040@psf.upfronthosting.co.za> New submission from INADA Naoki: dict.clear() switches the dict to the empty key-sharing keys object to reduce its size. A new dict can use the same technique. $ ./python.default Python 3.7.0a0 (heads/master:6dfcc81, Apr 10 2017, 19:55:52) [GCC 6.2.0 20161005] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> d = {} >>> sys.getsizeof(d) 240 >>> d.clear() >>> sys.getsizeof(d) 72 $ ./python.patched Python 3.7.0a0 (heads/master-dirty:6dfcc81, Apr 11 2017, 18:11:02) [GCC 6.2.0 20161005] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.getsizeof({}) 72 ---------- messages: 291470 nosy: inada.naoki priority: normal severity: normal status: open title: new empty dict can be smaller _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 06:04:41 2017 From: report at bugs.python.org (Dimitri Merejkowsky) Date: Tue, 11 Apr 2017 10:04:41 +0000 Subject: [New-bugs-announce] [issue30041] subprocess: weird behavior with shell=True and args being a list Message-ID: <1491905081.01.0.57278380542.issue30041@psf.upfronthosting.co.za> New submission from Dimitri Merejkowsky: If you have: subprocess.run(["ls", "--help"], shell=True) you'll see that the command run is actually just "ls", not "ls --help". ---------- components: Library (Lib) messages: 291473 nosy: Dimitri Merejkowsky priority: normal severity: normal status: open title: subprocess: weird behavior with shell=True and args being a list _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 09:21:34 2017 From: report at bugs.python.org (karan) Date: Tue, 11 Apr 2017 13:21:34 +0000 Subject: [New-bugs-announce] [issue30042] fcntl module for windows Message-ID: <1491916894.36.0.686800924149.issue30042@psf.upfronthosting.co.za> New submission from karan: "NameError: global name 'fcntl' is not defined" -- such an error occurs while running software on windows. Is the fcntl module (used by urwid) available for windows?
---------- messages: 291490 nosy: kk_pednekar priority: normal severity: normal status: open title: fcntl module for windows type: resource usage versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 09:41:57 2017 From: report at bugs.python.org (karan) Date: Tue, 11 Apr 2017 13:41:57 +0000 Subject: [New-bugs-announce] [issue30043] fcntl module for windows platform Message-ID: <1491918117.96.0.537988722419.issue30043@psf.upfronthosting.co.za> New submission from karan: Is there any fcntl module alternative available that would support the windows platform? In any future release of python, is it possible that fcntl would support windows? ---------- messages: 291493 nosy: kk_pednekar priority: normal severity: normal status: open title: fcntl module for windows platform type: resource usage _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 11:42:33 2017 From: report at bugs.python.org (Florent Coriat) Date: Tue, 11 Apr 2017 15:42:33 +0000 Subject: [New-bugs-announce] [issue30044] shutil.copystat should (allow to) copy ownership, and other attributes Message-ID: <1491925353.8.0.782070431072.issue30044@psf.upfronthosting.co.za> New submission from Florent Coriat: shutil.copystat() copies permissions, timestamps and even flags and xattrs (if supported), but not ownership. Furthermore, the shutil.copy2() documentation until 2.7 used to say it behaves like cp -p, which preserves ownership, and not xattr nor flags. (On my system it silently fails to copy ownership when not root.) It may not be related, but the comments in the source code for the except NotImplementedError block concerning chmod mistakenly mention chown-related functions. I think copystat (and copy2) should at least provide an option to preserve ownership. I do not know if it currently preserves SELinux context and ACL, but if not, it may also allow it. It would be really useful for replication or backup applications to have a function that copies everything it can. ---------- components: Library (Lib) messages: 291499 nosy: noctiflore priority: normal severity: normal status: open title: shutil.copystat should (allow to) copy ownership, and other attributes type: enhancement versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 16:28:24 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 11 Apr 2017 20:28:24 +0000 Subject: [New-bugs-announce] [issue30045] Bad parameter name in re.escape() Message-ID: <1491942504.65.0.615945582018.issue30045@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently the re.escape() parameter is named "pattern", but in the documentation the name of the parameter is "string". The name "pattern" is not correct, and maybe even misleading. The argument of escape() is not a pattern; it is an arbitrary string, and escape() makes a pattern from it by escaping special characters. It is unlikely that the argument is passed to re.escape() by keyword. Therefore renaming it to "string" shouldn't break existing code.
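For illustration, a quick sketch of the keyword usage in question; only code written like the second call would be affected by the rename (this is just an example, not part of a patch):

    import re

    # positional use: unaffected by renaming the parameter
    print(re.escape("1+1"))
    # keyword use: works today, would have to become string="1+1" after the rename
    print(re.escape(pattern="1+1"))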
---------- components: Regular Expressions messages: 291514 nosy: ezio.melotti, mrabarnett, r.david.murray, serhiy.storchaka priority: normal severity: normal status: open title: Bad parameter name in re.escape() type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 16:41:55 2017 From: report at bugs.python.org (Thomas Lotze) Date: Tue, 11 Apr 2017 20:41:55 +0000 Subject: [New-bugs-announce] [issue30046] csv: Inconsistency re QUOTE_NONNUMERIC Message-ID: <1491943315.79.0.569209717832.issue30046@psf.upfronthosting.co.za> New submission from Thomas Lotze: A csv.writer with quoting=csv.QUOTE_NONNUMERIC does not quote boolean values, which makes a csv.reader with the same quoting behaviour fail on that value: -------- csv.py ---------- import csv import io f = io.StringIO() writer = csv.writer(f, quoting=csv.QUOTE_NONNUMERIC) writer.writerow(['asdf', 1, True]) f.seek(0) reader = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC) for row in reader: print(row) ---------------------- $ python3 csvbug.py Traceback (most recent call last): File "csvbug.py", line 12, in for row in reader: ValueError: could not convert string to float: 'True' ---------------------- I'd consider this inconsistency a bug, but in any case something that needs documenting. ---------- components: Library (Lib) messages: 291516 nosy: tlotze priority: normal severity: normal status: open title: csv: Inconsistency re QUOTE_NONNUMERIC type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 19:22:47 2017 From: report at bugs.python.org (OSAMU NAKAMURA) Date: Tue, 11 Apr 2017 23:22:47 +0000 Subject: [New-bugs-announce] [issue30047] Typos in Doc/library/select.rst Message-ID: <1491952967.06.0.143723768718.issue30047@psf.upfronthosting.co.za> New submission from OSAMU NAKAMURA: In 18.3.2. Edge and Level Trigger Polling (epoll) Objects, there is duplicated 'on' in description of `EPOLLEXCLUSIVE`. Wake only ... objects polling on on a fd. ^^^^^ ---------- assignee: docs at python components: Documentation messages: 291520 nosy: OSAMU.NAKAMURA, docs at python priority: normal severity: normal status: open title: Typos in Doc/library/select.rst versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 11 20:11:24 2017 From: report at bugs.python.org (Evgeny Kapun) Date: Wed, 12 Apr 2017 00:11:24 +0000 Subject: [New-bugs-announce] [issue30048] If a task is canceled at the right moment, the cancellation is ignored Message-ID: <1491955884.31.0.0746508313156.issue30048@psf.upfronthosting.co.za> New submission from Evgeny Kapun: If I run this code: import asyncio as a @a.coroutine def coro1(): yield from a.ensure_future(coro2()) print("Still here") yield from a.sleep(1) print("Still here 2") @a.coroutine def coro2(): yield from a.sleep(1) res = task.cancel() print("Canceled task:", res) loop = a.get_event_loop() task = a.ensure_future(coro1()) loop.run_until_complete(task) I expect the task to stop shortly after a call to cancel(). It should surely stop when I try to sleep(). But it doesn't. On my machine this prints: Canceled task: True Still here Still here 2 So, cancel() returns True, but the task doesn't seem to be canceled. 
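For contrast, a minimal sketch (deliberately simpler than the scenario above, and independent of it) of how cancellation is normally observed: the CancelledError is raised inside the task at its next suspension point:

    import asyncio as a

    @a.coroutine
    def worker():
        try:
            yield from a.sleep(10)
        except a.CancelledError:
            print("cancelled at the next suspension point")
            raise

    loop = a.get_event_loop()
    task = a.ensure_future(worker())
    loop.call_later(0.1, task.cancel)
    try:
        loop.run_until_complete(task)
    except a.CancelledError:
        pass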
---------- components: asyncio messages: 291522 nosy: abacabadabacaba, yselivanov priority: normal severity: normal status: open title: If a task is canceled at the right moment, the cancellation is ignored type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 04:39:23 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 12 Apr 2017 08:39:23 +0000 Subject: [New-bugs-announce] [issue30049] Don't cache tp_iternext Message-ID: <1491986363.23.0.567694316296.issue30049@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Some operations cache the value of the tp_iternext slot and call it in a loop. But calling tp_iternext can run arbitrary code, release GIL and change the tp_iternext slot in the same or other thread. This can leads to visible behavior difference, such as different iterated values, different raised exceptions or infinite loop. In the past this even could cause a crash (see issue3720), but seems now this is impossible. Examples (list constructor caches tp_iternext, tuple constructor doesn't): >>> def make_iter(): ... class Iterator: ... def __iter__(self): ... return self ... def __next__(self): ... del Iterator.__next__ ... return 1 ... return Iterator() ... >>> tuple(make_iter()) Traceback (most recent call last): File "", line 1, in TypeError: 'Iterator' object is not iterable >>> list(make_iter()) Traceback (most recent call last): File "", line 1, in AttributeError: __next__ >>> >>> def make_iter2(): ... it2 = iter((2,)) ... def subiter(): ... Iterator.__next__ = Iterator.next2 ... yield 1 ... class Iterator(filter): ... def next2(self): ... return next(it2) ... return Iterator(lambda x: True, subiter()) ... >>> tuple(make_iter2()) (1, 2) >>> list(make_iter2()) [1] The tp_iternext is cached for performance, and removing the caching can cause performance regression. But actually the difference is very small. I found a measurable difference (up to 5%) only in following artificial examples in which tp_iternext is called in very tight and long loops: $ ./python -m perf timeit --compare-to ./python-orig -s 'a = [0]*1000000+[1]' -- 'next(filter(None, a))'python-orig: ..................... 47.2 ms +- 0.2 ms python: ..................... 49.7 ms +- 0.3 ms Mean +- std dev: [python-orig] 47.2 ms +- 0.2 ms -> [python] 49.7 ms +- 0.3 ms: 1.05x slower (+5%) $ ./python -m perf timeit --compare-to ./python-orig -s 'from itertools import repeat, islice' -- 'next(islice(repeat(1), 1000000, None))' python-orig: ..................... 15.4 ms +- 0.1 ms python: ..................... 16.0 ms +- 0.2 ms Mean +- std dev: [python-orig] 15.4 ms +- 0.1 ms -> [python] 16.0 ms +- 0.2 ms: 1.04x slower (+4%) $ ./python -m perf timeit --compare-to ./python-orig -s 'from itertools import repeat; from collections import deque' -- 'deque(repeat(1, 1000000), 0)' python-orig: ..................... 14.2 ms +- 0.1 ms python: ..................... 14.8 ms +- 0.2 ms Mean +- std dev: [python-orig] 14.2 ms +- 0.1 ms -> [python] 14.8 ms +- 0.2 ms: 1.05x slower (+5%) In all other other cases, when involved creation of a collection (list, bytearray, deque (with maxlen != 0) constructors, ''.join), or calling other code (builtins all, max, map, itertools functions), or for shorter loops the difference is hardly distinguished from the random noise. 
$ ./python -m perf timeit --compare-to ./python-orig -s 'a = [0]*1000' -- 'list(iter(a))' python-orig: ..................... 31.8 us +- 0.3 us python: ..................... 31.8 us +- 0.4 us Mean +- std dev: [python-orig] 31.8 us +- 0.3 us -> [python] 31.8 us +- 0.4 us: 1.00x faster (-0%) Not significant! $ ./python -m perf timeit --compare-to ./python-orig -s 'a = [1]*1000' -- 'all(a)' python-orig: ..................... 47.4 us +- 0.2 us python: ..................... 48.0 us +- 0.3 us Mean +- std dev: [python-orig] 47.4 us +- 0.2 us -> [python] 48.0 us +- 0.3 us: 1.01x slower (+1%) $ ./python -m perf timeit --compare-to ./python-orig -s 'a = [1]*1000' -- 'max(a)' python-orig: ..................... 108 us +- 1 us python: ..................... 108 us +- 1 us Mean +- std dev: [python-orig] 108 us +- 1 us -> [python] 108 us +- 1 us: 1.00x faster (-0%) Not significant! $ ./python -m perf timeit --compare-to ./python-orig -s 'a = [0]*1000000+[1]' -- 'next(filter(lambda x: x, a))' python-orig: ..................... 527 ms +- 8 ms python: ..................... 528 ms +- 2 ms Mean +- std dev: [python-orig] 527 ms +- 8 ms -> [python] 528 ms +- 2 ms: 1.00x slower (+0%) Not significant! $ ./python -m perf timeit --compare-to ./python-orig -s 'from itertools import repeat, islice' -- 'next(islice(repeat(1), 100, None))' python-orig: ..................... 4.72 us +- 0.05 us python: ..................... 4.72 us +- 0.04 us Mean +- std dev: [python-orig] 4.72 us +- 0.05 us -> [python] 4.72 us +- 0.04 us: 1.00x faster (-0%) Not significant! $ ./python -m perf timeit --compare-to ./python-orig -s 'from itertools import repeat; from collections import deque' -- 'deque(repeat(1, 100), 0)' python-orig: ..................... 4.16 us +- 0.11 us python: ..................... 4.11 us +- 0.05 us Mean +- std dev: [python-orig] 4.16 us +- 0.11 us -> [python] 4.11 us +- 0.05 us: 1.01x faster (-1%) ---------- components: Extension Modules, Interpreter Core messages: 291530 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Don't cache tp_iternext type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 04:54:33 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Wed, 12 Apr 2017 08:54:33 +0000 Subject: [New-bugs-announce] [issue30050] Please provide a way to disable the warning printed if the signal module's wakeup fd overflows Message-ID: <1491987273.45.0.570112423464.issue30050@psf.upfronthosting.co.za> New submission from Nathaniel Smith: When a wakeup fd is registered via signal.set_wakeup_fd, then the C level signal handler writes a byte to the wakeup fd on each signal received. If this write fails, then it prints an error message to the console. Some projects use the wakeup fd as a way to see which signals have occurred (asyncio, curio). Others use it only as a toggle for "a wake up is needed", and transmit the actual data out-of-line (twisted, tornado, trio ? I guess it has something to do with the letter "t"). One way that writing to the wakeup fd can fail is if the pipe or socket's buffer is already full, in which case we get EWOULDBLOCK or WSAEWOULDBLOCK. For asyncio/curio, this is a problem: it indicates a lost signal! Printing to the console isn't a great solution, but it's better than letting the error pass silently. For twisted/tornado/trio, this is a normal and expected thing ? 
the semantics we want are that after a signal is received then the fd will be readable, and if its buffer is full then it's certainly readable! So for them, EWOULDBLOCK/WSAEWOULDBLOCK are *success* conditions. Yet currently, the signal module insists on printing a scary message to the console whenever we succeed in this way. It would be nice if there were a way to disable this; perhaps something like: signal.set_wakeup_fd(fd, warn_on_full_buffer=False) This is particularly annoying for trio, because I try to minimize the size of the wakeup fd's send buffer to avoid wasting non-swappable kernel memory on what's essentially an overgrown bool. This ends up meaning that on Linux the buffer is 6 bytes, and on MacOS it's 1 byte. So currently I don't use the wakeup fd on Linux/MacOS, which is *mostly* OK but it would be better if we could use it. Trio bug with a few more details: https://github.com/python-trio/trio/issues/109 ---------- messages: 291532 nosy: haypo, njs priority: normal severity: normal status: open title: Please provide a way to disable the warning printed if the signal module's wakeup fd overflows versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 05:24:49 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Apr 2017 09:24:49 +0000 Subject: [New-bugs-announce] [issue30051] Document that the random module doesn't support fork Message-ID: <1491989089.77.0.552733144958.issue30051@psf.upfronthosting.co.za> New submission from STINNER Victor: When reviewing the issue #30030, it reminded me that the random module "doesn't support fork": after fork, the parent and the child produce the same "random" number sequence. I suggest adding a quick note about that: not a warning, just a note, as a reminder. There is an exception: SystemRandom produces a different sequence after fork, since it uses os.urandom() (which runs in the kernel). I am tempted to propose a solution in the note like using SystemRandom, but I'm not sure that it's a good idea. Some users may misunderstand the note and always use SystemRandom where random.Random is just fine for their needs. Proposed note: The random module doesn't support fork: the parent and the child process will produce the same number sequence. Proposed solution: If your code uses os.fork(), a workaround is to check if os.getpid() changes and in that case, create a new Random instance or reseed the RNG in the child process. -- The tempfile module remembers the pid and instantiates a new RNG on fork. Another option is to not add a note, but implement the workaround directly into the random module. But I don't think that it's worth it. Forking is a rare use case, and calling os.getpid() may slow down the random module. (I don't recall if os.getpid() requires a syscall or not, but I'm quite sure that it's optimized at least on Linux to avoid a real syscall.)
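For illustration, a minimal sketch of the pid-checking workaround described above (the helper name is made up; this is only an example for the note, not a proposed patch):

    import os
    import random

    _pid = os.getpid()
    _rng = random.Random()

    def get_rng():
        # re-create the RNG if we are running in a forked child, so that
        # the parent and the child stop sharing the same sequence
        global _pid, _rng
        if os.getpid() != _pid:
            _pid = os.getpid()
            _rng = random.Random()
        return _rng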
---------- messages: 291534 nosy: haypo priority: normal severity: normal status: open title: Document that the random module doesn't support fork versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 08:49:34 2017 From: report at bugs.python.org (Cheryl Sabella) Date: Wed, 12 Apr 2017 12:49:34 +0000 Subject: [New-bugs-announce] [issue30052] URL Quoting page links to function Bytes instead of definition Message-ID: <1492001374.68.0.411275444037.issue30052@psf.upfronthosting.co.za> New submission from Cheryl Sabella: On the URL Quoting page, the following line: `string may be either a str or a bytes.` has the `str` link to: https://docs.python.org/3/library/stdtypes.html#str but the `bytes` link to: https://docs.python.org/3/library/functions.html#bytes Should the `bytes` instead link to the following? https://docs.python.org/3/library/stdtypes.html#bytes ---------- assignee: docs at python components: Documentation messages: 291546 nosy: csabella, docs at python priority: normal severity: normal status: open title: URL Quoting page links to function Bytes instead of definition versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 08:50:46 2017 From: report at bugs.python.org (Brecht Machiels) Date: Wed, 12 Apr 2017 12:50:46 +0000 Subject: [New-bugs-announce] [issue30053] Problems building with --enable-profiling on macOS Message-ID: <1492001446.82.0.198717177169.issue30053@psf.upfronthosting.co.za> New submission from Brecht Machiels: The python.exe produced during the build process is somehow broken: $ ./python.exe -S Killed: 9 Strangely, it works when run from gdb: $ gdb -args ./python.exe -S GNU gdb (GDB) 7.12.1 Copyright (C) 2017 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-apple-darwin16.3.0". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from ./python.exe...done. (gdb) start Temporary breakpoint 1 at 0x10000109c: file ../Programs/python.c, line 28. Starting program: /Users/brechtm/Documents/Code/cpython/profile/python.exe -S [New Thread 0x1403 of process 62753] warning: unhandled dyld version (15) Thread 2 hit Temporary breakpoint 1, main (argc=2, argv=0x7fff5bfff460) at ../Programs/python.c:28 28 (void)_PyMem_SetupAllocators("malloc"); (gdb) c Continuing. Could not find platform dependent libraries Consider setting $PYTHONHOME to [:] Python 3.7.0a0 (heads/master:3e0f1fc4e0, Apr 12 2017, 14:39:47) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.41)] on darwin Module readline not available. >>> ^D [Inferior 1 (process 62753) exited normally] (gdb) q I'm running macOS Sierra 10.12.4 (16E195) and XCode 8.3.1.
$ gcc --version Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 8.1.0 (clang-802.0.41) Target: x86_64-apple-darwin16.5.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin Forcing the use of the Homebrew-provided GCC by invoking "make CC=gcc-6 CXX=g++-6" has the same result. $ gcc-6 --version gcc-6 (Homebrew GCC 6.3.0_1) 6.3.0 Copyright (C) 2016 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. The build is fine when --enable-profiling is not specified. ---------- components: macOS files: configure.txt messages: 291547 nosy: brechtm, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Problems building with --enable-profiling on macOS type: compile error versions: Python 3.7 Added file: http://bugs.python.org/file46798/configure.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 09:42:25 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Apr 2017 13:42:25 +0000 Subject: [New-bugs-announce] [issue30054] Expose tracemalloc C API to track/untrack memory blocks Message-ID: <1492004545.36.0.222275200226.issue30054@psf.upfronthosting.co.za> New submission from STINNER Victor: The issue #26530 added a private C API to manually track/untrack memory blocks in tracemalloc. It was just validated by Julian Taylor, who confirms that the API works as expected: http://bugs.python.org/issue26530#msg291551 So I propose to make the 3 newly added functions public and document them: * _PyTraceMalloc_Track() * _PyTraceMalloc_Untrack() * _PyTraceMalloc_GetTraceback() ---------- messages: 291554 nosy: haypo, jtaylor priority: normal severity: normal status: open title: Expose tracemalloc C API to track/untrack memory blocks type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 10:59:31 2017 From: report at bugs.python.org (Marco Buttu) Date: Wed, 12 Apr 2017 14:59:31 +0000 Subject: [New-bugs-announce] [issue30055] Missed testcleanup in decimal.rst Message-ID: <1492009171.7.0.643211980228.issue30055@psf.upfronthosting.co.za> New submission from Marco Buttu: The testsetup in Doc/library/decimal.rst is not enough for isolating the tests with respect to the other rst files. Currently we have the following testsetup, without a testcleanup: .. testsetup:: * import decimal import math from decimal import * # make sure each group gets a fresh context setcontext(Context()) Without a testcleanup, the changes on the context will affect the other files that use the context (like Doc/library/statistics.rst). We should better isolate the tests by also adding a testcleanup: .. testcleanup:: * # make sure other tests (outside this file) get a fresh context setcontext(Context()) I am opening a PR.
---------- assignee: docs at python components: Documentation messages: 291559 nosy: docs at python, marco.buttu, skrah priority: normal severity: normal status: open title: Missed testcleanup in decimal.rst type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 11:21:34 2017 From: report at bugs.python.org (Amy) Date: Wed, 12 Apr 2017 15:21:34 +0000 Subject: [New-bugs-announce] [issue30056] RuntimeWarning: invalid value encountered in maximum/minimum Message-ID: <1492010494.99.0.888369337557.issue30056@psf.upfronthosting.co.za> New submission from Amy: I just updated to numpy 1.12.1 and am getting this Runtime Warning when using numpy.minimum or numpy.maximum: RuntimeWarning: invalid value encountered in maximum Prior to updating, I was using numpy 1.10.x and had no issues running numpy.minimum or numpy.maximum ---------- messages: 291560 nosy: aching priority: normal severity: normal status: open title: RuntimeWarning: invalid value encountered in maximum/minimum type: compile error versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 11:52:29 2017 From: report at bugs.python.org (Jeroen Demeyer) Date: Wed, 12 Apr 2017 15:52:29 +0000 Subject: [New-bugs-announce] [issue30057] signal.signal should check tripped signals Message-ID: <1492012349.83.0.519367248904.issue30057@psf.upfronthosting.co.za> New submission from Jeroen Demeyer: There is a race condition in calling signal.signal() if the signal arrives while (or right before) signal.signal() is being executed: the function signal.signal(sig, action) marks the signal "sig" as not tripped. Because of this, signals can get lost. Instead, it would be better to call PyErr_CheckSignals() to check for pending signals. ---------- components: Interpreter Core messages: 291561 nosy: jdemeyer priority: normal severity: normal status: open title: signal.signal should check tripped signals versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 13:13:38 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 12 Apr 2017 17:13:38 +0000 Subject: [New-bugs-announce] [issue30058] Buffer overflow in kqueue.control() Message-ID: <1492017218.53.0.767201958064.issue30058@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The first parameter of kqueue.control() is documented as an iterable. But actually it should have a length. kqueue.control() uses PyObject_Size() for allocating an array and PyObject_GetIter()+PyIter_Next() for iterating kevent objects and filling the array. If the length and the iterator are not consistent this can lead to writing past the end of the array. 
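For illustration, a sketch of the kind of object that triggers the mismatch; it only demonstrates an inconsistent sequence and does not call kqueue.control() itself -- the actual out-of-bounds write happens in the C code that trusts the reported length:

    class LyingSequence:
        """__len__ reports fewer items than iteration actually yields."""
        def __init__(self, items):
            self._items = items
        def __len__(self):
            return 1                      # the C array would be sized from this
        def __iter__(self):
            return iter(self._items)      # ...but more items come out here

    seq = LyingSequence([1, 2, 3])
    print(len(seq), list(seq))            # length says 1, iteration yields 3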
---------- components: Extension Modules, FreeBSD messages: 291563 nosy: koobs, serhiy.storchaka priority: normal severity: normal status: open title: Buffer overflow in kqueue.control() type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 13:25:17 2017 From: report at bugs.python.org (Michael Seifert) Date: Wed, 12 Apr 2017 17:25:17 +0000 Subject: [New-bugs-announce] [issue30059] No documentation for C type Py_Ellipsis Message-ID: <1492017917.83.0.872522904102.issue30059@psf.upfronthosting.co.za> New submission from Michael Seifert: The "Py_Ellipsis" object is part of the public C-API but it isn't documented anywhere. It is defined in "sliceobject.o/.h" so I created a PR and added it to the "slice" documentation. ---------- assignee: docs at python components: Documentation messages: 291567 nosy: MSeifert, docs at python priority: normal pull_requests: 1238 severity: normal status: open title: No documentation for C type Py_Ellipsis type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 14:01:11 2017 From: report at bugs.python.org (Stephen Kelly) Date: Wed, 12 Apr 2017 18:01:11 +0000 Subject: [New-bugs-announce] [issue30060] Crash on Py_Finalize if Py_NoSiteFlag is used Message-ID: <1492020071.31.0.759008537591.issue30060@psf.upfronthosting.co.za> New submission from Stephen Kelly: When attempting to use PyImport_ImportModule("os") (or to import many other libraries), there is a crash on Py_Finalize if Py_NoSiteFlag is set. The issue appears to be the use of frozenset() as a result of importing the module. I reproduced this on Windows after building 2.7.13 with VS 2015 by applying the following patch: Python changes. --- Include\\fileobject.h +++ Include\\fileobject.h @@ -70,7 +70,7 @@ */ int _PyFile_SanitizeMode(char *mode); -#if defined _MSC_VER && _MSC_VER >= 1400 +#if defined _MSC_VER && _MSC_VER >= 1400 && _MSC_VER < 1900 /* A routine to check if a file descriptor is valid on Windows. Returns 0 * and sets errno to EBADF if it isn't. This is to avoid Assertions * from various functions in the Windows CRT beginning with --- Modules\\posixmodule.c +++ Modules\\posixmodule.c @@ -529,7 +529,7 @@ #endif -#if defined _MSC_VER && _MSC_VER >= 1400 +#if defined _MSC_VER && _MSC_VER >= 1400 && _MSC_VER < 1900 /* Microsoft CRT in VS2005 and higher will verify that a filehandle is * valid and raise an assertion if it isn't. * Normally, an invalid fd is likely to be a C program error and therefore --- Modules\\timemodule.c +++ Modules\\timemodule.c @@ -68,6 +70,9 @@ #if defined(MS_WINDOWS) && !defined(__BORLANDC__) /* Win32 has better clock replacement; we have our own version below. 
*/ #undef HAVE_CLOCK +#define timezone _timezone +#define tzname _tzname +#define daylight _daylight #endif /* MS_WINDOWS && !defined(__BORLANDC__) */ #if defined(PYOS_OS2) Backtrace: KernelBase.dll!00007ff963466142() Unknown > python27_d.dll!Py_FatalError(const char * msg) Line 1700 C python27_d.dll!PyThreadState_Get() Line 332 C python27_d.dll!set_dealloc(_setobject * so) Line 553 C python27_d.dll!_Py_Dealloc(_object * op) Line 2263 C python27_d.dll!PySet_Fini() Line 1084 C python27_d.dll!Py_Finalize() Line 526 C mn.exe!main(int argc, char * * argv) Line 40 C [External Code] Reproducing code: #include int main(int argc, char** argv) { // http://www.awasu.com/weblog/embedding-python/threads // #### Comment this to avoid crash Py_NoSiteFlag = 1; Py_Initialize(); PyEval_InitThreads(); // nb: creates and locks the GIL // NOTE: We save the current thread state, and restore it when we unload, // so that we can clean up properly. PyThreadState* pMainThreadState = PyEval_SaveThread(); // nb: this also releases the GIL PyEval_AcquireLock(); // nb: get the GIL PyThreadState* pThreadState = Py_NewInterpreter(); assert(pThreadState != NULL); PyEval_ReleaseThread(pThreadState); // nb: this also releases the GIL PyEval_AcquireThread(pThreadState); // Can reproduce by importing the os module, but the issue actually appears // because of the use of frozenset, so simplify to that. #if 0 PyObject* osModule = PyImport_ImportModule("os"); Py_DECREF(osModule); #endif // As in abc.py ABCMeta class PyRun_SimpleString("abstractmethods = frozenset(set())"); PyEval_ReleaseThread(pThreadState); // release the interpreter PyEval_AcquireThread(pThreadState); // nb: this also locks the GIL Py_EndInterpreter(pThreadState); PyEval_ReleaseLock(); // nb: release the GIL // clean up PyEval_RestoreThread(pMainThreadState); // nb: this also locks the GIL Py_Finalize(); } ---------- components: Interpreter Core messages: 291568 nosy: steveire priority: normal severity: normal status: open title: Crash on Py_Finalize if Py_NoSiteFlag is used type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 16:58:52 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 12 Apr 2017 20:58:52 +0000 Subject: [New-bugs-announce] [issue30061] Check if PyObject_Size() raised an error Message-ID: <1492030732.95.0.492946948296.issue30061@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: PyObject_Size(), PySequence_Size() and PyMapping_Size() can raise an exception. But not always this is checked after using them. This can lead to a crash. For example: >>> import io >>> class R(io.IOBase): ... def readline(self): return None ... 
>>> next(R()) Fatal Python error: a function returned a result with an error set TypeError: object of type 'NoneType' has no len() The above exception was the direct cause of the following exception: SystemError: returned a result with an error set Current thread 0xb749c700 (most recent call first): File "", line 1 in ---------- components: Extension Modules, IO, Interpreter Core messages: 291573 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Check if PyObject_Size() raised an error type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 17:43:42 2017 From: report at bugs.python.org (Adam Williamson) Date: Wed, 12 Apr 2017 21:43:42 +0000 Subject: [New-bugs-announce] [issue30062] datetime in Python 3.6+ no longer respects 'TZ' environment variable Message-ID: <1492033422.46.0.101980038958.issue30062@psf.upfronthosting.co.za> New submission from Adam Williamson: I can't figure out yet why this is, but it's very easy to demonstrate: [adamw at adam anaconda (time-log %)]$ python35 Python 3.5.2 (default, Feb 11 2017, 18:09:24) [GCC 7.0.1 20170209 (Red Hat 7.0.1-0.7)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> import datetime >>> os.environ['TZ'] = 'America/Winnipeg' >>> datetime.datetime.fromtimestamp(0) datetime.datetime(1969, 12, 31, 18, 0) >>> os.environ['TZ'] = 'Europe/London' >>> datetime.datetime.fromtimestamp(0) datetime.datetime(1970, 1, 1, 1, 0) >>> [adamw at adam anaconda (time-log %)]$ python3 Python 3.6.0 (default, Mar 21 2017, 17:30:34) [GCC 7.0.1 20170225 (Red Hat 7.0.1-0.10)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> import datetime >>> os.environ['TZ'] = 'America/Winnipeg' >>> datetime.datetime.fromtimestamp(0) datetime.datetime(1969, 12, 31, 16, 0) >>> os.environ['TZ'] = 'Europe/London' >>> datetime.datetime.fromtimestamp(0) datetime.datetime(1969, 12, 31, 16, 0) >>> That is, when deciding what timezone to use for operations that involve one, if the 'TZ' environment variable was set, Python 3.5 would use the timezone it was set to. Python 3.6 does not, it ignores it. As you can see, if I twiddle the 'TZ' setting and call `datetime.datetime.fromtimestamp(0)` repeatedly under Python 3.5, I get different results - each one is the wall clock time at the epoch (timestamp 0) in the timezone specified as 'TZ'. If I do the same on Python 3.6, the 'TZ' setting is ignored and I always get the same result (the wall clock time of 'the epoch' in Vancouver, which is my real timezone, and which I guess is being picked up from /etc/localtime or whatever). This wound up causing a problem in the Fedora / Red Hat installer, anaconda: https://bugzilla.redhat.com/show_bug.cgi?id=1433560 The 'current time zone' can be changed in anaconda. Shortly after it starts up, it automatically tries to guess the correct time zone via geolocation, and the user can also explicitly choose a timezone in the installer interface (or set one in a kickstart). Whenever the timezone is set in this way, an underlying library (libtimezonemap - https://launchpad.net/timezonemap) sets 'TZ' to the chosen timezone. It turns out other code in anaconda relies on Python respecting that setting, which Python 3.6 does not do. As a consequence, anaconda with Python 3.6 winds up setting the system time incorrectly. 
Also, the timestamps on all its log files are different now, and there may well be other consequences I didn't figure out yet. The same applies to, e.g., `datetime.datetime.now()`: you can perform the same experiment with Python 3.5 and 3.6. If you change the 'TZ' env var while calling `datetime.datetime.now()` after each change, on Python 3.5, the naive datetime object it returns is the current time *in that timezone*. On Python 3.6, regardless of what 'TZ' is set to, it always gives you the same time. Is this an intended and/or desired change that we should adjust to somehow? Is there another way a running Python process can change what "the current" timezone is, for the purposes of datetime calculations like this? ---------- components: Library (Lib) messages: 291574 nosy: adamwill priority: normal severity: normal status: open title: datetime in Python 3.6+ no longer respects 'TZ' environment variable versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 12 22:40:26 2017 From: report at bugs.python.org (Paul Durack) Date: Thu, 13 Apr 2017 02:40:26 +0000 Subject: [New-bugs-announce] [issue30063] DeprecationWarning in json/encoder.py Message-ID: <1492051226.05.0.41504745507.issue30063@psf.upfronthosting.co.za> New submission from Paul Durack: I have started receiving the following warnings which are starting to prevent an ipython session from functioning: /home/user/anaconda2/envs/cdatcmornclnco/lib/python2.7/json/encoder.py:207: DeprecationWarning: Interpreting naive datetime as local 2017-04-12 17:15:36.235571. Please add timezone info to timestamps. chunks = self.iterencode(o, _one_shot=True) /home/user/anaconda2/envs/cdatcmornclnco/lib/python2.7/json/encoder.py:207: DeprecationWarning: Interpreting naive datetime as local 2017-04-12 17:15:36.267401. Please add timezone info to timestamps. chunks = self.iterencode(o, _one_shot=True) The only way I can continue is to terminate the ipython shell and open a new instance. Can someone tell me what I need to do to solve the issue? Is there a json import somewhere that requires some new arguments? 
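Not an answer to where the warning comes from (the warning text does not appear in the standard library's json module itself, so it is presumably raised by whatever serializer hook is converting the datetime values), but for reference, a minimal sketch of what "add timezone info to timestamps" means for a datetime value. Python 3 syntax is shown; on 2.7 a pytz timezone or a custom tzinfo subclass plays the same role as datetime.timezone:

    from datetime import datetime, timezone

    naive = datetime.now()               # tzinfo is None: "naive", triggers such warnings
    aware = datetime.now(timezone.utc)   # tzinfo attached: "aware"
    print(naive.tzinfo, aware.tzinfo)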
---------- components: Library (Lib) messages: 291582 nosy: Paul Durack priority: normal severity: normal status: open title: DeprecationWarning in json/encoder.py versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 13 04:55:09 2017 From: report at bugs.python.org (Evgeny Kapun) Date: Thu, 13 Apr 2017 08:55:09 +0000 Subject: [New-bugs-announce] [issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled Message-ID: <1492073709.93.0.33740565749.issue30064@psf.upfronthosting.co.za> New submission from Evgeny Kapun: Code: import asyncio as a import socket as s @a.coroutine def coro(): s1, s2 = s.socketpair() s1.setblocking(False) s2.setblocking(False) try: yield from a.wait_for(loop.sock_recv(s2, 1), 1) except a.TimeoutError: pass yield from loop.sock_sendall(s1, b'\x00') yield s1.close() s2.close() loop = a.get_event_loop() loop.run_until_complete(coro()) Result: Exception in callback BaseSelectorEventLoop._sock_recv(, True, , 1) handle: , True, , 1)> Traceback (most recent call last): File "/usr/lib/python3.6/asyncio/events.py", line 127, in _run self._callback(*self._args) File "/usr/lib/python3.6/asyncio/selector_events.py", line 378, in _sock_recv self.remove_reader(fd) File "/usr/lib/python3.6/asyncio/selector_events.py", line 342, in remove_reader return self._remove_reader(fd) File "/usr/lib/python3.6/asyncio/selector_events.py", line 279, in _remove_reader key = self._selector.get_key(fd) File "/usr/lib/python3.6/selectors.py", line 189, in get_key return mapping[fileobj] File "/usr/lib/python3.6/selectors.py", line 70, in __getitem__ fd = self._selector._fileobj_lookup(fileobj) File "/usr/lib/python3.6/selectors.py", line 224, in _fileobj_lookup return _fileobj_to_fd(fileobj) File "/usr/lib/python3.6/selectors.py", line 41, in _fileobj_to_fd raise ValueError("Invalid file descriptor: {}".format(fd)) ValueError: Invalid file descriptor: -1 ---------- components: asyncio messages: 291593 nosy: abacabadabacaba, yselivanov priority: normal severity: normal status: open title: BaseSelectorEventLoop.sock_{recv,sendall}() don't remove their callbacks when canceled type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 13 06:14:01 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 13 Apr 2017 10:14:01 +0000 Subject: [New-bugs-announce] [issue30065] Insufficient validation in _posixsubprocess.fork_exec() Message-ID: <1492078441.09.0.999748140134.issue30065@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: _posixsubprocess.fork_exec() takes a sequence of file descriptors. It first validates it, and since the validation is passed uses it without checking for errors. But since __len__, __getitem__ and __int__ can execute user code and release GIL, errors can occur after the validation. This can cause a crash. Proposed patch fixes this by the simplest way -- it restricts the type of a sequence to tuple and types of elements to int. Since _posixsubprocess is private module this shouldn't break third-party code. Other issue with _posixsubprocess.fork_exec() was that it converts args to a tuple or a list and iterate it without checking if the size is changed. 
---------- components: Extension Modules messages: 291595 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Insufficient validation in _posixsubprocess.fork_exec() type: crash versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 13 09:09:16 2017 From: report at bugs.python.org (leopoldo) Date: Thu, 13 Apr 2017 13:09:16 +0000 Subject: [New-bugs-announce] [issue30066] anaconda3::traitsu::NotImplementedError Message-ID: <1492088956.84.0.603775450991.issue30066@psf.upfronthosting.co.za> New submission from leopoldo: the code below crash when run on anaconda3 import traits.api as trapi import traitsui.api as trui from traits.api import HasTraits, Str, Range, Enum class Person(HasTraits): name = Str('Jane Doe') age = Range(low=0) gender = Enum('female', 'male') person = Person(age=30) from traitsui.api import Item, RangeEditor, View person_view = View( Item('name'), Item('gender'), Item('age', editor=RangeEditor(mode='spinner')), buttons=['OK', 'Cancel'], resizable=True, ) person.configure_traits(view=person_view) --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) in () ----> 1 person.configure_traits(view=person_view) /home/leopoldo/anaconda3/lib/python3.6/site-packages/traits/has_traits.py in configure_traits(self, filename, view, kind, edit, context, handler, id, scrollable, **args) 2169 context = self 2170 rc = toolkit().view_application( context, self.trait_view( view ), -> 2171 kind, handler, id, scrollable, args ) 2172 if rc and (filename is not None): 2173 fd = None /home/leopoldo/anaconda3/lib/python3.6/site-packages/traitsui/toolkit.py in view_application(self, context, view, kind, handler, id, scrollable, args) 289 290 """ --> 291 raise NotImplementedError 292 293 #--------------------------------------------------------------------------- NotImplementedError: ---------- messages: 291611 nosy: leopoldotosi priority: normal severity: normal status: open title: anaconda3::traitsu::NotImplementedError type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 13 16:04:13 2017 From: report at bugs.python.org (Jakub Wilk) Date: Thu, 13 Apr 2017 20:04:13 +0000 Subject: [New-bugs-announce] [issue30067] _osx_support.py: misplaced flags in re.sub() Message-ID: <1492113853.22.0.518696650539.issue30067@psf.upfronthosting.co.za> New submission from Jakub Wilk: Lib/_osx_support.py contains the following line: flags = re.sub(r'-arch\s+\w+\s', ' ', flags, re.ASCII) But the 4th re.sub() argument is the maximum number of substitutions, so this is equivalent to: flags = re.sub(r'-arch\s+\w+\s', ' ', flags, count=256) It was probably meant to be: flags = re.sub(r'-arch\s+\w+\s', ' ', flags, flags=re.ASCII) This bug was found using pydiatra: http://jwilk.net/software/pydiatra ---------- components: Library (Lib) messages: 291631 nosy: jwilk priority: normal severity: normal status: open title: _osx_support.py: misplaced flags in re.sub() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 14 02:45:23 2017 From: report at bugs.python.org (Xiang Zhang) Date: Fri, 14 Apr 2017 06:45:23 +0000 Subject: [New-bugs-announce] [issue30068] missing iter(self) in 
_io._IOBase.readlines
Message-ID: <1492152323.97.0.555088221985.issue30068@psf.upfronthosting.co.za>

New submission from Xiang Zhang:

_io._IOBase.readlines calls PyIter_Next(self) directly. But iter(_io._IOBase) does more work than simply returning self:

>>> import _io
>>> f = _io._IOBase()
>>> f.close()
>>> f.readlines()
Traceback (most recent call last):
  File "", line 1, in
ValueError: I/O operation on closed file.
>>> f.readlines(10)
Traceback (most recent call last):
  File "", line 1, in
AttributeError: '_io._IOBase' object has no attribute 'read'

---------- components: IO messages: 291641 nosy: xiang.zhang priority: normal severity: normal stage: patch review status: open title: missing iter(self) in _io._IOBase.readlines type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 14 06:09:49 2017 From: report at bugs.python.org (sylgar) Date: Fri, 14 Apr 2017 10:09:49 +0000 Subject: [New-bugs-announce] [issue30069] External library behave differently when loaded by Python (maybe thread issue) Message-ID: <1492164589.43.0.483719892275.issue30069@psf.upfronthosting.co.za>

New submission from sylgar:

Hello. I have libsylvain.so, which has a single function (sylvain_display()) that displays a hello-world window through the Qt libraries. I also have a program, `sylvain', a one-line program that calls sylvain_display from libsylvain.so. When I run `sylvain', I get some debug output from Qt on the console, and the hello-world window is displayed on my TV (I am running this on a Raspberry Pi with FreeBSD).

Now, when I run python2.7 or python3.6, load libsylvain.so either through the ctypes module or through a C extension module, and call the sylvain_display() function from Python, I get the same Qt console output as with the `sylvain' program, but the window never appears on my TV! I guess the rendering to the TV is done on a separate thread and that thread is never run, or is blocked?

I came up with this simple test program to isolate the issue because both PyQt and PySide no longer work as expected: I get the console log but no actual TV output. This used to work a year ago; I don't know what has changed in Python. I hope you can figure this out.
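For reference, the ctypes side of such a reproducer boils down to something like the sketch below; the reporter's actual scripts are linked right after it, and the library path and signature here are assumptions.

    from ctypes import CDLL

    lib = CDLL("./libsylvain.so")        # assumed path to the reporter's library
    lib.sylvain_display.restype = None   # assumed signature: void sylvain_display(void)
    lib.sylvain_display()                # per the report: console output appears, the window does not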
libsylvain.cpp: http://dev.sylvaingarrigues.com/libsylvain.cpp
sylvain.cpp (test program for libsylvain): http://dev.sylvaingarrigues.com/sylvain.cpp
sylvain-ctypes.py (loads libsylvain): http://dev.sylvaingarrigues.com/sylvain-ctypes.py

Also tried with a C extension to rule out a ctypes bug; same issue:

sylvainmodule.cpp (python module wrapper): http://dev.sylvaingarrigues.com/sylvainmodule.cpp
sylvain-cext.py (loads sylvainmodule and calls sylvain_display): http://dev.sylvaingarrigues.com/sylvain-cext.py
build script: http://dev.sylvaingarrigues.com/build.sh

---------- components: Extension Modules, FreeBSD, ctypes messages: 291644 nosy: koobs, sylgar priority: normal severity: normal status: open title: External library behave differently when loaded by Python (maybe thread issue) versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 14 07:07:11 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 14 Apr 2017 11:07:11 +0000 Subject: [New-bugs-announce] [issue30070] Fix errors handling in the parser module Message-ID: <1492168031.64.0.372265650167.issue30070@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka:

The proposed patch fixes miscellaneous errors in error handling in the parser module. These errors can cause leaked references, wrong exceptions being raised, and even crashes.

---------- components: Extension Modules messages: 291645 nosy: benjamin.peterson, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Fix errors handling in the parser module type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 14 07:34:54 2017 From: report at bugs.python.org (Jeroen Demeyer) Date: Fri, 14 Apr 2017 11:34:54 +0000 Subject: [New-bugs-announce] [issue30071] Duck-typing inspect.isfunction() Message-ID: <1492169694.13.0.728047753298.issue30071@psf.upfronthosting.co.za>

New submission from Jeroen Demeyer:

Python is supposed to encourage duck-typing, but the "inspect" module doesn't follow this advice. A particular problem is that Cython functions are not recognized by the inspect module as functions: http://cython.readthedocs.io/en/latest/src/userguide/limitations.html#inspect-support

---------- components: Library (Lib) messages: 291647 nosy: jdemeyer, scoder priority: normal severity: normal status: open title: Duck-typing inspect.isfunction() versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 14 08:36:04 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 14 Apr 2017 12:36:04 +0000 Subject: [New-bugs-announce] [issue30072] re function: convert count and flags parameters to keyword-only? Message-ID: <1492173364.71.0.473594285494.issue30072@psf.upfronthosting.co.za>

New submission from STINNER Victor:

The re API is commonly misused. Example: passing a re flag to re.sub():

>>> re.sub("A", "B", "ahah", re.I)
'ahah'

No error, no warning, but it doesn't work. Oh, sub() has 5 parameters, not 4... I suggest converting count and flags to keyword-only parameters.
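To make the failure mode concrete: re.IGNORECASE has the integer value 2, so a flag passed positionally is silently consumed as the count argument while flags stays 0.

    >>> import re
    >>> re.sub("A", "B", "banana", re.I)        # re.I == 2 is taken as count=2
    'banana'
    >>> re.sub("A", "B", "banana", flags=re.I)  # what was intended
    'bBnBnB'
    >>> re.sub("A", "B", "banana", 0, re.I)     # flags passed positionally, on purpose
    'bBnBnB'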
To not break the world, especially legit code passing the count parameter as a position argument, an option is to have a deprecation period if these two parameters are passed a positional-only parameter. -- Another option would be to rely on the fact that re flags are now enums instead of raw integers, and so add basic type check... Is there are risk of applications using re flags serialized by pickle from Pyhon < 3.6 and so getting integers? Maybe the check should only be done if flags are passing as positional-only argument... but the implementation of such check seems may be overkill for such simple and performance-critical function, no? See issue #30067 for a recent bug in the Python stdlib! ---------- components: Library (Lib) messages: 291650 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: re function: convert count and flags parameters to keyword-only? versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 14 10:18:56 2017 From: report at bugs.python.org (Julian Taylor) Date: Fri, 14 Apr 2017 14:18:56 +0000 Subject: [New-bugs-announce] [issue30073] binary compressed file reading corrupts newlines (lzma, gzip, bz2) Message-ID: <1492179536.71.0.785460457662.issue30073@psf.upfronthosting.co.za> New submission from Julian Taylor: Probably a case of 'don't do that' but reading lines in a compressed files in binary mode produces bytes with invalid newlines in encodings that where '\n' is encoded as something else: with lzma.open("test.xz", "wt", encoding="UTF-32-LE") as f: f.write('0 1 2\n3 4 5'); lzma.open("test.xz", "rb").readlines()[0].decode('UTF-32-LE') Fails with: UnicodeDecodeError: 'utf-32-le' codec can't decode byte 0x0a in position 20: truncated data as readlines() produces: b'0\x00\x00\x00 \x00\x00\x001\x00\x00\x00 \x00\x00\x002\x00\x00\x00\n' The last newline should be '\n'.encode('UTF-32-LE') == b'\n\x00\x00\x00' ---------- components: Library (Lib) messages: 291661 nosy: jtaylor priority: normal severity: normal status: open title: binary compressed file reading corrupts newlines (lzma, gzip, bz2) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 03:46:43 2017 From: report at bugs.python.org (Xiang Zhang) Date: Sat, 15 Apr 2017 07:46:43 +0000 Subject: [New-bugs-announce] [issue30074] compile warnings of _PySlice_Unpack in 2.7 Message-ID: <1492242403.66.0.870885326202.issue30074@psf.upfronthosting.co.za> New submission from Xiang Zhang: Compile 2.7 now get many warning about _PySlice_Unpack, not in 3.x. See an example: http://buildbot.python.org/all/builders/x86%20Ubuntu%20Shared%202.7/builds/109/steps/compile/logs/warnings%20%2822%29. 
---------- messages: 291708 nosy: serhiy.storchaka, xiang.zhang priority: normal severity: normal status: open title: compile warnings of _PySlice_Unpack in 2.7 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 12:21:00 2017 From: report at bugs.python.org (Tithen Firion) Date: Sat, 15 Apr 2017 16:21:00 +0000 Subject: [New-bugs-announce] [issue30075] Printing ANSI Escape Sequences on Windows 10 Message-ID: <1492273260.15.0.200357892895.issue30075@psf.upfronthosting.co.za> New submission from Tithen Firion: Windows 10 supports ANSI Escape Sequences ( http://stackoverflow.com/a/38617204/2428152 https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx ) but Python just prints escape character. Adding `subprocess.call('', shell=True)` before printing solved the issue. Test code: import subprocess print('\033[0;31mTEST\033[0m') subprocess.call('', shell=True) print('\033[0;31mTEST\033[0m') output in attachment. Haven't tested it on other Python versions but it probably occurs on them too. ---------- components: IO files: example.png messages: 291719 nosy: Tithen Firion priority: normal severity: normal status: open title: Printing ANSI Escape Sequences on Windows 10 type: behavior versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file46805/example.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 13:27:19 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 15 Apr 2017 17:27:19 +0000 Subject: [New-bugs-announce] [issue30076] Opcode names BUILD_MAP_UNPACK_WITH_CALL and BUILD_TUPLE_UNPACK_WITH_CALL are too long Message-ID: <1492277239.5.0.348724747458.issue30076@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Two new opcodes BUILD_MAP_UNPACK_WITH_CALL and BUILD_TUPLE_UNPACK_WITH_CALL (added in 3.5 and 3.6) have too long names (26 and 28 characters). This exceeds the width of the opcode names in the dis module (20 characters). Increasing the width of the column will make opcode arguments too distant from opcode names, that will decrease readability. The better solution would be renaming these two opcodes. They are used for merging iterables and mappings of positional and keyword arguments when used var-positional (*args) and var-keyword (**kwargs) arguments. Maybe new names should reflect this. 
>>> dis.dis('f(a, b, *args, x=x, y=y, **kw)') 1 0 LOAD_NAME 0 (f) 2 LOAD_NAME 1 (a) 4 LOAD_NAME 2 (b) 6 BUILD_TUPLE 2 8 LOAD_NAME 3 (args) 10 BUILD_TUPLE_UNPACK_WITH_CALL 2 12 LOAD_NAME 4 (x) 14 LOAD_NAME 5 (y) 16 LOAD_CONST 0 (('x', 'y')) 18 BUILD_CONST_KEY_MAP 2 20 LOAD_NAME 6 (kw) 22 BUILD_MAP_UNPACK_WITH_CALL 2 24 CALL_FUNCTION_EX 1 26 RETURN_VALUE ---------- messages: 291725 nosy: ncoghlan, serhiy.storchaka priority: normal severity: normal status: open title: Opcode names BUILD_MAP_UNPACK_WITH_CALL and BUILD_TUPLE_UNPACK_WITH_CALL are too long type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 14:44:39 2017 From: report at bugs.python.org (Toby Thurston) Date: Sat, 15 Apr 2017 18:44:39 +0000 Subject: [New-bugs-announce] [issue30077] Support Apple AIFF-C pseudo compression in aifc.py Message-ID: <1492281879.16.0.0196863971193.issue30077@psf.upfronthosting.co.za> New submission from Toby Thurston: aifc.py fails to open AIFF files containing the compression type "sowt" in the COMM chunk with an "unsupported compression type" error. This compression type is an Apple specific extension that signals that the data is not actually compressed but is stored uncompressed in little Endian order. Supporting it would require a trivial change to allow the compression type as a byte-string and to add a do-nothing _convert routine. This would allow aifc.py to be used with AIFF files on Apple macOS. ---------- components: Extension Modules messages: 291727 nosy: thruston priority: normal severity: normal status: open title: Support Apple AIFF-C pseudo compression in aifc.py type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 16:49:45 2017 From: report at bugs.python.org (Ilya Kazakevich) Date: Sat, 15 Apr 2017 20:49:45 +0000 Subject: [New-bugs-announce] [issue30078] "-m unittest --help" says nothing about direct script exection Message-ID: <1492289385.37.0.918058800689.issue30078@psf.upfronthosting.co.za> New submission from Ilya Kazakevich: In Py3 it is possible to run test filelike "python -m unittest tests/test_something.py" (it is *not* possible in Py2!) Here is doc: https://docs.python.org/3/library/unittest.html But "--help" seems to be simply copied from Py2 because it does not have information nor examples about such execution. Please add it to "examples" section at least because this type of usage is very useful. ---------- assignee: docs at python components: Documentation messages: 291729 nosy: Ilya Kazakevich, docs at python priority: normal severity: normal status: open title: "-m unittest --help" says nothing about direct script exection versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 15 23:10:04 2017 From: report at bugs.python.org (Philip Lee) Date: Sun, 16 Apr 2017 03:10:04 +0000 Subject: [New-bugs-announce] [issue30079] Explain why it is recommended to pass args as a string rather than as a sequence If shell is True Message-ID: <1492312204.85.0.288489442565.issue30079@psf.upfronthosting.co.za> New submission from Philip Lee: The doc here https://docs.python.org/3/library/subprocess.html#subprocess.Popen says : "If shell is True, it is recommended to pass args as a string rather than as a sequence." 
but without explain why ? Please add the explanation ! while in https://docs.python.org/3/library/subprocess.html#frequently-used-arguments says: "args is required for all calls and should be a string, or a sequence of program arguments. Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names). If passing a single string, either shell must be True (see below) or else the string must simply name the program to be executed without specifying any arguments." In the case of shell =True , I found providing a sequence of arguments rather than a string argument can take the advantage of auto escaping and quoting of arguments (e.g. to permit spaces in file names) , so what is the advantage of pass args as a string rather than as a sequence as says in the doc when shell is True? ---------- assignee: docs at python components: Documentation messages: 291733 nosy: docs at python, iMath priority: normal severity: normal status: open title: Explain why it is recommended to pass args as a string rather than as a sequence If shell is True type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 02:12:44 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 16 Apr 2017 06:12:44 +0000 Subject: [New-bugs-announce] [issue30080] Add the --duplicate option for timeit Message-ID: <1492323164.81.0.971695586502.issue30080@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: One of the most used by me option of the "perf timeit" subcommand is --duplicate. It duplicates statements to reduce the overhead of the loop. This is necessary when measure the time of very fast statements. Proposed patch adds this option for CLI of the timeit module. Similar feature already was proposed in issue21988, but it automatically duplicated statements if they executed too fast. This patch does this only on explicit request. And it affects only command-line interface. You need to duplicate statements manually when use programming interface. ---------- components: Demos and Tools, Library (Lib) messages: 291736 nosy: Guido.van.Rossum, alex, arigo, georg.brandl, gvanrossum, haypo, haypo, pitrou, r.david.murray, rhettinger, serhiy.storchaka, steven.daprano, tim.peters priority: normal severity: normal stage: patch review status: open title: Add the --duplicate option for timeit type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 12:22:37 2017 From: report at bugs.python.org (Xiang Zhang) Date: Sun, 16 Apr 2017 16:22:37 +0000 Subject: [New-bugs-announce] [issue30081] Inconsistent handling of failure of PyModule_AddObject Message-ID: <1492359757.88.0.168735973244.issue30081@psf.upfronthosting.co.za> New submission from Xiang Zhang: The doc of PyModule_AddObject()[1] states it steals a reference to *value*. But this is only the case when it succeed. On failure the reference is not stolen. The usages of it across the code base are inconsistent. Some realizes this situation and depends on it: [2]. Some doesn't realize: [3]. Most just assume it always succeeds: [4]. BTW, it seems many modules doesn't release memories well in failure situations in their PyMOD_INIT. Maybe I miss some post-handling procedures? 
[1] https://docs.python.org/3/c-api/module.html#c.PyModule_AddObject [2] https://github.com/python/cpython/blob/master/Python/modsupport.c#L644 [3] https://github.com/python/cpython/blob/master/Modules/gcmodule.c#L1590 [4] https://github.com/python/cpython/blob/master/Modules/_datetimemodule.c#L5799 ---------- messages: 291750 nosy: xiang.zhang priority: normal severity: normal status: open title: Inconsistent handling of failure of PyModule_AddObject type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 13:26:26 2017 From: report at bugs.python.org (Philip Lee) Date: Sun, 16 Apr 2017 17:26:26 +0000 Subject: [New-bugs-announce] [issue30082] hide command prompt when using subprocess.Popen with shell=False on Windows Message-ID: <1492363586.51.0.498152844874.issue30082@psf.upfronthosting.co.za> New submission from Philip Lee: First, It is nearly useless for the command prompt to pop up during the running time of subprocess.Popen with shell=False. Second, the popping up command prompt would interrupt users and do bad to user experience of GUI applications. Third, I found QProcess within Qt won't pop up the command prompt in using. It would be convenient to add an argument to suppress the command prompt from popping up when using subprocess.Popen with shell=False on Windows, many users are missing the feature and these are many similar feature request questions like the following http://stackoverflow.com/questions/7006238/how-do-i-hide-the-console-when-i-use-os-system-or-subprocess-call http://stackoverflow.com/questions/1765078/how-to-avoid-console-window-with-pyw-file-containing-os-system-call/12964900#12964900 http://stackoverflow.com/questions/1016384/cross-platform-subprocess-with-hidden-window ---------- components: Library (Lib) messages: 291760 nosy: iMath priority: normal severity: normal status: open title: hide command prompt when using subprocess.Popen with shell=False on Windows type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 14:18:42 2017 From: report at bugs.python.org (=?utf-8?b?0JzQsNGA0Log0JrQvtGA0LXQvdCx0LXRgNCz?=) Date: Sun, 16 Apr 2017 18:18:42 +0000 Subject: [New-bugs-announce] [issue30083] Asyncio: GeneratorExit + strange exception Message-ID: <1492366722.76.0.0689582979435.issue30083@psf.upfronthosting.co.za> New submission from ???? ?????????: How to reproduce: Run the following program: ========================= import asyncio async def handle_connection(reader, writer): try: await reader.readexactly(42) except BaseException as err: print('Interesting: %r.' % err) raise finally: writer.close() loop = asyncio.get_event_loop() coro = asyncio.start_server(handle_connection, '127.0.0.1', 8888) server = loop.run_until_complete(coro) try: loop.run_forever() except KeyboardInterrupt: print('KeyboardInterrupt catched.') server.close() loop.run_until_complete(server.wait_closed()) loop.close() ========================= 0. Python 3.5.2 1. Connect using telnet to localhost and port 888, type one short line and press Enter. 2. Type Ctrl+C in terminal where programw is running. 3. You will see the following output: ========================= ^CKeyboardInterrupt catched. Interesting: GeneratorExit(). 
Exception ignored in: Traceback (most recent call last): File "bug.py", line 12, in handle_connection writer.close() File "/usr/lib/python3.5/asyncio/streams.py", line 306, in close return self._transport.close() File "/usr/lib/python3.5/asyncio/selector_events.py", line 591, in close self._loop.call_soon(self._call_connection_lost, None) File "/usr/lib/python3.5/asyncio/base_events.py", line 567, in call_soon handle = self._call_soon(callback, args) File "/usr/lib/python3.5/asyncio/base_events.py", line 576, in _call_soon self._check_closed() File "/usr/lib/python3.5/asyncio/base_events.py", line 356, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Task was destroyed but it is pending! task: wait_for=> ========================= This is almost canonical example of asyncio usage. So I have two questions: 1. Why coroutine is interrupted with GeneratorExit instead of CancelledError ? 2. Why something happend AFTER io loop is closed ? 3. How to code all that right ? I want to close connection on any error. Example provided is simplified code. In real code it looks like: ===== try: await asyncio.wait_for(self._handle_connection(reader, writer), 60) except asyncio.TimeoutError: writer.transport.abort() except asyncio.CancelledError: writer.transport.abort() except Exception: writer.transport.abort() finally: writer.close() ===== ---------- messages: 291763 nosy: socketpair priority: normal severity: normal status: open title: Asyncio: GeneratorExit + strange exception _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 18:39:03 2017 From: report at bugs.python.org (umedoblock) Date: Sun, 16 Apr 2017 22:39:03 +0000 Subject: [New-bugs-announce] [issue30084] about starred expression Message-ID: <1492382343.67.0.768693808659.issue30084@psf.upfronthosting.co.za> New submission from umedoblock: Hi, all. First of all, my python environment is below. Python 3.5.2+ (default, Sep 22 2016, 12:18:14) [GCC 6.2.0 20160927] on linux = differ evaluation order about starred expression I get below result then I run x.py ====================================================== File "/home/umedoblock/x.py", line 4 (*(1, 2)) ^ SyntaxError: can't use starred expression here ====================================================== Next, I comment out line 4 and run Python3. I got below result. And I feel strange behavior above result. Because I think that Python should return same result above and below. ====================================================== Traceback (most recent call last): File "/home/umedoblock/x.py", line 1, in list(*(1, 2)) TypeError: list() takes at most 1 argument (2 given) ====================================================== = pass or not about starred expression. list expression pass starred expression, the other hand tuple expression cannot pass starred expression. I hope to pass starred expression about list and tuple. 
>>> [*(1, 2)] [1, 2] >>> (*(1, 2)) File "", line 1 SyntaxError: can't use starred expression here ---------- components: Regular Expressions files: x.py messages: 291769 nosy: ezio.melotti, mrabarnett, umedoblock priority: normal severity: normal status: open title: about starred expression type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file46807/x.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 16 22:27:09 2017 From: report at bugs.python.org (Steven D'Aprano) Date: Mon, 17 Apr 2017 02:27:09 +0000 Subject: [New-bugs-announce] [issue30085] Discourage operator.__dunder__ functions Message-ID: <1492396029.61.0.720198059802.issue30085@psf.upfronthosting.co.za> New submission from Steven D'Aprano: As discussed on the Python-Ideas mailing list, it is time to discourage the use of operator.__dunder__ functions. Not to remove them or deprecate them, just change the documentation to make it clear that the dunderless versions are preferred. Guido +1'ed this suggestion, and there were no objections: https://mail.python.org/pipermail/python-ideas/2017-April/045424.html ---------- assignee: docs at python components: Documentation messages: 291774 nosy: docs at python, ncoghlan, steven.daprano, terry.reedy priority: normal severity: normal status: open title: Discourage operator.__dunder__ functions _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 00:43:18 2017 From: report at bugs.python.org (umedoblock) Date: Mon, 17 Apr 2017 04:43:18 +0000 Subject: [New-bugs-announce] [issue30086] type() and len() recognize "abc", expression as "abc" string. Message-ID: <1492404198.27.0.60955199982.issue30086@psf.upfronthosting.co.za> New submission from umedoblock: But I found a real bug to use a tuple with a comma. Python3 recognized "abc", expression as tuple of one element. But type() and len() recognize "abc", expression as "abc" string. So now, I found a real bug. I'll show you below sentences. >>> "abc", ('abc',) >>> obj = "abc", >>> obj ('abc',) >>> type(obj) >>> len(("abc",)) 1 >>> len(obj) 1 >>> type("abc",) >>> len("abc",) 3 ---------- components: Regular Expressions messages: 291781 nosy: ezio.melotti, mrabarnett, umedoblock priority: normal severity: normal status: open title: type() and len() recognize "abc", expression as "abc" string. 
type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 02:15:46 2017 From: report at bugs.python.org (Christoph Zimmermann) Date: Mon, 17 Apr 2017 06:15:46 +0000 Subject: [New-bugs-announce] [issue30087] pdb issue with type conversion Message-ID: <1492409746.73.0.239584999276.issue30087@psf.upfronthosting.co.za> New submission from Christoph Zimmermann: Types cannot be converted properly while running under pdb control: python >>> t=(1,2,3) >>> list(t) [1, 2, 3] python pdb.py (Pdb) t=(1,2,3) (Pdb) list(t) *** Error in argument: '(t)' ---------- components: Extension Modules messages: 291786 nosy: Christoph Zimmermann priority: normal severity: normal status: open title: pdb issue with type conversion versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 03:47:54 2017 From: report at bugs.python.org (Sviatoslav Sydorenko) Date: Mon, 17 Apr 2017 07:47:54 +0000 Subject: [New-bugs-announce] [issue30088] mailbox.Maildir doesn't create subdir structure when create=True and base dir exists Message-ID: <1492415274.72.0.826528835648.issue30088@psf.upfronthosting.co.za> New submission from Sviatoslav Sydorenko: Hi, I've faced an issue w/ `mailbox.Maildir()`. The case is following: 1. I create a folder with `tempfile.TemporaryDirectory()`, so it's empty 2. I pass that folder path as an argument when instantiating `mailbox.Maildir()` 3. Then I receive an exception happening because "there's no such file or directory" (namely `cur`, `tmp` or `new`) during interaction with Maildir **Expected result:** subdirs are created during `Maildir()` instance creation. **Actual result:** subdirs are assumed as existing which leads to exceptions during use. **Workaround:** remove the actual dir before passing the path to `Maildir()`. It will be created automatically with all subdirs needed. **Fix:** PR linked. Basically it adds creation of subdirs regardless of whether the base dir existed before. ---------- components: Library (Lib) messages: 291789 nosy: webknjaz priority: normal pull_requests: 1293 severity: normal status: open title: mailbox.Maildir doesn't create subdir structure when create=True and base dir exists type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 10:18:15 2017 From: report at bugs.python.org (Chinh Nguyen) Date: Mon, 17 Apr 2017 14:18:15 +0000 Subject: [New-bugs-announce] [issue30089] Automatic Unload of Dynamic Library Cause Segfault Message-ID: <1492438695.51.0.289749789728.issue30089@psf.upfronthosting.co.za> New submission from Chinh Nguyen: I'm using ctypes to access the PAM library to change a user's password. That is, using the function pam_chauthtok. This is occurring inside a python celery worker in FreeBSD. This will work the first time, the second time generates a segfault and crashes the worker. 
On attaching gdb to the worker process, I observe the following: * Crash occurs in function login_setcryptfmt * Setting a breakpoint there, I see the following after the first successful password change "warning: Temporarily disabling breakpoints for unloaded shared library "/lib/libcrypt.so.5" * When there is segfault on the second password change, the location of the segfault cannot be disassemble * It doesn't look like libcrypt is a direct dependency of libpam. So it looks like what is happening is this: * libcrypt is loaded (by python/system?) to invoke some password-related functions, it is then unloaded (by python/system?) * When the same function is invoked again, somehow libcrypt does not get loaded. This results in a function call to the same function address which is now invalid. My current work-around is to include libcrypto explicitly by binding to it though I don't use it directly. For example, libcrypt = CDLL(find_library("crypt")). Other notes: * This does not occur if I launch celery worker all running in the same process via the celery "green threads" module eventlet * This only happens if the celery worker is a python child process. I don't know how celery spawns child processes. ---------- components: ctypes messages: 291798 nosy: Chinh Nguyen priority: normal severity: normal status: open title: Automatic Unload of Dynamic Library Cause Segfault type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 12:27:57 2017 From: report at bugs.python.org (Emmanuel Arias) Date: Mon, 17 Apr 2017 16:27:57 +0000 Subject: [New-bugs-announce] [issue30090] Failed to build these modules: _ctypes Message-ID: <1492446477.68.0.86749049176.issue30090@psf.upfronthosting.co.za> New submission from Emmanuel Arias: Hello everybody, I am working with the code. I clone the repo, and make a pull upstream of github's cpython repository (master branch), and when I make: ./configure --with-pydebug && make -j build correctly but finished with this message: Failed to build these modules: _ctypes But the Test Result is: SUCCESS Regards ---------- components: Interpreter Core messages: 291800 nosy: eamanu priority: normal severity: normal status: open title: Failed to build these modules: _ctypes type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 17 13:19:05 2017 From: report at bugs.python.org (Jon Dufresne) Date: Mon, 17 Apr 2017 17:19:05 +0000 Subject: [New-bugs-announce] [issue30091] DeprecationWarning: invalid escape sequence: Only appears on first run Message-ID: <1492449545.72.0.587764680133.issue30091@psf.upfronthosting.co.za> New submission from Jon Dufresne: After upgrading to Python 3.6, I'm working towards cleaning up "DeprecationWarning: invalid escape sequence". I've noticed that the Deprecation warning only appears on the first run. It looks like once the code is compiled to `__pycache__`, the deprecation warning does not show. This makes debugging more difficult as I need clean out `__pycache__` directories for the runs to be reproducible. Example script: foo.py ``` import bar ``` bar.py ``` s = '\.' ``` First run ``` $ python36 -Wall foo.py .../test/bar.py:1: DeprecationWarning: invalid escape sequence \. s = '\.' 
``` Second run (no DeprecationWarning) ``` $ python36 -Wall foo.py ``` Third run after cleaning ``` $ rm -rf __pycache__ $ python36 -Wall foo.py .../test/bar.py:1: DeprecationWarning: invalid escape sequence \. s = '\.' ``` I expect the deprecation warning to output on every run. ---------- components: Interpreter Core messages: 291805 nosy: jdufresne priority: normal severity: normal status: open title: DeprecationWarning: invalid escape sequence: Only appears on first run type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 03:01:18 2017 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 18 Apr 2017 07:01:18 +0000 Subject: [New-bugs-announce] [issue30092] Replace sys.version to sys.version_info in sysconfig.py Message-ID: <1492498878.43.0.141309360868.issue30092@psf.upfronthosting.co.za> New submission from Dong-hee Na: Not to rely on sys.version here, its format is an implementation detail of CPython, use sys.version_info or sys.hexversion ---------- components: Library (Lib) messages: 291824 nosy: corona10 priority: normal severity: normal status: open title: Replace sys.version to sys.version_info in sysconfig.py versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 09:11:45 2017 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 18 Apr 2017 13:11:45 +0000 Subject: [New-bugs-announce] [issue30093] Unicode eq operation with hash. Message-ID: <1492521105.55.0.00251223678903.issue30093@psf.upfronthosting.co.za> New submission from Dong-hee Na: If the Unicode compare operation is done by comparing the hashes, it is likely to be efficient because memory comparison is not necessary. If this idea is approved I could upload my PR right now. :-) (I already checked local unit test is passed.) ---------- components: Unicode messages: 291833 nosy: corona10, ezio.melotti, haypo priority: normal severity: normal status: open title: Unicode eq operation with hash. type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 14:32:36 2017 From: report at bugs.python.org (Erik Zoltan) Date: Tue, 18 Apr 2017 18:32:36 +0000 Subject: [New-bugs-announce] [issue30094] PDB enhancement Message-ID: <1492540356.16.0.824353037867.issue30094@psf.upfronthosting.co.za> New submission from Erik Zoltan: I have created a pdb enhancement that allows me to query an object's internals more gracefully. It is incredibly useful and it would be very easy to include this logic in the distribution version of pdb. I created my own modification of pdb called zdebug (attached as zdebug.py) that implements a new ppp debugging command. This command prints a formatted output of an object's internals. It could be smoother, and doesn't fully obey the programming conventions used with in pdb, and I'm not proposing to submit it as a patch. However the ppp command is pretty simple and incredibly useful. Here's a tiny example. I can drill into an object, see its internals, and interactively explore its property chain. (The zdebug.zbreak() call is equivalent to pdb.set_trace()). 
$ python3 >>> from datetime import date >>> today = date.today() >>> import zdebug >>> zdebug.zbreak() --Return-- > (1)()->None zdebug> p today datetime.date(2017, 4, 18) zdebug> ppp today ctime = day = 18 fromordinal = fromtimestamp = isocalendar = isoformat = isoweekday = max = 9999-12-31 min = 0001-01-01 month = 4 replace = resolution = 1 day, 0:00:00 strftime = timetuple = today = toordinal = weekday = year = 2017 zdebug> p today.day 18 zdebug> p today.year 2017 ---------- components: Library (Lib) files: zdebug.py messages: 291839 nosy: Erik Zoltan priority: normal severity: normal status: open title: PDB enhancement type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file46810/zdebug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 15:37:57 2017 From: report at bugs.python.org (Oz Tiram) Date: Tue, 18 Apr 2017 19:37:57 +0000 Subject: [New-bugs-announce] [issue30095] HTMLCalendar allow custom classes Message-ID: <1492544277.59.0.956887104329.issue30095@psf.upfronthosting.co.za> New submission from Oz Tiram: At the moment methods like HTMLCalendar.formatmonthname and HTMLCalendar.formatmonth have hard coded name 'month'. This class is pretty helpful as a good start, but if you want to customize the styles it's not helpful. I think it would be helpful for others too, if this would have be customize able. Would you accept a PR for such thing? ---------- components: Library (Lib) messages: 291841 nosy: Oz.Tiram priority: normal severity: normal status: open title: HTMLCalendar allow custom classes type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 15:51:38 2017 From: report at bugs.python.org (Brett Cannon) Date: Tue, 18 Apr 2017 19:51:38 +0000 Subject: [New-bugs-announce] [issue30096] Update examples in abc documentation to use abc.ABC Message-ID: <1492545098.58.0.705598301268.issue30096@psf.upfronthosting.co.za> New submission from Brett Cannon: I noticed that the documentation for the abc module (https://docs.python.org/3/library/abc.html) has all example classes use ABCMeta instead of ABC which is what most people probably want. To keep things simple the docs should probably be updated to inherit from abc.ABC. ---------- assignee: docs at python components: Documentation messages: 291842 nosy: brett.cannon, docs at python priority: normal severity: normal stage: needs patch status: open title: Update examples in abc documentation to use abc.ABC type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 18:11:52 2017 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 18 Apr 2017 22:11:52 +0000 Subject: [New-bugs-announce] [issue30097] Command-line option to suppress "from None" for debugging Message-ID: <1492553512.25.0.992335940269.issue30097@psf.upfronthosting.co.za> New submission from Raymond Hettinger: Filing this feature request on behalf of an engineering team that I work with. This team creates Python tools for use by other departments. Accordingly, their best practice is to use "raise CleanException from None" to give the cleanest error messages to their users while hiding the noise of implementation details and internal logic. 
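The pattern in question looks roughly like this (CleanException is the report's placeholder name; the function is made up for illustration):

    class CleanException(Exception):
        """Documented, user-facing error."""

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError:
            # "from None" suppresses the implicit chaining, so users never see the OSError
            raise CleanException("configuration file is missing or unreadable") from None

Removing the "from None" (or chaining explicitly with "from err") is what restores the full traceback while debugging; the requested switch would effectively do that globally without touching the code.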
The exposed exceptions are a documented, guaranteed part of the API that users can reliably catch and handle. This has worked well for them; however, when they are debugging the library itself it would be nice to have a way to suppress all the "from None" code and see fuller stack traces that indicate root causes. One way to do this would be to have a command-line switch such as "python -C testcode.py". Where the "-C" option means "Always show the cause of exceptions even when 'from None' is present. ---------- components: Interpreter Core messages: 291844 nosy: rhettinger priority: normal severity: normal status: open title: Command-line option to suppress "from None" for debugging type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 18:24:34 2017 From: report at bugs.python.org (crenwick) Date: Tue, 18 Apr 2017 22:24:34 +0000 Subject: [New-bugs-announce] [issue30098] Verbose TypeError for asyncio.ensure_future Message-ID: <1492554274.3.0.429824615733.issue30098@psf.upfronthosting.co.za> New submission from crenwick: Despite the shy mention in the docs, it was not clear to me that the future returned from asyncio.run_coroutine_threadsafe is not compatible with asyncio.ensure_future (and other asyncio functions), and it took me a fair amount of frustration and source-code-digging to figure out what was going on. To avoid this confusion for other users, I think that a verbose TypeError warning when a concurrent.futures.Future object is passed into asyncio.ensure_future would be very helpful. ---------- components: asyncio messages: 291845 nosy: crenwick, yselivanov priority: normal pull_requests: 1302 severity: normal status: open title: Verbose TypeError for asyncio.ensure_future type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 18 19:57:59 2017 From: report at bugs.python.org (Jake Merdich) Date: Tue, 18 Apr 2017 23:57:59 +0000 Subject: [New-bugs-announce] [issue30099] Lib2to3 fails with unreadable pickle file Message-ID: <1492559879.72.0.671474641555.issue30099@psf.upfronthosting.co.za> New submission from Jake Merdich: There seem to have been a few issues in the past with creating the lib2to3 pickle files with the right permissions due to umask behavior (#15890). While I'm unaware of the status installer-level fixes have given, it seems prudent that the installed python should always function, given there is an obvious and trivial fallback (regenerating the grammar tables at runtime). The current codebase will throw a PermissionDenied exception if the pickle file is unreadable, rather than fall back to generating the grammar tables. To reproduce: Install python2.6+, 3.0+ chmod o-r $PYTHON_INSTALL/lib/pythonX.Y/lib2to3/*.pickle pythonX.Y -c "import lib2to3.pygram" Notably, this sort of borked installation is quite hard to detect.... unless a user without root tries to run setuptools (*whistles*). 
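The requested fallback is conceptually a try/except around the table load; a sketch of the idea only, with illustrative names rather than the actual lib2to3 API or the attached patch:

    import pickle

    def load_grammar_tables(pickle_path, regenerate):
        """Use the pre-generated pickle when readable, else rebuild the tables at runtime."""
        try:
            with open(pickle_path, "rb") as f:
                return pickle.load(f)
        except OSError:            # missing, unreadable, or permission denied
            return regenerate()    # slower, but the installation keeps working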
---------- components: 2to3 (2.x to 3.x conversion tool) files: 0001-Fallback-to-regenerating-2to3-grammars-on-read-fail.patch keywords: patch messages: 291853 nosy: Jake Merdich priority: normal severity: normal status: open title: Lib2to3 fails with unreadable pickle file type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 Added file: http://bugs.python.org/file46811/0001-Fallback-to-regenerating-2to3-grammars-on-read-fail.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 01:20:30 2017 From: report at bugs.python.org (donkopotamus) Date: Wed, 19 Apr 2017 05:20:30 +0000 Subject: [New-bugs-announce] [issue30100] WeakSet should all discard and remove on items that can have weak references Message-ID: <1492579230.86.0.379456919312.issue30100@psf.upfronthosting.co.za> New submission from donkopotamus: Currently WeakSet().discard([]) will raise a TypeError as we cannot take a weak reference to a list. However, that means a list can never be in a WeakSet, so WeakSet().discard([]) could instead be a no-op. Similarly WeakSet().remove([]) could be a KeyError. ---------- messages: 291861 nosy: donkopotamus priority: normal pull_requests: 1304 severity: normal status: open title: WeakSet should all discard and remove on items that can have weak references _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 13:18:46 2017 From: report at bugs.python.org (Eijebong) Date: Wed, 19 Apr 2017 17:18:46 +0000 Subject: [New-bugs-announce] [issue30101] Add support for ncurses A_ITALIC Message-ID: <1492622326.37.0.880167470539.issue30101@psf.upfronthosting.co.za> Changes by Eijebong : ---------- components: Library (Lib) nosy: Eijebong priority: normal pull_requests: 1307 severity: normal status: open title: Add support for ncurses A_ITALIC versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 14:16:08 2017 From: report at bugs.python.org (Gustavo Serra Scalet) Date: Wed, 19 Apr 2017 18:16:08 +0000 Subject: [New-bugs-announce] [issue30102] improve performance of libSSL usage on hashing Message-ID: <1492625768.04.0.0441991699383.issue30102@psf.upfronthosting.co.za> New submission from Gustavo Serra Scalet: To correctly pick the best algorithm for the current architecture, libssl needs to have OPENSSL_config(NULL) called as described on: https://wiki.openssl.org/index.php/Libcrypto_API This short change lead to a speedup of 50% on POWER8 when using hashlib.sha256 digest functionality as it now uses a SIMD approach that was already existing but not used by cpython. 
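A quick way to check whether the speedup is visible from Python, assuming an interpreter linked against the system OpenSSL, is to time a large digest on the patched and unpatched builds:

    import hashlib
    import timeit

    payload = b"x" * (16 * 1024 * 1024)   # 16 MiB of input

    def digest():
        return hashlib.sha256(payload).hexdigest()

    # best of 5 runs of 10 digests; compare the number printed by both builds
    print(min(timeit.repeat(digest, number=10, repeat=5)))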
---------- assignee: christian.heimes components: SSL messages: 291892 nosy: christian.heimes, gut priority: normal severity: normal status: open title: improve performance of libSSL usage on hashing type: enhancement versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 14:22:20 2017 From: report at bugs.python.org (Kyle Glowacki) Date: Wed, 19 Apr 2017 18:22:20 +0000 Subject: [New-bugs-announce] [issue30103] uu package uses old encoding Message-ID: <1492626140.89.0.479583121087.issue30103@psf.upfronthosting.co.za> New submission from Kyle Glowacki: Looking in the man pages for the uuencode and uudecode (http://www.manpagez.com/man/5/uuencode/), I see that the encoding used to go from ascii 32 to 95 but that 32 is deprecated and generally newer releases go from 33-96 (with 96 being used in place of 32). This replaces the " " in the encoding with "`". For example, the newest version of busybox only accepts the new encoding. The uu package has no way to specify to use this new encoding making it a pain to integrate. Oddly, the uu.decode function does properly decode files encoded using "`", but encode is unable to create them. ---------- components: Extension Modules messages: 291893 nosy: LawfulEvil priority: normal severity: normal status: open title: uu package uses old encoding type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 16:39:22 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 19 Apr 2017 20:39:22 +0000 Subject: [New-bugs-announce] [issue30104] Float rounding errors on AMD64 FreeBSD CURRENT Debug 3.x buildbot Message-ID: <1492634362.26.0.662150141345.issue30104@psf.upfronthosting.co.za> New submission from STINNER Victor: Since the build 154, many tests fail on the AMD64 FreeBSD CURRENT Debug 3.x buildbot slave because of float rounding errors. Failing tests: * test_cmath * test_float * test_json * test_marshal * test_math * test_statistics * test_strtod http://buildbot.python.org/all/builders/AMD64%20FreeBSD%20CURRENT%20Non-Debug%203.x/builds/154/steps/test/logs/stdio Problem: none of build 154 changes are related to floats * commit f9f87f0934ca570293ba7194bed3448a7f9bf39c * commit 947629916a5ecb1f6f6792e9b9234e084c5bf274 It looks more like a libc change of the FreeBSD CURRENT slave. Example of errors: ====================================================================== FAIL: test_specific_values (test.test_cmath.CMathTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_cmath.py", line 420, in test_specific_values msg=error_message) File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_cmath.py", line 149, in rAssertAlmostEqual '{!r} and {!r} are not sufficiently close'.format(a, b)) AssertionError: acos0036: acos(complex(-1.0009999999999992, 0.0)) Expected: complex(3.141592653589793, -0.04471763360830684) Received: complex(3.141592653589793, -0.04471763360829195) Received value insufficiently close to expected value. 
====================================================================== FAIL: test_floats (test.test_json.test_float.TestPyFloat) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/test_json/test_float.py", line 8, in test_floats self.assertEqual(float(self.dumps(num)), num) AssertionError: 1.9275814160560202e-50 != 1.9275814160560204e-50 FAIL: test_bigcomp (test.test_strtod.StrtodTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_strtod.py", line 213, in test_bigcomp self.check_strtod(s) File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_strtod.py", line 104, in check_strtod "expected {}, got {}".format(s, expected, got)) AssertionError: '0x1.8265ea9f864bcp+579' != '0x1.8265ea9f864bdp+579' - 0x1.8265ea9f864bcp+579 ? ^ + 0x1.8265ea9f864bdp+579 ? ^ : Incorrectly rounded str->float conversion for 29865e170: expected 0x1.8265ea9f864bcp+579, got 0x1.8265ea9f864bdp+579 ---------- messages: 291901 nosy: haypo, koobs priority: normal severity: normal status: open title: Float rounding errors on AMD64 FreeBSD CURRENT Debug 3.x buildbot versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 18:36:46 2017 From: report at bugs.python.org (kyuupichan) Date: Wed, 19 Apr 2017 22:36:46 +0000 Subject: [New-bugs-announce] [issue30105] Duplicated connection_made() call for some SSL connections Message-ID: <1492641406.9.0.0965010091679.issue30105@psf.upfronthosting.co.za> New submission from kyuupichan: An asyncio SSL server frequently sees duplicated connection_made() calls for an incoming SSL connection. It does not happen to all SSL connections; perhaps 10-25% of them. It never happens to TCP connections. Here are some examples of logs from one I run. I see this on MacOSX and DragonflyBSD, others have reported the same on Linux, so I believe it's a quirk in asyncio and not O/S specific. I assume it is a bug in the wrapping of the raw TCP transport with an SSL one. 2017-04-20 07:22:44.676180500 INFO:ElectrumX:[14755] SSL 218.185.137.81:52844, 256 total 2017-04-20 07:22:45.666747500 INFO:ElectrumX:[14756] SSL 218.185.137.81:52847, 256 total The log is output on a connection_made() callback to a protocol, and not from the protocol's constructor. They are very close together, from the same IP address and ports that are close together. The first connection was closed with connection_lost() before the 2nd connection_made() because the total session count (here 256) did not increase. Here is another section of my log with 2 more examples. Totals are not monotonic because of course disconnections (not logged) happen all the time. 
2017-04-20 07:30:31.529671500 INFO:ElectrumX:[14796] SSL 193.90.12.86:42262, 259 total 2017-04-20 07:31:04.434559500 INFO:ElectrumX:[14797] SSL 70.199.157.209:10851, 259 total 2017-04-20 07:31:05.765178500 INFO:ElectrumX:[14798] SSL 70.199.157.209:10877, 259 total 2017-04-20 07:31:32.305260500 INFO:ElectrumX:[14799] SSL 64.113.32.29:35025, 256 total 2017-04-20 07:31:44.731859500 INFO:ElectrumX:[14800] SSL 188.107.123.236:60867, 255 total 2017-04-20 07:31:45.504245500 INFO:ElectrumX:[14801] SSL 188.107.123.236:60868, 255 total 2017-04-20 07:31:48.943430500 INFO:ElectrumX:[14802] SSL 136.24.49.122:54987, 255 total 2017-04-20 07:31:59.967676500 INFO:ElectrumX:[14803] TCP 113.161.81.136:2559, 256 total 2017-04-20 07:32:03.249780500 INFO:ElectrumX:[14804] SSL 69.121.8.201:63409, 256 total Another reason I believe this is an asyncio issue on the server side is that someone else's server software that doesn't use asyncio logs in a similar way and does not see duplicated incoming "connections". ---------- components: asyncio messages: 291919 nosy: kyuupichan, yselivanov priority: normal severity: normal status: open title: Duplicated connection_made() call for some SSL connections type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 18:54:22 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 19 Apr 2017 22:54:22 +0000 Subject: [New-bugs-announce] [issue30106] test_asyncore: test_handle_write() fails in tearDown() Message-ID: <1492642462.27.0.792408985482.issue30106@psf.upfronthosting.co.za> New submission from STINNER Victor: On the AMD64 FreeBSD CURRENT Non-Debug 3.x buildbot, test_handle_write() of test_asyncore now fails on calling asyncore.close_all() in tearDown(). Moreover, since my commit 7b9619ae249ed637924d1c76687b411061753e5a, the following test_quick_connect() now fails on self.fail("join() timed out"). I guess that asyncore.socket_map still contains unwanted sockets from test_handle_write() which failed. Attached PR should fix the test_handle_write() failure by calling asyncore.close_all() with ignore_all=True. 
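In other words, the tearDown change amounts to roughly the following (a sketch, not the attached PR); the buildbot failure it addresses is quoted below:

    import asyncore

    def tearDown(self):
        # ignore_all=True swallows per-socket errors (such as ECONNRESET) raised while closing
        asyncore.close_all(ignore_all=True)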
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%20CURRENT%20Non-Debug%203.x/builds/174/steps/test/logs/stdio ====================================================================== ERROR: test_handle_write (test.test_asyncore.TestAPI_UseIPv6Poll) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_asyncore.py", line 505, in tearDown asyncore.close_all() File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/asyncore.py", line 561, in close_all x.close() File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/asyncore.py", line 397, in close self.socket.close() File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/socket.py", line 417, in close self._real_close() File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/socket.py", line 411, in _real_close _ss.close(self) ConnectionResetError: [Errno 54] Connection reset by peer ====================================================================== FAIL: test_quick_connect (test.test_asyncore.TestAPI_UseIPv6Poll) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/support/__init__.py", line 2042, in decorator return func(*args) File "/usr/home/buildbot/python/3.x.koobs-freebsd-current.nondebug/build/Lib/test/test_asyncore.py", line 800, in test_quick_connect self.fail("join() timed out") AssertionError: join() timed out ---------- components: Tests messages: 291920 nosy: haypo priority: normal severity: normal status: open title: test_asyncore: test_handle_write() fails in tearDown() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 19:22:57 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 19 Apr 2017 23:22:57 +0000 Subject: [New-bugs-announce] [issue30107] python.core file created when running tests on AMD64 FreeBSD CURRENT Non-Debug 3.x buildbot Message-ID: <1492644177.56.0.807251797434.issue30107@psf.upfronthosting.co.za> New submission from STINNER Victor: Example of buildbot build which created a python.core file. Tests crashing Python on purpose should use test.support.SuppressCrashReport. I don't see which test can crash in the following list. 
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%20CURRENT%20Non-Debug%203.x/builds/175/steps/test/logs/stdio 0:04:54 [132/404/6] test_raise passed -- running: test_subprocess (61 sec) 0:04:54 [133/404/6] test_dict passed -- running: test_subprocess (62 sec) 0:04:54 [134/404/6] test_telnetlib passed -- running: test_subprocess (62 sec) 0:04:55 [135/404/6] test_robotparser passed -- running: test_subprocess (63 sec) 0:04:56 [136/404/6] test_py_compile passed -- running: test_subprocess (64 sec) 0:04:56 [137/404/6] test_listcomps passed -- running: test_subprocess (64 sec) 0:04:56 [138/404/6] test_pty passed -- running: test_subprocess (64 sec) 0:04:56 [139/404/6] test_defaultdict passed -- running: test_subprocess (64 sec) running: test_io (30 sec), test_subprocess (94 sec) 0:05:27 [140/404/6] test_subprocess passed (94 sec) -- running: test_io (31 sec) 0:05:27 [141/404/6] test_unpack passed -- running: test_io (31 sec) 0:05:28 [142/404/6] test_bytes passed -- running: test_io (33 sec) 0:05:29 [143/404/6] test_weakset passed -- running: test_io (33 sec) 0:05:36 [144/404/6] test_eintr passed -- running: test_io (40 sec) 0:05:37 [145/404/6] test_userstring passed -- running: test_io (41 sec) 0:05:37 [146/404/6] test_support passed -- running: test_io (41 sec) 0:05:37 [147/404/6] test_ioctl skipped -- running: test_io (42 sec) test_ioctl skipped -- Unable to open /dev/tty 0:05:40 [148/404/6] test_cmd_line passed -- running: test_io (44 sec) 0:05:41 [149/404/6] test_turtle skipped -- running: test_io (45 sec) test_turtle skipped -- No module named '_tkinter' 0:05:41 [150/404/6] test_subclassinit passed -- running: test_io (45 sec) 0:05:42 [151/404/6] test_set passed -- running: test_io (46 sec) 0:05:43 [152/404/6] test_xml_etree_c passed -- running: test_io (47 sec) 0:05:43 [153/404/6] test_binascii passed -- running: test_io (47 sec) 0:05:48 [154/404/6] test_normalization passed -- running: test_io (52 sec) fetching http://www.pythontest.net/unicode/9.0.0/NormalizationTest.txt ... 0:05:49 [155/404/6] test_os passed -- running: test_io (54 sec) stty: stdin isn't a terminal 0:05:50 [156/404/6] test_abc passed -- running: test_io (54 sec) 0:05:50 [157/404/6] test_dummy_threading passed -- running: test_io (54 sec) 0:05:50 [158/404/6] test_pkgutil passed -- running: test_io (54 sec) 0:05:50 [159/404/6] test_compile passed -- running: test_io (55 sec) 0:05:51 [160/404/6] test_largefile passed -- running: test_io (55 sec) 0:05:54 [161/404/6] test_io failed (env changed) (57 sec) test_BufferedIOBase_destructor (test.test_io.CIOTest) ... ok ... test_interrupted_write_unbuffered (test.test_io.PySignalsTest) ... 
ok ---------------------------------------------------------------------- Ran 565 tests in 57.439s OK (skipped=2) Warning -- files was modified by test_io Before: [] After: ['python.core'] ---------- components: Tests messages: 291921 nosy: haypo, koobs priority: normal severity: normal status: open title: python.core file created when running tests on AMD64 FreeBSD CURRENT Non-Debug 3.x buildbot versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 20:23:12 2017 From: report at bugs.python.org (STINNER Victor) Date: Thu, 20 Apr 2017 00:23:12 +0000 Subject: [New-bugs-announce] [issue30108] test_site modifies sys.path Message-ID: <1492647792.33.0.749226067814.issue30108@psf.upfronthosting.co.za> New submission from STINNER Victor: See on Travis CI: https://travis-ci.org/python/cpython/jobs/223771666 Warning -- sys.path was modified by test_site Before: (47345855849656, ['', '/usr/local/lib/python37.zip', '/home/travis/build/python/cpython/Lib', '/home/travis/build/python/cpython/build/lib.linux-x86_64-3.7-pydebug'], ['', '/usr/local/lib/python37.zip', '/home/travis/build/python/cpython/Lib', '/home/travis/build/python/cpython/build/lib.linux-x86_64-3.7-pydebug']) After: (47345855849656, ['', '/usr/local/lib/python37.zip', '/home/travis/build/python/cpython/Lib', '/home/travis/build/python/cpython/build/lib.linux-x86_64-3.7-pydebug'], ['', '/usr/local/lib/python37.zip', '/home/travis/build/python/cpython/Lib', '/home/travis/build/python/cpython/build/lib.linux-x86_64-3.7-pydebug', '/home/travis/.local/lib/python3.7/site-packages']) ---------- components: Tests messages: 291923 nosy: haypo priority: normal severity: normal status: open title: test_site modifies sys.path versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 19 23:17:36 2017 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 20 Apr 2017 03:17:36 +0000 Subject: [New-bugs-announce] [issue30109] make reindent failed. Message-ID: <1492658256.6.0.028215188728.issue30109@psf.upfronthosting.co.za> New submission from Dong-hee Na: When I try to `make reindent` It was failed with this messages. ```` ./python.exe ./Tools/scripts/reindent.py -r ./Lib Traceback (most recent call last): File "/Users/corona10/cpython/Lib/tokenize.py", line 404, in find_cookie codec = lookup(encoding) LookupError: unknown encoding: uft-8 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./Tools/scripts/reindent.py", line 329, in main() File "./Tools/scripts/reindent.py", line 101, in main check(arg) File "./Tools/scripts/reindent.py", line 115, in check check(fullname) File "./Tools/scripts/reindent.py", line 115, in check check(fullname) File "./Tools/scripts/reindent.py", line 121, in check encoding, _ = tokenize.detect_encoding(f.readline) File "/Users/corona10/cpython/Lib/tokenize.py", line 433, in detect_encoding encoding = find_cookie(first) File "/Users/corona10/cpython/Lib/tokenize.py", line 412, in find_cookie raise SyntaxError(msg) SyntaxError: unknown encoding for './Lib/test/bad_coding.py': uft-8 make: *** [reindent] Error 1 ``` ---------- messages: 291935 nosy: corona10 priority: normal severity: normal status: open title: make reindent failed. 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 02:54:20 2017 From: report at bugs.python.org (Xiang Zhang) Date: Thu, 20 Apr 2017 06:54:20 +0000 Subject: [New-bugs-announce] [issue30110] test_asyncio reports reference leak Message-ID: <1492671260.02.0.938274219318.issue30110@psf.upfronthosting.co.za> New submission from Xiang Zhang: Running test suite with refleak hunter reports test_asyncio leaks referrences: 0:00:00 [1/1] test_asyncio Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.123 seconds beginning 9 repetitions 123456789 Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.112 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.104 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.113 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.104 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.120 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.142 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.141 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.107 seconds .Executing .start() done, defined at /home/angwer/repos/cpython/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/angwer/repos/cpython/Lib/asyncio/base_events.py:445> took 0.115 seconds . test_asyncio leaked [-2, 2, 0, 0] memory blocks, sum=0 test_asyncio failed in 4 min 27 sec 1 test failed: test_asyncio git bisect blames ba7e1f9a4e06c0b4ad594fd64edcaf7292515820. Looking at the patch it looks to me the problem is in test_get_event_loop_new_process(), the pool is not shutdown. 
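A hypothetical sketch of the kind of fix implied above (the test name is real, the body is illustrative only): the executor created by the test needs an explicit shutdown so its worker process, and the references it holds, do not leak across the repetitions.

    from concurrent.futures import ProcessPoolExecutor

    def test_get_event_loop_new_process(self):
        pool = ProcessPoolExecutor(1)
        # Shut the pool down even if the assertions fail, so refleak hunting
        # does not see its workers as leaked memory blocks.
        self.addCleanup(pool.shutdown)
        result = self.loop.run_until_complete(
            self.loop.run_in_executor(pool, str, 42))
        self.assertEqual(result, '42')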
---------- messages: 291951 nosy: xiang.zhang, yselivanov priority: normal severity: normal status: open title: test_asyncio reports reference leak versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 05:15:30 2017 From: report at bugs.python.org (Marian Horban) Date: Thu, 20 Apr 2017 09:15:30 +0000 Subject: [New-bugs-announce] [issue30111] json module: encoder optimization Message-ID: <1492679730.92.0.726473199526.issue30111@psf.upfronthosting.co.za> New submission from Marian Horban: It is possible to improve the performance of the json module encoder. Since access to local variables is faster than access to globals/builtins, I propose to use locals instead of globals. Small test of such an improvement: >>> import timeit >>> def flocal(name=False): ... for i in range(5): ... x = name ... >>> timeit.timeit("flocal()", "from __main__ import flocal", number=10000000) 5.0455567836761475 >>> >>> def fbuilt_in(): ... for i in range(5): ... x = False ... >>> >>> timeit.timeit("fbuilt_in()", "from __main__ import fbuilt_in", number=10000000) 5.451796054840088 ---------- components: Library (Lib) files: encoder_opt.patch keywords: patch messages: 291955 nosy: Marian Horban 2 priority: normal severity: normal status: open title: json module: encoder optimization type: performance versions: Python 2.7 Added file: http://bugs.python.org/file46815/encoder_opt.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 05:20:30 2017 From: report at bugs.python.org (David Kirkby) Date: Thu, 20 Apr 2017 09:20:30 +0000 Subject: [New-bugs-announce] [issue30112] useful things Message-ID: <1841114936.20170420122013@onetel.net> New submission from David Kirkby: Greetings! I've just come across some very useful things, you might like them too, just take a look here http://www.arqja.com/mission.php?9495 david.kirkby ---------- messages: 291956 nosy: drkirkby priority: normal severity: normal status: open title: useful things _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 05:36:32 2017 From: report at bugs.python.org (Louie Lu) Date: Thu, 20 Apr 2017 09:36:32 +0000 Subject: [New-bugs-announce] [issue30113] Add profile test case for trace_dispatch_return assertion Message-ID: <1492680992.58.0.5244415067.issue30113@psf.upfronthosting.co.za> New submission from Louie Lu: This is a sub-problem of #9285. In #9285 we aim to make cProfile and profile usable as context managers, which will need code like this:

    def __enter__(self):
        self.set_cmd('')
        sys.setprofile(self.dispatcher)
        return self

Unfortunately, when the profiler is set up via `sys.setprofile`, it starts working immediately on the next line, `return self`, which causes the assertion inside `trace_dispatch_return` to claim this is a "Bad return". Technically, `profile.Profile` cannot return to a frame higher up the frame stack than the one it was started in.
This behavior can be observed with this code:

    def fib(n):
        if n > 2:
            return fib(n - 1) + fib(n - 2)
        return n

    def foo():
        pr = profile.Profile()
        # The profile was set in the `foo` frame; it can't return higher than this.
        # That means we can't return to the global frame while this profile is set.
        sys.setprofile(pr.dispatcher)
        fib(5)
        # We didn't stop the profiler here via sys.setprofile(None),
        # so returning 0xDEADBEAF to the global frame causes a bad return.
        return 0xDEADBEAF

    foo()

This issue will provide a test for this behavior; #9285 will then be modified to prevent this situation when using profile as a context manager. ---------- components: Library (Lib) messages: 291957 nosy: louielu, ncoghlan priority: normal severity: normal status: open title: Add profile test case for trace_dispatch_return assertion type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 06:04:06 2017 From: report at bugs.python.org (Marian Horban) Date: Thu, 20 Apr 2017 10:04:06 +0000 Subject: [New-bugs-announce] [issue30114] json module: it is not possible to override 'true', 'false' values during encoding bool Message-ID: <1492682646.2.0.186501711671.issue30114@psf.upfronthosting.co.za> New submission from Marian Horban: It is not possible to override the 'true', 'false' values when encoding a bool. For example, if I want to dump a dict like {"key": True} and the result must be not {"key": true} but, let's say, {"key": "TRUE"}, it is really hard to force the json.dumps function to do it. I understand that optimizing the json encoder's performance is what led to this inflexible implementation. The perfect solution for extending/overriding the json module would be to move the nested functions _iterencode_list, _iterencode_dict and _iterencode into the JSONEncoder class as static methods. But that could make performance a bit worse. So if we cannot afford it, I would propose to move the function _make_iterencode to JSONEncoder as a static method. This change will not degrade performance, but it will make it possible to override this method in a user's specific Encoder. ---------- components: Library (Lib) files: json_improvement.patch keywords: patch messages: 291959 nosy: Marian Horban 2 priority: normal severity: normal status: open title: json module: it is not possible to override 'true', 'false' values during encoding bool type: enhancement versions: Python 2.7 Added file: http://bugs.python.org/file46816/json_improvement.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 07:03:04 2017 From: report at bugs.python.org (Xiang Zhang) Date: Thu, 20 Apr 2017 11:03:04 +0000 Subject: [New-bugs-announce] [issue30115] test_logging report reference leak Message-ID: <1492686184.18.0.863470787856.issue30115@psf.upfronthosting.co.za> New submission from Xiang Zhang: 0:00:00 [1/1] test_logging beginning 9 repetitions 123456789 ......... test_logging leaked [24, -24, 1, 24] memory blocks, sum=25 test_logging failed in 3 min 15 sec 1 test failed: test_logging Seems d61910c598876788c9b4bf0e116370bbfc5a2f85 is responsible.
---------- messages: 291961 nosy: vinay.sajip, xiang.zhang priority: normal severity: normal status: open title: test_logging report reference leak _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 08:08:52 2017 From: report at bugs.python.org (m.meliani) Date: Thu, 20 Apr 2017 12:08:52 +0000 Subject: [New-bugs-announce] [issue30116] numpy.ndarray.T doesn't change the structure Message-ID: <1492690132.67.0.227049528037.issue30116@psf.upfronthosting.co.za> New submission from m.meliani: The following few lines, I believe, show how numpy.ndarray.T or numpy.ndarray.transpose() don't change the structure of the data, only the way it is displayed. This is sometimes a problem when handling big quantities of data that you need to look at in a certain way, for sorting problems among others. >>> import numpy as np >>> x=np.array([[0,1,2],[1,2,3]]) >>> x=x.T >>> print x [[0 1] [1 2] [2 3]] >>> y=np.array([[0,1],[1,2],[2,3]]) >>> print y [[0 1] [1 2] [2 3]] >>> y.view('i8,i8') array([[(0, 1)], [(1, 2)], [(2, 3)]], dtype=[('f0', '<i8'), ('f1', '<i8')]) >>> x.view('i8,i8') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: new type not compatible with array. ---------- messages: 291967 nosy: m.meliani priority: normal severity: normal status: open title: numpy.ndarray.T doesn't change the structure type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 08:45:21 2017 From: report at bugs.python.org (STINNER Victor) Date: Thu, 20 Apr 2017 12:45:21 +0000 Subject: [New-bugs-announce] [issue30117] test_lib2to3.test_parser.test_all_project_files() fails Message-ID: <1492692321.21.0.476906243501.issue30117@psf.upfronthosting.co.za> New submission from STINNER Victor: The test_lib2to3.test_parser.test_all_project_files() test fails and produces annoying output. See issue #13125. Example from a buildbot: 0:11:09 [398/404] test_lib2to3 passed (39 sec) --- /root/buildarea/3.x.angelico-debian-amd64/build/Lib/lib2to3/tests/data/bom.py 2017-02-11 12:20:43.532000000 +1100 +++ @ 2017-04-20 22:05:37.157911808 +1000 @@ -1,2 +1,2 @@ -?# coding: utf-8 +# coding: utf-8 print "BOM BOOM!" Example by running: ./python -m test -v test_lib2to3 --- test_all_project_files (lib2to3.tests.test_parser.TestParserIdempotency) ... /home/haypo/prog/python/master/Lib/lib2to3/tests/test_parser.py:393: UserWarning: ParseError on file /home/haypo/prog/python/master/Lib/lib2to3/main.py (bad input: type=22, value='=', context=('', (130, 38))) warnings.warn('ParseError on file %s (%s)' % (filepath, err)) /home/haypo/prog/python/master/Lib/lib2to3/tests/test_parser.py:393: UserWarning: ParseError on file /home/haypo/prog/python/master/Lib/lib2to3/tests/pytree_idempotency.py (bad input: type=22, value='=', context=('', (49, 33))) warnings.warn('ParseError on file %s (%s)' % (filepath, err)) --- /home/haypo/prog/python/master/Lib/lib2to3/tests/data/bom.py 2017-02-10 23:10:03.392778645 +0100 +++ @ 2017-04-20 14:32:49.921613096 +0200 @@ -1,2 +1,2 @@ -?# coding: utf-8 +# coding: utf-8 print "BOM BOOM!" expected failure --- The test fails to parse the following code: --- from __future__ import print_function import sys print("WARNING", file=sys.stderr) --- whereas lib2to3 is able to parse the code. It seems like the code uses a lib2to3.tests.support.driver object. Maybe this object lacks the fixer which parses the __future__ imports?
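A quick way to check that suspicion is sketched below; the choice of grammar objects is an assumption, and the real test driver may be built differently:

    from lib2to3 import pygram
    from lib2to3.pgen2 import driver

    code = ('from __future__ import print_function\n'
            'import sys\n'
            'print("WARNING", file=sys.stderr)\n')

    # With the default grammar, "print" is a statement keyword, so the keyword
    # argument file=sys.stderr is rejected -- the same "bad input: ... value='='"
    # error as in the warnings above.
    d = driver.Driver(pygram.python_grammar)
    # d.parse_string(code)  # raises ParseError

    # The grammar without the print statement parses the same code fine.
    d2 = driver.Driver(pygram.python_grammar_no_print_statement)
    tree = d2.parse_string(code)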
The minimum fix would be to make the test quiet: remove the warning, don't log anything. The best would be to fix the test. ---------- components: Tests keywords: easy messages: 291970 nosy: haypo priority: normal severity: normal status: open title: test_lib2to3.test_parser.test_all_project_files() fails versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 11:39:27 2017 From: report at bugs.python.org (Louie Lu) Date: Thu, 20 Apr 2017 15:39:27 +0000 Subject: [New-bugs-announce] [issue30118] Adding unittest for cProfile / profile command line interface Message-ID: <1492702767.19.0.167930409114.issue30118@psf.upfronthosting.co.za> New submission from Louie Lu: Serhiy provided a patch converting the cProfile / profile CLI from optparse to argparse in #18971; it is time to add a unittest for the CLI. I'll add the unittest for it in the coming days. ---------- components: Library (Lib) messages: 291981 nosy: louielu, serhiy.storchaka priority: normal severity: normal status: open title: Adding unittest for cProfile / profile command line interface versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 13:57:20 2017 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 20 Apr 2017 17:57:20 +0000 Subject: [New-bugs-announce] [issue30119] A remote attacker could possibly use this flaw to manipulate an FTP connection opened by a Python application Message-ID: <1492711040.26.0.220875177269.issue30119@psf.upfronthosting.co.za> New submission from Dong-hee Na: It was discovered that the FTP client implementation in the Networking component of Python failed to correctly handle user inputs. A remote attacker could possibly use this flaw to manipulate an FTP connection opened by a Python application if it could make it access a specially crafted FTP URL. See http://blog.blindspotsecurity.com/2017/02/advisory-javapython-ftp-injections.html and https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2017-3533 I have uploaded a patch for this issue. ---------- messages: 291988 nosy: corona10 priority: normal severity: normal status: open title: A remote attacker could possibly use this flaw to manipulate an FTP connection opened by a Python application type: security _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 14:22:37 2017 From: report at bugs.python.org (=?utf-8?b?0JzQuNGF0LDQudC70L4g0JPQsNCy0LXQu9GP?=) Date: Thu, 20 Apr 2017 18:22:37 +0000 Subject: [New-bugs-announce] [issue30120] add new key words to keyword lib Message-ID: <1492712557.12.0.0900736271334.issue30120@psf.upfronthosting.co.za> Changes by Михайло Гавеля : ---------- components: Library (Lib) nosy: Михайло Гавеля
priority: normal severity: normal status: open title: add new key words to keyword lib versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 20 18:43:55 2017 From: report at bugs.python.org (Segev Finer) Date: Thu, 20 Apr 2017 22:43:55 +0000 Subject: [New-bugs-announce] [issue30121] Windows: subprocess debug assertion on failure to execute the process Message-ID: <1492728235.69.0.898513798091.issue30121@psf.upfronthosting.co.za> New submission from Segev Finer: subprocess triggers a debug assertion in the CRT on failure to execute the process due to closing the pipe *handles* in the except clause using os.close rather than .Close() (os.close closes CRT file descriptors and not handles). In addition to that once this is fixed there is also a double free/close since we need to set `self._closed_child_pipe_fds = True` once we closed the handles in _execute_child so they won't also be closed in __init__. To reproduce, do this in a debug build of Python: import subprocess subprocess.Popen('exe_that_doesnt_exist.exe', stdout=subprocess.PIPE) See: https://github.com/python/cpython/pull/1218#discussion_r112550959 ---------- components: Library (Lib), Windows messages: 292002 nosy: Segev Finer, eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows: subprocess debug assertion on failure to execute the process type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 00:03:19 2017 From: report at bugs.python.org (Decorater) Date: Fri, 21 Apr 2017 04:03:19 +0000 Subject: [New-bugs-announce] [issue30122] Added missing things to Windows docs. Message-ID: <1492747399.44.0.402463454209.issue30122@psf.upfronthosting.co.za> New submission from Decorater: I realized the Windows docs lacked some information so I added it. I will try to create an cherry pick for 3.6 and 3.6 as well. Also if desired I could also see if it can be applied to the 2.7 branch as well. ---------- assignee: docs at python components: Documentation messages: 292005 nosy: Decorater, docs at python priority: normal severity: normal status: open title: Added missing things to Windows docs. versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 02:01:20 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 21 Apr 2017 06:01:20 +0000 Subject: [New-bugs-announce] [issue30123] test_venv failed without pip Message-ID: <1492754480.34.0.492184084687.issue30123@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: $ ./python -m test.regrtest -vuall test_venv ... 
====================================================================== FAIL: test_with_pip (test.test_venv.EnsurePipTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython3.5/Lib/test/test_venv.py", line 428, in test_with_pip self.do_test_with_pip(False) File "/home/serhiy/py/cpython3.5/Lib/test/test_venv.py", line 382, in do_test_with_pip self.assertEqual(err, "") AssertionError: '/tmp/tmpxhgghyhm/bin/python: No module named pip\n' != '' - /tmp/tmpxhgghyhm/bin/python: No module named pip ---------------------------------------------------------------------- ---------- components: Tests messages: 292009 nosy: serhiy.storchaka, vinay.sajip priority: normal severity: normal status: open title: test_venv failed without pip type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 05:28:55 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 21 Apr 2017 09:28:55 +0000 Subject: [New-bugs-announce] [issue30124] Fix C aliasing issue in Python/dtoa.c to use strict aliasing on Clang 4.0 Message-ID: <1492766935.79.0.662322517567.issue30124@psf.upfronthosting.co.za> New submission from STINNER Victor: My change 28205b203a4742c40080b4a2b4b2dcd800716edc added -fno-strict-aliasing on clang to fix the compilation of Python/dtoa.c on clang 4.0. But it's only a temporary workaround until dtoa.c is fixed to respect C99 strict aliasing. Strict aliasing allows the compiler to enable more optimization, and so should make Python a little bit faster. It would only fix a regression, before my change Python was already build with strict aliasing More info about the issue: * bpo-30104 * https://bugs.llvm.org//show_bug.cgi?id=31928 ---------- components: Build messages: 292018 nosy: benjamin.peterson, haypo, mark.dickinson priority: normal severity: normal status: open title: Fix C aliasing issue in Python/dtoa.c to use strict aliasing on Clang 4.0 type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 08:23:18 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 21 Apr 2017 12:23:18 +0000 Subject: [New-bugs-announce] [issue30125] test_SEH() of test_ctypes logs "Windows fatal exception: access violation" Message-ID: <1492777398.7.0.77484107928.issue30125@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached PR removes the following scary log. 
http://buildbot.python.org/all/builders/AMD64%20Windows8.1%20Non-Debug%203.x/builds/655/steps/test/logs/stdio Windows fatal exception: access violation Current thread 0x00000b88 (most recent call first): File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\case.py", line 178 in handle File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\case.py", line 733 in assertRaises File "D:\buildarea\3.x.ware-win81-release\build\lib\ctypes\test\test_win32.py", line 47 in test_SEH File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\case.py", line 605 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\case.py", line 653 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 122 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 84 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 122 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 84 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 122 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 84 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 122 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 84 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 122 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\suite.py", line 84 in __call__ File "D:\buildarea\3.x.ware-win81-release\build\lib\unittest\runner.py", line 176 in run File "D:\buildarea\3.x.ware-win81-release\build\lib\test\support\__init__.py", line 1896 in _run_suite File "D:\buildarea\3.x.ware-win81-release\build\lib\test\support\__init__.py", line 1930 in run_unittest File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest.py", line 164 in test_runner File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest.py", line 165 in runtest_inner File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest.py", line 119 in runtest File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest_mp.py", line 71 in run_tests_slave File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 470 in _main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 463 in main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 527 in main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\regrtest.py", line 46 in _main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\regrtest.py", line 50 in File "D:\buildarea\3.x.ware-win81-release\build\lib\runpy.py", line 85 in _run_code File "D:\buildarea\3.x.ware-win81-release\build\lib\runpy.py", line 193 in _run_module_as_main ---------- components: Windows messages: 292030 nosy: haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_SEH() of test_ctypes logs "Windows fatal exception: access violation" versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 08:26:34 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 21 Apr 2017 12:26:34 +0000 Subject: [New-bugs-announce] [issue30126] CheckTraceCallbackContent of test_sqlite3 fails on OS X Tiger Message-ID: 
<1492777594.65.0.679686114743.issue30126@psf.upfronthosting.co.za> New submission from STINNER Victor: I suggest to skip the following test on OS X Tiger, since it fails but I'm not interested to fix it and it seems to be an old SQLite bug which was fixed after Tiger was released (I don't see this failure on any other buildbot). http://buildbot.python.org/all/builders/x86%20Tiger%203.x/builds/569/steps/test/logs/stdio ====================================================================== FAIL: CheckTraceCallbackContent (sqlite3.test.hooks.TraceCallbackTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/sqlite3/test/hooks.py", line 270, in CheckTraceCallbackContent self.assertEqual(traced_statements, queries) AssertionError: Lists differ: ['cre[19 chars]'insert into foo(x) values(1)', 'insert into foo(x) values(1)'] != ['cre[19 chars]'insert into foo(x) values(1)'] First list contains 1 additional elements. First extra element 2: 'insert into foo(x) values(1)' + ['create table foo(x)', 'insert into foo(x) values(1)'] - ['create table foo(x)', - 'insert into foo(x) values(1)', - 'insert into foo(x) values(1)'] ---------------------------------------------------------------------- ---------- components: Tests, macOS messages: 292032 nosy: haypo, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: CheckTraceCallbackContent of test_sqlite3 fails on OS X Tiger versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 09:20:52 2017 From: report at bugs.python.org (Diego Costantini) Date: Fri, 21 Apr 2017 13:20:52 +0000 Subject: [New-bugs-announce] [issue30127] argparse action not correctly describing the right behavior Message-ID: <1492780852.4.0.162539678149.issue30127@psf.upfronthosting.co.za> New submission from Diego Costantini: Here https://docs.python.org/2/library/argparse.html#action we have the following: >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='store_true') >>> parser.add_argument('--bar', action='store_false') >>> parser.add_argument('--baz', action='store_false') >>> parser.parse_args('--foo --bar'.split()) Namespace(bar=False, baz=True, foo=True) baz should be False because omitted. I also tested it. ---------- assignee: docs at python components: Documentation messages: 292044 nosy: Diego Costantini, docs at python priority: normal severity: normal status: open title: argparse action not correctly describing the right behavior type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 10:27:21 2017 From: report at bugs.python.org (Ralph Corderoy) Date: Fri, 21 Apr 2017 14:27:21 +0000 Subject: [New-bugs-announce] [issue30128] xid_start definition for Unicode identifiers refers to xid_continue Message-ID: <1492784841.26.0.847903278881.issue30128@psf.upfronthosting.co.za> New submission from Ralph Corderoy: https://docs.python.org/3/reference/lexical_analysis.html#identifiers has a grammar. identifier ::= xid_start xid_continue* id_start ::= id_continue ::= xid_start ::= xid_continue ::= I struggle to make sense of it unless I remove `xid_continue*' from `xid_start's definition. I suspect it ended up there due to cut and paste. 
---------- assignee: docs at python components: Documentation messages: 292049 nosy: docs at python, ralph.corderoy priority: normal severity: normal status: open title: xid_start definition for Unicode identifiers refers to xid_continue versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 10:27:51 2017 From: report at bugs.python.org (Skip Montanaro) Date: Fri, 21 Apr 2017 14:27:51 +0000 Subject: [New-bugs-announce] [issue30129] functools.partialmethod should look more like what it's impersonating. Message-ID: <1492784871.01.0.595019366474.issue30129@psf.upfronthosting.co.za> New submission from Skip Montanaro: I needed to create a partial method in Python 2.7, so I grabbed functools.partialmethod from a Python 3.5.2 install. For various reasons, one of the reasons I wanted this was to suck in some methods from a delegated class so they appeared in dir() and help() output on the primary class (the one containing the partialmethod objects). Suppose I have class Something: def meth(self, arg1, arg2): "meth doc" return arg1 + arg2 then in the metaclass for another class I construct an attribute (call it "mymeth") which is a partialmethod object. When I (for example), run pydoc, that other class's attribute appears as: mymeth = It would be nice if it at least included the doc string from meth, something like: mymeth = meth doc Even better would be a proper signature: mymeth(self, arg1, arg2) meth doc In my copy of functools.partialmethod, I inserted an extra line in __get__, right after the call to partial(): results.__doc__ = self.func.__doc__ That helps a bit, as I can print("mymeth doc:", inst.mymeth.__doc__) and see mymeth doc: meth doc displayed. That's not enough for help()/pydoc though. I suspect the heavy lifting will have to be done in pydoc.Doc.document(). inspect.isroutine() returns False for functools.partial objects. I also see _signature_get_partial() in inspect.py. That might be the source of the problem. When I create a partialmethod object in my little example, it actually looks like a functools.partial object, not a partialmethod object. It's not clear that this test: if isinstance(partialmethod, functools.partialmethod): in inspect._signature_from_callable() is testing for the correct type. Apologies that I can't easily provide a detailed example. My Python 2.x metaclass example (where I'm smashing methods from one class into another) doesn't work in Python 3.x for some reason, the whole partialmethod thing isn't available in Python 2.x (inspect doesn't know about partialmethod or partial) and it's not really a "hello world"-sized example anyway. I'll beat on things a bit more to try and craft a workable Python 3.x example. ---------- components: Library (Lib) messages: 292050 nosy: skip.montanaro priority: normal severity: normal status: open title: functools.partialmethod should look more like what it's impersonating. 
type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 11:00:36 2017 From: report at bugs.python.org (Alexander Gosselin) Date: Fri, 21 Apr 2017 15:00:36 +0000 Subject: [New-bugs-announce] [issue30130] array.array is not an instance of collections.MutableSequence Message-ID: <1492786836.55.0.150930236704.issue30130@psf.upfronthosting.co.za> New submission from Alexander Gosselin: array.array has all of the methods required by collections.MutableSequence, but: >>> import array >>> import collections >>> isinstance(array.array, collections.MutableSequence) False ---------- messages: 292053 nosy: Alexander Gosselin priority: normal severity: normal status: open title: array.array is not an instance of collections.MutableSequence type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 12:12:01 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 21 Apr 2017 16:12:01 +0000 Subject: [New-bugs-announce] [issue30131] test_logging leaks a "dangling" thread Message-ID: <1492791121.95.0.979158654075.issue30131@psf.upfronthosting.co.za> New submission from STINNER Victor: Example on Windows from AppVeyor: https://ci.appveyor.com/project/python/cpython/build/3.7.0a0.1402 Warning -- threading._dangling was modified by test_logging Before: <_weakrefset.WeakSet object at 0x027CBE30> After: <_weakrefset.WeakSet object at 0x027CBFF0> I also saw this warning on FreeBSD buildbots. ---------- components: Tests messages: 292058 nosy: haypo priority: normal severity: normal status: open title: test_logging leaks a "dangling" thread type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 12:17:40 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 21 Apr 2017 16:17:40 +0000 Subject: [New-bugs-announce] [issue30132] test_distutils leaks a vc140.pdb file Message-ID: <1492791460.36.0.72947186821.issue30132@psf.upfronthosting.co.za> New submission from STINNER Victor: http://buildbot.python.org/all/builders/AMD64%20Windows8%203.x/builds/566/steps/test/logs/stdio Warning -- files was modified by test_distutils Before: [] After: ['vc140.pdb'] ---------- components: Tests, Windows messages: 292060 nosy: haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_distutils leaks a vc140.pdb file type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 17:18:05 2017 From: report at bugs.python.org (Patrick Foley) Date: Fri, 21 Apr 2017 21:18:05 +0000 Subject: [New-bugs-announce] [issue30133] Strings that end with properly escaped backslashes cause error to be thrown in re.search/sub/etc. functions. Message-ID: <1492809485.52.0.0203844135424.issue30133@psf.upfronthosting.co.za> New submission from Patrick Foley: The following code demonstrates: import re text = 'ab\\' exp = re.compile('a') print(re.sub(exp, text, '')) If you remove the backslash(es), the code runs fine. This appears to be specific to the re module and only to strings that end in (even properly escaped) backslashes. 
You could easily receive raw data like this from freehand input sources, so it would be nice not to have to remove trailing backslashes before running a regular expression. ---------- components: Regular Expressions files: sample.py messages: 292079 nosy: Patrick Foley, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Strings that end with properly escaped backslashes cause error to be thrown in re.search/sub/etc. functions. versions: Python 2.7, Python 3.4 Added file: http://bugs.python.org/file46822/sample.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 21 23:49:31 2017 From: report at bugs.python.org (KINEBUCHI Tomohiko) Date: Sat, 22 Apr 2017 03:49:31 +0000 Subject: [New-bugs-announce] [issue30134] BytesWarning is missing from the documents Message-ID: <1492832971.85.0.91707037101.issue30134@psf.upfronthosting.co.za> New submission from KINEBUCHI Tomohiko: In Python 2.6, BytesWarning was added, but a description of that warning is missing from the document, library/exceptions.rst. ---------- assignee: docs at python components: Documentation messages: 292099 nosy: cocoatomo, docs at python priority: normal severity: normal status: open title: BytesWarning is missing from the documents versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 22 00:19:14 2017 From: report at bugs.python.org (Klaus Wolf) Date: Sat, 22 Apr 2017 04:19:14 +0000 Subject: [New-bugs-announce] [issue30135] default value of argument seems to be overwritten Message-ID: <1492834754.69.0.481762450521.issue30135@psf.upfronthosting.co.za> New submission from Klaus Wolf: Two function results differ if the parameter is given explicitly instead of using the given default. (Enclosed example: a small, simple interpreter of the Forth language; both scripts should give the same result, but the first one (variant1) fails because the value from the first pass remains on the stack.) ---------- components: Interpreter Core files: misc_math.7z messages: 292100 nosy: approximately priority: normal severity: normal status: open title: default value of argument seems to be overwritten type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file46825/misc_math.7z _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 22 01:02:58 2017 From: report at bugs.python.org (Louie Lu) Date: Sat, 22 Apr 2017 05:02:58 +0000 Subject: [New-bugs-announce] [issue30136] Add test.support.script_helper to documentation Message-ID: <1492837378.99.0.351852944309.issue30136@psf.upfronthosting.co.za> New submission from Louie Lu: `test.support.script_helper` is not documented in the `test` documentation. It should be added. ---------- assignee: docs at python components: Documentation messages: 292103 nosy: docs at python, louielu priority: normal severity: normal status: open title: Add test.support.script_helper to documentation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 22 10:01:58 2017 From: report at bugs.python.org (Akshay Deogaonkar) Date: Sat, 22 Apr 2017 14:01:58 +0000 Subject: [New-bugs-announce] [issue30137] Equivalent syntax regarding List returns List objects with non-similar list elements.
Message-ID: <1492869718.96.0.983621663084.issue30137@psf.upfronthosting.co.za> New submission from Akshay Deogaonkar: lst = [0,1,2,3,4] print(lst[0:3]) #returns [0,1,2] print(lst[:3]) #returns [0,1,2] # The above two syntaxes return the same lists. print(lst[0:3:-1]) #returns [] print(lst[:3:-1]) #returns [4] # Here is the bug: the expectation was that both syntaxes would return similar lists; however, they didn't! ---------- messages: 292120 nosy: Akshay Deogaonkar priority: normal severity: normal status: open title: Equivalent syntax regarding List returns List objects with non-similar list elements. type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 22 13:24:09 2017 From: report at bugs.python.org (Stephen J. Turnbull) Date: Sat, 22 Apr 2017 17:24:09 +0000 Subject: [New-bugs-announce] [issue30138] Incorrect documentation of replacement of slice of length 0 Message-ID: <1492881849.39.0.723167334866.issue30138@psf.upfronthosting.co.za> New submission from Stephen J. Turnbull: In section 4.6.3. "Mutable Sequence Types" of the current documentation, Note 1 to the table says "[iterable] t must have the same length as the slice it is replacing." This is incorrect in the case of extension: s[len(s):] = t according to the rest of the documentation, as well as experiment. ---------- assignee: docs at python components: Documentation keywords: easy messages: 292127 nosy: docs at python, sjt priority: normal severity: normal status: open title: Incorrect documentation of replacement of slice of length 0 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 22 14:54:20 2017 From: report at bugs.python.org (Horacio Hoyos) Date: Sat, 22 Apr 2017 18:54:20 +0000 Subject: [New-bugs-announce] [issue30139] Fix for issue 8743 not available in python MacOS 3.5.1 Message-ID: <1492887260.31.0.89800718896.issue30139@psf.upfronthosting.co.za> New submission from Horacio Hoyos: Hi all, I was having issues while testing a custom Set implementation using the _collections_abc base MutableSet and found that my issue was apparently resolved with issue 8743. My test is simple: ms = MySetImpl() ms & 'testword' which should fail with TypeError, given that in the 8743 fix the __and__ incorporated a test for isinstance(other, Set). Looking at the _collections_abc.py in my installation (/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/_collections_abc.py) I can not see the changes in the patches submitted for issue 8743. ---------- components: Library (Lib), macOS messages: 292134 nosy: Horacio Hoyos, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Fix for issue 8743 not available in python MacOS 3.5.1 versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 00:21:44 2017 From: report at bugs.python.org (Stephan Hoyer) Date: Sun, 23 Apr 2017 04:21:44 +0000 Subject: [New-bugs-announce] [issue30140] Binary arithmetic does not always call subclasses first Message-ID: <1492921304.69.0.401017078402.issue30140@psf.upfronthosting.co.za> New submission from Stephan Hoyer: We are writing a system for overloading NumPy operations (see PR [1] and design doc [2]) that is designed to copy and extend Python's system for overloading binary operators.
The reference documentation on binary arithmetic [3] states: > Note: If the right operand's type is a subclass of the left operand's type and that subclass provides the reflected method for the operation, this method will be called before the left operand's non-reflected method. This behavior allows subclasses to override their ancestors' operations. However, this isn't actually done if the right operand merely inherits from the left operand's type. In practice, CPython requires that the right operand defines a different method before it defers to it. Note that the behavior is different for comparisons, which defer to subclasses regardless of whether they implement a new method [4]. I think this behavior is a mistake and should be corrected. It is just as useful to write generic binary arithmetic methods that are well defined on subclasses as generic comparison operations. In fact, this is exactly the design pattern we propose for objects implementing special operators like NumPy arrays (see NDArrayOperatorsMixin in [1] and [2]). Here is a simple example of a well-behaved class that implements addition by wrapping its value and returns NotImplemented when the other operand has the wrong type:

    class A:
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if not isinstance(other, A):
                return NotImplemented
            return type(self)(self.value + other.value)
        __radd__ = __add__
        def __repr__(self):
            return f'{type(self).__name__}({self.value!r})'

    class B(A):
        pass

    class C(A):
        def __add__(self, other):
            if not isinstance(other, A):
                return NotImplemented
            return type(self)(self.value + other.value)
        __radd__ = __add__

A does not defer to B: >>> A(1) + B(1) A(2) But it does defer to C, which defines new methods (literally copied/pasted) for __add__/__radd__: >>> A(1) + C(1) C(2) With the current behavior, special operator implementations need to explicitly account for the possibility that they are being called from a subclass by returning NotImplemented. My guess is that this is rarely done, which means that most of these methods are broken when used with subclasses, or subclasses needlessly reimplement these methods. Can we fix this logic for Python 3.7? [1] https://github.com/numpy/numpy/pull/8247 [2] https://github.com/charris/numpy/blob/406bbc652424fff332f49b0d2f2e5aedd8191d33/doc/neps/ufunc-overrides.rst [3] https://docs.python.org/3/reference/datamodel.html#object.__ror__ [4] http://bugs.python.org/issue22052 ---------- messages: 292149 nosy: Stephan Hoyer priority: normal severity: normal status: open title: Binary arithmetic does not always call subclasses first type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 06:30:49 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Sun, 23 Apr 2017 10:30:49 +0000 Subject: [New-bugs-announce] [issue30141] If you forget to call do_handshake, then everything seems to work but hostname is disabled Message-ID: <1492943449.26.0.474412408504.issue30141@psf.upfronthosting.co.za> New submission from Nathaniel Smith: Basically what it says in the title... if you create an SSL object via wrap_socket with do_handshake_on_connect=False, or via wrap_bio, and then forget to call do_handshake and just go straight to sending and receiving data, then the encrypted connection is successfully established and everything seems to work. However, in this mode the hostname is silently *not* checked, so the connection is vulnerable to MITM attacks.
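For contrast, the documented pattern that does get the hostname checked looks like this (a minimal sketch; the host name and port are placeholders taken from the test setup):

    import socket, ssl

    ctx = ssl.create_default_context()   # client context, check_hostname=True
    sock = socket.create_connection(('trio-test-1.example.org', 443))
    ssock = ctx.wrap_socket(sock, server_hostname='trio-test-1.example.org',
                            do_handshake_on_connect=False)
    # Without this explicit call the data still flows, but the hostname check
    # described above is silently skipped.
    ssock.do_handshake()
    ssock.sendall(b'hello')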
(I guess from reading the SSL_read and SSL_write manpages that openssl is just silently doing the handshake automatically -- very helpfully! -- but it's only Python's do_handshake code that knows to check the hostname?) This doesn't affect correctly written programs that follow the documentation and either use do_handshake_on_connect=True (the default for wrap_socket) or explicitly call do_handshake, so it's not a super-scary bug. But IMHO it definitely shouldn't be this easy to silently fail-open. The attached test script sets up a TLS echo server that has a certificate for the host "trio-test-1.example.org" that's signed by a locally trusted CA, and then checks: - connecting to it with do_handshake and expecting the correct hostname: works, as expected - connecting to it with do_handshake and expecting a different hostname: correctly raises an error due to the mismatched hostnames - connecting to it withOUT do_handshake and expecting a different hostname: incorrectly succeeds at connecting, sending data, receiving data, etc., without any error and it checks using both ctx.wrap_socket(..., do_handshake_on_connect=False) and a little custom socket wrapper class defined using ctx.wrap_bio(...). I've only marked 3.5 and 3.6 as affected because those are the only versions I've tested, but I suspect other versions are affected as well. ---------- assignee: christian.heimes components: SSL files: ssl-handshake.zip messages: 292158 nosy: christian.heimes, njs priority: normal severity: normal status: open title: If you forget to call do_handshake, then everything seems to work but hostname is disabled versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file46827/ssl-handshake.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 08:00:24 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 23 Apr 2017 12:00:24 +0000 Subject: [New-bugs-announce] [issue30142] The "callable" fixer doesn't exist Message-ID: <1492948824.23.0.412292082563.issue30142@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The "callable" fixer was removed in dbdf029a5575f6e6ec0140260236963ed7d2c2be, but it is still mentioned in the documentation. https://docs.python.org/3/library/2to3.html#2to3fixer-callable ---------- assignee: docs at python components: 2to3 (2.x to 3.x conversion tool), Documentation messages: 292162 nosy: benjamin.peterson, docs at python, gregory.p.smith, serhiy.storchaka priority: normal severity: normal status: open title: The "callable" fixer doesn't exist versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 08:11:25 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 23 Apr 2017 12:11:25 +0000 Subject: [New-bugs-announce] [issue30143] Using collections ABC from collections.abc rather than collections in 2to3 converted code Message-ID: <1492949485.71.0.267091337094.issue30143@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The proposed patch makes 2to3 generate code that uses the abstract collection classes Sequence and Mapping from collections.abc rather than collections. Since abstract collection classes are now defined in collections.abc and collections contains only aliases for compatibility, this is more idiomatic Python 3 code.
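For illustration, the intended rewrite (the exact fixer output is in the patch; this only shows the before/after import):

    # Before: what 2to3 currently emits for Python 2 sources
    #     from collections import Sequence, Mapping
    # After: what the proposed patch would emit
    from collections.abc import Sequence, Mapping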
---------- components: 2to3 (2.x to 3.x conversion tool) messages: 292163 nosy: benjamin.peterson, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Using collections ABC from collections.abc rather than collections in 2to3 converted code type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 08:30:13 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 23 Apr 2017 12:30:13 +0000 Subject: [New-bugs-announce] [issue30144] Import collections ABC from collections.abc rather than collections Message-ID: <1492950613.95.0.166673742213.issue30144@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Now abstract collection classes are defined in collections.abc rather than collections. collections contains just aliases for compatibility. Importing collections ABC from collections.abc is more idiomatic. And when aliases will be removed from collection this will be the only way. But some code still imports them from collections. Proposed patch makes it importing them from collections.abc. The most basic modules like locale, weakref and pathlib could import them just from _collections_abc for decreasing the startup time, but this is different issue. The patch doesn't touch the collections module itself and its tests, and the _decimal module which imports collections.MutableMapping in C code (changing this would require more rewriting). ---------- components: Library (Lib) messages: 292164 nosy: rhettinger, serhiy.storchaka, stutzbach priority: normal severity: normal stage: patch review status: open title: Import collections ABC from collections.abc rather than collections type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 11:16:42 2017 From: report at bugs.python.org (Mariatta Wijaya) Date: Sun, 23 Apr 2017 15:16:42 +0000 Subject: [New-bugs-announce] [issue30145] Create a How to or Tutorial documentation for asyncio Message-ID: <1492960602.26.0.461554169011.issue30145@psf.upfronthosting.co.za> New submission from Mariatta Wijaya: We could use a How To or a tutorial for asyncio in the docs. ---------- assignee: docs at python components: Documentation, asyncio messages: 292168 nosy: Mariatta, docs at python, yselivanov priority: normal severity: normal status: open title: Create a How to or Tutorial documentation for asyncio versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 14:07:49 2017 From: report at bugs.python.org (Horacio Hoyos) Date: Sun, 23 Apr 2017 18:07:49 +0000 Subject: [New-bugs-announce] [issue30146] Fix for issue 8743 not available in python MacOS 3.6.1 Message-ID: <1492970869.72.0.0063699562935.issue30146@psf.upfronthosting.co.za> New submission from Horacio Hoyos: Hi, attempt 2. My system is MacOs Yosemite (10.10.5), I have installed Python 3.6.1 downloaded from the official Python website. I was having issues while testing a custom Set implementation using the _collections_abc base MutableSet and found that my issue was apparently resolved with issue 8743. From the fix, in the attached script I would expect the and operation between my set implementation and a string to fail with a TypeError, given that string is not an instance of Set. 
However, the error is not raised, i.e. the print statement is executed. From the discussion on issue 8743, I would expect _collections_abc.py to have a test for Set instances, but that is not the case (for example):

def __and__(self, other):
    if not isinstance(other, Iterable):
        return NotImplemented
    return self._from_iterable(value for value in other if value in self)

That is, I was expecting an isinstance(other, Set) somewhere there. In my previous post I was told my python installation was broken. However, I checked the collections_abc.py in my Windows system and it is the same. I am not an expert on the Python build system, but the patch in bug 8743 applies to /Lib/_abcoll.py, but I guess _collections_abc.py is generated somehow. ---------- components: Library (Lib), macOS messages: 292175 nosy: Horacio Hoyos, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Fix for issue 8743 not available in python MacOS 3.6.1 versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 14:39:56 2017 From: report at bugs.python.org (Greg Lindahl) Date: Sun, 23 Apr 2017 18:39:56 +0000 Subject: [New-bugs-announce] [issue30147] change in interface for compiled regex.pattern Message-ID: <1492972796.84.0.355071171235.issue30147@psf.upfronthosting.co.za> New submission from Greg Lindahl: The following script runs fine in Python 3.6 and recently started failing the assertion in 3.7-dev and nightly:

import re
r = re.compile(re.escape('/foo'))
print(r)
print(r.pattern)
assert r.pattern.startswith('\\/')

---------- components: Regular Expressions messages: 292177 nosy: ezio.melotti, mrabarnett, wumpus priority: normal severity: normal status: open title: change in interface for compiled regex.pattern versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 15:26:02 2017 From: report at bugs.python.org (Jussi Pakkanen) Date: Sun, 23 Apr 2017 19:26:02 +0000 Subject: [New-bugs-announce] [issue30148] Pathological regex behaviour Message-ID: <1492975562.82.0.745519478527.issue30148@psf.upfronthosting.co.za> New submission from Jussi Pakkanen: Attached is a script that runs a single regex against one line of text taking over 12 seconds. If you run the exact same regex in Perl it finishes immediately. The slowness has something to do with spaces. If you replace consecutive spaces in the input with one, the evaluation is immediate. This bug was originally discovered here: https://bugzilla.gnome.org/show_bug.cgi?id=781569 ---------- components: Regular Expressions files: retest.py messages: 292181 nosy: ezio.melotti, jpakkane, mrabarnett priority: normal severity: normal status: open title: Pathological regex behaviour type: resource usage Added file: http://bugs.python.org/file46828/retest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 16:18:25 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 23 Apr 2017 20:18:25 +0000 Subject: [New-bugs-announce] [issue30149] inspect.signature() doesn't support partialmethod without explicit self parameter Message-ID: <1492978705.31.0.200843668782.issue30149@psf.upfronthosting.co.za> New submission from Serhiy Storchaka:

>>> import functools, inspect
>>> class A:
...     f = functools.partialmethod((lambda self, x, y, *args: ...), 1)
...
>>> inspect.signature(A.f)
>>> class A:
...     f = functools.partialmethod((lambda *args: ...), 1)
...
>>> inspect.signature(A.f)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/serhiy/py/cpython/Lib/inspect.py", line 3007, in signature
    return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
  File "/home/serhiy/py/cpython/Lib/inspect.py", line 2757, in from_callable
    follow_wrapper_chains=follow_wrapped)
  File "/home/serhiy/py/cpython/Lib/inspect.py", line 2227, in _signature_from_callable
    return sig.replace(parameters=new_params)
  File "/home/serhiy/py/cpython/Lib/inspect.py", line 2780, in replace
    return_annotation=return_annotation)
  File "/home/serhiy/py/cpython/Lib/inspect.py", line 2725, in __init__
    raise ValueError(msg)
ValueError: duplicate parameter name: 'args'

---------- components: Library (Lib) messages: 292184 nosy: ncoghlan, rhettinger, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: inspect.signature() doesn't support partialmethod without explicit self parameter type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 23 18:01:08 2017 From: report at bugs.python.org (Julian Taylor) Date: Sun, 23 Apr 2017 22:01:08 +0000 Subject: [New-bugs-announce] [issue30150] raw debug allocators to not return malloc alignment Message-ID: <1492984868.37.0.0606415089722.issue30150@psf.upfronthosting.co.za> New submission from Julian Taylor: The debug raw allocators do not return the same alignment as malloc. See _PyMem_DebugRawAlloc: https://github.com/python/cpython/blob/master/Objects/obmalloc.c#L1873 The line return p + 2*SST adds 2 * sizeof(size_t) to the pointer returned by malloc. For example, on x32 malloc returns 16 byte aligned memory but size_t is 4 bytes. This makes all memory returned by the debug allocators not aligned to what the system assumes on such platforms. ---------- components: Interpreter Core messages: 292187 nosy: jtaylor priority: normal severity: normal status: open title: raw debug allocators to not return malloc alignment versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 02:55:43 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Mon, 24 Apr 2017 06:55:43 +0000 Subject: [New-bugs-announce] [issue30151] Race condition in use of _PyOS_SigintEvent on windows Message-ID: <1493016943.26.0.351168771487.issue30151@psf.upfronthosting.co.za> New submission from Nathaniel Smith: As pointed out in this stackoverflow answer: http://stackoverflow.com/a/43578450/ and since I seem to be collecting signal-handling bugs these days :-), there's a race condition in how the interpreter uses _PyOS_SigintEvent to allow control-C to break out of functions like time.sleep on Windows. Suppose we have a call to time.sleep(), and the user hits control-C while it's running.
What's supposed to happen is:

- the windows implementation of pysleep in Modules/timemodule.c does ResetEvent(hInterruptEvent)
- then it blocks waiting for the interrupt event to be set *or* the timeout to expire
- the C-level signal handler runs in a new thread, which sets the "hey a signal arrived" flag and then sets the event
- the main thread wakes up because the event is set, and runs PyErr_CheckSignals()
- this notices that the signal has arrived and runs the Python-level handler, all is good

But what can happen instead is:

- before doing CALL_FUNCTION, the eval loop checks to see if any signals have arrived. They haven't.
- then the C implementation of time.sleep starts executing.
- then a signal arrives; the signal handler sets the flag and sets the event
- then the main thread clears the event again
- then it blocks waiting for the event to be set or the timeout to expire. But the C-level signal handler's already done and gone, so we don't realize that the flag is set and we should wake up and run the Python-level signal handler.

The solution is that immediately *after* calling ResetEvent(_PyOS_SigintEvent()) but *before* sleeping, we should call PyErr_CheckSignals(). This catches any signals that arrived before we called ResetEvent, and any signals that arrive after that will cause the event to become set and wake us up, so that eliminates the race condition. This same race-y pattern seems to appear in Modules/timemodule.c, Modules/_multiprocessing/semaphore.c, and Modules/_winapi.c. _winapi.c also handles the event in a weird way that doesn't make sense to me: if the user hits control-C it raises an OSError instead of running the signal handler? OTOH I *think* Modules/_io/winconsoleio.c already handles the race condition correctly, and I don't dare make a guess about whatever Parser/myreadline.c is doing. ---------- components: Interpreter Core messages: 292196 nosy: njs priority: normal severity: normal status: open title: Race condition in use of _PyOS_SigintEvent on windows versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 03:33:50 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 24 Apr 2017 07:33:50 +0000 Subject: [New-bugs-announce] [issue30152] Reduce the number of imports for argparse Message-ID: <1493019230.17.0.769999123229.issue30152@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Since argparse is becoming more widely used as the standard way of parsing command-line arguments, the number of imports involved when importing argparse becomes more important. Proposed patch reduces that number by 10 modules.
Unpatched: $ ./python -c 'import sys; s = set(sys.modules); import argparse; print(len(s), len(sys.modules), len(set(sys.modules) - s)); print(sorted(set(sys.modules) - s))' 35 65 30 ['_collections', '_functools', '_heapq', '_locale', '_operator', '_sre', '_struct', 'argparse', 'collections', 'collections.abc', 'copy', 'copyreg', 'enum', 'functools', 'gettext', 'heapq', 'itertools', 'keyword', 'locale', 'operator', 're', 'reprlib', 'sre_compile', 'sre_constants', 'sre_parse', 'struct', 'textwrap', 'types', 'warnings', 'weakref'] $ ./python -S -c 'import sys; s = set(sys.modules); import argparse; print(len(s), len(sys.modules), len(set(sys.modules) - s)); print(sorted(set(sys.modules) - s))' 23 61 38 ['_collections', '_collections_abc', '_functools', '_heapq', '_locale', '_operator', '_sre', '_stat', '_struct', 'argparse', 'collections', 'collections.abc', 'copy', 'copyreg', 'enum', 'errno', 'functools', 'genericpath', 'gettext', 'heapq', 'itertools', 'keyword', 'locale', 'operator', 'os', 'os.path', 'posixpath', 're', 'reprlib', 'sre_compile', 'sre_constants', 'sre_parse', 'stat', 'struct', 'textwrap', 'types', 'warnings', 'weakref'] Patched: $ ./python -c 'import sys; s = set(sys.modules); import argparse; print(len(s), len(sys.modules), len(set(sys.modules) - s)); print(sorted(set(sys.modules) - s))' 35 55 20 ['_collections', '_functools', '_locale', '_operator', '_sre', 'argparse', 'collections', 'copyreg', 'enum', 'functools', 'itertools', 'keyword', 'operator', 're', 'reprlib', 'sre_compile', 'sre_constants', 'sre_parse', 'types', 'weakref'] $ ./python -S -c 'import sys; s = set(sys.modules); import argparse; print(len(s), len(sys.modules), len(set(sys.modules) - s)); print(sorted(set(sys.modules) - s))' 23 51 28 ['_collections', '_collections_abc', '_functools', '_locale', '_operator', '_sre', '_stat', 'argparse', 'collections', 'copyreg', 'enum', 'errno', 'functools', 'genericpath', 'itertools', 'keyword', 'operator', 'os', 'os.path', 'posixpath', 're', 'reprlib', 'sre_compile', 'sre_constants', 'sre_parse', 'stat', 'types', 'weakref'] The patch defers importing rarely used modules. For example textwrap and gettext are used only for output a help and error messages. The patch also makes argparse itself be imported only when the module is used as a script, not just imported. The patch also replaces importing collections.abc with _collections_abc in some other basic modules (like pathlib), this could allow to avoid importing the collections package if it is not used. Unavoided imports: * functools is used in re for decorating _compile_repl with lru_cache. * collections is used in functools for making CacheInfo a named tuple. * enum is used in re for creating RegexFlag. * types is used in enum for decorating some properties with DynamicClassAttribute. 
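For the modules that can be deferred, the general pattern the patch relies on looks roughly like this (a hand-written sketch, not an excerpt from the patch; the names are invented for illustration):

def main():
    # Deferred import: the cost of argparse (and the modules it drags in,
    # such as textwrap and gettext) is only paid when the file actually
    # runs as a script, not when it is merely imported as a module.
    import argparse

    parser = argparse.ArgumentParser(description='example tool')
    parser.add_argument('name')
    args = parser.parse_args()
    print('Hello,', args.name)

if __name__ == '__main__':
    main()

With this layout, importing the module costs nothing for argparse; the import only happens when main() actually runs.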
---------- components: Library (Lib) messages: 292200 nosy: bethard, haypo, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Reduce the number of imports for argparse type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 06:29:30 2017 From: report at bugs.python.org (=?utf-8?b?SmVzw7pzIENlYSBBdmnDs24=?=) Date: Mon, 24 Apr 2017 10:29:30 +0000 Subject: [New-bugs-announce] [issue30153] lru_cache should support invalidations Message-ID: <1493029770.27.0.722684328954.issue30153@psf.upfronthosting.co.za> New submission from Jesús Cea Avión: I think that "functools.lru_cache()" should have the ability to "invalidate" a (possibly cached) value. Something like:

@functools.lru_cache()
def func(param):
    ...

func.invalidate(PARAM)  # discard this cached call, or ignore if not cached

---------- messages: 292216 nosy: jcea priority: normal severity: normal stage: needs patch status: open title: lru_cache should support invalidations type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 06:31:14 2017 From: report at bugs.python.org (Martijn Pieters) Date: Mon, 24 Apr 2017 10:31:14 +0000 Subject: [New-bugs-announce] [issue30154] subprocess.run with stderr connected to a pipe won't timeout when killing a never-ending shell commanad Message-ID: <1493029874.23.0.365021685695.issue30154@psf.upfronthosting.co.za> New submission from Martijn Pieters: You can't time out a process tree that includes a never-ending process, *and* which redirects stderr:

cat >test.sh<<EOF
cat /dev/random > /dev/null  # never-ending
EOF
chmod +x test.sh
python -c "import subprocess; subprocess.run(['./test.sh'], stderr=subprocess.PIPE, timeout=3)"

This hangs forever; the timeout kicks in, but then the kill on the child process fails and Python forever tries to read stderr, which won't produce data. See https://github.com/python/cpython/blob/v3.6.1/Lib/subprocess.py#L407-L410. The `sh` process is killed, but listed as a zombie process and the `cat` process has migrated to parent id 1:

^Z
bg
jobs -lr
[2]- 21906 Running bin/python -c "import subprocess; subprocess.run(['./test.sh'], stderr=subprocess.PIPE, timeout=3)" &
pstree 21906
-+= 21906 mjpieters bin/python -c import subprocess; subprocess.run(['./test.sh'], stderr=subprocess.PIPE, timeout=3)
 \--- 21907 mjpieters (sh)
ps -j | grep 'cat /dev/random'
mjpieters 24706 1 24704 0 1 R s003 0:26.54 cat /dev/random
mjpieters 24897 99591 24896 0 2 R+ s003 0:00.00 grep cat /dev/random

Killing Python at that point leaves the `cat` process running indefinitely.
Replace the `cat /dev/random > /dev/null` line with `sleep 10`, and the `subprocess.run()` call returns after 10+ seconds:

cat >test.sh<<EOF
sleep 10
EOF
time python -c "import subprocess; subprocess.run(['./test.sh'], stderr=subprocess.PIPE, timeout=3)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.6/subprocess.py", line 403, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.6/subprocess.py", line 707, in __init__
    restore_signals, start_new_session)
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.6/subprocess.py", line 1326, in _execute_child
    raise child_exception_type(errno_num, err_msg)
OSError: [Errno 8] Exec format error

real 0m12.326s
user 0m0.041s
sys 0m0.018s

When you redirect stdin instead, `process.communicate()` does return, but the `cat` subprocess runs on indefinitely nonetheless; only the `sh` process was killed. Is this something subprocess.run should handle better (perhaps by adding in a second timeout poll and a terminate())? Or should the documentation be updated to warn about this behaviour instead (with suitable advice on how to write a subprocess that can be killed properly). ---------- components: Library (Lib) messages: 292217 nosy: mjpieters priority: normal severity: normal status: open title: subprocess.run with stderr connected to a pipe won't timeout when killing a never-ending shell commanad type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 11:27:26 2017 From: report at bugs.python.org (Anthony Tuininga) Date: Mon, 24 Apr 2017 15:27:26 +0000 Subject: [New-bugs-announce] [issue30155] Add ability to get/set tzinfo on datetime instances in C API Message-ID: <1493047646.24.0.156322797242.issue30155@psf.upfronthosting.co.za> New submission from Anthony Tuininga: Right now there is no documented way to create a datetime instance with a tzinfo instance. The documented macros all hard code the value Py_None for the tzinfo parameter. Using the PyObject_Call() method instead of the macro for creating a datetime instance is ~5x slower. In addition, there is no macro or method for getting the tzinfo from an existing datetime instance. Perhaps creating DATE_GET_TZINFO and TIME_GET_TZINFO would be acceptable? The enhancement 10381 (http://bugs.python.org/issue10381) would also be needed. I can provide a GitHub PR if that would be helpful. I first want to make sure that such an effort would be appreciated!
---------- components: Library (Lib) messages: 292230 nosy: atuining priority: normal severity: normal status: open title: Add ability to get/set tzinfo on datetime instances in C API type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 13:43:47 2017 From: report at bugs.python.org (Oren Tirosh) Date: Mon, 24 Apr 2017 17:43:47 +0000 Subject: [New-bugs-announce] [issue30156] PYTHONDUMPREFS segfaults on exit Message-ID: <1493055827.6.0.962682353128.issue30156@psf.upfronthosting.co.za> New submission from Oren Tirosh: Reproduce: Py_DEBUG build

PYTHONDUMPREFS=1 ./python -c pass
(big dump of reference information)
Segmentation fault

git-bisected to commit 7822f151b68e40376af657d267ff774439d9adb9 ---------- components: Interpreter Core messages: 292232 nosy: orent, serhiy.storchaka priority: normal severity: normal status: open title: PYTHONDUMPREFS segfaults on exit type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 24 21:51:06 2017 From: report at bugs.python.org (Jake Davis) Date: Tue, 25 Apr 2017 01:51:06 +0000 Subject: [New-bugs-announce] [issue30157] csn.Sniffer.sniff() regex error Message-ID: <1493085066.27.0.612102662908.issue30157@psf.upfronthosting.co.za> New submission from Jake Davis: Line 220 of Lib/csv.py has an extra `>` in the first group: r'(?P<delim>>[^\w\n"\']) ---------- components: Library (Lib) messages: 292249 nosy: jcdavis1983 priority: normal pull_requests: 1389 severity: normal status: open title: csn.Sniffer.sniff() regex error versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 05:58:19 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 25 Apr 2017 09:58:19 +0000 Subject: [New-bugs-announce] [issue30158] Deprecation warnings emitted in test_importlib Message-ID: <1493114299.05.0.756941591425.issue30158@psf.upfronthosting.co.za> New submission from Serhiy Storchaka:

$ ./python -We -m test.regrtest -v test_importlib
...
======================================================================
ERROR: test_find_module (test.test_importlib.test_abc.Frozen_MetaPathFinderDefaultsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Lib/test/test_importlib/test_abc.py", line 160, in test_find_module
    self.assertIsNone(self.ins.find_module('something', None))
  File "/home/serhiy/py/cpython/Lib/test/test_importlib/test_abc.py", line 151, in find_module
    return super().find_module(fullname, path)
  File "/home/serhiy/py/cpython/Lib/importlib/abc.py", line 72, in find_module
    stacklevel=2)
DeprecationWarning: MetaPathFinder.find_module() is deprecated since Python 3.4 in favor of MetaPathFinder.find_spec()(available since 3.4)
...
---------- components: Tests messages: 292259 nosy: brett.cannon, serhiy.storchaka priority: normal severity: normal status: open title: Deprecation warnings emitted in test_importlib type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 08:31:00 2017 From: report at bugs.python.org (=?utf-8?b?0JTQuNC70Y/QvSDQn9Cw0LvQsNGD0LfQvtCy?=) Date: Tue, 25 Apr 2017 12:31:00 +0000 Subject: [New-bugs-announce] [issue30159] gdb autoloading python-gdb.py Message-ID: <1493123460.02.0.558737612722.issue30159@psf.upfronthosting.co.za> New submission from Дилян Палаузов: Please install python-gdb.py in $(datarootdir)/gdb/auto-load/$(libdir)/libpython3.5m.so.1.0-gdb.py during "make install", so that programs linked towards libpython3.5m.so.1.0 will auto-load the -gdb.py script, when debugged. Likewise for the other gdb versions. An alternative to achieve the same effect is to put python-gdb.py in a .debug_gdb_scripts section (https://sourceware.org/gdb/onlinedocs/gdb/dotdebug_005fgdb_005fscripts-section.html#dotdebug_005fgdb_005fscripts-section), but I don't know if $(strip) removes it. ---------- messages: 292260 nosy: dilyan.palauzov priority: normal severity: normal status: open title: gdb autoloading python-gdb.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 10:44:44 2017 From: report at bugs.python.org (Mike) Date: Tue, 25 Apr 2017 14:44:44 +0000 Subject: [New-bugs-announce] [issue30160] BaseHTTPRequestHandler.wfile: supported usage unclear Message-ID: <1493131484.95.0.842958745675.issue30160@psf.upfronthosting.co.za> New submission from Mike: The documentation for BaseHTTPRequestHandler explicitly prohibits protocol violations when writing to the `wfile` stream:

> BaseHTTPRequestHandler has the following instance variables:
>
> [...]
>
> **`wfile`**
>
> > Contains the output stream for writing a response back to the client.
> > Proper adherence to the HTTP protocol must be used when writing to this
> > stream.

I am interested in testing web browser behavior in response to protocol violations, and my initial interpretation of this text (and the term "must" in particular) is that such conditions are not guaranteed to be achievable with this module. However, my colleague believes the text is simply intended to communicate that there is no safety mechanism in place, and that protocol violations will not be corrected. [1] Local testing and a quick reading of the source tend to confirm the latter interpretation, but this may simply be coincidental and not necessarily stable behavior. If it is in fact stable, then I would like to request a modification to the documentation. Changing the word "must" to "should" would help, although it might be better to be more explicit--something like, "Bytes are transmitted 'as-is'; HTTP protocol violations will not be corrected." Thanks!
[1] https://github.com/w3c/web-platform-tests/issues/5668 ---------- assignee: docs at python components: Documentation messages: 292263 nosy: docs at python, jugglinmike priority: normal severity: normal status: open title: BaseHTTPRequestHandler.wfile: supported usage unclear _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 12:19:38 2017 From: report at bugs.python.org (Charles Cazabon) Date: Tue, 25 Apr 2017 16:19:38 +0000 Subject: [New-bugs-announce] [issue30161] Using `with` statement causes dict to start papering over attribute errors Message-ID: <1493137178.06.0.0316334052023.issue30161@psf.upfronthosting.co.za> New submission from Charles Cazabon: This is a weird one. I've reproduced it with 3 versions of 2.7, including the latest 2.7.13. I didn't find an open bug about this, but I had difficulty crafting a search string for it, so I may have missed something. Basically, using a `with` statement (maybe any such statement, but using an open file definitely does it, even when I do nothing with it) causes the built-in dict class to stop raising AttributeErrors, which can result in odd bugs. Example:

Python 2.7.13 (default, Apr 25 2017, 10:12:36) [GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> with sys.stderr as foo:
...     pass
...
>>> {}.nosuchattribute
>>> {}.nosuchattribute is None
>>>

I haven't tried the latest 3.x, but it's definitely still there in 3.2.3:

Python 3.2.3 (default, Nov 17 2016, 01:04:00) [GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> with sys.stderr as foo:
...     pass
...
>>> {}.nosuchattribute
>>> {}.nosuchattribute is None
>>>

---------- components: Interpreter Core messages: 292270 nosy: charlesc priority: normal severity: normal status: open title: Using `with` statement causes dict to start papering over attribute errors type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 12:51:52 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 25 Apr 2017 16:51:52 +0000 Subject: [New-bugs-announce] [issue30162] Add _PyTuple_Empty and make PyTuple_New(0) never failing Message-ID: <1493139112.34.0.200828606997.issue30162@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch makes the empty tuple be allocated in static memory rather than dynamic memory, exposes a reference to it as _PyTuple_Empty, and makes PyTuple_New(0) never raise exceptions. This allows simplifying the code. There is no longer a need to call PyTuple_New(0), check its result for errors, and clean it up after use; you can just use a borrowed reference, _PyTuple_Empty. _PyTuple_Empty is for CPython internal use only.
---------- components: Interpreter Core messages: 292274 nosy: haypo, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add _PyTuple_Empty and make PyTuple_New(0) never failing type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 25 22:33:05 2017 From: report at bugs.python.org (Louie Lu) Date: Wed, 26 Apr 2017 02:33:05 +0000 Subject: [New-bugs-announce] [issue30163] argparse mx_group is required, when action value equal default will be ignore Message-ID: <1493173985.7.0.845758399817.issue30163@psf.upfronthosting.co.za> New submission from Louie Lu: When adding mutually exclusive group and required is True, and the group argument has default value. If we type its default value, argparse will ignore the input and return `argument is required`

------- PoC --------
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-v', type=int, default=10)

print(parser.parse_args())
-----

$ python tests.py -v 10
usage: tests.py [-h] -v V
tests.py: error: one of the arguments -v is required

$ python tests.py -v 11
Namespace(v=11)

---------- components: Library (Lib) messages: 292293 nosy: louielu priority: normal severity: normal status: open title: argparse mx_group is required, when action value equal default will be ignore versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 00:50:03 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 26 Apr 2017 04:50:03 +0000 Subject: [New-bugs-announce] [issue30164] Testing FTP support in urllib shouldn't use Debian FTP server Message-ID: <1493182203.15.0.353510107498.issue30164@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: test_urllib2net.py uses ftp://ftp.debian.org/ for testing FTP support in urllib. But Debian just announced shutting down its public FTP services. https://lists.debian.org/debian-announce/2017/msg00001.html ---------- components: Tests messages: 292299 nosy: orsenthil, serhiy.storchaka priority: normal severity: normal status: open title: Testing FTP support in urllib shouldn't use Debian FTP server type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 02:23:51 2017 From: report at bugs.python.org (Gregory P. Smith) Date: Wed, 26 Apr 2017 06:23:51 +0000 Subject: [New-bugs-announce] [issue30165] faulthandler acquires lock from signal handler, can deadlock while crashing Message-ID: <1493187831.24.0.928802817432.issue30165@psf.upfronthosting.co.za> New submission from Gregory P. Smith: https://github.com/python/cpython/blob/master/Modules/faulthandler.c#L240 faulthandler_dump_traceback() is called from the signal handler faulthandler_fatal_error() which needs to be async signal safe and only call async signal safe things... but faulthandler_dump_traceback calls PyGILState_GetThisThreadState() which ultimately calls thread.c's find_key() which acquires a lock: https://github.com/python/cpython/blob/master/Python/thread.c#L208 (*and* calls malloc!) This sometimes leads to a deadlock when the process is crashing, handled via faulthandler, instead of a crash with stacktrace information printed.
That is the opposite of the happy debugging experience it is intended to provide. The _Py_DumpTracebackThreads() code that this calls also calls the same offending function, despite having comments alluding to how it is called from within a signal handler. https://github.com/python/cpython/blob/master/Python/traceback.c#L754 This is a crashing exception. Rather than ever deadlock, we should do potentially dangerous things (we're already crashing!). Most of the time we'll be able to get and display useful information. On the occasions something bad happens as a result, at least the message printed to stderr before we started trying to do bad things will give a hint as to why the crash reporter crashed. I _believe_ we always want to use _PyThreadState_UncheckedGet() from the signal handler for the entire codepath. Effectively a simple read from TLS. No guarantees possible about the thread state list not in an intermediate state which will trip us up when dumping, but we could never guarantee that anyways. note: I saw https://bugs.python.org/issue23886 but it only seems quasi related though it is also about getting the thread state. ---------- assignee: haypo components: Extension Modules messages: 292307 nosy: gregory.p.smith, haypo priority: normal severity: normal stage: needs patch status: open title: faulthandler acquires lock from signal handler, can deadlock while crashing type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 04:34:19 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 26 Apr 2017 08:34:19 +0000 Subject: [New-bugs-announce] [issue30166] Import command-line parsing modules only when needed Message-ID: <1493195659.65.0.461959154112.issue30166@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Berker suggested moving this part from issue30152 to its own PR. When the file can be imported as a module and run as a script it is worth to make command-line parsing modules (getopt, optparse, argparse) be imported only when they are used, i.e. when the file is run as a script. Most of the stdlib modules already do this. Proposed patch moves imports of command-line parsing modules and some other modules used only when the module is run to the main() function or to the branch executed only if __name__ == "__main__". It doesn't change scripts and files that are purposed to be used only for running (__main__.py, main.py). ---------- components: Library (Lib) messages: 292319 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Import command-line parsing modules only when needed type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 04:53:33 2017 From: report at bugs.python.org (=?utf-8?q?Andr=C3=A9_Anjos?=) Date: Wed, 26 Apr 2017 08:53:33 +0000 Subject: [New-bugs-announce] [issue30167] site.main() does not work on Python 3.6 and superior Message-ID: <1493196813.64.0.567701209744.issue30167@psf.upfronthosting.co.za> New submission from André Anjos: Apparently, "import site; site.main()" does not seem to work anymore on Python 3.6 and superior. The reason is a change in the behavior of "os.path.abspath(None)".
Before Python 3.6, it used to report an AttributeError which is properly caught inside "site.abs_paths" (see: https://github.com/python/cpython/blob/master/Lib/site.py#L99), making it ignore __main__, one of sys.modules, which has __file__ and __cached__ set to None. With Python 3.6 and superior, os.path.abspath(None) reports a TypeError, which makes calling "site.main()" raise an exception and stop. How to reproduce: On python 3.6 or superior, do "import site; site.main()". Expected behavior: Exception is properly caught and treated inside "site.abs_paths", ignoring modules in which __file__ and/or __cached__ are set to None. ---------- components: Library (Lib) messages: 292325 nosy: anjos priority: normal severity: normal status: open title: site.main() does not work on Python 3.6 and superior type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 08:44:00 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Wed, 26 Apr 2017 12:44:00 +0000 Subject: [New-bugs-announce] [issue30168] Class Logger is unindented in the documentation. Message-ID: <1493210640.98.0.578541878256.issue30168@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Currently, `Logger` in `logging.rst` doesn't have an indent after `.. class:: Logger`. This causes the formatting for the specific section to look somewhat unexpected [1]. I've already created a PR that indents the methods/attributes accordingly. After @bitdancer's request, created this issue to get feedback from vinay if this was done on purpose. [1]: https://docs.python.org/3/library/logging.html#logging.Logger ---------- assignee: docs at python components: Documentation messages: 292336 nosy: Jim Fasarakis-Hilliard, docs at python, vinay.sajip priority: normal severity: normal status: open title: Class Logger is unindented in the documentation. 
versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 08:54:26 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 12:54:26 +0000 Subject: [New-bugs-announce] [issue30169] test_multiprocessing_spawn crashed on AMD64 Windows8.1 Non-Debug 3.x buildbot Message-ID: <1493211266.82.0.263574668344.issue30169@psf.upfronthosting.co.za> New submission from STINNER Victor: http://buildbot.python.org/all/builders/AMD64%20Windows8.1%20Non-Debug%203.x/builds/670/steps/test/logs/stdio 0:05:37 [255/404/1] test_multiprocessing_spawn crashed (Exit code 3221225477) Windows fatal exception: access violation Current thread 0x00001644 (most recent call first): File "D:\buildarea\3.x.ware-win81-release\build\lib\test\_test_multiprocessing.py", line 3997 in ManagerMixin File "D:\buildarea\3.x.ware-win81-release\build\lib\test\_test_multiprocessing.py", line 3988 in File "", line 205 in _call_with_frames_removed File "", line 679 in exec_module File "", line 655 in _load_unlocked File "", line 950 in _find_and_load_unlocked File "", line 961 in _find_and_load File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_multiprocessing_spawn.py", line 2 in File "", line 205 in _call_with_frames_removed File "", line 679 in exec_module File "", line 655 in _load_unlocked File "", line 950 in _find_and_load_unlocked File "", line 961 in _find_and_load File "", line 978 in _gcd_import File "D:\buildarea\3.x.ware-win81-release\build\lib\importlib\__init__.py", line 127 in import_module File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest.py", line 152 in runtest_inner File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest.py", line 119 in runtest File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\runtest_mp.py", line 71 in run_tests_slave File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 470 in _main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 463 in main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\libregrtest\main.py", line 527 in main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\regrtest.py", line 46 in _main File "D:\buildarea\3.x.ware-win81-release\build\lib\test\regrtest.py", line 50 in File "D:\buildarea\3.x.ware-win81-release\build\lib\runpy.py", line 85 in _run_code File "D:\buildarea\3.x.ware-win81-release\build\lib\runpy.py", line 193 in _run_module_as_main Note: same crash in test_multiprocessing_fork, even if Windows has no os.fork(): the crash occurs at lib\test\test_multiprocessing_fork.py:2 "import test._test_multiprocessing", before check if os.fork() exists. 
---------- components: Library (Lib), Tests, Windows messages: 292337 nosy: davin, haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_multiprocessing_spawn crashed on AMD64 Windows8.1 Non-Debug 3.x buildbot type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 09:17:22 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 13:17:22 +0000 Subject: [New-bugs-announce] [issue30170] "tests may fail, unable to create temporary directory" warning on buildbot: add a cleanup step to buildbots Message-ID: <1493212642.62.0.333467089335.issue30170@psf.upfronthosting.co.za> New submission from STINNER Victor: On buildbots, it's common to see such warning: 0:32:04 [269/404] test_property passed -- running: test_multiprocessing_spawn (74 sec) D:\buildarea\3.x.bolen-windows10\build\lib\test\support\__init__.py:1012: RuntimeWarning: tests may fail, unable to create temporary directory 'D:\\buildarea\\3.x.bolen-windows10\\build\\build\\test_python_1048': [WinError 183] Cannot create a file when that file already exists: 'D:\\buildarea\\3.x.bolen-windows10\\build\\build\\test_python_1048' with temp_dir(path=name, quiet=quiet) as temp_path: running: test_cmd_line_script (30 sec), test_multiprocessing_spawn (104 sec) I also got it *sometimes*. It took me months to understand where it does come from. It's quite stupid in fact: temporary directories are not removed if a test does crash (ex: segfault). Later, if a test process has the same PID than the crashed process, you get the warning. I suggest to add a "clean" step on buildbots to first remove old "test_python_*" directories leaked by previous runs. First, I wanted to add such cleanup in regrtest directly, but it's common that I run two main regrtest processes in parallel, and I would like to keep this feature. If regrtest starts by removing test_python_*: it will break currently running tests. I'm not 100% confident that the warning is caused by previous runs, but I think that it's worth it to try to cleanup to check if it's case ;-) ---------- components: Tests keywords: buildbot messages: 292340 nosy: haypo, serhiy.storchaka, zach.ware priority: normal severity: normal status: open title: "tests may fail, unable to create temporary directory" warning on buildbot: add a cleanup step to buildbots versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 09:52:50 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 13:52:50 +0000 Subject: [New-bugs-announce] [issue30171] Emit ResourceWarning in multiprocessing Queue destructor Message-ID: <1493214770.49.0.943228970697.issue30171@psf.upfronthosting.co.za> New submission from STINNER Victor: A multiprocessing Queue object managers multiple resources: * a multiprocessing Pipe * a thread * (a lock and a semaphore) If a Queue is not cleaned up properly, your application may leak many resources. Try attached queue_leak.py to see an example "leaking a thread". I suggest to emit a ResourceWarning warning in Queue destrutor. I don't know what should be the test to decide if a warning must be emitted? * if the queue wasn't closed yet? * if the thread is alive? * if the queue wasn't closed yet and/or the thread is alive? 
(my favorite choice) Other examples of objects emitting ResourceWarning: * io files: io.FileIO, io.TextIOWrapper, etc. * socket.socket * subprocess.Popen: I recently added a ResourceWarning on that one * asyncio transports and event loops ---------- components: Library (Lib) files: queue_leak.py messages: 292346 nosy: davin, haypo, serhiy.storchaka priority: normal severity: normal status: open title: Emit ResourceWarning in multiprocessing Queue destructor type: resource usage versions: Python 3.7 Added file: http://bugs.python.org/file46830/queue_leak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 10:43:35 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 14:43:35 +0000 Subject: [New-bugs-announce] [issue30172] test_tools takes longer than 5 minutes on some buildbots Message-ID: <1493217815.35.0.304443202168.issue30172@psf.upfronthosting.co.za> New submission from STINNER Victor: On x86-64 El Capitan 3.x buildbot, test_tools takes longer than 5 minutes, whereas the overall test suite took 31 min. Is someone wrong in test_tools? http://buildbot.python.org/all/builders/x86-64%20El%20Capitan%203.x/builds/92/steps/test/logs/stdio Run tests in parallel using 2 child processes ... 10 slowest tests: - test_tools: 5 min 3 sec - test_tokenize: 4 min 52 sec - test_multiprocessing_spawn: 4 min 4 sec - test_datetime: 3 min 32 sec - test_lib2to3: 3 min 14 sec - test_mmap: 2 min 43 sec - test_multiprocessing_forkserver: 2 min 11 sec - test_multiprocessing_fork: 1 min 58 sec - test_io: 1 min 54 sec - test_subprocess: 1 min 11 sec ... Total duration: 31 min 20 sec ... ---------- components: Tests keywords: buildbot messages: 292355 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: test_tools takes longer than 5 minutes on some buildbots type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 10:50:09 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 14:50:09 +0000 Subject: [New-bugs-announce] [issue30173] x86 Windows7 3.x buildbot has the issue #26624 bug Message-ID: <1493218209.79.0.825922715699.issue30173@psf.upfronthosting.co.za> New submission from STINNER Victor: It looks like the x86 Windows7 3.x buildbot has the issue #26624 bug: test__locale hangs. This buildbot slave is managed by David Bolen. @David: please see: * https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk * https://bugs.python.org/issue26624#msg270695 http://buildbot.python.org/all/builders/x86%20Windows7%203.x/builds/549/steps/compile/logs/stdio ValidateUcrtbase: setlocal set PYTHONPATH=D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\Lib "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\PCBuild\win32\python_d.exe" "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\PC\validate_ucrtbase.py" ucrtbased C:\Windows\system32\ucrtbased.dll is version 10.0.10240.16384 WARN: ucrtbased contains known issues. Please update the Windows 10 SDK. See: http://bugs.python.org/issue27705 https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk http://buildbot.python.org/all/builders/x86%20Windows7%203.x/builds/549/steps/test/logs/stdio 0:58:28 [335/404] test_base64 passed running: test_venv (56 sec), test__locale (30 sec) ... 
1:07:49 [403/404] test_asyncgen passed -- running: test__locale (562 sec) command timed out: 3600 seconds without output running ['Tools\\buildbot\\test.bat', '-j2'], attempting to kill running: test__locale (592 sec) ... running: test__locale (4162 sec) 2:07:54 [404/404/1] test__locale crashed (Exit code 1) program finished with exit code 1 elapsedTime=7680.676000 ---------- components: Tests, Windows keywords: buildbot messages: 292356 nosy: db3l, haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: x86 Windows7 3.x buildbot has the issue #26624 bug versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 11:50:03 2017 From: report at bugs.python.org (Jelle Zijlstra) Date: Wed, 26 Apr 2017 15:50:03 +0000 Subject: [New-bugs-announce] [issue30174] Duplicate code in pickletools.py Message-ID: <1493221803.92.0.461723845799.issue30174@psf.upfronthosting.co.za> New submission from Jelle Zijlstra: The bytes1 ArgumentDescriptor is duplicated in pickletools.py. ---------- messages: 292364 nosy: Jelle Zijlstra, alexandre.vassalotti priority: normal pull_requests: 1408 severity: normal status: open title: Duplicate code in pickletools.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 11:50:28 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 26 Apr 2017 15:50:28 +0000 Subject: [New-bugs-announce] [issue30175] Random test_imaplib.test_logincapa_with_client_certfile failure on x86 Gentoo Installed with X 3.x Message-ID: <1493221828.31.0.979573561746.issue30175@psf.upfronthosting.co.za> New submission from STINNER Victor: http://buildbot.python.org/all/builders/x86%20Gentoo%20Installed%20with%20X%203.x/builds/593/steps/test/logs/stdio ====================================================================== ERROR: test_logincapa_with_client_certfile (test.test_imaplib.RemoteIMAP_SSLTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/test/test_imaplib.py", line 972, in test_logincapa_with_client_certfile certfile=CERTFILE) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/imaplib.py", line 1280, in __init__ IMAP4.__init__(self, host, port) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/imaplib.py", line 197, in __init__ self.open(host, port) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/imaplib.py", line 1293, in open IMAP4.open(self, host, port) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/imaplib.py", line 294, in open self.sock = self._create_socket() File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/imaplib.py", line 1285, in _create_socket server_hostname=self.host) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/ssl.py", line 401, in wrap_socket _context=self, _session=session) File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/ssl.py", line 808, in __init__ self.do_handshake() File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/ssl.py", line 1061, in do_handshake self._sslobj.do_handshake() File 
"/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/ssl.py", line 683, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:749) ---------- components: Tests messages: 292365 nosy: haypo priority: normal severity: normal status: open title: Random test_imaplib.test_logincapa_with_client_certfile failure on x86 Gentoo Installed with X 3.x versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 12:10:21 2017 From: report at bugs.python.org (Xiang Zhang) Date: Wed, 26 Apr 2017 16:10:21 +0000 Subject: [New-bugs-announce] [issue30176] curses attribute constants list is incomplete Message-ID: <1493223021.33.0.50678717528.issue30176@psf.upfronthosting.co.za> Changes by Xiang Zhang : ---------- assignee: docs at python components: Documentation nosy: berker.peksag, docs at python, haypo, xiang.zhang priority: normal severity: normal stage: patch review status: open title: curses attribute constants list is incomplete versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 12:19:45 2017 From: report at bugs.python.org (Michael Shuffett) Date: Wed, 26 Apr 2017 16:19:45 +0000 Subject: [New-bugs-announce] [issue30177] pathlib.resolve(strict=False) only includes first child Message-ID: <1493223585.29.0.982122563762.issue30177@psf.upfronthosting.co.za> New submission from Michael Shuffett: According to the documentation https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve If strict is False, the path is resolved as far as possible and any remainder is appended without checking whether it exists. The current behavior is not consistent with this, and only appends the first remainder. For example: If we have an empty '/tmp' directory Path('/tmp/foo').resolve() and Path('/tmp/foo/bar').resolve() both result in Path('/tmp/foo') but Path('/tmp/foo/bar').resolve() should result in Path('/tmp/foo/bar') ---------- components: Library (Lib) messages: 292369 nosy: mshuffett priority: normal severity: normal status: open title: pathlib.resolve(strict=False) only includes first child type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 13:13:34 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Wed, 26 Apr 2017 17:13:34 +0000 Subject: [New-bugs-announce] [issue30178] Indent methods and attributes of MimeTypes class Message-ID: <1493226814.74.0.770717289652.issue30178@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Similar to #30168 opened earlier. The MimeTypes class's methods and attributes aren't indented and the resulting documentation is not indented and duplicates the class name. Didn't find anything that might indicate this was intentional when trying to blame this change. It seems to have happened at some point between 2.6 [1] and 2.7 [2], though, where the class directive was moved to the MimeTypes Objects section. Also didn't find an expert for this module in the index. Proposed change just indents these. 
[1]: https://github.com/python/cpython/blame/2.6/Doc/library/mimetypes.rst [2]: https://github.com/python/cpython/blame/2.7/Doc/library/mimetypes.rst ---------- assignee: docs at python components: Documentation messages: 292374 nosy: Jim Fasarakis-Hilliard, docs at python priority: normal severity: normal status: open title: Indent methods and attributes of MimeTypes class versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 18:19:39 2017 From: report at bugs.python.org (Cheryl Sabella) Date: Wed, 26 Apr 2017 22:19:39 +0000 Subject: [New-bugs-announce] [issue30179] Update Copyright to 2017 Message-ID: <1493245179.59.0.780402360042.issue30179@psf.upfronthosting.co.za> New submission from Cheryl Sabella: The copyright page is only through 2016. ---------- assignee: docs at python components: Documentation messages: 292383 nosy: csabella, docs at python priority: normal severity: normal status: open title: Update Copyright to 2017 type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 20:51:02 2017 From: report at bugs.python.org (Joe Jevnik) Date: Thu, 27 Apr 2017 00:51:02 +0000 Subject: [New-bugs-announce] [issue30180] PyArg_ParseTupleAndKeywords supports required keyword only arguments Message-ID: <1493254262.66.0.357413160828.issue30180@psf.upfronthosting.co.za> New submission from Joe Jevnik: I opened a pr to remove a line in the docs about $ needing to follow | in PyArg_ParseTupleAndKeywords. In practice, you can just use a $ to create required keyword arguments which intuitively makes sense. I was told this should raise a SystemError; however, you can create required keyword only arguments in Python so I am not sure why we would want to fail when this is done with PyArg_ParseTupleAndKeywords. ---------- messages: 292385 nosy: llllllllll, serhiy.storchaka priority: normal pull_requests: 1417 severity: normal status: open title: PyArg_ParseTupleAndKeywords supports required keyword only arguments _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 22:44:34 2017 From: report at bugs.python.org (Ben Finney) Date: Thu, 27 Apr 2017 02:44:34 +0000 Subject: [New-bugs-announce] [issue30181] Incorrect parsing of test case docstring Message-ID: <1493261074.73.0.807341942876.issue30181@psf.upfronthosting.co.za> New submission from Ben Finney: The docstring of a test case is not correctly parsed for display. The attached ?test_foo.py? module contains two test case functions. Both docstrings conform to PEP 257 : they have a single-line synopsis and some extra text in a new paragraph. However, only one of the functions has its docstring synopsis used in the output: ===== ====================================================================== FAIL: test_lower_returns_expected_code (test_foo.Foo_TestCase) Should return expected code. ---------------------------------------------------------------------- Traceback (most recent call last): [?] ====================================================================== FAIL: test_reverse_returns_expected_text (test_foo.Foo_TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): [?] 
---------------------------------------------------------------------- Ran 2 tests in 0.001s ===== This violates the docstring parsing as described in PEP 257. The synopsis should be obtained by, first, stripping leading and trailing whitespace from the docstring; then, from that stripped text, taking the first line as the synopsis. So the expected output for ?test_foo.Foo_TestCase. test_reverse_returns_expected_text? should include its docstring synopsis, ?Should return expected reverse text.? ---------- components: Library (Lib) files: test_foo.py messages: 292387 nosy: benf_wspdigital priority: normal severity: normal status: open title: Incorrect parsing of test case docstring type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 Added file: http://bugs.python.org/file46831/test_foo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 26 22:50:12 2017 From: report at bugs.python.org (Jesse Gonzalez) Date: Thu, 27 Apr 2017 02:50:12 +0000 Subject: [New-bugs-announce] [issue30182] Incorrect in referring to ISO as "International Standards Organization" Message-ID: <1493261412.39.0.965032185029.issue30182@psf.upfronthosting.co.za> New submission from Jesse Gonzalez: When reviewing the Unicode HOWTO, I found a reference to ISO as "International Standards Organization", which should instead read "International Organization for Standardization". https://www.iso.org/home.html ---------- assignee: docs at python components: Documentation messages: 292389 nosy: Jesse Gonzalez, docs at python priority: normal severity: normal status: open title: Incorrect in referring to ISO as "International Standards Organization" _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 00:24:50 2017 From: report at bugs.python.org (David Haney) Date: Thu, 27 Apr 2017 04:24:50 +0000 Subject: [New-bugs-announce] [issue30183] [HPUX] compilation error in pytime.c with cc compiler Message-ID: <1493267090.1.0.00115311377195.issue30183@psf.upfronthosting.co.za> New submission from David Haney: When compiling on HP-UX with the native cc compiler, the following compilation error occurs in pytime.c cc -Ae -c -O -O -I. -I./Include -DPy_BUILD_CORE -o Python/pytime.o Python/pytime.c "Python/pytime.c", line 723: error #2020: identifier "CLOCK_MONOTONIC" is undefined const clockid_t clk_id = CLOCK_MONOTONIC; ^ 1 error detected in the compilation of "Python/pytime.c". *** Error exit code 2 Stop. HP-UX does not support the CLOCK_MONOTONIC state. ---------- components: Build messages: 292397 nosy: David Haney priority: normal severity: normal status: open title: [HPUX] compilation error in pytime.c with cc compiler type: compile error versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 02:38:26 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 27 Apr 2017 06:38:26 +0000 Subject: [New-bugs-announce] [issue30184] Add tests for invalid use of PyArg_ParseTupleAndKeywords Message-ID: <1493275106.78.0.678380665144.issue30184@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch adds tests that check that PyArg_ParseTupleAndKeywords() correctly detects errors in the format string and keywords list and raises SystemError. 
It also allows the format argument in _testcapi.parse_tuple_and_keywords() be string. ---------- components: Tests messages: 292407 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add tests for invalid use of PyArg_ParseTupleAndKeywords type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 06:51:47 2017 From: report at bugs.python.org (Antoine Pitrou) Date: Thu, 27 Apr 2017 10:51:47 +0000 Subject: [New-bugs-announce] [issue30185] forkserver process should silence KeyboardInterrupt Message-ID: <1493290307.72.0.662985232827.issue30185@psf.upfronthosting.co.za> New submission from Antoine Pitrou: The forkserver intermediate process is an implementation detail. However, if you Ctrl-C the main process, the forkserver process will exit with a KeyboardInterrupt traceback, even if the main process catches KeyboardInterrupt to exit silently. This produces stderr such as: $ ./python forkserversignal.py ^CTraceback (most recent call last): File "", line 1, in File "/home/antoine/cpython/default/Lib/multiprocessing/forkserver.py", line 164, in main rfds = [key.fileobj for (key, events) in selector.select()] File "/home/antoine/cpython/default/Lib/selectors.py", line 445, in select fd_event_list = self._epoll.poll(timeout, max_ev) KeyboardInterrupt For the sake of usability, forkserver should probably silence those tracebacks by default, for example by changing the default signal handler in the forkserver process (but children forked by the forkserver process should probably get the default Python signal handlers...). Not sure this can be considered a bugfix or an enhancement. ---------- components: Library (Lib) messages: 292420 nosy: davin, pitrou, rhettinger, sbt priority: normal severity: normal stage: needs patch status: open title: forkserver process should silence KeyboardInterrupt type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 08:18:00 2017 From: report at bugs.python.org (Sebastian Ernst) Date: Thu, 27 Apr 2017 12:18:00 +0000 Subject: [New-bugs-announce] [issue30186] Python interpreter calling "PathCchCombineEx" on startup, Windows 8 and above only Message-ID: <1493295480.01.0.116959327637.issue30186@psf.upfronthosting.co.za> New submission from Sebastian Ernst: I am investigating a bug in Wine: https://bugs.winehq.org/show_bug.cgi?id=42474 The Python 3.6(.1) interpreter fails to start on Wine because of an unimplemented function in Wine: "api-ms-win-core-path-l1-1-0.dll.PathCchCombineEx". While the missing function is clearly a problem in Wine, the fact that PathCchCombineEx is called in the first place is somewhat odd. The call was added to Python 3.6 on 09 Sep 2016 by Steve Dower of Microsoft: https://hg.python.org/cpython/rev/03517dd54977 Logically, Python 3.5.x and prior do not require this call and work flawlessly under Wine. 
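A rough sketch of the suppression idea in issue30185 above (not the eventual patch): a helper process that should never print a KeyboardInterrupt traceback can ignore SIGINT and leave interrupt handling to the main process.

    import signal

    def helper_main():
        # assumption: this runs only inside an internal helper process
        # (such as the forkserver), so ignoring Ctrl-C here does not hide
        # the interrupt from the user's main process
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        ...  # serve requests until told to exit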
Digging deeper into this, I found that PathCchCombineEx was introduced in Windows 8: https://msdn.microsoft.com/en-us/library/windows/desktop/hh707086(v=vs.85).aspx However, the following page states, that the current version of Python (3.6) should support Windows Vista and 7: https://docs.python.org/3/using/windows.html I am seeking clarification on why PathCchCombineEx is called during the Python interpreter startup although Wine pretends to be Windows 7 and although Python should support Windows Vista & 7. My thinking is that this call might also happen on an actual Windows 7 system under some circumstances and break Python there as well, which would make it a bug in Python. ---------- components: Interpreter Core, Windows messages: 292430 nosy: paul.moore, smernst, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python interpreter calling "PathCchCombineEx" on startup, Windows 8 and above only type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 10:07:54 2017 From: report at bugs.python.org (Chris Seto) Date: Thu, 27 Apr 2017 14:07:54 +0000 Subject: [New-bugs-announce] [issue30187] Regex becomes invalid in python 3.6 Message-ID: <1493302074.44.0.42828185983.issue30187@psf.upfronthosting.co.za> New submission from Chris Seto: Expected behavior: ~ ??? pyenv shell 3.5.2 ~ ??? python --version Python 3.5.2 ~ ??? python Python 3.5.2 (default, Oct 24 2016, 00:12:20) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import re >>> re.compile('[^\B]') re.compile('[^\\B]') >>> re.compile(r'[^\B]') re.compile('[^\\B]') Actual: ~ ??? pyenv shell 3.6.0 ~ ??? python --version Python 3.6.0 ~ ??? python Python 3.6.0 (default, Apr 26 2017, 17:24:07) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import re >>> re.compile('[^\B]') Traceback (most recent call last): File "", line 1, in File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/re.py", line 233, in compile return _compile(pattern, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/re.py", line 301, in _compile p = sre_compile.compile(pattern, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_compile.py", line 562, in compile p = sre_parse.parse(p, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 856, in parse p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, False) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 415, in _parse_sub itemsappend(_parse(source, state, verbose)) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 526, in _parse code1 = _class_escape(source, this) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 336, in _class_escape raise source.error('bad escape %s' % escape, len(escape)) sre_constants.error: bad escape \B at position 2 >>> re.compile(r'[^\B]') Traceback (most recent call last): File "", line 1, in File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/re.py", line 233, in compile return _compile(pattern, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/re.py", line 301, in _compile p = sre_compile.compile(pattern, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_compile.py", line 562, in compile p = sre_parse.parse(p, flags) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 856, in parse p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, False) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 415, in _parse_sub itemsappend(_parse(source, state, verbose)) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 526, in _parse code1 = _class_escape(source, this) File "/Users/chrisseto/.pyenv/versions/3.6.0/lib/python3.6/sre_parse.py", line 336, in _class_escape raise source.error('bad escape %s' % escape, len(escape)) sre_constants.error: bad escape \B at position 2 ---------- components: Regular Expressions messages: 292445 nosy: Chris Seto2, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: Regex becomes invalid in python 3.6 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 11:07:08 2017 From: report at bugs.python.org (STINNER Victor) Date: Thu, 27 Apr 2017 15:07:08 +0000 Subject: [New-bugs-announce] [issue30188] test_nntplib: random EOFError in setUpClass() Message-ID: <1493305628.27.0.924756319794.issue30188@psf.upfronthosting.co.za> New submission from STINNER Victor: Example of failure: ====================================================================== ERROR: setUpClass (test.test_nntplib.NetworkedNNTPTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/haypo/prog/python/master/Lib/test/test_nntplib.py", line 289, in setUpClass cls.server = cls.NNTP_CLASS(cls.NNTP_HOST, timeout=TIMEOUT, usenetrc=False) File "/home/haypo/prog/python/master/Lib/nntplib.py", line 1048, in __init__ readermode, timeout) File "/home/haypo/prog/python/master/Lib/nntplib.py", line 330, in __init__ self.welcome = self._getresp() File "/home/haypo/prog/python/master/Lib/nntplib.py", line 
449, in _getresp resp = self._getline() File "/home/haypo/prog/python/master/Lib/nntplib.py", line 437, in _getline if not line: raise EOFError EOFError ---------------------------------------------------------------------- Attached PR catch this error and skips the test. See also issue #19613 and #19756. ---------- components: Tests messages: 292450 nosy: haypo priority: normal severity: normal status: open title: test_nntplib: random EOFError in setUpClass() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 14:16:08 2017 From: report at bugs.python.org (Mario Viapiano) Date: Thu, 27 Apr 2017 18:16:08 +0000 Subject: [New-bugs-announce] [issue30189] SSL match_hostname does not accept IP Address Message-ID: <1493316968.53.0.265512116525.issue30189@psf.upfronthosting.co.za> New submission from Mario Viapiano: I need this patch to be available in python 2.7.13 https://bugs.python.org/issue23239 ---------- components: Extension Modules messages: 292468 nosy: emeve89 priority: normal severity: normal status: open title: SSL match_hostname does not accept IP Address type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 16:12:42 2017 From: report at bugs.python.org (Giampaolo Rodola') Date: Thu, 27 Apr 2017 20:12:42 +0000 Subject: [New-bugs-announce] [issue30190] unittest's assertAlmostEqual should show the difference Message-ID: <1493323962.23.0.575947649634.issue30190@psf.upfronthosting.co.za> New submission from Giampaolo Rodola': When comparing 2 numbers as "self.assertAlmostEqual(a, b, delta=1000)" the error message looks like this: AssertionError: 27332885 != 27391120 within 1000 delta Especially when a and b are big numbers or differ a lot, it would be useful to see the absolute difference between the 2 numbers as in: AssertionError: 27332885 != 27391120 within 1000 delta (58235 difference) ---------- messages: 292477 nosy: giampaolo.rodola priority: normal severity: normal stage: needs patch status: open title: unittest's assertAlmostEqual should show the difference versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 16:33:06 2017 From: report at bugs.python.org (Sogand Ka) Date: Thu, 27 Apr 2017 20:33:06 +0000 Subject: [New-bugs-announce] [issue30191] > Message-ID: <1493325186.88.0.762638771729.issue30191@psf.upfronthosting.co.za> New submission from Sogand Ka: I am using COM in python 2.7 (win32com) and here is my problem: >>> Vissim >>> Vissim.Net > why is it unknown? Also, I have to type everything in shell, since it does not show me the keywords. Would be great to see any helpful suggestion. 
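For the assertAlmostEqual request in issue30190 above, the desired message can be approximated today with a small helper; a sketch, not the proposed unittest change:

    def assert_almost_equal_with_diff(testcase, a, b, delta):
        # include the absolute difference in the failure message
        diff = abs(a - b)
        if diff > delta:
            testcase.fail("%r != %r within %r delta (%r difference)"
                          % (a, b, delta, diff))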
---------- components: Interpreter Core messages: 292479 nosy: Sogand Ka priority: normal severity: normal status: open title: > type: performance versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 17:20:03 2017 From: report at bugs.python.org (Neil Schemenauer) Date: Thu, 27 Apr 2017 21:20:03 +0000 Subject: [New-bugs-announce] [issue30192] hashlib module breaks with 64-bit kernel and 32-bit user space Message-ID: <1493328003.41.0.178410988374.issue30192@psf.upfronthosting.co.za> New submission from Neil Schemenauer: The test in setup.py to check for SSE2 support is incorrect. Checking that arch == x86_64 is not sufficient. If the kernel is 64-bit but Python is compiled with a 32-bit compiler, the _blake2 module will fail to build. The attached patch fixes this issue. I did a quick search of the x86_64 string, I don't see this mistake being made elsewhere but I imagine it could be done elsewhere. Obviously a machine with a 64-bit kernel and 32-bit userspace is a rare as hen's teeth these days. Still, I think it is worth fixing this bug. Python 3.6.1 (default, Apr 27 2017, 20:09:03) [GCC 4.9.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import hashlib ERROR:root:code for hash blake2b was not found. Traceback (most recent call last): File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 243, in globals()[__func_name] = __get_hash(__func_name) File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 119, in __get_openssl_constructor return __get_builtin_constructor(name) File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type blake2b ERROR:root:code for hash blake2s was not found. 
Traceback (most recent call last): File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 243, in globals()[__func_name] = __get_hash(__func_name) File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 119, in __get_openssl_constructor return __get_builtin_constructor(name) File "/home/nas/PPython-3.6.1/Lib/hashlib.py", line 113, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type blake2s ---------- components: Build messages: 292485 nosy: nascheme priority: normal severity: normal stage: patch review status: open title: hashlib module breaks with 64-bit kernel and 32-bit user space type: compile error versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 18:55:44 2017 From: report at bugs.python.org (Nikolay Kim) Date: Thu, 27 Apr 2017 22:55:44 +0000 Subject: [New-bugs-announce] [issue30193] Allow to load buffer objects with json.loads() Message-ID: <1493333744.87.0.320689923757.issue30193@psf.upfronthosting.co.za> New submission from Nikolay Kim: It is not possible to use buffer objects in json.loads() ---------- messages: 292487 nosy: fafhrd91 priority: normal severity: normal status: open title: Allow to load buffer objects with json.loads() versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 27 21:15:03 2017 From: report at bugs.python.org (Jacob B) Date: Fri, 28 Apr 2017 01:15:03 +0000 Subject: [New-bugs-announce] [issue30194] AttributeError on opening ZipFile Message-ID: <1493342103.41.0.38342721045.issue30194@psf.upfronthosting.co.za> New submission from Jacob B: The error occurs when I attempt to run the following code: from urllib.request import urlretrieve from os import path from zipfile import ZipFile download_url = "https://www.dropbox.com/s/obiqvrt4m53pmoz/tesseract-4.0.0-alpha.zip?dl=1" def setup_program(): zip_name = urlretrieve(download_url) zip_file = ZipFile(zip_name, "r") zip_file.extractall(path.abspath("__tesseract/")) zip_file.close() setup_program() # REMOVE after test I get the following traceback: $ python downloader.py Traceback (most recent call last): File "downloader.py", line 15, in setup_program() File "downloader.py", line 11, in setup_program zip_file = ZipFile(zip_name, "r") File "C:\Python36\lib\zipfile.py", line 1100, in __init__ self._RealGetContents() File "C:\Python36\lib\zipfile.py", line 1163, in _RealGetContents endrec = _EndRecData(fp) File "C:\Python36\lib\zipfile.py", line 241, in _EndRecData fpin.seek(0, 2) AttributeError: 'tuple' object has no attribute 'seek' ---------- messages: 292494 nosy: Jacob B2 priority: normal severity: normal status: open title: AttributeError on opening ZipFile versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 06:14:52 2017 From: report at bugs.python.org (mahboubi) Date: Fri, 28 Apr 2017 10:14:52 +0000 Subject: [New-bugs-announce] [issue30195] writing non-ascii characters in xml file using python code embedded in C Message-ID: <1493374492.88.0.89710812833.issue30195@psf.upfronthosting.co.za> New submission from mahboubi: my python code embedded in C program, uses etree from lxml to write a plain string as element attribute in xml file. 
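Regarding the traceback in issue30194 above: urllib.request.urlretrieve() returns a (filename, headers) tuple, so passing its result directly to ZipFile() hands zipfile a tuple rather than a path. A corrected sketch of the reported snippet:

    from urllib.request import urlretrieve
    from zipfile import ZipFile

    def setup_program(download_url):
        filename, headers = urlretrieve(download_url)  # returns a tuple
        with ZipFile(filename, "r") as zip_file:
            zip_file.extractall("__tesseract")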
the problem is when my string contains non-English characters (non-ASCII), the program fails to write even with unicode conversion such as unicode(mystring, "utf-8"), but when I use Python code only, it works. ---------- components: XML messages: 292521 nosy: aimad, benjamin.peterson, ezio.melotti, haypo, lemburg priority: normal severity: normal status: open title: writing non-ascii characters in xml file using python code embedded in C type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 06:19:20 2017 From: report at bugs.python.org (=?utf-8?b?SsOhY2h5bSBCYXJ2w61uZWs=?=) Date: Fri, 28 Apr 2017 10:19:20 +0000 Subject: [New-bugs-announce] [issue30196] Add __matmul__ to collections.Counter Message-ID: <1493374760.56.0.320449301178.issue30196@psf.upfronthosting.co.za> New submission from Jáchym Barvínek: The class collections.Counter should semantically contain only numbers, so it makes sense to define a dot product of Counters, something like this: def __matmul__(self, other): return sum(self[x] * other[x] for x in self.keys() | other.keys()) I find this useful occasionally. ---------- components: Library (Lib) messages: 292522 nosy: Jáchym Barvínek priority: normal severity: normal status: open title: Add __matmul__ to collections.Counter type: enhancement versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 07:06:00 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 28 Apr 2017 11:06:00 +0000 Subject: [New-bugs-announce] [issue30197] Enhance swap_attr() and swap_item() in test.support Message-ID: <1493377560.96.0.413320673715.issue30197@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch adds two features to functions swap_attr() and swap_item() in the test.support module. 1. They now work (rather than failing in __exit__) when the attribute or item is deleted inside the with block. There were several cases where I fell back to manually coded try/finally instead of these functions due to the lack of this feature. 2. The original value of the attribute or item can be assigned to the target of "as" in the with statement. This can save a line of code in some cases. ---------- components: Tests messages: 292527 nosy: ezio.melotti, michael.foord, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Enhance swap_attr() and swap_item() in test.support type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 08:17:50 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 28 Apr 2017 12:17:50 +0000 Subject: [New-bugs-announce] [issue30198] distutils build_ext: don't run newer_group() in parallel in multiple threads when using parallel Message-ID: <1493381870.33.0.670258208626.issue30198@psf.upfronthosting.co.za> New submission from STINNER Victor: Since Python 3.5, distutils is able to build extensions in parallel, nice enhancement! But setup.py of CPython is 2x slower in parallel mode when all modules are already built (1 sec vs 500 ms). Building extensions calls newer_group() which calls os.stat() 6,856 times. I wrote a cache for os.stat() but it has no impact on performance.
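A standalone illustration of the dot product proposed for Counter in issue30196 above, written as a plain expression rather than a new operator:

    from collections import Counter

    a = Counter("abracadabra")
    b = Counter("alakazam")
    # missing keys count as 0, so only the shared key 'a' contributes here
    dot = sum(a[x] * b[x] for x in a.keys() | b.keys())
    assert dot == 5 * 4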
It seems like threads are fighting to death for the GIL in the os.stat() race... Attached pull request calls newer_group() before spawning threads in parallel mode, so "setup.py build" takes the same time with and without parallel module, when all extensions are already built. I didn't measure performance when all extensions must be built. ---------- components: Distutils messages: 292530 nosy: dstufft, haypo, merwok priority: normal severity: normal status: open title: distutils build_ext: don't run newer_group() in parallel in multiple threads when using parallel type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 09:38:20 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 28 Apr 2017 13:38:20 +0000 Subject: [New-bugs-announce] [issue30199] Warning -- asyncore.socket_map was modified by test_ssl Message-ID: <1493386700.85.0.622931101422.issue30199@psf.upfronthosting.co.za> New submission from STINNER Victor: It seems like test_asyncore_server() of test_ssl doesn't cleanup properly asyncore on FreeBSD, and so following unit tests can be impacted (and fail). http://buildbot.python.org/all/builders/AMD64%20FreeBSD%20CURRENT%20Debug%203.x/builds/210/steps/test/logs/stdio test_asyncore_server (test.test_ssl.ThreadedTests) Check the example asyncore integration. ... server: new connection from 127.0.0.1:48985 client: sending b'FOO\n'... server: read b'FOO\n' from client client: read b'foo\n' client: closing connection. server: read b'over\n' from client client: connection closed. cleanup: stopping server. cleanup: joining server thread. cleanup: successfully joined. ok ... Warning -- asyncore.socket_map was modified by test_ssl Before: {} After: {6: } Maybe AsyncoreEchoServer.__exit__() should just ends with "asyncore.close_all(ignore_all=True)"? 
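A sketch of the cleanup suggested at the end of issue30199 above, assuming the test owns every asyncore channel it creates (a hypothetical wrapper, not the real AsyncoreEchoServer):

    import asyncore

    class AsyncoreCleanup:
        def __enter__(self):
            return self

        def __exit__(self, *exc_info):
            # whatever happened inside the block, leave asyncore.socket_map empty
            asyncore.close_all(ignore_all=True)
            return False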
---------- components: Tests messages: 292532 nosy: haypo priority: normal severity: normal status: open title: Warning -- asyncore.socket_map was modified by test_ssl type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 11:42:07 2017 From: report at bugs.python.org (Frank Pae) Date: Fri, 28 Apr 2017 15:42:07 +0000 Subject: [New-bugs-announce] [issue30200] tkinter ListboxSelect Message-ID: <1493394127.57.0.90610531486.issue30200@psf.upfronthosting.co.za> New submission from Frank Pae: prerequisite: you have more than one tkinter listbox. Behavior in py 2.7.13 and py 3.5.3: if you leave a listbox and go to another, the abandoned listbox does not create a ListboxSelect event. Behavior in py 3.6.0 and 3.6.1: if you leave a listbox and go to another, the abandoned listbox creates a ListboxSelect event, and this gives an error message. I don't know if my program is wrong, but it works well in 2.7 and 3.5, however not with 3.6. Thank you ---------- components: Tkinter files: tk_ListboxSelect.py messages: 292535 nosy: Frank Pae priority: normal severity: normal status: open title: tkinter ListboxSelect type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file46833/tk_ListboxSelect.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 12:20:49 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 28 Apr 2017 16:20:49 +0000 Subject: [New-bugs-announce] [issue30201] [3.5] RecvmsgIntoSCMRightsStreamTest fails with "OSError: [Errno 12] Cannot allocate memory" on macOS El Capitan Message-ID: <1493396449.34.0.335378086459.issue30201@psf.upfronthosting.co.za> New submission from STINNER Victor: The test fails on Python 3.5 but passes on Python 3.6, I don't know why.
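A minimal two-listbox sketch related to issue30200 above (an illustration, not the attached tk_ListboxSelect.py); the guard shown is the usual defensive pattern when <<ListboxSelect>> fires for a widget whose selection has just been cleared:

    import tkinter as tk

    def on_select(event):
        selection = event.widget.curselection()
        if not selection:   # may be empty when focus moves to the other listbox
            return
        print(event.widget.get(selection[0]))

    root = tk.Tk()
    for _ in range(2):
        lb = tk.Listbox(root)
        lb.insert("end", "alpha", "beta")
        lb.bind("<<ListboxSelect>>", on_select)
        lb.pack()
    root.mainloop()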
http://buildbot.python.org/all/builders/x86-64%20El%20Capitan%203.5/builds/31/steps/test/logs/stdio ====================================================================== ERROR: testFDPassEmpty (test.test_socket.RecvmsgSCMRightsStreamTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildbot/buildarea/3.5.billenstein-elcapitan/build/Lib/test/test_socket.py", line 2851, in testFDPassEmpty len(MSG), 10240), File "/Users/buildbot/buildarea/3.5.billenstein-elcapitan/build/Lib/test/test_socket.py", line 1955, in doRecvmsg result = sock.recvmsg(bufsize, *args) OSError: [Errno 12] Cannot allocate memory ====================================================================== ERROR: testFDPassEmpty (test.test_socket.RecvmsgIntoSCMRightsStreamTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/buildbot/buildarea/3.5.billenstein-elcapitan/build/Lib/test/test_socket.py", line 2851, in testFDPassEmpty len(MSG), 10240), File "/Users/buildbot/buildarea/3.5.billenstein-elcapitan/build/Lib/test/test_socket.py", line 2046, in doRecvmsg result = sock.recvmsg_into([buf], *args) OSError: [Errno 12] Cannot allocate memory ---------- components: Tests, macOS messages: 292538 nosy: haypo, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: [3.5] RecvmsgIntoSCMRightsStreamTest fails with "OSError: [Errno 12] Cannot allocate memory" on macOS El Capitan versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 28 17:56:12 2017 From: report at bugs.python.org (Brett Cannon) Date: Fri, 28 Apr 2017 21:56:12 +0000 Subject: [New-bugs-announce] [issue30202] Update test.test_importlib.test_abc to test find_spec() Message-ID: <1493416572.94.0.214494308366.issue30202@psf.upfronthosting.co.za> New submission from Brett Cannon: It looks like test_abc isn't really testing find_spec() very much compared to find_module(). There might also be some tests still using find_module() that should be updated to use find_spec() instead. ---------- components: Tests messages: 292552 nosy: brett.cannon, eric.snow, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Update test.test_importlib.test_abc to test find_spec() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 04:21:08 2017 From: report at bugs.python.org (Luke Campagnola) Date: Sat, 29 Apr 2017 08:21:08 +0000 Subject: [New-bugs-announce] [issue30203] AttributeError in Popen.communicate() Message-ID: <1493454068.21.0.26238856169.issue30203@psf.upfronthosting.co.za> New submission from Luke Campagnola: In my application, calling communicate() on a Popen instance is giving the following exception: . . . File "/usr/lib/python3.5/subprocess.py", line 1072, in communicate stdout, stderr = self._communicate(input, endtime, timeout) File "/usr/lib/python3.5/subprocess.py", line 1693, in _communicate stdout = self._fileobj2output[self.stdout] AttributeError: 'Popen' object has no attribute '_fileobj2output' I have not been able to reproduce this in a simple example, but I imagine this could happen if Popen._communicate() raises an exception in the first 20 lines or so--this would cause _communication_started to be set True, even though _fileobj2output has not been initialized. 
I suggest setting self._fileobj2output = None in Popen.__init__() and changing the relevant code in _communicate() from if not self._communication_started: self._fileobj2output = {} to: if self._fileobj2output is None: self._fileobj2output = {} ---------- messages: 292575 nosy: Luke Campagnola priority: normal severity: normal status: open title: AttributeError in Popen.communicate() type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 04:23:43 2017 From: report at bugs.python.org (Giampaolo Rodola') Date: Sat, 29 Apr 2017 08:23:43 +0000 Subject: [New-bugs-announce] [issue30204] socket.setblocking(0) changes socket.type Message-ID: <1493454223.18.0.808565419105.issue30204@psf.upfronthosting.co.za> New submission from Giampaolo Rodola': This caused me a lot of headaches (broken test) before figuring out what the heck was wrong: =) >>> import socket >>> s = socket.socket() >>> s.type >>> s.setblocking(0) >>> s.type 2049 >>> s.setblocking(1) >>> s.type getsockopt() on the other hand always tells the truth: >>> s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE) 1 ...so I suppose we can do that in the "type" property of the Python socket class. It appears the type is set in socket init: https://github.com/python/cpython/blob/1e2147b9d75a64df370a9393c2b5b9d170dc0afd/Modules/socketmodule.c#L904 ...and it's changed later in setblocking: https://github.com/python/cpython/blob/1e2147b9d75a64df370a9393c2b5b9d170dc0afd/Modules/socketmodule.c#L609 ---------- components: Library (Lib) messages: 292576 nosy: giampaolo.rodola priority: normal severity: normal status: open title: socket.setblocking(0) changes socket.type versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 04:55:10 2017 From: report at bugs.python.org (Giampaolo Rodola') Date: Sat, 29 Apr 2017 08:55:10 +0000 Subject: [New-bugs-announce] [issue30205] socket.getsockname() type mismatch with AF_UNIX on Linux Message-ID: <1493456110.97.0.437176774734.issue30205@psf.upfronthosting.co.za> New submission from Giampaolo Rodola': >>> import socket >>> s = socket.socket(socket.AF_UNIX) >>> s.getsockname() b'' >>> s.bind('foo') >>> s.getsockname() 'foo' ---------- messages: 292580 nosy: giampaolo.rodola priority: normal severity: normal status: open title: socket.getsockname() type mismatch with AF_UNIX on Linux _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 05:22:01 2017 From: report at bugs.python.org (Xiang Zhang) Date: Sat, 29 Apr 2017 09:22:01 +0000 Subject: [New-bugs-announce] [issue30206] data parameter for binascii.b2a_base64 is not positional-only Message-ID: <1493457721.34.0.92478948675.issue30206@psf.upfronthosting.co.za> New submission from Xiang Zhang: Before functions in binascii having *data* parameter all accept it as positionaly-only. But after #25357, binascii.b2a_base64 changed it to accept keyword argument also but it looks unintentional. 
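For the socket.type report in issue30204 above, getsockopt() can serve as the reliable source of the underlying type while the issue stands; a small sketch (Linux semantics assumed, where 2049 is SOCK_STREAM | SOCK_NONBLOCK):

    import socket

    s = socket.socket()
    s.setblocking(False)
    # s.type may now carry the SOCK_NONBLOCK flag on affected versions,
    # but the kernel still reports the plain type:
    real_type = s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE)
    assert real_type == socket.SOCK_STREAM
    s.close()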
---------- components: Library (Lib) messages: 292585 nosy: haypo, martin.panter, serhiy.storchaka, xiang.zhang priority: normal severity: normal status: open title: data parameter for binascii.b2a_base64 is not positional-only type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 06:52:06 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 29 Apr 2017 10:52:06 +0000 Subject: [New-bugs-announce] [issue30207] Rename test.test_support to test.support in 2.7 Message-ID: <1493463126.66.0.743875413442.issue30207@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch converts test.test_support into package and rename it to test.support, move test.script_helper into the test.support package. Old names test.test_support and test.script_helper are left as aliases to test.support and test.support.script_helper for compatibility (hence most tests don't need modification). Benefits of this change: 1. This makes the structure of the test directory more compatible with 3.x. This helps bacporting tests. There were a number of cases when the only required change in backported 2.7 patch was changing "support" to "test_support". Many times backporting simple tests broke 2.7 buildbots because this change was not made. 2. This makes easier backporting changes in test.support itself and backporting 3.x features from test.support. ---------- components: Tests messages: 292590 nosy: ezio.melotti, michael.foord, ncoghlan, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Rename test.test_support to test.support in 2.7 type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 09:42:38 2017 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 29 Apr 2017 13:42:38 +0000 Subject: [New-bugs-announce] [issue30208] Small typos in IDLE doc Message-ID: <1493473358.59.0.135881718149.issue30208@psf.upfronthosting.co.za> New submission from Cheryl Sabella: Fix some small typos in IDLE doc. ---------- assignee: docs at python components: Documentation messages: 292593 nosy: csabella, docs at python priority: normal severity: normal status: open title: Small typos in IDLE doc type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 12:48:42 2017 From: report at bugs.python.org (Priit Oorn) Date: Sat, 29 Apr 2017 16:48:42 +0000 Subject: [New-bugs-announce] [issue30209] some UTF8 symbols Message-ID: <1493484522.27.0.538445993706.issue30209@psf.upfronthosting.co.za> New submission from Priit Oorn: It seems that idle has problems with some UTF8 / Unicode characters and loading files that have them inside them. I tried to do code to replace text with other symbols from unicode table. First in idle and pasting the symbol "?" into the idle it caused both the text editor and idle itself to close. Then made edit and paste in notepad.exe for the same file, which seemed to work fine... but when I tried to open the file in idle, it opened an empty editor which was bugged and unable to close and I had to kill it off from taskmanager. 
(Yes you can edit and write stuff into the empty new file, and even save it, but you can't close it from the top window X button and Exit causes it to close idle and keep the editor window still open) replacements = {'A':'?', 'B':'?', 'C':'?', 'D':'?'} ---------- files: goth - crasher.py messages: 292595 nosy: Priit Oorn priority: normal severity: normal status: open title: some UTF8 symbols type: crash versions: Python 3.6 Added file: http://bugs.python.org/file46842/goth - crasher.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 14:46:02 2017 From: report at bugs.python.org (Anthony Flury) Date: Sat, 29 Apr 2017 18:46:02 +0000 Subject: [New-bugs-announce] [issue30210] No Documentation on tkinter dnd module Message-ID: <1493491562.01.0.41832322565.issue30210@psf.upfronthosting.co.za> New submission from Anthony Flury: There is a level of drag and drop support within the tkinter package - namely the tkinter.dnd module. However there is no documentation at this time about drag and drop either on docs.python.org or on the tkinter reference manual. The only documentation available is via the help command in the python console. or by reading the source code - neither of which are the first point of call for documentation. ---------- assignee: docs at python components: Documentation messages: 292596 nosy: anthony-flury, docs at python priority: normal severity: normal status: open title: No Documentation on tkinter dnd module type: enhancement versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 15:54:20 2017 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 29 Apr 2017 19:54:20 +0000 Subject: [New-bugs-announce] [issue30211] Bdb: add docstrings Message-ID: <1493495660.85.0.350046044168.issue30211@psf.upfronthosting.co.za> New submission from Cheryl Sabella: Add docstrings to Bdb. See issue 19417. 
---------- assignee: docs at python components: Documentation messages: 292598 nosy: csabella, docs at python, terry.reedy priority: normal pull_requests: 1467 severity: normal status: open title: Bdb: add docstrings type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 19:15:34 2017 From: report at bugs.python.org (david-cpi) Date: Sat, 29 Apr 2017 23:15:34 +0000 Subject: [New-bugs-announce] [issue30212] test_ssl.py is broken in CentOS 7 Message-ID: <1493507734.99.0.744824306698.issue30212@psf.upfronthosting.co.za> New submission from david-cpi: To make the test pass, I added the following try statement: try: sock.sendall(buf) except: pass ---------- components: Tests files: test.log messages: 292608 nosy: david-cpi priority: normal severity: normal status: open title: test_ssl.py is broken in CentOS 7 type: compile error versions: Python 3.5 Added file: http://bugs.python.org/file46844/test.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 29 21:33:15 2017 From: report at bugs.python.org (BoppreH) Date: Sun, 30 Apr 2017 01:33:15 +0000 Subject: [New-bugs-announce] [issue30213] ZipFile from 'a'ppend-mode file generates invalid zip Message-ID: <1493515995.55.0.223691905377.issue30213@psf.upfronthosting.co.za> New submission from BoppreH: I may be misunderstanding file modes or the `zipfile` library, but from zipfile import ZipFile ZipFile(open('a.zip', 'ab'), 'a').writestr('f.txt', 'z') unexpectedly creates an invalid zip file. 7zip is able to open and show the file list, but files inside look empty, and Windows simply says it's invalid. Changing the file mode from `ab` to `wb+` fixes the problem, but truncates the file, and `rb+` doesn't create the file. Calling `close` on both the `open` and `ZipFile` doesn't help either. Using `ZipFile(...).open` instead of `writestr` has the same problem. I could only reproduce this on [Windows 10, Python 3.6.1, 64 bit]. The zip file was proper on [Windows 10, Python 3.3.5, 32 bit], [Windows 10 Bash, Python 3.4.3, 64 bit], and [FreeBSD, Python 3.5.3, 64 bit]. This is my first bug report, so forgive me if I made any mistakes. ---------- components: Library (Lib), Windows messages: 292616 nosy: BoppreH, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ZipFile from 'a'ppend-mode file generates invalid zip type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 30 00:35:48 2017 From: report at bugs.python.org (Decorater) Date: Sun, 30 Apr 2017 04:35:48 +0000 Subject: [New-bugs-announce] [issue30214] make_zip.py lacks a few command line options and has a bug. Message-ID: <1493526948.12.0.00156119032978.issue30214@psf.upfronthosting.co.za> New submission from Decorater: make_zip.py does not offer the options to include the tests and the tkinter stuff when making the full distributions immediately after building Python using MSVC. Basically, running make_zip.py like this: ".\PCbuild\amd64\python.exe" ".\Tools\msi\make_zip.py" -a x64 -o ".\python36-x86-x64" or like this: ".\PCbuild\win32\python.exe" ".\Tools\msi\make_zip.py" -o ".\python36-x86" does not even include the tests and tkinter stuff that is optional when installing Python via the installer.
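Looking back at the append-mode report in issue30213 above, the variant the reporter found to produce a readable archive can be written as a short sketch; note that 'wb+' truncates an existing file, so this is a workaround rather than a fix:

    from zipfile import ZipFile

    # 'ab' produced an unreadable archive in the report; 'wb+' did not
    with open('a.zip', 'wb+') as f:
        with ZipFile(f, 'a') as zf:
            zf.writestr('f.txt', 'z')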
I would like command-line switches to be able to include them if desired. Also, the 64-bit copy is messed up: none of the libs\*.lib files, nor any of the assemblies, are copied. This is reproducible for me. ---------- components: Demos and Tools, Tkinter, Windows messages: 292617 nosy: Decorater, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: make_zip.py lacks a few command line options and has a bug. versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 30 01:21:39 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 30 Apr 2017 05:21:39 +0000 Subject: [New-bugs-announce] [issue30215] Make re.compile() locale agnostic Message-ID: <1493529699.82.0.722013579131.issue30215@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently the result of re.compile() with the re.LOCALE flag depends on the locale at compile time. The locale at matching time should be the same as the locale at compile time, otherwise the matching can work incorrectly. This complicates caching in module global functions and increases the chance of a race condition. Proposed patch makes re.compile() not depend on the locale. Only the locale at matching time affects the result of matching. This is a more comprehensive solution of issue22410. ---------- components: Extension Modules, Library (Lib), Regular Expressions messages: 292618 nosy: ezio.melotti, mrabarnett, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Make re.compile() locale agnostic type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 30 07:12:49 2017 From: report at bugs.python.org (Rupert Nash) Date: Sun, 30 Apr 2017 11:12:49 +0000 Subject: [New-bugs-announce] [issue30216] xdrlib.Unpacker.unpack_string returns bytes (docs say should be str) Message-ID: <1493550769.45.0.572734600084.issue30216@psf.upfronthosting.co.za> New submission from Rupert Nash: According to the docs this method should return a str, but it returns bytes: https://docs.python.org/3.6/library/xdrlib.html#xdrlib.Unpacker.unpack_string I can see three options: (1) a default encoding should be applied; (2) the documentation updated to make clear this returns bytes; (3) the method removed, relying on the existing unpack_bytes which does what you'd expect (and in fact the two methods are aliases). ---------- components: Library (Lib) messages: 292624 nosy: rnash priority: normal severity: normal status: open title: xdrlib.Unpacker.unpack_string returns bytes (docs say should be str) type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 30 15:56:00 2017 From: report at bugs.python.org (Eric O. LEBIGOT) Date: Sun, 30 Apr 2017 19:56:00 +0000 Subject: [New-bugs-announce] [issue30217] Missing entry for the tilde (~) operator in the Index Message-ID: <1493582160.76.0.882487661277.issue30217@psf.upfronthosting.co.za> New submission from Eric O. LEBIGOT: The index (https://docs.python.org/3.6/genindex-Symbols.html) is missing an entry for the tilde operator ~ (there is also no entry under "tilde"). A relevant pointer could be to object.__invert__ (https://docs.python.org/3.6/reference/datamodel.html#object.__invert__).
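To make the index request in issue30217 above concrete, the tilde operator dispatches to __invert__; a tiny sketch:

    import operator

    assert ~5 == -6                  # for ints, ~x == -(x + 1)
    assert operator.invert(5) == ~5  # operator-module spelling

    class BitMask:
        def __init__(self, bits):
            self.bits = bits
        def __invert__(self):        # ~BitMask(...) ends up here
            return BitMask(~self.bits)

    assert (~BitMask(0b1010)).bits == ~0b1010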
---------- assignee: docs at python components: Documentation messages: 292641 nosy: docs at python, lebigot priority: normal severity: normal status: open title: Missing entry for the tilde (~) operator in the Index type: enhancement versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 30 17:50:48 2017 From: report at bugs.python.org (Jelle Zijlstra) Date: Sun, 30 Apr 2017 21:50:48 +0000 Subject: [New-bugs-announce] [issue30218] shutil.unpack_archive doesn't support PathLike Message-ID: <1493589048.1.0.871428133703.issue30218@psf.upfronthosting.co.za> New submission from Jelle Zijlstra: According to PEP 519, it should. I'll submit a PR soon. ---------- components: Library (Lib) messages: 292642 nosy: Jelle Zijlstra, brett.cannon, tarek priority: normal severity: normal status: open title: shutil.unpack_archive doesn't support PathLike versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________
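Until that PR lands, the PEP 519 conversion described in issue30218 above can be done by the caller; a sketch (the helper name is made up):

    import os
    import shutil

    def unpack(archive_path):
        # archive_path may be a pathlib.Path; os.fspath() yields the str
        # that shutil.unpack_archive() accepts on current versions
        shutil.unpack_archive(os.fspath(archive_path))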