From report at bugs.python.org Thu Mar 1 00:44:18 2018 From: report at bugs.python.org (Abhilash Raj) Date: Thu, 01 Mar 2018 05:44:18 +0000 Subject: [New-bugs-announce] [issue32975] mailbox: It would be nice to move mailbox.Message from legacy email.message.Message API to new EmailMessage API Message-ID: <1519883058.82.0.467229070634.issue32975@psf.upfronthosting.co.za> New submission from Abhilash Raj : Since Python 3.6 the new EmailMessage API seems to be the default but mailbox.Message still subclasses from the old email.message.Message API. It would be nice to get EmailMessage from mailbox so that one can rely on the new methods and content managers. Also, while it is possible to pass a constructor to mailbox.mbox to get the new EmailMessage style message, it is different from mailbox.Message which has some extra methods. ---------- components: email messages: 313082 nosy: barry, maxking, r.david.murray priority: normal severity: normal status: open title: mailbox: It would be nice to move mailbox.Message from legacy email.message.Message API to new EmailMessage API type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 07:52:49 2018 From: report at bugs.python.org (Mike Schmidt) Date: Thu, 01 Mar 2018 12:52:49 +0000 Subject: [New-bugs-announce] [issue32976] linux/random.h present but cannot be compiled Message-ID: <1519908769.96.0.467229070634.issue32976@psf.upfronthosting.co.za> New submission from Mike Schmidt : I am attempting to install python 3.6.4 to my home directory on a linux cluster where I do not have root access. A warning, "linux/random.h present but cannot be compiled", was emitted from the config process requesting that I report this here. A summary of commands used follows: $ wget https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tar.xz $ tar -xvf Python-3.6.4.tar.xz $ cd Python-3.6.4.tar.xz $ mkdir ~/python364 $ ./config --prefix /home/mikes/python364 --enable-optimizations The following may also be relevant: $ uname -a Linux JJM4 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux And the config.log is attached. ---------- components: Build files: config.log messages: 313095 nosy: mfschmidt priority: normal severity: normal status: open title: linux/random.h present but cannot be compiled type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file47466/config.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 09:54:50 2018 From: report at bugs.python.org (Aaron Christianson) Date: Thu, 01 Mar 2018 14:54:50 +0000 Subject: [New-bugs-announce] [issue32977] added acts_like decorator to dataclasses module Message-ID: <1519916090.48.0.467229070634.issue32977@psf.upfronthosting.co.za> New submission from Aaron Christianson : I'm always writing these wrapper classes where I want to selectively expose the interface of some of the methods of certain attributes to the containing object. This can mean I spend a lot of time implementing wrapper methods. That's no good. I wrote a class decorator to make this easy, and I realized it's a perfect complement to the new dataclasses module, though it can also be used with normal classes. I figured I'd check if you're interested in that. The interface looks like this: >>> from dataclasses import dataclass, acts_like >>> @acts_like('weight', ['__add__']) ... 
@acts_like('still_fresh', ['__bool__']) ... @dataclass ... class Spam: ... weight: int ... still_fresh: bool >>> s = Spam(42, False) >>> s + 3 45 >>> if not s: ... print('the spam is bad') the spam is bad It's a handy way to build objects with composition, but still get some of the benefits of inheritance in a *selective* and *explicit* way. Here's the code: https://github.com/ninjaaron/cpython/blob/acts_like/Lib/dataclasses.py#L978 May require some additional twiddling to make it work with frozen dataclasses, but I don't think it should be a problem. ---------- components: Library (Lib) messages: 313096 nosy: eric.smith, ninjaaron priority: normal severity: normal status: open title: added acts_like decorator to dataclasses module type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 11:50:35 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 01 Mar 2018 16:50:35 +0000 Subject: [New-bugs-announce] [issue32978] Issues with reading large float values in AIFC files Message-ID: <1519923035.62.0.467229070634.issue32978@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The frequency rate is saved as an 80-bit floating point value in AIFC files. There are issues with reading large values. 1. Values with maximal exponent are read as aifc._HUGE_VAL which is less than sys.float_info.max. Thus greater values can be read as lesser values. >>> aifc._read_float(io.BytesIO(b'\x7f\xff\xff\xff\xff\xff\xff\xff\xf8\x00')) 1.79769313486231e+308 >>> aifc._read_float(io.BytesIO(b'\x43\xfe\xff\xff\xff\xff\xff\xff\xf8\x00')) 1.7976931348623157e+308 >>> aifc._read_float(io.BytesIO(b'\x43\xfe\xff\xff\xff\xff\xff\xff\xff\xff')) inf 2. If exponent is not maximal, but large enough, this causes an OverflowError. It would be better to consistently return the maximal value or inf. >>> aifc._read_float(io.BytesIO(b'\x44\xfe\xff\xff\xff\xff\xff\xff\xf8\x00')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/serhiy/py/cpython3.7/Lib/aifc.py", line 198, in _read_float f = (himant * 0x100000000 + lomant) * pow(2.0, expon - 63) OverflowError: (34, 'Numerical result out of range') An OverflowError when reading a broken aifc file can be unexpected. The proposed PR tries to make reading floats more consistent. I'm not sure it is correct. ---------- components: Library (Lib) messages: 313098 nosy: mark.dickinson, serhiy.storchaka, tim.peters priority: normal severity: normal status: open title: Issues with reading large float values in AIFC files type: behavior versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 11:52:09 2018 From: report at bugs.python.org (Felix) Date: Thu, 01 Mar 2018 16:52:09 +0000 Subject: [New-bugs-announce] [issue32979] dict get() function equivalent for lists. Message-ID: <1519923129.67.0.467229070634.issue32979@psf.upfronthosting.co.za> New submission from Felix : Hi there! I hope this wasn't suggested before. I couldn't find any issues related to it. The `get()` function on the dictionary object is such a convenient way for retrieving items from a dict that might not exist. I always wondered why the list object does not have an equivalent? 
I constantly run into something like this: myval = mylist[1] if len(mylist) > 1 else None or worse like this: try: myval = mylist[1] except IndexError: myval = None While I think it would be nice to do it like this: myval = mylist.get(1) Any love for this? Cheers! :) ---------- messages: 313099 nosy: feluxe priority: normal severity: normal status: open title: dict get() function equivalent for lists. type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 11:57:13 2018 From: report at bugs.python.org (Thomas Nyberg) Date: Thu, 01 Mar 2018 16:57:13 +0000 Subject: [New-bugs-announce] [issue32980] Remove functions that do nothing in _Py_InitializeCore() Message-ID: <1519923433.61.0.467229070634.issue32980@psf.upfronthosting.co.za> New submission from Thomas Nyberg : The `_PyFrame_Init()` and `PyByteArray_Init()` functions are called in these two locations in the `_Py_InitializeCore()` function: https://github.com/python/cpython/blob/master/Python/pylifecycle.c#L693-L694 https://github.com/python/cpython/blob/master/Python/pylifecycle.c#L699-L700 But their function definitions appear to do nothing: https://github.com/python/cpython/blob/master/Objects/frameobject.c#L555-L561 https://github.com/python/cpython/blob/master/Objects/bytearrayobject.c#L24-L28 I can understand leaving the functions in the source for backwards-compatibility, but why are they still being called in `_Py_InitializeCore()`? Seems like it just adds noise for those new to the cpython internals (I certainly found it confusing myself). Ned Batchelder recommended possibly making a change: https://mail.python.org/pipermail/python-list/2018-March/731402.html ---------- messages: 313100 nosy: thomas.nyberg priority: normal severity: normal status: open title: Remove functions that do nothing in _Py_InitializeCore() type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 1 19:36:19 2018 From: report at bugs.python.org (James Davis) Date: Fri, 02 Mar 2018 00:36:19 +0000 Subject: [New-bugs-announce] [issue32981] Catastrophic backtracking in poplib and difflib Message-ID: <1519950979.87.0.467229070634.issue32981@psf.upfronthosting.co.za> New submission from James Davis : Hi Python security team, My name is James Davis. I'm a security researcher at Virginia Tech. The python core (cpython) has 2 regular expressions vulnerable to catastrophic backtracking that look like potential DOS vectors. The vulnerable expressions are listed below. Each vulnerability has the following keys, explained in more detail below: - pattern - filesIn (as of December/January; I excluded any appearances in irrelevant-looking dirs, and in '.min' files) - stringLenFor10Sec - nPumpsFor10Sec - attackFormat - blowupCurve The attack format describes how to generate an attack string. On my machine, an attack string generated using nPumpsFor10Sec repetitions ("pumps") of the pump string(s) blocks the python regex engine for 10 seconds, though this will vary based on your hardware. Compose an attack string like this: 'prefix 1' + 'pump 1' X times + 'prefix 2' + 'pump 2' X times + ... + suffix Example: With pumpPairs: [{'prefix': 'a', 'pump': 'b'}], suffix: 'c', an attack string with three pumps would be: abbbc Catastrophic backtracking blows up at either an exponential rate or a super-linear (power law) rate. The blowupCurve indicates how severe the blow-up is. 
The 'type' is either EXP(onential) or POW(er law) in the number of pumps (x). The 'parms' are the parameters for the two curve types. The second parameter is more important, because: EXP: f(x) = parms[0] * parms[1]^x POW: f(x) = parms[0] * x^parms[1] JSON formatted: Vuln 1: { "attackFormat" : { "pumpPairs" : [ { "pump" : "]+>)", "filesIn" : [ [ "Lib/poplib.py" ] ] } Vuln 2: { "blowupCurve" : { "parms" : [ 1.31911634447601e-08, 1.89691808610459 ], "r2" : 0.998387790742004, "type" : "POWER" }, "stringLenFor10Sec" : 48328, "attackFormat" : { "pumpPairs" : [ { "pump" : "\t", "prefix" : "\t" } ], "suffix" : "##" }, "pattern" : "\\s*#?\\s*$", "filesIn" : [ [ "Lib/difflib.py" ] ], "nPumpsFor10Sec" : "48325" } ---------- components: Library (Lib) messages: 313119 nosy: davisjam priority: normal pull_requests: 5723 severity: normal status: open title: Catastrophic backtracking in poplib and difflib type: security _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 01:04:50 2018 From: report at bugs.python.org (Franklin? Lee) Date: Fri, 02 Mar 2018 06:04:50 +0000 Subject: [New-bugs-announce] [issue32982] Parse out invisible Unicode characters? Message-ID: <1519970690.52.0.467229070634.issue32982@psf.upfronthosting.co.za> New submission from Franklin? Lee : The following line should have a character that trips up the compiler. ?indices = range(5) The character is \u200e, and was inserted by Google Keep. (I've already reported the issue to Google as a regression.) Here's the error message: """ File "", line 3 ?indices = range(5) ^ SyntaxError: invalid character in identifier """ Depending on the terminal or editor, it may not be possible to tell the problem just from looking. Without knowledge/experience of Unicode, it may not be possible to figure out the problem at all. Since Python source now uses Unicode by default, should certain invisible characters be stripped out during compilation? ---------- components: Unicode messages: 313127 nosy: ezio.melotti, leewz, vstinner priority: normal severity: normal status: open title: Parse out invisible Unicode characters? type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 08:20:54 2018 From: report at bugs.python.org (Jiri Prajzner) Date: Fri, 02 Mar 2018 13:20:54 +0000 Subject: [New-bugs-announce] [issue32983] UnicodeDecodeError 'ascii' codec can't decode byte in position - ordinal not in range(128) Message-ID: <1519996854.15.0.467229070634.issue32983@psf.upfronthosting.co.za> New submission from Jiri Prajzner : Locate "Barra de navegaci?"->"T?rmino de b?squeda o direcci?n" and browse "http://www.columbia.edu/~fdc/utf8/" website - results in: Exception UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 73: ordinal not in range(128) If i correct the word navegaci? to navegaci?n, there's no UnicodeDecodeError ---------- components: Unicode messages: 313132 nosy: Jiri Prajzner, ezio.melotti, vstinner priority: normal severity: normal status: open title: UnicodeDecodeError 'ascii' codec can't decode byte in position - ordinal not in range(128) type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 12:13:39 2018 From: report at bugs.python.org (Terry J. 
Reedy) Date: Fri, 02 Mar 2018 17:13:39 +0000 Subject: [New-bugs-announce] [issue32984] IDLE: set and unset __file__ for startup files Message-ID: <1520010819.42.0.467229070634.issue32984@psf.upfronthosting.co.za> New submission from Terry J. Reedy : 'python somefile.py' sets main.__file__ to 'somefile.py'. 'python' leaves __file__ unset. If PYTHONSTARTUP is set to somefile.py, 'python' executes somefile.py in main with __file__ set to 'somefile.py', then unsets __file__ before the >>> prompt, as if somefile has not been executed. Any explicit setting of __file__ in somefile is undone. tem2.py: print(__name__, __file__) __file__ = 'abc.py' > F:\dev\3x> set PYTHONSTARTUP=f:/python/a/tem2.py > F:\dev\3x> python ... __main__ f:/python/a/tem2.py >>> __file__ Traceback (most recent call last): File "", line 1, in NameError: name '__file__' is not defined With IDLE, when 'python -m idlelib.idle' is run with '-s' or '-r f:/python/a/tem2.py', NameError is raised for the print in tem2.py. This was reported this SO question. https://stackoverflow.com/questions/49054093/cannot-use-file-when-opening-module-in-idle In both cases, the file is run with execfile(filename). def execfile(self, filename, source=None): "Execute an existing file" if source is None: with tokenize.open(filename) as fp: source = fp.read() My guess is that wrapping the source with f"__file__ = {filename}\n" and "del __file__\n" should work. ---------- assignee: terry.reedy components: IDLE messages: 313140 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: set and unset __file__ for startup files type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 12:31:25 2018 From: report at bugs.python.org (=?utf-8?q?Alicia_Boya_Garc=C3=ADa?=) Date: Fri, 02 Mar 2018 17:31:25 +0000 Subject: [New-bugs-announce] [issue32985] subprocess.Popen: Confusing documentation for restore_signals Message-ID: <1520011885.85.0.467229070634.issue32985@psf.upfronthosting.co.za> New submission from Alicia Boya Garc?a : The docs state: > If restore_signals is true (the default) all signals that Python has set to SIG_IGN are restored to SIG_DFL in the child process before the exec. Currently this includes the SIGPIPE, SIGXFZ and SIGXFSZ signals. (POSIX only) The first phrase and the second may seem contradictory for anyone that uses signal handling in their code. I would definitely not describe the set of "SIGPIPE, SIGXFZ and SIGXFSZ" as "all signals that Python has set". It actually means "all the signals that Python set at startup"; the user could have changed different signals than these (e.g. SIGINT) for various purposes (e.g. not getting a KeyboardInterrupt while an interactive process is running). The current wording may suggest that all signals changed by the user are reset in the child process, but this is not the case -- I could confirm by looking at _Py_RestoreSignals(). Only these three specific signals are restored. 
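A minimal POSIX-only sketch of the behaviour described above (not part of any patch, just an illustration): a signal that user code set to SIG_IGN in the parent is still inherited as ignored by the child, because restore_signals only resets the dispositions that Python itself ignored at interpreter startup.

import signal
import subprocess
import sys

# User code ignores SIGUSR1 in the parent; SIGUSR1 is not one of the signals
# Python set to SIG_IGN at startup, so restore_signals does not touch it.
signal.signal(signal.SIGUSR1, signal.SIG_IGN)

child_code = "import signal; print('child sees SIGUSR1 as', signal.getsignal(signal.SIGUSR1))"

# restore_signals=True is already the default; the child still reports
# SIGUSR1 as SIG_IGN because the ignored disposition survives exec.
subprocess.run([sys.executable, "-c", child_code], restore_signals=True)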
---------- components: Library (Lib) messages: 313146 nosy: ntrrgc priority: normal severity: normal status: open title: subprocess.Popen: Confusing documentation for restore_signals versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 12:51:29 2018 From: report at bugs.python.org (M J Harvey) Date: Fri, 02 Mar 2018 17:51:29 +0000 Subject: [New-bugs-announce] [issue32986] multiprocessing, default assumption of Pool size unhelpful Message-ID: <1520013089.33.0.467229070634.issue32986@psf.upfronthosting.co.za> New submission from M J Harvey : Hi, multiprocessing's default assumption about Pool size is os.cpu_count(), i.e. all the cores visible to the OS. This is tremendously unhelpful when running multiprocessing code inside an HPC batch system (PBS Pro in my case), as there's no way to hint to the code that the # of cpus actually allocated to it may be fewer. It's quite tedious to have to explain this to every single person trying to use it. Proposal: multiprocessing should look for a hint for default Pool size from the environment variable "NCPUS" which most batch systems set. If that's not set, or its value is invalid, fall back to os.cpu_count() as before ---------- components: Library (Lib) messages: 313150 nosy: M J Harvey priority: normal severity: normal status: open title: multiprocessing, default assumption of Pool size unhelpful versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 18:32:49 2018 From: report at bugs.python.org (Steve B) Date: Fri, 02 Mar 2018 23:32:49 +0000 Subject: [New-bugs-announce] [issue32987] tokenize.py parses unicode identifiers incorrectly Message-ID: <1520033569.84.0.467229070634.issue32987@psf.upfronthosting.co.za> New submission from Steve B : Here is an example involving the unicode character MIDDLE DOT · : The line ab·cd = 7 is valid Python 3 code and is happily accepted by the CPython interpreter. However, tokenize.py does not like it. It says that the middle-dot is an error token. Here is an example you can run to see that: import tokenize from io import BytesIO test = 'ab·cd = 7'.encode('utf-8') x = tokenize.tokenize(BytesIO(test).readline) for i in x: print(i) For reference, the official definition of identifiers is: https://docs.python.org/3.6/reference/lexical_analysis.html#identifiers and details about MIDDLE DOT are at https://www.unicode.org/Public/10.0.0/ucd/PropList.txt MIDDLE DOT has the "Other_ID_Continue" property, so I think the interpreter is behaving correctly (i.e. consistent with the documented spec), while tokenize.py is wrong. 
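As a cross-check of the claim above, a small sketch (an illustration, not part of the report) comparing the compiler's view, via str.isidentifier() and compile(), with tokenize's view of the same source:

import tokenize
from io import BytesIO

src = 'ab\u00b7cd = 7'                     # U+00B7 MIDDLE DOT inside the name
print('ab\u00b7cd'.isidentifier())         # True: same identifier rules as the compiler
exec(compile(src, '<test>', 'exec'))       # accepted and executed by the interpreter

toks = list(tokenize.tokenize(BytesIO(src.encode('utf-8')).readline))
print([tokenize.tok_name[t.type] for t in toks])   # includes ERRORTOKEN for the middle dot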
---------- components: Library (Lib), Unicode messages: 313168 nosy: ezio.melotti, steve, vstinner priority: normal severity: normal status: open title: tokenize.py parses unicode identifiers incorrectly type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 19:43:22 2018 From: report at bugs.python.org (Adam Williamson) Date: Sat, 03 Mar 2018 00:43:22 +0000 Subject: [New-bugs-announce] [issue32988] datetime.datetime.strftime('%s') always uses local timezone, even with aware datetimes Message-ID: <1520037802.51.0.467229070634.issue32988@psf.upfronthosting.co.za> New submission from Adam Williamson : Test script: import pytz import datetime utc = pytz.timezone('UTC') print(datetime.datetime(2017, 1, 1, tzinfo=utc).strftime('%s')) Try running it with various system timezones: [adamw at xps13k pagure (more-timezone-fun %)]$ TZ='UTC' python /tmp/test2.py 1483228800 [adamw at xps13k pagure (more-timezone-fun %)]$ TZ='America/Winnipeg' python /tmp/test2.py 1483250400 [adamw at xps13k pagure (more-timezone-fun %)]$ TZ='America/Vancouver' python /tmp/test2.py 1483257600 That's Python 2.7.14; same results with Python 3.6.4. This does not seem correct. The correct Unix time for an aware datetime object should be a constant: for 2017-01-01 00:00 UTC it *is* 1483228800 . No matter what the system's local timezone, that should be the output of strftime('%s'), surely. What it seems to be doing instead is just outputting the Unix time for 2017-01-01 00:00 in the system timezone. I *do* note that strftime('%s') is completely undocumented in Python; neither https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior nor https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior mentions it. However, it does exist, and is used in the real world; I found this usage of it, and the bug, in a real project, Pagure. ---------- components: Library (Lib) messages: 313169 nosy: adamwill priority: normal severity: normal status: open title: datetime.datetime.strftime('%s') always uses local timezone, even with aware datetimes versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 2 19:44:36 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 03 Mar 2018 00:44:36 +0000 Subject: [New-bugs-announce] [issue32989] IDLE: Incorrect signature in call from editor to pyparse.find_good_parse_start Message-ID: <1520037876.54.0.467229070634.issue32989@psf.upfronthosting.co.za> New submission from Cheryl Sabella : >From msg312726 on issue32880. The call to find_good_parse_start: bod = y.find_good_parse_start(self.context_use_ps1, self._build_char_in_string_func(startatindex)) sends 3 parameters. And in pyparse.find_good_parse_start(), the signature allows 3. 
However, the signature is: def find_good_parse_start(self, is_char_in_string=None, _synchre=_synchre): This means that the `False` value in `self.use_context_ps1` is the first value instead of the function, so pyparse is always executing: if not is_char_in_string: # no clue -- make the caller pass everything return None Here's the commit that changed the signature: https://github.com/python/cpython/commit/b17544551fc8dfd1304d5679c6e444cad4d34d97 ---------- assignee: terry.reedy components: IDLE messages: 313170 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Incorrect signature in call from editor to pyparse.find_good_parse_start type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 3 04:25:58 2018 From: report at bugs.python.org (Andrea Celletti) Date: Sat, 03 Mar 2018 09:25:58 +0000 Subject: [New-bugs-announce] [issue32990] Supporting extensible format(PCM) for wave.open(read-mode) Message-ID: <1520069158.75.0.467229070634.issue32990@psf.upfronthosting.co.za> New submission from Andrea Celletti : The wave.Wave_read class currently supports 8, 16, 24, and 32 bit PCM files. Wave files are only supported if the wFormatTag in the format chunk matches the flag WAVE_FORMAT_PCM, which is correct but incomplete for 24 bit files. According to the specification the WAVE_FORMAT_EXTENSIBLE format should be used whenever the actual number of bits/sample is not equal to the container size. Based on this specification, most applications export 24 bit PCM with the WAVE_FORMAT_EXTENSIBLE flag since 24 is stored in container size 32. Importing these files causes wave.open to raise an exception. The specification also explains how to detect 24PCM exported in this fashion as "The first two bytes of the GUID form the sub-code specifying the data format code, e.g. WAVE_FORMAT_PCM.". In essence, we have to look at the first two bytes of the SubFormat tag and that will tell us if this file is PCM. Based on this premise, it appears to me that there is no reason for not adding support for both format specification as the rest of the file is exactly the same for both. I am attaching a file that can be used to test the exception being raised. Source: http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html ---------- components: Library (Lib) files: pluck-pcm24-ext.wav messages: 313183 nosy: acelletti priority: normal severity: normal status: open title: Supporting extensible format(PCM) for wave.open(read-mode) type: enhancement Added file: https://bugs.python.org/file47467/pluck-pcm24-ext.wav _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 3 23:47:12 2018 From: report at bugs.python.org (Jason R. Coombs) Date: Sun, 04 Mar 2018 04:47:12 +0000 Subject: [New-bugs-announce] [issue32991] AttributeError in doctest.DocTestFinder.find Message-ID: <1520138832.88.0.467229070634.issue32991@psf.upfronthosting.co.za> New submission from Jason R. Coombs : In Python 3.6, one could find doctests on a namespace package: ``` $ mkdir foo $ python3.6 Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import foo >>> foo.__file__ Traceback (most recent call last): File "", line 1, in AttributeError: module 'foo' has no attribute '__file__' >>> import doctest >>> doctest.DocTestFinder().find(foo) [] ``` In recent builds of Python 3.7, these namespace packages inherited a `__file__` attribute whose value is `None`, which causes DocTestFinder.find to fail: ``` $ python Python 3.7.0b2 (tags/v3.7.0b2:b0ef5c979b, Feb 27 2018, 20:38:21) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import doctest >>> import foo >>> foo.__file__ >>> doctest.DocTestFinder().find(foo) Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/doctest.py", line 893, in find file = inspect.getsourcefile(obj) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/inspect.py", line 687, in getsourcefile if any(filename.endswith(s) for s in all_bytecode_suffixes): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/inspect.py", line 687, in if any(filename.endswith(s) for s in all_bytecode_suffixes): AttributeError: 'NoneType' object has no attribute 'endswith' ``` Scanning through the recent changes, issue32305 seems to be related, but when I look at the code ancestry, I can't see the related commits on the 3.7 branch, so I couldn't immediately confirm if it is indeed implicated. I encountered this issue when testing jaraco.functools on Python 3.7.0b2 on macOS, but did not encounter it on Python 3.7.0a4+ as found on the Travis nightly builds. More details are logged in https://github.com/pytest-dev/pytest/issues/3276. I'm not sure yet whether inspect.getfile should be adapted to raise a TypeError in this case, or if doctest.DocTestFinder.find should account for getfile returning None. If we choose to update inspect.getfile, I should caution there's a bit of copy/paste there, so two branches of code will need to be updated. Barry, I'd love to hear what your thoughts are on this and what you'd like to do. And definitely let me know if I can help. ---------- components: Interpreter Core, Library (Lib) keywords: 3.7regression messages: 313197 nosy: barry, jason.coombs priority: normal severity: normal status: open title: AttributeError in doctest.DocTestFinder.find versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 4 07:39:16 2018 From: report at bugs.python.org (Petter Strandmark) Date: Sun, 04 Mar 2018 12:39:16 +0000 Subject: [New-bugs-announce] [issue32992] unittest: Automatically run coroutines in a loop Message-ID: <1520167156.98.0.467229070634.issue32992@psf.upfronthosting.co.za> New submission from Petter Strandmark : I am wondering whether it would be useful for unittest.TestCase to automatically run test methods that are coroutines within the default asyncio loop. Example: class TestAsync(unittest.TestCase): async def test_foo(self): result = await foo() self.assertEqual(result, 42) the test runner would then run test_foo within the default loop. If needed, we could also add functionality for providing a loop other than the default to the test class. It seems to me that this functionality would be pretty easy to add to Lib/unittest/case.py:615 . Personally, I think it would be useful. Right now I have to append every test case with a personal @run_in_loop decorator and I think unittest.TestCase could do this for me without breaking anything. 
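A rough sketch of the kind of per-test decorator mentioned above, assuming the default asyncio event loop is acceptable; the proposal is essentially to fold this behaviour into unittest.TestCase itself (foo() in the report is hypothetical, so asyncio.sleep is used as a stand-in):

import asyncio
import functools
import unittest

def run_in_loop(coro_func):
    """Run a coroutine test method to completion on the default event loop."""
    @functools.wraps(coro_func)
    def wrapper(self, *args, **kwargs):
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(coro_func(self, *args, **kwargs))
    return wrapper

class TestAsync(unittest.TestCase):
    @run_in_loop
    async def test_answer(self):
        await asyncio.sleep(0)   # stand-in for the awaited call
        self.assertEqual(42, 42)

if __name__ == '__main__':
    unittest.main()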
---------- components: Library (Lib) messages: 313211 nosy: Petter Strandmark priority: normal severity: normal status: open title: unittest: Automatically run coroutines in a loop type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 4 08:05:12 2018 From: report at bugs.python.org (yao zhihua) Date: Sun, 04 Mar 2018 13:05:12 +0000 Subject: [New-bugs-announce] [issue32993] issue30657 Incomplete fix Message-ID: <1520168712.39.0.467229070634.issue32993@psf.upfronthosting.co.za> New submission from yao zhihua : Due to the incomplete fix for CVE-2011-1521, urllib and urllib2 exist for this vulnerability and I tested on the version of Python 3.4.8 (default, Mar 4 2018, 20:37:04).I am sorry that I do not know how to fix it. ---------- components: Library (Lib) files: poc.py messages: 313212 nosy: yao zhihua priority: normal severity: normal status: open title: issue30657 Incomplete fix type: security versions: Python 3.4 Added file: https://bugs.python.org/file47469/poc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 4 15:47:27 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 04 Mar 2018 20:47:27 +0000 Subject: [New-bugs-announce] [issue32994] Building the html documentation is broken Message-ID: <1520196447.07.0.467229070634.issue32994@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : $ make html BLURB="python3 -m blurb" mkdir -p build Building NEWS from Misc/NEWS.d with blurb PATH=./venv/bin:$PATH sphinx-build -b html -d build/doctrees -D latex_elements.papersize= . build/html Running Sphinx v1.5.6 loading pickled environment... not yet created Theme error: no theme named 'python_docs_theme' found (missing theme.conf?) Makefile:43: recipe for target 'build' failed make: *** [build] Error 1 ---------- components: Build messages: 313219 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Building the html documentation is broken type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 05:11:32 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 05 Mar 2018 10:11:32 +0000 Subject: [New-bugs-announce] [issue32995] Add a glossary entry for context variables Message-ID: <1520244692.22.0.467229070634.issue32995@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : I think the term context variable is worth adding into the glossary. ---------- assignee: docs at python components: Documentation messages: 313241 nosy: docs at python, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Add a glossary entry for context variables type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 06:56:46 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 05 Mar 2018 11:56:46 +0000 Subject: [New-bugs-announce] [issue32996] Improve What's New in 3.7 Message-ID: <1520251006.69.0.467229070634.issue32996@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The following PR fixes and improves formatting in the "What's New in Python 3.7" document, adds links to issues and authors names. This is just one step. 
Somebody need to review NEWS entries and adds corresponding What's New entries if they are worth this, and later edit the wording of the final document. ---------- assignee: docs at python components: Documentation messages: 313243 nosy: docs at python, ned.deily, serhiy.storchaka priority: normal severity: normal status: open title: Improve What's New in 3.7 type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 10:12:47 2018 From: report at bugs.python.org (James Davis) Date: Mon, 05 Mar 2018 15:12:47 +0000 Subject: [New-bugs-announce] [issue32997] Catastrophic backtracking in fpformat Message-ID: <1520262767.1.0.467229070634.issue32997@psf.upfronthosting.co.za> New submission from James Davis : The decoder regex used to parse numbers in the fpformat module is vulnerable to catastrophic backtracking. '^([-+]?)0*(\d*)((?:\.\d*)?)(([eE][-+]?\d+)?)$' The substructure '0*(\d*)' is quadratic. An attack string like '+000....0++' blows up. There is a risk of DOS (REDOS) if a web app uses this module to format untrusted strings. ---------- components: Library (Lib) messages: 313249 nosy: davisjam priority: normal severity: normal status: open title: Catastrophic backtracking in fpformat type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 10:40:37 2018 From: report at bugs.python.org (mike bayer) Date: Mon, 05 Mar 2018 15:40:37 +0000 Subject: [New-bugs-announce] [issue32998] regular expression regression in python 3.7 Message-ID: <1520264437.92.0.467229070634.issue32998@psf.upfronthosting.co.za> New submission from mike bayer : demo: import re inner = 'VARCHAR(30) COLLATE "en_US"' result = re.sub( r'((?: COLLATE.*)?)$', r'FOO\1', inner ) print(inner) print(result) in all Python versions prior to 3.7: VARCHAR(30) COLLATE "en_US" VARCHAR(30)FOO COLLATE "en_US" in Python 3.7.0b2: VARCHAR(30) COLLATE "en_US" VARCHAR(30)FOO COLLATE "en_US"FOO platform: Fedora 27 python build: Python 3.7.0b2 (default, Mar 5 2018, 09:37:32) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)] on linux ---------- components: Library (Lib) messages: 313251 nosy: zzzeek priority: normal severity: normal status: open title: regular expression regression in python 3.7 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 11:19:41 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Mon, 05 Mar 2018 16:19:41 +0000 Subject: [New-bugs-announce] [issue32999] issubclass(obj, abc.ABC) causes a segfault Message-ID: <1520266781.34.0.467229070634.issue32999@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : Demo: >>> from abc import ABC >>> issubclass(1, ABC) Segmentation fault (core dumped) The stack trace is attached. 
Before reimplementation of abc in C, the result was confusing too: Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32 >>> from abc import ABC >>> issubclass(1, ABC) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "abc.py", line 230, in __subclasscheck__ File "_weakrefset.py", line 84, in add TypeError: cannot create weak reference to 'int' object ---------- components: Extension Modules files: stack-trace.txt messages: 313259 nosy: izbyshev, levkivskyi priority: normal severity: normal status: open title: issubclass(obj, abc.ABC) causes a segfault type: crash versions: Python 3.8 Added file: https://bugs.python.org/file47470/stack-trace.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 11:30:03 2018 From: report at bugs.python.org (John Brearley) Date: Mon, 05 Mar 2018 16:30:03 +0000 Subject: [New-bugs-announce] [issue33000] IDLEX GUI consumes all RAM for scrollback buffer, uses 161Bytes / character stored Message-ID: <1520267403.32.0.467229070634.issue33000@psf.upfronthosting.co.za> New submission from John Brearley : While running a tensorflow script in the IDLEX GUI that runs for 8 million steps and produces 2 lines of stdout per step, my PC used all 16GB RAM and crashed the python process, not to mention messed up other apps, like Firefox & Norton AntiVirus. While the RAM was recovered, Firefox started responding, but Norton Antivirus didn't, so the PC had to be rebooted. The issue is easily reproduced with the short print loop that dumps 20K lines of stdout, at 171 characters / line on the IDLEX GUI window. When the script is run in the IDLEX GUI, the Windows Task Manager shows the python process start at 19MB RAM consumption, then grows to 569MB RAM consumption. If I run the script a second time in the same IDLEX GUI window, it grows to 1.1GB RAM consumption. So 20K lines of output at 171 characters / line ("i: nnnnn" prefix + 2 * 80 byte string + newline) = 3.4M total characters stored in the scrollback buffer. The delta memory consumed was 569MB - 19MB = 550MB. The RAM consumed / character is 550MB / 3.4M = 161 bytes / character. This seems excessively inefficient. I now understand how the tensorflow script would stop after 550K iterations and the 550K lines of stdout in the IDLEX GUI would consume all 16GB RAM on my PC. BTW, when I run the same test script in the WinPython command prompt window, it only consumes 4MB RAM while it runs. However the scrollback buffer is limited to 10K lines, wrapped at the 80 character mark, so much less data saved. I haven't found any options in IDLEX GUI to limit the scrollback buffer size. My request is to review the scrollback memory storage algorithms. If nothing can be done to improve them, then please add a circular buffer to limit the memory consumption. # Print loop to test memory consumption in Python IDLEX GUI. s1 = "0123456789" s2 = s1+s1+s1+s1+s1+s1+s1+s1 for i in range(20000): print("i:", i, s2, s2) I am using Python 3.6.4 on Windows 7 PC, Intel i7-4770S, 3.1GHz, 16GB RAM. 
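Not IDLE's actual code, just a minimal Tkinter sketch of the circular-buffer idea requested above: cap a Text widget at roughly MAX_LINES lines by deleting the oldest lines whenever new output is appended.

import tkinter as tk

MAX_LINES = 10000   # hypothetical cap

def append_output(text_widget, line):
    text_widget.insert('end', line + '\n')
    # Line count of the widget contents (Tk always keeps a trailing empty line).
    current = int(text_widget.index('end-1c').split('.')[0])
    if current > MAX_LINES:
        # Drop the oldest lines so roughly MAX_LINES lines remain.
        text_widget.delete('1.0', f'{current - MAX_LINES + 1}.0')

root = tk.Tk()
out = tk.Text(root, height=24, width=100)
out.pack()
for i in range(20000):
    append_output(out, 'i: %d %s %s' % (i, '0123456789' * 8, '0123456789' * 8))
root.mainloop()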
---------- assignee: terry.reedy components: IDLE messages: 313263 nosy: jbrearley, terry.reedy priority: normal severity: normal status: open title: IDLEX GUI consumes all RAM for scrollback buffer, uses 161Bytes / character stored type: resource usage versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 13:04:42 2018 From: report at bugs.python.org (Steve Dower) Date: Mon, 05 Mar 2018 18:04:42 +0000 Subject: [New-bugs-announce] [issue33001] Buffer overflow vulnerability in os.symlink on Windows Message-ID: <1520273082.67.0.467229070634.issue33001@psf.upfronthosting.co.za> New submission from Steve Dower : On February 27th, 2018, the Python Security Response team was notified of a buffer overflow issue in the os.symlink() method on Windows. The issue affects all versions of Python between 3.2 and 3.6.4, including the 3.7 beta releases. It will be patched for the next releases of 3.4, 3.5, 3.6 and 3.7. Scripts may be vulnerable if they use os.symlink() on Windows and an attacker is able to influence the location where links are created. As os.symlink requires administrative privileges on most versions of Windows, exploits using this vulnerability are likely to achieve escalation of privilege. Besides applying the fix to CPython, scripts can also ensure that the length of each path argument is less than 260, and if the source is a relative path, that its combination with the destination is also shorter than 260 characters. That is: assert (len(src) < 260 and len(dest) < 260 and len(os.path.join(os.path.dirname(dest), src)) < 260) os.symlink(src, dest) Scripts that explicitly pass the target_is_directory argument as True are not vulnerable. Also, scripts on Python 3.5 that use bytes for paths are not vulnerable, because of a combination of stack layout and added parameter validation. I will be requesting a CVE for this once the patches are applied to maintenance branches, and then notifying the security-announce list. The patch has been reviewed by the PSRT and reporter, and while it prevents the buffer overflow, it does not raise any new errors or enable the use of long paths when creating symlinks. Many thanks to Alexey Izbyshev for the report, and helping us work through developing the patch. ---------- assignee: steve.dower components: Windows keywords: security_issue messages: 313275 nosy: izbyshev, paul.moore, steve.dower, tim.golden, zach.ware priority: critical severity: normal status: open title: Buffer overflow vulnerability in os.symlink on Windows type: security versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 16:07:14 2018 From: report at bugs.python.org (Josh Rosenberg) Date: Mon, 05 Mar 2018 21:07:14 +0000 Subject: [New-bugs-announce] [issue33002] Making a class formattable as hex/oct integer requires both __int__ and __index__ for no good reason Message-ID: <1520284034.09.0.467229070634.issue33002@psf.upfronthosting.co.za> New submission from Josh Rosenberg : In Python 2, making a user-defined class support formatting using the integer-specific type codes required that __int__ be defined and nothing else (that is, '%x' % Foo() only required Foo to provide a __int__ method). 
In Python 3, this was changed to perform the conversion via __index__ for the %o, %x and %X format types (to match how oct and hex behave), not __int__, but the pre-check for validity in unicodeobject.c's mainformatlong function is still based on PyNumber_Check, not PyIndex_Check, and PyNumber_Check is concerned solely with __int__ and __float__, not __index__. This means that a class with __index__ but not __int__ can't be used with the %o/%x/%X format codes (even though hex(mytype) and oct(mytype) work just fine). It seems to me that either: 1. PyNumber_Check should be a superset of PyIndex_Check (broader change, probably out of scope) or 2. mainformatlong should restrict the scope of the PyNumber_Check test to only being used for the non-'o'/'x'/'X' tests (where it's needed to avoid coercing strings and the like to integer). Change #2 should be safe, with no major side-effects; since PyLong and subclasses always passed the existing PyNumber_Check test anyway, and PyNumber_Index already performs PyIndex_Check, the only path that needs PyNumber_Check is the one that ends in calling PyNumber_Long. ---------- components: Interpreter Core messages: 313285 nosy: josh.r priority: normal severity: normal status: open title: Making a class formattable as hex/oct integer requires both __int__ and __index__ for no good reason versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 17:40:09 2018 From: report at bugs.python.org (Jason Madden) Date: Mon, 05 Mar 2018 22:40:09 +0000 Subject: [New-bugs-announce] [issue33005] 3.7.0b2 Interpreter crash in dev mode (or with PYTHONMALLOC=debug) with 'python -X dev -c 'import os; os.fork()' Message-ID: <1520289609.59.0.467229070634.issue33005@psf.upfronthosting.co.za> New submission from Jason Madden : At the request of Victor Stinner on twitter, I ran the gevent test suite with Python 3.7.0b2 with the new '-X dev' argument and discovered an interpreter crash. 
With a bit of work, it boiled down to a very simple command: $ env -i .runtimes/snakepit/python3.7.0b2 -X dev -c 'import os; os.fork()' *** Error in `.runtimes/snakepit/python3.7.0b2': munmap_chunk(): invalid pointer: 0x0000000001c43a80 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f5a971607e5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x1a8)[0x7f5a9716d698] .runtimes/snakepit/python3.7.0b2(_PyRuntimeState_Fini+0x30)[0x515d90] .runtimes/snakepit/python3.7.0b2[0x51445f] .runtimes/snakepit/python3.7.0b2[0x42ce40] .runtimes/snakepit/python3.7.0b2(_Py_UnixMain+0x7b)[0x42eaab] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f5a97109830] .runtimes/snakepit/python3.7.0b2(_start+0x29)[0x42a0d9] ======= Memory map: ======== 00400000-00689000 r-xp 00000000 08:01 177409 //.runtimes/versions/python3.7.0b2/bin/python3.7 00888000-00889000 r--p 00288000 08:01 177409 //.runtimes/versions/python3.7.0b2/bin/python3.7 00889000-008f3000 rw-p 00289000 08:01 177409 //.runtimes/versions/python3.7.0b2/bin/python3.7 008f3000-00914000 rw-p 00000000 00:00 0 01b84000-01c64000 rw-p 00000000 00:00 0 [heap] 7f5a96052000-7f5a96068000 r-xp 00000000 08:01 265946 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f5a96068000-7f5a96267000 ---p 00016000 08:01 265946 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f5a96267000-7f5a96268000 rw-p 00015000 08:01 265946 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f5a96268000-7f5a96273000 r-xp 00000000 08:01 268943 /lib/x86_64-linux-gnu/libnss_files-2.23.so 7f5a96273000-7f5a96472000 ---p 0000b000 08:01 268943 /lib/x86_64-linux-gnu/libnss_files-2.23.so 7f5a96472000-7f5a96473000 r--p 0000a000 08:01 268943 /lib/x86_64-linux-gnu/libnss_files-2.23.so 7f5a96473000-7f5a96474000 rw-p 0000b000 08:01 268943 /lib/x86_64-linux-gnu/libnss_files-2.23.so 7f5a96474000-7f5a9647a000 rw-p 00000000 00:00 0 7f5a9647a000-7f5a96485000 r-xp 00000000 08:01 268947 /lib/x86_64-linux-gnu/libnss_nis-2.23.so 7f5a96485000-7f5a96684000 ---p 0000b000 08:01 268947 /lib/x86_64-linux-gnu/libnss_nis-2.23.so 7f5a96684000-7f5a96685000 r--p 0000a000 08:01 268947 /lib/x86_64-linux-gnu/libnss_nis-2.23.so 7f5a96685000-7f5a96686000 rw-p 0000b000 08:01 268947 /lib/x86_64-linux-gnu/libnss_nis-2.23.so 7f5a96686000-7f5a9669c000 r-xp 00000000 08:01 268927 /lib/x86_64-linux-gnu/libnsl-2.23.so 7f5a9669c000-7f5a9689b000 ---p 00016000 08:01 268927 /lib/x86_64-linux-gnu/libnsl-2.23.so 7f5a9689b000-7f5a9689c000 r--p 00015000 08:01 268927 /lib/x86_64-linux-gnu/libnsl-2.23.so 7f5a9689c000-7f5a9689d000 rw-p 00016000 08:01 268927 /lib/x86_64-linux-gnu/libnsl-2.23.so 7f5a9689d000-7f5a9689f000 rw-p 00000000 00:00 0 7f5a9689f000-7f5a968a7000 r-xp 00000000 08:01 268938 /lib/x86_64-linux-gnu/libnss_compat-2.23.so 7f5a968a7000-7f5a96aa6000 ---p 00008000 08:01 268938 /lib/x86_64-linux-gnu/libnss_compat-2.23.so 7f5a96aa6000-7f5a96aa7000 r--p 00007000 08:01 268938 /lib/x86_64-linux-gnu/libnss_compat-2.23.so 7f5a96aa7000-7f5a96aa8000 rw-p 00008000 08:01 268938 /lib/x86_64-linux-gnu/libnss_compat-2.23.so 7f5a96acc000-7f5a96b4c000 rw-p 00000000 00:00 0 7f5a96b4c000-7f5a96b4e000 r-xp 00000000 08:01 184551 //.runtimes/versions/python3.7.0b2/lib/python3.7/lib-dynload/_heapq.cpython-37m-x86_64-linux-gnu.so 7f5a96b4e000-7f5a96d4e000 ---p 00002000 08:01 184551 //.runtimes/versions/python3.7.0b2/lib/python3.7/lib-dynload/_heapq.cpython-37m-x86_64-linux-gnu.so 7f5a96d4e000-7f5a96d4f000 r--p 00002000 08:01 184551 //.runtimes/versions/python3.7.0b2/lib/python3.7/lib-dynload/_heapq.cpython-37m-x86_64-linux-gnu.so 7f5a96d4f000-7f5a96d51000 rw-p 00003000 
08:01 184551 //.runtimes/versions/python3.7.0b2/lib/python3.7/lib-dynload/_heapq.cpython-37m-x86_64-linux-gnu.so 7f5a96d51000-7f5a96e11000 rw-p 00000000 00:00 0 7f5a96e11000-7f5a970e9000 r--p 00000000 08:01 133586 /usr/lib/locale/locale-archive 7f5a970e9000-7f5a972a9000 r-xp 00000000 08:01 268930 /lib/x86_64-linux-gnu/libc-2.23.so 7f5a972a9000-7f5a974a9000 ---p 001c0000 08:01 268930 /lib/x86_64-linux-gnu/libc-2.23.so 7f5a974a9000-7f5a974ad000 r--p 001c0000 08:01 268930 /lib/x86_64-linux-gnu/libc-2.23.so 7f5a974ad000-7f5a974af000 rw-p 001c4000 08:01 268930 /lib/x86_64-linux-gnu/libc-2.23.so 7f5a974af000-7f5a974b3000 rw-p 00000000 00:00 0 7f5a974b3000-7f5a975bb000 r-xp 00000000 08:01 268926 /lib/x86_64-linux-gnu/libm-2.23.so 7f5a975bb000-7f5a977ba000 ---p 00108000 08:01 268926 /lib/x86_64-linux-gnu/libm-2.23.so 7f5a977ba000-7f5a977bb000 r--p 00107000 08:01 268926 /lib/x86_64-linux-gnu/libm-2.23.so 7f5a977bb000-7f5a977bc000 rw-p 00108000 08:01 268926 /lib/x86_64-linux-gnu/libm-2.23.so 7f5a977bc000-7f5a977be000 r-xp 00000000 08:01 268937 /lib/x86_64-linux-gnu/libutil-2.23.so 7f5a977be000-7f5a979bd000 ---p 00002000 08:01 268937 /lib/x86_64-linux-gnu/libutil-2.23.so 7f5a979bd000-7f5a979be000 r--p 00001000 08:01 268937 /lib/x86_64-linux-gnu/libutil-2.23.so 7f5a979be000-7f5a979bf000 rw-p 00002000 08:01 268937 /lib/x86_64-linux-gnu/libutil-2.23.so 7f5a979bf000-7f5a979c2000 r-xp 00000000 08:01 268932 /lib/x86_64-linux-gnu/libdl-2.23.so 7f5a979c2000-7f5a97bc1000 ---p 00003000 08:01 268932 /lib/x86_64-linux-gnu/libdl-2.23.so 7f5a97bc1000-7f5a97bc2000 r--p 00002000 08:01 268932 /lib/x86_64-linux-gnu/libdl-2.23.so 7f5a97bc2000-7f5a97bc3000 rw-p 00003000 08:01 268932 /lib/x86_64-linux-gnu/libdl-2.23.so 7f5a97bc3000-7f5a97bdb000 r-xp 00000000 08:01 268929 /lib/x86_64-linux-gnu/libpthread-2.23.so 7f5a97bdb000-7f5a97dda000 ---p 00018000 08:01 268929 /lib/x86_64-linux-gnu/libpthread-2.23.so 7f5a97dda000-7f5a97ddb000 r--p 00017000 08:01 268929 /lib/x86_64-linux-gnu/libpthread-2.23.so 7f5a97ddb000-7f5a97ddc000 rw-p 00018000 08:01 268929 /lib/x86_64-linux-gnu/libpthread-2.23.so 7f5a97ddc000-7f5a97de0000 rw-p 00000000 00:00 0 7f5a97de0000-7f5a97e06000 r-xp 00000000 08:01 268928 /lib/x86_64-linux-gnu/ld-2.23.so 7f5a97e10000-7f5a97fb5000 rw-p 00000000 00:00 0 7f5a97fb5000-7f5a97fdc000 r--p 00000000 08:01 135047 /usr/lib/locale/C.UTF-8/LC_CTYPE 7f5a97fdc000-7f5a97fe1000 rw-p 00000000 00:00 0 7f5a97ffd000-7f5a97ffe000 rw-p 00000000 00:00 0 7f5a97ffe000-7f5a98005000 r--s 00000000 08:01 529048 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache 7f5a98005000-7f5a98006000 r--p 00025000 08:01 268928 /lib/x86_64-linux-gnu/ld-2.23.so 7f5a98006000-7f5a98007000 rw-p 00026000 08:01 268928 /lib/x86_64-linux-gnu/ld-2.23.so 7f5a98007000-7f5a98008000 rw-p 00000000 00:00 0 7fff79aeb000-7fff79b0c000 rw-p 00000000 00:00 0 [stack] 7fff79b1d000-7fff79b20000 r--p 00000000 00:00 0 [vvar] 7fff79b20000-7fff79b22000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] The crash is reproducible on Ubuntu 16.04 with a pyenv-built 3.7.0b2 and on macOS 10.13 with the python.org build. 
Individually setting PYTHONMALLOC=debug also triggers the crash: $ PYTHONMALLOC=debug /usr/local/bin/python3.7 -c 'import os; os.fork()' Python(16996,0x7fffb1879340) malloc: *** error for object 0x7f90e6d01ff0: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug ---------- components: Interpreter Core messages: 313296 nosy: jmadden priority: normal severity: normal status: open title: 3.7.0b2 Interpreter crash in dev mode (or with PYTHONMALLOC=debug) with 'python -X dev -c 'import os; os.fork()' versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 17:36:31 2018 From: report at bugs.python.org (W. Trevor King) Date: Mon, 05 Mar 2018 22:36:31 +0000 Subject: [New-bugs-announce] [issue33003] urllib: Document parse_http_list Message-ID: <1520289391.9.0.467229070634.issue33003@psf.upfronthosting.co.za> New submission from W. Trevor King : Python has had a parse_http_list helper since urllib2 landed in 6d7e47b8ea (EXPERIMENTAL, 2000-01-20). With Python 3 it was moved into urllib.request, and the implementation hasn't changed since (at least as of 4c19b9573, 2018-03-05). External projects depend on the currently undocumented function [1,2], so it would be nice to have some user-facing documentation for it. If that sounds appealing, I'm happy to work up a pull request. [1]: https://github.com/requests/requests/blob/v2.18.4/requests/compat.py#L42 [2]: https://github.com/requests/requests/blob/v2.18.4/requests/compat.py#L58 ---------- assignee: docs at python components: Documentation messages: 313294 nosy: docs at python, labrat priority: normal severity: normal status: open title: urllib: Document parse_http_list type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 17:39:41 2018 From: report at bugs.python.org (Marco Rougeth) Date: Mon, 05 Mar 2018 22:39:41 +0000 Subject: [New-bugs-announce] [issue33004] Shutil module functions could accept Path-like objects Message-ID: <1520289581.12.0.467229070634.issue33004@psf.upfronthosting.co.za> New submission from Marco Rougeth : This issue is to suggest an enhancement to the shutil module; I believe it's quite similar to issue32642. I was using shutil.copytree to copy some files around and I tried to pass Path-like objects as input but got the exception "TypeError: argument should be string, bytes or integer, not PosixPath". e.g. build_path = BASE_DIR / 'build' static_path = BASE_DIR / 'static' shutil.copytree(static_path, build_path) As said in issue32642, it "wasn't obvious because Path objects appear as strings in normal debug output". I had a look at the shutil source code and it seems that it wouldn't be too hard to implement. I'd love to do it, if it makes sense. 
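A minimal sketch of the suggested enhancement (not the actual patch): coerce path-like arguments with os.fspath() so PosixPath/WindowsPath objects are accepted transparently; BASE_DIR below is a hypothetical project layout.

import os
import shutil
from pathlib import Path

def copytree_accepting_paths(src, dst, **kwargs):
    """Wrapper accepting str, bytes or os.PathLike arguments."""
    return shutil.copytree(os.fspath(src), os.fspath(dst), **kwargs)

BASE_DIR = Path('/tmp/example')          # hypothetical layout
static_path = BASE_DIR / 'static'
build_path = BASE_DIR / 'build'
# copytree_accepting_paths(static_path, build_path)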
---------- components: Library (Lib) messages: 313295 nosy: rougeth priority: normal severity: normal status: open title: Shutil module functions could accept Path-like objects type: enhancement versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 5 20:36:31 2018 From: report at bugs.python.org (Pierre Thibault) Date: Tue, 06 Mar 2018 01:36:31 +0000 Subject: [New-bugs-announce] [issue33006] docstring of filter function is incorrect Message-ID: <1520300191.48.0.467229070634.issue33006@psf.upfronthosting.co.za> New submission from Pierre Thibault : > help(filter) Help on built-in function filter in module __builtin__: filter(...) filter(function or None, sequence) -> list, tuple, or string Return those items of sequence for which function(item) is true. If function is None, return the items that are true. If sequence is a tuple or string, return the same type, else return a list. (END) The second argument can be an iterable. Suggestion: Replace the docstring with the definition found at https://docs.python.org/2/library/functions.html#filter. ---------- assignee: docs at python components: Documentation messages: 313302 nosy: Pierre Thibault, docs at python priority: normal severity: normal status: open title: docstring of filter function is incorrect versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 00:36:52 2018 From: report at bugs.python.org (Antony Lee) Date: Tue, 06 Mar 2018 05:36:52 +0000 Subject: [New-bugs-announce] [issue33007] Objects referencing private-mangled names do not roundtrip properly under pickling. Message-ID: <1520314612.17.0.467229070634.issue33007@psf.upfronthosting.co.za> New submission from Antony Lee : Consider the following example: import pickle class T: def __init__(self): self.attr = self.__foo def __foo(self): pass print(pickle.loads(pickle.dumps(T()))) This fails on 3.6 with `AttributeError: 'T' object has no attribute '__foo'` (i.e. there's a lookup on the unmangled name). As a comparison, replacing `__foo` with `_foo` results in working code. ---------- components: Library (Lib) messages: 313306 nosy: Antony.Lee priority: normal severity: normal status: open title: Objects referencing private-mangled names do not roundtrip properly under pickling. versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 01:51:56 2018 From: report at bugs.python.org (W. Trevor King) Date: Tue, 06 Mar 2018 06:51:56 +0000 Subject: [New-bugs-announce] [issue33008] urllib.request.parse_http_list incorrectly strips backslashes Message-ID: <1520319116.64.0.467229070634.issue33008@psf.upfronthosting.co.za> New submission from W. Trevor King : Python currently strips backslashes from inside quoted strings: $ echo 'a="b\"c",d=e' | python3 -c 'from sys import stdin; from urllib.request import parse_http_list; print(parse_http_list(stdin.read()))' ['a="b"c"', 'd=e'] It should be printing: ['a="b\"c"', 'd=e'] The bug is this continue [1], which should be removed. This was not a problem with the original implementation [2]. It was introduced in [3] as a fix for #735248 with explicit tests asserting the broken behavior [3]. 
Stripping backslashes from the insides of quoted strings is not appropriate, because it breaks explicit unquoting with email.utils.unquote [4]: import email.utils import urllib.request list = r'"b\\"c"' entry = urllib.request.parse_http_list(list)[0] entry # '"b\\"c"', should be '"b\\\\"c"' email.utils.unquote(entry) # 'b"c', should be 'b\\"c' I'm happy to file patches against the various branches if that would help, but as a one-line removal (plus adjusting the tests), it might be easier if a maintainer files the patches. [1]: https://github.com/python/cpython/blob/v3.7.0b2/Lib/urllib/request.py#L1420 [2]: https://github.com/python/cpython/commit/6d7e47b8ea1b8cf82927dacc364689b8eeb8708b#diff-33f7983ed1a69d290366fe426880581cR777 [3]: https://github.com/python/cpython/commit/e1b13d20199f79ffd3407bbb14cc09b1b8fd70d2#diff-230a8abfedeaa9ae447490df08172b15R52 [4]: https://docs.python.org/3.5/library/email.util.html#email.utils.unquote ---------- components: Library (Lib) messages: 313308 nosy: labrat priority: normal severity: normal status: open title: urllib.request.parse_http_list incorrectly strips backslashes versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 02:33:47 2018 From: report at bugs.python.org (Antony Lee) Date: Tue, 06 Mar 2018 07:33:47 +0000 Subject: [New-bugs-announce] [issue33009] inspect.signature crashes on unbound partialmethods Message-ID: <1520321627.34.0.467229070634.issue33009@psf.upfronthosting.co.za> New submission from Antony Lee : The following example crashes Python 3.6: from functools import partialmethod import inspect class T: g = partialmethod((lambda self, x: x), 1) print(T().g()) # Correctly returns 1. print(T.g(T())) # Correctly returns 1. print(inspect.signature(T.g)) # Crashes. with File "/usr/lib/python3.6/inspect.py", line 3036, in signature return Signature.from_callable(obj, follow_wrapped=follow_wrapped) File "/usr/lib/python3.6/inspect.py", line 2786, in from_callable follow_wrapper_chains=follow_wrapped) File "/usr/lib/python3.6/inspect.py", line 2254, in _signature_from_callable assert first_wrapped_param is not sig_params[0] IndexError: tuple index out of range ---------- components: Library (Lib) messages: 313309 nosy: Antony.Lee priority: normal severity: normal status: open title: inspect.signature crashes on unbound partialmethods versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 04:47:34 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Tue, 06 Mar 2018 09:47:34 +0000 Subject: [New-bugs-announce] [issue33010] os.path.isdir() returns True for broken directory symlinks or junctions Message-ID: <1520329654.61.0.467229070634.issue33010@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : os.path.isdir() violates its own documentation by returning True for broken directory symlinks or junctions, for which os.path.exists() returns False: >>> os.mkdir('b') >>> import _winapi >>> _winapi.CreateJunction('b', 'a') >>> os.rmdir('b') >>> os.path.exists('a') False >>> os.path.isdir('a') True The underlying problem is that os.path.isdir() uses GetFileAttributes, which is documented not to follow symlinks. 
Eryk, is there a cheaper way to check FILE_ATTRIBUTE_DIRECTORY on a path while following reparse points apart from CreateFile/GetFileInformationByHandleEx/CloseFile? Also, does it make sense to use GetFileAttributes as a fast path and use something like above as a fallback only if FILE_ATTRIBUTE_REPARSE_POINT is set, or does GetFileAttributes do something equivalently expensive under the hood? ---------- components: Extension Modules, Windows messages: 313314 nosy: eryksun, izbyshev, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.isdir() returns True for broken directory symlinks or junctions type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 06:12:50 2018 From: report at bugs.python.org (Cong Monkey) Date: Tue, 06 Mar 2018 11:12:50 +0000 Subject: [New-bugs-announce] [issue33011] Embedded 3.6.4 distribution does not add script parent as sys.path[0] Message-ID: <1520334770.43.0.467229070634.issue33011@psf.upfronthosting.co.za> New submission from Cong Monkey : The embedded 3.6.0 distribution does not insert the script's parent directory as sys.path[0], but the normal Python does. This makes some things fail: for example, when I try to do pip install future, it fails because import src.future fails, although it works with a normal Python. The root cause may be that when python36._pth exists, Python automatically sets the flag for isolated mode, and so does not update sys.path in pymain_init_sys_path. I use a trick as a workaround, which is really bad (and even when site.main runs, sys.argv is not ready yet!), and I hope upstream will fix the root cause. ===============begin in my usercustomize.py:================ import sys import pathlib class DummyImportHook(object): def __init__(self, *args): self.is_script_path_to_sys_path_be_done = False pass def find_module(self, fullname, path=None): # print(f'{DummyImportHook.__name__} trigger {sys.argv if hasattr(sys, "argv") else ""} ') if not self.is_script_path_to_sys_path_be_done and hasattr(sys, 'argv'): if sys.argv[0] is not None: # print(f'{DummyImportHook.__name__}:argv is {sys.argv}') path_obj = pathlib.Path(sys.argv[0]) # #if path_obj.exists(): # print(f'{DummyImportHook.__name__}:I am try to add {str(path_obj.parent)} to sys.path') sys.path.insert(0, str(path_obj.parent)) print(f'{DummyImportHook.__name__}:current sys.path is {sys.path}') pass self.is_script_path_to_sys_path_be_done = True pass return None pass print(f'{DummyImportHook.__name__}:auto script path to sys.path hook load!') #sys.meta_path = [DummyImportHook()] sys.meta_path.insert(0,DummyImportHook()) ===============end in my usercustomize.py:================ ===============begin in my python36._pth======================== python36 . # Uncomment to run site.main() automatically import site ===============end in my python36._pth======================== BTW, where is the script that builds the embedded distribution package in the Python git repo?
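For comparison, a much smaller workaround sketch, assuming the only goal is to emulate the missing sys.path[0] entry and that it runs at a point where sys.argv is already populated (file placement and naming are up to the reader), without the import-hook machinery above:

```
# e.g. placed in a startup/usercustomize-style file for the embedded build
import os
import sys

if getattr(sys, 'argv', None) and sys.argv[0]:
    script_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    if script_dir not in sys.path:
        # Mimic the sys.path[0] entry a normal installation would add.
        sys.path.insert(0, script_dir)
```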
---------- messages: 313316 nosy: Cong Monkey priority: normal severity: normal status: open title: Embedded 3.6.4 distribution does not add script parent as sys.path[0] versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 06:18:13 2018 From: report at bugs.python.org (Siddhesh Poyarekar) Date: Tue, 06 Mar 2018 11:18:13 +0000 Subject: [New-bugs-announce] [issue33012] Invalid function cast warnings with gcc 8 for METH_NOARGS Message-ID: <1520335093.04.0.467229070634.issue33012@psf.upfronthosting.co.za> New submission from Siddhesh Poyarekar : gcc 8 has added a new warning heuristic to detect invalid function casts and a stock python build seems to hit that warning quite often. The most common is the cast of a METH_NOARGS function (that uses just one argument) to a PyCFunction. The fix is pretty simple but needs to be applied widely. I'm slowly knocking them off in my spare time; WIP here, which has a few other types of warnings mixed in that I'll sift out during submission and also create separate bug reports for: https://github.com/siddhesh/cpython/tree/func-cast I'll clean up and post PR(s) once I am done but I figured I should file this report first since it is a pretty big change in terms of number of files touched and wanted to be sure that I'm making changes the way the community prefers. ---------- components: Build messages: 313317 nosy: siddhesh priority: normal severity: normal status: open title: Invalid function cast warnings with gcc 8 for METH_NOARGS type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 10:08:53 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Tue, 06 Mar 2018 15:08:53 +0000 Subject: [New-bugs-announce] [issue33013] Underscore in str.format with x option Message-ID: <1520348933.04.0.467229070634.issue33013@psf.upfronthosting.co.za> New submission from Cheryl Sabella : >From the doc (https://docs.python.org/3/library/string.html#format-specification-mini-language): > The '_' option signals the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type 'd'. For integer presentation types 'b', 'o', 'x', and 'X', underscores will be inserted every 4 digits. For other presentation types, specifying this option is an error. >>> '{0:_}'.format(123456789) '123_456_789' >>> '{0:x}'.format(123456789) '75bcd15' >>> '{0:x_}'.format(123456789) Traceback (most recent call last): File "", line 1, in ValueError: Invalid format specifier What am I doing wrong? I read the doc as saying that using `type` of `x` would result in the `_` separator to be inserted every 4 characters, so I was expecting the output to be '75b_cd15'. Thanks! 
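For what it's worth, the grouping option goes before the presentation type in the format-spec grammar, so the grouped hex output the report was expecting is available this way (Python 3.6+, where PEP 515 grouping exists):

```
>>> '{0:_x}'.format(123456789)
'75b_cd15'
>>> format(123456789, '_x')
'75b_cd15'
```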
---------- messages: 313330 nosy: csabella priority: normal severity: normal status: open title: Underscore in str.format with x option type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 10:50:40 2018 From: report at bugs.python.org (David Beazley) Date: Tue, 06 Mar 2018 15:50:40 +0000 Subject: [New-bugs-announce] [issue33014] Clarify doc string for str.isidentifier() Message-ID: <1520351440.06.0.467229070634.issue33014@psf.upfronthosting.co.za> New submission from David Beazley : This is a minor nit, but the doc string for str.isidentifier() states: Use keyword.iskeyword() to test for reserved identifiers such as "def" and "class". At first glance, I thought that it meant you'd do this (doesn't work): 'def'.iskeyword() As opposed to this: import keyword keyword.iskeyword('def') Perhaps a clarification that "keyword" refers to the keyword module could be added. Or better yet, just make 'iskeyword()` a string method ;-). ---------- assignee: docs at python components: Documentation messages: 313335 nosy: dabeaz, docs at python priority: normal severity: normal status: open title: Clarify doc string for str.isidentifier() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 13:56:13 2018 From: report at bugs.python.org (Siddhesh Poyarekar) Date: Tue, 06 Mar 2018 18:56:13 +0000 Subject: [New-bugs-announce] [issue33015] Fix function cast warning in thread_pthread.h Message-ID: <1520362573.83.0.467229070634.issue33015@psf.upfronthosting.co.za> New submission from Siddhesh Poyarekar : The PyThread_start_new_thread function takes a void (*)(void *) as the function argument, which does not match with the pthread_create callback which has type void *(*)(void *). I've got a fix for this that adds a wrapper function of the right type that subsequently calls the function passed to PyThread_start_new_thread. PR coming up. ---------- components: Build messages: 313357 nosy: siddhesh priority: normal severity: normal status: open title: Fix function cast warning in thread_pthread.h type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 17:31:37 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Tue, 06 Mar 2018 22:31:37 +0000 Subject: [New-bugs-announce] [issue33016] nt._getfinalpathname may use uninitialized memory Message-ID: <1520375497.65.0.467229070634.issue33016@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : The first call of GetFinalPathNameByHandleW requests the required buffer size for the NT path (VOLUME_NAME_NT), while the second call receives the DOS path (VOLUME_NAME_DOS) in the allocated buffer. Usually, NT paths are longer than DOS ones, for example: NT path: \Device\HarddiskVolume2\foo DOS path: \\?\C:\foo Or, for UNC paths: NT path: \Device\Mup\server\share\foo DOS path: \\?\UNC\server\share\foo However, it is not always the case. A volume can be mounted to an arbitrary path, and if a drive letter is not assigned to such a volume, GetFinalPathNameByHandle will use the mount point path instead of C: above. This way, a DOS path can be longer than an NT path. Since the result of the second call is not checked properly, this condition won't be detected, resulting in an out-of-bounds access and use of uninitialized memory later. 
Moreover, the path returned by GetFinalPathNameByHandle may change between the first and the second call, for example, because an intermediate directory was renamed. If the path becomes longer than buf_size, the same issue will occur. ---------- components: Extension Modules, Windows messages: 313366 nosy: izbyshev, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: nt._getfinalpathname may use uninitialized memory type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 6 22:17:23 2018 From: report at bugs.python.org (LCatro) Date: Wed, 07 Mar 2018 03:17:23 +0000 Subject: [New-bugs-announce] [issue33017] Special set-cookie setting will bypass Cookielib Message-ID: <1520392643.04.0.467229070634.issue33017@psf.upfronthosting.co.za> New submission from LCatro : PoC (PHP Version): header('Set-Cookie: test=123; max-age=a'); // PoC 1 header('Set-Cookie: test=123; domain=;'); // PoC 2 header('Set-Cookie: test=123; version=a;'); // PoC 3 PoC 1 triggers the int() conversion of the max-age value from string to number (lib/cookielib.py:1429). Giving this attribute a non-numeric string makes it hit the except branch: try: v = int(v) # lib/cookielib.py:1429 except ValueError: _debug(" missing or invalid (non-numeric) value for " "max-age attribute") bad_cookie = True break # lib/cookielib.py:1434 PoC 2 produces a None value for the domain attribute (lib/cookielib.py:1412), so Cookielib will discard the current cookie record. if k == "domain": # lib/cookielib.py:1411 if v is None: # lib/cookielib.py:1412 _debug(" missing value for domain attribute") bad_cookie = True break # lib/cookielib.py:1415 PoC 3 triggers an int() conversion exception (lib/cookielib.py:1472), so Cookielib will discard the current cookie record too. version = standard.get("version", None) # lib/cookielib.py:1469 if version is not None: try: version = int(version) # lib/cookielib.py:1472 except ValueError: return None # invalid version, ignore cookie There are also PoCs involving the urllib and requests libraries. Full Code Analysis (Chinese Version): https://github.com/lcatro/Python_CookieLib_0day ---------- components: Library (Lib) files: poc.php messages: 313370 nosy: LCatro priority: normal severity: normal status: open title: Special set-cookie setting will bypass Cookielib versions: Python 2.7 Added file: https://bugs.python.org/file47472/poc.php _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 7 02:59:07 2018 From: report at bugs.python.org (Joshua Bronson) Date: Wed, 07 Mar 2018 07:59:07 +0000 Subject: [New-bugs-announce] [issue33018] Improve issubclass() error checking and message Message-ID: <1520409547.91.0.467229070634.issue33018@psf.upfronthosting.co.za> New submission from Joshua Bronson : Creating this issue by request of INADA Naoki to discuss my proposed patch in https://github.com/python/cpython/pull/5944. Copy/pasting from that PR: If you try something like issubclass('not a class', str), you get a helpful error message that immediately clues you in on what you did wrong: >>> issubclass('not a class', str) TypeError: issubclass() arg 1 must be a class ("AHA! I meant isinstance there. Thanks, friendly error message!") But if you try this with some ABC, the error message is much less friendly!
>>> from some_library import SomeAbc >>> issubclass('not a class', SomeAbc) Traceback (most recent call last): File "", line 1, in File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/abc.py", line 230, in __subclasscheck__ cls._abc_negative_cache.add(subclass) File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/_weakrefset.py", line 84, in add self.data.add(ref(item, self._remove)) TypeError: cannot create weak reference to 'str' object ("WTF just went wrong?" Several more minutes of head-scratching ensues. Maybe a less experienced Python programmer who hits this hasn't seen weakrefs before and gets overwhelmed, maybe needlessly proceeding down a deep rabbit hole before realizing no knowledge of weakrefs was required to understand what they did wrong.) Or how about this example: >>> from collections import Reversible >>> issubclass([1, 2, 3], Reversible) Traceback (most recent call last): File "", line 1, in File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/abc.py", line 207, in __subclasscheck__ ok = cls.__subclasshook__(subclass) File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/_collections_abc.py", line 305, in __subclasshook__ return _check_methods(C, "__reversed__", "__iter__") File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/_collections_abc.py", line 73, in _check_methods mro = C.__mro__ AttributeError: 'list' object has no attribute '__mro__' Here you don't even get the same type of error (AttributeError rather than TypeError), which seems unintentionally inconsistent. This trivial patch fixes this, and will hopefully save untold numbers of future Python programmers some time and headache. Let me know if any further changes are required, and thanks in advance for reviewing. ---------- messages: 313376 nosy: inada.naoki, izbyshev, jab, serhiy.storchaka priority: normal pull_requests: 5781 severity: normal status: open title: Improve issubclass() error checking and message type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 7 14:07:32 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Wed, 07 Mar 2018 19:07:32 +0000 Subject: [New-bugs-announce] [issue33019] Review usage of environment variables in the stdlib Message-ID: <1520449652.42.0.467229070634.issue33019@psf.upfronthosting.co.za> New submission from Antoine Pitrou : Python supports a mode where the interpreter ignores environment variables such as PYTHONPATH, etc. However, there are places in the stdlib where environment-sensitive decisions are made, without regard for the ignore-environment flag. Examples include: - ssl.get_default_verify_paths() queries SSL_CERT_FILE and SSL_CERT_DIR - shutil.which() queries PATH - the tempfile module queries TMPDIR, TEMP, TMP to select the defaut directory for temporary files Do you think those need to be sanitized? 
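A rough sketch of what such sanitizing could look like, assuming the intent is to honour sys.flags.ignore_environment (-E) at lookup time; the helper name is made up purely for illustration:

```
import os
import sys

def _env_lookup(name, default=None):
    # Hypothetical helper: behave as if the environment were empty when
    # the interpreter was started with -E (ignore_environment).
    if sys.flags.ignore_environment:
        return default
    return os.environ.get(name, default)

# A tempfile-like module could then call _env_lookup('TMPDIR') instead of
# os.environ.get('TMPDIR'), so that -E also hides TMPDIR/TEMP/TMP.
```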
---------- components: Library (Lib) messages: 313393 nosy: alex, christian.heimes, pitrou priority: normal severity: normal status: open title: Review usage of environment variables in the stdlib type: security versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 7 15:03:22 2018 From: report at bugs.python.org (Ben Kirshenbaum) Date: Wed, 07 Mar 2018 20:03:22 +0000 Subject: [New-bugs-announce] [issue33020] Tkinter old style classes Message-ID: <1520453002.72.0.467229070634.issue33020@psf.upfronthosting.co.za> New submission from Ben Kirshenbaum : Tkinter objects cannot handle the super() function, and probably other functions (I only found a problem with super()) ---------- components: Tkinter messages: 313397 nosy: benkir07 priority: normal severity: normal status: open title: Tkinter old style classes type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 7 18:03:24 2018 From: report at bugs.python.org (Nir Soffer) Date: Wed, 07 Mar 2018 23:03:24 +0000 Subject: [New-bugs-announce] [issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads Message-ID: <1520463804.4.0.467229070634.issue33021@psf.upfronthosting.co.za> New submission from Nir Soffer : If the file descriptor is on a non-responsive NFS server, calling fstat() can block for long time, hanging all threads. Most of the fstat() calls release the GIL around the call, but some calls seems to be forgotten. In python 3, the calls are handled now by _py_fstat(), releasing the GIL internally, but some calls use _py_fstat_noraise() which does not release the GIL. Most of the calls to _py_fstat_noraise() release the GIL around the call, except these 2 calls, affecting users of: - mmap.mmap() - os.urandom() - random.seed() In python there are more fstat() calls to fix, affecting users of: - imp.load_dynamic() - imp.load_source() - mmap.mmap() - mmapobject.size() - os.fdopen() - os.urandom() - random.seed() ---------- components: Library (Lib) messages: 313407 nosy: brett.cannon, eric.snow, ncoghlan, nirs, serhiy.storchaka, twouters, vstinner, yselivanov priority: normal severity: normal status: open title: Some fstat() calls do not release the GIL, possibly hanging all threads type: performance versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 7 22:25:22 2018 From: report at bugs.python.org (Dylan Dmitri Gray) Date: Thu, 08 Mar 2018 03:25:22 +0000 Subject: [New-bugs-announce] [issue33022] Floating Point Arithmetic Inconsistency (internal off-by-one) Message-ID: <1520479522.47.0.467229070634.issue33022@psf.upfronthosting.co.za> New submission from Dylan Dmitri Gray : ``` >>> for i in (1,2,3,1.0,2.0,3.0): print(i, i+9007199254740991) ... 1 9007199254740992 2 9007199254740993 3 9007199254740994 1.0 9007199254740992.0 2.0 9007199254740992.0 # <-- !!! 3.0 9007199254740994.0 ``` Notably 9007199254740991 = 2**53 -1 Probably an internal off by one? 
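A short check, not part of the original report, showing the IEEE-754 double rounding around 2**53 that produces those float results:

```
>>> 2.0 ** 53
9007199254740992.0
>>> float(2**53 + 1) == float(2**53)   # 2**53 + 1 has no exact double representation
True
>>> 9007199254740991.0 + 2.0           # exact sum 2**53 + 1 rounds to 2**53
9007199254740992.0
>>> 9007199254740991.0 + 3.0           # exact sum 2**53 + 2 is representable
9007199254740994.0
```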
---------- messages: 313420 nosy: ddg priority: normal severity: normal status: open title: Floating Point Arithmetic Inconsistency (internal off-by-one) versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 01:09:21 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Thu, 08 Mar 2018 06:09:21 +0000 Subject: [New-bugs-announce] [issue33023] Unable to copy ssl.SSLContext Message-ID: <1520489361.61.0.467229070634.issue33023@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : ``` import copy import ssl copy.copy(ssl.create_default_context()) ``` results in `TypeError: can't pickle SSLContext objects` This prevents me from being able to `copy.deepcopy()` an object that references `ssl.SSLContext`. The apparent root cause is apparently that `ssl.SSLContext` passes an extra arg to its `__new__` method, but doesn't implement the method `__getnewargs__` that would let `copy` extract the extra arg. ---------- messages: 313422 nosy: vitaly.krug priority: normal severity: normal status: open title: Unable to copy ssl.SSLContext versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 01:24:49 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Thu, 08 Mar 2018 06:24:49 +0000 Subject: [New-bugs-announce] [issue33024] asyncio.WriteTransport.set_write_buffer_limits orders its args unintuitively and inconsistently with its companion function's return value Message-ID: <1520490289.19.0.467229070634.issue33024@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : `asyncio.WriteTransport.set_write_buffer_limits()` uses an unintuitive order of the args (high, low). I would expect `low` to be the first arg, especially since `asyncio.WriteTransport.get_write_buffer_limits()` returns them in the opposite order. This ordering and inconsistency with the companion function's return value is error-prone. See https://docs.python.org/3/library/asyncio-protocol.html#asyncio.WriteTransport.set_write_buffer_limits ---------- components: asyncio messages: 313423 nosy: asvetlov, vitaly.krug, yselivanov priority: normal severity: normal status: open title: asyncio.WriteTransport.set_write_buffer_limits orders its args unintuitively and inconsistently with its companion function's return value type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 01:57:47 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Thu, 08 Mar 2018 06:57:47 +0000 Subject: [New-bugs-announce] [issue33025] urlencode produces bad output from ssl.CERT_NONE and friends that chokes decoders Message-ID: <1520492267.34.0.467229070634.issue33025@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : ``` In [9]: from urllib.parse import urlencode, parse_qs In [10]: import ast, ssl In [11]: d = dict(cert_reqs=ssl.CERT_NONE) In [12]: urlencode(d) Out[12]: 'cert_reqs=VerifyMode.CERT_NONE' In [25]: parse_qs('cert_reqs=VerifyMode.CERT_NONE') Out[25]: {'cert_reqs': ['VerifyMode.CERT_NONE']} In [29]: ast.literal_eval('VerifyMode.CERT_NONE') Traceback (most recent call last) ... 
ValueError: malformed node or string: <_ast.Attribute object at 0x105c22358> ``` This used to work fine and produce `'cert_reqs=0'` on Python 2.7, allowing it to be decoded properly downstream. However, `'cert_reqs=VerifyMode.CERT_NONE'` can't be decoded generically. So, something that used to work in prior Python versions is breaking now. Additional information: json.dumps() actually dumps that value as a number instead of 'VerifyMode.CERT_NONE'. It appears that urlencode doesn't work properly with enums, where I would expect it to emit the numeric value of the enum. ---------- assignee: christian.heimes components: Library (Lib), SSL messages: 313424 nosy: christian.heimes, vitaly.krug priority: normal severity: normal status: open title: urlencode produces bad output from ssl.CERT_NONE and friends that chokes decoders type: crash versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 02:46:38 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 08 Mar 2018 07:46:38 +0000 Subject: [New-bugs-announce] [issue33026] Fix jumping out of "with" block Message-ID: <1520495198.82.0.467229070634.issue33026@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The proposed PR fixes jumping out of a "with" block. Currently the exit function is left on the stack. This fix is for 3.8 only. 3.7 and older versions are affected by this bug, but since the code was significantly changed in 3.8, I'm not sure it will be so easy to fix it in older versions. ---------- components: Interpreter Core messages: 313426 nosy: pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Fix jumping out of "with" block type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 04:49:30 2018 From: report at bugs.python.org (=?utf-8?b?UGF3ZcWC?=) Date: Thu, 08 Mar 2018 09:49:30 +0000 Subject: [New-bugs-announce] [issue33027] handling filename encoding in Content-Disposition by cgi.FieldStorage Message-ID: <1520502570.19.0.467229070634.issue33027@psf.upfronthosting.co.za> New submission from Paweł : It appears that cgi.FieldStorage does not handle Content-Disposition headers with filenames that have a defined encoding (according to RFC 5987). Example: ''' Content-Disposition: form-data; name="file"; filename*=utf-8''upload_test_file_%C5%82%C3%B3%C4%85%C3%A4.txt ''' The way to reproduce this is to either try to parse the above, or write a tiny webapp using a web framework that uses CGI for handling file uploads (webpy) and try to upload a file using `requests` - or any library that uses urllib3 for building a POST with multipart/form-data.
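A minimal decoding sketch for the extended filename* form (charset'language'percent-encoded value, per the RFC linked below); the helper name is illustrative, not an existing cgi API:

```
from urllib.parse import unquote

def decode_rfc5987(ext_value):
    # ext_value looks like: "utf-8''upload_test_file_%C5%82%C3%B3%C4%85%C3%A4.txt"
    charset, _, rest = ext_value.partition("'")
    _language, _, encoded = rest.partition("'")
    return unquote(encoded, encoding=charset or 'utf-8')

print(decode_rfc5987("utf-8''upload_test_file_%C5%82%C3%B3%C4%85%C3%A4.txt"))
# -> upload_test_file_łóąä.txt
```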
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition https://tools.ietf.org/html/rfc5987 ---------- components: Library (Lib) messages: 313430 nosy: pawciobiel priority: normal severity: normal status: open title: handling filename encoding in Content-Disposition by cgi.FieldStorage type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 05:01:02 2018 From: report at bugs.python.org (Richard Neumann) Date: Thu, 08 Mar 2018 10:01:02 +0000 Subject: [New-bugs-announce] [issue33028] tempfile.TemporaryDirectory incorrectly documented Message-ID: <1520503262.87.0.467229070634.issue33028@psf.upfronthosting.co.za> New submission from Richard Neumann : The tempfile.TemporaryDirectory is incorrectly documented at https://docs.python.org/3.6/library/tempfile.html#tempfile.TemporaryDirectory. It is described as a function, though actually being a class (unlinke tempfile.NamedTemporaryFile). The respective property "name" and method "cleanup" are only documented in the continuous text but not explicitely highlighted as the properties and method of e.g. TarFile (https://docs.python.org/3/library/tarfile.html#tarfile-objects). ---------- assignee: docs at python components: Documentation messages: 313431 nosy: Richard Neumann, docs at python priority: normal severity: normal status: open title: tempfile.TemporaryDirectory incorrectly documented type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 10:15:02 2018 From: report at bugs.python.org (Siddhesh Poyarekar) Date: Thu, 08 Mar 2018 15:15:02 +0000 Subject: [New-bugs-announce] [issue33029] Invalid function cast warnings with gcc 8 for getter and setter functions Message-ID: <1520522102.88.0.467229070634.issue33029@psf.upfronthosting.co.za> New submission from Siddhesh Poyarekar : gcc 8 has added a new warning heuristic to detect invalid function casts and a stock python build seems to hit that warning quite often. bug 33012 fixes the most trivial case of METH_NOARGS, this bug is to track a similarly trivial but widely applicable fix, which is to cast getter and setter functions. Patches coming up over the weekend. ---------- components: Build messages: 313443 nosy: siddhesh priority: normal severity: normal status: open title: Invalid function cast warnings with gcc 8 for getter and setter functions type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 11:26:01 2018 From: report at bugs.python.org (Steve Dower) Date: Thu, 08 Mar 2018 16:26:01 +0000 Subject: [New-bugs-announce] [issue33030] GetLastError() may be overwritten by Py_END_ALLOW_THREADS Message-ID: <1520526361.38.0.467229070634.issue33030@psf.upfronthosting.co.za> New submission from Steve Dower : Most Win32 API calls are made within Py_BEGIN_ALLOW_THREADS blocks, as they do not access Python objects and so we can release the GIL. However, in general, error handling occurs after the Py_END_ALLOW_THREADS line. 
Due to the design of the Win32 API, the pattern looks like this: Py_BEGIN_ALLOW_THREADS ret = ApiCall(...); Py_END_ALLOW_THREADS if (FAILED(ret)) { error_code = GetLastError(); } However, Py_END_ALLOW_THREADS also makes Win32 API calls (to acquire the GIL), and if any of these fail then the error code may be overwritten. Failures in Py_END_ALLOW_THREADS are either fatal (in which case we don't care about the preceding error any more) or signal a retry (in which case we *do* care about the preceding error), but in the latter case we may have lost the error code. Further, while Win32 APIs are not _supposed_ to set the last error to ERROR_SUCCESS (0) when they succeed, some occasionally do. We should update Py_END_ALLOW_THREADS to preserve the last error code when necessary. Ideally, if we don't have to do any work to reacquire the GIL, we shouldn't do any work to preserve the error code either. ---------- components: Windows messages: 313447 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: test needed status: open title: GetLastError() may be overwritten by Py_END_ALLOW_THREADS type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 13:39:09 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 08 Mar 2018 18:39:09 +0000 Subject: [New-bugs-announce] [issue33031] Questionable code in OrderedDict definition Message-ID: <1520534349.37.0.467229070634.issue33031@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The array of PyMethodDef for OrderedDict contains explicit definitions of methods like __delitem__, __eq__ and __init__. The purpose is aligning docstrings with Python implementation. But this doesn't work. Slot wrappers replace these descriptors. And docstings are standard docstrings for corresponding slot wrappers. Thus this code doesn't work. And it looks dangerous, since functions are casted to incompatible function types. Even if they are never used, the compiler (gcc 8) produces warnings (see issue33012). May be this is even undefined behavior. In that case the compiler can generate arbitrary code. I suggest to remove these definitions. ---------- components: Extension Modules messages: 313452 nosy: eric.snow, serhiy.storchaka priority: normal severity: normal status: open title: Questionable code in OrderedDict definition type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 18:22:58 2018 From: report at bugs.python.org (Nick Coghlan) Date: Thu, 08 Mar 2018 23:22:58 +0000 Subject: [New-bugs-announce] [issue33032] Mention implicit cache in struct.Struct docs Message-ID: <1520551378.96.0.467229070634.issue33032@psf.upfronthosting.co.za> New submission from Nick Coghlan : The struct.Struct docs claim that creating and re-using a Struct object will be noticeably faster than calling the module level methods repeatedly with the same format string, as it will avoid parsing the format string multiple times: https://docs.python.org/3/library/struct.html#struct.Struct This claim is questionable, as struct has used an internal Struct cache since at least 2.5, so if you're using less than 100 different struct layouts in any given process, the only thing you'll be saving is a string-keyed dictionary lookup. 
---------- assignee: docs at python components: Documentation messages: 313468 nosy: docs at python, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Mention implicit cache in struct.Struct docs type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 8 19:47:58 2018 From: report at bugs.python.org (Antony Lee) Date: Fri, 09 Mar 2018 00:47:58 +0000 Subject: [New-bugs-announce] [issue33033] Clarify that the signed number convertors to PyArg_ParseTuple... *do* overflow checking Message-ID: <1520556478.43.0.467229070634.issue33033@psf.upfronthosting.co.za> New submission from Antony Lee : At https://docs.python.org/3/c-api/arg.html#numbers, it is explicitly documented that the unsigned number convertors do not perform overflow checking. Implicitly, this suggests that the signed convertors *do* perform overflow checking, which they indeed do; but it would be nice to document this behavior explicitly (as overflow checking is not always expected of C-level functions). ---------- assignee: docs at python components: Documentation messages: 313471 nosy: Antony.Lee, docs at python priority: normal severity: normal status: open title: Clarify that the signed number convertors to PyArg_ParseTuple... *do* overflow checking versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 9 03:24:01 2018 From: report at bugs.python.org (Jonathan) Date: Fri, 09 Mar 2018 08:24:01 +0000 Subject: [New-bugs-announce] [issue33034] urllib.parse.urlparse and urlsplit not raising ValueError for bad port Message-ID: <1520583841.64.0.467229070634.issue33034@psf.upfronthosting.co.za> New submission from Jonathan : (Confirmed in 2.7.14, 3.5.4, and 3.6.3) I have this really bad URL from a crawl: "http://Server=sde; Service=sde:oracle$sde:oracle11g:geopp; User=bodem; Version=SDE.DEFAULT" If I try to parse it with either urlparse or urlsplit, it works - no errors. But when I try to get the port, I get a ValueError. > from urllib.parse import urlparse > r = urlparse('http://Server=sde; Service=sde:oracle$sde:oracle11g:geopp; User=bodem; Version=SDE.DEFAULT') ParseResult(scheme='http', netloc='Server=sde; Service=sde:oracle$sde:oracle11g:geopp; User=bodem; Version=SDE.DEFAULT', path='', params='', query='', fragment='') Ok, great, now to use the result:
(May relate to: https://bugs.python.org/issue20059) ---------- messages: 313475 nosy: jonathan-lp priority: normal severity: normal status: open title: urllib.parse.urlparse and urlsplit not raising ValueError for bad port versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 9 10:27:32 2018 From: report at bugs.python.org (Stephen Wille Padnos) Date: Fri, 09 Mar 2018 15:27:32 +0000 Subject: [New-bugs-announce] [issue33035] Some examples in documentation section 4.7.2 are incorrect Message-ID: <1520609252.53.0.467229070634.issue33035@psf.upfronthosting.co.za> New submission from Stephen Wille Padnos : Section 4.7.2 of the documentation, "Keyword Arguments", has several examples of valid calls to the sample function parrot. The function definition is: def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'): The last two calls in the valid samples are actually not valid, since they are missing the required "voltage" parameter. parrot('a million', 'bereft of life', 'jump') # 3 positional arguments parrot('a thousand', state='pushing up the daisies') # 1 positional, 1 keyword They should be changed to include a value for voltage, along with a change to the comment: parrot(1000, 'a million', 'bereft of life', 'jump') # 4 positional arguments parrot(1000, 'a thousand', state='pushing up the daisies') # 2 positional, 1 keyword This issue is present in all currently available versions of the documentation: 2.7; 3.5; 3.6.4; pre (3.7); and dev (3.8). ---------- assignee: docs at python components: Documentation messages: 313485 nosy: docs at python, stephenwp priority: normal severity: normal status: open title: Some examples in documentation section 4.7.2 are incorrect type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 9 11:53:58 2018 From: report at bugs.python.org (Nathan Henrie) Date: Fri, 09 Mar 2018 16:53:58 +0000 Subject: [New-bugs-announce] [issue33036] test_selectors.PollSelectorTestCase failing on macOS 10.13.3 Message-ID: <1520614438.13.0.467229070634.issue33036@psf.upfronthosting.co.za> New submission from Nathan Henrie : Failing for me on latest 3.6, 3.6.1, 3.5.5, may be related to https://bugs.python.org/issue32517, presumably a change on macOS KQueue stuff. Can anyone else on macOS 10.13.3 see if they can reproduce? ``` make clean && ./configure --with-pydebug && make -j ./python.exe -m unittest -v test.test_selectors.PollSelectorTestCase ``` ---------- components: Tests messages: 313487 nosy: n8henrie priority: normal severity: normal status: open title: test_selectors.PollSelectorTestCase failing on macOS 10.13.3 versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 9 16:00:42 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Fri, 09 Mar 2018 21:00:42 +0000 Subject: [New-bugs-announce] [issue33037] Skip sending/receiving after SSL transport closing Message-ID: <1520629242.5.0.467229070634.issue33037@psf.upfronthosting.co.za> New submission from Andrew Svetlov : Now asyncio raises exceptions like "None type has no method feed_appdata" because self._sslpipe is set to None on closing. See https://github.com/aio-libs/aiohttp/issues/2546 for more details. 
IMHO the fix should just skip accessing self._sslpipe methods if the pipe was deleted. ---------- components: Library (Lib), asyncio messages: 313505 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Skip sending/receiving after SSL transport closing versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 9 19:46:10 2018 From: report at bugs.python.org (Diego Argueta) Date: Sat, 10 Mar 2018 00:46:10 +0000 Subject: [New-bugs-announce] [issue33038] GzipFile doesn't always ignore None as filename Message-ID: <1520642770.84.0.467229070634.issue33038@psf.upfronthosting.co.za> New submission from Diego Argueta : The Python documentation states that if the GzipFile can't determine a filename from `fileobj` it'll use an empty string and won't be included in the header. Unfortunately, this doesn't work for SpooledTemporaryFile which has a `name` attribute but doesn't set it initially. The result is a crash. To reproduce ``` import gzip import tempfile with tempfile.SpooledTemporaryFile() as fd: with gzip.GzipFile(mode='wb', fileobj=fd) as gz: gz.write(b'asdf') ``` Result: ``` Traceback (most recent call last): File "", line 2, in File "/Users/diegoargueta/.pyenv/versions/2.7.14/lib/python2.7/gzip.py", line 136, in __init__ self._write_gzip_header() File "/Users/diegoargueta/.pyenv/versions/2.7.14/lib/python2.7/gzip.py", line 170, in _write_gzip_header fname = os.path.basename(self.name) File "/Users/diegoargueta/.pyenv/versions/gds27/lib/python2.7/posixpath.py", line 114, in basename i = p.rfind('/') + 1 AttributeError: 'NoneType' object has no attribute 'rfind' ``` This doesn't happen on Python 3.6, where the null filename is handled properly. I've attached a patch file that fixed the issue for me. ---------- components: Library (Lib) files: gzip_filename_fix.patch keywords: patch messages: 313512 nosy: da priority: normal severity: normal status: open title: GzipFile doesn't always ignore None as filename type: crash versions: Python 2.7 Added file: https://bugs.python.org/file47473/gzip_filename_fix.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 03:57:11 2018 From: report at bugs.python.org (Nick Coghlan) Date: Sat, 10 Mar 2018 08:57:11 +0000 Subject: [New-bugs-announce] [issue33039] int() and math.trunc don't accept objects that only define __index__ Message-ID: <1520672231.04.0.467229070634.issue33039@psf.upfronthosting.co.za> New submission from Nick Coghlan : (Note: I haven't categorised this yet, as I'm not sure how it *should* be categorised) Back when the __index__/nb_index slot was added, the focus was on allowing 3rd party integer types to be used in places where potentially lossy conversion with __int__/nb_int *wasn't* permitted. However, this has led to an anomaly where the lossless conversion method *isn't* tried implicitly for the potentially lossy int() and math.trunc() calls, but is tried automatically in other contexts: ``` >>> import math >>> class MyInt: ... def __index__(self): ... return 42 ... 
>>> int(MyInt()) Traceback (most recent call last): File "", line 1, in TypeError: int() argument must be a string, a bytes-like object or a number, not 'MyInt' >>> math.trunc(MyInt()) Traceback (most recent call last): File "", line 1, in TypeError: type MyInt doesn't define __trunc__ method >>> hex(MyInt()) '0x2a' >>> len("a" * MyInt()) 42 ``` Supporting int() requires also setting `__int__`: ``` >>> MyInt.__int__ = MyInt.__index__ >>> int(MyInt()) 42 ``` Supporting math.trunc() requires also setting `__trunc__`: ``` >>> MyInt.__trunc__ = MyInt.__index__ >>> math.trunc(MyInt()) 42 ``` (This anomaly was noticed by Eric Appelt while updating the int() docs to cover the fallback to trying __trunc__ when __int__ isn't defined: https://github.com/python/cpython/pull/6022#issuecomment-371695913) ---------- messages: 313515 nosy: Eric Appelt, mark.dickinson, ncoghlan priority: normal severity: normal status: open title: int() and math.trunc don't accept objects that only define __index__ _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 04:18:46 2018 From: report at bugs.python.org (TitanSnow) Date: Sat, 10 Mar 2018 09:18:46 +0000 Subject: [New-bugs-announce] [issue33040] Make itertools.islice supports negative values for start and stop arguments for sized iterable object Message-ID: <1520673526.96.0.467229070634.issue33040@psf.upfronthosting.co.za> New submission from TitanSnow : ``islice()`` does not support negative values for start or stop, which does not matter for plain iterators. However, for some cases, we have a sized iterable object which is not subscriptable, using ``islice()`` makes code ugly:: d = OrderedDict() for i in range(10): d[i] = i dv = d.keys() # dv is a KeysView which is a sized iterable # now I wanna get a slice of dv which does not contain the last element islice(dv, len(dv) - 1) As it shows, I have to use ``len(dv)`` to get its length. For sized iterable objects, ``islice()`` could support negative values for start or stop. In this way, the above code can be written like this:: islice(dv, -1) For iterable objects which is not sized, it could still be not supported:: islice(iter(range(10)), -1) raises a ValueError as its original behavior. ---------- components: Library (Lib) files: islice.patch keywords: patch messages: 313517 nosy: tttnns priority: normal severity: normal status: open title: Make itertools.islice supports negative values for start and stop arguments for sized iterable object type: enhancement Added file: https://bugs.python.org/file47475/islice.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 08:10:41 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 10 Mar 2018 13:10:41 +0000 Subject: [New-bugs-announce] [issue33041] Issues with "async for" Message-ID: <1520687441.8.0.467229070634.issue33041@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : There is a number of issues with "async for". 1. When assigning to the target raises StopAsyncIteration (in custom __setitem__, __setattr__ or __iter__) it will be silenced and will cause to stop iteration. 2. StopAsyncIteration is dynamically looked up in globals. If set the global StopAsyncIteration or delete it from builtins (for example at the shutdown stage), this will break any "async for". 3. The f_lineno setter doesn't handle jumping into or out of the "async for" block. 
Jumping into the block is not forbidden, and jumping out doesn't update the stack correctly. This can cause a crash or incorrect behavior (like iterating the wrong loop). 4. The compiler doesn't check all errors when creating new blocks. Some blocks are not used. And the resulting bytecode is suboptimal. I'll create a series of pull requests fixing all of these issues. Some of them can be backported. Others require changes in bytecode or are too hard to implement in 3.7 and earlier versions (the related code was changed in issue17611). Some of them depend on other PRs or other issues (like issue33026) and need to wait until they are merged. ---------- assignee: serhiy.storchaka components: Interpreter Core messages: 313526 nosy: serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Issues with "async for" type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 12:00:47 2018 From: report at bugs.python.org (Hartmut Goebel) Date: Sat, 10 Mar 2018 17:00:47 +0000 Subject: [New-bugs-announce] [issue33042] New 3.7 startup sequence crashes PyInstaller Message-ID: <1520701247.46.0.467229070634.issue33042@psf.upfronthosting.co.za> New submission from Hartmut Goebel : PyInstaller is a tool for freezing Python applications into stand-alone packages, much like py2exe, py2app, and bbfreeze. PyInstaller provides *one* bootloader for all supported versions of Python (2.7, 3.4-3.6). In PyInstaller the startup sequence is implemented in pyi_pylib_start_python() in bootloader/src/pyi_pythonlib.c. The workflow roughly is: - SetProgramName - SetPythonHome - Py_SetPath - Setting runtime options - some flags using the global variables - PySys_AddWarnOption -> crash - Py_Initialize - PySys_SetPath The crash occurs due to tstate (thread state) not being initialized when calling PySys_AddWarnOption. ---------- components: Interpreter Core messages: 313546 nosy: htgoebel priority: normal severity: normal status: open title: New 3.7 startup sequence crashes PyInstaller type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 12:30:28 2018 From: report at bugs.python.org (Carol Willing) Date: Sat, 10 Mar 2018 17:30:28 +0000 Subject: [New-bugs-announce] [issue33043] Add a 'Contributing to Docs' link at the bottom of docs.python.org Message-ID: <1520703028.47.0.467229070634.issue33043@psf.upfronthosting.co.za> New submission from Carol Willing : Adding a 'Contributing to Docs' link at the bottom of the docs.python.org page between 'Reporting bugs' and 'About Documentation'. This could link to the devguide section on contributing to docs or provide a short paragraph including: - the importance of CPython docs as well as other Python projects (warehouse, popular libraries, etc.)
- link to the devguide and its section on documentation - link to the core mentorship mailing list - link to docs mailing list As an example, the Rust project's Docs-Contributing page is a good start: https://www.rust-lang.org/en-US/contribute-docs.html ---------- assignee: docs at python components: Documentation messages: 313549 nosy: docs at python, willingc priority: normal severity: normal stage: needs patch status: open title: Add a 'Contributing to Docs' link at the bottom of docs.python.org _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 16:41:25 2018 From: report at bugs.python.org (Ishan Srivastava) Date: Sat, 10 Mar 2018 21:41:25 +0000 Subject: [New-bugs-announce] [issue33044] pdb from base class, get inside a method of derived class Message-ID: <1520718085.04.0.467229070634.issue33044@psf.upfronthosting.co.za> New submission from Ishan Srivastava : I need to use `pdb.set_trace()` in the base class. It has a method: ``` def run(self, *args, **kwargs): raise NotImplementedError ``` Since this base class is derived by many subclasses I don't know before hand which class' `run()` method I need to get inside. Also there is some pre processing of the arguments given to the `run()` method. So when `pdb` reaches the line, ``` q=self.run(arguments) ``` and I hit `s` it acts as if I have given the command `next`. How can I get inside the derived class' `run()` method with `pdb` and debug the code over there? ---------- components: Library (Lib) messages: 313557 nosy: ishanSrt priority: normal severity: normal status: open title: pdb from base class, get inside a method of derived class type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 10 17:27:05 2018 From: report at bugs.python.org (Matt Eaton) Date: Sat, 10 Mar 2018 22:27:05 +0000 Subject: [New-bugs-announce] [issue33045] SSL Dcumentation Error Message-ID: <1520720825.94.0.467229070634.issue33045@psf.upfronthosting.co.za> New submission from Matt Eaton : I was reading through the SSL documentation and noticed a typo on Diffe-Hellman and wanted to clean it up. PR is coming soon. ---------- assignee: docs at python components: Documentation messages: 313559 nosy: agnosticdev, docs at python priority: normal severity: normal status: open title: SSL Dcumentation Error type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 00:14:08 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 11 Mar 2018 05:14:08 +0000 Subject: [New-bugs-announce] [issue33046] IDLE option to strip trailing whitespace automatically on save Message-ID: <1520745248.32.0.467229070634.issue33046@psf.upfronthosting.co.za> New submission from Raymond Hettinger : Add option to IDLE preferences in the general section to automatically run Strip Trailing Whitespace before saving. People who use Strip Trailing Whitespace generally do so just before saving and they do it over and over again as they develop and check in code. It would be nice to have this done automatically. In general, trailing whitespace is almost never desireable. 
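For reference, the underlying operation is tiny; a sketch of what an on-save hook could apply to the buffer text (not IDLE's actual implementation):

```
def strip_trailing_whitespace(text):
    # Drop spaces/tabs at the end of every line; keep a single final newline.
    return '\n'.join(line.rstrip() for line in text.splitlines()) + '\n'
```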
---------- assignee: terry.reedy components: IDLE messages: 313580 nosy: rhettinger, terry.reedy priority: normal severity: normal status: open title: IDLE option to strip trailing whitespace automatically on save type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 13:19:59 2018 From: report at bugs.python.org (Adrien) Date: Sun, 11 Mar 2018 17:19:59 +0000 Subject: [New-bugs-announce] [issue33047] "RuntimeError: dictionary changed size during iteration" using trace.py module Message-ID: <1520788799.16.0.467229070634.issue33047@psf.upfronthosting.co.za> New submission from Adrien : Hello. I am strangely encountering an error whil trying to run "python -m trace -c script.py" on this simple code: > import multiprocessing > queue = multiprocessing.Queue() > queue.put("a") Which raises on Windows 10 using Python 3.6.3: > Traceback (most recent call last): > File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main > "__main__", mod_spec) > File "/usr/lib/python3.6/runpy.py", line 85, in _run_code > exec(code, run_globals) > File "/usr/lib/python3.6/trace.py", line 742, in > main() > File "/usr/lib/python3.6/trace.py", line 739, in main > results.write_results(opts.missing, opts.summary, opts.coverdir) > File "/usr/lib/python3.6/trace.py", line 258, in write_results > for filename, lineno in self.counts: > RuntimeError: dictionary changed size during iteration Fixing it seems straightforward, but I do not know what is causing the bug internally. ---------- components: Library (Lib) messages: 313604 nosy: Delgan priority: normal severity: normal status: open title: "RuntimeError: dictionary changed size during iteration" using trace.py module type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 14:12:42 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Sun, 11 Mar 2018 18:12:42 +0000 Subject: [New-bugs-announce] [issue33048] macOS job broken on Travis CI Message-ID: <1520791962.83.0.467229070634.issue33048@psf.upfronthosting.co.za> New submission from Antoine Pitrou : Well, it didn't take long. The macOS job broke on Travis CI. Apparently "brew" thinks Python 2 and Python 3 are the same thing, so "brew install python3" now fails with an error message telling to use "brew upgrade" instead. But it did work before... https://travis-ci.org/python/cpython/jobs/352036713 This should be simple to fix, though I wonder whether this could be taken over by our macOS maintainers? :-) ---------- components: Build messages: 313605 nosy: ned.deily, pitrou, ronaldoussoren priority: normal severity: normal status: open title: macOS job broken on Travis CI type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 14:16:31 2018 From: report at bugs.python.org (Trey Hunner) Date: Sun, 11 Mar 2018 18:16:31 +0000 Subject: [New-bugs-announce] [issue33049] itertools.count() confusingly mentions zip() and sequence numbers Message-ID: <1520792191.75.0.467229070634.issue33049@psf.upfronthosting.co.za> New submission from Trey Hunner : >From the itertools documentation: https://docs.python.org/3/library/itertools.html?highlight=itertools#itertools.count > Also, used with zip() to add sequence numbers. 
I'm not certain what the goal of the original sentence was, but I think it's unclear as currently written. I assume this is what's meant: my_sequence = [1, 2, 3, 4] for i, item in zip(count(1), my_sequence): print(i, item) This is a strange thing to note though because enumerate would be a better use here. my_sequence = [1, 2, 3, 4] for i, item in enumerate(my_sequence, start=1): print(i, item) Maybe what is meant is that count can be used with a step while enumerate cannot? my_sequence = [1, 2, 3, 4] for i, item in zip(count(step=5), my_sequence): print(i, item) If that's the case it seems like step should instead be mentioned there instead of "sequence numbers". ---------- assignee: docs at python components: Documentation messages: 313606 nosy: docs at python, trey priority: normal severity: normal status: open title: itertools.count() confusingly mentions zip() and sequence numbers type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 14:17:26 2018 From: report at bugs.python.org (Timothy VanSlyke) Date: Sun, 11 Mar 2018 18:17:26 +0000 Subject: [New-bugs-announce] [issue33050] Centralized documentation of assumptions made by C code Message-ID: <1520792246.5.0.467229070634.issue33050@psf.upfronthosting.co.za> New submission from Timothy VanSlyke : It would be nice for those who write C extensions to have a resource that explicitly states what assumptions are made by the CPython implementation that are otherwise implementation-defined in standard C. For example, Python versions >= 3.6 require: - That UCHAR_MAX is defined to be 255 (from Python.h) - That fixed-width intXX_t and uintXX_t types are provided in stdint.h and inttypes.h (from PEP7) These two requirements also pretty much guarantee that CHAR_BIT == 8. These kinds of things are nice to know for anybody working with the C API; we can make the same assumptions in our own code without having hunt through the API docs or the CPython source tree every time something comes up. >From what I've found, there isn't any component of the documentation that explicitly lists these assumptions in one place (I apologize if there is and I somehow missed it...). Could this be addressed in the future? Thanks! ---------- assignee: docs at python components: Documentation messages: 313607 nosy: docs at python, tvanslyke priority: normal severity: normal status: open title: Centralized documentation of assumptions made by C code versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 15:26:19 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sun, 11 Mar 2018 19:26:19 +0000 Subject: [New-bugs-announce] [issue33051] IDLE: Create new tab for editor options in configdialog Message-ID: <1520796379.91.0.467229070634.issue33051@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Split out editor options from general tab in Config Dialog. 
---------- assignee: terry.reedy components: IDLE messages: 313618 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Create new tab for editor options in configdialog type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 11 17:32:59 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Sun, 11 Mar 2018 21:32:59 +0000 Subject: [New-bugs-announce] [issue33052] Sporadic segmentation fault in test_datetime Message-ID: <1520803979.78.0.467229070634.issue33052@psf.upfronthosting.co.za> New submission from Antoine Pitrou : Just spotted this in a Travis-CI job: https://travis-ci.org/python/cpython/jobs/351010039#L2002 I'm not sure there's anything to do but I figured it was worth reporting anyway. ---------- components: Library (Lib), Tests messages: 313623 nosy: belopolsky, pitrou priority: normal severity: normal status: open title: Sporadic segmentation fault in test_datetime type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 07:57:15 2018 From: report at bugs.python.org (Antti Haapala) Date: Mon, 12 Mar 2018 11:57:15 +0000 Subject: [New-bugs-announce] [issue33053] Running a module with `-m` will add empty directory to sys.path Message-ID: <1520855835.03.0.467229070634.issue33053@psf.upfronthosting.co.za> New submission from Antti Haapala : I think this is a really stupid security bug. Running a module with `-mmodule` seems to add '' as a path in sys.path, and in front. This is doubly wrong, because '' will stand for whatever the current working directory might happen to be at the time of the *subsequent import statements*, i.e. it is far worse than https://bugs.python.org/issue16202 I.e. whereas python3 /usr/lib/module.py wouldn't do that, python3 -mmodule would make it so that following a chdirs in code, imports would be executed from arbitrary locations. Verified on MacOS X, Ubuntu 17.10, using variety of Python versions up to 3.7. ---------- components: Interpreter Core messages: 313641 nosy: ztane priority: normal severity: normal status: open title: Running a module with `-m` will add empty directory to sys.path type: security _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 09:12:55 2018 From: report at bugs.python.org (Kenneth Chik) Date: Mon, 12 Mar 2018 13:12:55 +0000 Subject: [New-bugs-announce] [issue33054] unittest blocks when testing function using multiprocessing.Pool with state spawn Message-ID: <1520860375.4.0.467229070634.issue33054@psf.upfronthosting.co.za> New submission from Kenneth Chik : I am not sure if this is python or OS problem, I just installed Ubuntu 18.04 LTS which comes with python3 v3.6.4. When I try to unittest code which contains multiprocessing.Pool with spawn, the unittest.main() blocks after completing all the tests. This problem did not exist on prior versions of Ubuntu/Python. Below is the printout after I KeyboardInterrupt the process. ... 
---------------------------------------------------------------------- Ran 13 tests in 10.472s OK ^CException ignored in: Traceback (most recent call last): File "/usr/lib/python3.6/threading.py", line 1294, in _shutdown t.join() File "/usr/lib/python3.6/threading.py", line 1056, in join self._wait_for_tstate_lock() File "/usr/lib/python3.6/threading.py", line 1072, in _wait_for_tstate_lock elif lock.acquire(block, timeout): KeyboardInterrupt The spawned processes have all completed and not visible on the process list, but the semaphore_tracker process is still there. This is the same with previous working system though. Thanks. ---------- messages: 313648 nosy: Kenneth Chik priority: normal severity: normal status: open title: unittest blocks when testing function using multiprocessing.Pool with state spawn type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 10:28:02 2018 From: report at bugs.python.org (FHTMitchell) Date: Mon, 12 Mar 2018 14:28:02 +0000 Subject: [New-bugs-announce] [issue33055] bytes does not implement __bytes__() Message-ID: <1520864882.07.0.467229070634.issue33055@psf.upfronthosting.co.za> New submission from FHTMitchell : Every object which has a corresponding dunder protocol also implements said protocol with one exception: >>> 'hello'.__str__() 'hello' >>> (3.14).__float__() 3.14 >>> (101).__int__() 101 >>> True.__bool__() True >>> iter(range(10)).__iter__() >>> b'hello'.__bytes__() --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ----> 1 b'hello'.__bytes__() AttributeError: 'bytes' object has no attribute '__bytes__' This was brought up on SO as being inconsistent: https://stackoverflow.com/questions/49236655/bytes-doesnt-have-bytes-method/49237034?noredirect=1#comment85477673_49237034 ---------- components: Interpreter Core messages: 313653 nosy: FHTMitchell priority: normal severity: normal status: open title: bytes does not implement __bytes__() type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 11:09:15 2018 From: report at bugs.python.org (Thomas Moreau) Date: Mon, 12 Mar 2018 15:09:15 +0000 Subject: [New-bugs-announce] [issue33056] LEaking files in concurrent.futures.process Message-ID: <1520867355.07.0.467229070634.issue33056@psf.upfronthosting.co.za> New submission from Thomas Moreau : The recent changes introduced by https://github.com/python/cpython/pull/3895 leaks some file descriptors (the Pipe open in _ThreadWakeup). They should be properly closed at shutdown. ---------- components: Library (Lib) messages: 313656 nosy: tomMoral priority: normal pull_requests: 5845 severity: normal status: open title: LEaking files in concurrent.futures.process type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 12:18:05 2018 From: report at bugs.python.org (Ben Feinstein) Date: Mon, 12 Mar 2018 16:18:05 +0000 Subject: [New-bugs-announce] [issue33057] logging.Manager.logRecordFactory is never used Message-ID: <1520871485.85.0.467229070634.issue33057@psf.upfronthosting.co.za> New submission from Ben Feinstein : In logging.Manager, the logRecordFactory attribute is never used. 
One would expect that makeRecord() (in logging.Logger) would generate a record using its manager's logRecordFactory, or fallback to the global _logRecordFactory (if has no manager, or manager.logRecordFactory is None), but the latter is used exclusively. ---------- components: Library (Lib) files: issue_logRecordFactory.py messages: 313662 nosy: feinsteinben, vinay.sajip priority: normal severity: normal status: open title: logging.Manager.logRecordFactory is never used type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47478/issue_logRecordFactory.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 15:31:15 2018 From: report at bugs.python.org (Eddie Elizondo) Date: Mon, 12 Mar 2018 19:31:15 +0000 Subject: [New-bugs-announce] [issue33058] Enhancing Python Message-ID: <1520883075.96.0.467229070634.issue33058@psf.upfronthosting.co.za> Change by Eddie Elizondo : ---------- nosy: elizondo93 priority: normal severity: normal status: open title: Enhancing Python type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 17:05:31 2018 From: report at bugs.python.org (=?utf-8?q?Andreas_K=C3=B6ltringer?=) Date: Mon, 12 Mar 2018 21:05:31 +0000 Subject: [New-bugs-announce] [issue33059] netrc module validates file mode only for /home/user/.netrc Message-ID: <1520888731.28.0.467229070634.issue33059@psf.upfronthosting.co.za> New submission from Andreas K?ltringer : On my first try to use the netrc module I got back the error: "~/.netrc access too permissive: access permissions must restrict access to only the owner" I changed the file permissions and wrapped this up in try-except and went on to write some unit tests (using tempfile), assuming that the file mode checks would be performed on any netrc file I passed into the constructor (yes, I did not read the documentation sufficiently well). Anyway, I believe that these security checks should be done for any netrc file (they contain sensitive information no matter where they are located on the file system). There was already a discussion on the topic https://bugs.python.org/issue14984 where there was concern regarding backwards-compatibility and the idea to re-visit this issue "in the future". That was in 2013, so maybe this "future" is now? ---------- components: Library (Lib) messages: 313701 nosy: akoeltringer priority: normal severity: normal status: open title: netrc module validates file mode only for /home/user/.netrc type: security versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 17:12:20 2018 From: report at bugs.python.org (Bob Klahn) Date: Mon, 12 Mar 2018 21:12:20 +0000 Subject: [New-bugs-announce] [issue33060] Installation hangs at "Publishing product information" Message-ID: <1520889140.52.0.467229070634.issue33060@psf.upfronthosting.co.za> New submission from Bob Klahn : I am unable to install Python 2.7.14 on my Windows 7 PC. Using python-2.7.14.amd64.msi . The installation hangs at the "Publishing product information" step. Subsequent installation attempts result in the message "Python 2.7.14 (64-bit) setup was interrupted. Your system has not been modified. 
To install this program at a later time, please run the installation again. Click the Finish button to exit the Installer." I need to be able to make this work! With this version of Windows and this version of Python. Help! ---------- components: Installation messages: 313702 nosy: bobstones priority: normal severity: normal status: open title: Installation hangs at "Publishing product information" versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 18:15:08 2018 From: report at bugs.python.org (Allen Tracht) Date: Mon, 12 Mar 2018 22:15:08 +0000 Subject: [New-bugs-announce] [issue33061] NoReturn missing from __all__ in typing.py Message-ID: <1520892908.5.0.467229070634.issue33061@psf.upfronthosting.co.za> Change by Allen Tracht : ---------- components: Library (Lib) nosy: Allen Tracht priority: normal severity: normal status: open title: NoReturn missing from __all__ in typing.py type: behavior versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 18:26:22 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Mon, 12 Mar 2018 22:26:22 +0000 Subject: [New-bugs-announce] [issue33062] ssl_renegotiate() doesn't seem to be exposed Message-ID: <1520893582.26.0.467229070634.issue33062@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : I need to write a test for my client to make sure it's non-blocking ssl interactions are able to survive SSL renegotiation. However, I can't seem to find anything in our python ssl API that calls `SSL_renegotiate()` in order to force renegotiation. ---------- assignee: christian.heimes components: SSL messages: 313706 nosy: christian.heimes, vitaly.krug priority: normal severity: normal status: open title: ssl_renegotiate() doesn't seem to be exposed type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 22:31:35 2018 From: report at bugs.python.org (Siming Yuan) Date: Tue, 13 Mar 2018 02:31:35 +0000 Subject: [New-bugs-announce] [issue33063] failed to build _ctypes: undefined reference to `ffi_closure_FASTCALL' Message-ID: <1520908295.51.0.467229070634.issue33063@psf.upfronthosting.co.za> New submission from Siming Yuan : compiling Python 3.5.5 under RHEL 6.4, 32-bit: build/temp.linux-x86_64-3.5/opt/python/Python-3.5.5/Modules/_ctypes/libffi/src/x86/ffi.o: In function `ffi_prep_closure_loc': /opt/python/Python-3.5.5/Modules/_ctypes/libffi/src/x86/ffi.c:678: undefined reference to `ffi_closure_FASTCALL' /usr/bin/ld: build/temp.linux-x86_64-3.5/opt/python/Python-3.5.5/Modules/_ctypes/libffi/src/x86/ffi.o: relocation R_386_GOTOFF against undefined hidden symbol `ffi_closure_FASTCALL' can not be used when making a shared object /usr/bin/ld: final link failed: Bad value related to https://bugs.python.org/issue23042 - but it seems like the patch for x86/ffi.c never made it to release. 
---------- components: Build messages: 313716 nosy: siming85 priority: normal severity: normal status: open title: failed to build _ctypes: undefined reference to `ffi_closure_FASTCALL' versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 12 22:50:30 2018 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Tue, 13 Mar 2018 02:50:30 +0000 Subject: [New-bugs-announce] [issue33064] lib2to3 fails on a trailing comma after **kwargs in a function signature Message-ID: <1520909430.12.0.467229070634.issue33064@psf.upfronthosting.co.za> New submission from ?ukasz Langa : Title says all. I have a patch. ---------- assignee: lukasz.langa messages: 313718 nosy: lukasz.langa priority: normal severity: normal status: open title: lib2to3 fails on a trailing comma after **kwargs in a function signature versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 01:02:16 2018 From: report at bugs.python.org (Joshua De La Rosa) Date: Tue, 13 Mar 2018 05:02:16 +0000 Subject: [New-bugs-announce] [issue33065] debugger issue concerning importing user created modules into another program Message-ID: <1520917336.65.0.467229070634.issue33065@psf.upfronthosting.co.za> New submission from Joshua De La Rosa : Taking my first coding class, so I don't know much about coding or python in general, but I ran into a problem when using the Debugger function for a homework assignment that neither I nor my professor could make sense of. My program executes successfully without running the Debugger or, in the case that I am running the Debugger, it only raises an error when I "Step" through the imported module that I implemented in another program, rather than just hitting "Go". The error it reports is: AttributeError: '_ModuleLock' object has no attribute 'name' Not sure which file to submit, since the I created my own module that is used in the program that raises the error when I step through it with the Debugger mode on. ---------- assignee: terry.reedy components: IDLE messages: 313721 nosy: jcdlr, terry.reedy priority: normal severity: normal status: open title: debugger issue concerning importing user created modules into another program type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 05:19:42 2018 From: report at bugs.python.org (hubo) Date: Tue, 13 Mar 2018 09:19:42 +0000 Subject: [New-bugs-announce] [issue33066] raise an exception from multiple positions break the traceback frames Message-ID: <1520932782.62.0.467229070634.issue33066@psf.upfronthosting.co.za> New submission from hubo : The attachment is a script that demonstrates the behavior. The simple unittest script should exit very quickly, but in fact, it runs indefinitely. It uses asyncio to reproduce the result, but other concurrent technologies are also affected. In Python 3, traceback of an exception is mutable. When the exception is re-raised, current frames are appended to the traceback. But when the same exception object is raised in multiple position (e.g. passed to different coroutines with futures), the frames are appended in the same list, so the tracebacks are mixed together. assertRaises in unittest calls traceback.clear_frames internally. 
When the tracebacks are mixed, it may clear a running frame (in a generator), producing strange behaviors. ---------- files: test1.py messages: 313735 nosy: hubo1016 priority: normal severity: normal status: open title: raise an exception from multiple positions break the traceback frames type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47480/test1.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 08:06:43 2018 From: report at bugs.python.org (Christian Heimes) Date: Tue, 13 Mar 2018 12:06:43 +0000 Subject: [New-bugs-announce] [issue33067] http.client no longer sends HTTP request in one TCP package Message-ID: <1520942803.97.0.467229070634.issue33067@psf.upfronthosting.co.za> New submission from Christian Heimes : https://bugs.python.org/issue23302 changed how http.client sends request. endheaders() no longer sends header and message body in one TCP package if the total payload is smaller than TCP max segment size. https://github.com/python/cpython/blob/3.5/Lib/http/client.py#L934-L936 uses two send calls to send header and body. This causes very simple HTTP servers in embedded devices to fail. Matthew Garrett noticed the bug, see https://twitter.com/mjg59/status/972985566387032064 / https://twitter.com/mjg59/status/973000950439817217 We should try to send requests as one TCP package again. TCP_CORK may do the trick. Or we should fix our custom implementation of send. It has multiple issues, e.g. a fixed buffer. The buffer size is suboptimal for small MTU and jumbo frames. ---------- components: Library (Lib) keywords: 3.5regression messages: 313743 nosy: benjamin.peterson, christian.heimes priority: normal severity: normal status: open title: http.client no longer sends HTTP request in one TCP package type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 08:50:49 2018 From: report at bugs.python.org (=?utf-8?q?David_Luke=C5=A1?=) Date: Tue, 13 Mar 2018 12:50:49 +0000 Subject: [New-bugs-announce] [issue33068] Inconsistencies in parsing (evaluating?) longstrings Message-ID: <1520945449.38.0.467229070634.issue33068@psf.upfronthosting.co.za> New submission from David Luke? : """ \""" """ evaluates to ' """ ' (as expected), but without the surrounding spaces, """\"""""" evaluates to '"' instead of '"""'. Is this expected behavior? If I'm reading the definition of string syntax in https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals correctly, it shouldn't be. ---------- components: Interpreter Core messages: 313745 nosy: David Luke? priority: normal severity: normal status: open title: Inconsistencies in parsing (evaluating?) longstrings versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 09:43:49 2018 From: report at bugs.python.org (Paul Ganssle) Date: Tue, 13 Mar 2018 13:43:49 +0000 Subject: [New-bugs-announce] [issue33069] Maintainer information discarded when writing PKG-INFO Message-ID: <1520948629.75.0.467229070634.issue33069@psf.upfronthosting.co.za> New submission from Paul Ganssle : This is basically the same as issue 962772, as there seems to have been a regression. 
The current version of distutils discards author= metadata if maintainer= is present, even though PEP 345 has added the Maintainer: and Maintainer-Email: metadata fields. I think that the way forward is to have write_pkg_info generate separate Maintainer: and Maintainer-Email: fields. ---------- components: Distutils messages: 313747 nosy: dstufft, eric.araujo, p-ganssle priority: normal severity: normal status: open title: Maintainer information discarded when writing PKG-INFO type: behavior versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 09:48:51 2018 From: report at bugs.python.org (Andreas Schwab) Date: Tue, 13 Mar 2018 13:48:51 +0000 Subject: [New-bugs-announce] [issue33070] Add platform triplet for RISC-V Message-ID: <1520948931.79.0.467229070634.issue33070@psf.upfronthosting.co.za> Change by Andreas Schwab : ---------- components: Build nosy: schwab priority: normal severity: normal status: open title: Add platform triplet for RISC-V type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 10:21:27 2018 From: report at bugs.python.org (Paul Ganssle) Date: Tue, 13 Mar 2018 14:21:27 +0000 Subject: [New-bugs-announce] [issue33071] Document that PyPI no longer requires 'register' Message-ID: <1520950887.05.0.467229070634.issue33071@psf.upfronthosting.co.za> New submission from Paul Ganssle : I've been asked to post this by @brainwave (who is having some trouble getting an account on bpo due to technical difficulties). Per twine's github issue 311 ( https://github.com/pypa/twine/issues/311 ), it seems that distutil's docs Update setuptools and distutils docs, e.g., https://docs.python.org/3.6/distutils/packageindex.html#the-upload-command should be clarified to indicate that PyPI does not require register anymore, although other package indexes might. ---------- assignee: docs at python components: Distutils, Documentation messages: 313749 nosy: docs at python, dstufft, eric.araujo, p-ganssle priority: normal severity: normal status: open title: Document that PyPI no longer requires 'register' type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 15:59:22 2018 From: report at bugs.python.org (Mark Shannon) Date: Tue, 13 Mar 2018 19:59:22 +0000 Subject: [New-bugs-announce] [issue33072] The interpreter bytecodes for with statements are overly complex. Message-ID: <1520971162.63.0.467229070634.issue33072@psf.upfronthosting.co.za> New submission from Mark Shannon : The bytecodes WITH_CLEANUP_START and WITH_CLEANUP_FINISH are complex and implement entirely different behavior depending on what is on the stack. This is unnecessary as the same semantics can be implemented with much simpler bytecodes and using the compiler do much of the work now done at runtime. ---------- components: Interpreter Core messages: 313771 nosy: Mark.Shannon priority: normal severity: normal status: open title: The interpreter bytecodes for with statements are overly complex. 
type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 13 17:25:05 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 13 Mar 2018 21:25:05 +0000 Subject: [New-bugs-announce] [issue33073] Add as_integer_ratio() to int() objects Message-ID: <1520976305.18.0.467229070634.issue33073@psf.upfronthosting.co.za> New submission from Raymond Hettinger : Goal: make int() more interoperable with float by making a float/Decimal method also available on ints. This will let mypy treat ints as a subtype of floats. See: https://mail.python.org/pipermail/python-dev/2018-March/152384.html Open question: Is this also desired for fractions.Fraction and numbers.Rational? ---------- assignee: Nofar Schnider components: Library (Lib) messages: 313780 nosy: Nofar Schnider, gvanrossum, rhettinger priority: normal severity: normal status: open title: Add as_integer_ratio() to int() objects type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 14 02:34:12 2018 From: report at bugs.python.org (Robert Xiao) Date: Wed, 14 Mar 2018 06:34:12 +0000 Subject: [New-bugs-announce] [issue33074] dbm corrupts index on macOS (_dbm module) Message-ID: <1521009252.74.0.467229070634.issue33074@psf.upfronthosting.co.za> New submission from Robert Xiao : Environment: Python 3.6.4, macOS 10.12.6 Python 3's dbm appears to corrupt the key index on macOS if objects >4KB are inserted. Code: <<<<<<<<<<< import dbm import contextlib with contextlib.closing(dbm.open('test', 'n')) as db: for k in range(128): db[('%04d' % k).encode()] = b'\0' * (k * 128) with contextlib.closing(dbm.open('test', 'r')) as db: print(len(db)) print(len(list(db.keys()))) >>>>>>>>>>> On my machine, I get the following: <<<<<<<<<<< 94 Traceback (most recent call last): File "test.py", line 10, in print(len(list(db.keys()))) SystemError: Negative size passed to PyBytes_FromStringAndSize >>>>>>>>>>> (The error says PyString_FromStringAndSize on Python 2.x but is otherwise the same). The expected output, which I see on Linux (using gdbm), is 128 128 I get this error with the following Pythons on my system: /usr/bin/python2.6 - Apple-supplied Python 2.6.9 /usr/bin/python - Apple-supplied Python 2.7.13 /opt/local/bin/python2.7 - MacPorts Python 2.7.14 /usr/local/bin/python - Python.org Python 2.7.13 /usr/local/bin/python3.5 - Python.org Python 3.5.1 /usr/local/bin/python3.6 - Python.org Python 3.6.4 This seems like a very big problem - silent data corruption with no warning. It appears related to issue30388, but in that case they were seeing sporadic failures. The deterministic script above causes failures in every case. This was discovered after running some code which used shelve (which uses dbm under the hood) in Python 3, but the bug clearly applies to Python 2 as well. 
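Not part of the report: since the behaviour above contrasts macOS's ndbm-backed _dbm with Linux's gdbm, it can help to confirm which backend actually wrote the database file before comparing results. A minimal sketch using the standard library's dbm.whichdb(), run in the directory where the reproducer created 'test':

```python
# Sketch (not from the report): report which dbm backend created the 'test'
# database. On macOS this is typically 'dbm.ndbm' (the path showing the
# corruption above); on most Linux builds it is 'dbm.gnu'.
import dbm

backend = dbm.whichdb('test')
print("backend used for 'test':", backend or "unrecognized or missing")
```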
---------- files: test.db messages: 313809 nosy: nneonneo priority: normal severity: normal status: open title: dbm corrupts index on macOS (_dbm module) versions: Python 2.7, Python 3.5, Python 3.6 Added file: https://bugs.python.org/file47484/test.db _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 14 09:25:09 2018 From: report at bugs.python.org (Vlad Shcherbina) Date: Wed, 14 Mar 2018 13:25:09 +0000 Subject: [New-bugs-announce] [issue33075] typing.NamedTuple does not deduce Optional[] from using None as default field value Message-ID: <1521033909.39.0.467229070634.issue33075@psf.upfronthosting.co.za> New submission from Vlad Shcherbina : from typing import * def f(arg: str = None): pass print(get_type_hints(f)) # {'arg': typing.Union[str, NoneType]} # as expected class T(NamedTuple): field: str = None print(get_type_hints(T)) # {'field': } # but it should be # {'field': typing.Union[str, NoneType]} # for consistency ---------- components: Library (Lib) messages: 313819 nosy: vlad priority: normal severity: normal status: open title: typing.NamedTuple does not deduce Optional[] from using None as default field value type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 14 17:14:36 2018 From: report at bugs.python.org (Adrien) Date: Wed, 14 Mar 2018 21:14:36 +0000 Subject: [New-bugs-announce] [issue33076] Trying to cleanly terminate a threaded Queue at exit of program raises an "EOFError" Message-ID: <1521062076.69.0.467229070634.issue33076@psf.upfronthosting.co.za> New submission from Adrien : Hi. I use a worker Thread to which I communicate trough a multiprocessing Queue. I would like to properly close this daemon thread when my program terminates, so I registered a "stop()" function using "atexit.register()". However, this raises an "EOFError" because the multiprocessing module uses "atexit.register()" too and closes the Queue internal pipe connections before that my thread ends. After scratching inside the multiprocessing module, I tried to summarize my understanding of the problem here: https://stackoverflow.com/a/49244528/2291710 I joined a demonstration script that triggers the bug with (at least) Python 3.5/3.6 on both Windows and Linux. The issue is fixable by forcing multiprocessing "atexit.register()" before mine with "import multiprocessing.queues", but this means I would rely on an implementation detail, and others dynamic calls made to "atexit.register()" (like one I saw in multiprocessing "get_logger()" for example) could break it again. I first thought that "atexit.register()" could accept an optional "priority" argument, but every developers would probably want to be first. Could a subtle change be made however to guarantee that registered functions are executed before Python internal ones? As for now, the atexit statement "The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later" is not quite true. I do not know what to do with it, from what I know there is no way to achieve an automatic yet clean closure of such worker, so I would like to know if some kind of fix is possible for a future version of Python. Thanks for your time. 
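A minimal sketch of the import-order workaround described above (not part of the report; the worker, stop(), and queue names are illustrative). Importing multiprocessing.queues first forces multiprocessing to register its own atexit cleanup before ours, and because atexit runs handlers in LIFO order our handler then runs before the queue's internal pipe is closed:

```python
# Sketch of the workaround described in the report (illustrative names).
# multiprocessing registers its atexit cleanup when its queue machinery is
# imported; registering our handler afterwards means LIFO ordering runs
# ours first, so the worker stops before the queue's pipe is torn down.
import atexit
import threading
import queue as stdlib_queue
import multiprocessing
import multiprocessing.queues  # force multiprocessing's atexit registration now

q = multiprocessing.Queue()
stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        try:
            item = q.get(timeout=0.1)
        except stdlib_queue.Empty:
            continue
        print("got", item)

worker_thread = threading.Thread(target=worker, daemon=True)
worker_thread.start()

def stop():
    stop_event.set()
    worker_thread.join()

atexit.register(stop)  # registered last, therefore runs first at exit
```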
---------- components: Library (Lib) files: bug.py messages: 313841 nosy: Delgan priority: normal severity: normal status: open title: Trying to cleanly terminate a threaded Queue at exit of program raises an "EOFError" type: behavior versions: Python 3.5, Python 3.6 Added file: https://bugs.python.org/file47486/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 14 18:24:50 2018 From: report at bugs.python.org (=?utf-8?b?0JXQstCz0LXQvdC40Lkg0JzQsNGF0LzRg9C00L7Qsg==?=) Date: Wed, 14 Mar 2018 22:24:50 +0000 Subject: [New-bugs-announce] [issue33077] typing: Unexpected result with value of instance of class inherited from typing.NamedTuple Message-ID: <1521066290.44.0.467229070634.issue33077@psf.upfronthosting.co.za> New submission from ??????? ???????? : Overwriting of default values not working, and used default value of base class. Unittest file if attachment described a problem. ---------- components: Library (Lib) files: python_test.py messages: 313843 nosy: ??????? ???????? priority: normal severity: normal status: open title: typing: Unexpected result with value of instance of class inherited from typing.NamedTuple type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file47487/python_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 02:36:54 2018 From: report at bugs.python.org (Thomas Moreau) Date: Thu, 15 Mar 2018 06:36:54 +0000 Subject: [New-bugs-announce] [issue33078] Queue with maxsize can lead to deadlocks Message-ID: <1521095814.22.0.467229070634.issue33078@psf.upfronthosting.co.za> New submission from Thomas Moreau : The fix for the Queue._feeder does not properly handle the size of the Queue. This can lead to a situation where the Queue is considered as Full when it is empty. Here is a reproducing script: ``` import multiprocessing as mp q = mp.Queue(1) class FailPickle(): def __reduce__(self): raise ValueError() q.put(FailPickle()) print("Queue is full:", q.full()) q.put(0) print(f"Got result: {q.get()}") ``` ---------- components: Library (Lib) messages: 313855 nosy: davin, pitrou, tomMoral priority: normal severity: normal status: open title: Queue with maxsize can lead to deadlocks type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 06:20:38 2018 From: report at bugs.python.org (Simon Lipp) Date: Thu, 15 Mar 2018 10:20:38 +0000 Subject: [New-bugs-announce] [issue33079] subprocess: document the interaction between subprocess.Popen and os.set_inheritable Message-ID: <1521109238.05.0.467229070634.issue33079@psf.upfronthosting.co.za> New submission from Simon Lipp : >From current `os` documentation: > A file descriptor has an ?inheritable? flag which indicates if the file descriptor can be inherited by child processes from current `subprocess` documentation: > If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed It would be helpful to explicitly specify that subprocess.Popen does not takes into account the inheritable flag ; thas is, that inheritable fds will still be closed with open_fds = False, and that non-inheritable fds will still be kept with open_fds = True. 
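A small POSIX-only sketch (not from the report; the child snippet and names are illustrative) of the first point above: a descriptor explicitly marked inheritable with os.set_inheritable() is still closed in the child when close_fds=True, and pass_fds is the supported way to keep it open:

```python
# Sketch: close_fds=True (the default since 3.2) ignores the inheritable
# flag; only descriptors listed in pass_fds survive in the child.
import os
import subprocess
import sys

r, w = os.pipe()
os.set_inheritable(r, True)  # mark the read end inheritable

child = ("import os, sys\n"
         "fd = int(sys.argv[1])\n"
         "try:\n"
         "    os.fstat(fd)\n"
         "    print('fd is open in the child')\n"
         "except OSError:\n"
         "    print('fd is closed in the child')\n")

# Expected: 'fd is closed in the child' despite the inheritable flag.
subprocess.run([sys.executable, "-c", child, str(r)], close_fds=True)
# Expected: 'fd is open in the child' because the fd is passed explicitly.
subprocess.run([sys.executable, "-c", child, str(r)], pass_fds=(r,))
```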
---------- assignee: docs at python components: Documentation messages: 313868 nosy: docs at python, sloonz priority: normal severity: normal status: open title: subprocess: document the interaction between subprocess.Popen and os.set_inheritable versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 12:46:10 2018 From: report at bugs.python.org (Alexander Kanavin) Date: Thu, 15 Mar 2018 16:46:10 +0000 Subject: [New-bugs-announce] [issue33080] regen-importlib is causing build races against other regen-all targets in Makefile.pre.in Message-ID: <1521132370.51.0.467229070634.issue33080@psf.upfronthosting.co.za> New submission from Alexander Kanavin : You can see here: https://github.com/python/cpython/blob/master/Makefile.pre.in#L708 that regen-importlib is building a binary from .o files which are built from .c and .h files, which are, at the same time, regenerated by other regen- targets. This does cause build errors in heavily parallelized builds, we've been seeing it regularly in Yocto Project lately: https://bugzilla.yoctoproject.org/show_bug.cgi?id=12596 I tried to see if I can easily correct target dependencies in the makefile, but couldn't figure it out. So, a workaround, for us, would be to issue 'make regen-importlib' ahead of other things: make regen-importlib make regen-all ---------- components: Build messages: 313894 nosy: Alexander Kanavin priority: normal severity: normal status: open title: regen-importlib is causing build races against other regen-all targets in Makefile.pre.in versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 14:05:17 2018 From: report at bugs.python.org (Henrique Andrade) Date: Thu, 15 Mar 2018 18:05:17 +0000 Subject: [New-bugs-announce] [issue33081] multiprocessing Queue leaks a file descriptor associated with the pipe writer Message-ID: <1521137117.03.0.467229070634.issue33081@psf.upfronthosting.co.za> New submission from Henrique Andrade : A simple example like such demonstrates that one of the file descriptors associated with the underlying pipe will be leaked: >>> from multiprocessing.queues import Queue >>> x = Queue() >>> x.close() Right after the queue is created we get (assuming the Python interpreter is associated with pid 8096 below): > ll /proc/8096/fd total 0 dr-x------ 2 hcma hcma 0 2018-03-15 14:03:23.210089578 -0400 . dr-xr-xr-x 9 hcma hcma 0 2018-03-15 14:03:23.190089760 -0400 .. lrwx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 0 -> /dev/pts/25 lrwx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 1 -> /dev/pts/25 lrwx------ 1 hcma hcma 64 2018-03-15 14:03:23.210089578 -0400 2 -> /dev/pts/25 lr-x------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 3 -> pipe:[44076946] l-wx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 4 -> pipe:[44076946] lr-x------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 5 -> /dev/urandom After close(): > ll /proc/8096/fd total 0 dr-x------ 2 hcma hcma 0 2018-03-15 14:03:23.210089578 -0400 . dr-xr-xr-x 9 hcma hcma 0 2018-03-15 14:03:23.190089760 -0400 .. 
lrwx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 0 -> /dev/pts/25 lrwx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 1 -> /dev/pts/25 lrwx------ 1 hcma hcma 64 2018-03-15 14:03:23.210089578 -0400 2 -> /dev/pts/25 lr-x------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 3 -> pipe:[44076946] l-wx------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 4 -> pipe:[44076946] lr-x------ 1 hcma hcma 64 2018-03-15 14:03:33.145998954 -0400 5 -> /dev/urandom ---------- components: Library (Lib) messages: 313899 nosy: Henrique Andrade priority: normal severity: normal status: open title: multiprocessing Queue leaks a file descriptor associated with the pipe writer versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 14:50:50 2018 From: report at bugs.python.org (Chad) Date: Thu, 15 Mar 2018 18:50:50 +0000 Subject: [New-bugs-announce] [issue33082] multiprocessing docs bury very important 'callback=' args Message-ID: <1521139850.07.0.467229070634.issue33082@psf.upfronthosting.co.za> New submission from Chad : Callbacks are really important in multiprocessing. Doc writer almost ignores them. ---------- assignee: docs at python components: Documentation messages: 313905 nosy: chadmiller-amzn, docs at python priority: normal severity: normal status: open title: multiprocessing docs bury very important 'callback=' args type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 15 16:38:33 2018 From: report at bugs.python.org (Mark Dickinson) Date: Thu, 15 Mar 2018 20:38:33 +0000 Subject: [New-bugs-announce] [issue33083] math.factorial accepts non-integral Decimal instances Message-ID: <1521146313.09.0.467229070634.issue33083@psf.upfronthosting.co.za> New submission from Mark Dickinson : Observed by Terry Reedy in the issue #25735 discussion (msg255479): >>> factorial(decimal.Decimal(5.2)) 120 This should be either raising an exception (either ValueError or TypeError, depending on whether we want to accept only integral Decimal values, or prohibit Decimal values altogether), or possibly returning an approximation to Gamma(6.2) (=169.406099461722999629...) 
I'd prefer that we prohibit a Decimal input altogether, but accepting integral Decimal instances would parallel the current behaviour with floats: >>> factorial(5.2) Traceback (most recent call last): File "", line 1, in ValueError: factorial() only accepts integral values >>> factorial(5.0) 120 Terry also observed: >>> factorial(Fraction(5)) Traceback (most recent call last): File "", line 1, in TypeError: an integer is required (got type Fraction) ---------- messages: 313912 nosy: facundobatista, mark.dickinson, rhettinger, skrah, terry.reedy priority: normal severity: normal status: open title: math.factorial accepts non-integral Decimal instances type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 01:58:02 2018 From: report at bugs.python.org (Luc) Date: Fri, 16 Mar 2018 05:58:02 +0000 Subject: [New-bugs-announce] [issue33084] Computing median, median_high an median_low in statistics library Message-ID: <1521179882.82.0.467229070634.issue33084@psf.upfronthosting.co.za> New submission from Luc : When a list or dataframe serie contains NaN(s), the median, median_low and median_high are computed in Python 3.6.4 statistics library, however, the results are wrong. Either, it should return a NaN just like when we try to compute a mean or point the user to drop the NaNs before computing those statistics. Example: import numpy as np import statistics as stats data = [75, 90,85, 92, 95, 80, np.nan] Median = stats.median(data) Median_low = stats.median_low(data) Median_high = stats.median_high(data) The results from above return ALL 90 which are incorrect. Correct answers should be: Median = 87.5 Median_low = 85 Median_high = 92 Thanks, Luc ---------- components: Library (Lib) messages: 313933 nosy: dcasmr priority: normal severity: normal status: open title: Computing median, median_high an median_low in statistics library type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 02:59:50 2018 From: report at bugs.python.org (chenkai) Date: Fri, 16 Mar 2018 06:59:50 +0000 Subject: [New-bugs-announce] [issue33085] *** Error in `python': double free or corruption (out): 0x00007ff5254d50d0 *** Message-ID: <1521183590.06.0.467229070634.issue33085@psf.upfronthosting.co.za> New submission from chenkai <13016135670 at 163.com>: When I finished the installation of readline (6.2.4.1? and then run python, it's crashed? 
*** Error in `python': double free or corruption (out): 0x00007ff5254d50d0 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x7c503)[0x7ff52416e503] /lib/libpython3.6m.so.1.0(PyOS_Readline+0xec)[0x7ff524e3ff3c] /lib/libpython3.6m.so.1.0(+0x694d6)[0x7ff524e414d6] /lib/libpython3.6m.so.1.0(+0x6ae78)[0x7ff524e42e78] /lib/libpython3.6m.so.1.0(PyTokenizer_Get+0x9)[0x7ff524e43cb9] /lib/libpython3.6m.so.1.0(+0x6811e)[0x7ff524e4011e] /lib/libpython3.6m.so.1.0(PyParser_ASTFromFileObject+0x8b)[0x7ff524f76ecb] /lib/libpython3.6m.so.1.0(+0x19f0ea)[0x7ff524f770ea] /lib/libpython3.6m.so.1.0(PyRun_InteractiveLoopFlags+0x76)[0x7ff524f77416] /lib/libpython3.6m.so.1.0(PyRun_AnyFileExFlags+0x3e)[0x7ff524f77c7e] /lib/libpython3.6m.so.1.0(Py_Main+0xd97)[0x7ff524f93827] python(main+0x16c)[0x400b1c] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7ff524113b35] python[0x400bda] ======= Memory map: ======== 00400000-00401000 r-xp 00000000 08:03 52652831 /root/python36/bin/python3 00601000-00602000 r--p 00001000 08:03 52652831 /root/python36/bin/python3 00602000-00603000 rw-p 00002000 08:03 52652831 /root/python36/bin/python3 01bb5000-01c92000 rw-p 00000000 00:00 0 [heap] 7ff518000000-7ff518021000 rw-p 00000000 00:00 0 7ff518021000-7ff51c000000 ---p 00000000 00:00 0 7ff51d0c4000-7ff51d0d9000 r-xp 00000000 08:03 84 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 7ff51d0d9000-7ff51d2d8000 ---p 00015000 08:03 84 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 7ff51d2d8000-7ff51d2d9000 r--p 00014000 08:03 84 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 7ff51d2d9000-7ff51d2da000 rw-p 00015000 08:03 84 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 7ff51d2da000-7ff51d2ff000 r-xp 00000000 08:03 106273 /usr/lib64/libtinfo.so.5.9 7ff51d2ff000-7ff51d4ff000 ---p 00025000 08:03 106273 /usr/lib64/libtinfo.so.5.9 7ff51d4ff000-7ff51d503000 r--p 00025000 08:03 106273 /usr/lib64/libtinfo.so.5.9 7ff51d503000-7ff51d504000 rw-p 00029000 08:03 106273 /usr/lib64/libtinfo.so.5.9 7ff51d504000-7ff51d52a000 r-xp 00000000 08:03 90038 /usr/lib64/libncurses.so.5.9 7ff51d52a000-7ff51d729000 ---p 00026000 08:03 90038 /usr/lib64/libncurses.so.5.9 7ff51d729000-7ff51d72a000 r--p 00025000 08:03 90038 /usr/lib64/libncurses.so.5.9 7ff51d72a000-7ff51d72b000 rw-p 00026000 08:03 90038 /usr/lib64/libncurses.so.5.9 7ff51d73e000-7ff51d779000 r-xp 00000000 08:03 18819792 /root/python36/lib/python3.6/site-packages/readline.cpython-36m-x86_64-linux-gnu.so 7ff51d779000-7ff51d979000 ---p 0003b000 08:03 18819792 /root/python36/lib/python3.6/site-packages/readline.cpython-36m-x86_64-linux-gnu.so 7ff51d979000-7ff51d97b000 r--p 0003b000 08:03 18819792 /root/python36/lib/python3.6/site-packages/readline.cpython-36m-x86_64-linux-gnu.so 7ff51d97b000-7ff51d982000 rw-p 0003d000 08:03 18819792 /root/python36/lib/python3.6/site-packages/readline.cpython-36m-x86_64-linux-gnu.so 7ff51d982000-7ff51d9c4000 rw-p 00000000 00:00 0 7ff51d9c4000-7ff51d9c6000 r-xp 00000000 08:03 35550108 /usr/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so 7ff51d9c6000-7ff51dbc6000 ---p 00002000 08:03 35550108 /usr/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so 7ff51dbc6000-7ff51dbc7000 r--p 00002000 08:03 35550108 /usr/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so 7ff51dbc7000-7ff51dbc9000 rw-p 00003000 08:03 35550108 /usr/lib/python3.6/lib-dynload/_heapq.cpython-36m-x86_64-linux-gnu.so 7ff51dbc9000-7ff5240f2000 r--p 00000000 08:03 16882256 /usr/lib/locale/locale-archive 7ff5240f2000-7ff5242a8000 r-xp 00000000 08:03 65201 /usr/lib64/libc-2.17.so 7ff5242a8000-7ff5244a8000 ---p 
001b6000 08:03 65201 /usr/lib64/libc-2.17.so 7ff5244a8000-7ff5244ac000 r--p 001b6000 08:03 65201 /usr/lib64/libc-2.17.so 7ff5244ac000-7ff5244ae000 rw-p 001ba000 08:03 65201 /usr/lib64/libc-2.17.so 7ff5244ae000-7ff5244b3000 rw-p 00000000 00:00 0 7ff5244b3000-7ff5245b3000 r-xp 00000000 08:03 65209 /usr/lib64/libm-2.17.so 7ff5245b3000-7ff5247b3000 ---p 00100000 08:03 65209 /usr/lib64/libm-2.17.so 7ff5247b3000-7ff5247b4000 r--p 00100000 08:03 65209 /usr/lib64/libm-2.17.so 7ff5247b4000-7ff5247b5000 rw-p 00101000 08:03 65209 /usr/lib64/libm-2.17.so 7ff5247b5000-7ff5247b7000 r-xp 00000000 08:03 90003 /usr/lib64/libutil-2.17.so 7ff5247b7000-7ff5249b6000 ---p 00002000 08:03 90003 /usr/lib64/libutil-2.17.so 7ff5249b6000-7ff5249b7000 r--p 00001000 08:03 90003 /usr/lib64/libutil-2.17.so 7ff5249b7000-7ff5249b8000 rw-p 00002000 08:03 90003 /usr/lib64/libutil-2.17.so 7ff5249b8000-7ff5249ba000 r-xp 00000000 08:03 65207 /usr/lib64/libdl-2.17.so 7ff5249ba000-7ff524bba000 ---p 00002000 08:03 65207 /usr/lib64/libdl-2.17.so 7ff524bba000-7ff524bbb000 r--p 00002000 08:03 65207 /usr/lib64/libdl-2.17.so 7ff524bbb000-7ff524bbc000 rw-p 00003000 08:03 65207 /usr/lib64/libdl-2.17.so 7ff524bbc000-7ff524bd3000 r-xp 00000000 08:03 89995 /usr/lib64/libpthread-2.17.so 7ff524bd3000-7ff524dd2000 ---p 00017000 08:03 89995 /usr/lib64/libpthread-2.17.so 7ff524dd2000-7ff524dd3000 r--p 00016000 08:03 89995 /usr/lib64/libpthread-2.17.so 7ff524dd3000-7ff524dd4000 rw-p 00017000 08:03 89995 /usr/lib64/libpthread-2.17.so 7ff524dd4000-7ff524dd8000 rw-p 00000000 00:00 0 ???(??) ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 313934 nosy: chenkai priority: normal severity: normal status: open title: *** Error in `python': double free or corruption (out): 0x00007ff5254d50d0 *** type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 12:58:08 2018 From: report at bugs.python.org (Gabriel Hearot) Date: Fri, 16 Mar 2018 16:58:08 +0000 Subject: [New-bugs-announce] [issue33086] pip: IndexError Message-ID: <1521219488.5.0.467229070634.issue33086@psf.upfronthosting.co.za> New submission from Gabriel Hearot : Traceback (most recent call last): File "setup.py", line 45, in classifiers=[] File "/usr/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/usr/lib/python3.6/distutils/command/upload.py", line 63, in run self.upload_file(command, pyversion, filename) File "/usr/lib/python3.6/distutils/command/upload.py", line 156, in upload_file value = value[1] IndexError: tuple index out of range ---------- components: Library (Lib) messages: 313958 nosy: hearot priority: normal severity: normal status: open title: pip: IndexError type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 13:44:07 2018 From: report at bugs.python.org (Elliot Jenner) Date: Fri, 16 Mar 2018 17:44:07 +0000 Subject: [New-bugs-announce] [issue33087] No reliable clean shutdown method Message-ID: <1521222247.28.0.467229070634.issue33087@psf.upfronthosting.co.za> New submission from Elliot Jenner : Ptyhon lacks a reliable clean shutdown method. 
sys.exit(), which should be this method, does not reliably perform this function: it merely terminates the thread it is called from (duplicating the functionality of thread.exit()). exit() and quit() are not supposed to be used outside interactive sessions, raise SystemExit has the same issues as sys.exit() and is bad practice, and os._exit() immediately kills everything without cleaning up, which can leave residuals behind. This is especially important because some interpreters will invisibly break calls (including, most worryingly, try-except clauses) into threads, so whichever method is used ends up being called in a non-main thread without the programmer being able to do anything about it, even when threading is not intentionally being used. Ideally, sys.exit() should be changed to properly close down the entire program, as there is no need for two functionally identical exit functions, but this may cause legacy issues. Regardless, a method that ALWAYS kills the program and all threads while still cleaning up, regardless of where it is called from, is needed. ---------- components: Library (Lib) messages: 313961 nosy: Void2258 priority: normal severity: normal status: open title: No reliable clean shutdown method type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 14:10:26 2018 From: report at bugs.python.org (Jeff DuMonthier) Date: Fri, 16 Mar 2018 18:10:26 +0000 Subject: [New-bugs-announce] [issue33088] Cannot pass a SyncManager proxy to a multiprocessing subprocess on Windows Message-ID: <1521223826.02.0.467229070634.issue33088@psf.upfronthosting.co.za> New submission from Jeff DuMonthier : The following simple example code creates a started SyncManager and passes it as an argument to a subprocess started with multiprocessing.Process(). It works on Linux and Mac OS but fails on Windows.
import multiprocessing as mp def subProcFn(m1): pass if __name__ == "__main__": __spec__ = None m1 = mp.Manager() p1 = mp.Process(target=subProcFn, args=(m1,)) p1.start() p1.join() This is the traceback in Spyder: runfile('D:/ManagerBug.py', wdir='D:') Traceback (most recent call last): File "", line 1, in runfile('D:/ManagerBug.py', wdir='D:') File "...\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "...\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "D:/ManagerBug.py", line 22, in p1.start() File "...\anaconda3\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "...\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "...\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "...\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__ reduction.dump(process_obj, to_child) File "...\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: can't pickle weakref objects ---------- components: Windows messages: 313964 nosy: jjdmon, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cannot pass a SyncManager proxy to a multiprocessing subprocess on Windows type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 14:50:58 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 16 Mar 2018 18:50:58 +0000 Subject: [New-bugs-announce] [issue33089] Add multi-dimensional Euclidean distance function to the math module Message-ID: <1521226258.78.0.467229070634.issue33089@psf.upfronthosting.co.za> New submission from Raymond Hettinger : A need for a distance-between-two-points function arises frequently enough to warrant consideration for inclusion in the math module. It shows-up throughout mathematics -- everywhere from simple homework problems for kids to machine learning and computer vision. In the latter cases, the function is called frequently and would benefit from a fast C implementation that includes good error checking and is algorithmically smart about numerical issues such as overflow and loss-of-precision. 
A simple implementation would be something like this: def dist(p, q): 'Multi-dimensional Euclidean distance' # XXX needs error checking: len(p) == len(q) return sqrt(sum((x0 - x1) ** 2 for x0, x1 in zip(p, q))) The implementation could also include value added features such as hypot() style scaling to mitigate overflow during the squaring step: def dist2(p, q): # https://en.wikipedia.org/wiki/Hypot#Implementation diffs = [x0 - x1 for x0, x1 in zip(p, q)] scale = max(diffs, key=abs) return abs(scale) * sqrt(fsum((d/scale) ** 2 for d in diffs)) ---------- components: Library (Lib) messages: 313967 nosy: mark.dickinson, rhettinger, skrah, steven.daprano, tim.peters priority: normal severity: normal status: open title: Add multi-dimensional Euclidean distance function to the math module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 15:43:10 2018 From: report at bugs.python.org (Robert Xiao) Date: Fri, 16 Mar 2018 19:43:10 +0000 Subject: [New-bugs-announce] [issue33090] race condition between send and recv in _ssl with non-zero timeout Message-ID: <1521229390.38.0.467229070634.issue33090@psf.upfronthosting.co.za> New submission from Robert Xiao : Environment: Several versions of Python (see below), macOS 10.12.6 The attached script creates an SSL echo server (fairly standard), connects to the server, and spawns a read and write thread. The write thread repeatedly shovels data into the connection, while the read thread receives data and prints a dot for each successful read. The socket has a timeout of 10 seconds set: if the timeout is 0, the script blows up immediately due to blocking, and if the timeout is -1 nothing bad happens. On Linux and the default Mac Python 2.6, the script prints an endless series of dots as expected. On most other versions of Mac Python (2.7, 3.5, 3.6), the script dies quite quickly (within 1-2 seconds) with an error like this: $ /usr/bin/python2.7 test_ssl.py Got connection from ('127.0.0.1', 49683) ..................................Exception in thread ReadThread: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run self.__target(*self.__args, **self.__kwargs) File "test_ssl.py", line 93, in read_thread csocket.recv() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 734, in recv return self.read(buflen) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 621, in read v = self._sslobj.read(len or 1024) error: [Errno 35] Resource temporarily unavailable The error can be one of the following: [Py2.7] error: [Errno 35] Resource temporarily unavailable [Py2.7] SSLWantReadError: The operation did not complete (read) (_ssl.c:1752) [Py3.x] BlockingIOError: [Errno 35] Resource temporarily unavailable [Py3.x] ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:1974) [Py3.6] ssl.SSLError: Invalid error code (_ssl.c:2217) The last error occurs under much rarer circumstances, but appears to be associated with the same underlying bug. The "invalid error code" is 0 when tested with a debugger, indicating a successful completion (but somehow the error logic gets triggered anyway). 
This was tested with the following configurations: macOS: /usr/bin/python2.6: Python 2.6.9 from Apple [ok] macOS: /usr/bin/python2.7: Python 2.7.10 from Apple [crashes] macOS: /usr/local/bin/python2.7: Python 2.7.13 from Python.org [crashes] macOS: /usr/local/bin/python3.5: Python 3.5.1 from Python.org [crashes] macOS: /usr/local/bin/python3.6: Python 3.6.4 from Python.org [crashes] macOS: /opt/local/bin/python2.7: Python 2.7.14 from MacPorts [crashes] A number of these were tested on a second machine (to rule out any strange environment issues), and the same results were obtained. ---------- assignee: christian.heimes components: SSL, macOS files: test_ssl.py messages: 313970 nosy: christian.heimes, ned.deily, nneonneo, ronaldoussoren priority: normal severity: normal status: open title: race condition between send and recv in _ssl with non-zero timeout type: crash versions: Python 2.7, Python 3.5, Python 3.6 Added file: https://bugs.python.org/file47489/test_ssl.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 16 17:22:01 2018 From: report at bugs.python.org (Alfred Krohmer) Date: Fri, 16 Mar 2018 21:22:01 +0000 Subject: [New-bugs-announce] [issue33091] ssl.SSLError: Invalid error code (_ssl.c:2217) Message-ID: <1521235321.12.0.467229070634.issue33091@psf.upfronthosting.co.za> New submission from Alfred Krohmer : OpenSSL version: 1.1.0.g-1 OS: Arch Linux I'm creating an SSL socket like this: s = socket.create_connection((self.host, 443), 60) c = ssl.create_default_context() c.set_alpn_protocols(['spdy/2']) self.ss = c.wrap_socket(s, server_hostname=self.host) I'm then reading from the socket in one thread and writing to it in another thread. I'm experiencing strange behaviour. Sometimes I randomly get the error message in the title when invoking self.ss.recv(). After investigating the exception, I found that exc.errno = 10, which, according to the OpenSSL documentation means SSL_ERROR_WANT_ASYNC_JOB. This constant is never used in the _ssl.c file in cpython. This seems to me like an OpenSSL error that needs to be handled in the Python implementation but is not. Also sometimes I have random write timeouts when invoking self.ss.send() (in those cases it seems unlikely to me that those are caused by the server). Also I found here: https://github.com/python/cpython/blob/v3.6.4/Modules/_ssl.c#L2184 that Python uses SSL_get_error in an non-mutex locked section. But the OpenSSL documentation of SSL_get_error states the following: In addition to ssl and ret, SSL_get_error() inspects the current thread's OpenSSL error queue. Thus, SSL_get_error() must be used in the same thread that performed the TLS/SSL I/O operation, and no other OpenSSL function calls should appear in between. The current thread's error queue must be empty before the TLS/SSL I/O operation is attempted, or SSL_get_error() will not work reliably. According to that, shouldn't the _PySSL_UPDATE_ERRNO_IF macro be called *after* PySSL_END_ALLOW_THREADS? 
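Not part of the report, but for readers hitting the same symptom: until the error handling is sorted out, one blunt workaround is to make sure only one thread touches the SSL socket at a time, so SSL_get_error() is always evaluated in the thread that performed the I/O. A minimal sketch, assuming `ss` is an already-wrapped SSL socket with a timeout set (the function names are made up):

```python
import threading

_ssl_lock = threading.Lock()  # serializes all use of the shared SSL socket

def locked_send(ss, data):
    with _ssl_lock:
        ss.sendall(data)

def locked_recv(ss, bufsize=4096):
    with _ssl_lock:
        return ss.recv(bufsize)
```

With a socket timeout set, recv() releases the lock periodically so the sender is not starved; this obviously works around, rather than fixes, the underlying _ssl.c issue.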
---------- assignee: christian.heimes components: SSL messages: 313973 nosy: christian.heimes, devkid priority: normal severity: normal status: open title: ssl.SSLError: Invalid error code (_ssl.c:2217) type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 17 08:33:10 2018 From: report at bugs.python.org (Mark Shannon) Date: Sat, 17 Mar 2018 12:33:10 +0000 Subject: [New-bugs-announce] [issue33092] The bytecode for f-string formatting is inefficient. Message-ID: <1521289990.47.0.467229070634.issue33092@psf.upfronthosting.co.za> New submission from Mark Shannon : f-string expressions can be formatted in four ways: with or without a conversion and with or without a format specifier Rather than have one bytecode that parses the opcode argument at runtime it would be more efficient and produce a cleaner interpreter for the compiler to produce one or two bytecode as required. The bytecodes should be: CONVERT_VALUE convert_fn FORMAT_SIMPLE FORMAT_WITH_SPEC For simple format expressions with no conversion or format specifier, which make up about 3/4 of all format expressions in the standard library, just the bytecode FORMAT_SIMPLE need be executed. ---------- components: Interpreter Core messages: 314000 nosy: Mark.Shannon priority: normal severity: normal status: open title: The bytecode for f-string formatting is inefficient. type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 17 13:36:10 2018 From: report at bugs.python.org (Eric Toombs) Date: Sat, 17 Mar 2018 17:36:10 +0000 Subject: [New-bugs-announce] [issue33093] Fatal error on SSL transport Message-ID: <1521308170.69.0.467229070634.issue33093@psf.upfronthosting.co.za> New submission from Eric Toombs : I'm not exactly sure what caused this error, but I was a client receiving messages on a websocket for a while (about 12 hours). Suddenly all incoming data stopped, then nothing happened for about 5 hours. Finally, I received a ConnectionClosed and the following appeared on stdout: ``` Fatal error on SSL transport protocol: transport: <_SelectorSocketTransport closing fd=15 read=idle write=> Traceback (most recent call last): File "/usr/lib/python3.6/asyncio/sslproto.py", line 636, in _process_write_backlog ssldata, offset = self._sslpipe.feed_appdata(data, offset) AttributeError: 'NoneType' object has no attribute 'feed_appdata' ``` I can't imagine this is what was supposed to happen. This has happened about three times now, so I can confirm it is reproducible. I'm writing a minimalist client now to see if I can isolate the problem any further. It's still unclear, though, which layer is responsible---websockets or asyncio. 
The websockets issue is here: https://github.com/aaugustin/websockets/issues/356 ---------- components: asyncio messages: 314008 nosy: Eric Toombs, asvetlov, yselivanov priority: normal severity: normal status: open title: Fatal error on SSL transport versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 17 19:03:50 2018 From: report at bugs.python.org (Adrian Stachlewski) Date: Sat, 17 Mar 2018 23:03:50 +0000 Subject: [New-bugs-announce] [issue33094] dataclasses: ClassVar attributes are not working properly Message-ID: <1521327830.3.0.467229070634.issue33094@psf.upfronthosting.co.za> New submission from Adrian Stachlewski : Class variables should behave in the same way whether with or without a ClassVar annotation. Unfortunately they do not.

class A:
    __slots__ = ()
    x: ClassVar = set()

A()  # it's ok

@dataclass
class B:
    __slots__ = ()
    x = set()

B()  # ok too

@dataclass
class C:
    __slots__ = ()
    # cannot use set() because of error
    x: ClassVar = field(default_factory=set)

C()  # AttributeError: 'C' object has no attribute 'x'

The exception is raised from the __init__ method; with the flag init=False nothing changes. Python version: 3.7.0b2 ---------- components: Library (Lib) messages: 314017 nosy: stachel priority: normal severity: normal status: open title: dataclasses: ClassVar attributes are not working properly type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 01:57:16 2018 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 18 Mar 2018 05:57:16 +0000 Subject: [New-bugs-announce] [issue33095] Cross-reference isolated mode from relevant locations Message-ID: <1521352636.34.0.467229070634.issue33095@psf.upfronthosting.co.za> New submission from Nick Coghlan : In https://bugs.python.org/issue33053#msg313966, jwilk noted that it isn't obvious from https://docs.python.org/3/using/cmdline.html#cmdoption-m how to keep the current directory from being added to `sys.path` when using the -m switch. The answer is to pass the `-I` switch as well (to activate isolated mode), but there's no cross reference to help readers discover that fact. https://docs.python.org/3/using/cmdline.html#id2 is the main documentation for isolated mode, so the steps needed to close this issue are:

1. At least add a reference from the -m switch documentation to the -I switch documentation
2. Review the other parts of the `using` docs that describe how `sys.path` is initialised, and reference the -I switch documentation where relevant

---------- assignee: docs at python components: Documentation keywords: easy messages: 314022 nosy: docs at python, jwilk, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Cross-reference isolated mode from relevant locations type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 05:09:10 2018 From: report at bugs.python.org (Igor Yakovchenko) Date: Sun, 18 Mar 2018 09:09:10 +0000 Subject: [New-bugs-announce] [issue33096] ttk.Treeview.insert() does not allow to insert item with "False" iid Message-ID: <1521364150.91.0.467229070634.issue33096@psf.upfronthosting.co.za> New submission from Igor Yakovchenko : The ttk.Treeview.insert(... iid=None, ...)
method has a check:

    if iid:
        res = self.tk.call(self._w, "insert", parent, index, "-id", iid, *opts)
    else:
        res = self.tk.call(self._w, "insert", parent, index, *opts)

The documentation says that "If iid is specified, it is used as the item identifier", but as you can see from the code, iid is only used if it is truthy. It means that you cannot use iids like 0, 0.0 etc. ---------- components: Tkinter messages: 314032 nosy: gpolo, serhiy.storchaka, truestarecat priority: normal severity: normal status: open title: ttk.Treeview.insert() does not allow to insert item with "False" iid type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 12:41:13 2018 From: report at bugs.python.org (Mark Nemec) Date: Sun, 18 Mar 2018 16:41:13 +0000 Subject: [New-bugs-announce] [issue33097] concurrent.futures executors accept tasks after interpreter shutdown Message-ID: <1521391273.18.0.467229070634.issue33097@psf.upfronthosting.co.za> New submission from Mark Nemec : Currently, one can submit a task to an executor (both ThreadPoolExecutor and ProcessPoolExecutor) during interpreter shutdown. One way to do this is to register function fun with atexit as below.

@atexit.register
def fun():
    pool.submit(print, "apple")

The future is accepted and goes into PENDING state. However, this can cause issues if the _python_exit function (located in concurrent/futures/thread.py and/or concurrent/futures/process.py) executes before function fun. Function _python_exit will shut down the running workers in the pool, and hence there will be no workers running by the time fun is executed, so the future will be left in PENDING state forever. The solution submitted here is to instead raise a RuntimeError when a task is submitted during interpreter shutdown. This is the same behaviour as when the shutdown method of an executor is called explicitly. ---------- components: Library (Lib) messages: 314044 nosy: Mark Nemec priority: normal severity: normal status: open title: concurrent.futures executors accept tasks after interpreter shutdown type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 18:00:28 2018 From: report at bugs.python.org (Aristide Grange) Date: Sun, 18 Mar 2018 22:00:28 +0000 Subject: [New-bugs-announce] [issue33098] add implicit conversion for random.choice() on a dict Message-ID: <1521410428.93.0.467229070634.issue33098@psf.upfronthosting.co.za> New submission from Aristide Grange : In Python 3, the expression:

```python
random.choice(d)
```

where `d` is a `dict`, raises this error:

```
~/anaconda3/lib/python3.6/random.py in choice(self, seq)
    256         except ValueError:
    257             raise IndexError('Cannot choose from an empty sequence') from None
--> 258         return seq[i]
    259
    260     def shuffle(self, x, random=None):

KeyError: 2
```

Converting `d` into a list restores Python 2's behavior:

```python
random.choice(list(d))
```

I am aware that the keys of a dict now have their own type. But IMHO the error message is rather uninformative, and above all, couldn't this conversion be made implicitly under the hood?
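For illustration (this sketch is not from the report): until or unless such a conversion is added, the explicit workarounds are short, picking either a random key or a random key/value pair:

```python
import random

d = {'a': 1, 'b': 2, 'c': 3}

key = random.choice(list(d))             # random key
pair = random.choice(list(d.items()))    # random (key, value) pair
print(key, d[key], pair)
```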
---------- messages: 314062 nosy: Aristide Grange priority: normal severity: normal status: open title: add implicit conversion for random.choice() on a dict type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 18:18:03 2018 From: report at bugs.python.org (Jay Yin) Date: Sun, 18 Mar 2018 22:18:03 +0000 Subject: [New-bugs-announce] [issue33099] test_poplib hangs with the changes done in PR Message-ID: <1521411483.84.0.467229070634.issue33099@psf.upfronthosting.co.za> New submission from Jay Yin : My test hangs locally on my computer with the changes I've done in bpo-32642, but doesn't hang on Travis CI. Is anyone able to help check what's wrong here? (It sounds like another edge case with my environment, but I could be wrong.) The trace for the command: https://pastebin.com/q4FKnPZH ---------- components: Tests messages: 314064 nosy: jayyin11043 priority: normal severity: normal status: open title: test_poplib hangs with the changes done in PR versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 18 23:30:15 2018 From: report at bugs.python.org (Adrian Stachlewski) Date: Mon, 19 Mar 2018 03:30:15 +0000 Subject: [New-bugs-announce] [issue33100] dataclasses and __slots__ - non-default argument (member_descriptor) Message-ID: <1521430215.98.0.467229070634.issue33100@psf.upfronthosting.co.za> New submission from Adrian Stachlewski : I've tried to declare these two classes:

@dataclass
class Base:
    __slots__ = ('x',)
    x: Any

@dataclass
class Derived(Base):
    x: int
    y: int

As long as I have correctly understood PEP 557 (the inheritance part), changing the type of a variable is possible. This code produces the error:

TypeError: non-default argument 'y' follows default argument 'x'

The x variable in the Derived class has its default changed from MISSING to a member_descriptor, and that's the reason for the exception. ---------- components: Library (Lib) messages: 314077 nosy: stachel priority: normal severity: normal status: open title: dataclasses and __slots__ - non-default argument (member_descriptor) type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 09:19:15 2018 From: report at bugs.python.org (Yomguithereal) Date: Mon, 19 Mar 2018 13:19:15 +0000 Subject: [New-bugs-announce] [issue33101] Possible name inversion in heapq implementation Message-ID: <1521465555.06.0.467229070634.issue33101@psf.upfronthosting.co.za> New submission from Yomguithereal : Hello Python team, I might be hallucinating, but I am under the impression that the `heapq` module uses reversed naming. What I mean is that it seems to me that the _siftup function should actually be named _siftdown and, the other way around, _siftdown should be named _siftup. This has absolutely no practical consequence since the module works as it should, but I am a bit confused since I don't know if the module got the naming wrong or if it followed another canonical naming I don't know about. I am willing to open a PR to fix this if the name reversal is confirmed. Good day to you.
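Not part of the report, but the observation is easy to reproduce with the private helpers (private API, so this may change): _siftup() moves an oversized root down toward the leaves, which is what most textbooks call a sift-down.

```python
import heapq

h = [10, 1, 2, 3, 4, 5, 6]   # a heap whose root is too large
heapq._siftup(h, 0)          # despite the name, the 10 moves *down* the tree
print(h)                     # expected: [1, 3, 2, 10, 4, 5, 6]
```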
---------- components: Library (Lib) messages: 314093 nosy: Yomguithereal priority: normal severity: normal status: open title: Possible name inversion in heapq implementation type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 09:25:56 2018 From: report at bugs.python.org (amjad ben hedhili) Date: Mon, 19 Mar 2018 13:25:56 +0000 Subject: [New-bugs-announce] [issue33102] get the nth folder Message-ID: <1521465956.78.0.467229070634.issue33102@psf.upfronthosting.co.za> New submission from amjad ben hedhili : It will be handy if there was an os or os.path function that returns the path to the nth directory in a given path for example: given path = "C:\Users\User\AppData\Local\Programs\Python\Python36\Lib\asyncio\__init__.py" os.path.nthpath(path, 2) returns "C:\Users\User\AppData\Local\Programs\Python\Python36\Lib" ---------- messages: 314094 nosy: amjad ben hedhili priority: normal severity: normal status: open title: get the nth folder type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 09:36:19 2018 From: report at bugs.python.org (amjad ben hedhili) Date: Mon, 19 Mar 2018 13:36:19 +0000 Subject: [New-bugs-announce] [issue33103] Syntax to get multiple items from an iterable Message-ID: <1521466579.3.0.467229070634.issue33103@psf.upfronthosting.co.za> New submission from amjad ben hedhili : It will be much of improvement for readability to write: my_list = ["John", "Richard", "Alice", 1, True, 2.1, "End"] a, b, c = my_list[1, 3, -1] instead of: my_list = ["John", "Richard", "Alice", 1, True, 2.1, "End"] a, b, c = my_list[1], my_list[3], my_list[-1] ---------- messages: 314095 nosy: amjad ben hedhili priority: normal severity: normal status: open title: Syntax to get multiple items from an iterable type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 12:21:06 2018 From: report at bugs.python.org (Eric Appelt) Date: Mon, 19 Mar 2018 16:21:06 +0000 Subject: [New-bugs-announce] [issue33104] Documentation for EXTENDED_ARG in dis module is incorrect for >=3.6 Message-ID: <1521476466.68.0.467229070634.issue33104@psf.upfronthosting.co.za> New submission from Eric Appelt : The documentation for the EXTENDED_ARG instruction in the dis module documentation refers to the way the opcode worked before 3.6: https://docs.python.org/3.6/library/dis.html#opcode-EXTENDED_ARG As I understand, since moving to 2-byte wordcode in 3.6, each EXTENDED_ARG effectively adds a byte to the argument of the next instruction and they can be chained to allow up to a 32-bit argument. The current documentation refers the 2-byte arguments from the older bytecode used in 3.5 and below. I'm trying to think of a clear and concise wording for how it works now and will add a PR to fix this issue unless someone gets to it before me. 
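Not from the report, but a small sketch of the 3.6+ wordcode behaviour described above: each EXTENDED_ARG prefix contributes one extra byte to the next instruction's argument, so arguments above 255 need at least one prefix.

```python
import dis

# enough distinct constants and names that some arguments exceed one byte
src = "\n".join("x_%d = %d" % (i, i) for i in range(300))
code = compile(src, "<demo>", "exec")

ext = dis.opmap["EXTENDED_ARG"]
opcodes = code.co_code[::2]   # wordcode: every instruction is an (opcode, arg) byte pair
print(sum(op == ext for op in opcodes))  # non-zero: EXTENDED_ARG prefixes were emitted
```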
---------- assignee: docs at python components: Documentation messages: 314100 nosy: Eric Appelt, docs at python priority: normal severity: normal status: open title: Documentation for EXTENDED_ARG in dis module is incorrect for >=3.6 versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 14:28:31 2018 From: report at bugs.python.org (Luis Conejo-Alpizar) Date: Mon, 19 Mar 2018 18:28:31 +0000 Subject: [New-bugs-announce] [issue33105] os.isfile returns false on Windows when file path is longer than 260 characters Message-ID: <1521484111.52.0.467229070634.issue33105@psf.upfronthosting.co.za> New submission from Luis Conejo-Alpizar : Windows has a maximum path length limitation of 260 characters. This limitation, however, can be bypassed in the scenario described below. When this occurs, os.isfile() will return false, even when the affected file does exist. For Windows systems, the behavior should be for os.isfile() to raise an exception in this case, indicating that the maximum path length has been exceeded. Sample scenario:

1. Let's say you have a folder, named F1 and located in your local machine at this path: C:\tc\proj\MTV\cs_fft\Milo\Fries\STL\BLNA\F1\
2. Inside of that folder, you have a log file with this name: This_is_a_really_long_file_name_that_by_itself_is_not_capable_of_exceeding_the_path_length_limitation_Windows_has_in_pretty_much_every_single_version_of_Wind.log
3. The combined length of the path and the file is exactly 260 characters, so Windows lets you get away with it when the file is initially created and/or placed there.
4. Later, you decide to make the F1 folder available on your network, under this name: \\tst\tc\proj\MTV\cs_fft\Milo\Fries\STL\BLNA\F1\
5. Your log file continues to be in the folder, but its full network path is now 263 characters, effectively violating the maximum path length limitation.
6. If you use os.listdir() on the networked folder, the log file will come up.
7. Now, if you try os.path.isfile(os.path.join(networked_path, logfile_name)) it will return false, even though the file is indeed there and is indeed a file.

---------- components: Library (Lib) messages: 314109 nosy: ldconejo priority: normal severity: normal status: open title: os.isfile returns false on Windows when file path is longer than 260 characters type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 19 17:44:11 2018 From: report at bugs.python.org (sds) Date: Mon, 19 Mar 2018 21:44:11 +0000 Subject: [New-bugs-announce] [issue33106] Deleting a key in a read-only gdbm results in KeyError, not gdbm.error Message-ID: <1521495851.1.0.467229070634.issue33106@psf.upfronthosting.co.za> New submission from sds : Deleting a key from a read-only gdbm should raise gdbm.error, not KeyError:

>>> import gdbm
>>> db = gdbm.open("foo","n")  # create new
>>> db["a"] = "b"
>>> db.close()
>>> db = gdbm.open("foo","r")  # read only
>>> db["x"] = "1"
Traceback (most recent call last):
  File "", line 1, in
gdbm.error: Reader can't store   # correct
>>> db["a"]
'b'
>>> del db["a"]
Traceback (most recent call last):
  File "", line 1, in
KeyError: 'a'   # WRONG!
should be the same as above ---------- components: Library (Lib) messages: 314119 nosy: sam-s priority: normal severity: normal status: open title: Deleting a key in a read-only gdbm results in KeyError, not gdbm.error type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 09:10:42 2018 From: report at bugs.python.org (Siyuan Ren) Date: Tue, 20 Mar 2018 13:10:42 +0000 Subject: [New-bugs-announce] [issue33107] Feature request: more typing.SupportsXXX Message-ID: <1521551442.12.0.467229070634.issue33107@psf.upfronthosting.co.za> New submission from Siyuan Ren : Currently in module `typing` we have the following classes:

* SupportsInt
* SupportsFloat
* SupportsComplex
* SupportsBytes
* SupportsRound

There is no reason that people only need these classes. They may need, say, `SupportsIndex` to denote all integer-like types, `SupportsAdd` for arithmetic types, etc. It would be best if the list of `SupportsXXX` classes were expanded to be as complete as possible, and even better, if a mechanism for user-specified `SupportsXXX` classes were provided. ---------- components: Library (Lib) messages: 314141 nosy: Siyuan Ren priority: normal severity: normal status: open title: Feature request: more typing.SupportsXXX type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 09:22:06 2018 From: report at bugs.python.org (Kiril Dimitrov) Date: Tue, 20 Mar 2018 13:22:06 +0000 Subject: [New-bugs-announce] [issue33108] Unicode char 304 in lowercase has len 2 Message-ID: <1521552126.55.0.467229070634.issue33108@psf.upfronthosting.co.za> New submission from Kiril Dimitrov :

>>> chr(304)
'İ'
>>> chr(304).lower()
'i̇'
>>> len(chr(304).lower())
2

This breaks unicode text matching. There is no other unicode character with the same behaviour (in 3.6.2 and 3.6.4). ---------- components: Unicode messages: 314142 nosy: Kiril Dimitrov, ezio.melotti, vstinner priority: normal severity: normal status: open title: Unicode char 304 in lowercase has len 2 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 09:41:59 2018 From: report at bugs.python.org (Wolfgang Maier) Date: Tue, 20 Mar 2018 13:41:59 +0000 Subject: [New-bugs-announce] [issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True Message-ID: <1521553319.23.0.467229070634.issue33109@psf.upfronthosting.co.za> New submission from Wolfgang Maier : I find the True default for 'required', introduced as a result of issue 26510, quite cumbersome. With existing parsers it can unnecessarily break compatibility between Python 3.x versions only to make porting a bit easier for Python 2 users. I think, this late in the life cycle of Python 2, compatibility within Python 3 should be ranked higher than py2-to-py3 portability. Command line parsing of a package of mine has long used optional subparsers (without me even thinking much about the fact). Now in 3.7, running python3.7 -m MyPackage without arguments (the parser is in __main__.py) I get the ill-formatted error message:

__main__.py: error: the following arguments are required:

while my code in 3.3 - 3.6 was catching the empty Namespace returned and printing a help message.
Because the 'required' keyword argument did not exist in < 3.7 there was no simple way for me to write code that is compatible between all 3.x versions. What I ended up doing now is to check sys.argv before trying to parse things, then print the help message, when that only has a single item, just to keep my existing code working. OTOH, everything would be just fine with a default value of False. Also that truncated error message should be fixed before 3.7 gets released. ---------- components: Library (Lib) messages: 314145 nosy: Anthony Sottile, bethard, eric.araujo, memeplex, paul.j3, wolma priority: normal severity: normal status: open title: argparse: make new 'required' argument to add_subparsers default to False instead of True type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 12:12:43 2018 From: report at bugs.python.org (Sam Martin) Date: Tue, 20 Mar 2018 16:12:43 +0000 Subject: [New-bugs-announce] [issue33110] Adding a done callback to a concurrent.futures Future once it has already completed, may raise an exception, contrary to docs Message-ID: <1521562363.32.0.467229070634.issue33110@psf.upfronthosting.co.za> New submission from Sam Martin : Whilst working with concurrent.futures and ThreadPoolExecutors, my colleague and I have noted some undocumented behaviour. When adding a done_callback to a future that has already completed, we note that that callback is executed directly, outside of any try...except statement. However, both the docs and the _invoke_callbacks methods will wrap any done_callbacks in a try...except statement which logs the result and returns. Could we either update the documentation to mention that callback behaviour may raise if added to an already completed future, or simply add the same try...except wrapping that the _invoke_callback method already uses please? (Preferably the latter) The two pieces of the futures library I am referring to can be viewed here: _invoke_callbacks: https://github.com/python/cpython/blob/master/Lib/concurrent/futures/_base.py#L323 add_done_callback: https://github.com/python/cpython/blob/master/Lib/concurrent/futures/_base.py#L403 I would note, that the test code which covers this area of the code, doesn't currently exercise this particular condition. The closest test I could find is test_done_callback_already_failed, which checks that a callback can retrieve an exception from a future, but it does not validate what happens when a callback raises when the future it is attached to is already complete. Source: https://github.com/python/cpython/blob/c3d9508ff22ece9a96892b628dd5813e2fb0cd80/Lib/test/test_concurrent_futures.py#L1012 The other test closely related is test_done_callback_raises, however this doesn't check the behaviour of a callback when added to an already completed future. We should be able to simulate this by moving the f.set_result line to above the f.add_done_callback lines? 
Source: https://github.com/python/cpython/blob/c3d9508ff22ece9a96892b628dd5813e2fb0cd80/Lib/test/test_concurrent_futures.py#L990 ---------- components: Library (Lib) messages: 314151 nosy: samm priority: normal severity: normal status: open title: Adding a done callback to a concurrent.futures Future once it has already completed, may raise an exception, contrary to docs type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 15:20:56 2018 From: report at bugs.python.org (Ethan Welty) Date: Tue, 20 Mar 2018 19:20:56 +0000 Subject: [New-bugs-announce] [issue33111] Merely importing tkinter breaks parallel code (multiprocessing, sharedmem) Message-ID: <1521573656.32.0.467229070634.issue33111@psf.upfronthosting.co.za> New submission from Ethan Welty : Merely importing tkinter breaks the use of parallel code on my system (Mac OSX 10.11.6, tested on Python 2.7.13 / 2.7.14 / 3.5.0 / 3.6.4, all barebones distributions installed with pyenv). I've tested this with both multiprocessing and sharedmem (see minimal scripts below). The issue seems to apply only to functions that evoke multithreading within their respective package (e.g. `numpy.matmul()`, `cv2.SIFT.detectAndCompute()`). If I make the matrix in the scripts below much smaller (e.g. change `5000` to `5`), avoiding internal multithreading, the scripts work.

## with `multiprocessing`

```python
import numpy as np
import multiprocessing
import _tkinter

def parallel_matmul(x):
    R = np.random.randn(3, 3)
    return np.matmul(R, x)

pool = multiprocessing.Pool(4)
results = pool.map(parallel_matmul, [np.random.randn(3, 5000) for i in range(2)])
```

> *Code never exits and Python has to be force quit*

## with `sharedmem`

```python
import numpy as np
import sharedmem
import _tkinter

def parallel_matmul(x):
    R = np.random.randn(3, 3)
    return np.matmul(R, x)

with sharedmem.MapReduce() as pool:
    results = pool.map(parallel_matmul, [np.random.randn(3, 5000) for i in range(2)])
```

> sharedmem.sharedmem.SlaveException: slave process 1 killed by signal 11

---------- components: Tkinter messages: 314160 nosy: ezwelty priority: normal severity: normal status: open title: Merely importing tkinter breaks parallel code (multiprocessing, sharedmem) type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 20 16:09:08 2018 From: report at bugs.python.org (Martin) Date: Tue, 20 Mar 2018 20:09:08 +0000 Subject: [New-bugs-announce] [issue33112] SequenceMatcher bug Message-ID: <1521576548.54.0.467229070634.issue33112@psf.upfronthosting.co.za> New submission from Martin : difflib.SequenceMatcher fails to make a proper alignment between 2 sequences with only 3 single-letter changes. Its performance is completely off, with a similarity ratio of 0.16 instead of the more accurate 0.99.
Here is a snippet to replicate the failure:

>>> aa_ref = 'MTLFTTLLVLIFERLFKLGEHWQLDHRLEAFFRRVKHFSLGRTLGMTIIAMGVTFLLLRALQGVLFNVPTLLVWLLIGLLCIGAGKVRLHYHAYLTAASRNDSHARATMAGELTMIHGVPAGCDEREYLRELQNALLWINFRFYLAPLFWLIVGGTWGPVTLMGYAFLRAWQYWLARYQTPHHRLQSGIDAVLHVLDWVPVRLAGVVYALIGHGEKALPAWFASLGDFHTSQYQVLTRLAQFSLAREPHVDKVETPKAAVSMAKKTSFVVVVVIALLTIYGALV'
>>> aa_seq = 'MTLFTTLLVLIFERLFKLGEHWQLDHRLEAFFRRVKHFSLGRTLCMTIIAMGVTFLLLRALQGVLFNVPTLLVWLLIGLLCIGAGKVRLHYHAYLTAASRNDSHAHATMAGELTMIHGVPAGCDEREYLRELQNALLWINFRFYLAPLFWLIVGGTWGPVTLMGYAFLRAWQYWLARYQTPHHRLQSGIDAVLHALDWVPVRLAGVVYALIGHGEKALPAWFASLGDFHTSQYQVLTRLAQFSLAREPHVDKVETPKAAVSMAKKTSFVVVVVIALLTIYGALV'
>>> sum(a!=b for a, b in zip(aa_ref, aa_seq))
3
>>> match = SequenceMatcher(a=aa_ref, b=aa_seq)
>>> match.ratio()
0.1619718309859155
>>> match.get_opcodes()
[('equal', 0, 43, 0, 43), ('delete', 43, 79, 43, 43), ('equal', 79, 81, 43, 45), ('replace', 81, 122, 45, 80), ('equal', 122, 123, 80, 81), ('replace', 123, 284, 81, 284)]

---------- messages: 314163 nosy: mcft priority: normal severity: normal status: open title: SequenceMatcher bug type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 21 02:28:15 2018 From: report at bugs.python.org (guohui) Date: Wed, 21 Mar 2018 06:28:15 +0000 Subject: [New-bugs-announce] [issue33113] Query performance is very low and can even lead to denial of service Message-ID: <1521613695.07.0.467229070634.issue33113@psf.upfronthosting.co.za> New submission from guohui : I found an issue in the regex findall/search functions: when searching some content with some pattern, the call takes a very long time to return, so matching performance is very low. I think this issue could lead to very poor query performance, or an attacker may exploit it to cause a denial-of-service condition. system: python 2.7.14, regex (2018.2.21) poc:

import re
pat = r'^(\(?[\w\d\-\.\\]{3,}\|?){1,}[\w\d\-\.\\]{3,}\)?$'
# plaintext content
content = r'(ftp\x3a\x2f\x2f|http\x3a\x2f\x2f|https\x3a\x2f\x2f|c\x3a\x2f\x2f|d\x3a\x2f\x2f|e\x3a\x2f\x2f)a'
result = re.findall(pat, content)
print result

---------- components: Regular Expressions files: test_performance.py messages: 314187 nosy: ezio.melotti, ghi5107, mrabarnett priority: normal severity: normal status: open title: Query performance is very low and can even lead to denial of service type: security versions: Python 2.7 Added file: https://bugs.python.org/file47495/test_performance.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 21 10:52:57 2018 From: report at bugs.python.org (Scott Eilerman) Date: Wed, 21 Mar 2018 14:52:57 +0000 Subject: [New-bugs-announce] [issue33114] random.sample() behavior is unexpected/unclear from docs Message-ID: <1521643977.18.0.467229070634.issue33114@psf.upfronthosting.co.za> New submission from Scott Eilerman : I ran into a "bug" when using random.sample() in which I got some results I didn't expect. After digging a little more, this is either a side effect of the optimization that's made when k > 5, or I am using the function in a way that wasn't intended. If that's the case, I would recommend calling out this behavior in the documentation. The crux of the issue is that, for a given seed, random.sample(choices, k) gives the same sequence of results for k=1 to k=5, but that sequence can be different (for the same seed) at k=6 and higher.
From my initial testing this seems to only occur when 'choices' has an even length. Example code to reproduce this issue: import random seed = 199 choices = range(-10,12) for k in range(10): random.seed(seed) print(random.sample(choices,k)) Example code to look at many different occurrences of this issue: import random choices = range(-10,12) count = 0 for seed in range(200): for k in range(8): random.seed(seed) seq1 = random.sample(choices, k) random.seed(seed) seq2 = random.sample(choices, k+1) if seq1 != seq2[:-1]: print(seed) print(seq1) print(seq2) count += 1 print(f'Number of bugged results: {count}/200') To illustrate the odd/even issue, changing choices to range(-10,11) results in zero bugged results. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 314201 nosy: Scott Eilerman, docs at python priority: normal severity: normal status: open title: random.sample() behavior is unexpected/unclear from docs type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 21 11:48:58 2018 From: report at bugs.python.org (Marat Sharafutdinov) Date: Wed, 21 Mar 2018 15:48:58 +0000 Subject: [New-bugs-announce] [issue33115] Asyncio loop blocks with a lot of parallel tasks Message-ID: <1521647338.78.0.467229070634.issue33115@psf.upfronthosting.co.za> New submission from Marat Sharafutdinov : I want to schedule a lot of parallel tasks, but it becomes slow with loop blocking: ```python import asyncio task_count = 10000 async def main(): for x in range(1, task_count + 1): asyncio.ensure_future(f(x)) async def f(x): if x % 1000 == 0 or x == task_count: print(f'Run f({x})') await asyncio.sleep(1) loop.call_later(1, lambda: asyncio.ensure_future(f(x))) loop = asyncio.get_event_loop() loop.set_debug(True) loop.run_until_complete(main()) loop.run_forever() ``` Outputs: ``` Executing result=None created at /usr/lib/python3.6/asyncio/base_events.py:446> took 0.939 seconds ... Executing , None) at /usr/lib/python3.6/asyncio/futures.py:339 created at /usr/lib/python3.6/asyncio/tasks.py:480> took 0.113 seconds ... Executing wait_for=()] created at /usr/lib/python3.6/asyncio/base_events.py:275> created at test_aio.py:13> took 0.100 seconds ... ``` What can be another way to schedule a lot of parallel tasks? ---------- components: asyncio messages: 314207 nosy: asvetlov, decaz, yselivanov priority: normal severity: normal status: open title: Asyncio loop blocks with a lot of parallel tasks type: resource usage versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 21 16:47:50 2018 From: report at bugs.python.org (Eric V. Smith) Date: Wed, 21 Mar 2018 20:47:50 +0000 Subject: [New-bugs-announce] [issue33116] Field is not exposed in dataclasses.__all__ Message-ID: <1521665270.67.0.467229070634.issue33116@psf.upfronthosting.co.za> New submission from Eric V. Smith : 'Field' needs to be added to __all__. 
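Not part of the report, but a quick check of what the omission means in practice (the results shown are for builds affected by this issue):

```python
import dataclasses

print("Field" in dataclasses.__all__)   # False on affected builds -- star-imports miss it
print(hasattr(dataclasses, "Field"))    # True -- the class itself is there
```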
---------- assignee: eric.smith messages: 314222 nosy: eric.smith priority: normal severity: normal status: open title: Field is not exposed in dataclasses.__all__ type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 21 17:06:46 2018 From: report at bugs.python.org (Henrique Fingler) Date: Wed, 21 Mar 2018 21:06:46 +0000 Subject: [New-bugs-announce] [issue33117] asyncio example uses non-existing/documented method Message-ID: <1521666406.67.0.467229070634.issue33117@psf.upfronthosting.co.za> New submission from Henrique Fingler : In the documentation of asyncio.run_coroutine_threadsafe(coro, loop), in Section 19.5.3.6 (https://docs.python.org/3/library/asyncio-task.html#asyncio.run_coroutine_threadsafe), the example code does the following: future = asyncio.run_coroutine_threadsafe(coro, loop) # Wait for the result with an optional timeout argument assert future.result(timeout) == 3 The problem is that the result method of a future, according to the documentation doesn't take parameters. It's in Section 19.5.3.4 (https://docs.python.org/3.8/library/asyncio-task.html#asyncio.Future.done) result() Return the result this future represents. The same function is used in Section 18.5.9.3 (https://docs.python.org/3/library/asyncio-dev.html#concurrency-and-multithreading) This error is present in all Python 3.* docs. From the asyncio source code (https://github.com/python/cpython/blob/master/Lib/asyncio/futures.py), we have this in the Future class definition: class Future: """This class is *almost* compatible with concurrent.futures.Future. Differences: - This class is not thread-safe. - result() and exception() do not take a timeout argument and raise an exception when the future isn't done yet. .... So this example needs to be reworked, I'd do it if I knew more about asyncio. My ideas involve either using a add_done_callback with a flag or just busy waiting until future.done(). ---------- assignee: docs at python components: Documentation messages: 314223 nosy: Henrique Fingler, docs at python priority: normal severity: normal status: open title: asyncio example uses non-existing/documented method versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 02:40:11 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Thu, 22 Mar 2018 06:40:11 +0000 Subject: [New-bugs-announce] [issue33118] No clean way to get notified when a Transport's write buffer empties out Message-ID: <1521700811.42.0.467229070634.issue33118@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : There doesn't appear to be an ordained mechanism for getting notified when a Transport's (or WriteTransport's) write buffer drains to zero (i.e., all output data has been transferred to socket). I don't want to hijack `set_write_buffer_limits()` for this purpose, because that would preclude me from using it for its intended purpose. I see that transport in selector_events.py has a private method `_make_empty_waiter()`, which is along the lines of what I need, but it's private and is used by `BaseSelectorEventLoop._sendfile_native()`. Just like `BaseSelectorEventLoop._sendfile_native()`, my app needs equivalent functionality in order to be able to run the loop (`run_until_complete()`) until the transport's write buffer empties out. 
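Not from the report: lacking a public waiter, one workaround sketch is to poll the documented flow-control API; wait_until_drained() below is a made-up helper, not an asyncio method.

```python
import asyncio

async def wait_until_drained(transport, poll_interval=0.05):
    # crude stand-in for an "empty write buffer" notification
    while transport.get_write_buffer_size() > 0:
        await asyncio.sleep(poll_interval)
```

Polling is clearly less clean than the private _make_empty_waiter() mentioned above, which is the point of the request.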
---------- components: asyncio messages: 314236 nosy: asvetlov, vitaly.krug, yselivanov priority: normal severity: normal status: open title: No clean way to get notified when a Transport's write buffer empties out type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 05:21:13 2018 From: report at bugs.python.org (Jonathan Huot) Date: Thu, 22 Mar 2018 09:21:13 +0000 Subject: [New-bugs-announce] [issue33119] python sys.argv argument parsing not clear Message-ID: <1521710473.18.0.467229070634.issue33119@psf.upfronthosting.co.za> New submission from Jonathan Huot : Executing python modules with -m can lead to weird sys.argv parsing. "Argument parsing" section at https://docs.python.org/3.8/tutorial/interpreter.html#argument-passing mention : - When -m module is used, sys.argv[0] is set to the full name of the located module. The word "located" is used, but it doesn't mention anything when the module is not *yet* "located". For instance, let's see what is the sys.argv for each python files: $ cat mainmodule/__init__.py import sys; print("{}: {}".format(sys.argv, __file__)) $ cat mainmodule/submodule/__init__.py import sys; print("{}: {}".format(sys.argv, __file__)) $ cat mainmodule/submodule/foobar.py import sys; print("{}: {}".format(sys.argv, __file__)) Then we call "foobar" with -m: $ python -m mainmodule.submodule.foobar -o -b ['-m', '-o', 'b']: (..)/mainmodule/__init__.py ['-m', '-o', 'b']: (..)/mainmodule/submodule/__init__.py ['(..)/mainmodule/submodule/foobar.py', '-o', 'b']: (..)/mainmodule/submodule/foobar.py $ We notice that only "-m" is in sys.argv before we found "foobar". This can lead to a lot of troubles when we have meaningful processing in __init__.py which rely on sys.argv to initialize stuff. IMHO, it either should be the sys.argv intact ['-m', 'mainmodule.submodule.foobar', '-o', '-b'] or empty ['', '-o', '-b'] or only the latest ['-o', '-b'], but it should not be ['-m', '-o', '-b'] which is very confusing. ---------- assignee: docs at python components: Documentation messages: 314239 nosy: Jonathan Huot, docs at python priority: normal severity: normal status: open title: python sys.argv argument parsing not clear type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 09:36:41 2018 From: report at bugs.python.org (Peter) Date: Thu, 22 Mar 2018 13:36:41 +0000 Subject: [New-bugs-announce] [issue33120] infinite loop in inspect.unwrap(unittest.mock.call) Message-ID: <1521725801.92.0.467229070634.issue33120@psf.upfronthosting.co.za> New submission from Peter : The following module will eat all available RAM if executed: import inspect import unittest.mock print(inspect.unwrap(unittest.mock.call)) inspect.unwrap has loop protection against functions that wrap themselves, but unittest.mock.call creates new object on demand. 
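Not part of the report, but a short demonstration (as of the versions reported here) of why inspect.unwrap()'s cycle protection never fires: each attribute access on unittest.mock.call builds a brand-new _Call object, so the id()-based memo never sees a repeat.

```python
from unittest import mock

a = mock.call.__wrapped__
b = mock.call.__wrapped__
print(a is b)                      # False -- a fresh _Call each time
print(hasattr(a, "__wrapped__"))   # True -- so unwrap() can keep following forever
```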
---------- components: Library (Lib) messages: 314254 nosy: peterdemin priority: normal severity: normal status: open title: infinite loop in inspect.unwrap(unittest.mock.call) type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 10:15:08 2018 From: report at bugs.python.org (joders) Date: Thu, 22 Mar 2018 14:15:08 +0000 Subject: [New-bugs-announce] [issue33121] recv returning 0 on closed connection not documented Message-ID: <1521728108.91.0.467229070634.issue33121@psf.upfronthosting.co.za> New submission from joders : The "Linux Programmer's Manual" states: When a stream socket peer has performed an orderly shutdown, the return value will be 0 (the traditional "end-of-file" return). I find that information pretty important, which is why I am asking if you might want to add it to the Python documentation as well. It would have prevented a bug in my code. ---------- assignee: docs at python components: Documentation messages: 314260 nosy: docs at python, joders priority: normal severity: normal status: open title: recv returning 0 on closed connection not documented type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 10:29:38 2018 From: report at bugs.python.org (=?utf-8?q?J=C3=BCrgen?=) Date: Thu, 22 Mar 2018 14:29:38 +0000 Subject: [New-bugs-announce] [issue33122] ftplib: FTP_TLS seems to have problems with sites that close the encrypted channel themselfes Message-ID: <1521728978.52.0.467229070634.issue33122@psf.upfronthosting.co.za> New submission from Jürgen : Hi, I'm not quite sure if you would actually call this a bug, but it is quite annoying at least ;o) I use ftplib.FTP_TLS to connect to a z/OS FTP server. With a minor change it works very well (happy to have found this library). The problem I have is that, without any change, an exception is raised after every single command I invoke, even though the server sends back an OK message. The exception is an OSError which is raised while executing conn.unwrap(). It seems the connection is already closed when this is called, and thus an exception is raised. But handling this exception outside the FTP_TLS class makes no sense, because then every command would raise an exception and the "good" exceptions could not be distinguished from the ones that are really serious so easily anymore (I mean: if I get an exception that a connection could not be closed because someone else closed it before, that's not very serious, is it?). Suggestion to solve this (small solution): allow the programmer to decide what to do by creating subclasses. That is, factor out the unwrap logic into a separate method or function, so at least users of the class can override the behavior without having to rebuild the whole logic of the affected methods.
In my quick solution I created a new method in class FTP: def __handleAutoCloseSSL__(self, conn): if self.autoCloseModeSSL == 'NONE' or self.autoCloseModeSSL is None or _SSLSocket is None or not isinstance(conn, _SSLSocket): # do nothing pass elif self.autoCloseModeSSL in ('SAFE', 'HIDE'): try: conn.unwrap() except OSError as ex: if self.autoCloseModeSSL != 'HIDE': print('Caught exception %s while calling conn.unwrap()' % str(ex)) else: # Standard mode (usally self.autoCloseModeSSL =='STANDARD' but anything else is accepted as well) # the original code was: #if _SSLSocket is not None and isinstance(conn, _SSLSocket): # conn.unwrap() conn.unwrap() And the class variable: autoCloseModeSSL = 'STANDARD' Then I called it from methods (instead of doing conn.unwrap() there directly): retbinary retlines storbinary storlines Ok, maybe not that sexy, but it works :o) And if you don't like the hack with instance variable autoCloseModeSSL, you could just transfer the original conn.unwrap() in an extra method which could then be overwritten by programmers in subclasses. This would already help me very much, because I know that patching a library is not a good idea. Even more if it is a communication library that might be updated from time to time. ---------- components: Library (Lib) messages: 314261 nosy: jottbe priority: normal severity: normal status: open title: ftplib: FTP_TLS seems to have problems with sites that close the encrypted channel themselfes type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 13:56:36 2018 From: report at bugs.python.org (rbu) Date: Thu, 22 Mar 2018 17:56:36 +0000 Subject: [New-bugs-announce] [issue33123] Path.unlink should have a missing_ok parameter Message-ID: <1521741396.21.0.467229070634.issue33123@psf.upfronthosting.co.za> New submission from rbu : Similarly to how several pathlib file creation functions have an "exists_ok" parameter, we should introduce "missing_ok" that makes removal functions not raise an exception when a file or directory is already absent. IMHO, this should cover Path.unlink and Path.rmdir. Note, Path.resolve() has a "strict" parameter since 3.6 that does the same thing. Naming this of this new parameter tries to be consistent with the "exists_ok" parameter as that is more explicit about what it does (as opposed to "strict"). ---------- components: Library (Lib) messages: 314277 nosy: rbu priority: normal severity: normal status: open title: Path.unlink should have a missing_ok parameter type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 18:17:11 2018 From: report at bugs.python.org (Neil Schemenauer) Date: Thu, 22 Mar 2018 22:17:11 +0000 Subject: [New-bugs-announce] [issue33124] Lazy execution of module bytecode Message-ID: <1521757031.84.0.467229070634.issue33124@psf.upfronthosting.co.za> New submission from Neil Schemenauer : This is an experimental patch that implements lazy execution of top-level definitions in modules (functions, classes, imports, global constants). See Tools/lazy_compile/README.txt for details. 
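The patch itself works at the compiler/bytecode level, but the general idea of deferring module-level work until first use can be illustrated in pure Python with PEP 562's module __getattr__ (Python 3.7+). This is only an analogue of the goal, not what the patch does; the module and names below are made up for the sketch.

```python
# lazymod.py -- defers an expensive definition until first attribute access
def _build_table():
    print("building the table now")
    return {i: i * i for i in range(10)}

def __getattr__(name):               # PEP 562, module-level __getattr__
    if name == "TABLE":
        value = _build_table()
        globals()["TABLE"] = value   # cache so this only runs once
        return value
    raise AttributeError(name)
```

With this, "import lazymod" stays cheap; the work happens only when lazymod.TABLE is first touched.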
---------- components: Interpreter Core messages: 314294 nosy: nascheme priority: low severity: normal stage: patch review status: open title: Lazy execution of module bytecode type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 22 21:03:31 2018 From: report at bugs.python.org (Steven Noonan) Date: Fri, 23 Mar 2018 01:03:31 +0000 Subject: [New-bugs-announce] [issue33125] Windows 10 ARM64 platform support Message-ID: <1521767011.97.0.467229070634.issue33125@psf.upfronthosting.co.za> New submission from Steven Noonan : The Windows 10 ARM64 release is out along with a bunch of ARM64 devices. This version of Windows has full support for building native Win32 applications (this isn't just some rehash of Windows RT). It also can run x86 (but not x86_64) apps under a transparent emulation layer. I would like to see a native build of Python on Windows 10 ARM64. I did some very basic work to get it compiling (add 10.0.16299.0 as DefaultWindowsSDKVersion, add WindowsSDKDesktopARM64Support property). But there's still a lot missing: ssl, tk, and ctypes don't build. ssl/ctypes have some assembly that needs writing/porting. tk has some kind of build failure with the newer Windows SDK: https://core.tcl.tk/tk/tktview?name=3d34589aa0 ---------- components: Windows messages: 314295 nosy: Steven Noonan, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows 10 ARM64 platform support type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 23 12:49:06 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Fri, 23 Mar 2018 16:49:06 +0000 Subject: [New-bugs-announce] [issue33126] Some C buffer protocol APIs not documented Message-ID: <1521823746.22.0.467229070634.issue33126@psf.upfronthosting.co.za> New submission from Antoine Pitrou : The following C functions are available for C code but not documented: - PyBuffer_ToContiguous() - PyBuffer_FromContiguous() - PyObject_CopyData() I am not sure how to describe those functions myself. ---------- assignee: docs at python components: Documentation messages: 314315 nosy: docs at python, pitrou, skrah priority: normal severity: normal stage: needs patch status: open title: Some C buffer protocol APIs not documented type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 23 15:01:38 2018 From: report at bugs.python.org (Charles) Date: Fri, 23 Mar 2018 19:01:38 +0000 Subject: [New-bugs-announce] [issue33127] Python 2.7.14 won't build ssl module with Libressl 2.7.0 Message-ID: <1521831698.99.0.467229070634.issue33127@psf.upfronthosting.co.za> New submission from Charles : On macOS I could build python 2.7.14 with libressl 2.6.4 without any problems. If I try to build that same version of python with libressl 2.7.0, I get the failure pasted in below. 
/Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:141:12: error: static declaration of 'X509_NAME_ENTRY_set' follows non-static declaration static int X509_NAME_ENTRY_set(const X509_NAME_ENTRY *ne) ^ /usr/local/ssl/include/openssl/x509.h:1139:6: note: previous declaration is here int X509_NAME_ENTRY_set(const X509_NAME_ENTRY *ne); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:153:25: error: static declaration of 'SSL_CTX_get_default_passwd_cb' follows non-static declaration static pem_password_cb *SSL_CTX_get_default_passwd_cb(SSL_CTX *ctx) ^ /usr/local/ssl/include/openssl/ssl.h:1368:18: note: previous declaration is here pem_password_cb *SSL_CTX_get_default_passwd_cb(SSL_CTX *ctx); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:158:14: error: static declaration of 'SSL_CTX_get_default_passwd_cb_userdata' follows non-static declaration static void *SSL_CTX_get_default_passwd_cb_userdata(SSL_CTX *ctx) ^ /usr/local/ssl/include/openssl/ssl.h:1370:7: note: previous declaration is here void *SSL_CTX_get_default_passwd_cb_userdata(SSL_CTX *ctx); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:163:12: error: static declaration of 'X509_OBJECT_get_type' follows non-static declaration static int X509_OBJECT_get_type(X509_OBJECT *x) ^ /usr/local/ssl/include/openssl/x509_vfy.h:428:5: note: previous declaration is here int X509_OBJECT_get_type(const X509_OBJECT *a); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:168:14: error: static declaration of 'X509_OBJECT_get0_X509' follows non-static declaration static X509 *X509_OBJECT_get0_X509(X509_OBJECT *x) ^ /usr/local/ssl/include/openssl/x509_vfy.h:430:7: note: previous declaration is here X509 *X509_OBJECT_get0_X509(const X509_OBJECT *xo); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:173:31: error: static declaration of 'X509_STORE_get0_objects' follows non-static declaration static STACK_OF(X509_OBJECT) *X509_STORE_get0_objects(X509_STORE *store) { ^ /usr/local/ssl/include/openssl/x509_vfy.h:438:24: note: previous declaration is here STACK_OF(X509_OBJECT) *X509_STORE_get0_objects(X509_STORE *xs); ^ /Users/chdiza/.tmp/tmpdir/python27-20180323-74284-f2auy2/Python-2.7.14/Modules/_ssl.c:177:27: error: static declaration of 'X509_STORE_get0_param' follows non-static declaration static X509_VERIFY_PARAM *X509_STORE_get0_param(X509_STORE *store) ^ /usr/local/ssl/include/openssl/x509_vfy.h:450:20: note: previous declaration is here X509_VERIFY_PARAM *X509_STORE_get0_param(X509_STORE *ctx); ^ 7 errors generated. 
---------- assignee: christian.heimes components: SSL messages: 314320 nosy: chdiza, christian.heimes priority: normal severity: normal status: open title: Python 2.7.14 won't build ssl module with Libressl 2.7.0 type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 23 19:30:41 2018 From: report at bugs.python.org (Hartmut Goebel) Date: Fri, 23 Mar 2018 23:30:41 +0000 Subject: [New-bugs-announce] [issue33128] PathFinder is twice on sys.meta_path Message-ID: <1521847841.75.0.467229070634.issue33128@psf.upfronthosting.co.za> New submission from Hartmut Goebel : As of Python 3.7.0b2 _frozen_importlib_external.PathFinder exists twice on sys.meta_path, and it is the same object:

$ python -S
Python 3.7.0b2 (default, Mar 22 2018, 20:09:00) [GCC 5.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.meta_path)
[<class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>, <class '_frozen_importlib_external.PathFinder'>]
>>> print([id(p) for p in sys.meta_path])
[24427944, 24430216, 24517416, 24517416]
>>>

---------- components: Interpreter Core messages: 314340 nosy: htgoebel priority: normal severity: normal status: open title: PathFinder is twice on sys.meta_path type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 23 19:33:17 2018 From: report at bugs.python.org (Alan Du) Date: Fri, 23 Mar 2018 23:33:17 +0000 Subject: [New-bugs-announce] [issue33129] Add kwarg-only option to dataclass Message-ID: <1521847997.9.0.467229070634.issue33129@psf.upfronthosting.co.za> New submission from Alan Du : I'd like to request a new option to the `dataclasses.dataclass` decorator to make the `__init__` keyword-only. The two use-cases I have in mind are: (1) Using a dataclass as a big bag of config. In this scenario, forcing the user to specify the keywords is a lot nicer than passing in a dozen positional parameters. (2) Having kwarg-only parameters means that inheritance and default parameters play nicely with each other again instead of raising a TypeError. ---------- components: Library (Lib) messages: 314341 nosy: alan_du, eric.smith priority: normal severity: normal status: open title: Add kwarg-only option to dataclass type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 23 20:05:44 2018 From: report at bugs.python.org (Vince Reuter) Date: Sat, 24 Mar 2018 00:05:44 +0000 Subject: [New-bugs-announce] [issue33130] functools.reduce signature/docstring discordance Message-ID: <1521849944.24.0.467229070634.issue33130@psf.upfronthosting.co.za> New submission from Vince Reuter : The signature for functools.reduce correctly refers to the collection parameter as an iterable, but the docstring refers to it as "sequence," which the input need not be and does not match the parameter name despite being italicized.
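For illustration, a minimal check of the point above: reduce() accepts any iterable, not only a sequence, so a generator works (sketch assuming nothing beyond the standard library):

```
from functools import reduce

# A generator is an iterable but not a sequence, and reduce() is happy with it.
total = reduce(lambda acc, n: acc + n, (n * n for n in range(5)), 0)
print(total)  # 30
```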
---------- assignee: docs at python components: Documentation messages: 314344 nosy: docs at python, vreuter priority: normal pull_requests: 5951 severity: normal status: open title: functools.reduce signature/docstring discordance type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 24 03:27:41 2018 From: report at bugs.python.org (Nick Coghlan) Date: Sat, 24 Mar 2018 07:27:41 +0000 Subject: [New-bugs-announce] [issue33131] Upgrade to pip 10 for Python 3.7 Message-ID: <1521876461.03.0.467229070634.issue33131@psf.upfronthosting.co.za> New submission from Nick Coghlan : Paul brought up recently [1] that with pip 10.0.0 due for release next month [2], we'd really prefer to ship that in Python 3.7.0 (such that 3.7 launches with PEP 518/517 pyproject.toml support), rather than shipping with 9.0.x and then upgrading to 10.0.0 in Python 3.7.1. The timing is such that 10.0.0 won't quite be ready for 3.7.0b3, but it should be released before 3.7.0b4 at the end of April. [1] https://github.com/pypa/packaging-problems/issues/127#issuecomment-374183609 [2] https://mail.python.org/pipermail/distutils-sig/2018-March/032047.html ---------- messages: 314360 nosy: Marcus.Smith, dstufft, ncoghlan, ned.deily, paul.moore priority: deferred blocker severity: normal stage: needs patch status: open title: Upgrade to pip 10 for Python 3.7 type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 24 08:32:25 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 24 Mar 2018 12:32:25 +0000 Subject: [New-bugs-announce] [issue33132] Possible refcount issues in the compiler Message-ID: <1521894745.19.0.467229070634.issue33132@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : There are several possible reference leaks in compiler.c. When implicit (in VISIT* or ADDOP_* macros) "return" is occurred between creating a new object and ADDOP_N, there is a possible reference leaks. ADDOP_O followed by Py_DECREF contains a possible reference leaks. And in compiler_from_import() names can be decrefed twice. The following PR fixes these issues. ---------- assignee: serhiy.storchaka components: Interpreter Core messages: 314365 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Possible refcount issues in the compiler type: resource usage versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 24 14:16:02 2018 From: report at bugs.python.org (Ivan Levkivskyi) Date: Sat, 24 Mar 2018 18:16:02 +0000 Subject: [New-bugs-announce] [issue33133] Don't return implicit optional types by get_type_hints Message-ID: <1521915362.58.0.467229070634.issue33133@psf.upfronthosting.co.za> New submission from Ivan Levkivskyi : Currently this code def f(x: int = None): pass get_type_hints(f) returns {'x': Optional[int]}. I propose to abandon this behaviour. Although there is not yet a definitive decision about this aspect of PEP 484, see https://github.com/python/typing/issues/275, I think at least at runtime we should not do this. 
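For reference, a small sketch of the behaviour under discussion (this is the current behaviour that the proposal would drop, as described in the report above):

```
from typing import get_type_hints

def f(x: int = None):
    pass

# Per the report, this currently returns {'x': Optional[int]} because the
# None default triggers the implicit-Optional rule; the proposal is to
# return plain int for x instead.
print(get_type_hints(f))
```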
---------- components: Library (Lib) messages: 314378 nosy: gvanrossum, levkivskyi priority: normal severity: normal status: open title: Don't return implicit optional types by get_type_hints type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 24 17:11:21 2018 From: report at bugs.python.org (Eric V. Smith) Date: Sat, 24 Mar 2018 21:11:21 +0000 Subject: [New-bugs-announce] [issue33134] dataclasses: use function dispatch instead of multiple tests for adding __hash__ Message-ID: <1521925881.85.0.467229070634.issue33134@psf.upfronthosting.co.za> New submission from Eric V. Smith : There's already a table lookup for what action to take when adding __hash__. Change it to a function dispatch table, instead of using strings and testing them. ---------- assignee: eric.smith components: Library (Lib) messages: 314385 nosy: eric.smith priority: normal severity: normal status: open title: dataclasses: use function dispatch instead of multiple tests for adding __hash__ versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 02:10:23 2018 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 25 Mar 2018 06:10:23 +0000 Subject: [New-bugs-announce] [issue33135] Define field prefixes for the various config structs Message-ID: <1521958223.34.0.467229070634.issue33135@psf.upfronthosting.co.za> New submission from Nick Coghlan : While working on https://bugs.python.org/issue33042, I found it hard to keep track of which kind of config struct a particular piece of code was referencing. As a particularly relevant example, we currently have 3 different "warnoptions" fields: the private-to-main one for reading the command line settings, the "wchar_t *" list in the core config, and the "PyObject *" list object in the main interpreter config (which is also the one aliased as sys.warnoptions). What do you think of adopting a convention where: * the command line fields all gain a "cmd_" prefix * the core config fields all gain a "c_" prefix * the interpreter config fields all gain a "py_" prefix We'd then have "cmd_warnoptions", "c_warnoptions", and "py_warnoptions" as the field names, and it would be more self-evident which layer we were working at in any particular piece of code. ---------- messages: 314398 nosy: eric.snow, ncoghlan, vstinner priority: normal severity: normal stage: needs patch status: open title: Define field prefixes for the various config structs type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 05:19:22 2018 From: report at bugs.python.org (Christian Heimes) Date: Sun, 25 Mar 2018 09:19:22 +0000 Subject: [New-bugs-announce] [issue33136] Harden ssl module against CVE-2018-8970 Message-ID: <1521969562.74.0.467229070634.issue33136@psf.upfronthosting.co.za> New submission from Christian Heimes : Since 3.7, the ssl module uses X509_VERIFY_PARAM_set1_host() to put the burden of hostname matching on OpenSSL. More specific, it calls X509_VERIFY_PARAM_set1_host(param, server_hostname, 0). The namelen=0 parameter means that OpenSSL handles server_hostname as a NUL-terminated C string. LibreSSL 2.7.0 added X509_VERIFY_PARAM_set1_host(), but took the implementation from BoringSSL instead of OpenSSL. 
The BoringSSL implementation doesn't support namelen=0. X509_VERIFY_PARAM_set1_host(param, server_hostname, 0) returns success but doesn't configure the SSL connection for hostname verification. As a result, LibreSSL 2.7.0 doesn't perform any hostname matching. All trusted certificates are accepted for just any arbitrary hostname. This misbehavior left Python 3.7 beta open to a man-in-the-middle attack. LibreSSL 2.7.1 has fixed the issue. To harden the ssl module against this, I'm also changing our implementation to use strlen() instead of 0. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8970 https://bugs.chromium.org/p/boringssl/issues/detail?id=30 https://bugs.chromium.org/p/chromium/issues/detail?id=824799 (restricted for now) ---------- assignee: christian.heimes components: SSL messages: 314400 nosy: christian.heimes priority: high severity: normal stage: needs patch status: open title: Harden ssl module against CVE-2018-8970 type: security versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 08:01:28 2018 From: report at bugs.python.org (Xavier de Gaye) Date: Sun, 25 Mar 2018 12:01:28 +0000 Subject: [New-bugs-announce] [issue33137] line traces may be missed on backward jumps when instrumented with dtrace Message-ID: <1521979288.09.0.467229070634.issue33137@psf.upfronthosting.co.za> New submission from Xavier de Gaye : In _PyEval_EvalFrameDefault(), the call to maybe_dtrace_line() sets frame->f_lasti to instr_prev and for that reason, in the ensuing call to maybe_call_line_trace(), the call_trace() function with PyTrace_LINE is not called as it should be when a backward jump is not to the first instruction of the line. ---------- components: Interpreter Core messages: 314408 nosy: lukasz.langa, serhiy.storchaka, xdegaye priority: normal severity: normal stage: needs patch status: open title: line traces may be missed on backward jumps when instrumented with dtrace type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 12:21:38 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 25 Mar 2018 16:21:38 +0000 Subject: [New-bugs-announce] [issue33138] Improve standard error for uncopyable types Message-ID: <1521994898.68.0.467229070634.issue33138@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Currently most extension types are neither pickleable nor copyable. The default error message is "can't pickle XXX objects". This is confusing in the copying case because not everyone knows that copying falls back to the pickle protocol (see for example issue33023). The proposed PR changes the default error message to the more neutral "cannot serialize 'XXX' object". This or similar error messages are already used in some classes (files, sockets, compressors/decompressors). It also removes __getstate__ methods raising an error from non-pickleable extension types. They were added when extension types were pickleable by default (fixed in issue22995). Now they are not needed.
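As a concrete illustration of the confusion described above, copying an uncopyable object today surfaces a pickle-flavoured error even though the caller never asked for pickling (the exact wording varies by version; _thread.lock is just a convenient uncopyable example):

```
import copy
import threading

lock = threading.Lock()
try:
    copy.copy(lock)  # copy falls back to the pickle protocol internally
except TypeError as exc:
    print(exc)  # e.g. "can't pickle _thread.lock objects" on 3.6
```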
---------- components: Interpreter Core messages: 314418 nosy: alexandre.vassalotti, christian.heimes, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Improve standard error for uncopyable types type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 17:10:27 2018 From: report at bugs.python.org (Peter Rounce) Date: Sun, 25 Mar 2018 21:10:27 +0000 Subject: [New-bugs-announce] [issue33139] Bdb doesn't find instruction in linecache after pdb.set_trace() following os.chdir("/tmp") Message-ID: <1522012227.23.0.467229070634.issue33139@psf.upfronthosting.co.za> New submission from Peter Rounce : In my view there is a fault in python3 pdb in that if you use pdb.set_trace() after using os.chdir() to change the cwd to a directory that does not contain the source code being executed, then there is no instruction output on next or step. This is shown in the following session, where I have added print lines to the bdb.py file to show the errors. [The test program is attached.]

python3 testpdb.py
# To output a line of code the canonic function in bdb.py is called
# to build an absolute path to the source code being executed.
PRINT --> canonic line 32 - canonic = None
PRINT --> canonic line 36 - canonic_abs = /home/pythontest/Software/python/3/testpdb.py
# the following is printed after the call to linecache and shows
# the file accessed, the line number in the code and
# the instruction string returned
PRINT --> filename: /home/pythontest/Software/python/3/testpdb.py - lineno: 11, line: e=d+5
> /home/pythontest/Software/python/3/testpdb.py(11)<module>()
-> e=d+5
(Pdb) c
# The program is continued and os.chdir("/tmp") is executed.
# Another pdb.set_trace() has been executed, which creates a new Pdb
# class instance, and thus a new Bdb instance, where Bdb.fncache
# used by the canonic function is {}.
# The canonic function is passed just the filename 'testpdb.py' and
# canonic uses os.path.abspath to get a full path. Of course this gives
# the wrong path to testpdb.py since it just prepends the current
# cwd, thus:-
PRINT --> canonic line 32 - canonic = None
PRINT --> canonic line 36 - canonic_abs = /tmp/testpdb.py
# the call to linecache in format_stack_entry (line 411) doesn't
# find the source code so returns an empty string.
PRINT --> filename: /tmp/testpdb.py - lineno: 15, line:
> /tmp/testpdb.py(15)<module>()
(Pdb) c

Why canonic is using os.path.abspath is not clear to me: it seems to be a mistake, but it is surprising that it has not been found, if this is the case. It is interesting to note that linecache itself, when reading from a file with just a filename (and not an absolute path), does not try to guess the path with os.path.abspath but looks down the python 'sys.path' to find the full path to the file. This would look like a reasonable solution, but it might be better to extend the existing code by checking the full path from the 'os.path.abspath' call with an os.path.exists call and, if this fails, doing a search down 'sys.path'. The modified code in bdb.py for this solution is:-

def getfullpath(self, basename):
    for dirname in sys.path:
        try:
            fullname = os.path.join(dirname, basename)
        except (TypeError, AttributeError):
            # Not sufficiently string-like to do anything useful with.
            continue
        try:
            stat = os.stat(fullname)
            break
        except OSError:
            pass
    else:
        return []
    return fullname

def canonic(self, filename):
    if filename == "<" + filename[1:-1] + ">":
        return filename
    canonic = self.fncache.get(filename)
    if not canonic:
        canonicabs = canonic = os.path.abspath(filename)
        canonic = os.path.normcase(canonic)
        # if path does not exist look down sys.path
        if not os.path.exists(canonic):
            canonic = self.getfullpath(filename)
            canonic = os.path.normcase(canonic)
        self.fncache[filename] = canonic
    return canonic

---------- components: Library (Lib) files: testpdb.py messages: 314435 nosy: prounce priority: normal severity: normal status: open title: Bdb doesn't find instruction in linecache after pdb.set_trace() following os.chdir("/tmp") type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file47498/testpdb.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 18:55:36 2018 From: report at bugs.python.org (Eryk Sun) Date: Sun, 25 Mar 2018 22:55:36 +0000 Subject: [New-bugs-announce] [issue33140] shutil.chown on Windows Message-ID: <1522018536.28.0.467229070634.issue33140@psf.upfronthosting.co.za> New submission from Eryk Sun : shutil.chown is defined on Windows even though it's only written for Unix and only documented as available on Unix. Defining it should be skipped on Windows. Possibly in 3.8 shutil.chown could be implemented on Windows by calling a new os.set_owner function that supports user/group names and SID strings. It could copy how icacls.exe allows using SDDL aliases and string SIDs that begin with an asterisk (e.g. "*BA" and "*S-1-5-32-544" for BUILTIN\Administrators). If the string starts with an asterisk, get the SID via ConvertStringSidToSid. Otherwise get the SID via LookupAccountName. Then to modify the file's user and group, try to enable SeRestorePrivilege for the current thread and call Set[Named]SecurityInfo. ---------- components: IO, Library (Lib), Windows messages: 314436 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: shutil.chown on Windows type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 25 22:25:49 2018 From: report at bugs.python.org (Rick Teachey) Date: Mon, 26 Mar 2018 02:25:49 +0000 Subject: [New-bugs-announce] [issue33141] descriptor __set_name__ feature broken for dataclass descriptor fields Message-ID: <1522031149.18.0.467229070634.issue33141@psf.upfronthosting.co.za> New submission from Rick Teachey : Summary: The descriptor `__set_name__` functionality (introduced in Python 3.6) does not seem to be working correctly for `dataclass.Field` objects with a default pointing to a descriptor. I have attached a file demonstrating the trouble. Details: If I set a `dataclass` class object field to a `dataclass.field` with a descriptor object for the `default` argument, the descriptor's `__set_name__` method is not called during initialization. This is unexpected because descriptors themselves seem to work pretty much flawlessly, otherwise. (Bravo on that by the way! Working descriptors isn't mentioned at all in the PEP as a feature but I was very pleased to see them working!!) System details: Python 3.7b02 Windows 10 PyCharm Community Edition btw this is my first ever Python bug report; I hope I did a good job.
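A reduced sketch of the scenario in the report (the attached file is not reproduced here; the class and field names below are illustrative only, Python 3.7 dataclasses assumed):

```
from dataclasses import dataclass, field

class Descr:
    def __set_name__(self, owner, name):
        # Expected to run when the descriptor becomes a class attribute;
        # the report is that it is never invoked for a dataclass field default.
        print('__set_name__ called for', name)

    def __get__(self, obj, objtype=None):
        return 42

@dataclass
class Spam:
    x: int = field(default=Descr())
```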
---------- files: broken__set_name__.py messages: 314438 nosy: Ricyteach, eric.smith priority: normal severity: normal status: open title: descriptor __set_name__ feature broken for dataclass descriptor fields type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file47499/broken__set_name__.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 04:52:29 2018 From: report at bugs.python.org (Andreas Jung) Date: Mon, 26 Mar 2018 08:52:29 +0000 Subject: [New-bugs-announce] [issue33142] Fatal Python error: Py_Initialize: Unable to get the locale encoding Message-ID: <1522054349.44.0.467229070634.issue33142@psf.upfronthosting.co.za> New submission from Andreas Jung : Unable to build Python 3.6.4 from sources on a fresh Debian system: @plone /tmp/Python-3.6.4 $ cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -I. -I./Include -DPy_BUILD_CORE \ -DGITVERSION="\"`LC_ALL=C `\"" \ -DGITTAG="\"`LC_ALL=C `\"" \ -DGITBRANCH="\"`LC_ALL=C `\"" \ -o Modules/getbuildinfo.o ./Modules/getbuildinfo.c rm -f libpython3.6m.a ar rc libpython3.6m.a Modules/getbuildinfo.o ar rc libpython3.6m.a Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o Parser/myreadline.o Parser/parsetok.o Parser/tokenizer.o ar rc libpython3.6m.a Objects/abstract.o Objects/accu.o Objects/boolobject.o Objects/bytes_methods.o Objects/bytearrayobject.o Objects/bytesobject.o Objects/cellobject.o Objects/classobject.o Objects/codeobject.o Objects/complexobject.o Objects/descrobject.o Objects/enumobject.o Objects/exceptions.o Objects/genobject.o Objects/fileobject.o Objects/floatobject.o Objects/frameobject.o Objects/funcobject.o Objects/iterobject.o Objects/listobject.o Objects/longobject.o Objects/dictobject.o Objects/odictobject.o Objects/memoryobject.o Objects/methodobject.o Objects/moduleobject.o Objects/namespaceobject.o Objects/object.o Objects/obmalloc.o Objects/capsule.o Objects/rangeobject.o Objects/setobject.o Objects/sliceobject.o Objects/structseq.o Objects/tupleobject.o Objects/typeobject.o Objects/unicodeobject.o Objects/unicodectype.o Objects/weakrefobject.o ar rc libpython3.6m.a Python/_warnings.o Python/Python-ast.o Python/asdl.o Python/ast.o Python/bltinmodule.o Python/ceval.o Python/compile.o Python/codecs.o Python/dynamic_annotations.o Python/errors.o Python/frozenmain.o Python/future.o Python/getargs.o Python/getcompiler.o Python/getcopyright.o Python/getplatform.o Python/getversion.o Python/graminit.o Python/import.o Python/importdl.o Python/marshal.o Python/modsupport.o Python/mystrtoul.o Python/mysnprintf.o Python/peephole.o Python/pyarena.o Python/pyctype.o Python/pyfpe.o Python/pyhash.o Python/pylifecycle.o Python/pymath.o Python/pystate.o Python/pythonrun.o Python/pytime.o Python/random.o Python/structmember.o Python/symtable.o Python/sysmodule.o Python/traceback.o Python/getopt.o Python/pystrcmp.o Python/pystrtod.o Python/pystrhex.o Python/dtoa.o Python/formatter_unicode.o Python/fileutils.o 
Python/dynload_shlib.o Python/thread.o Python/frozen.o ar rc libpython3.6m.a Modules/config.o Modules/getpath.o Modules/main.o Modules/gcmodule.o ar rc libpython3.6m.a Modules/_threadmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/_operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/atexitmodule.o Modules/signalmodule.o Modules/_stat.o Modules/timemodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/faulthandler.o Modules/_tracemalloc.o Modules/hashtable.o Modules/symtablemodule.o Modules/xxsubtype.o ranlib libpython3.6m.a gcc -pthread -Xlinker -export-dynamic -o python Programs/python.o libpython3.6m.a -lpthread -ldl -lutil -lm gcc -pthread -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.6m.a -lpthread -ldl -lutil -lm ./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi Could not find platform independent libraries Could not find platform dependent libraries Consider setting $PYTHONHOME to [:] Fatal Python error: Py_Initialize: Unable to get the locale encoding ModuleNotFoundError: No module named 'encodings' Current thread 0x00007f1757349440 (most recent call first): Aborted generate-posix-vars failed Makefile:575: recipe for target 'pybuilddir.txt' failed make: *** [pybuilddir.txt] Error 1 ---------- messages: 314442 nosy: ajung priority: normal severity: normal status: open title: Fatal Python error: Py_Initialize: Unable to get the locale encoding type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 05:26:20 2018 From: report at bugs.python.org (Anders Rundgren) Date: Mon, 26 Mar 2018 09:26:20 +0000 Subject: [New-bugs-announce] [issue33143] encode UTF-16 generates unexpected results Message-ID: <1522056380.43.0.467229070634.issue33143@psf.upfronthosting.co.za> New submission from Anders Rundgren : Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> v = '\u20ac' >>> print (v) ? >>> v.encode('utf-16') b'\xff\xfe\xac ' >>> v.encode('utf-16_be') b' \xac' I had expected to get pair of bytes with 20 AC for the ? 
symbol ---------- components: Unicode messages: 314443 nosy: anders.rundgren.net at gmail.com, ezio.melotti, vstinner priority: normal severity: normal status: open title: encode UTF-16 generates unexpected results type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 11:00:28 2018 From: report at bugs.python.org (Wolfgang Maier) Date: Mon, 26 Mar 2018 15:00:28 +0000 Subject: [New-bugs-announce] [issue33144] random._randbelow optimization Message-ID: <1522076428.25.0.467229070634.issue33144@psf.upfronthosting.co.za> New submission from Wolfgang Maier : Given that the random module goes a long way to ensure optimal performance, I was wondering why the check for a match between the random and getrandbits methods is performed per call of Random._randbelow, when it could also be done at instantiation time (the attached patch uses __init_subclass__ for that purpose and, in my hands, gives 10-25% speedups for calls to methods relying on _randbelow). Is it really necessary to guard against someone monkey patching the methods rather than using inheritance? ---------- components: Library (Lib) files: randbelow.patch keywords: patch messages: 314455 nosy: rhettinger, wolma priority: normal severity: normal status: open title: random._randbelow optimization type: performance versions: Python 3.8 Added file: https://bugs.python.org/file47501/randbelow.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 12:29:47 2018 From: report at bugs.python.org (Rolf Eike Beer) Date: Mon, 26 Mar 2018 16:29:47 +0000 Subject: [New-bugs-announce] [issue33145] unaligned accesses in siphash24() lead to crashes on sparc Message-ID: <1522081787.81.0.467229070634.issue33145@psf.upfronthosting.co.za> Change by Rolf Eike Beer : ---------- components: Library (Lib) nosy: Dakon priority: normal pull_requests: 5983 severity: normal status: open title: unaligned accesses in siphash24() lead to crashes on sparc versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 12:40:34 2018 From: report at bugs.python.org (Jason R. Coombs) Date: Mon, 26 Mar 2018 16:40:34 +0000 Subject: [New-bugs-announce] [issue33146] contextlib.suppress should capture exception for inspection and filter on substrings Message-ID: <1522082434.07.0.467229070634.issue33146@psf.upfronthosting.co.za> New submission from Jason R. Coombs : I propose the following expansion of the interface of contextlib.suppress. Currently, when entering the context, suppress returns None. Instead, it could return an object that provides some detail about the exception. Inspiration for an implementation exists in pytest (https://github.com/pytest-dev/pytest/blob/ff3d13ed0efab6692a07059b1d61c53eec6e0412/_pytest/python_api.py#L627), capturing the commonly-encountered use-cases, where one wishes to capture, suppress, and then act on a subset of exceptions, allowing others to raise normally. In [py-181](https://github.com/pytest-dev/py/pull/181), I suggest exposing this functionality generally, but others had an instinct similar to mine - that perhaps the stdlib should be providing this interface. 
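One way to picture the proposed interface is a thin wrapper over today's suppress (an illustrative sketch only; the `exception` attribute name is an assumption, not an agreed API):

```
import contextlib

class capturing_suppress(contextlib.AbstractContextManager):
    # Like contextlib.suppress, but remembers what was suppressed.
    def __init__(self, *exceptions):
        self._exceptions = exceptions
        self.exception = None

    def __enter__(self):
        return self

    def __exit__(self, exctype, excinst, exctb):
        if exctype is not None and issubclass(exctype, self._exceptions):
            self.exception = excinst
            return True   # swallow it, exactly as contextlib.suppress does
        return False

with capturing_suppress(ZeroDivisionError) as caught:
    1 / 0
print(caught.exception)   # division by zero
```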
In addition to saving the exception for inspection, the pytest implementation also allows a "message" to be supplied (for those exceptions where only some subset of the class of Exception is suppressed). I present this concept here for consideration and feedback. Can contextlib.suppress be expanded with such an interface? ---------- components: Library (Lib) messages: 314461 nosy: jason.coombs priority: normal severity: normal status: open title: contextlib.suppress should capture exception for inspection and filter on substrings versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 12:45:21 2018 From: report at bugs.python.org (Paul Hoffman) Date: Mon, 26 Mar 2018 16:45:21 +0000 Subject: [New-bugs-announce] [issue33147] Update references for RFC 3548 to RFC 4648 Message-ID: <1522082721.06.0.467229070634.issue33147@psf.upfronthosting.co.za> New submission from Paul Hoffman : serhiy-storchaka asked me to open an issue about whether Python implements RFC 4648. As far as I can tell it does, correctly, for the parts of RFC 4648 covered in the doc. My PR was about simply updating a reference to an RFC that was made obsolete. ---------- messages: 314462 nosy: paulehoffman priority: normal pull_requests: 5985 severity: normal status: open title: Update references for RFC 3548 to RFC 4648 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 18:50:43 2018 From: report at bugs.python.org (Vitaly Kruglikov) Date: Mon, 26 Mar 2018 22:50:43 +0000 Subject: [New-bugs-announce] [issue33148] RuntimeError('Event loop is closed') after cancelling getaddrinfo and closing loop Message-ID: <1522104643.31.0.467229070634.issue33148@psf.upfronthosting.co.za> New submission from Vitaly Kruglikov : I see this exception on the terminal: ``` exception calling callback for Traceback (most recent call last): File "/usr/local/Cellar/python/3.6.4_3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 324, in _invoke_callbacks callback(self) File "/usr/local/Cellar/python/3.6.4_3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/futures.py", line 414, in _call_set_state dest_loop.call_soon_threadsafe(_set_state, destination, source) File "/usr/local/Cellar/python/3.6.4_3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 620, in call_soon_threadsafe self._check_closed() File "/usr/local/Cellar/python/3.6.4_3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed ``` When executing this code: ``` import asyncio while True: loop = asyncio.new_event_loop() coro = loop.getaddrinfo('www.google.com', 80) task = asyncio.ensure_future(coro, loop=loop) task.cancel() loop.call_soon_threadsafe(loop.stop) loop.run_forever() loop.close() ``` Shouldn't a cancelled operation go away (or at least pretend to go away) cleanly? 
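For what it's worth, a heavily hedged mitigation sketch: getaddrinfo() runs in the default executor, and cancelling the asyncio task does not interrupt that worker thread, so its completion callback can fire after loop.close(). Draining the cancelled task and giving the executor a moment before closing narrows (but may not fully close) that race:

```
import asyncio
import contextlib

loop = asyncio.new_event_loop()
task = asyncio.ensure_future(loop.getaddrinfo('www.google.com', 80), loop=loop)
task.cancel()

# Let the cancelled task settle while the loop is still usable.
with contextlib.suppress(asyncio.CancelledError):
    loop.run_until_complete(task)

# Give the executor thread running getaddrinfo() a chance to finish.
loop.run_until_complete(asyncio.sleep(0.1))
loop.close()
```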
---------- components: asyncio messages: 314484 nosy: asvetlov, vitaly.krug, yselivanov priority: normal severity: normal status: open title: RuntimeError('Event loop is closed') after cancelling getaddrinfo and closing loop versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 19:57:23 2018 From: report at bugs.python.org (Isaac Elliott) Date: Mon, 26 Mar 2018 23:57:23 +0000 Subject: [New-bugs-announce] [issue33149] Parser stack overflows Message-ID: <1522108643.61.0.467229070634.issue33149@psf.upfronthosting.co.za> New submission from Isaac Elliott : python3's parser stack overflows on deeply-nested expressions, for example: [[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]] or aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa())))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) These are both minimal examples, so if you remove one level of nesting from either then python3 will behave normally. ---------- messages: 314485 nosy: Isaac Elliott priority: normal severity: normal status: open title: Parser stack overflows versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 26 22:39:48 2018 From: report at bugs.python.org (Arno-Can Uestuensoez) Date: Tue, 27 Mar 2018 02:39:48 +0000 Subject: [New-bugs-announce] [issue33150] Signature error for methods of class configparser.Interpolation Message-ID: <1522118388.18.0.467229070634.issue33150@psf.upfronthosting.co.za> New submission from Arno-Can Uestuensoez : I am not sure whether this is already covered by an issue, it is present in 3.6.2 and 3.6.4. The class Interpolation in the configparser module causes an exception: File "/opt/python/python-3.6.4/lib/python3.6/configparser.py", line 1123, in _join_multiline_values name, val) TypeError: before_read() missing 1 required positional argument: 'value' This is due to the missing 'parser' parameter at the call of 'Interploation.xyz()' methods, also the case for several other method calls. class Interpolation: """Dummy interpolation that passes the value through with no changes.""" def before_read(self, parser, section, option, value): return value ... Same for derived classes see e.g. class BasicInterpolation(Interpolation): ... class ExtendedInterpolation(Interpolation): ... A work around seems to be: - defining a dummy with changed signatures as parameter 'interpolation' ---------- components: Argument Clinic, Build, asyncio messages: 314493 nosy: acue, asvetlov, larry, yselivanov priority: normal severity: normal status: open title: Signature error for methods of class configparser.Interpolation type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 00:47:44 2018 From: report at bugs.python.org (Barry A. 
Warsaw) Date: Tue, 27 Mar 2018 04:47:44 +0000 Subject: [New-bugs-announce] [issue33151] importlib.resources breaks on subdirectories Message-ID: <1522126064.63.0.467229070634.issue33151@psf.upfronthosting.co.za> New submission from Barry A. Warsaw : Found a bug when trying to read a resource from a subpackage in a zip file. I was actually surprised we didn't have a test for this AFAICT, and when I added one, it did fail. I have a PR coming soon. ---------- assignee: barry components: Library (Lib) messages: 314500 nosy: barry, brett.cannon priority: normal severity: normal status: open title: importlib.resources breaks on subdirectories versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 04:05:41 2018 From: report at bugs.python.org (Windson Yang) Date: Tue, 27 Mar 2018 08:05:41 +0000 Subject: [New-bugs-announce] [issue33152] clean code Message-ID: <1522137941.19.0.467229070634.issue33152@psf.upfronthosting.co.za> New submission from Windson Yang : https://github.com/python/cpython/blob/master/Lib/timeit.py#L202 use a list comprehension instead ---------- components: Distutils messages: 314504 nosy: Windson Yang, dstufft, eric.araujo priority: normal severity: normal status: open title: clean code type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 06:10:51 2018 From: report at bugs.python.org (Ivan Zakharyaschev) Date: Tue, 27 Mar 2018 10:10:51 +0000 Subject: [New-bugs-announce] [issue33153] interpreter crash when multiplying large tuples Message-ID: <1522145451.62.0.467229070634.issue33153@psf.upfronthosting.co.za> New submission from Ivan Zakharyaschev : The issue https://bugs.python.org/msg314475 has arisen for tuples (but not for lists, as in the example there) in 2.7.14 for me. How should we fix it in a better way? This bug is not reproducible in python 3.5.4. [builder at localhost ~]$ python Python 2.7.14 (default, Nov 7 2017, 17:07:17) [GCC 6.3.1 20170118 (ALT 6.3.1-alt2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> x = [0] * 2**20 >>> x *= 2**20 Traceback (most recent call last): File "", line 1, in MemoryError >>> x = [0,0,0,0,0,0] * 2**20 >>> x *= 2**20 Traceback (most recent call last): File "", line 1, in MemoryError >>> x = ('a', 'b') >>> x = ('a', 'b') * 2**20 >>> x *= 2**20 Segmentation fault [builder at localhost ~]$ python --version Python 2.7.14 [builder at localhost ~]$ python Python 2.7.14 (default, Nov 7 2017, 17:07:17) [GCC 6.3.1 20170118 (ALT 6.3.1-alt2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.maxsize 2147483647 >>> sys.maxint 2147483647 >>> [builder at localhost ~]$ python RPM/BUILD/Python-2.7.14/Lib/test/test_tuple.py test_addmul (__main__.TupleTest) ... ok test_bigrepeat (__main__.TupleTest) ... 
Segmentation fault [builder at localhost ~]$ ---------- components: Interpreter Core messages: 314508 nosy: imz priority: normal severity: normal status: open title: interpreter crash when multiplying large tuples type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 06:54:43 2018 From: report at bugs.python.org (Arno-Can Uestuensoez) Date: Tue, 27 Mar 2018 10:54:43 +0000 Subject: [New-bugs-announce] [issue33154] subprocess.Popen ResourceWarning should have activation-deactivation flags Message-ID: <1522148083.75.0.467229070634.issue33154@psf.upfronthosting.co.za> New submission from Arno-Can Uestuensoez : In Python 3.6 a number of resource warnings were added to the subprocess call *subprocess.Popen*, covering the subprocess run state and open files. This is a very good facility for debugging, but it causes a lot of trouble for programs relying on subprocesses via the STDIO/STDERR interface. The STDIO/STDERR interfaces are very common when shell utilities are incorporated into high-level Python programs. The other issue is the unit testing of command line tools as black-box tests, which rely solely on the STDOUT and STDERR interfaces. I am currently finishing a subprocess test package with common code for Python 2.7 and Python 3.5+, and am facing some trouble with IO filtering. Examples are attached. A system call should process the common IO interfaces of the called subprocesses by default without any additional output. So a call flag and/or an environment variable should be introduced that allows these messages to be activated and deactivated. The default should be *deactivated*. ---------- components: Library (Lib) files: python3-output.txt messages: 314511 nosy: acue, martin.panter, pitrou, python-dev, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: subprocess.Popen ResourceWarning should have activation-deactivation flags type: resource usage versions: Python 3.6 Added file: https://bugs.python.org/file47502/python3-output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 07:22:48 2018 From: report at bugs.python.org (Mads Jensen) Date: Tue, 27 Mar 2018 11:22:48 +0000 Subject: [New-bugs-announce] [issue33155] Use super().method instead in Logging Message-ID: <1522149768.39.0.467229070634.issue33155@psf.upfronthosting.co.za> New submission from Mads Jensen : There are lots of legacy calls in the form of ClassName.method, which should be replaced with super().method. This is an issue in many modules; I've been asked to create a report for each module that the PR touches. ---------- components: Library (Lib) messages: 314517 nosy: madsjensen priority: normal severity: normal status: open title: Use super().method instead in Logging _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 07:24:09 2018 From: report at bugs.python.org (Mads Jensen) Date: Tue, 27 Mar 2018 11:24:09 +0000 Subject: [New-bugs-announce] [issue33156] Use super().method instead in email classes. Message-ID: <1522149849.99.0.467229070634.issue33156@psf.upfronthosting.co.za> New submission from Mads Jensen : There are lots of legacy calls in the form of ClassName.method, which should be replaced with super().method.
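For illustration, the kind of mechanical change being proposed (a generic example, not actual email or logging module code):

```
class Base:
    def close(self):
        print('Base.close')

class Legacy(Base):
    def close(self):
        Base.close(self)    # legacy spelling: names the parent class explicitly

class Modern(Base):
    def close(self):
        super().close()     # preferred: cooperative and robust against renames

Legacy().close()
Modern().close()
```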
---------- components: email messages: 314519 nosy: barry, madsjensen, r.david.murray priority: normal pull_requests: 5997 severity: normal status: open title: Use super().method instead in email classes. versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 11:15:51 2018 From: report at bugs.python.org (yemiteliyadu) Date: Tue, 27 Mar 2018 15:15:51 +0000 Subject: [New-bugs-announce] [issue33157] Strings beginning with underscore not removed from lists - feature or bug? Message-ID: <1522163751.07.0.467229070634.issue33157@psf.upfronthosting.co.za> New submission from yemiteliyadu : Strings beginning with underscore not removed from lists Reproducible as shown below: Python 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 12:04:33) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> test = ["a","_a","b","_b"] >>> for i in test: print(i) ... a _a b _b >>> for i in test: test.remove(i) ... >>> test ['_a', '_b'] >>> Is this a feature or a bug? A search through the docs did not show any mention of this. ---------- messages: 314530 nosy: yemiteliyadu priority: normal severity: normal status: open title: Strings beginning with underscore not removed from lists - feature or bug? type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 12:17:09 2018 From: report at bugs.python.org (Samwyse) Date: Tue, 27 Mar 2018 16:17:09 +0000 Subject: [New-bugs-announce] [issue33158] Add fileobj property to csv reader and writer objects Message-ID: <1522167429.73.0.467229070634.issue33158@psf.upfronthosting.co.za> New submission from Samwyse : Many objects have properties that allow access to the arguments used to create them. In particular, file objects have a name property that returns the name used when opening a file. A fileobj property would be convenient, as you otherwise, for example, need to pass an extra argument to routines that need both the csv object and the underlying file object. Adopting this enhancement would also provide consistency with the dialect constructer argument, which is available as an object property. Changing the fileobj while the csv object is in use would open a can of worms, so this should be a read-only property. Optionally, the fileobj property could be reflected in the DictReader and DictWriter classes, but the value would be accessible via the .reader and .writer properties of those classes. ---------- components: Library (Lib) messages: 314538 nosy: samwyse priority: normal severity: normal status: open title: Add fileobj property to csv reader and writer objects type: enhancement versions: Python 2.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 15:32:44 2018 From: report at bugs.python.org (skreft) Date: Tue, 27 Mar 2018 19:32:44 +0000 Subject: [New-bugs-announce] [issue33159] Implement PEP 473 Message-ID: <1522179164.75.0.467229070634.issue33159@psf.upfronthosting.co.za> New submission from skreft : Implement PEP 473. 
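For context, PEP 473 proposes attaching structured data to built-in exceptions so callers do not have to parse rendered messages; roughly along these lines (the `key` attribute below illustrates the PEP's direction and is not an existing API):

```
d = {}
try:
    d['missing']
except KeyError as exc:
    # Today the offending key is only reachable positionally:
    print(exc.args[0])   # missing
    # Under PEP 473 it would also be exposed as a named attribute,
    # e.g. something like exc.key, with similar attributes proposed
    # for IndexError, AttributeError, etc.
```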
---------- messages: 314546 nosy: skreft priority: normal severity: normal status: open title: Implement PEP 473 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 17:07:48 2018 From: report at bugs.python.org (Facundo Batista) Date: Tue, 27 Mar 2018 21:07:48 +0000 Subject: [New-bugs-announce] [issue33160] Negative values in positional access inside formatting Message-ID: <1522184868.7.0.467229070634.issue33160@psf.upfronthosting.co.za> New submission from Facundo Batista : This works fine: >>> "{[0]}".format([1, 2, 3]) '1' This should work too: >>> "{[-1]}".format([1, 2, 3]) Traceback (most recent call last): File "", line 1, in TypeError: list indices must be integers or slices, not str ---------- messages: 314549 nosy: facundobatista priority: normal severity: normal status: open title: Negative values in positional access inside formatting versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 27 20:47:21 2018 From: report at bugs.python.org (Ekin Dursun) Date: Wed, 28 Mar 2018 00:47:21 +0000 Subject: [New-bugs-announce] [issue33161] Refactor of pathlib's _WindowsBehavior.gethomedir Message-ID: <1522198041.64.0.467229070634.issue33161@psf.upfronthosting.co.za> New submission from Ekin Dursun : At line 245, default value for drv is provided with KeyError handling, but it is better to use dict's get method. ---------- components: Library (Lib), Windows messages: 314562 nosy: onlined, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Refactor of pathlib's _WindowsBehavior.gethomedir type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 04:02:50 2018 From: report at bugs.python.org (Nikunj jain) Date: Wed, 28 Mar 2018 08:02:50 +0000 Subject: [New-bugs-announce] [issue33162] TimedRotatingFileHandler in logging module Message-ID: <1522224170.9.0.467229070634.issue33162@psf.upfronthosting.co.za> New submission from Nikunj jain : Currently the TimedRotatingFileHandler in Python, when rotates the log file, invents a new file extension by adding the new date in the end of the file name. It would be really good if a prefix option could be provided which instead of adding the new date in end, will add it in the beginning. ---------- messages: 314569 nosy: Nikunj jain priority: normal severity: normal status: open title: TimedRotatingFileHandler in logging module type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 04:13:33 2018 From: report at bugs.python.org (Ned Deily) Date: Wed, 28 Mar 2018 08:13:33 +0000 Subject: [New-bugs-announce] [issue33163] Upgrade pip to 9.0.3 and setuptools to v39.0.1 Message-ID: <1522224813.43.0.467229070634.issue33163@psf.upfronthosting.co.za> New submission from Ned Deily : pip and setuptools were updated in the following commits: PR 6184 d93b5161af12291f3f98a260c90cc2975ea9e9cd for master (3.8.0) PR 6185 8f46176f0e19d31d8642735e535183a39c5e0bdc for 3.7 (3.7.0rc3) PR 6186 560ea272b01acaa6c531cc7d94331b2ef0854be6 for 3.6 (3.6.5) PR 6187 1ce4e5bee6df476836f799456f2caf77cd13dc97 for 2.7 (2.7.15) Need to add NEWS entries for them (to follow). 
---------- components: Build messages: 314570 nosy: dstufft, ned.deily priority: normal severity: normal status: open title: Upgrade pip to 9.0.3 and setuptools to v39.0.1 versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 05:04:44 2018 From: report at bugs.python.org (David Carlier) Date: Wed, 28 Mar 2018 09:04:44 +0000 Subject: [New-bugs-announce] [issue33164] Blake 2 module update Message-ID: <1522227884.31.0.467229070634.issue33164@psf.upfronthosting.co.za> Change by David Carlier : ---------- components: Extension Modules nosy: David Carlier priority: normal pull_requests: 6013 severity: normal status: open title: Blake 2 module update versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 05:05:21 2018 From: report at bugs.python.org (Nick Coghlan) Date: Wed, 28 Mar 2018 09:05:21 +0000 Subject: [New-bugs-announce] [issue33165] Add stacklevel parameter to logging APIs Message-ID: <1522227921.35.0.467229070634.issue33165@psf.upfronthosting.co.za> New submission from Nick Coghlan : warnings.warn() offers a stacklevel parameter to make it easier to write helper functions that generate warnings - by passing "stacklevel=2", you can ensure the warning is attributed to the caller of the helper function, rather than to the helper function itself. There isn't currently a similarly clear way to write helper functions that emit logging messages - if the format includes "pathname", "filename", "module", "function", or "lineno", then those will always report the location of the helper function, rather than the caller of the helper function. It would be convenient if logging.debug() et al accepted a "stacklevel" parameter the same way the warnings module does (although this may require some adjustments to the Logger.findCaller method API) ---------- components: Library (Lib) messages: 314578 nosy: ncoghlan, vinay.sajip priority: normal severity: normal stage: needs patch status: open title: Add stacklevel parameter to logging APIs type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 05:44:42 2018 From: report at bugs.python.org (yanir hainick) Date: Wed, 28 Mar 2018 09:44:42 +0000 Subject: [New-bugs-announce] [issue33166] os.cpu_count() returns wrong number of processors on specific systems Message-ID: <1522230282.76.0.467229070634.issue33166@psf.upfronthosting.co.za> Change by yanir hainick : ---------- components: Windows nosy: paul.moore, steve.dower, tim.golden, yanirh, zach.ware priority: normal severity: normal status: open type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 07:48:46 2018 From: report at bugs.python.org (Matt Eaton) Date: Wed, 28 Mar 2018 11:48:46 +0000 Subject: [New-bugs-announce] [issue33167] RFC Documentation Updates to urllib.parse.rst Message-ID: <1522237726.19.0.467229070634.issue33167@psf.upfronthosting.co.za> New submission from Matt Eaton : A recent patch that I worked on resulted in an agreement that there could be a use case for a new URL API to be added to urllib.parse. 
See: https://bugs.python.org/issue33034 In my research to develop this new API I have been looking at the documentation for urllib.parse - https://docs.python.org/3/library/urllib.parse.html - and thought that the descriptions for the RFC documents could use an update to better reflect the meaning of each document. ---------- assignee: docs at python components: Documentation messages: 314584 nosy: agnosticdev, docs at python priority: normal severity: normal status: open title: RFC Documentation Updates to urllib.parse.rst versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 10:21:21 2018 From: report at bugs.python.org (Christoph Reiter) Date: Wed, 28 Mar 2018 14:21:21 +0000 Subject: [New-bugs-announce] [issue33168] distutils build/build_ext and --debug Message-ID: <1522246881.58.0.467229070634.issue33168@psf.upfronthosting.co.za> New submission from Christoph Reiter : The distutils "build" and "build_ext" commands provide a "--debug" option to enable building with debug information. But this option doesn't have any effect because the default CFLAGS contain "-g" (python3-config --cflags), so debug information is always included and "-g0" isn't passed if debug is False. Is this intentional? ---------- components: Distutils messages: 314593 nosy: dstufft, eric.araujo, lazka priority: normal severity: normal status: open title: distutils build/build_ext and --debug _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 10:30:18 2018 From: report at bugs.python.org (Guido van Rossum) Date: Wed, 28 Mar 2018 14:30:18 +0000 Subject: [New-bugs-announce] [issue33169] importlib.invalidate_caches() doesn't clear all caches Message-ID: <1522247418.61.0.467229070634.issue33169@psf.upfronthosting.co.za> New submission from Guido van Rossum : See https://github.com/python/mypy/pull/4811. To summarize, importlib.invalidate_caches() doesn't clear the negative cache in sys.path_importer_cache. Could be related to https://bugs.python.org/issue30891? ---------- messages: 314595 nosy: gvanrossum priority: normal severity: normal status: open title: importlib.invalidate_caches() doesn't clear all caches versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 10:51:28 2018 From: report at bugs.python.org (Maxim Avanov) Date: Wed, 28 Mar 2018 14:51:28 +0000 Subject: [New-bugs-announce] [issue33170] New type based on int() created with typing.NewType is not consistent Message-ID: <1522248688.28.0.467229070634.issue33170@psf.upfronthosting.co.za> New submission from Maxim Avanov : From my understanding of the docs section on new types, https://docs.python.org/3/library/typing.html#newtype the new type based on int() should just pass the value into the base constructor.
However, ``` PercentDiscount = NewType('PercentDiscount', int) >>> PercentDiscount(50) == int(50) True >>> int('50') == int(50) True >>> PercentDiscount('50') == PercentDiscount(50) False ``` ---------- components: Library (Lib) messages: 314598 nosy: avanov priority: normal severity: normal status: open title: New type based on int() created with typing.NewType is not consistent versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 10:56:29 2018 From: report at bugs.python.org (yanir hainick) Date: Wed, 28 Mar 2018 14:56:29 +0000 Subject: [New-bugs-announce] [issue33171] multiprocessing won't utilize all of platform resources Message-ID: <1522248989.15.0.467229070634.issue33171@psf.upfronthosting.co.za> New submission from yanir hainick : I'm using either multiprocessing package or concurrent.futures for some embarrassingly parallel application. I performed a simple test: basically making n_jobs calls for a simple function - 'sum(list(range(n)))', with n large enough so that the operation is a few seconds long - where n_jobs > n_logical_cores. Tried it on two platforms: first platform: server with X4 Intel Xeon E5-4620 (8 physical, 16 logical), running a 64bit Windows Server 2012 R2 Standard. *** second platform: server with X2 Intel Xeon Gold 6138 (20 physical, 40 logical), running a 64bit Windows Server 2016 Standard. *** first platform reaches 100% utilization. second platform reaches 25% utilization. ---------- components: Windows messages: 314600 nosy: paul.moore, steve.dower, tim.golden, yanirh, zach.ware priority: normal severity: normal status: open title: multiprocessing won't utilize all of platform resources type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 28 11:45:10 2018 From: report at bugs.python.org (Jonathan) Date: Wed, 28 Mar 2018 15:45:10 +0000 Subject: [New-bugs-announce] [issue33172] Update built-in version of SQLite3 Message-ID: <1522251910.92.0.467229070634.issue33172@psf.upfronthosting.co.za> New submission from Jonathan : The current version of SQLite (in Python 3.6) is 3.7.17 which was released almost 5 years ago - https://www.sqlite.org/releaselog/3_7_17.html Given that user updating of the version of SQLite used by Python is something of a pain (and the process is different across platforms (*and* different again for virtual-envs across platforms)), can the built-in version please be updated to a more recent version? This will allow usage of new SQLite features and users can benefit from a lot of performance enhancements/optimisations too. SQLite has excellent backwards compatibility, so except for any regressions (and they run over a hundred million tests per release to keep them to a minimum), any newer version will be backwards compatible with that version. 
Thanks ---------- messages: 314610 nosy: jonathan-lp priority: normal severity: normal status: open title: Update built-in version of SQLite3 versions: Python 3.6 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 28 12:28:48 2018 From: report at bugs.python.org (Walt Askew) Date: Wed, 28 Mar 2018 16:28:48 +0000 Subject: [New-bugs-announce] [issue33173] GzipFile's .seekable() returns True even if underlying buffer is not seekable Message-ID: <1522254528.0.0.467229070634.issue33173@psf.upfronthosting.co.za> New submission from Walt Askew : The seekable method on gzip.GzipFile always returns True, even if the underlying buffer is not seekable. However, if seek is called on the GzipFile, the seek will fail unless the underlying buffer is seekable. This can cause consumers of the GzipFile object to mistakenly believe calling seek on the object is safe, when in fact it will lead to an exception. For example, this led to a bug when I was trying to use requests & boto3 to stream & decompress an S3 upload like so:
resp = requests.get(uri, stream=True)
decompressed = gzip.GzipFile(fileobj=resp.raw)
boto3.client('s3').upload_fileobj(decompressed, Bucket=bucket, Key=key)
boto3 checks the seekable method on the GzipFile, chooses a code path based on the file being seekable, but later raises an exception when the seek call fails because the underlying HTTP stream is not seekable. ---------- components: Library (Lib) messages: 314613 nosy: Walt Askew priority: normal severity: normal status: open title: GzipFile's .seekable() returns True even if underlying buffer is not seekable versions: Python 3.6 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 28 16:01:02 2018 From: report at bugs.python.org (William Scullin) Date: Wed, 28 Mar 2018 20:01:02 +0000 Subject: [New-bugs-announce] [issue33174] error building the _sha3 module with Intel 2018 compilers Message-ID: <1522267262.01.0.467229070634.issue33174@psf.upfronthosting.co.za> New submission from William Scullin : When building Python 3.6.X and later with icc (18.0.0.128 or 18.0.1.163), there's an error building the _sha3 module with any optimization level other than -O0:
building '_sha3' extension
icc -pthread -fPIC -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -std=c99 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -fp-model strict -I./Include -I. -I/usr/local/include -I/derp/Python-3.6.4/Include -I/derp/Python-3.6.4 -c /derp/Python-3.6.4/Modules/_sha3/sha3module.c -o build/temp.linux-x86_64-3.6/derp/Python-3.6.4/Modules/_sha3/sha3module.o
": internal error: ** The compiler has encountered an unexpected problem. ** Segmentation violation signal raised. ** Access violation or stack overflow. Please contact Intel Support for assistance. compilation aborted for /derp/Python-3.6.4/Modules/_sha3/sha3module.c (code 4)
... [ jlselogin2: Python-3.6.4 ]$ If I drop to -O0, compilation works every time. I haven't found disabling any particular set of optimizations to be useful in obtaining a successful build with icc. Intel has been notified and a bug filed as this is really a compiler bug.
On the Python side, it does not appear possible to use Modules/Setup to drop the optimization level for just _sha3, and I'm hunting for a workaround. ---------- components: Installation messages: 314619 nosy: wscullin priority: normal severity: normal status: open title: error building the _sha3 module with Intel 2018 compilers type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 04:11:13 2018 From: report at bugs.python.org (Eric V. Smith) Date: Thu, 29 Mar 2018 08:11:13 +0000 Subject: [New-bugs-announce] [issue33175] dataclasses should look up __set_name__ on class, not instance Message-ID: <1522311073.3.0.467229070634.issue33175@psf.upfronthosting.co.za> New submission from Eric V. Smith : Reported by Jelle Zijlstra at https://github.com/python/cpython/pull/6260#pullrequestreview-107905037 ---------- assignee: eric.smith components: Library (Lib) messages: 314636 nosy: eric.smith priority: normal severity: normal stage: needs patch status: open title: dataclasses should look up __set_name__ on class, not instance type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 04:21:52 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Thu, 29 Mar 2018 08:21:52 +0000 Subject: [New-bugs-announce] [issue33176] Allow memoryview.cast(readonly=...) Message-ID: <1522311712.08.0.467229070634.issue33176@psf.upfronthosting.co.za> New submission from Antoine Pitrou : It may be useful to get a readonly view of a memoryview. ---------- components: Interpreter Core messages: 314637 nosy: pitrou, skrah priority: normal severity: normal status: open title: Allow memoryview.cast(readonly=...) type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 04:47:59 2018 From: report at bugs.python.org (Joongi Kim) Date: Thu, 29 Mar 2018 08:47:59 +0000 Subject: [New-bugs-announce] [issue33177] make install hangs on macOS when there is an existing Python app Message-ID: <1522313279.55.0.467229070634.issue33177@psf.upfronthosting.co.za> New submission from Joongi Kim : I have installed Python 3.6.4 for macOS by downloading it from the official site (www.python.org) and then tried installing 3.6.5 using pyenv. The installation process hangs here: https://user-images.githubusercontent.com/555156/38078784-57e44462-3378-11e8-8011-9579afc3c811.png There is a two-year-old issue in pyenv (https://github.com/pyenv/pyenv/issues/512), but this may have to be fixed from here.
---------- components: Installation, macOS messages: 314639 nosy: achimnol, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: make install hangs on macOS when there is an existing Python app type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 09:48:23 2018 From: report at bugs.python.org (emezh) Date: Thu, 29 Mar 2018 13:48:23 +0000 Subject: [New-bugs-announce] [issue33178] Add support for BigEndianUnion and LittleEndianUnion in ctypes Message-ID: <1522331303.68.0.467229070634.issue33178@psf.upfronthosting.co.za> New submission from emezh : The Python documentation says that "To build structures with non-native byte order, you can use one of the BigEndianStructure, LittleEndianStructure, BigEndianUnion, and LittleEndianUnion base classes". However, BigEndianUnion and LittleEndianUnion are not implemented:
>>> from ctypes import *
>>> BigEndianStructure
>>> BigEndianUnion
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'BigEndianUnion' is not defined
Is that something that can be added? See also https://bugs.python.org/issue19023 ---------- components: ctypes messages: 314647 nosy: Eugene Mezhibovsky priority: normal severity: normal status: open title: Add support for BigEndianUnion and LittleEndianUnion in ctypes type: enhancement versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 10:15:52 2018 From: report at bugs.python.org (Nick Coghlan) Date: Thu, 29 Mar 2018 14:15:52 +0000 Subject: [New-bugs-announce] [issue33179] Investigate using a context variable for zero-arg super initialisation Message-ID: <1522332952.84.0.467229070634.issue33179@psf.upfronthosting.co.za> New submission from Nick Coghlan : As noted in https://docs.python.org/3/reference/datamodel.html?#creating-the-class-object, implementing PEP 487 required the introduction of __classcell__ as a way for __build_class__ to pass the zero-arg super() cell object through to type.__new__. Now that Python 3.7+ offers context variables, we may be able to design a more robust (and better hidden) alternative which stashes the "current zero-arg super cell object" in a context variable, allowing type.__new__ to retrieve it when needed, without having to pass it through the class body execution namespace. ---------- messages: 314650 nosy: Martin.Teichmann, encukou, ncoghlan, yselivanov priority: normal severity: normal status: open title: Investigate using a context variable for zero-arg super initialisation type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 10:53:31 2018 From: report at bugs.python.org (Steve Dower) Date: Thu, 29 Mar 2018 14:53:31 +0000 Subject: [New-bugs-announce] [issue33180] Flag for unusable sys.executable Message-ID: <1522335211.5.0.467229070634.issue33180@psf.upfronthosting.co.za> New submission from Steve Dower : If you host Python in another program, it's likely that sys.executable is not pointing to a normal Python interpreter. This can cause libraries such as multiprocessing to fail when they try to launch the interpreter again.
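To illustrate the failure mode just described, here is a small sketch (the host-application path in the comment is invented): with a spawn-based start method, multiprocessing re-launches whatever sys.executable points at, which in an embedded interpreter is the host program rather than a python binary.
```
import multiprocessing
import sys

def work(x):
    return x * x

if __name__ == '__main__':
    # In a normal installation this prints the path of the python binary.
    # Under an embedding host it might print something like
    # '/opt/hostapp/hostapp' (invented path), and spawn-based start methods
    # would then re-launch that binary instead of a Python interpreter.
    print(sys.executable)
    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, range(4)))
```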
Worse, it may have launched your application many more times before failure :) I think we should add either a flag to indicate to any such library that sys.executable is not useful for relaunching Python, or a field that points to the actual executable but can safely be left None (or, for most horrendous generality, a list of arguments to relaunch, as sometimes a command line option can get you into a normal interpreter). These would be set by embedders only, and Programs/python.c would set the "normal" values. Thoughts? ---------- assignee: steve.dower messages: 314655 nosy: eric.snow, ncoghlan, steve.dower priority: normal severity: normal status: open title: Flag for unusable sys.executable type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 12:08:41 2018 From: report at bugs.python.org (Oliver Urs Lenz) Date: Thu, 29 Mar 2018 16:08:41 +0000 Subject: [New-bugs-announce] [issue33181] SimpleHTTPRequestHandler shouldn't redirect to directories with code 301 Message-ID: <1522339721.91.0.467229070634.issue33181@psf.upfronthosting.co.za> New submission from Oliver Urs Lenz : SimpleHTTPRequestHandler.send_head() has this bit:
if os.path.isdir(path):
    parts = urllib.parse.urlsplit(self.path)
    if not parts.path.endswith('/'):
        # redirect browser - doing basically what apache does
        self.send_response(HTTPStatus.MOVED_PERMANENTLY)
https://github.com/python/cpython/blob/521995205a2cb6b504fe0e39af22a81f785350a3/Lib/http/server.py#L676 I think there are two issues here: 1) why should the server return a redirect code here, and not (in the code that immediately follows) when it serves an index file? 2) code 301 (permanent redirect) is really unforgiving: browsers like Firefox and Chrome will permanently cache the redirect, making it essentially impossible to undo if you do not control the client, and not trivial even if you do. This will probably not change on the browser side; general opinion seems to be that limited caching should either be specified in the response header or that a different redirect code should be sent back.
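As a rough illustration of the header-based option (a hypothetical subclass, not an existing stdlib switch), a handler can at least mark its responses, including this 301, as uncacheable:
```
from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoCacheHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Ask clients not to cache anything, including the directory redirect.
        self.send_header('Cache-Control', 'no-store')
        super().end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8000), NoCacheHandler).serve_forever()
```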
https://lists.w3.org/Archives/Public/ietf-http-wg/2017OctDec/thread.html#msg363 Therefore I would like to propose that preferably, - no redirect code be sent back, or else that - a different redirect code be sent back, or else that - no-caching or a time limit be added to the header (This may require that send_head check for index files instead) ---------- components: Library (Lib) messages: 314663 nosy: oulenz priority: normal severity: normal status: open title: SimpleHTTPRequestHandler shouldn't redirect to directories with code 301 versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 29 14:43:37 2018 From: report at bugs.python.org (Bernhard Rosenkraenzer) Date: Thu, 29 Mar 2018 18:43:37 +0000 Subject: [New-bugs-announce] [issue33182] Python 3.7.0b3 fails to build with clang 6.0 Message-ID: <1522349017.92.0.467229070634.issue33182@psf.upfronthosting.co.za> New submission from Bernhard Rosenkraenzer : Python 3.7.0b3 fails to build with clang 6.0 (implicit cast from void* to a different pointer type is an error now): /usr/bin/clang++ -c -Wno-unused-result -Wsign-compare -Wunreachable-code -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -g -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -D_GNU_SOURCE -fPIC -fwrapv -I/usr/include/ncursesw -flto -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -D_GNU_SOURCE -fPIC -fwrapv -I/usr/include/ncursesw -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -D_GNU_SOURCE -fPIC -fwrapv -I/usr/include/ncursesw -fprofile-instr-generate -I. 
-I./Include -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -D_GNU_SOURCE -fPIC -fwrapv -I/usr/include/ncursesw -Os -gdwarf-4 -Wstrict-aliasing=2 -pipe -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -flto -O3 -D_GNU_SOURCE -fPIC -fwrapv -I/usr/include/ncursesw -fPIC -DPy_BUILD_CORE -o Programs/_testembed.o ./Programs/_testembed.c
clang-6.0: warning: treating 'c' input as 'c++' when in C++ mode, this behavior is deprecated [-Wdeprecated]
./Programs/_testembed.c:173:34: warning: ISO C++11 does not allow conversion from string literal to 'wchar_t *' [-Wwritable-strings] wchar_t *static_warnoption = L"once"; ^
./Programs/_testembed.c:174:31: warning: ISO C++11 does not allow conversion from string literal to 'wchar_t *' [-Wwritable-strings] wchar_t *static_xoption = L"also_not_an_option=2"; ^
./Programs/_testembed.c:177:14: error: cannot initialize a variable of type 'wchar_t *' with an rvalue of type 'void *' wchar_t *dynamic_once_warnoption = calloc(warnoption_len+1, sizeof(wchar_t)); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./Programs/_testembed.c:178:14: error: cannot initialize a variable of type 'wchar_t *' with an rvalue of type 'void *' wchar_t *dynamic_xoption = calloc(xoption_len+1, sizeof(wchar_t)); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
2 warnings and 2 errors generated. make[3]: *** [Makefile:777: Programs/_testembed.o] Error 1 ---------- components: Tests files: python-3.7.0b3-clang-6.0.patch keywords: patch messages: 314666 nosy: bero priority: normal severity: normal status: open title: Python 3.7.0b3 fails to build with clang 6.0 type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file47505/python-3.7.0b3-clang-6.0.patch _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 29 16:07:40 2018 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Blondon?=) Date: Thu, 29 Mar 2018 20:07:40 +0000 Subject: [New-bugs-announce] [issue33183] Refactoring: replacing some assertTrue by assertIn Message-ID: <1522354060.39.0.467229070634.issue33183@psf.upfronthosting.co.za> New submission from Stéphane Blondon : In several cases, tests use ```self.assertTrue(a in b)```. Using ```self.assertIn(a, b)``` seems to be better. For example:
./Lib/test/test_inspect.py: self.assertTrue('(po, pk' in repr(sig))
./Lib/test/test_configparser.py: self.assertTrue('that_value' in cf['Spacey Bar'])
./Lib/test/test_collections.py: self.assertTrue(elem in c)
There are some cases where ```self.assertTrue(a not in b)``` could be replaced by ```self.assertNotIn(a, b)```:
./Lib/tkinter/test/test_ttk/test_widgets.py: self.assertTrue('.' not in value)
./Lib/test/mapping_tests.py: self.assertTrue(not ('a' in d)) self.assertTrue('a' not in d)
$ find . -name "*.py" | xargs grep -r "assertTrue.* in " finds 131 occurrences, but there are some false positives in the output. I can write a patch if you are interested.
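For illustration (not part of the report), the practical difference is in the failure message, shown here with a made-up test case:
```
import unittest

class Example(unittest.TestCase):
    def test_membership(self):
        container = ['b', 'c']
        # self.assertTrue('a' in container) fails with only "False is not true";
        # assertIn also reports the container that was searched:
        self.assertIn('a', container)
        # AssertionError: 'a' not found in ['b', 'c']

if __name__ == '__main__':
    unittest.main()
```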
---------- components: Tests messages: 314670 nosy: sblondon priority: normal severity: normal status: open title: Refactoring: replacing some assertTrue by assertIn type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 29 22:11:58 2018 From: report at bugs.python.org (Ned Deily) Date: Fri, 30 Mar 2018 02:11:58 +0000 Subject: [New-bugs-announce] [issue33184] Update OpenSSL to 1.1.0h / 1.0.2o Message-ID: <1522375918.6.0.467229070634.issue33184@psf.upfronthosting.co.za> New submission from Ned Deily : https://www.openssl.org/source/ ---------- messages: 314675 nosy: christian.heimes, ned.deily, steve.dower, zach.ware priority: deferred blocker severity: normal stage: needs patch status: open title: Update OpenSSL to 1.1.0h / 1.0.2o versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 11:51:57 2018 From: report at bugs.python.org (Ned Batchelder) Date: Fri, 30 Mar 2018 15:51:57 +0000 Subject: [New-bugs-announce] [issue33185] Python 3.7.0b3 fails in pydoc where b2 did not. Message-ID: <1522425117.82.0.467229070634.issue33185@psf.upfronthosting.co.za> New submission from Ned Batchelder : "pydoc coverage" worked with 3.7b2, but fails with a surprising ModuleNotFoundError for configparser with b3. The configparser is importable in the Python interpreter. I tried with -v to what imports were attempted, and configparser isn't even mentioned until the failure message. Complete reproduction: # Using 3.7.0b2 $ mktmpenv -p /usr/local/pythonz/pythons/CPython-3.7.0b2/bin/python3.7 -n -q /Library/Python/2.7/site-packages/virtualenv.py:1098: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp Collecting pip Using cached pip-9.0.3-py2.py3-none-any.whl Collecting setuptools Using cached setuptools-39.0.1-py2.py3-none-any.whl Installing collected packages: pip, setuptools Found existing installation: pip 7.1.2 Uninstalling pip-7.1.2: Successfully uninstalled pip-7.1.2 Found existing installation: setuptools 18.2 Uninstalling setuptools-18.2: Successfully uninstalled setuptools-18.2 Successfully installed pip-9.0.3 setuptools-39.0.1 This is a temporary environment. It will be deleted when you run 'deactivate'. $ python -V Python 3.7.0b2 $ pip install -q coverage==4.5.1 $ pydoc coverage Help on package coverage: NAME coverage - Code coverage measurement for Python. DESCRIPTION Ned Batchelder https://nedbatchelder.com/code/coverage PACKAGE CONTENTS __main__ annotate backunittest backward bytecode cmdline collector config control data debug env execfile files html misc multiproc parser phystokens pickle2json plugin plugin_support python pytracer report results summary templite tracer version xmlreport DATA __url__ = 'https://coverage.readthedocs.io' version_info = (4, 5, 1, 'final', 0) VERSION 4.5.1 FILE /usr/local/virtualenvs/tmp-e3c595a6301312d/lib/python3.7/site-packages/coverage/__init__.py $ deactivate Removing temporary environment: tmp-e3c595a6301312d Removing tmp-e3c595a6301312d... 
# Using 3.7.0b3 $ mktmpenv -p /usr/local/pythonz/pythons/CPython-3.7.0b3/bin/python3.7 -n -q /Library/Python/2.7/site-packages/virtualenv.py:1098: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp Collecting pip Using cached pip-9.0.3-py2.py3-none-any.whl Collecting setuptools Using cached setuptools-39.0.1-py2.py3-none-any.whl Installing collected packages: pip, setuptools Found existing installation: pip 7.1.2 Uninstalling pip-7.1.2: Successfully uninstalled pip-7.1.2 Found existing installation: setuptools 18.2 Uninstalling setuptools-18.2: Successfully uninstalled setuptools-18.2 Successfully installed pip-9.0.3 setuptools-39.0.1 This is a temporary environment. It will be deleted when you run 'deactivate'. $ python -V Python 3.7.0b3 $ pip install -q coverage==4.5.1 $ pydoc coverage problem in coverage - ModuleNotFoundError: No module named 'configparser' $ python -c 'import configparser; print(configparser)' $ python -v -m pydoc coverage import _frozen_importlib # frozen import _imp # builtin import '_thread' # import '_warnings' # import '_weakref' # import '_frozen_importlib_external' # import '_io' # import 'marshal' # import 'posix' # import _thread # previously loaded ('_thread') import '_thread' # import _weakref # previously loaded ('_weakref') import '_weakref' # # installing zipimport hook import 'zipimport' # # installed zipimport hook import _thread # previously loaded ('_thread') import '_thread' # import _weakref # previously loaded ('_weakref') import '_weakref' # # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/__init__.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__init__.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/__init__.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/codecs.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/codecs.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/codecs.cpython-37.pyc' import '_codecs' # import 'codecs' # <_frozen_importlib_external.SourceFileLoader object at 0x1005da438> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/aliases.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/aliases.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/aliases.cpython-37.pyc' import 'encodings.aliases' # <_frozen_importlib_external.SourceFileLoader object at 0x1005e7f60> import 'encodings' # <_frozen_importlib_external.SourceFileLoader object at 0x1005cbe80> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/utf_8.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/utf_8.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/utf_8.cpython-37.pyc' import 'encodings.utf_8' # <_frozen_importlib_external.SourceFileLoader object at 0x1005fac88> import '_signal' # # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/latin_1.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/latin_1.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/encodings/__pycache__/latin_1.cpython-37.pyc' import 
'encodings.latin_1' # <_frozen_importlib_external.SourceFileLoader object at 0x1005fe710> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/io.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/io.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/io.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/abc.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/abc.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/abc.cpython-37.pyc' import '_abc' # import 'abc' # <_frozen_importlib_external.SourceFileLoader object at 0x1005fecc0> import 'io' # <_frozen_importlib_external.SourceFileLoader object at 0x1005fe908> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/site.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/site.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/os.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/os.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/os.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/stat.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/stat.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/stat.cpython-37.pyc' import '_stat' # import 'stat' # <_frozen_importlib_external.SourceFileLoader object at 0x10069d630> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/posixpath.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/posixpath.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/posixpath.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/genericpath.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/genericpath.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/genericpath.cpython-37.pyc' import 'genericpath' # <_frozen_importlib_external.SourceFileLoader object at 0x1006ad0b8> import 'posixpath' # <_frozen_importlib_external.SourceFileLoader object at 0x10069dcf8> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_collections_abc.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/_collections_abc.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_collections_abc.cpython-37.pyc' import '_collections_abc' # <_frozen_importlib_external.SourceFileLoader object at 0x1006ad6d8> import 'os' # <_frozen_importlib_external.SourceFileLoader object at 0x1006102b0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_bootlocale.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/_bootlocale.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_bootlocale.cpython-37.pyc' import '_locale' # import '_bootlocale' # <_frozen_importlib_external.SourceFileLoader object at 0x100610278> import 'site' # <_frozen_importlib_external.SourceFileLoader object at 0x100603828> Python 3.7.0b3 (default, Mar 29 2018, 23:29:31) [Clang 
9.0.0 (clang-900.0.39.2)] on darwin Type "help", "copyright", "credits" or "license" for more information. # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/runpy.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/runpy.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/runpy.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/__init__.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__init__.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/__init__.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/types.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/types.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/types.cpython-37.pyc' import 'types' # <_frozen_importlib_external.SourceFileLoader object at 0x1006f1be0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/warnings.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/warnings.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/warnings.cpython-37.pyc' import 'warnings' # <_frozen_importlib_external.SourceFileLoader object at 0x1006f1d30> import 'importlib' # <_frozen_importlib_external.SourceFileLoader object at 0x1006f17f0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/machinery.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/machinery.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/machinery.cpython-37.pyc' import 'importlib.machinery' # <_frozen_importlib_external.SourceFileLoader object at 0x1006fbda0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/util.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/util.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/util.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/abc.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/abc.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/importlib/__pycache__/abc.cpython-37.pyc' import 'importlib.abc' # <_frozen_importlib_external.SourceFileLoader object at 0x100706b38> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/contextlib.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/contextlib.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/contextlib.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/__pycache__/__init__.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/__init__.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/__pycache__/__init__.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/operator.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/operator.py # code object from 
'/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/operator.cpython-37.pyc' import '_operator' # import 'operator' # <_frozen_importlib_external.SourceFileLoader object at 0x100763940> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/keyword.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/keyword.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/keyword.cpython-37.pyc' import 'keyword' # <_frozen_importlib_external.SourceFileLoader object at 0x10076bac8> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/heapq.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/heapq.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/heapq.cpython-37.pyc' # extension module '_heapq' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_heapq.cpython-37m-darwin.so' # extension module '_heapq' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_heapq.cpython-37m-darwin.so' import '_heapq' # <_frozen_importlib_external.ExtensionFileLoader object at 0x10076f978> import 'heapq' # <_frozen_importlib_external.SourceFileLoader object at 0x10076f438> import 'itertools' # # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/reprlib.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/reprlib.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/reprlib.cpython-37.pyc' import 'reprlib' # <_frozen_importlib_external.SourceFileLoader object at 0x10076fa58> import '_collections' # import 'collections' # <_frozen_importlib_external.SourceFileLoader object at 0x100742b00> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/functools.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/functools.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/functools.cpython-37.pyc' import '_functools' # import 'functools' # <_frozen_importlib_external.SourceFileLoader object at 0x100742ef0> import 'contextlib' # <_frozen_importlib_external.SourceFileLoader object at 0x100712470> import 'importlib.util' # <_frozen_importlib_external.SourceFileLoader object at 0x1007062e8> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/pkgutil.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/pkgutil.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/pkgutil.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/weakref.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/weakref.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/weakref.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_weakrefset.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/_weakrefset.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/_weakrefset.cpython-37.pyc' import '_weakrefset' # <_frozen_importlib_external.SourceFileLoader object at 0x1007ad390> import 'weakref' # <_frozen_importlib_external.SourceFileLoader object at 0x100791160> import 'pkgutil' # <_frozen_importlib_external.SourceFileLoader object at 0x1007480f0> import 
'runpy' # <_frozen_importlib_external.SourceFileLoader object at 0x1006f1400> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/pydoc.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/pydoc.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/pydoc.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/inspect.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/inspect.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/inspect.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/dis.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/dis.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/dis.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/opcode.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/opcode.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/opcode.cpython-37.pyc' # extension module '_opcode' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_opcode.cpython-37m-darwin.so' # extension module '_opcode' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_opcode.cpython-37m-darwin.so' import '_opcode' # <_frozen_importlib_external.ExtensionFileLoader object at 0x10081ca20> import 'opcode' # <_frozen_importlib_external.SourceFileLoader object at 0x10081c2e8> import 'dis' # <_frozen_importlib_external.SourceFileLoader object at 0x10080f0f0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/__pycache__/abc.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/abc.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/collections/__pycache__/abc.cpython-37.pyc' import 'collections.abc' # <_frozen_importlib_external.SourceFileLoader object at 0x100824a58> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/enum.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/enum.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/enum.cpython-37.pyc' import 'enum' # <_frozen_importlib_external.SourceFileLoader object at 0x100824a90> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/linecache.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/linecache.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/linecache.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/tokenize.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/tokenize.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/tokenize.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/re.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/re.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/re.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_compile.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/sre_compile.py # code object from 
'/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_compile.cpython-37.pyc' import '_sre' # # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_parse.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/sre_parse.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_parse.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_constants.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/sre_constants.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/sre_constants.cpython-37.pyc' import 'sre_constants' # <_frozen_importlib_external.SourceFileLoader object at 0x10085edd8> import 'sre_parse' # <_frozen_importlib_external.SourceFileLoader object at 0x1008575f8> import 'sre_compile' # <_frozen_importlib_external.SourceFileLoader object at 0x100846f98> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/copyreg.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/copyreg.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/copyreg.cpython-37.pyc' import 'copyreg' # <_frozen_importlib_external.SourceFileLoader object at 0x1008742e8> import 're' # <_frozen_importlib_external.SourceFileLoader object at 0x1008460f0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/token.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/token.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/token.cpython-37.pyc' import 'token' # <_frozen_importlib_external.SourceFileLoader object at 0x1008745f8> import 'tokenize' # <_frozen_importlib_external.SourceFileLoader object at 0x100826ba8> import 'linecache' # <_frozen_importlib_external.SourceFileLoader object at 0x10081cbe0> import 'inspect' # <_frozen_importlib_external.SourceFileLoader object at 0x1007e9780> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/platform.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/platform.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/platform.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/subprocess.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/subprocess.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/subprocess.cpython-37.pyc' import 'time' # # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/signal.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/signal.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/signal.cpython-37.pyc' import 'signal' # <_frozen_importlib_external.SourceFileLoader object at 0x1008a9080> import 'errno' # # extension module '_posixsubprocess' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_posixsubprocess.cpython-37m-darwin.so' # extension module '_posixsubprocess' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_posixsubprocess.cpython-37m-darwin.so' import '_posixsubprocess' # <_frozen_importlib_external.ExtensionFileLoader object at 0x1008b2550> # extension module 'select' loaded from 
'/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/select.cpython-37m-darwin.so' # extension module 'select' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/select.cpython-37m-darwin.so' import 'select' # <_frozen_importlib_external.ExtensionFileLoader object at 0x1008b25c0> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/selectors.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/selectors.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/selectors.cpython-37.pyc' # extension module 'math' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/math.cpython-37m-darwin.so' # extension module 'math' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/math.cpython-37m-darwin.so' import 'math' # <_frozen_importlib_external.ExtensionFileLoader object at 0x100918160> import 'selectors' # <_frozen_importlib_external.SourceFileLoader object at 0x1008b2940> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/threading.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/threading.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/threading.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/traceback.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/traceback.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/traceback.cpython-37.pyc' import 'traceback' # <_frozen_importlib_external.SourceFileLoader object at 0x100930240> import 'threading' # <_frozen_importlib_external.SourceFileLoader object at 0x1008ba940> import 'subprocess' # <_frozen_importlib_external.SourceFileLoader object at 0x100894e80> import 'platform' # <_frozen_importlib_external.SourceFileLoader object at 0x100808208> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/__pycache__/__init__.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/__init__.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/__pycache__/__init__.cpython-37.pyc' import 'urllib' # <_frozen_importlib_external.SourceFileLoader object at 0x1008a02e8> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/__pycache__/parse.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/parse.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/urllib/__pycache__/parse.cpython-37.pyc' import 'urllib.parse' # <_frozen_importlib_external.SourceFileLoader object at 0x1008a0240> # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/getopt.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/getopt.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/getopt.cpython-37.pyc' # /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/gettext.cpython-37.pyc matches /usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/gettext.py # code object from '/usr/local/pythonz/pythons/CPython-3.7.0b3/lib/python3.7/__pycache__/gettext.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/locale.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/locale.py # code object from 
'/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/locale.cpython-37.pyc' import 'locale' # <_frozen_importlib_external.SourceFileLoader object at 0x100971a20> import 'gettext' # <_frozen_importlib_external.SourceFileLoader object at 0x100952da0> import 'getopt' # <_frozen_importlib_external.SourceFileLoader object at 0x1009527f0> # possible namespace for ./coverage # possible namespace for /Users/ned/coverage # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/__init__.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__init__.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/__init__.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/version.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/version.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/version.cpython-37.pyc' import 'coverage.version' # <_frozen_importlib_external.SourceFileLoader object at 0x1009a0748> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/control.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/control.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/control.cpython-37.pyc' import 'atexit' # # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/env.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/env.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/env.cpython-37.pyc' import 'coverage.env' # <_frozen_importlib_external.SourceFileLoader object at 0x100a6b4a8> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/annotate.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/annotate.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/annotate.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/files.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/files.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/files.cpython-37.pyc' # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/hashlib.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/hashlib.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/hashlib.cpython-37.pyc' # extension module '_hashlib' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_hashlib.cpython-37m-darwin.so' # extension module '_hashlib' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_hashlib.cpython-37m-darwin.so' import '_hashlib' # <_frozen_importlib_external.ExtensionFileLoader object at 0x100a7a6a0> # extension module '_blake2' loaded from 
'/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_blake2.cpython-37m-darwin.so' # extension module '_blake2' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_blake2.cpython-37m-darwin.so' import '_blake2' # <_frozen_importlib_external.ExtensionFileLoader object at 0x100a7ada0> # extension module '_sha3' loaded from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_sha3.cpython-37m-darwin.so' # extension module '_sha3' executed from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/lib-dynload/_sha3.cpython-37m-darwin.so' import '_sha3' # <_frozen_importlib_external.ExtensionFileLoader object at 0x100a7aeb8> import 'hashlib' # <_frozen_importlib_external.SourceFileLoader object at 0x100a75b70> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/fnmatch.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/fnmatch.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/fnmatch.cpython-37.pyc' import 'fnmatch' # <_frozen_importlib_external.SourceFileLoader object at 0x100a7a080> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/ntpath.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/ntpath.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/__pycache__/ntpath.cpython-37.pyc' import 'ntpath' # <_frozen_importlib_external.SourceFileLoader object at 0x100a800f0> # /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/backward.cpython-37.pyc matches /usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/backward.py # code object from '/usr/local/virtualenvs/tmp-95cdcd5f6881b25d/lib/python3.7/site-packages/coverage/__pycache__/backward.cpython-37.pyc' problem in coverage - ModuleNotFoundError: No module named 'configparser' # destroy coverage.backward # destroy coverage.files # destroy coverage.annotate # destroy coverage.control # destroy coverage # clear builtins._ # clear sys.path # clear sys.argv # clear sys.ps1 # clear sys.ps2 # clear sys.last_type # clear sys.last_value # clear sys.last_traceback # clear sys.path_hooks # clear sys.path_importer_cache # clear sys.meta_path # clear sys.__interactivehook__ # clear sys.flags # clear sys.float_info # restore sys.stdin # restore sys.stdout # restore sys.stderr # cleanup[2] removing sys # cleanup[2] removing builtins # cleanup[2] removing _frozen_importlib # cleanup[2] removing _imp # cleanup[2] removing _thread # cleanup[2] removing _warnings # cleanup[2] removing _weakref # cleanup[2] removing _frozen_importlib_external # cleanup[2] removing _io # cleanup[2] removing marshal # cleanup[2] removing posix # cleanup[2] removing zipimport # cleanup[2] removing encodings # cleanup[2] removing codecs # cleanup[2] removing _codecs # cleanup[2] removing encodings.aliases # cleanup[2] removing encodings.utf_8 # cleanup[2] removing _signal # cleanup[2] removing __main__ # destroy __main__ # cleanup[2] removing encodings.latin_1 # cleanup[2] removing io # cleanup[2] removing abc # cleanup[2] removing _abc # cleanup[2] removing site # destroy site # cleanup[2] removing os # cleanup[2] removing stat # cleanup[2] removing _stat # cleanup[2] removing posixpath # cleanup[2] removing genericpath # cleanup[2] removing os.path # cleanup[2] removing _collections_abc # cleanup[2] removing _bootlocale # destroy _bootlocale # cleanup[2] removing _locale 
# cleanup[2] removing runpy # destroy runpy # cleanup[2] removing importlib # cleanup[2] removing importlib._bootstrap # cleanup[2] removing importlib._bootstrap_external # cleanup[2] removing types # cleanup[2] removing warnings # cleanup[2] removing importlib.machinery # cleanup[2] removing importlib.util # cleanup[2] removing importlib.abc # cleanup[2] removing contextlib # destroy contextlib # cleanup[2] removing collections # cleanup[2] removing operator # destroy operator # cleanup[2] removing _operator # cleanup[2] removing keyword # destroy keyword # cleanup[2] removing heapq # cleanup[2] removing _heapq # cleanup[2] removing itertools # cleanup[2] removing reprlib # destroy reprlib # cleanup[2] removing _collections # cleanup[2] removing functools # cleanup[2] removing _functools # cleanup[2] removing pkgutil # cleanup[2] removing weakref # destroy weakref # cleanup[2] removing _weakrefset # destroy _weakrefset # cleanup[2] removing inspect # cleanup[2] removing dis # cleanup[2] removing opcode # destroy opcode # cleanup[2] removing _opcode # cleanup[2] removing collections.abc # cleanup[2] removing enum # cleanup[2] removing linecache # cleanup[2] removing tokenize # cleanup[2] removing re # cleanup[2] removing sre_compile # cleanup[2] removing _sre # cleanup[2] removing sre_parse # cleanup[2] removing sre_constants # destroy sre_constants # cleanup[2] removing copyreg # cleanup[2] removing token # cleanup[2] removing platform # cleanup[2] removing subprocess # cleanup[2] removing time # cleanup[2] removing signal # cleanup[2] removing errno # cleanup[2] removing _posixsubprocess # cleanup[2] removing select # cleanup[2] removing selectors # cleanup[2] removing math # cleanup[2] removing threading # cleanup[2] removing traceback # destroy traceback # cleanup[2] removing urllib # cleanup[2] removing urllib.parse # cleanup[2] removing getopt # destroy getopt # cleanup[2] removing gettext # destroy gettext # cleanup[2] removing locale # cleanup[2] removing coverage.version # destroy coverage.version # cleanup[2] removing atexit # cleanup[2] removing coverage.env # destroy coverage.env # cleanup[2] removing hashlib # destroy hashlib # cleanup[2] removing _hashlib # cleanup[2] removing _blake2 # cleanup[2] removing _sha3 # cleanup[2] removing fnmatch # destroy fnmatch # cleanup[2] removing ntpath # destroy ntpath # destroy _sha3 # destroy _blake2 # destroy inspect # destroy pkgutil # destroy platform # destroy urllib # destroy urllib.parse # destroy importlib.util # destroy importlib.abc # destroy importlib.machinery # destroy zipimport # destroy dis # destroy importlib # destroy token # destroy types # destroy _opcode # destroy subprocess # destroy io # destroy signal # destroy warnings # destroy errno # destroy selectors # destroy threading # destroy _signal # destroy _posixsubprocess # destroy math # destroy select # destroy locale # destroy encodings # destroy atexit # destroy _hashlib # cleanup[3] wiping _frozen_importlib # destroy _frozen_importlib_external # cleanup[3] wiping _imp # cleanup[3] wiping _thread # cleanup[3] wiping _warnings # cleanup[3] wiping _weakref # cleanup[3] wiping _io # cleanup[3] wiping marshal # cleanup[3] wiping posix # cleanup[3] wiping codecs # cleanup[3] wiping _codecs # cleanup[3] wiping encodings.aliases # cleanup[3] wiping encodings.utf_8 # cleanup[3] wiping encodings.latin_1 # cleanup[3] wiping abc # cleanup[3] wiping _abc # cleanup[3] wiping os # destroy abc # destroy posixpath # cleanup[3] wiping stat # cleanup[3] wiping _stat # destroy _stat # 
cleanup[3] wiping genericpath # cleanup[3] wiping _collections_abc # cleanup[3] wiping _locale # cleanup[3] wiping importlib._bootstrap # cleanup[3] wiping collections # destroy _collections_abc # destroy heapq # destroy collections.abc # cleanup[3] wiping _operator # destroy _operator # cleanup[3] wiping _heapq # cleanup[3] wiping itertools # cleanup[3] wiping _collections # destroy _collections # cleanup[3] wiping functools # destroy _abc # cleanup[3] wiping _functools # destroy _functools # cleanup[3] wiping enum # cleanup[3] wiping linecache # destroy tokenize # cleanup[3] wiping re # destroy _locale # destroy enum # destroy sre_compile # destroy functools # destroy copyreg # cleanup[3] wiping _sre # cleanup[3] wiping sre_parse # cleanup[3] wiping time # cleanup[3] wiping sys # cleanup[3] wiping builtins # destroy stat # destroy genericpath # destroy _heapq # destroy re # destroy _sre # destroy sre_parse $ ---------- components: Library (Lib) keywords: 3.7regression messages: 314686 nosy: nedbat priority: normal severity: normal status: open title: Python 3.7.0b3 fails in pydoc where b2 did not. type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 12:06:55 2018 From: report at bugs.python.org (alexey@altlinux.org) Date: Fri, 30 Mar 2018 16:06:55 +0000 Subject: [New-bugs-announce] [issue33186] Memory corruption with urllib.parse Message-ID: <1522426015.93.0.467229070634.issue33186@psf.upfronthosting.co.za> New submission from alexey at altlinux.org : There is a strange behavior while processing data in a "for" loop with urllib.parse.unquote() - looks like memory corruption - a list contains elements that have never been appended. I'll explain the testcase. I spotted the problem by checking for any remains of url encoding left in output_list. There were these strings with url encoding left - "bad_boys" dict in testcase. Now, when iterating through input_list (read from "data.txt"), I'm checking for those problematic entries and printing what is being appended to the output_list as well as all problematic (unwanted, "Bad Boys") and converted problematic entries ("Normal conversions") existing in the output_list. At some point, unwanted entries appear in output_list. The resulting output_list contains converted and unconverted problematic entries, though input_list's length equals output_list's length. data.txt needs to be saved along with testcase.py, and then you can run testcase.py. 
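(A hypothetical reconstruction of the kind of loop the report describes; the real testcase.py is attached to the issue and may differ.)
```
from urllib.parse import unquote

# Read the URL-encoded entries and unquote each one, as described above.
with open('data.txt') as f:
    input_list = [line.strip() for line in f]

output_list = []
for item in input_list:
    output_list.append(unquote(item))

assert len(output_list) == len(input_list)
```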
The output of running testcase.py: Bad Boys are: seil%2fturbo_firmware 140335684191552 intelligent_platforms_proficy_hmi%2fscada_cimplicity 140335684515920 seil%2fneu_2fe_plus_firmware 140335684536080 seil%2fb1_firmware 140335684134640 eil%2fx2_firmware 140335684191984 seil%2fx1_firmware 140335684190832 seil%2fx2_firmware 140335684190904 seil%2fx86_firmware 140335684192488 Input list length is: 17094 Bad Boy detected Element type: Convertation: seil%2fb1_firmware 140335679096848 >> seil/b1_firmware 140335681345768 Just appended: seil/b1_firmware 140335681345768 Normal conversions in output list: seil/b1_firmware 140335681345768 Bad Boy detected Element type: Convertation: seil%2fx1_firmware 140335679096920 >> seil/x1_firmware 140335681345840 Just appended: seil/x1_firmware 140335681345840 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 Bad Boy detected Element type: Convertation: seil%2fx2_firmware 140335679096992 >> seil/x2_firmware 140335681345912 Just appended: seil/x2_firmware 140335681345912 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 Bad Boy detected Element type: Convertation: seil%2fx86_firmware 140335679134936 >> seil/x86_firmware 140335681346704 Just appended: seil/x86_firmware 140335681346704 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 seil/x86_firmware 140335681346704 Bad Boys in output list: eil%2fx2_firmware 140335681346272 Bad Boy detected Element type: Convertation: seil%2fturbo_firmware 140335679200976 >> seil/turbo_firmware 140335679269456 Just appended: seil/turbo_firmware 140335679269456 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 seil/x86_firmware 140335681346704 seil/turbo_firmware 140335679269456 Bad Boys in output list: eil%2fx2_firmware 140335681346272 seil%2fb1_firmware 140335679267800 seil%2fx1_firmware 140335679267872 seil%2fx2_firmware 140335679267944 seil%2fx86_firmware 140335679269384 Bad Boy detected Element type: Convertation: seil%2fneu_2fe_plus_firmware 140335678867056 >> seil/neu_2fe_plus_firmware 140335680328928 Just appended: seil/neu_2fe_plus_firmware 140335680328928 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 seil/x86_firmware 140335681346704 seil/turbo_firmware 140335679269456 seil/neu_2fe_plus_firmware 140335680328928 Bad Boys in output list: eil%2fx2_firmware 140335681346272 seil%2fb1_firmware 140335679267800 seil%2fx1_firmware 140335679267872 seil%2fx2_firmware 140335679267944 seil%2fx86_firmware 140335679269384 seil%2fturbo_firmware 140335679849576 Bad Boy detected Element type: Convertation: intelligent_platforms_proficy_hmi%2fscada_cimplicity 140335678546800 >> intelligent_platforms_proficy_hmi/scada_cimplicity 140335681194376 Just appended: intelligent_platforms_proficy_hmi/scada_cimplicity 140335681194376 Normal conversions in output list: seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 seil/x86_firmware 140335681346704 seil/turbo_firmware 140335679269456 seil/neu_2fe_plus_firmware 140335680328928 intelligent_platforms_proficy_hmi/scada_cimplicity 140335681194376 Bad Boys in output list: eil%2fx2_firmware 140335681346272 seil%2fb1_firmware 140335679267800 seil%2fx1_firmware 
140335679267872 seil%2fx2_firmware 140335679267944 seil%2fx86_firmware 140335679269384 seil%2fturbo_firmware 140335679849576 seil%2fneu_2fe_plus_firmware 140335678934512 FINAL RESULTS Output list length is: 17094 Normal conversions in output list (Bad Boys -related): seil/b1_firmware 140335681345768 seil/x1_firmware 140335681345840 seil/x2_firmware 140335681345912 seil/x86_firmware 140335681346704 seil/turbo_firmware 140335679269456 seil/neu_2fe_plus_firmware 140335680328928 intelligent_platforms_proficy_hmi/scada_cimplicity 140335681194376 Bad Boys in output list: eil%2fx2_firmware 140335681346272 seil%2fb1_firmware 140335679267800 seil%2fx1_firmware 140335679267872 seil%2fx2_firmware 140335679267944 seil%2fx86_firmware 140335679269384 seil%2fturbo_firmware 140335679849576 seil%2fneu_2fe_plus_firmware 140335678934512 intelligent_platforms_proficy_hmi%2fscada_cimplicity 140335681195728 ---------- components: Library (Lib) files: testcase.py messages: 314688 nosy: alexey at altlinux.org priority: normal severity: normal status: open title: Memory corruption with urllib.parse type: security versions: Python 3.5 Added file: https://bugs.python.org/file47506/testcase.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 12:34:54 2018 From: report at bugs.python.org (Stefan Behnel) Date: Fri, 30 Mar 2018 16:34:54 +0000 Subject: [New-bugs-announce] [issue33187] Document ElementInclude (XInclude) support in ElementTree Message-ID: <1522427694.7.0.467229070634.issue33187@psf.upfronthosting.co.za> New submission from Stefan Behnel : The ElementInclude module in ElementTree seems undocumented. I couldn't find any documentation in the stdlib docs. Pretty much the only source that I could find is here: http://effbot.org/zone/element-xinclude.htm I noticed it while looking for a place to document the changes of #20928. ---------- assignee: docs at python components: Documentation messages: 314691 nosy: docs at python, scoder priority: normal severity: normal status: open title: Document ElementInclude (XInclude) support in ElementTree type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 14:03:43 2018 From: report at bugs.python.org (Rick Teachey) Date: Fri, 30 Mar 2018 18:03:43 +0000 Subject: [New-bugs-announce] [issue33188] dataclass MRO entry resolution for type variable metaclasses: TypeError Message-ID: <1522433023.64.0.467229070634.issue33188@psf.upfronthosting.co.za> New submission from Rick Teachey : I'm getting the following error when attempting to create a new `dataclass` with any base class that is supplied a type variable:

    TypeError: type() doesn't support MRO entry resolution; use types.new_class()

In principle, it seems like this shouldn't cause any problems, since this works as expected:

    @dataclass
    class MyClass(Generic[MyTypeVar]):
        pass

The problem seems to be the call to `type` in `make_dataclass` for instantiating the class object, since `type` doesn't support type variables. If I replace the line in `dataclasses.make_dataclass` that produces the error with the following code block, it seems to work:

    try:
        cls = type(cls_name, bases, namespace)
    except TypeError:
        cls = types.new_class(cls_name, bases, namespace)

I haven't tested, but it might be possible to just remove the call to `type` altogether. I've attached a file demonstrating the issue.
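The underlying difference can be seen without the attached dataclass_metaclass_issue.py; the snippet below is only an illustration (the class and variable names are made up), showing that type() rejects bases that need MRO entry resolution while types.new_class() accepts them. Note that the third positional argument of types.new_class() is class keywords, not a namespace dict; a namespace is supplied through exec_body.

    import types
    from typing import Generic, TypeVar

    T = TypeVar('T')

    # type() cannot handle a base like Generic[T], which is not a class but
    # provides __mro_entries__:
    try:
        C = type('C', (Generic[T],), {})
    except TypeError as exc:
        print(exc)  # type() doesn't support MRO entry resolution; use types.new_class()

    # types.new_class() resolves the MRO entry and succeeds:
    C = types.new_class('C', (Generic[T],), exec_body=lambda ns: ns.update({'x': 0}))
    print(C.__mro__)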
---------- components: Library (Lib) files: dataclass_metaclass_issue.py messages: 314703 nosy: Ricyteach, eric.smith priority: normal severity: normal status: open title: dataclass MRO entry resolution for type variable metaclasses: TypeError type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47508/dataclass_metaclass_issue.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 16:25:16 2018 From: report at bugs.python.org (Riccardo Polignieri) Date: Fri, 30 Mar 2018 20:25:16 +0000 Subject: [New-bugs-announce] [issue33189] pygettext doesn't work with f-strings Message-ID: <1522441516.38.0.467229070634.issue33189@psf.upfronthosting.co.za> New submission from Riccardo Polignieri : Tested (on Windows) with Python 3.6, but I guess it's the same in py3.7:

    # test.py
    def hello(x):
        print(_(f'hello {x}'))

> py pygettext.py test.py
Traceback (most recent call last):
  File "C:\Program Files\Python36\Tools\i18n\pygettext.py", line 623, in <module>
    if __name__ == '__main__':
  File "C:\Program Files\Python36\Tools\i18n\pygettext.py", line 597, in main
    for _token in tokens:
  File "C:\Program Files\Python36\Tools\i18n\pygettext.py", line 328, in __call__
    ## 'tstring:', tstring
  File "C:\Program Files\Python36\Tools\i18n\pygettext.py", line 382, in __openseen
    elif ttype == tokenize.STRING:
  File "C:\Program Files\Python36\Tools\i18n\pygettext.py", line 236, in safe_eval
    # unwrap quotes, safely
  File "<string>", line 1, in <module>
NameError: name 'x' is not defined
---------- components: Demos and Tools messages: 314712 nosy: Riccardo Polignieri priority: normal severity: normal status: open title: pygettext doesn't work with f-strings type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 16:43:02 2018 From: report at bugs.python.org (Rick Teachey) Date: Fri, 30 Mar 2018 20:43:02 +0000 Subject: [New-bugs-announce] [issue33190] problem with ABCMeta.__prepare__ when called after types.new_class Message-ID: <1522442582.8.0.467229070634.issue33190@psf.upfronthosting.co.za> New submission from Rick Teachey : I am pretty sure this is a bug. If not, I apologize. Say I want to dynamically create a new `C` class, with base class `MyABC` (and dynamically assigned abstract method `m`). This works fine if I use `type`, but if I use `new_class`, the keyword argument to the `m` method implementation gets lost somewhere in the call to `ABCMeta.__prepare__`. I've attached a file to demo. Thanks.
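Without asserting where the reported behaviour comes from (the attached abcmeta_prepare.py is the actual demo, and the names below are made up), the two construction paths being compared look roughly like this; with type() the namespace is the third positional argument, whereas with types.new_class() it is filled in via exec_body:

    import abc
    import types

    class MyABC(abc.ABC):
        @abc.abstractmethod
        def m(self):
            ...

    def m(self, *, flag=True):  # implementation taking a keyword-only argument
        return flag

    # Via type(): the namespace dict is passed directly.
    C1 = type('C', (MyABC,), {'m': m})

    # Via types.new_class(): the namespace is populated through exec_body; the
    # third positional argument would be class keywords (e.g. metaclass=...),
    # not a namespace dict.
    C2 = types.new_class('C', (MyABC,), exec_body=lambda ns: ns.update({'m': m}))

    print(C1().m(flag=False), C2().m(flag=False))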
---------- components: Library (Lib) files: abcmeta_prepare.py messages: 314714 nosy: Ricyteach, levkivskyi priority: normal severity: normal status: open title: problem with ABCMeta.__prepare__ when called after types.new_class type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47509/abcmeta_prepare.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 30 17:09:31 2018 From: report at bugs.python.org (Zachary Ware) Date: Fri, 30 Mar 2018 21:09:31 +0000 Subject: [New-bugs-announce] [issue33191] Refleak in posix_spawn Message-ID: <1522444171.85.0.467229070634.issue33191@psf.upfronthosting.co.za> New submission from Zachary Ware : There is a refleak in posix_spawn; see for example http://buildbot.python.org/all/#/builders/114/builds/53 The attached PR fixes it, but I am not confident that I did it correctly. ---------- components: Interpreter Core messages: 314719 nosy: gregory.p.smith, ned.deily, pablogsal, vstinner, zach.ware priority: critical severity: normal status: open title: Refleak in posix_spawn versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 31 18:44:55 2018 From: report at bugs.python.org (Nathaniel Smith) Date: Sat, 31 Mar 2018 22:44:55 +0000 Subject: [New-bugs-announce] [issue33192] asyncio should use signal.set_wakeup_fd on Windows Message-ID: <1522536295.45.0.467229070634.issue33192@psf.upfronthosting.co.za> New submission from Nathaniel Smith : I thought there was already a bug for this, but it came up in conversation again and I can't find one, so, here you go... It looks like originally there was this bug for making control-C wake up the asyncio event loop in Windows: https://github.com/python/asyncio/issues/191 This required some changes to signal.set_wakeup_fd to work on Windows, which was done in bpo-22018. But I guess the last step got lost in the shuffle: right now signal.set_wakeup_fd works fine on Windows, but asyncio doesn't actually use it. This means that on Windows you can't wake up this program using control-C: >>> import asyncio >>> asyncio.run(asyncio.sleep(100000000)) Both of the Windows event loops should register a wakeup socket with signal.set_wakeup_fd, and arrange for the loop to wake up when data arrives on that socket, and read from it until it's empty again. (And once the loop is awake, Python's normal control-C handling will kick in.) That will make control-C on Windows work similarly to how it does on Unix. ---------- messages: 314749 nosy: asvetlov, giampaolo.rodola, njs, vstinner, yselivanov priority: normal severity: normal status: open title: asyncio should use signal.set_wakeup_fd on Windows versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 31 22:25:43 2018 From: report at bugs.python.org (Stuart Cuthbertson) Date: Sun, 01 Apr 2018 02:25:43 +0000 Subject: [New-bugs-announce] [issue33193] Cannot create a venv on Windows when directory path contains dollar character Message-ID: <1522549543.99.0.467229070634.issue33193@psf.upfronthosting.co.za> New submission from Stuart Cuthbertson : I should clarify first that I haven't reproduced the following bug specifically with venv. 
I was asked to raise this here after raising an identical issue about virtualenv (https://github.com/pypa/virtualenv/issues/1154); a GitHub user told me this would also apply to venv. The bug with virtualenv is that it errors if passed a directory that contains a $ (dollar symbol). $ is a valid character for Windows directory names, filenames, and usernames. So running something simple like `python3 -m venv` (presumably) can fail in some valid Windows directories. The full error traceback for virtualenv is available at the above GitHub URL. A commenter in the virtualenv project (see https://github.com/pypa/virtualenv/issues/457#issuecomment-377159868) suggested that this happens because the directory path is passed as-is (with $) to distutils, and distutils sees the text following the $ as a placeholder and tries to replace it with a variable that isn't found. ---------- components: Windows messages: 314755 nosy: Stuart Cuthbertson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cannot create a venv on Windows when directory path contains dollar character type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 31 22:47:34 2018 From: report at bugs.python.org (Marco Rougeth) Date: Sun, 01 Apr 2018 02:47:34 +0000 Subject: [New-bugs-announce] [issue33194] Path-file objects does not have method to delete itself if its a file Message-ID: <1522550854.13.0.467229070634.issue33194@psf.upfronthosting.co.za> New submission from Marco Rougeth : Path has the method `.rmdir()` for removing a directory, but it doesn't have anything equivalent if it corresponds to a file. `os.remove` could be used here, but I think Path should have a more appropriate/explicit method name like `.rmfile()`. If it makes sense, I'd be glad to work on it. ---------- messages: 314756 nosy: rougeth priority: normal severity: normal status: open title: Path-file objects does not have method to delete itself if its a file type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________
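For context on the request in issue33194: given a Path, a file can already be deleted either through os.remove() or through pathlib's existing Path.unlink() method (the file and symlink counterpart to .rmdir()). A minimal sketch:

    import os
    from pathlib import Path

    p = Path('example.txt')

    p.write_text('temporary')
    os.remove(p)       # os-level removal; accepts a Path object since Python 3.6

    p.write_text('temporary')
    p.unlink()         # pathlib's existing method for removing a file or symlink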