From report at bugs.python.org Wed Mar 1 00:54:36 2017
From: report at bugs.python.org (Alex CHEN)
Date: Wed, 01 Mar 2017 05:54:36 +0000
Subject: [New-bugs-announce] [issue29682] Checks for null return value
Message-ID: <1488347676.64.0.259401612021.issue29682@psf.upfronthosting.co.za>

New submission from Alex CHEN:

Hi, our tool reported a call site that does not check the return value of a function that might return null. It might need a look to determine whether there is a real problem or whether I am missing something.

In function PyUnknownEncodingHandler of file pyexpat.c:

if (namespace_separator != NULL) {
    self->itself = XML_ParserCreateNS(encoding, *namespace_separator);
}
else {
    self->itself = XML_ParserCreate(encoding);  // could XML_ParserCreate return null at this point?
}
.....
XML_SetHashSalt(self->itself,  // if it does return null, a null pointer will be passed into XML_SetHashSalt and dereferenced
                (unsigned long)_Py_HashSecret.prefix);
#endif

----------
messages: 288739
nosy: alexc
priority: normal
severity: normal
status: open
title: Checks for null return value
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 1 03:52:12 2017
From: report at bugs.python.org (Brian Coleman)
Date: Wed, 01 Mar 2017 08:52:12 +0000
Subject: [New-bugs-announce] [issue29683] _PyCode_SetExtra behaviour wrong on allocation failure and after realloc
Message-ID: <1488358332.92.0.449242887398.issue29683@psf.upfronthosting.co.za>

New submission from Brian Coleman:

On PyMem_Malloc failure, _PyCode_SetExtra should set co_extra->ce_size = 0.
On PyMem_Realloc failure, _PyCode_SetExtra should set co_extra->ce_size = 0.
On PyMem_Realloc success, _PyCode_SetExtra should set all unused slots in co_extra->ce_extras to NULL.

I will add a GitHub PR for this shortly.
---------- components: Interpreter Core messages: 288745 nosy: brianfcoleman priority: normal severity: normal status: open title: _PyCode_SetExtra behaviour wrong on allocation failure and after realloc type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 1 04:20:00 2017 From: report at bugs.python.org (INADA Naoki) Date: Wed, 01 Mar 2017 09:20:00 +0000 Subject: [New-bugs-announce] [issue29684] Minor regression in PyEval_CallObjectWithKeywords() Message-ID: <1488360000.25.0.226175161117.issue29684@psf.upfronthosting.co.za> New submission from INADA Naoki: This issue is spin off issue29548. PyEval_CallObjectWithKeywords(PyObject *func, PyObject *args, PyObject *kwargs) should raise TypeError when kwargs is not dict. But after this commit [1], assert(PyDict_Check(kwargs)) can be called when args==NULL. [1] https://github.com/python/cpython/commit/155ea65e5c88d250a752ee5321860ef11ede4085 ---------- keywords: 3.6regression messages: 288748 nosy: haypo, inada.naoki priority: normal severity: normal status: open title: Minor regression in PyEval_CallObjectWithKeywords() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 1 10:42:34 2017 From: report at bugs.python.org (Marco) Date: Wed, 01 Mar 2017 15:42:34 +0000 Subject: [New-bugs-announce] [issue29685] test_gdb failed Message-ID: <1488382954.54.0.564501434146.issue29685@psf.upfronthosting.co.za> New submission from Marco: make test output studio at linux:~/Python-3.6.0> ./python -m test -v test_gdb == CPython 3.6.0 (default, Mar 1 2017, 15:51:48) [GCC 4.8.5] == Linux-4.4.49-16-default-x86_64-with-SuSE-42.2-x86_64 little-endian == hash algorithm: siphash24 64bit == cwd: /home/studio/Python-3.6.0/build/test_python_32667 == encodings: locale=UTF-8, FS=utf-8 Testing with flags: sys.flags(debug=0, inspect=0, 
interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0)
Run tests sequentially
0:00:00 [1/1] test_gdb
test test_gdb crashed -- Traceback (most recent call last):
  File "/home/studio/Python-3.6.0/Lib/test/libregrtest/runtest.py", line 152, in runtest_inner
    the_module = importlib.import_module(abstest)
  File "/home/studio/Python-3.6.0/Lib/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 978, in _gcd_import
  File "", line 961, in _find_and_load
  File "", line 950, in _find_and_load_unlocked
  File "", line 655, in _load_unlocked
  File "", line 678, in exec_module
  File "", line 205, in _call_with_frames_removed
  File "/home/studio/Python-3.6.0/Lib/test/test_gdb.py", line 46, in
    gdb_version, gdb_major_version, gdb_minor_version = get_gdb_version()
  File "/home/studio/Python-3.6.0/Lib/test/test_gdb.py", line 43, in get_gdb_version
    raise Exception("unable to parse GDB version: %r" % version)
Exception: unable to parse GDB version: ''
test_gdb failed
1 test failed: test_gdb
Total duration: 31 ms
Tests result: FAILURE

----------
components: Tests
messages: 288760
nosy: MarcoC
priority: normal
severity: normal
status: open
title: test_gdb failed
type: compile error
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 1 11:21:40 2017
From: report at bugs.python.org (=?utf-8?q?Vin=C3=ADcius_Dantas?=)
Date: Wed, 01 Mar 2017 16:21:40 +0000
Subject: [New-bugs-announce] [issue29686] Unittest - Return empty string instead of None object on shortDescription()
Message-ID: <1488385300.33.0.815532691176.issue29686@psf.upfronthosting.co.za>

New submission from Vinícius Dantas:

I have been browsing around the unittest standard library, and I realized that TestCase's shortDescription() method at
lib/pythonX.X/unittest/case.py returns None when there is no docstring on the test that is running. As shortDescription() should obviously return a string, I would recommend returning an empty string instead of None when no docstring is found.

This came to mind when I was using the testscenario package, which only displays the scenario name when shortDescription() returns something other than None. When we are starting a test suite from scratch, docstrings are left for a later stage, when we already have running (and probably failing, if we are TDDing) unit tests. Last but not least, I am sure it is good practice to avoid returning None, which forces None-checks; instead, return empty strings, lists, or objects of the return type expected from that function.

----------
components: Tests
messages: 288763
nosy: viniciusd
priority: normal
severity: normal
status: open
title: Unittest - Return empty string instead of None object on shortDescription()
type: behavior
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 1 12:29:32 2017
From: report at bugs.python.org (Rares Vernica)
Date: Wed, 01 Mar 2017 17:29:32 +0000
Subject: [New-bugs-announce] [issue29687] smtplib does not support proxy
Message-ID: <1488389372.28.0.822188705276.issue29687@psf.upfronthosting.co.za>

New submission from Rares Vernica:

smtplib does not support connections through a proxy. The accepted workaround is something like:

```
import smtplib
import socks

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, proxy_host, proxy_port)
socks.wrapmodule(smtplib)
smtp = smtplib.SMTP()
```

The side-effects of `socks.wrapmodule` impact other libraries which don't need to use the proxy, like `requests`.
See here for a discussion: https://github.com/kennethreitz/requests/issues/3890

----------
components: Library (Lib)
messages: 288765
nosy: rares
priority: normal
severity: normal
status: open
title: smtplib does not support proxy

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 1 12:37:28 2017
From: report at bugs.python.org (Jim Fasarakis-Hilliard)
Date: Wed, 01 Mar 2017 17:37:28 +0000
Subject: [New-bugs-announce] [issue29688] Document Path.absolute
Message-ID: <1488389848.79.0.700814097868.issue29688@psf.upfronthosting.co.za>

New submission from Jim Fasarakis-Hilliard:

The absolute method of Path objects lacked documentation; the proposed PR adds the relevant method to the docs.

----------
assignee: docs at python
components: Documentation
messages: 288767
nosy: Jim Fasarakis-Hilliard, docs at python
priority: normal
severity: normal
status: open
title: Document Path.absolute
versions: Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 1 13:52:55 2017
From: report at bugs.python.org (Codey Oxley)
Date: Wed, 01 Mar 2017 18:52:55 +0000
Subject: [New-bugs-announce] [issue29689] Asyncio-namespace helpers for async_generators
Message-ID: <1488394375.71.0.89348090398.issue29689@psf.upfronthosting.co.za>

New submission from Codey Oxley:

Expanding an async_generator to any container type currently makes you write an async-for loop/comprehension. There are some third-party libs (aitertools) that have helpers, but it would be nice for this to be upstream for list, tuple, dict, set, etc.
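In the meantime such a helper is easy to write in user code; a minimal sketch (the name `alist` is invented for illustration, not the proposed API):

```python
import asyncio

async def alist(aiterable):
    # Expand any async iterable (e.g. an async generator) into a list
    # using an async comprehension.
    return [item async for item in aiterable]

async def gen():
    for i in range(3):
        await asyncio.sleep(0)  # stand-in for real async work
        yield i

print(asyncio.run(alist(gen())))  # -> [0, 1, 2]
```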
Usage might be:

expanded: List[int] = await asyncio.list(gen())

----------
components: Library (Lib), asyncio
messages: 288773
nosy: Codey Oxley, gvanrossum, yselivanov
priority: normal
severity: normal
status: open
title: Asyncio-namespace helpers for async_generators
type: enhancement
versions: Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 02:31:29 2017
From: report at bugs.python.org (Mathieu Dupuy)
Date: Thu, 02 Mar 2017 07:31:29 +0000
Subject: [New-bugs-announce] [issue29690] no %z directive for strptime in python2, doc says nothing about it
Message-ID: <1488439889.25.0.877178323072.issue29690@psf.upfronthosting.co.za>

New submission from Mathieu Dupuy:

$ cat dt.py
from datetime import *
dt = datetime.strptime('+1720', '%z')
print(dt)
$ python2 dt.py
Traceback (most recent call last):
  File "dt.py", line 2, in
    dt = datetime.strptime('+1720', '%z')
  File "/usr/lib/python2.7/_strptime.py", line 324, in _strptime
    (bad_directive, format))
ValueError: 'z' is a bad directive in format '%z'
$ python3 dt.py
1900-01-01 00:00:00+17:20

We should either mention this in the doc, or cherry-pick the code from python3.

----------
components: Library (Lib)
messages: 288782
nosy: deronnax
priority: normal
severity: normal
status: open
title: no %z directive for strptime in python2, doc says nothing about it
type: behavior
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 03:42:36 2017
From: report at bugs.python.org (Jelle Zijlstra)
Date: Thu, 02 Mar 2017 08:42:36 +0000
Subject: [New-bugs-announce] [issue29691] Some tests fail in coverage Travis check
Message-ID: <1488444156.84.0.711063027432.issue29691@psf.upfronthosting.co.za>

New submission from Jelle Zijlstra:

A few tests fail in the coverage Travis target (see e.g.
https://travis-ci.org/python/cpython/jobs/206480468): test_traceback and test_xml_etree. I extracted the actual failures by running in verbose mode locally: ====================================================================== FAIL: test_recursive_traceback_cpython_internal (test.test_traceback.TracebackFormatTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jelle/cpython-dev/cpython-1/Lib/test/test_traceback.py", line 431, in test_recursive_traceback_cpython_internal self._check_recursive_traceback_display(render_exc) File "/home/jelle/cpython-dev/cpython-1/Lib/test/test_traceback.py", line 347, in _check_recursive_traceback_display self.assertEqual(actual[-1], expected[-1]) AssertionError: 'RecursionError: maximum recursion depth exceeded in comparison' != 'RecursionError: maximum recursion depth exceeded' - RecursionError: maximum recursion depth exceeded in comparison ? -------------- + RecursionError: maximum recursion depth exceeded ====================================================================== FAIL: test_recursive_traceback_python (test.test_traceback.TracebackFormatTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jelle/cpython-dev/cpython-1/Lib/test/test_traceback.py", line 423, in test_recursive_traceback_python self._check_recursive_traceback_display(traceback.print_exc) File "/home/jelle/cpython-dev/cpython-1/Lib/test/test_traceback.py", line 347, in _check_recursive_traceback_display self.assertEqual(actual[-1], expected[-1]) AssertionError: 'RecursionError: maximum recursion depth exceeded in comparison' != 'RecursionError: maximum recursion depth exceeded' - RecursionError: maximum recursion depth exceeded in comparison ? 
-------------- + RecursionError: maximum recursion depth exceeded ====================================================================== FAIL: test_bug_xmltoolkit63 (test.test_xml_etree.BugsTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jelle/cpython-dev/cpython-1/Lib/test/test_xml_etree.py", line 1538, in test_bug_xmltoolkit63 self.assertEqual(sys.getrefcount(None), count) AssertionError: 505087 != 505084 Fixing this will improve the coverage check on PRs. ---------- components: Tests messages: 288786 nosy: Jelle Zijlstra, eli.bendersky, ezio.melotti, michael.foord, scoder priority: normal severity: normal status: open title: Some tests fail in coverage Travis check versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 05:11:35 2017 From: report at bugs.python.org (Nick Coghlan) Date: Thu, 02 Mar 2017 10:11:35 +0000 Subject: [New-bugs-announce] [issue29692] contextlib.contextmanager may incorrectly unchain RuntimeError Message-ID: <1488449495.24.0.0941910154043.issue29692@psf.upfronthosting.co.za> New submission from Nick Coghlan: As part of PEP 479, an extra check was added to contextlib._GeneratorContextManager to avoid getting confused when a StopIteration exception was raised in the body of the with statement, and hence thrown into the generator body implementing the context manager. This extra check should only be used when the passed in exception is `StopIteration`, but that guard is currently missing, so it may unchain arbitrary RuntimeError exceptions if they set their `__cause__` to the originally passed in value. Compare the current contextmanager behaviour: ``` >>> from contextlib import contextmanager >>> @contextmanager ... def chain_thrown_exc(): ... try: ... yield ... except Exception as exc: ... raise RuntimeError("Chained!") from exc ... >>> with chain_thrown_exc(): ... 
1/0 ... Traceback (most recent call last): File "", line 2, in ZeroDivisionError: division by zero ``` To the expected inline behaviour: ``` >>> try: ... 1/0 ... except Exception as exc: ... raise RuntimeError("Chained!") from exc ... Traceback (most recent call last): File "", line 2, in ZeroDivisionError: division by zero The above exception was the direct cause of the following exception: Traceback (most recent call last): File "", line 4, in RuntimeError: Chained! ``` ---------- keywords: 3.5regression messages: 288793 nosy: ncoghlan priority: normal severity: normal stage: test needed status: open title: contextlib.contextmanager may incorrectly unchain RuntimeError type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 08:31:49 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 02 Mar 2017 13:31:49 +0000 Subject: [New-bugs-announce] [issue29693] DeprecationWarning/SyntaxError in test_import Message-ID: <1488461509.07.0.849251480688.issue29693@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: $ ./python -Wa -m test.regrtest test_import Run tests sequentially 0:00:00 [1/1] test_import /home/serhiy/py/cpython/Lib/test/test_import/__init__.py:88: DeprecationWarning: invalid escape sequence \( self.assertRegex(str(cm.exception), "cannot import name 'i_dont_exist' from 'os' \(.*os.py\)") /home/serhiy/py/cpython/Lib/test/test_import/__init__.py:96: DeprecationWarning: invalid escape sequence \( self.assertRegex(str(cm.exception), "cannot import name 'i_dont_exist' from 'select' \(.*\.(so|pyd)\)") 1 test OK. 
Total duration: 2 sec Tests result: SUCCESS $ ./python -We -m test.regrtest test_import Run tests sequentially 0:00:00 [1/1] test_import test test_import crashed -- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/libregrtest/runtest.py", line 152, in runtest_inner the_module = importlib.import_module(abstest) File "/home/serhiy/py/cpython/Lib/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 978, in _gcd_import File "", line 961, in _find_and_load File "", line 950, in _find_and_load_unlocked File "", line 655, in _load_unlocked File "", line 675, in exec_module File "", line 782, in get_code File "", line 742, in source_to_code File "", line 205, in _call_with_frames_removed File "/home/serhiy/py/cpython/Lib/test/test_import/__init__.py", line 88 self.assertRegex(str(cm.exception), "cannot import name 'i_dont_exist' from 'os' \(.*os.py\)") ^ SyntaxError: invalid escape sequence \( test_import failed 1 test failed: test_import Total duration: 244 ms Tests result: FAILURE ---------- components: Tests messages: 288799 nosy: benjamin.peterson, brett.cannon, eric.snow, ncoghlan, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: DeprecationWarning/SyntaxError in test_import type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 08:53:33 2017 From: report at bugs.python.org (whitespacer) Date: Thu, 02 Mar 2017 13:53:33 +0000 Subject: [New-bugs-announce] [issue29694] race condition in pathlib mkdir with flags parents=True Message-ID: <1488462813.87.0.464471853988.issue29694@psf.upfronthosting.co.za> New submission from whitespacer: When pathlib mkdir is called with parents=True and some parent doesn't exists it recursively calls self.parent.mkdir(parents=True) after catching OSError. 
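A user-level workaround for this kind of race is to treat FileExistsError as success whenever the directory actually exists; a sketch (the helper name is made up, and this is not the eventual stdlib fix):

```python
import pathlib
import tempfile

def mkdir_racefree(path: pathlib.Path) -> None:
    """mkdir -p that tolerates concurrent creation of parents or the path itself."""
    while True:
        try:
            path.mkdir(parents=True)
            return
        except FileExistsError:
            if path.is_dir():
                return      # someone else created it: still success
            if path.exists():
                raise       # exists but is not a directory: a real error
            # otherwise a *parent* appeared mid-walk; just try again

with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp, "a", "b", "c")
    mkdir_racefree(target)   # creates a/b/c
    mkdir_racefree(target)   # second call is a no-op instead of an error
    assert target.is_dir()
```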
However, after catching OSError and before the call to self.parent.mkdir(parents=True), somebody else can create the parent dir, which leads to a FileExistsError exception.

----------
messages: 288801
nosy: whitespacer
priority: normal
severity: normal
status: open
title: race condition in pathlib mkdir with flags parents=True
type: behavior
versions: Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 09:54:04 2017
From: report at bugs.python.org (Serhiy Storchaka)
Date: Thu, 02 Mar 2017 14:54:04 +0000
Subject: [New-bugs-announce] [issue29695] Weird keyword parameter names in builtins
Message-ID: <1488466444.68.0.907226246044.issue29695@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka:

Proposed patches deprecate the "x" keyword parameter in int(), bool() and float(), and the "sequence" keyword parameter in list() and tuple(). The name "x" is meaningless, and the name "sequence" is misleading (any iterable is accepted, not just sequences). The documentation uses the name "iterable" for list() and tuple().

It was never documented that any of these parameters are accepted by keyword. There was only a test for int(), but it was added just to increase coverage, not to test intended behavior. Does this mean that the support for keyword arguments can be removed without deprecation?

The general idea got preliminary approval from Guido (https://mail.python.org/pipermail/python-ideas/2017-March/044959.html).
----------
components: Interpreter Core
files: deprecate-keyword-x.patch
keywords: patch
messages: 288802
nosy: gvanrossum, haypo, serhiy.storchaka
priority: normal
severity: normal
stage: patch review
status: open
title: Weird keyword parameter names in builtins
type: enhancement
versions: Python 3.7
Added file: http://bugs.python.org/file46684/deprecate-keyword-x.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 10:34:03 2017
From: report at bugs.python.org (Facundo Batista)
Date: Thu, 02 Mar 2017 15:34:03 +0000
Subject: [New-bugs-announce] [issue29696] Use namedtuple in Formatter.parse iterator response
Message-ID: <1488468843.51.0.704010418399.issue29696@psf.upfronthosting.co.za>

New submission from Facundo Batista:

Right now:

>>> Formatter().parse("mira como bebebn los peces en el {rio} {de} {la} plata")
>>> next(_)
('mira como bebebn los peces en el ', 'rio', '', None)

This returned tuple should be a namedtuple, so it's self-explanatory for people exploring this (and the usage of the fields becomes clearer).

----------
components: Library (Lib)
messages: 288807
nosy: facundobatista
priority: normal
severity: normal
status: open
title: Use namedtuple in Formatter.parse iterator response
type: behavior
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 11:18:22 2017
From: report at bugs.python.org (Christian Heimes)
Date: Thu, 02 Mar 2017 16:18:22 +0000
Subject: [New-bugs-announce] [issue29697] Wrong ECDH configuration with OpenSSL 1.1
Message-ID: <1488471502.97.0.24501918149.issue29697@psf.upfronthosting.co.za>

New submission from Christian Heimes:

I think I made a mistake during the port to OpenSSL 1.1.x. defined(OPENSSL_VERSION_1_1) is on the wrong ifndef block.
------------------------------------------------------------------
Old code

#ifndef OPENSSL_NO_ECDH
    /* Allow automatic ECDH curve selection (on OpenSSL 1.0.2+), or use
       prime256v1 by default. This is Apache mod_ssl's initialization
       policy, so we should be safe. */
#if defined(SSL_CTX_set_ecdh_auto)
    SSL_CTX_set_ecdh_auto(self->ctx, 1);
#else
    {
        EC_KEY *key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        SSL_CTX_set_tmp_ecdh(self->ctx, key);
        EC_KEY_free(key);
    }
#endif
#endif

------------------------------------------------------------------
New code with OpenSSL 1.1.x compatibility

#ifndef OPENSSL_NO_ECDH
    /* Allow automatic ECDH curve selection (on OpenSSL 1.0.2+), or use
       prime256v1 by default. This is Apache mod_ssl's initialization
       policy, so we should be safe. OpenSSL 1.1 has it enabled by
       default. */
#if defined(SSL_CTX_set_ecdh_auto) && !defined(OPENSSL_VERSION_1_1)
    SSL_CTX_set_ecdh_auto(self->ctx, 1);
#else
    {
        EC_KEY *key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        SSL_CTX_set_tmp_ecdh(self->ctx, key);
        EC_KEY_free(key);
    }
#endif
#endif

----------
assignee: christian.heimes
components: SSL
keywords: 3.6regression
messages: 288812
nosy: christian.heimes
priority: normal
severity: normal
status: open
title: Wrong ECDH configuration with OpenSSL 1.1
type: behavior
versions: Python 2.7, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 11:46:33 2017
From: report at bugs.python.org (Nikita Kniazev)
Date: Thu, 02 Mar 2017 16:46:33 +0000
Subject: [New-bugs-announce] [issue29698] _collectionsmodule.c: Replace `n++; while (--n)` with `for (; n; --n)`
Message-ID: <1488473193.67.0.278300394177.issue29698@psf.upfronthosting.co.za>

New submission from Nikita Kniazev:

I have failed to find the previous discussion where `while (n--)` was changed to `n++; while (--n)`.
(commits 306d6b1ea6bf2582b9284be2fd27275abbade3e1, 165eee214bc388eb588db33385ca49ddbb305565)

It is clear to me that `n++; while (--n)` and `for (; n; --n)` are interchangeable statements, and here is the proof of it: http://coliru.stacked-crooked.com/a/a6fc4108b223e7b2. According to the asm output https://godbolt.org/g/heHM33 the `for` loop is even shorter (takes fewer instructions). While I believe that the location of the `cmp`/`jmp` instructions makes no difference and performance is the same, I have made a benchmark anyway.

```
Run on (4 X 3310 MHz CPU s)
02/27/17 22:10:55
Benchmark                    Time       CPU  Iterations
------------------------------------------------------------
BM_while_loop/2             13 ns     13 ns    56089384
BM_while_loop/4             17 ns     16 ns    40792279
BM_while_loop/8             24 ns     24 ns    29914338
BM_while_loop/16            40 ns     40 ns    20396140
BM_while_loop/32            84 ns     80 ns     8974301
BM_while_loop/64           146 ns    146 ns     4487151
BM_while_loop/128          270 ns    269 ns     2492862
BM_while_loop/128          267 ns    266 ns     2639500
BM_while_loop/512         1022 ns   1022 ns      641022
BM_while_loop/4096        8203 ns   8344 ns       89743
BM_while_loop/32768      66971 ns  66750 ns       11218
BM_while_loop/262144    545833 ns 546003 ns        1000
BM_while_loop/2097152  4376095 ns 4387528 ns        160
BM_while_loop/8388608 17654654 ns 17883041 ns        41
BM_for_loop/2               13 ns     13 ns    56089384
BM_for_loop/4               15 ns     15 ns    49857230
BM_for_loop/8               21 ns     21 ns    32051077
BM_for_loop/16              37 ns     37 ns    19509351
BM_for_loop/32              81 ns     80 ns     8974301
BM_for_loop/64             144 ns    128 ns     4985723
BM_for_loop/128            265 ns    263 ns     3205108
BM_for_loop/128            265 ns    266 ns     2639500
BM_for_loop/512           1036 ns   1022 ns      641022
BM_for_loop/4096          8314 ns   8344 ns       89743
BM_for_loop/32768        67345 ns  66750 ns       11218
BM_for_loop/262144      541310 ns 546004 ns        1000
BM_for_loop/2097152    4354986 ns 4387528 ns        160
BM_for_loop/8388608   17592428 ns 17122061 ns        41
```

```cpp
#include <benchmark/benchmark.h>

#define MAKE_ROTL_BENCHMARK(name)                     \
    static void BM_##name(benchmark::State& state) {  \
        while (state.KeepRunning()) {                 \
            int n = name(state.range(0));             \
        }                                             \
    }                                                 \
    /**/

int while_loop(int n) {
    int sum = 0;
    n++;
    while (--n) {
        sum += 1;
    }
    return sum;
}

int for_loop(int n) {
    int sum = 0;
    for (; n; --n) {
        sum += 1;
    }
    return sum;
}

MAKE_ROTL_BENCHMARK(while_loop)
MAKE_ROTL_BENCHMARK(for_loop)

BENCHMARK(BM_while_loop)->RangeMultiplier(2)->Range(2, 8<<4);
BENCHMARK(BM_while_loop)->Range(8<<4, 8<<20);
BENCHMARK(BM_for_loop)->RangeMultiplier(2)->Range(2, 8<<4);
BENCHMARK(BM_for_loop)->Range(8<<4, 8<<20);

BENCHMARK_MAIN()
```

----------
components: Interpreter Core
messages: 288815
nosy: Kojoley
priority: normal
pull_requests: 329
severity: normal
status: open
title: _collectionsmodule.c: Replace `n++; while (--n)` with `for (; n; --n)`
type: enhancement
versions: Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 14:12:24 2017
From: report at bugs.python.org (Daniel Kahn Gillmor)
Date: Thu, 02 Mar 2017 19:12:24 +0000
Subject: [New-bugs-announce] [issue29699] shutil.rmtree should not fail with FileNotFoundError (race condition)
Message-ID: <1488481944.03.0.0307704051627.issue29699@psf.upfronthosting.co.za>

New submission from Daniel Kahn Gillmor:

There is a race condition in shutil.rmtree, where if a file gets removed between when rmtree plans to remove it and when it gets around to removing it, a FileNotFoundError exception gets raised. The expected semantics of rmtree imply that if the filesystem tree is removed, then the command has succeeded, so it doesn't make sense for rmtree to raise a FileNotFoundError if someone else happened to have deleted the file before rmtree gets to it.

I'm attaching a C program (for GNU/Linux) which uses inotify to remove the other file in a directory when either file is removed. This triggers the rmtree failure.
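One way to tolerate this race in application code today is shutil.rmtree's onerror hook; a sketch (helper names are made up) that swallows only "the entry already vanished" errors while re-raising everything else:

```python
import os
import shutil
import tempfile

def _ignore_vanished(func, path, excinfo):
    # onerror(function, path, excinfo): excinfo is a sys.exc_info() triple.
    # Re-raise everything except "the entry disappeared before we removed it".
    if not issubclass(excinfo[0], FileNotFoundError):
        raise excinfo[1]

def rmtree_tolerant(path):
    shutil.rmtree(path, onerror=_ignore_vanished)

tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "f"), "w").close()
rmtree_tolerant(tmp)   # removes the tree
rmtree_tolerant(tmp)   # tree already gone: no FileNotFoundError raised
assert not os.path.exists(tmp)
```

Note this also silences a missing top-level argument, which the report suggests may still deserve an error; distinguishing the two would need a slightly smarter hook.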
This behavior has caused a number of workarounds in external projects, like:

https://bitbucket.org/vinay.sajip/python-gnupg/commits/492fd45ca073a90aac434320fb0c8fe8d01f782b
https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gpgme.git;a=commitdiff;h=de8494b16bc50c60a8438f2cae1f8c88e8949f7a

It would be better for shutil.rmtree to ignore this particular exception (FileNotFoundError). Another option for users is to set ignore_errors=True, but this ends up ignoring *all* errors, which doesn't seem like the right decision. Finally, of course, a user could specify some sort of onerror function that explicitly ignores FileNotFoundError, but this seems pretty complicated for the common pattern.

It's possible that shutil.rmtree() wants to raise FileNotFoundError if the actual argument passed by the user does not itself exist, but it really doesn't make sense to raise that error for any of the elements further down in the tree.

----------
components: Library (Lib)
files: breaker.c
messages: 288822
nosy: dkg
priority: normal
severity: normal
status: open
title: shutil.rmtree should not fail with FileNotFoundError (race condition)
type: crash
versions: Python 3.5
Added file: http://bugs.python.org/file46687/breaker.c

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 14:48:15 2017
From: report at bugs.python.org (Gregory P. Smith)
Date: Thu, 02 Mar 2017 19:48:15 +0000
Subject: [New-bugs-announce] [issue29700] readline memory corruption when sys.stdin fd >= FD_SETSIZE for select()
Message-ID: <1488484095.89.0.988668664064.issue29700@psf.upfronthosting.co.za>

New submission from Gregory P. Smith:

The readline module causes memory corruption (sometimes a crash) when the sys.stdin file descriptor is out of bounds for its FD_SET() call within readline.c's readline_until_enter_or_signal() function.
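The FD_SETSIZE ceiling is specific to select(); poll() has no such limit on fd values. A quick stdlib illustration of the poll() interface (POSIX-only; a sketch, not the proposed C fix):

```python
import os
import select

r, w = os.pipe()
p = select.poll()
p.register(r, select.POLLIN)   # poll() accepts any fd value, unlike FD_SET()

assert p.poll(0) == []         # nothing readable yet
os.write(w, b"x")
events = p.poll(0)
assert events and events[0][0] == r and events[0][1] & select.POLLIN

os.close(r)
os.close(w)
```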
https://github.com/python/cpython/blob/master/Modules/readline.c#L1228

A tiny program reproducing this problem is attached. FD_SET should not be used if the file descriptor is too large for use in select() (ie: >= FD_SETSIZE). OTOH, we should probably just ditch select() entirely and use poll() here so that this issue does not exist. On Python 2.7-3.6 we probably need to preserve both select and poll options for platform compatibility reasons since those shipped that way. For Python 3.7 I suggest we stop supporting platforms that do not have poll() unless anyone knows of any that actually exist.

----------
components: Extension Modules
files: crash_readline_fdset.py
messages: 288825
nosy: gregory.p.smith
priority: normal
severity: normal
stage: needs patch
status: open
title: readline memory corruption when sys.stdin fd >= FD_SETSIZE for select()
type: crash
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7
Added file: http://bugs.python.org/file46689/crash_readline_fdset.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 2 15:12:12 2017
From: report at bugs.python.org (=?utf-8?q?Mathias_Fr=C3=B6jdman?=)
Date: Thu, 02 Mar 2017 20:12:12 +0000
Subject: [New-bugs-announce] [issue29701] Add close method to queue.Queue
Message-ID: <1488485532.45.0.976459019786.issue29701@psf.upfronthosting.co.za>

New submission from Mathias Fröjdman:

queue.Queue should have a close() method. The result of calling the method would be to raise a new exception - queue.Closed - for any subsequent calls to Queue.put, and after the queue is empty, also for Queue.get.

Why: To allow producers (callers of Queue.put) to signal there will be no more items, and consumers may stop asking for more by calling Queue.get. Currently the opposite (ie. waiting until all produced items/"tasks" have been consumed and handled) is possible with Queue.task_done() and Queue.join().
This functionality is useful in both application and library code. For example in AMQP, a server may push new messages over a TCP connection to a consumer, which translates into the library calling Queue.put for received messages, and the application using the library calling Queue.get to receive any new messages. The consumer may however be cancelled at any time, or the TCP connection closed and the Queue.get caller signaled that there will be no more messages. With Queue.close() that is easy - without it one needs to wrap the Queue.get calls). In an application context where a KeyboardInterrupt should lead to closing the application cleanly, being able to call Queue.close(), catching the Closed exception in any consumers (some of which may be in other threads) and exiting cleanly makes the job that much easier. A common pattern in working around this issue is to call Queue.put(None), and treat a None from Queue.get() as a signal to clean up. This works well when one knows there is at most one consumer. In the case of many consumers, one needs to wrap the Queue and for example add another None to the queue in consumers to not leave any remaining get() call waiting indefinitely. This pattern occurs even in the standard library: https://github.com/python/cpython/blob/7b90e3674be86479c51faf872d0b9367c9fc2f96/Lib/concurrent/futures/thread.py#L141 If accepting this proposal, a corresponding change should be made to asyncio.Queue. I have a tentative implementation (no tests or doc outside the module) in https://github.com/mwfrojdman/cpython/blob/closeable_queue/Lib/queue.py The Queue.close() method has an optional argument clear (default False), which clears the queue of items if set to true. This is useful for example when exiting an application, and one doesn't want consumers to get any more items before being raised a Closed exception. The changes are backwards compatible for users of the class, ie. if Queue.close() is not called, the behavior stays intact. 
Because of the clear argument, there is a new private method Queue._clear(), which does the actual clearing of the queue representation. Subclasses for which self.queue.clear() doesn't cut it, need to override it before .close(True) works. Background: https://github.com/python/asyncio/pull/415#issuecomment-263658986 ---------- components: Library (Lib) messages: 288828 nosy: mwf priority: normal severity: normal status: open title: Add close method to queue.Queue type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 16:49:04 2017 From: report at bugs.python.org (Armen Levonian) Date: Thu, 02 Mar 2017 21:49:04 +0000 Subject: [New-bugs-announce] [issue29702] Error 0x80070003: Failed to launch elevated child process Message-ID: <1488491344.75.0.39896878007.issue29702@psf.upfronthosting.co.za> New submission from Armen Levonian: For some reason, after uninstalling Python 3.5.2 on my Windows 10 (64 bit - latest version), I am no longer able to install any new version of Python after version 3.4.3 I keep getting the failure to elevate privileges. I have of course tried to run the installer as Admin, or even launch a command prompt as admin or powershell as admin then run the installer with no luck. Disabling Windows Defender did not help. I have also lowered UAC all the way down, to no reporting, still no luck. The log file attached is for the 64 bit installer for 3.5.3 but I get identical results with the 32 bit installers for version 3.5.2 and 3.5.1. I can install latest Anaconda and also Python 3.4.3 without issues. I also had no problem installing 3.5.3 on another of my machines with same Windows 10 (64 bit) and that machine accepts the install without issues. Once again, I had version 3.5.2 running fine until I uninstalled it to upgrade. 
---------- components: Installation, Windows files: Python 3.5.3 (64-bit)_20170302121038.log messages: 288834 nosy: alevonian, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Error 0x80070003: Failed to launch elevated child process type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file46690/Python 3.5.3 (64-bit)_20170302121038.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 16:55:45 2017 From: report at bugs.python.org (Yury Selivanov) Date: Thu, 02 Mar 2017 21:55:45 +0000 Subject: [New-bugs-announce] [issue29703] Fix asyncio to support instantiation of new event loops in subprocesses Message-ID: <1488491745.35.0.512874829003.issue29703@psf.upfronthosting.co.za> New submission from Yury Selivanov: Proxy for https://github.com/python/asyncio/pull/497 Ned, this needs to be in 3.6.1, working code from 3.4 doesn't work in 3.6.0: http://stackoverflow.com/questions/42546099/python-asyncio-migrate-from-3-4-to-3-5/42566336#42566336 ---------- assignee: yselivanov keywords: 3.5regression, 3.6regression messages: 288835 nosy: larry, ned.deily, yselivanov priority: release blocker severity: normal stage: resolved status: open title: Fix asyncio to support instantiation of new event loops in subprocesses type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 17:24:23 2017 From: report at bugs.python.org (Seth Michael Larson) Date: Thu, 02 Mar 2017 22:24:23 +0000 Subject: [New-bugs-announce] [issue29704] Can't read data from Transport after asyncio.SubprocessStreamProtocol closes Message-ID: <1488493463.16.0.114641402405.issue29704@psf.upfronthosting.co.za> New submission from Seth Michael Larson: Copied from https://github.com/python/asyncio/issues/484 """ >From 
https://bugs.python.org/issue23242#msg284930

The following script is used to reproduce the bug:

    import asyncio

    async def execute():
        process = await asyncio.create_subprocess_exec(
            "timeout", "0.1", "cat", "/dev/urandom",
            stdout=asyncio.subprocess.PIPE)
        while True:
            data = await process.stdout.read(65536)
            print('read %d bytes' % len(data))
            if data:
                await asyncio.sleep(0.3)
            else:
                break

    asyncio.get_event_loop().run_until_complete(execute())

It will produce the following output and terminate with an exception:

    read 65536 bytes
    read 65536 bytes
    Traceback (most recent call last):
      File "read_subprocess.py", line 18, in <module>
        asyncio.get_event_loop().run_until_complete(execute())
      File "/usr/lib/python3.6/asyncio/base_events.py", line 466, in run_until_complete
        return future.result()
      File "read_subprocess.py", line 9, in execute
        data = await process.stdout.read(65536)
      File "/usr/lib/python3.6/asyncio/streams.py", line 634, in read
        self._maybe_resume_transport()
      File "/usr/lib/python3.6/asyncio/streams.py", line 402, in _maybe_resume_transport
        self._transport.resume_reading()
      File "/usr/lib/python3.6/asyncio/unix_events.py", line 401, in resume_reading
        self._loop._add_reader(self._fileno, self._read_ready)
    AttributeError: 'NoneType' object has no attribute '_add_reader'

When the process exits, https://github.com/python/asyncio/blob/master/asyncio/unix_events.py#L444 is called, which sets self._loop = None. The next time read() is called on the pipe, the above exception is thrown. I have tried to fix this issue myself but would sometimes have read terminate too early and miss the last chunks of data.
""" - BotoX ---------- messages: 288839 nosy: SethMichaelLarson, yselivanov priority: normal pull_requests: 337 severity: normal status: open title: Can't read data from Transport after asyncio.SubprocessStreamProtocol closes _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 18:36:55 2017 From: report at bugs.python.org (James Crowther) Date: Thu, 02 Mar 2017 23:36:55 +0000 Subject: [New-bugs-announce] [issue29705] socket.gethostbyname, getaddrinfo etc broken on MacOS 10.12 Message-ID: <1488497815.81.0.0112571433534.issue29705@psf.upfronthosting.co.za> New submission from James Crowther: Currently I can't use socket to resolve host names to IP addresses. This is something critical to mine as well as other applications that run over networks. When I attempt to do the following: import socket socket.getaddrinfo(hostname, None) or socket.gethostbyname(hostname) I get socket.gaierror: [Errno 8] nodename nor servename provided, or not known. This works perfectly on both linux kubuntu 16.0. and windows 7,10. Seems that the introduction of Yosemite might be the point at which this broke by doing a simple google search for "macos socket.gethostbyname gaierror". ---------- components: Library (Lib) messages: 288840 nosy: James Crowther priority: normal severity: normal status: open title: socket.gethostbyname, getaddrinfo etc broken on MacOS 10.12 type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 2 22:43:57 2017 From: report at bugs.python.org (David E. Franco G.) Date: Fri, 03 Mar 2017 03:43:57 +0000 Subject: [New-bugs-announce] [issue29706] IDLE needs syntax highlighting for async and await Message-ID: <1488512637.52.0.156111675331.issue29706@psf.upfronthosting.co.za> New submission from David E. 
Franco G.: Well, this is pretty self-explanatory: when playing with these new features of async and await (https://docs.python.org/3.5/whatsnew/3.5.html#new-features) I found to my surprise that there is no syntax highlighting for them in IDLE for py3.5, and also for py3.6. So I humbly ask for its addition. Thanks ---------- assignee: terry.reedy components: IDLE files: no syntax highlighting.png messages: 288849 nosy: David E. Franco G., terry.reedy priority: normal severity: normal status: open title: IDLE needs syntax highlighting for async and await type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file46692/no syntax highlighting.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 02:09:34 2017 From: report at bugs.python.org (ollieparanoid) Date: Fri, 03 Mar 2017 07:09:34 +0000 Subject: [New-bugs-announce] [issue29707] os.path.ismount() always returns false for mount --bind on same filesystem Message-ID: <1488524974.65.0.103028894103.issue29707@psf.upfronthosting.co.za> New submission from ollieparanoid: After mounting a folder to another folder on the same filesystem with mount --bind, os.path.ismount() still returns False on the destination folder (although there is a mountpoint). A shell script to reproduce this is below. (Maybe this can be fixed by using /proc/mounts (if available, which may not be the case e.g. for chroots) to verify whether the destination folder is really a mountpoint on POSIX/Linux. Although I am not sure how consistent that is across POSIX.)
---

    #!/bin/sh
    # Output:
    # contents of /tmp/destination (should have test.py -> obviously mounted):
    # test.py
    # os.path.ismount(): False

    # create source and destination folders
    source=/tmp/source
    destination=/tmp/destination
    mkdir -p $source $destination

    # add the python script in the source folder
    echo "import os.path" >> $source/test.py
    echo "print('os.path.ismount(): ' + str(os.path.ismount('$destination')))" >> $source/test.py

    # do the mount --bind
    sudo mount --bind $source $destination
    echo "contents of $destination (should have test.py -> obviously mounted):"
    ls $destination

    # show the python bug
    python3 $source/test.py

    # clean up
    sudo umount $destination
    rm $source/test.py
    rm -d $source $destination

---------- components: Library (Lib) messages: 288863 nosy: Oliver Smith priority: normal severity: normal status: open title: os.path.ismount() always returns false for mount --bind on same filesystem type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 06:36:06 2017 From: report at bugs.python.org (Bernhard M. Wiedemann) Date: Fri, 03 Mar 2017 11:36:06 +0000 Subject: [New-bugs-announce] [issue29708] support reproducible Python builds Message-ID: <1488540966.18.0.904677570473.issue29708@psf.upfronthosting.co.za> New submission from Bernhard M. Wiedemann: See https://reproducible-builds.org/ and https://reproducible-builds.org/docs/buy-in/ for why this is a good thing to have in general. Fedora, openSUSE and possibly other Linux distributions package .pyc files as part of their binary rpm packages and they are not trivial to drop [1]. A .pyc header includes the timestamp of the source .py file which creates non-reproducible builds when the .py file is touched during build time (e.g. for a version.py). As of 2017-02-10 in openSUSE Factory this affected 476 packages (such as python-amqp and python3-Twisted).
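The timestamp in question sits in the .pyc header right after the magic number (and, from 3.7 on, a flags word); the offsets used below are an assumption based on that header layout, not something taken from the report:

```python
import struct
import sys

def pyc_source_mtime(pyc_path):
    # Assumed .pyc header layout: 4-byte magic, (3.7+: 4-byte flags,)
    # 4-byte source mtime, 4-byte source size, all little-endian.
    with open(pyc_path, 'rb') as f:
        header = f.read(16)
    offset = 8 if sys.version_info >= (3, 7) else 4
    return struct.unpack_from('<I', header, offset)[0]
```

Touching the source .py changes this field, which is exactly what makes the rebuilt .pyc differ byte-for-byte between otherwise identical builds.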
[1] http://lists.opensuse.org/opensuse-packaging/2017-02/msg00086.html ---------- components: Build, Distutils messages: 288880 nosy: bmwiedemann, dstufft, merwok priority: normal pull_requests: 353 severity: normal status: open title: support reproducible Python builds versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 07:11:40 2017 From: report at bugs.python.org (Stefan Pochmann) Date: Fri, 03 Mar 2017 12:11:40 +0000 Subject: [New-bugs-announce] [issue29709] Short-circuiting not only on False and True Message-ID: <1488543100.15.0.348225972332.issue29709@psf.upfronthosting.co.za> New submission from Stefan Pochmann: The notes at https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not say that `or` "only evaluates the second argument if the first one is False" and that `and` "only evaluates the second argument if the first one is True". Should say "false" and "true" instead of "False" and "True". 
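A quick illustration of why the lowercase wording is the accurate one: short-circuiting is decided by truthiness, and the operand itself is returned:

```python
# `or` returns the second operand whenever the first is falsy -- not only
# when it is the False singleton:
assert ([] or 'default') == 'default'
assert (0 or 42) == 42

# `and` evaluates the second operand whenever the first is truthy:
assert ('x' and 42) == 42
assert ({} and 42) == {}
```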
---------- assignee: docs at python components: Documentation messages: 288881 nosy: Stefan Pochmann, docs at python priority: normal severity: normal status: open title: Short-circuiting not only on False and True type: enhancement versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 09:38:39 2017 From: report at bugs.python.org (Nick Coghlan) Date: Fri, 03 Mar 2017 14:38:39 +0000 Subject: [New-bugs-announce] [issue29710] Incorrect representation caveat on bitwise operation docs Message-ID: <1488551919.31.0.20533072889.issue29710@psf.upfronthosting.co.za> New submission from Nick Coghlan: The docs on bitwise operations at https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types include the caveated sentence: Negative numbers are treated as their 2's complement value (this assumes that there are enough bits so that no overflow occurs during the operation). This sentence isn't correct now that integers are always arbitrary length.
The bitwise inversion will never overflow, and is instead calculated as "-(n+1)" rather than literally flipping bits in the representation: https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations ---------- assignee: docs at python components: Documentation messages: 288890 nosy: docs at python, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Incorrect representation caveat on bitwise operation docs type: enhancement versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 09:54:46 2017 From: report at bugs.python.org (Julien Duponchelle) Date: Fri, 03 Mar 2017 14:54:46 +0000 Subject: [New-bugs-announce] [issue29711] When you use stop_serving in proactor loop it's kill all listening servers Message-ID: <1488552886.51.0.862043273037.issue29711@psf.upfronthosting.co.za> New submission from Julien Duponchelle: If you stop a server when you use the proactor loop all other servers will be killed. 
---------- components: asyncio messages: 288894 nosy: gvanrossum, noplay, yselivanov priority: normal severity: normal status: open title: When you use stop_serving in proactor loop it's kill all listening servers versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 10:45:23 2017 From: report at bugs.python.org (Lai, Yian) Date: Fri, 03 Mar 2017 15:45:23 +0000 Subject: [New-bugs-announce] [issue29712] --enable-optimizations does not work with --enbale-shared Message-ID: <1488555923.14.0.550158164587.issue29712@psf.upfronthosting.co.za> New submission from Lai, Yian: I want to altinstall 3.6 with LTO+PGO optimizations, so: ./configure --enable-shared --enable-optimizations --prefix=$HOME/.local LDFLAGS=-Wl,-rpath=$HOME/.local/lib make (./configure arguments refer to issue #27685) But I get in trouble when running compiled python to generate posix vars: ... gcc -pthread -shared -Wl,-rpath=/home/halfcoder/.local/lib -fprofile-generate -Wl,--no-as-needed -o libpython3.so -Wl,-hlibpython3.so libpython3.6m.so gcc -pthread -Wl,-rpath=/home/halfcoder/.local/lib -fprofile-generate -Xlinker -export-dynamic -o python Programs/python.o -L. -lpython3.6m -lpthread -ldl -lutil -lm LD_LIBRARY_PATH=/home/halfcoder/.local/src/Python-3.6.0-optmiz ./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi ./python: symbol lookup error: ./python: undefined symbol: __gcov_indirect_call_profiler generate-posix-vars failed make[2]: *** [pybuilddir.txt] Error 1 make[2]: Leaving directory `/home/halfcoder/.local/src/Python-3.6.0-optmiz' make[1]: *** [build_all_generate_profile] Error 2 make[1]: Leaving directory `/home/halfcoder/.local/src/Python-3.6.0-optmiz' make: *** [profile-opt] Error 2 gcc information below: Using built-in specs. 
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux Thread model: posix gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ---------- components: Build messages: 288898 nosy: halfcoder priority: normal severity: normal status: open title: --enable-optimizations does not work with --enbale-shared versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 11:04:44 2017 From: report at bugs.python.org (dagnam) Date: Fri, 03 Mar 2017 16:04:44 +0000 Subject: [New-bugs-announce] [issue29713] String changes whether or not '\x81' is present Message-ID: <1488557084.0.0.892901950688.issue29713@psf.upfronthosting.co.za> New submission from dagnam: print '\xa3\xb5\xdd\xf7\xa9\xa7\xab\xd8\xef\xc7\xac\xf4\xfb\xb7' #gives ?????????????? print '\xa3\xb5\xdd\xf7\xa9\xa7\xab\xd8\xef\xc7\xac\xf4\xfb\xb7\x81' #gives ?????????????? print '\x81\xa3' print '\xa3' ?? ? 
---------- messages: 288900 nosy: dagnam priority: normal severity: normal status: open title: String changes whether or not '\x81' is present type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 14:05:28 2017 From: report at bugs.python.org (Nick Huber) Date: Fri, 03 Mar 2017 19:05:28 +0000 Subject: [New-bugs-announce] [issue29714] can't interpolate byte string with \x00 before replacement identifier Message-ID: <1488567928.06.0.163368634546.issue29714@psf.upfronthosting.co.za> New submission from Nick Huber:

    Python 3.6.0 (default, Mar 3 2017, 00:15:36) [GCC 4.9.2] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> b'a\x00%i' % 1
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: not all arguments converted during bytes formatting
    >>> b'a%i' % 1
    b'a1'
    >>> b'a%i\x00' % 1
    b'a1\x00'

On python3.5, this works in all the scenarios:

    Python 3.5.1 (default, Jan 14 2017, 03:58:20) [GCC 4.9.2] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> b'a\x00%i' % 1
    b'a\x001'
    >>> b'a%i' % 1
    b'a1'
    >>> b'a%i\x00' % 1
    b'a1\x00'

---------- components: Interpreter Core messages: 288912 nosy: Nick Huber priority: normal severity: normal status: open title: can't interpolate byte string with \x00 before replacement identifier versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 16:07:15 2017 From: report at bugs.python.org (Max Rothman) Date: Fri, 03 Mar 2017 21:07:15 +0000 Subject: [New-bugs-announce] [issue29715] Argparse improperly handles "-_" Message-ID: <1488575235.56.0.838313390775.issue29715@psf.upfronthosting.co.za> New submission from Max Rothman: In the case detailed below, argparse.ArgumentParser improperly parses the argument string "-_":

```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('first')
print(parser.parse_args(['-_']))
```

Expected behavior: prints Namespace(first='-_')
Actual behavior: prints usage message

The issue seems to be specific to the string "-_". Either character alone or both in the opposite order does not trigger the issue. ---------- components: Library (Lib) messages: 288929 nosy: Max Rothman priority: normal severity: normal status: open title: Argparse improperly handles "-_" type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 16:26:26 2017 From: report at bugs.python.org (James O) Date: Fri, 03 Mar 2017 21:26:26 +0000 Subject: [New-bugs-announce] [issue29716] Python 3 Module doc still sounds like __init__.py is required Message-ID: <1488576386.83.0.924613404524.issue29716@psf.upfronthosting.co.za> New submission from James O: PEP 420 says "Allowing implicit namespace packages means that the requirement to provide an __init__.py file can be dropped completely..."
(as described here: http://stackoverflow.com/questions/37139786/is-init-py-not-required-for-packages-in-python-3) The documentation for modules doesn't seem to reflect this change. My "enhancement suggestion" is that a sentence (and perhaps an example) be added that explicitly states or shows an import from another directory. (P.S. This is my first Python bug submission, so if it's silly, let me know. Thanks!) ---------- assignee: docs at python components: Documentation messages: 288932 nosy: James O, docs at python priority: normal severity: normal status: open title: Python 3 Module doc still sounds like __init__.py is required type: enhancement versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 3 17:33:57 2017 From: report at bugs.python.org (Vahid Mardani) Date: Fri, 03 Mar 2017 22:33:57 +0000 Subject: [New-bugs-announce] [issue29717] `loop.add_reader` and `< New submission from Vahid Mardani: Assume this simple script for reading from stdin: ```python #! 
/usr/bin/env python3
import sys
import os
import asyncio


async def main(loop):
    done = False
    fileno = sys.stdin.fileno()

    def _reader():
        nonlocal done
        chunk = os.read(fileno, 1024)
        if not chunk:
            loop.remove_reader(fileno)
            done = True
            return
        print(chunk.decode(), end='')

    loop.add_reader(fileno, _reader)
    while not done:
        await asyncio.sleep(1)


if __name__ == '__main__':
    main_loop = asyncio.get_event_loop()
    main_loop.run_until_complete(main(main_loop))
```

When I run it by:

```bash
$ ./stdin_issue.py < hello > EOF
```

I get:

```
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/selector_events.py", line 234, in add_reader
    key = self._selector.get_key(fd)
  File "/usr/lib/python3.5/selectors.py", line 191, in get_key
    raise KeyError("{!r} is not registered".format(fileobj)) from None
KeyError: '0 is not registered'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./stdin_issue.py", line 41, in <module>
    main_loop.run_until_complete(main(main_loop))
  File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "./stdin_issue.py", line 34, in main
    loop.add_reader(fileno, _reader)
  File "/usr/lib/python3.5/asyncio/selector_events.py", line 237, in add_reader
    (handle, None))
  File "/usr/lib/python3.5/selectors.py", line 411, in register
    self._epoll.register(key.fd, epoll_events)
PermissionError: [Errno 1] Operation not permitted
```

But this:

```bash
echo "Hello" | ./stdin_issue.py
```

is working well.

I already tried this with `select.select` directly, and it works:

```
def using_select():
    files = [sys.stdin.fileno()]
    while True:
        readables, _, errors = select(files, [], files)
        if errors:
            print('ERROR:', errors)
            return
        if readables:
            for f in readables:
                chunk = os.read(f, 1024)
                if not chunk:
                    return
                print(chunk.decode(), end='')
```

---------- components: asyncio messages: 288941 nosy: gvanrossum, vahid.mardani, yselivanov priority: normal severity: normal status: open title: `loop.add_reader` and `< _______________________________________ From report at bugs.python.org Fri Mar 3 22:49:42 2017 From: report at bugs.python.org (Decorater) Date: Sat, 04 Mar 2017 03:49:42 +0000 Subject: [New-bugs-announce] [issue29718] Fixed compile on cygwin. Message-ID: <1488599382.84.0.138970896501.issue29718@psf.upfronthosting.co.za> New submission from Decorater: Cygwin had an issue with building and installing python after it was configured. The main issue was the TLS key stuff which would make python fail to fully build or work correctly. This issue contains a patch for cygwin specifically to make it compile and work fully. It uses the __CYGWIN__ macro for separating the code from this patch with the TLS code on the other targets. This should help fix issues that were present in the standard library and setup.py in the repo for cygwin as well. ---------- components: Build, Installation, Interpreter Core messages: 288949 nosy: Decorater priority: normal severity: normal status: open title: Fixed compile on cygwin.
versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 4 06:42:25 2017 From: report at bugs.python.org (INADA Naoki) Date: Sat, 04 Mar 2017 11:42:25 +0000 Subject: [New-bugs-announce] [issue29719] "Date" of what's new is confusing Message-ID: <1488627745.64.0.63277330553.issue29719@psf.upfronthosting.co.za> New submission from INADA Naoki: See https://docs.python.org/3/whatsnew/3.6.html At the top:

    :Release: |release|
    :Date: |today|
    :Editors: Elvis Pranskevichus , Yury Selivanov

This |today| is replaced with the day the HTML is built (like "Last updated:" in the footer). On docs.python.org this is close to the date the page was last modified, until a clean rebuild happens. But in other cases it only shows when the HTML was built. It's confusing. How about replacing |today| with the Python 3.6.0 release date, or removing the ":Date: |today|" line? ---------- messages: 288976 nosy: inada.naoki priority: normal severity: normal status: open title: "Date" of what's new is confusing _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 4 11:12:04 2017 From: report at bugs.python.org (Oren Milman) Date: Sat, 04 Mar 2017 16:12:04 +0000 Subject: [New-bugs-announce] [issue29720] potential silent truncation in PyLong_AsVoidPtr Message-ID: <1488643924.71.0.864156118334.issue29720@psf.upfronthosting.co.za> New submission from Oren Milman: I am not sure whether such a platform exists, but on a platform where SIZEOF_VOID_P < SIZEOF_LONG, PyLong_AsVoidPtr (which is in Objects/longobject.c) is:

    long x;

    if (PyLong_Check(vv) && _PyLong_Sign(vv) < 0)
        x = PyLong_AsLong(vv);
    else
        x = PyLong_AsUnsignedLong(vv);
    if (x == -1 && PyErr_Occurred())
        return NULL;
    return (void *)x;

Thus, for example, 'PyLong_AsVoidPtr(PyLong_FromUnsignedLong(ULONG_MAX))' would silently truncate ULONG_MAX, and return without an error.
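The truncation can be modelled in Python; the pointer and long sizes below are hypothetical values chosen to illustrate the report, not measurements from any real platform:

```python
# Hypothetical platform where long is wider than void*:
SIZEOF_VOID_P = 4        # assumed 4-byte pointers
ULONG_MAX = 2**64 - 1    # assumed 8-byte unsigned long

def as_void_ptr(x):
    # The C cast `(void *)x` keeps only the low pointer-sized bits of x.
    return x & ((1 << (8 * SIZEOF_VOID_P)) - 1)

# ULONG_MAX comes back as 2**32 - 1: silently truncated, no error raised.
assert as_void_ptr(ULONG_MAX) == 2**32 - 1
assert as_void_ptr(5) == 5   # small values survive unchanged
```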
An easy fix would be (mainly) to add to PyLong_AsVoidPtr 'Py_BUILD_ASSERT(SIZEOF_LONG <= SIZEOF_VOID_P);', but I am not sure we can make that assumption. Note that a compile time error is already raised:
- by Objects/longobject.h, in case SIZEOF_VOID_P is different from SIZEOF_INT, SIZEOF_LONG and SIZEOF_LONG_LONG
- by Modules/_multiprocessing/multiprocessing.h, in case SIZEOF_VOID_P is different from SIZEOF_LONG and SIZEOF_LONG_LONG
---------- components: Interpreter Core messages: 288984 nosy: Oren Milman priority: normal severity: normal status: open title: potential silent truncation in PyLong_AsVoidPtr type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 4 14:21:19 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 04 Mar 2017 19:21:19 +0000 Subject: [New-bugs-announce] [issue29721] "abort: repository . not found!" during the build of Python 2.7 Message-ID: <1488655279.64.0.490520715338.issue29721@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The message "abort: repository . not found!" is output to stderr three times during the build of Python 2.7. To be more precise, it happens during the build of getbuildinfo.c.

    $ touch Modules/getbuildinfo.c
    $ make -s
    abort: repository . not found!
    abort: repository . not found!
    abort: repository . not found!
    libpython2.7.a(posixmodule.o): In function `posix_tmpnam':
    /home/serhiy/py/cpython2.7/./Modules/posixmodule.c:7614: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
    libpython2.7.a(posixmodule.o): In function `posix_tempnam':
    /home/serhiy/py/cpython2.7/./Modules/posixmodule.c:7561: warning: the use of `tempnam' is dangerous, better use `mkstemp'
    building dbm using gdbm

    Python build finished, but the necessary bits to build these modules were not found:
    _bsddb  bsddb185  sunaudiodev
    To find the necessary bits, look in setup.py in detect_modules() for the module's name.

All works correctly when building Python 3.x. This looks related to issue12346. ---------- components: Build messages: 288991 nosy: benjamin.peterson, r.david.murray, serhiy.storchaka priority: normal severity: normal status: open title: "abort: repository . not found!" during the build of Python 2.7 type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 4 22:26:38 2017 From: report at bugs.python.org (Adam) Date: Sun, 05 Mar 2017 03:26:38 +0000 Subject: [New-bugs-announce] [issue29722] heapq.merge docs don't handle reverse flag well Message-ID: <1488684398.15.0.985624873811.issue29722@psf.upfronthosting.co.za> New submission from Adam: The docs for heapq.merge are a little misleading. Iterables passed into heapq.merge with the reverse flag set to True must be sorted from largest to smallest to achieve the desired sorting effect, but the paragraph describing the function in the general case states that they should be sorted from smallest to largest.
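The behavior described above can be verified directly:

```python
import heapq

# With reverse=True each input must already be sorted largest-to-smallest,
# the opposite of what the general description in the docs says:
a = [9, 5, 1]
b = [8, 4, 2]
assert list(heapq.merge(a, b, reverse=True)) == [9, 8, 5, 4, 2, 1]

# Feeding ascending inputs (as the docs' general wording suggests) does
# NOT produce a descending-sorted result:
wrong = list(heapq.merge([1, 5, 9], [2, 4, 8], reverse=True))
assert wrong != sorted(wrong, reverse=True)
```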
---------- assignee: docs at python components: Documentation messages: 288997 nosy: adamniederer, docs at python priority: normal pull_requests: 388 severity: normal status: open title: heapq.merge docs don't handle reverse flag well type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 08:31:48 2017 From: report at bugs.python.org (Ned Batchelder) Date: Sun, 05 Mar 2017 13:31:48 +0000 Subject: [New-bugs-announce] [issue29723] 3.6.1rc1 adds the current directory to sys.path when running a subdirectory's __main__.py; previous versions did not Message-ID: <1488720708.95.0.279516549191.issue29723@psf.upfronthosting.co.za> New submission from Ned Batchelder: 3.6.1rc1 adds the current directory to sys.path when running a subdirectory's __main__.py Previous versions, including 3.6.0, did not. Is this intentional? $ pwd /Users/ned/foo $ cat main361/__main__.py import pprint, sys pprint.pprint(sys.path) $ for ver in 2.7.13 3.4.6 3.5.3 3.6.0 3.6.1rc1; do > py=/usr/local/pythonz/pythons/CPython-$ver/bin/python > $py -V > $py main361 > done Python 2.7.13 ['main361', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python27.zip', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/plat-darwin', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/plat-mac', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/plat-mac/lib-scriptpackages', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/lib-tk', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/lib-old', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/lib-dynload', '/usr/local/pythonz/pythons/CPython-2.7.13/lib/python2.7/site-packages'] Python 3.4.6 ['main361', '/usr/local/pythonz/pythons/CPython-3.4.6/lib/python34.zip', 
'/usr/local/pythonz/pythons/CPython-3.4.6/lib/python3.4', '/usr/local/pythonz/pythons/CPython-3.4.6/lib/python3.4/plat-darwin', '/usr/local/pythonz/pythons/CPython-3.4.6/lib/python3.4/lib-dynload', '/usr/local/pythonz/pythons/CPython-3.4.6/lib/python3.4/site-packages'] Python 3.5.3 ['main361', '/usr/local/pythonz/pythons/CPython-3.5.3/lib/python35.zip', '/usr/local/pythonz/pythons/CPython-3.5.3/lib/python3.5', '/usr/local/pythonz/pythons/CPython-3.5.3/lib/python3.5/plat-darwin', '/usr/local/pythonz/pythons/CPython-3.5.3/lib/python3.5/lib-dynload', '/usr/local/pythonz/pythons/CPython-3.5.3/lib/python3.5/site-packages'] Python 3.6.0 ['main361', '/usr/local/pythonz/pythons/CPython-3.6.0/lib/python36.zip', '/usr/local/pythonz/pythons/CPython-3.6.0/lib/python3.6', '/usr/local/pythonz/pythons/CPython-3.6.0/lib/python3.6/lib-dynload', '/usr/local/pythonz/pythons/CPython-3.6.0/lib/python3.6/site-packages'] Python 3.6.1rc1 ['main361', '/Users/ned/foo', '/usr/local/pythonz/pythons/CPython-3.6.1rc1/lib/python36.zip', '/usr/local/pythonz/pythons/CPython-3.6.1rc1/lib/python3.6', '/usr/local/pythonz/pythons/CPython-3.6.1rc1/lib/python3.6/lib-dynload', '/usr/local/pythonz/pythons/CPython-3.6.1rc1/lib/python3.6/site-packages'] $ ---------- messages: 289009 nosy: nedbat priority: normal severity: normal status: open title: 3.6.1rc1 adds the current directory to sys.path when running a subdirectory's __main__.py; previous versions did not versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 11:32:04 2017 From: report at bugs.python.org (Chris Warrick) Date: Sun, 05 Mar 2017 16:32:04 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue29724=5D_Itertools_docs_pro?= =?utf-8?q?pose_a_harmful_=E2=80=9Cspeedup=E2=80=9D_without_any_explanatio?= =?utf-8?q?n?= Message-ID: <1488731524.9.0.615939918768.issue29724@psf.upfronthosting.co.za> New submission from Chris Warrick: The itertools 
recipes list [0] ends with the following dubious advice: > Note, many of the above recipes can be optimized by replacing global lookups with local variables defined as default values. For example, the dotproduct recipe can be written as: > > def dotproduct(vec1, vec2, sum=sum, map=map, mul=operator.mul): > return sum(map(mul, vec1, vec2)) This is presented in the document without any explanation. It may confuse beginners into always doing it in their code (as evidenced in #python today), leading to unreadable code and function signatures. There is also no proof of there being a significant speed difference by using this 'trick'. In my opinion, this should not be part of the documentation, or should provide proof that it can provide a real, noticeable speedup and is not premature optimization. (Added in [1] by Raymond Hettinger, who has been added to the nosy list) [0]: https://docs.python.org/3/library/itertools.html#itertools-recipes [1]: https://github.com/python/cpython/commit/fc91aa28fd8dad5280fd4d3a4747b5e08ee37ac0 ---------- assignee: docs at python components: Documentation messages: 289020 nosy: Kwpolska, docs at python, rhettinger priority: normal severity: normal status: open title: Itertools docs propose a harmful 'speedup' without any explanation versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 11:47:29 2017 From: report at bugs.python.org (=?utf-8?q?J=C3=BCrgen_A=2E_Erhard?=) Date: Sun, 05 Mar 2017 16:47:29 +0000 Subject: [New-bugs-announce] [issue29725] sqlite3.Cursor doesn't properly document "arraysize" Message-ID: <1488732449.69.0.822466282685.issue29725@psf.upfronthosting.co.za> New submission from Jürgen A. 
Erhard: It's an attribute mentioned in fetchmany and fetchall, but it's missing from the attribute list alongside those two. It should be listed there, since the section says "A Cursor instance has the following attributes and methods." and arraysize is an attribute. ---------- assignee: docs at python components: Documentation messages: 289023 nosy: docs at python, jae priority: normal severity: normal status: open title: sqlite3.Cursor doesn't properly document "arraysize" _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 12:15:54 2017 From: report at bugs.python.org (dillon.brock) Date: Sun, 05 Mar 2017 17:15:54 +0000 Subject: [New-bugs-announce] [issue29726] test_xmlrpc raises DeprecationWarnings Message-ID: <1488734154.48.0.272131402942.issue29726@psf.upfronthosting.co.za> New submission from dillon.brock: In 3 unit tests, test_xmlrpc calls assertRaises(Exception, expected_regex='method'), causing DeprecationWarnings. These calls should be replaced with assertRaisesRegex(Exception, 'method'). 
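The suggested replacement can be sketched as a standalone example (this is an illustration of the API difference, not the actual test_xmlrpc code):

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_regex_is_matched(self):
        # assertRaisesRegex checks both the exception type and that the
        # message matches the given pattern. Unlike the deprecated
        # assertRaises(..., expected_regex=...) spelling, it emits no
        # DeprecationWarning.
        with self.assertRaisesRegex(Exception, 'method'):
            raise Exception('unsupported method "foo"')

# Run the single test quietly and check that it passed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```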
0:20:56 [378/404] test_xmlrpc 127.0.0.1 - - [05/Mar/2017 11:53:45] "POST / HTTP/1.1" 200 - 127.0.0.1 - - [05/Mar/2017 11:53:45] "POST / HTTP/1.1" 200 - /home/dillon/src/cpython/Lib/test/test_xmlrpc.py:430: DeprecationWarning: 'expected_regex' is an invalid keyword argument for this function with self.assertRaises(Exception, expected_regex='method'): /home/dillon/src/cpython/Lib/test/test_xmlrpc.py:423: DeprecationWarning: 'expected_regex' is an invalid keyword argument for this function with self.assertRaises(Exception, expected_regex='method'): /home/dillon/src/cpython/Lib/test/test_xmlrpc.py:415: DeprecationWarning: 'expected_regex' is an invalid keyword argument for this function with self.assertRaises(Exception, expected_regex='method'): ---------- components: Tests messages: 289026 nosy: dillon.brock priority: normal severity: normal status: open title: test_xmlrpc raises DeprecationWarnings type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 13:49:23 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 05 Mar 2017 18:49:23 +0000 Subject: [New-bugs-announce] [issue29727] collections.abc.Reversible doesn't fully support the reversing protocol Message-ID: <1488739763.14.0.639498346483.issue29727@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: collections.abc.Reversible doesn't work with types that are supported by reversed() but neither have the __reversed__ method nor are explicitly registered as collections.abc.Sequence. For example: >>> issubclass(array.array, collections.abc.Reversible) False The reversing protocol, like the iterating protocol, is supported not only via the special method, but also implicitly if the class implements the __getitem__ and __len__ methods. >>> class Counter(int): ... def __getitem__(s, i): return i ... def __len__(s): return s ... 
>>> list(reversed(Counter(10))) [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] >>> issubclass(Counter, collections.abc.Reversible) False typing.Reversible suffers from the same bug. See https://github.com/python/typing/issues/170. ---------- components: Library (Lib) messages: 289039 nosy: gvanrossum, rhettinger, serhiy.storchaka, stutzbach priority: normal severity: normal status: open title: collections.abc.Reversible doesn't fully support the reversing protocol type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 14:07:58 2017 From: report at bugs.python.org (Nathaniel Smith) Date: Sun, 05 Mar 2017 19:07:58 +0000 Subject: [New-bugs-announce] [issue29728] Expose TCP_NOTSENT_LOWAT Message-ID: <1488740878.04.0.852546577047.issue29728@psf.upfronthosting.co.za> New submission from Nathaniel Smith: https://github.com/python/cpython/pull/477 ---------- components: Library (Lib) messages: 289041 nosy: njs priority: normal severity: normal status: open title: Expose TCP_NOTSENT_LOWAT _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 14:59:58 2017 From: report at bugs.python.org (Nic Watson) Date: Sun, 05 Mar 2017 19:59:58 +0000 Subject: [New-bugs-announce] [issue29729] UUID bytes constructor has too-tight an assertion Message-ID: <1488743998.42.0.177002814613.issue29729@psf.upfronthosting.co.za> New submission from Nic Watson: The assertion: File "/usr/lib/python3.6/uuid.py", line 150, in __init__ assert isinstance(bytes, bytes_), repr(bytes) is too specific (and IMHO, unpythonic). One may want to pass a bytearray or a memoryview. See int.from_bytes for an example that takes "bytes" but accepts anything that acts like bytes. A simple solution may be to delete the assertion (it worked for me). 
Example code: import uuid b = uuid.uuid1().bytes ba = bytearray(b) print(uuid.UUID(bytes=b)) # another API that works similarly, accepts a bytearray print(int.from_bytes(ba, byteorder='big')) # fails on assertion print(uuid.UUID(bytes=ba)) ---------- components: Extension Modules messages: 289045 nosy: jnwatson priority: normal severity: normal status: open title: UUID bytes constructor has too-tight an assertion type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 5 18:11:18 2017 From: report at bugs.python.org (Oren Milman) Date: Sun, 05 Mar 2017 23:11:18 +0000 Subject: [New-bugs-announce] [issue29730] unoptimal calls to PyNumber_Check Message-ID: <1488755478.86.0.364261574579.issue29730@psf.upfronthosting.co.za> New submission from Oren Milman: ------------ current state ------------ if (PyNumber_Check(obj)) { someVar = PyNumber_AsSsize_t(obj, SomeError); if (someVar == -1 && PyErr_Occurred()) { return errVal; } } else { PyErr_Format(PyExc_TypeError, "integer argument expected, got '%.200s'", Py_TYPE(obj)->tp_name); return errVal; } Something similar to this happens in: - Modules/mmapmodule.c in mmap_convert_ssize_t - Modules/_io/_iomodule.c in _PyIO_ConvertSsize_t - Modules/_io/stringio.c in: * _io_StringIO_read_impl * _io_StringIO_readline_impl * _io_StringIO_truncate_impl (Moreover, in: - Objects/bytes_methods.c in parse_args_finds_byte - Objects/exceptions.c in oserror_init PyNumber_AsSsize_t is called only if PyNumber_Check returns true.) Note that: - PyNumber_Check checks whether nb_int != NULL or nb_float != NULL. - PyNumber_AsSsize_t calls PyNumber_Index, which, before calling nb_index, raises a TypeError (with a similar error message) in case nb_index == NULL. - The docs say '... when __index__() is defined __int__() should also be defined ...'. So the behavior with and without the call to PyNumber_Check is quite the same. 
The only potential advantage of calling PyNumber_Check is skipping the call to PyNumber_AsSsize_t. But PyNumber_AsSsize_t would be called also in case nb_index == NULL and (nb_int != NULL or nb_float != NULL). Thus, the only case in which the call to PyNumber_Check might be useful, is when nb_int == nb_float == nb_index == NULL. ------------ proposed changes ------------ Either remove each of these calls to PyNumber_Check, or at least replace it with a call to PyIndex_Check, which checks whether nb_index != NULL, and thus would be more useful than PyNumber_Check. Note that such a change shouldn't affect the behavior, except for a slightly different wording of the error message in case a TypeError is raised. ---------- components: IO messages: 289048 nosy: Oren Milman priority: normal severity: normal status: open title: unoptimal calls to PyNumber_Check type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 00:59:48 2017 From: report at bugs.python.org (Matthias Bussonnier) Date: Mon, 06 Mar 2017 05:59:48 +0000 Subject: [New-bugs-announce] [issue29731] Ability to filter warnings to print current stack Message-ID: <1488779988.78.0.195524634738.issue29731@psf.upfronthosting.co.za> New submission from Matthias Bussonnier: The warning module is extremely useful, especially the ability to change various level of verbosity and filter by multiple criteria. It is though sometime hard to pinpoint why a warning was triggered, or actually what triggered a warning. This is often due to the fact that libraries don't set the `stacklevel=...` correctly, that it is hard to get right (like at import time), or because warnings get triggered by complex, not obvious codepaths. One workaround is to switch from `always`/`once` to "error" to raise exceptions, but that can be quite troublesome in production, if other warnings are encountered in the meantime. 
Would it be accepted to add a warning filter type "stack" (or whatever name pleases you ...) that would not only print the current warning, but also the stack leading to it? That is to say, output almost identical to "error", but without actually raising. Assuming the above is reasonable, I have a working patch (both in the C and Python implementations of warnings), though I can't seem to find how to respect the `stacklevel=...` parameter unless `warn_explicit` is changed to allow an additional `frame` argument. Would adding this field be acceptable? Or arguably, if one is interested in the stack, ignoring the stacklevel may make sense, as one may want to explore the full reason for the warning (i.e. its source, which may be hidden by stacklevel=...), and not only what triggered it. ---------- components: Library (Lib) messages: 289066 nosy: mbussonn priority: normal severity: normal status: open title: Ability to filter warnings to print current stack type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 03:13:44 2017 From: report at bugs.python.org (Kamil Frankowicz) Date: Mon, 06 Mar 2017 08:13:44 +0000 Subject: [New-bugs-announce] [issue29732] Heap out of bounds read in tok_nextc() Message-ID: <1488788024.7.0.161413972339.issue29732@psf.upfronthosting.co.za> New submission from Kamil Frankowicz: After some fuzz testing I found a crashing test case. Version: 2.7.13 compiled from source with Clang 3.9.1. 
To reproduce: python python_hoobr_tok_nextc.py Extract from Valgrind log (full log file at https://gist.github.com/fumfel/f9780e567dec761f8524523fff040742): ==15583== Process terminating with default action of signal 11 (SIGSEGV) ==15583== Bad permissions for mapped region at address 0x5F36000 ==15583== at 0x41EBC4: tok_nextc (tokenizer.c:861) ==15583== by 0x41ABA2: tok_get (tokenizer.c:1568) ==15583== by 0x41ABA2: PyTokenizer_Get (tokenizer.c:1681) ==15583== by 0x4171D4: parsetok (parsetok.c:159) ==15583== by 0x417DC0: PyParser_ParseFileFlagsEx (parsetok.c:106) ==15583== by 0x5C4A1D: PyParser_ASTFromFile (pythonrun.c:1499) ==15583== by 0x5C4C28: PyRun_FileExFlags (pythonrun.c:1354) ==15583== by 0x5C4009: PyRun_SimpleFileExFlags (pythonrun.c:948) ==15583== by 0x5C34AA: PyRun_AnyFileExFlags (pythonrun.c:752) ==15583== by 0x416478: Py_Main (main.c:640) ==15583== by 0x578782F: (below main) (libc-start.c:291) ---------- components: Interpreter Core files: python_hoobr_tok_nextc.py messages: 289078 nosy: Kamil Frankowicz priority: normal severity: normal status: open title: Heap out of bounds read in tok_nextc() type: crash versions: Python 2.7 Added file: http://bugs.python.org/file46704/python_hoobr_tok_nextc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 06:23:33 2017 From: report at bugs.python.org (jiangwanwei) Date: Mon, 06 Mar 2017 11:23:33 +0000 Subject: [New-bugs-announce] [issue29733] concurrent.futures as_completed raise TimeoutError wrong Message-ID: <1488799413.84.0.9750346011.issue29733@psf.upfronthosting.co.za> New submission from jiangwanwei: When I use the as_completed function to wait for my futures, and I sleep for more than timeout seconds in each iteration, I find that the futures have had their results set, but a TimeoutError is still raised. 
As my test example code shows: from concurrent import futures from multiprocessing import current_process import time def run(count): cp = current_process() print(cp.name, 'begin', count, 'at', time.time()) time.sleep(count) print(cp.name, 'end', count, 'at', time.time()) return count if __name__ == '__main__': ppe = futures.ProcessPoolExecutor(max_workers=4) cp = current_process() fs = [ppe.submit(run, i) for i in range(4)] print('begin receive at', time.time()) for f in futures.as_completed(fs, timeout=5): time.sleep(5) print(cp.name, 'receive', f.result(), 'at', time.time()) print(cp.name, 'receive', [f.result() for f in fs], 'at', time.time()) print('end receive at', time.time()) Running the above example code, it will output: begin receive at 1488799136.471536 Process-1 begin 0 at 1488799136.472969 Process-1 end 0 at 1488799136.473114 Process-3 begin 1 at 1488799136.473741 Process-2 begin 2 at 1488799136.474226 Process-4 begin 3 at 1488799136.474561 Process-3 end 1 at 1488799137.474495 Process-2 end 2 at 1488799138.475289 Process-4 end 3 at 1488799139.475696 MainProcess receive 0 at 1488799141.478663 MainProcess receive [0, 1, 2, 3] at 1488799141.478787 Traceback (most recent call last): File "test_futures.py", line 23, in <module> for f in futures.as_completed(fs, timeout=5): File "/Users/jiangwanwei/anaconda3/lib/python3.5/concurrent/futures/_base.py", line 213, in as_completed len(pending), len(fs))) concurrent.futures._base.TimeoutError: 3 (of 4) futures unfinished ---------- messages: 289093 nosy: jiangwanwei priority: normal severity: normal status: open title: concurrent.futures as_completed raise TimeoutError wrong type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 06:26:57 2017 From: report at bugs.python.org (Eryk Sun) Date: Mon, 06 Mar 2017 11:26:57 +0000 Subject: [New-bugs-announce] [issue29734] nt._getfinalpathname handle leak 
Message-ID: <1488799617.7.0.144245331277.issue29734@psf.upfronthosting.co.za> New submission from Eryk Sun: The implementation of nt._getfinalpathname leaks a File handle if calling GetFinalPathNameByHandle fails. The latter function is practically guaranteed to fail when resolving the path for a non-file-system device. It also fails when VOLUME_NAME_DOS is requested for a volume GUID path that isn't currently mounted as either a DOS drive letter or an NTFS junction. In this case requesting VOLUME_NAME_GUID should work. For example, when I try calling _getfinalpathname to resolve the device paths \\?\MAILSLOT, \\?\PIPE, \\?\UNC, \\?\C:, \\?\PhysicalDrive0, \\?\NUL, \\?\CONIN$, and \\?\COM1, I get the following list of leaked handles: 0x168 File \Device\Mailslot 0x16c File \Device\NamedPipe 0x178 File \Device\Mup 0x17c File \Device\HarddiskVolume2 0x180 File \Device\Harddisk0\DR0 0x18c File \Device\Null 0x194 File \Device\ConDrv 0x198 File \Device\Serial0 (The above is from a context manager that checks for leaked handles using ctypes to call the PssCaptureSnapshot API, which was introduced in Windows 8.1. I think Process Snapshotting is the only Windows API that uses the kernel's ability to fork a clone of a process.) The reason that GetFinalPathNameByHandle fails in these cases is that the information classes it queries are typically only serviced by file systems. Other I/O devices (e.g. disk and volume devices) will fail these I/O requests. It happens that GetFinalPathNameByHandle starts with an NtQueryObject request that succeeds in these cases (it's the source of the above native NT device names), but it doesn't stop there. It continues requesting information from the device and the mount-point manager until it either has everything or a request fails. Also, in os__getfinalpathname_impl, I notice that it's switching from VOLUME_NAME_NT in the first call that's used to get the buffer size to VOLUME_NAME_DOS in the second call. 
It should use VOLUME_NAME_DOS in both cases, or better yet, add a keyword-only argument to select a different volume-name style (i.e. None, DOS, GUID, or NT). ---------- components: Extension Modules, Windows messages: 289095 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: nt._getfinalpathname handle leak type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 08:22:46 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Mar 2017 13:22:46 +0000 Subject: [New-bugs-announce] [issue29735] Optimize functools.partial() for positional arguments Message-ID: <1488806566.01.0.913410050089.issue29735@psf.upfronthosting.co.za> New submission from STINNER Victor: The pull request makes functools.partial() faster for positional arguments. It avoids the creation of a tuple for positional arguments. It allocates a small buffer for up to 5 parameters. But it seems like even if the small buffer is not used, it's still faster. Use small buffer, total: 2 positional arguments. haypo at smithers$ ./python -m perf timeit -s 'from functools import partial; f = lambda x, y: None; g = partial(f, 1)' 'g(2)' --duplicate=100 --compare-to ../master-ref/python --python-names=ref:patch --python-names=ref:patch ref: ..................... 138 ns +- 1 ns patch: ..................... 121 ns +- 1 ns Median +- std dev: [ref] 138 ns +- 1 ns -> [patch] 121 ns +- 1 ns: 1.14x faster (-12%) Don't use small buffer, total: 6 positional arguments. haypo at smithers$ ./python -m perf timeit -s 'from functools import partial; f = lambda a1, a2, a3, a4, a5, a6: None; g = partial(f, 1, 2, 3, 4, 5)' 'g(6)' --duplicate=100 --compare-to ../master-ref/python --python-names=ref:patch --python-names=ref:patch ref: ..................... 156 ns +- 1 ns patch: ..................... 
136 ns +- 0 ns Median +- std dev: [ref] 156 ns +- 1 ns -> [patch] 136 ns +- 0 ns: 1.15x faster (-13%) Another benchmark with 10 position arguments: haypo at smithers$ ./python -m perf timeit -s 'from functools import partial; f = lambda a1, a2, a3, a4, a5, a6, a7, a8, a9, a10: None; g = partial(f, 1, 2, 3, 4, 5)' 'g(6, 7, 8, 9, 10)' --duplicate=100 --compare-to ../master-ref/python --python-names=ref:patch --python-names=ref:patch ref: ..................... 193 ns +- 1 ns patch: ..................... 166 ns +- 2 ns Median +- std dev: [ref] 193 ns +- 1 ns -> [patch] 166 ns +- 2 ns: 1.17x faster (-14%) ---------- messages: 289100 nosy: haypo priority: normal severity: normal status: open title: Optimize functools.partial() for positional arguments type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 09:56:14 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Mar 2017 14:56:14 +0000 Subject: [New-bugs-announce] [issue29736] Optimize builtin types constructor Message-ID: <1488812174.68.0.707747009021.issue29736@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached PR replaces PyArg_ParseTupleAndKeywords() with _PyArg_ParseTupleAndKeywordsFast() to optimize the constructor of the builtin types: * bool: bool_new() * bytes: bytes_new() * complex: complex_new() * float: float_new() * int: long_new() * list: list_init() * str: unicode_new() * tuple: tuple_new() When using keywords, the speedup is between 1.55x faster and 1.92x faster. When using only positional arguments, the speedup is between 1.07x faster and 1.14x faster. 
Results of attached bench.py:

+-----------------------------------------------+--------+---------------------+
| Benchmark                                     | ref    | changed             |
+===============================================+========+=====================+
| complex(real=0.0, imag=0.0)                   | 452 ns | 1.92x faster (-48%) |
+-----------------------------------------------+--------+---------------------+
| bytes("x", encoding="ascii", errors="strict") | 498 ns | 1.88x faster (-47%) |
+-----------------------------------------------+--------+---------------------+
| str(b"x", encoding="ascii")                   | 340 ns | 1.55x faster (-35%) |
+-----------------------------------------------+--------+---------------------+
| list([None])                                  | 208 ns | 1.14x faster (-12%) |
+-----------------------------------------------+--------+---------------------+
| int(0)                                        | 113 ns | 1.11x faster (-10%) |
+-----------------------------------------------+--------+---------------------+
| float(1.0)                                    | 110 ns | 1.10x faster (-9%)  |
+-----------------------------------------------+--------+---------------------+
| str("x")                                      | 115 ns | 1.10x faster (-9%)  |
+-----------------------------------------------+--------+---------------------+
| tuple((None,))                                | 111 ns | 1.10x faster (-9%)  |
+-----------------------------------------------+--------+---------------------+
| bytes(b"x")                                   | 126 ns | 1.10x faster (-9%)  |
+-----------------------------------------------+--------+---------------------+
| bool(True)                                    | 107 ns | 1.09x faster (-8%)  |
+-----------------------------------------------+--------+---------------------+
| complex(0.0, 0.0)                             | 176 ns | 1.07x faster (-7%)  |
+-----------------------------------------------+--------+---------------------+

---------- files: bench.py messages: 289111 nosy: haypo priority: normal severity: normal status: open title: Optimize builtin types constructor type: performance versions: Python 3.7 Added file: http://bugs.python.org/file46705/bench.py _______________________________________ Python tracker 
_______________________________________ From report at bugs.python.org Mon Mar 6 14:48:36 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 06 Mar 2017 19:48:36 +0000 Subject: [New-bugs-announce] [issue29737] Optimize concatenating empty tuples Message-ID: <1488829716.87.0.539974742004.issue29737@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Since tuples are immutable, concatenating with empty tuple can be optimized by returning an opposite argument. Microbenchmarks (the difference is larger for larger tuples): $ ./python -m perf timeit --duplicate=100 -s 'a = (1, 2)' 'a + ()' Unpatched: Median +- std dev: 288 ns +- 12 ns Patched: Median +- std dev: 128 ns +- 5 ns $ ./python -m perf timeit --duplicate=100 -s 'a = (1, 2)' '() + a' Unpatched: Median +- std dev: 285 ns +- 16 ns Patched: Median +- std dev: 128 ns +- 6 ns Non-empty tuples are not affected: $ ./python -m perf timeit --duplicate=100 -s 'a = (1, 2)' 'a + a' Unpatched: Median +- std dev: 321 ns +- 24 ns Patched: Median +- std dev: 317 ns +- 26 ns ---------- components: Interpreter Core messages: 289129 nosy: haypo, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Optimize concatenating empty tuples type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 15:10:11 2017 From: report at bugs.python.org (Olivier Vielpeau) Date: Mon, 06 Mar 2017 20:10:11 +0000 Subject: [New-bugs-announce] [issue29738] Fix memory leak in SSLSocket.getpeercert() Message-ID: <1488831011.78.0.556870534746.issue29738@psf.upfronthosting.co.za> New submission from Olivier Vielpeau: The code snippet in #25569 reproduces the memory leak with Python 3.6.0 and 2.7.13. The current memory leak is a regression that was introduced in #26470. Going to attach a PR on github that fixes the issue shortly. 
---------- assignee: christian.heimes components: SSL messages: 289130 nosy: christian.heimes, olivielpeau priority: normal severity: normal status: open title: Fix memory leak in SSLSocket.getpeercert() type: resource usage versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 15:48:19 2017 From: report at bugs.python.org (Jack Cushman) Date: Mon, 06 Mar 2017 20:48:19 +0000 Subject: [New-bugs-announce] [issue29739] zipfile raises wrong exception for some incorrect passwords Message-ID: <1488833299.28.0.469862145344.issue29739@psf.upfronthosting.co.za> New submission from Jack Cushman: This bug arises when attempting to unzip a password-protected zipfile using the wrong password. Usually when zipfile extraction is attempted with an incorrect password, zipfile raises `RuntimeError("Bad password for file")`. But for a small subset of passwords (about .4% of possible passwords), it instead raises `BadZipfile("Bad CRC-32 for file")`. Attached is a script that attempts to decrypt a zip file using every 3-letter uppercase password. (This assumes you have first created the zip file, by running something like: `echo "stuff" > /tmp/foo.txt; zip -e -P password /tmp/foo.zip /tmp/foo.txt`.) The specific passwords that trigger the wrong exception will vary each time the zip file is created. 
On my system, for a particular zip file, the result is this output: BadZipFile b'ACB' BadZipFile b'AMJ' BadZipFile b'ASL' BadZipFile b'AZV' BadZipFile b'BCI' BadZipFile b'BMV' BadZipFile b'BQG' BadZipFile b'BRB' BadZipFile b'BYH' BadZipFile b'CHU' BadZipFile b'CTV' BadZipFile b'DEF' BadZipFile b'DHJ' BadZipFile b'DSR' BadZipFile b'EWG' BadZipFile b'GOK' BadZipFile b'GUK' BadZipFile b'HGL' BadZipFile b'HPV' BadZipFile b'IAC' BadZipFile b'IGQ' BadZipFile b'IHG' BadZipFile b'ILB' BadZipFile b'IRJ' BadZipFile b'JDW' BadZipFile b'JIT' BadZipFile b'JMK' BadZipFile b'JPD' BadZipFile b'JWL' BadZipFile b'JXS' BadZipFile b'KAR' BadZipFile b'KKH' BadZipFile b'LNW' BadZipFile b'MEL' BadZipFile b'NDY' BadZipFile b'NFJ' BadZipFile b'NLU' BadZipFile b'NQU' BadZipFile b'OXC' BadZipFile b'PHA' BadZipFile b'PQY' BadZipFile b'QCN' BadZipFile b'QFT' BadZipFile b'QMB' BadZipFile b'QWZ' BadZipFile b'QYS' BadZipFile b'RBR' BadZipFile b'SKU' BadZipFile b'SLG' BadZipFile b'STU' BadZipFile b'SUP' BadZipFile b'UCD' BadZipFile b'UOA' BadZipFile b'UQM' BadZipFile b'VAO' BadZipFile b'VEQ' BadZipFile b'VJW' BadZipFile b'VVH' BadZipFile b'WDA' BadZipFile b'XCR' BadZipFile b'XIY' BadZipFile b'XLG' BadZipFile b'YJA' BadZipFile b'YMA' BadZipFile b'YRB' BadZipFile b'ZHT' BadZipFile b'ZVJ' BadZipFile b'ZWR' BadZipFile b'ZZT' 69 out of 17576 passwords raise BadZipFile Versions: I reproduced this in Python 2.7.10 and 3.6.0, using a zip file created on Mac OS 10.12.3 with this zip version: $ zip --version Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license. This is Zip 3.0 (July 5th 2008), by Info-ZIP. Compiled with gcc 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34) for Unix (Mac OS X) on Jul 30 2016. 
---------- components: Library (Lib) files: fail.py messages: 289132 nosy: jcushman priority: normal severity: normal status: open title: zipfile raises wrong exception for some incorrect passwords type: behavior versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file46706/fail.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 16:25:08 2017 From: report at bugs.python.org (Markus) Date: Mon, 06 Mar 2017 21:25:08 +0000 Subject: [New-bugs-announce] [issue29740] Visual C++ CRT security update from 14 June 2011 Message-ID: <1488835508.8.0.372164127647.issue29740@psf.upfronthosting.co.za> New submission from Markus: On 14 June 2011, Microsoft released the Visual C++ 2008 runtime MFC Security Update https://www.microsoft.com/en-us/download/details.aspx?id=26368 The Security Update also updates the CRT runtime (used by Python 2.7). Without the security update, Python 2.7.13 uses vc90.crt 9.0.30729.4940. With the security update, Python 2.7.13 uses vc90.crt 9.0.30729.6161. (Use e.g. Sysinternals procexp to see this.) Why does Python not install the vc90.crt of the security update? 
---------- components: Build, Windows messages: 289135 nosy: markuskramerIgitt, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Visual C++ CRT security update from 14 June 2011 type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 16:27:25 2017 From: report at bugs.python.org (Oren Milman) Date: Mon, 06 Mar 2017 21:27:25 +0000 Subject: [New-bugs-announce] [issue29741] BytesIO methods don't accept integer types, while StringIO counterparts do Message-ID: <1488835645.34.0.376224338979.issue29741@psf.upfronthosting.co.za> New submission from Oren Milman: ------------ current state ------------ import io class IntLike(): def __init__(self, num): self._num = num def __index__(self): return self._num __int__ = __index__ io.StringIO('blah blah').read(IntLike(2)) io.StringIO('blah blah').readline(IntLike(2)) io.StringIO('blah blah').truncate(IntLike(2)) io.BytesIO(b'blah blah').read(IntLike(2)) io.BytesIO(b'blah blah').readline(IntLike(2)) io.BytesIO(b'blah blah').truncate(IntLike(2)) The three StringIO methods are called without any error, but each of the three BytesIO methods raises a "TypeError: integer argument expected, got 'IntLike'". This is because the functions which implement the StringIO methods (in Modules/_io/stringio.c): - _io_StringIO_read_impl - _io_StringIO_readline_impl - _io_StringIO_truncate_impl use PyNumber_AsSsize_t, which might call nb_index. However, the functions which implement the BytesIO methods (in Modules/_io/bytesio.c): - _io_BytesIO_read_impl - _io_BytesIO_readline_impl - _io_BytesIO_truncate_impl use PyLong_AsSsize_t, which accepts only Python ints (or objects whose type is a subclass of int). ------------ proposed changes ------------ - change those BytesIO methods so that they would accept integer types (i.e. 
classes that define __index__), mainly by replacing PyLong_AsSsize_t with PyNumber_AsSsize_t - add tests to Lib/test/test_memoryio.py to verify that all six aforementioned methods accept integer types ---------- components: IO messages: 289136 nosy: Oren Milman priority: normal severity: normal status: open title: BytesIO methods don't accept integer types, while StringIO counterparts do type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 17:35:43 2017 From: report at bugs.python.org (Nikolay Kim) Date: Mon, 06 Mar 2017 22:35:43 +0000 Subject: [New-bugs-announce] [issue29742] asyncio get_extra_info() throws exception Message-ID: <1488839743.56.0.670449147303.issue29742@psf.upfronthosting.co.za> New submission from Nikolay Kim: https://github.com/python/asyncio/issues/494 ---------- messages: 289138 nosy: fafhrd91 priority: normal pull_requests: 435 severity: normal status: open title: asyncio get_extra_info() throws exception versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 18:00:45 2017 From: report at bugs.python.org (Nikolay Kim) Date: Mon, 06 Mar 2017 23:00:45 +0000 Subject: [New-bugs-announce] [issue29743] Closing transport during handshake process leaks open socket Message-ID: <1488841245.61.0.346906238419.issue29743@psf.upfronthosting.co.za> New submission from Nikolay Kim: https://github.com/python/asyncio/issues/487 https://github.com/KeepSafe/aiohttp/issues/1679 ---------- messages: 289143 nosy: fafhrd91 priority: normal pull_requests: 436 severity: normal status: open title: Closing transport during handshake process leaks open socket versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon 
Mar 6 19:16:37 2017 From: report at bugs.python.org (Andrea Giovannucci) Date: Tue, 07 Mar 2017 00:16:37 +0000 Subject: [New-bugs-announce] [issue29744] memmap behavior changed Message-ID: <1488845797.14.0.556609006147.issue29744@psf.upfronthosting.co.za> New submission from Andrea Giovannucci: The previous version, 2.7.12, returned a memmap file when slicing with a list of integers; now it returns an array. This affects the behaviour of several functions in my package. Is that a deliberate choice or a side effect of some other change? ---------- components: Demos and Tools messages: 289146 nosy: agiovannucci priority: normal severity: normal status: open title: memmap behavior changed type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 6 19:24:13 2017 From: report at bugs.python.org (Nikolay Kim) Date: Tue, 07 Mar 2017 00:24:13 +0000 Subject: [New-bugs-announce] [issue29745] asyncio: Make pause/resume_reading idempotent and no-op for closed transports Message-ID: <1488846253.21.0.430989162462.issue29745@psf.upfronthosting.co.za> New submission from Nikolay Kim: https://github.com/python/asyncio/issues/488 ---------- messages: 289147 nosy: fafhrd91 priority: normal pull_requests: 439 severity: normal status: open title: asyncio: Make pause/resume_reading idempotent and no-op for closed transports versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 05:41:07 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 07 Mar 2017 10:41:07 +0000 Subject: [New-bugs-announce] [issue29746] Update marshal docs to Python 3 Message-ID: <1488883267.82.0.811271477172.issue29746@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The marshal module documentation still uses terms from Python 2. 
It mentions sys.stdin and os.popen() as legitimate sources (but they are text files). ---------- assignee: serhiy.storchaka components: Documentation messages: 289157 nosy: serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Update marshal docs to Python 3 type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 07:51:32 2017 From: report at bugs.python.org (=?utf-8?q?Vin=C3=ADcius_Dantas?=) Date: Tue, 07 Mar 2017 12:51:32 +0000 Subject: [New-bugs-announce] [issue29747] unittest - assertDoesNotRaise Message-ID: <1488891092.02.0.36768274646.issue29747@psf.upfronthosting.co.za> New submission from Vinícius Dantas: unittest provides several assert methods, yet one is missing: the assertDoesNotRaise context. When running tests, tests may end up as failures, successes or errors. It's worth noting that errors and failures are conceptually different, and that's the point of having an assertDoesNotRaise context, akin to the assertRaises context. This context would be useful, for example, with the Selenium client: it would be helpful to know whether an alert popped up, and given that there is no method to check for an alert, we would use code like: with assertDoesNotRaise(NoAlertPresentException): driver.switch_to.alert.text It is also important to mention that it makes explicit what we are testing. After all, explicit is better than implicit. 
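A minimal sketch of what the proposed context manager could look like; `assert_does_not_raise` is a hypothetical helper written for illustration, not part of unittest:

```python
import unittest
from contextlib import contextmanager

@contextmanager
def assert_does_not_raise(exc_type):
    """Hypothetical helper: turn exc_type into a test *failure* (AssertionError)
    instead of a test *error*, mirroring assertRaises in reverse."""
    try:
        yield
    except exc_type as exc:
        raise AssertionError(
            "{} was raised unexpectedly: {}".format(exc_type.__name__, exc))

class Demo(unittest.TestCase):
    def test_no_value_error(self):
        with assert_does_not_raise(ValueError):
            int("42")  # parses fine, so the context exits silently
```

Note that any *other* exception type still propagates and is reported as an error, which preserves the failure/error distinction the proposal is about.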
---------- components: Library (Lib) messages: 289161 nosy: viniciusd priority: normal severity: normal status: open title: unittest - assertDoesNotRaise type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 08:06:47 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 07 Mar 2017 13:06:47 +0000 Subject: [New-bugs-announce] [issue29748] Argument Clinic: slice index converter Message-ID: <1488892007.86.0.01356022303.issue29748@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Following PR adds the slice index converter. It can be used for converting indices in methods like list.index() and str.find(). ---------- components: Argument Clinic messages: 289162 nosy: larry, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Argument Clinic: slice index converter type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 08:44:49 2017 From: report at bugs.python.org (STINNER Victor) Date: Tue, 07 Mar 2017 13:44:49 +0000 Subject: [New-bugs-announce] [issue29749] Outdated int() docstring Message-ID: <1488894289.51.0.465519491821.issue29749@psf.upfronthosting.co.za> New submission from STINNER Victor: bpo-29695 removed "bad keyword parameters in int(), bool(), float(), list() and tuple()", but int docstring (at least) is now outdated: haypo at selma$ ./python Python 3.7.0a0 (master:8f6b344d368c15c3fe56c65c2f2776e7766fef55, Mar 7 >>> help(int) class int(object) | int(x=0) -> integer | int(x, base=10) -> integer ... 
>>> int(x=0) TypeError: 'x' is an invalid keyword argument for this function ---------- messages: 289163 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: Outdated int() docstring versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 15:40:24 2017 From: report at bugs.python.org (david) Date: Tue, 07 Mar 2017 20:40:24 +0000 Subject: [New-bugs-announce] [issue29750] smtplib doesn't handle unicode passwords Message-ID: <1488919224.48.0.0966376973087.issue29750@psf.upfronthosting.co.za> New submission from david: Trying to use unicode passwords on smtplib fails miserably on python3. My particular issue arises on line 643 of said library: (code, resp) = self.docmd(encode_base64(password.encode('ascii'), eol='')) which obviously dies when trying to handle unicode chars. ---------- components: Library (Lib) messages: 289184 nosy: david__ priority: normal severity: normal status: open title: smtplib doesn't handle unicode passwords versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 16:27:48 2017 From: report at bugs.python.org (Cubi) Date: Tue, 07 Mar 2017 21:27:48 +0000 Subject: [New-bugs-announce] [issue29751] PyLong_FromString fails on decimals with leading zero and base=0 Message-ID: <1488922068.27.0.463542328186.issue29751@psf.upfronthosting.co.za> New submission from Cubi: Calling PyLong_FromString(str, NULL, 0) fails, if str is a string containing a decimal number with leading zeros, even though such strings should be parsed as decimal numbers according to the documentation: "If base is 0, the radix will be determined based on the leading characters of str: if str starts with '0x' or '0X', radix 16 will be used; if str starts with '0o' or '0O', radix 8 will be used; if str starts with '0b' or '0B', radix 2 will be used; 
otherwise radix 10 will be used" Examples: PyLong_FromString("15", NULL, 0); // Returns int(15) (Correct) PyLong_FromString("0xF", NULL, 0); // Returns int(15) (Correct) PyLong_FromString("015", NULL, 0); // Should return int(15), but raises ValueError: invalid literal for int() with base 0: '015' Version information: Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32 ---------- components: Interpreter Core messages: 289188 nosy: cubinator priority: normal severity: normal status: open title: PyLong_FromString fails on decimals with leading zero and base=0 type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 17:29:52 2017 From: report at bugs.python.org (Ethan Furman) Date: Tue, 07 Mar 2017 22:29:52 +0000 Subject: [New-bugs-announce] [issue29752] Enum._missing_ not called for __getattr__ failures Message-ID: <1488925792.17.0.971645599389.issue29752@psf.upfronthosting.co.za> New submission from Ethan Furman: class Label(Enum): RedApple = 1 GreenApple = 2 @classmethod def _missing_(cls, name): for member in cls: if member.name.lower() == name.lower(): return member Currently, _missing_ is only called when using the functional API. 
In words: Label('redapple') # works Label.redapple # does not ---------- assignee: ethan.furman messages: 289191 nosy: barry, eli.bendersky, ethan.furman priority: normal severity: normal stage: test needed status: open title: Enum._missing_ not called for __getattr__ failures type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 7 19:36:40 2017 From: report at bugs.python.org (Charles Machalow) Date: Wed, 08 Mar 2017 00:36:40 +0000 Subject: [New-bugs-announce] [issue29753] Ctypes Packing Incorrectly - Linux Message-ID: <1488933400.56.0.160685455632.issue29753@psf.upfronthosting.co.za> New submission from Charles Machalow: There appears to be a bug related to sizing/packing of ctypes Structures on Linux. I'm not quite sure how, but this structure: class MyStructure(Structure): _pack_ = 1 _fields_ = [ ("P", c_uint16), # 2 Bytes ("L", c_uint16, 9), ("Pro", c_uint16, 1), ("G", c_uint16, 1), ("IB", c_uint16, 1), ("IR", c_uint16, 1), ("R", c_uint16, 3), # 4 Bytes ("T", c_uint32, 10), ("C", c_uint32, 20), ("R2", c_uint32, 2) # 8 Bytes ] Gives back a sizeof of 8 on Windows and 10 on Linux. The inconsistency makes it difficult to have code work cross-platform. Running the given test.py file will print out the size of the structure on your platform. Tested with Python 2.7.6 and Python 3.4.3 (builtin to Ubuntu 14.04), Python 2.7.13, (built from source) both on Ubuntu 14.04. On Linux all Python builds were 32 bit. On Windows I tried with 2.7.7 (both 32 and 64 bit). I believe on both platforms it should return a sizeof 8. 
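The mismatch is straightforward to observe; the sketch below reproduces the reporter's structure, with the comments repeating the *reported* sizes (8 on Windows, 10 on Linux), which presumably reflect the different bit-field layout rules of MSVC and GCC that ctypes mirrors:

```python
from ctypes import Structure, c_uint16, c_uint32, sizeof

class MyStructure(Structure):
    _pack_ = 1
    _fields_ = [
        ("P",   c_uint16),      # plain 2-byte field
        ("L",   c_uint16, 9),   # bit-fields meant to share one c_uint16...
        ("Pro", c_uint16, 1),
        ("G",   c_uint16, 1),
        ("IB",  c_uint16, 1),
        ("IR",  c_uint16, 1),
        ("R",   c_uint16, 3),   # ...9+1+1+1+1+3 = 16 bits
        ("T",   c_uint32, 10),  # bit-fields meant to share one c_uint32
        ("C",   c_uint32, 20),
        ("R2",  c_uint32, 2),   # 10+20+2 = 32 bits
    ]

# Reported: 8 on Windows, 10 on Linux -- platform- and version-dependent.
print(sizeof(MyStructure))
```

With no bit-fields involved, `_pack_ = 1` behaves identically on both platforms, which narrows the inconsistency to packed bit-field allocation.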
---------- components: ctypes files: test.py messages: 289193 nosy: Charles Machalow, amaury.forgeotdarc, belopolsky, meador.inge priority: normal severity: normal status: open title: Ctypes Packing Incorrectly - Linux type: behavior versions: Python 2.7, Python 3.4 Added file: http://bugs.python.org/file46708/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 00:32:52 2017 From: report at bugs.python.org (=?utf-8?q?Tomas_Daba=C5=A1inskas?=) Date: Wed, 08 Mar 2017 05:32:52 +0000 Subject: [New-bugs-announce] [issue29754] sorted ignores reverse=True when sorting produces same list Message-ID: <1488951172.21.0.568731538203.issue29754@psf.upfronthosting.co.za> New submission from Tomas Dabašinskas: sorted ignores reverse=True when sorting produces the same list; I was expecting the result to be reversed regardless of the sorting outcome. Python 3.5.2 (default, Jul 17 2016, 00:00:00) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> data = [{'name': 'first', 'weight': 1},{'name': 'second', 'weight': 1},{'name': 'third', 'weight': 1}, {'name': 'fourth', 'weight': 1}] >>> sorted(data, key=lambda x: x['weight'], reverse=True) [{'name': 'first', 'weight': 1}, {'name': 'second', 'weight': 1}, {'name': 'third', 'weight': 1}, {'name': 'fourth', 'weight': 1}] >>> sorted(data, key=lambda x: x['weight'], reverse=True) == sorted(data, key=lambda x: x['weight']).reverse() False Thanks! 
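Two things are going on in the transcript above, and neither is a sorting bug; a short demonstration:

```python
data = [{'name': n, 'weight': 1} for n in ('first', 'second', 'third', 'fourth')]

# sorted() is stable, and reverse=True preserves that stability: it reverses
# the ordering of *distinct* keys, not the relative order of equal-key items.
# With all weights equal, the input order is kept either way.
assert sorted(data, key=lambda x: x['weight'], reverse=True) == data

# list.reverse() reverses in place and returns None, so the reporter's final
# comparison was really `some_list == None`, which is always False.
assert sorted(data, key=lambda x: x['weight']).reverse() is None
```

To compare against an explicitly reversed copy, use `list(reversed(...))` or slice with `[::-1]` instead of the in-place `.reverse()`.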
---------- components: Library (Lib) messages: 289202 nosy: Tomas Dabašinskas priority: normal severity: normal status: open title: sorted ignores reverse=True when sorting produces same list versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 04:17:48 2017 From: report at bugs.python.org (Petri Savolainen) Date: Wed, 08 Mar 2017 09:17:48 +0000 Subject: [New-bugs-announce] [issue29755] python3 gettext.lgettext sometimes returns bytes, not string Message-ID: <1488964668.24.0.723245092961.issue29755@psf.upfronthosting.co.za> New submission from Petri Savolainen: On Debian stable (Python 3.4), with the LANGUAGE environment variable set to "C" or "en_US.UTF-8", the following produces a string: d = gettext.textdomain('apt-listchanges') print(gettext.lgettext("Informational notes")) However, setting the language to, for example, fi_FI.UTF-8 makes it output a bytes object. The same apparently happens with some other languages, too. Why is this? The discrepancy is not documented anywhere, AFAIK. Is this a bug or intended behavior depending on some (undocumented) circumstances? Given that both the above examples define UTF-8 as the encoding, the result value does not depend directly on the encoding. The docs say lgettext should merely return the translation in a particular encoding. They do not say the return value will be switched from a string to bytes as well. I saw this originally in the Debian bug tracker and thought the issue merits at least clarification here as well (link to the Debian bug below for reference). (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=818728) No idea if this happens on Python > 3.4 or other platforms. I would guess so, but have not had time to confirm. 
---------- messages: 289220 nosy: petri priority: normal severity: normal status: open title: python3 gettext.lgettext sometimes returns bytes, not string type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 09:03:53 2017 From: report at bugs.python.org (Alexander Todorov) Date: Wed, 08 Mar 2017 14:03:53 +0000 Subject: [New-bugs-announce] [issue29756] List count() counts True as 1 Message-ID: <1488981833.28.0.455455965624.issue29756@psf.upfronthosting.co.za> New submission from Alexander Todorov: When using list.count() I get the following results >>> [1, 2, 3].count(1) 1 >>> [1, 2, 3, True].count(2) 1 >>> [1, 2, 3, True].count(True) 2 >>> [1, 2, 3, True].count(1) 2 as you can see True is considered the same as 1. The documentation for the count method says: count(...) L.count(value) -> integer -- return number of occurrences of value so IMO the above behavior is wrong. Seeing this on a RHEL 7 system with Python 3.5.1 and 2.7.5 ---------- messages: 289235 nosy: Alexander Todorov priority: normal severity: normal status: open title: List count() counts True as 1 versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 09:45:17 2017 From: report at bugs.python.org (Kostis Anagnostopoulos) Date: Wed, 08 Mar 2017 14:45:17 +0000 Subject: [New-bugs-announce] [issue29757] The loop in utility `socket.create_connection()` swallows previous errors Message-ID: <1488984317.87.0.296010902983.issue29757@psf.upfronthosting.co.za> New submission from Kostis Anagnostopoulos: ## Context The utility method `socket.create_connection()` currently works like that: 1. resolve the destination-address into one or more IP(v4 & v6) addresses; 2. loop on each IP address and stop to the 1st one to work; 3. if none works, re-raise the last error. 
## The problem So currently the loop in `socket.create_connection()` ignores all intermediate errors and reports only the last connection failure, which might be irrelevant. For instance, when both IPv4 & IPv6 networks are supported, usually the last address is an IPv6 address, and it frequently fails with an irrelevant error - the actual cause has already been discarded. ## Possible solutions & open questions To facilitate network debugging, there are at least 3 options: a. log each failure [as they happen](/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/socket.py#L717), but that would report the final failure twice: once as a (warning?) message, and once as an exception; b. collect all failures but log them only when the connection finally fails, though that might withhold important info from the user; c. collect and return all failures in a list attached to the raised exception. A question for cases (a) & (b) is which logging "means" to use: the `warnings` or the `logging` module? And if `logging` is chosen, should they be logged at `'DEBUG'` or `'WARNING'` level? Case (c) sidesteps the above questions. 
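Option (c) can be sketched as follows; `create_connection_verbose` and its `failures` attribute are hypothetical names for illustration, not the stdlib API:

```python
import socket

def create_connection_verbose(address, timeout=None):
    """Sketch of option (c): a variant of socket.create_connection() that
    records every per-address failure and attaches the list to the final
    exception instead of re-raising only the last error."""
    host, port = address
    failures = []
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)
            return sock
        except OSError as exc:
            failures.append((sa, exc))  # keep the per-address cause
            if sock is not None:
                sock.close()
    err = OSError("could not connect to %s:%s" % (host, port))
    err.failures = failures  # hypothetical attribute carrying the full history
    raise err
```

A caller debugging a dual-stack failure could then inspect `exc.failures` and see the IPv4 error even when the IPv6 attempt happened to fail last.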
---------- components: Library (Lib) messages: 289238 nosy: ankostis priority: normal pull_requests: 463 severity: normal status: open title: The loop in utility `socket.create_connection()` swallows previous errors versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 11:47:30 2017 From: report at bugs.python.org (Tristan Croll) Date: Wed, 08 Mar 2017 16:47:30 +0000 Subject: [New-bugs-announce] [issue29758] Previously-working SWIG code fails in Python 3.6 Message-ID: <1488991650.0.0.826679402246.issue29758@psf.upfronthosting.co.za> New submission from Tristan Croll: Possibly related to http://bugs.python.org/issue29327 - yields the same error message: Objects/tupleobject.c:81: bad argument to internal function I have a large SWIG project which was previously working well in Python 3.5. After migrating to Python 3.6.0, I find I can still create any wrapped object from Python via its constructor(s), but any internal function that returns certain objects fails with the above message. I have so far been unable to find any distinction between classes that do and don't return successfully. Take the below (and attached) headers, for example. Functions that return Spacegroup objects work, as do those that return Metric_tensor objects from the attached cell.h. On the other hand, functions returning Cell or Cell_descr objects fail with the above message. Yet in all cases I can successfully call the objects' constructors. Not ashamed to say I'm a bit lost here. #ifndef CLIPPER_SPACEGROUP #define CLIPPER_SPACEGROUP #include "symop.h" #include "spacegroup_data.h" namespace clipper { // forward definitions class HKL; class HKL_class; class Coord_frac; //! spacegroup description /*! The spacegroup description is a compact description of a spacegroup. 
It may be initialised from Hall or H-M symbols, a string of symops or a number. Internally a hash code is used to refer to the spacegroup, so this object is only 32 bits in size. For more details of spacegroup symbols, see Sydney R. Hall & Ralf W. Grosse-Kunstleve 'Concise Space-Group Symbols', http://www.kristall.ethz.ch/LFK/software/sginfo/hall_symbols.html */ class Spgr_descr { public: enum TYPE { Hall, HM, XHM, Symops, Number, Unknown }; //! null constructor Spgr_descr(); //! constructor: from symbol or operators. explicit Spgr_descr( const String& symb, TYPE type = Unknown ); //! constructor: from number. explicit Spgr_descr( const int& num ); //! return the spacegroup number int spacegroup_number() const; //! return the Hall symbol String symbol_hall() const; //! return the H-M symbol String symbol_hm() const; //! return the extended H-M symbol String symbol_xhm() const; //! return the extension H-M symbol String symbol_hm_ext() const; //! set preferred default spacegroup choice static void set_preferred( const char& c ); //! Vector of symop codes and associated methods class Symop_codes : public std::vector { public: //! initialise from Hall symbol void init_hall( const String& symb ); //! initialise from symops void init_symops( const String& symb ); //! expand (incomplete) list of symops Symop_codes expand() const; //! return primitive non-inversion ops (by computation) Symop_codes primitive_noninversion_ops() const; //! return inversion ops (by computation) Symop_codes inversion_ops() const; //! return primitive incl inversion ops (by computation) Symop_codes primitive_ops() const; //! return lattice centering ops (by computation) Symop_codes centering_ops() const; //! return Laue ops Symop_codes laue_ops() const; //! return point group ops Symop_codes pgrp_ops() const; //! return Patterson ops Symop_codes patterson_ops() const; //! return minimal list of generator ops Symop_codes generator_ops() const; //! 
return product of this (expanded) list by another (expanded) list Symop_codes product( const Symop_codes& ops2 ) const; //! return hash code of symop list unsigned int hash() const; }; //! constructor: from symop list. explicit Spgr_descr( const Symop_codes& ops ); //! return the generators for the spacegroup const Symop_codes& generator_ops() const { return generators_; } //! return the hash code for the spacegroup \internal const unsigned int& hash() const { return hash_; } protected: unsigned int hash_; //!< hash code of spacegroup Symop_codes generators_; //!< codes for symop generators static char pref_12, pref_hr; //!< preferred origin and hex/romb symbols }; // ObjectCache data type class Spgr_cacheobj { public: typedef Spgr_descr Key; Spgr_cacheobj( const Key& spgr_cachekey ); //!< construct entry bool matches( const Key& spgr_cachekey ) const; //!< compare entry String format() const; //!< string description // data Key spgr_cachekey_; //!< spacegroup cachekey int nsym, nsymn, nsymi, nsymc, nsymp; //!< number of syms: total, primitive int lgrp; //!< Laue group number std::vector symops; //!< symmetry operators std::vector isymops; //!< symmetry operators Vec3<> asu_min_, asu_max_; //!< real space ASU static Mutex mutex; //!< thread safety }; //! Spacegroup object /*! The spacegroup object is a full description of a spacegroup, including all the most regularly used information in an efficient form. It may be initialised from a clipper::Spgr_descr. This object. For more details of spacegroup symbols, see Sydney R. Hall & Ralf W. Grosse-Kunstleve 'Concise Space-Group Symbols', http://www.kristall.ethz.ch/LFK/software/sginfo/hall_symbols.html */ class Spacegroup : public Spgr_descr { public: //! enumeration for fast construction of Null or P1 spacegroup enum TYPE { Null, P1 }; //! enumeration for cell axes enum AXIS { A=0, B=1, C=2 }; //! null constructor Spacegroup() {}; //! 
constructor: fast constructor for Null or P1 spacegroup explicit Spacegroup( TYPE type ); //! constructor: from spacegroup description explicit Spacegroup( const Spgr_descr& spgr_descr ); //! initialiser: from spacegroup description void init( const Spgr_descr& spgr_descr ); //! test if object has been initialised bool is_null() const; // methods //! get spacegroup description inline const Spgr_descr& descr() const { return (*this); } //! get number of symops inline const int& num_symops() const { return nsym; } //! get number of primitive symops (identical to num_primitive_symops()) inline const int& num_primops() const { return num_primitive_symops(); } //! get number of primitive symops (inc identity and inversion) inline const int& num_primitive_symops() const { return nsymp; } //! get number of centering symops (inc identity) inline const int& num_centering_symops() const { return nsymc; } //! get number of inversion symops (inc identity) inline const int& num_inversion_symops() const { return nsymi; } //! get number of primitive non-inversion symops (inc identity) inline const int& num_primitive_noninversion_symops() const { return nsymn;} //! get n'th symop inline const Symop& symop( const int& sym_no ) const { return symops[sym_no]; } //! get n'th primitive symop (identical to symop(sym_no)) inline const Symop& primitive_symop( const int& sym_no ) const { return symops[sym_no]; } //! get n'th inversion symop (0...1 max) inline const Symop& inversion_symop( const int& sym_no ) const { return symops[nsymn*sym_no]; } //! get n'th centering symop (0...3 max) inline const Symop& centering_symop( const int& sym_no ) const { return symops[nsymp*sym_no]; } //! get the order of rotational symmetry about a given axis int order_of_symmetry_about_axis( const AXIS axis ) const; //! get 'class' of reflection: multiplicity, allowed phase, absence HKL_class hkl_class( const HKL& hkl ) const; //! 
test if hkl is in default reciprocal ASU bool recip_asu( const HKL& hkl ) const; //! get symop number corresponding to the product of two symops int product_op( const int& s1, int& s2 ) const; //! get symop number corresponding to the inverse of a symop int inverse_op( const int& s ) const; //! get map ASU, upper bound Coord_frac asu_max() const; //! get map ASU, lower bound Coord_frac asu_min() const; //! test if change of hand preserves spacegroup bool invariant_under_change_of_hand() const; // inherited functions listed for documentation purposes //-- int spacegroup_number() const; //-- String symbol_hall() const; //-- String symbol_hm() const; //! return the Laue group symbol String symbol_laue() const; //! Return P1 spacegroup static Spacegroup p1() { return Spacegroup( P1 ); } //! Return null spacegroup static Spacegroup null() { return Spacegroup( Null ); } void debug() const; private: ObjectCache::Reference cacheref; //!< object cache reference const Symop* symops; //!< fast access ptr const Isymop* isymops; //!< fast access ptr data::ASUfn asufn; //!< fast access ptr int nsym, nsymn, nsymi, nsymc, nsymp; //!< fast access copies }; } // namespace clipper #endif ---------- components: Interpreter Core files: cell.h messages: 289245 nosy: Tristan Croll priority: normal severity: normal status: open title: Previously-working SWIG code fails in Python 3.6 versions: Python 3.6 Added file: http://bugs.python.org/file46711/cell.h _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 12:50:46 2017 From: report at bugs.python.org (Michael) Date: Wed, 08 Mar 2017 17:50:46 +0000 Subject: [New-bugs-announce] [issue29759] Deadlock in multiprocessing.pool.Pool on terminate Message-ID: <1488995446.21.0.189589252767.issue29759@psf.upfronthosting.co.za> New submission from Michael: Following code snippet causes a deadlock on Linux: """ import multiprocessing.pool import signal def 
signal_handler(signum, frame): pass if __name__ == '__main__': signal.signal(signal.SIGTERM, signal_handler) pool = multiprocessing.pool.Pool(processes=1) pool.terminate() # alternatively - raise Exception("EXCEPTION") """ The reason is that the termination code starts before the worker processes are fully initialized. Here, the parent process acquires a forever-lock: """ @staticmethod def _help_stuff_finish(inqueue, task_handler, size): # task_handler may be blocked trying to put items on inqueue util.debug('removing tasks from inqueue until task handler finished') inqueue._rlock.acquire() < ----------------- while task_handler.is_alive() and inqueue._reader.poll(): inqueue._reader.recv() time.sleep(0) """ And then the worker processes get stuck here: """ def worker(...): while maxtasks is None or (maxtasks and completed < maxtasks): try: task = get() < ----------------- trying to acquire the same lock except (EOFError, OSError): util.debug('worker got EOFError or OSError -- exiting') break """ What's going on then? Since the default process start method is 'fork', worker subprocesses inherit the parent's signal handler. Trying to terminate the workers from _terminate_pool() therefore has no effect. Finally, the processes enter a deadlock when the parent join()-s the workers. 
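A common workaround for the inherited-handler part is to restore the default SIGTERM disposition in each worker via Pool's `initializer` argument. This is only a sketch: it does not close the race the report describes (terminate() can still run before a worker's initializer does), but it keeps the parent's no-op handler out of the children:

```python
import multiprocessing.pool
import signal

def worker_init():
    """Runs in each freshly forked worker: drop the SIGTERM handler inherited
    from the parent so the pool's own termination signals work normally."""
    signal.signal(signal.SIGTERM, signal.SIG_DFL)

# Usage in the parent (after installing its own SIGTERM handler):
#   pool = multiprocessing.pool.Pool(processes=1, initializer=worker_init)
#   pool.terminate()
#   pool.join()
```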
---------- components: Library (Lib) messages: 289248 nosy: mapozyan priority: normal severity: normal status: open title: Deadlock in multiprocessing.pool.Pool on terminate versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 14:32:09 2017 From: report at bugs.python.org (Matt Bogosian) Date: Wed, 08 Mar 2017 19:32:09 +0000 Subject: [New-bugs-announce] [issue29760] tarfile chokes on reading .tar file with no entries (but does fine if the same file is bzip2'ed) Message-ID: <1489001529.94.0.345849682069.issue29760@psf.upfronthosting.co.za> New submission from Matt Bogosian: It looks like there's a problem examining ``.tar`` files with no entries: ``` $ # ================================================================== $ # Extract test cases (attached to this bug report) $ tar xpvf tarfail.tar.bz2 x tarfail/ x tarfail/tarfail.py x tarfail/test.tar x tarfail/test.tar.bz2 $ cd tarfail $ # ================================================================== $ # Note that test.tar.bz2 is just test.tar, but bzip2'ed: $ bzip2 -c test.tar | openssl dgst -sha256 ; openssl dgst -sha256 test.tar.bz2 f4fad25a0e7a451ed906b76846efd6d2699a65b40795b29553addc35bf9a75c8 SHA256(test.tar.bz2)= f4fad25a0e7a451ed906b76846efd6d2699a65b40795b29553addc35bf9a75c8 $ wc -c test.tar* # these are not empty files 10240 test.tar 46 test.tar.bz2 10286 total $ tar tpvf test.tar # no entries $ tar tpvf test.tar.bz2 # no entries $ # ================================================================== $ # test.tar.bz2 works, but test.tar causes problems (tested in 2.7, $ # 3.5, and 3.6): $ python2.7 tarfail.py opening /?/tarfail/test.tar.bz2 opening /?/tarfail/test.tar E ====================================================================== ERROR: test_next (__main__.TestTarFileNext) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"tarfail.py", line 29, in test_next next_info = tar_file.next() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2350, in next self.fileobj.seek(self.offset - 1) IOError: [Errno 22] Invalid argument ---------------------------------------------------------------------- Ran 1 test in 0.005s FAILED (errors=1) $ python3.5 tarfail.py opening /?/tarfail/test.tar.bz2 opening /?/tarfail/test.tar E ====================================================================== ERROR: test_next (__main__.TestTarFileNext) ---------------------------------------------------------------------- Traceback (most recent call last): File "tarfail.py", line 29, in test_next next_info = tar_file.next() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tarfile.py", line 2273, in next self.fileobj.seek(self.offset - 1) OSError: [Errno 22] Invalid argument ---------------------------------------------------------------------- Ran 1 test in 0.066s FAILED (errors=1) $ python3.6 tarfail.py opening /?/tarfail/test.tar.bz2 opening /?/tarfail/test.tar E ====================================================================== ERROR: test_next (__main__.TestTarFileNext) ---------------------------------------------------------------------- Traceback (most recent call last): File "tarfail.py", line 29, in test_next next_info = tar_file.next() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/tarfile.py", line 2279, in next self.fileobj.seek(self.offset - 1) OSError: [Errno 22] Invalid argument ---------------------------------------------------------------------- Ran 1 test in 0.090s FAILED (errors=1) ``` Here's the issue (as far as I can tell): ``` $ ipdb tarfail.py > /?/tarfail/tarfail.py(3)() 2 ----> 3 from __future__ import ( 4 absolute_import, division, print_function, unicode_literals, ipdb> b /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py:2350 Breakpoint 1 at 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py:2350 ipdb> c opening /?/tarfail/test.tar.bz2 > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py(2350)next() 2349 if self.offset != self.fileobj.tell(): 1> 2350 self.fileobj.seek(self.offset - 1) 2351 if not self.fileobj.read(1): ipdb> self.fileobj ipdb> self.offset, self.fileobj.tell(), self.offset - 1 (0, 512, -1) ipdb> c opening /?/tarfail/test.tar > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py(2350)next() 2349 if self.offset != self.fileobj.tell(): 1> 2350 self.fileobj.seek(self.offset - 1) 2351 if not self.fileobj.read(1): ipdb> self.fileobj ipdb> self.offset, self.fileobj.tell(), self.offset - 1 (0, 512, -1) ipdb> c E ====================================================================== ERROR: test_next (__main__.TestTarFileNext) ---------------------------------------------------------------------- Traceback (most recent call last): File "tarfail.py", line 29, in test_next next_info = tar_file.next() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2350, in next self.fileobj.seek(self.offset - 1) IOError: [Errno 22] Invalid argument ---------------------------------------------------------------------- Ran 1 test in 38.300s FAILED (errors=1) The program exited via sys.exit(). Exit status: True > /?/tarfail/tarfail.py(3)() 2 ----> 3 from __future__ import ( 4 absolute_import, division, print_function, unicode_literals, ipdb> EOF ``` Apparently, ``bz2.BZ2File`` allows seeking to pre-0 (negative) values, whereas more primitive files are not so forgiving. 
The offending line looks like it can be traced back through these blame views: https://github.com/python/cpython/blame/2.7/Lib/tarfile.py#L2350 https://github.com/python/cpython/blame/3.3/Lib/tarfile.py#L2252 https://github.com/python/cpython/blame/3.4/Lib/tarfile.py#L2252 https://github.com/python/cpython/blame/3.5/Lib/tarfile.py#L2273 https://github.com/python/cpython/blame/3.6/Lib/tarfile.py#L2286 (My apologies for not catching this sooner.) ---------- components: Library (Lib) files: tarfail.tar.bz2 messages: 289253 nosy: posita priority: normal severity: normal status: open title: tarfile chokes on reading .tar file with no entries (but does fine if the same file is bzip2'ed) type: crash versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 Added file: http://bugs.python.org/file46712/tarfail.tar.bz2 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 16:08:17 2017 From: report at bugs.python.org (Jorge Cisneros) Date: Wed, 08 Mar 2017 21:08:17 +0000 Subject: [New-bugs-announce] [issue29761] Wrong size of c_ulong in linux, windows version is fine Message-ID: <1489007297.65.0.421023863942.issue29761@psf.upfronthosting.co.za> New submission from Jorge Cisneros: On the Linux build of version 2.7.12, the size of c_ulong and c_ulonglong is the same, which is not correct:

Python 2.7.12 (default, Mar 6 2017, 18:06:04) [GCC 4.9.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes import *
>>> print sizeof(c_ulong),sizeof(c_ulonglong)
8 8

Doing the same on Windows, the results are correct.

Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes import *
>>> print sizeof(c_ulong),sizeof(c_ulonglong)
4 8
>>>

---------- components: ctypes messages: 289257 nosy: Jorge Cisneros priority: normal severity: normal status: open title: Wrong size of c_ulong in linux, windows version is fine type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 16:14:36 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 08 Mar 2017 21:14:36 +0000 Subject: [New-bugs-announce] [issue29762] Use "raise from None" Message-ID: <1489007676.29.0.138385616596.issue29762@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Sometimes, after catching an exception, a new exception of a more appropriate type and with a more appropriate message is raised. The initial exception is often not relevant to the final exception; it is raised only because EAFP is used rather than LBYL. It should be excluded from the traceback by using "raise ... from None". This idiom is already in use; the following PR makes it used in more cases. ---------- components: Library (Lib) messages: 289258 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use "raise from None" type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 16:39:39 2017 From: report at bugs.python.org (Brett Cannon) Date: Wed, 08 Mar 2017 21:39:39 +0000 Subject: [New-bugs-announce] [issue29763] test_site failing on AppVeyor Message-ID: <1489009179.7.0.320606146583.issue29763@psf.upfronthosting.co.za> New submission from Brett Cannon: E.g. https://ci.appveyor.com/project/python/cpython/build/3.7.0a0.142. This looks to be the last consistent failure on AppVeyor.
---------- components: Windows messages: 289260 nosy: brett.cannon, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_site failing on AppVeyor versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 19:46:30 2017 From: report at bugs.python.org (Alexey Trenikhin) Date: Thu, 09 Mar 2017 00:46:30 +0000 Subject: [New-bugs-announce] [issue29764] PyUnicode_Decode with encoding utf8 crashes Message-ID: <1489020390.12.0.948063239633.issue29764@psf.upfronthosting.co.za> New submission from Alexey Trenikhin:

#include <Python.h>

int main() {
    PyUnicode_Decode("abcdef", 4, "utf_8", "ignore");
    return 0;
}

crashes on linux and Windows (but works fine with encoding "utf-8"). ---------- components: Unicode files: test.c messages: 289261 nosy: Alexey Trenikhin, ezio.melotti, haypo priority: normal severity: normal status: open title: PyUnicode_Decode with encoding utf8 crashes type: crash versions: Python 2.7 Added file: http://bugs.python.org/file46713/test.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 20:00:20 2017 From: report at bugs.python.org (ada) Date: Thu, 09 Mar 2017 01:00:20 +0000 Subject: [New-bugs-announce] [issue29765] 2.7.12 compile error from ssl related Message-ID: <1489021220.48.0.868281972312.issue29765@psf.upfronthosting.co.za> New submission from ada: Download the python version 2.7.12 source code from the official web site.
Compile the python code by the following steps: sudo ./configure, sudo make, sudo make install. But from make, I get the following errors:

./Modules/_ssl.c: In function '_create_tuple_for_X509_NAME':
./Modules/_ssl.c:684: error: dereferencing pointer to incomplete type
./Modules/_ssl.c:701: error: dereferencing pointer to incomplete type
./Modules/_ssl.c: In function '_get_peer_alt_names':
./Modules/_ssl.c:804: error: dereferencing pointer to incomplete type
./Modules/_ssl.c:809: error: dereferencing pointer to incomplete type
./Modules/_ssl.c:815: error: dereferencing pointer to incomplete type
./Modules/_ssl.c:876: warning: 'ASN1_STRING_data' is deprecated (declared at /usr/local/include/openssl/asn1.h:553)
./Modules/_ssl.c: In function '_get_crl_dp':
./Modules/_ssl.c:1029: error: dereferencing pointer to incomplete type
./Modules/_ssl.c: In function 'PySSL_compression':
./Modules/_ssl.c:1446: error: dereferencing pointer to incomplete type
./Modules/_ssl.c:1448: error: dereferencing pointer to incomplete type
./Modules/_ssl.c: In function 'context_new':
./Modules/_ssl.c:2000: warning: 'TLSv1_method' is deprecated (declared at /usr/local/include/openssl/ssl.h:1596)
./Modules/_ssl.c:2003: warning: 'TLSv1_1_method' is deprecated (declared at /usr/local/include/openssl/ssl.h:1602)

Please help me check where I made a mistake. The related information is:

Linux version: Linux root:2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

OpenSSL version:
[root at root Python-2.7.12]# rpm -aq | grep openssl
openssl-static-1.0.1e-48.el6_8.4.x86_64
openssl-1.0.1e-48.el6_8.4.x86_64
openssl-devel-1.0.1e-48.el6_8.4.x86_64

COMPILER version:
[root at root Python-2.7.12]# yum list installed | grep -i gcc
gcc.x86_64 4.4.7-17.el6
gcc-c++.x86_64 4.4.7-17.el6
gcc-gfortran.x86_64 4.4.7-17.el6
libgcc.i686 4.4.7-17.el6
libgcc.x86_64 4.4.7-17.el6

Thanks in advance.
---------- assignee: christian.heimes components: SSL messages: 289262 nosy: ada, christian.heimes priority: normal severity: normal status: open title: 2.7.12 compile error from ssl related type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 20:15:04 2017 From: report at bugs.python.org (Hanno Schlichting) Date: Thu, 09 Mar 2017 01:15:04 +0000 Subject: [New-bugs-announce] [issue29766] --with-lto still implied by --enable-optimizations in Python 2.7 Message-ID: <1489022104.7.0.601332376307.issue29766@psf.upfronthosting.co.za> New submission from Hanno Schlichting: I think the fix for issue28032 wasn't applied correctly to the 2.7 branch. Compare the change in Python 2.7: https://github.com/python/cpython/commit/9cbfa79111e7152231556a21af90a220b72ed086#diff-e2d5a00791bce9a01f99bc6fd613a39dL6425 vs. for example Python 3.5: https://github.com/python/cpython/commit/14c7f71150c94ca35ca913b15c3d0cd236691ed6#diff-e2d5a00791bce9a01f99bc6fd613a39dL6567 In Python 3.5 the Py_LTO='true' line was before the Darwin block and got removed. In Python 2.7 the line was after the block and was left in place. I'm guessing this was simply a mistake while backporting the change. ---------- components: Build messages: 289263 nosy: Hanno Schlichting, gregory.p.smith priority: normal severity: normal status: open title: --with-lto still implied by --enable-optimizations in Python 2.7 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 8 22:54:20 2017 From: report at bugs.python.org (Shuo Li) Date: Thu, 09 Mar 2017 03:54:20 +0000 Subject: [New-bugs-announce] [issue29767] build python failed on test_socket due to unused_port is actually used.
Message-ID: <1489031660.84.0.964088839039.issue29767@psf.upfronthosting.co.za> New submission from Shuo Li: I am running a Debian system and trying to build CPython 3.6 from source. When I run make altinstall, it fails on test_socket, reporting that the cli attribute is missing. After some troubleshooting, it seems support.get_unused_port() is not reliable. When I modified it to return a port I am sure no one is using, the build succeeded. ---------- components: Build messages: 289268 nosy: Shuo Li priority: normal severity: normal status: open title: build python failed on test_socket due to unused_port is actually used. versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 00:43:24 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Mar 2017 05:43:24 +0000 Subject: [New-bugs-announce] [issue29768] Fix expat version check Message-ID: <1489038204.73.0.479111635922.issue29768@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The compile-time check for the expat version added in issue14234 doesn't work with expat 3.0.0. The following PR fixes this.
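The pitfall such version checks run into is generic: testing the major and minor components separately pins the check to one major version, so a later major release fails it, while comparing whole version tuples gets the boundary right. An illustration of the failure mode in Python (the real check is a C preprocessor expression over expat's version macros; the functions here are hypothetical):

```python
def naive_at_least_2_1(version):
    # broken shape: requires major == 2, so major version 3 is rejected
    major, minor, micro = version
    return major == 2 and minor >= 1

def at_least(version, minimum):
    # robust shape: tuple comparison is lexicographic
    return version >= minimum

assert not naive_at_least_2_1((3, 0, 0))   # expat 3.0.0 wrongly rejected
assert at_least((3, 0, 0), (2, 1, 0))      # correctly accepted
assert at_least((2, 1, 0), (2, 1, 0))
assert not at_least((2, 0, 1), (2, 1, 0))
```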
---------- components: Extension Modules, XML messages: 289273 nosy: gregory.p.smith, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Fix expat version check type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 06:46:10 2017 From: report at bugs.python.org (Wolfgang Maier) Date: Thu, 09 Mar 2017 11:46:10 +0000 Subject: [New-bugs-announce] [issue29769] pkgutil._iter_file_finder_modules should not be fooled by *.py folders Message-ID: <1489059970.36.0.419253660084.issue29769@psf.upfronthosting.co.za> New submission from Wolfgang Maier: The current implementation of _iter_file_finder_modules treats folders with a valid Python module extension as modules (e.g. it would report a *folder* xy.py as a module xy). As a result, e.g., pydoc.apropos('') fails if such a folder is found anywhere on sys.path. I'm attaching a patch that fixes this and also brings a few minor improvements (like using a set instead of a dict with dummy 1 values, and reusing the function in ImpImporter). However, I have a question about it (which is also the reason why I didn't turn this into a PR right away): in addition to checking that an item detected as a module is not a directory, I think it would be good to also check that an __init__ module inside a possible package really is a file. If I uncomment the respective check in the patch, though, I get a test_pydoc failure because the test creates a package directory with no access to the contained files' attributes. So even though there is an __init__.py file in the package dir, the isfile() check fails. I think that should, in fact, happen and the pydoc test is wrong, but apparently whoever wrote the test had a different opinion. Any thoughts?
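The reported confusion is easy to reproduce, and the isfile() guard the patch proposes can be sketched in isolation (iter_dir_modules below is a simplified, hypothetical stand-in for _iter_file_finder_modules, not the attached patch):

```python
import inspect
import os
import tempfile

def iter_dir_modules(path):
    # An entry only counts as a module if it is a regular *file* whose
    # name carries a recognized module suffix.
    for fn in sorted(os.listdir(path)):
        modname = inspect.getmodulename(fn)
        if modname and modname != '__init__' and os.path.isfile(os.path.join(path, fn)):
            yield modname

with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, 'xy.py'))   # a *folder* with a module extension
    with open(os.path.join(d, 'real.py'), 'w'):
        pass
    # Without the isfile() check, 'xy' would be reported as a module too.
    found = list(iter_dir_modules(d))

assert found == ['real']
```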
---------- components: Library (Lib) files: pkgutil.patch keywords: patch messages: 289285 nosy: ncoghlan, wolma priority: normal severity: normal status: open title: pkgutil._iter_file_finder_modules should not be fooled by *.py folders type: behavior versions: Python 3.7 Added file: http://bugs.python.org/file46714/pkgutil.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 07:49:52 2017 From: report at bugs.python.org (Wolfgang Langner) Date: Thu, 09 Mar 2017 12:49:52 +0000 Subject: [New-bugs-announce] [issue29770] Executable help output (--help) at commandline is wrong for option -B Message-ID: <1489063792.65.0.00439171993212.issue29770@psf.upfronthosting.co.za> New submission from Wolfgang Langner: The output of "python --help" for the option -B is wrong: it still mentions the old .pyo files, but those were removed. The output is:

-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x

It should be:

-B : don't write .pyc files on import; also PYTHONDONTWRITEBYTECODE=x

---------- assignee: docs at python components: Documentation messages: 289287 nosy: docs at python, tds333 priority: normal severity: normal status: open title: Executable help output (--help) at commandline is wrong for option -B type: enhancement versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 08:10:10 2017 From: report at bugs.python.org (Jack) Date: Thu, 09 Mar 2017 13:10:10 +0000 Subject: [New-bugs-announce] [issue29771] An email and MIME handling package - Add support to send CC of email Message-ID: <1489065010.61.0.0444038371691.issue29771@psf.upfronthosting.co.za> New submission from Jack: Currently, using the package, we can only define emails in the 'TO', as shown here: https://docs.python.org/2/library/email-examples.html#email-examples There is no support for
an email to be sent as CC or BCC, which is a useful feature in many emails. Please see if this can be added. ---------- messages: 289291 nosy: Nonickname priority: normal severity: normal status: open title: An email and MIME handling package - Add support to send CC of email type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 09:44:56 2017 From: report at bugs.python.org (Kinebuchi Tomohiko) Date: Thu, 09 Mar 2017 14:44:56 +0000 Subject: [New-bugs-announce] [issue29772] Unintentionally deleted line on library/collections.rst Message-ID: <1489070696.66.0.394233240398.issue29772@psf.upfronthosting.co.za> New submission from Kinebuchi Tomohiko: The last part of the "Counter objects" section has a strange line: "in Smalltalk." https://docs.python.org/2.7/library/collections.html#counter-objects The line just before "in Smalltalk" may have been deleted by accident. The related issue is bpo-25910 [1]_ and the applied patch is this [2]_, although the intended patch might look like this [3]_. .. [1] http://bugs.python.org/issue25910 .. [2] https://hg.python.org/cpython/rev/14e00e7e4d51#l15.7 patch for the 2.7 branch .. [3] https://hg.python.org/cpython/rev/ce5ef48b5140#l21.7 patch for the 3.5 branch I will create a pull request.
---------- assignee: docs at python components: Documentation messages: 289300 nosy: cocoatomo, docs at python priority: normal severity: normal status: open title: Unintentionally deleted line on library/collections.rst versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 10:09:02 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Mar 2017 15:09:02 +0000 Subject: [New-bugs-announce] [issue29773] Additional float-from-string tests Message-ID: <1489072142.51.0.122050091916.issue29773@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The following PR adds more corner cases to the test for calling float() with an invalid string. ---------- components: Tests messages: 289301 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Additional float-from-string tests type: enhancement versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 10:27:35 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Mar 2017 15:27:35 +0000 Subject: [New-bugs-announce] [issue29774] Improve zipfile handling of corrupted extra field Message-ID: <1489073255.72.0.694063401806.issue29774@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The zipfile module can raise struct.error when processing a corrupted extra field. This issue was partially resolved by issue14315. The following PR converts struct.error to BadZipFile in the remaining cases.
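A ZIP extra field is a sequence of little-endian (tag, size, data) records, so a corrupted size can claim more data than the buffer holds. A sketch of the conversion pattern such a fix applies, translating the low-level parse failure into the module's own BadZipFile (illustrative code, not the actual patch; parse_extra is a hypothetical helper):

```python
import struct
from zipfile import BadZipFile

def parse_extra(extra):
    """Split a ZIP extra field into (tag, data) records."""
    records = []
    while extra:
        if len(extra) < 4:
            raise BadZipFile("Corrupt extra field (truncated record header)")
        tag, size = struct.unpack('<HH', extra[:4])
        if size > len(extra) - 4:
            raise BadZipFile("Corrupt extra field %04x (size exceeds data)" % tag)
        records.append((tag, extra[4:4 + size]))
        extra = extra[4 + size:]
    return records

good = struct.pack('<HH', 0x5455, 2) + b'ab'
assert parse_extra(good) == [(0x5455, b'ab')]

bad = struct.pack('<HH', 0x5455, 100)   # header claims 100 data bytes
try:
    parse_extra(bad)
    caught = False
except BadZipFile:
    caught = True
assert caught
```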
---------- components: Library (Lib) messages: 289302 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Improve zipfile handling of corrupted extra field type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 11:25:23 2017 From: report at bugs.python.org (Paul Moore) Date: Thu, 09 Mar 2017 16:25:23 +0000 Subject: [New-bugs-announce] [issue29775] There appears to be a spurious ^0 in sys.version for 3.6.1rc1 Message-ID: <1489076723.99.0.3863763672.issue29775@psf.upfronthosting.co.za> New submission from Paul Moore: The 3.6.1rc1 build seems to have a spurious "^0" at the end of the version, before the VCS ID ("3.6.1rc1^0"):

>py -3.6
Python 3.6.1rc1 (v3.6.1rc1^0:e0fbe5feee4f9c00f09eb9659c2182183036261a, Mar 4 2017, 20:00:12) [MSC v.1900 64 bit (AMD64)] on win32
>>> sys.version
'3.6.1rc1 (v3.6.1rc1^0:e0fbe5feee4f9c00f09eb9659c2182183036261a, Mar 4 2017, 20:00:12) [MSC v.1900 64 bit (AMD64)]'

It's not showing in sys.version_info, so it's probably only cosmetic. Also, I don't think this is really a release blocker; just marking it as such so it gets checked. (I wonder if it's an artifact of the GitHub migration: git uses "^0" after a revision to mean the commit itself, so a revision expression may have leaked into the version string.) I've only checked on Windows. I don't know if it's the same on Unix. If it's deemed cosmetic, I'm happy for it to be downgraded to non-blocking, or even closed as not an issue. Just wanted to flag it up in case it's a symptom of something deeper.
---------- assignee: ned.deily components: Interpreter Core messages: 289305 nosy: ned.deily, paul.moore, steve.dower priority: release blocker severity: normal status: open title: There appears to be a spurious ^0 in sys.version for 3.6.1rc1 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 12:10:02 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Mar 2017 17:10:02 +0000 Subject: [New-bugs-announce] [issue29776] Modernize properties Message-ID: <1489079402.47.0.435152463031.issue29776@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The following PR updates Python sources to use the new, shinier syntax for properties. It replaces the old code

    def _foo(self):
        ...
    def _set_foo(self, value):
        ...
    foo = property(_foo, _set_foo)

with the new code

    @property
    def foo(self):
        ...
    @foo.setter
    def foo(self, value):
        ...

Decorator syntax was added in Python 2.4 (and property.setter in Python 2.6). ---------- components: Library (Lib) messages: 289309 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Modernize properties type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 15:35:40 2017 From: report at bugs.python.org (Alan Evangelista) Date: Thu, 09 Mar 2017 20:35:40 +0000 Subject: [New-bugs-announce] [issue29777] argparse arguments in main parser hide an argument in subparser Message-ID: <1489091740.64.0.424181036659.issue29777@psf.upfronthosting.co.za> New submission from Alan Evangelista: If you have an argument named -- in a subparser and two arguments named -- in the main parser and call the Python executable with python -- argparse fails with: error: ambiguous option: -- could match --, -- This probably happens due to how the argument abbreviation parsing is implemented.
Is it possible to support disabling argument abbreviation in Python 2.7, as it is done in Python 3 (via allow_abbrev=False)? ---------- components: Library (Lib) messages: 289327 nosy: Alan Evangelista priority: normal severity: normal status: open title: argparse arguments in main parser hide an argument in subparser versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 9 23:58:18 2017 From: report at bugs.python.org (Tibor Csonka) Date: Fri, 10 Mar 2017 04:58:18 +0000 Subject: [New-bugs-announce] [issue29778] _Py_CheckPython3 uses uninitialized dllpath when embedder sets module path with Py_SetPath Message-ID: <1489121898.66.0.224917416789.issue29778@psf.upfronthosting.co.za> New submission from Tibor Csonka: When Py_SetPath is used to set up the module path at initialization, Py_SetPath causes getpathp.c::calculate_path not to be called. However, calculate_path is the only function calling getpathp.c::get_progpath, which initializes the local dllpath static variable. Later the interpreter tries to load python3.dll and uses dllpath, which is empty by default. This empty path gets joined with \python3.dll and \DLLs\python3.dll, which is passed to LoadLibraryExW, resulting in python3.dll being loaded from the root of the Windows drive the application is running from. The behavior was reproduced using PyInstaller, but it is present in any embedding application that uses Py_SetPath.
---------- components: Windows messages: 289334 nosy: Tibor Csonka, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: _Py_CheckPython3 uses uninitialized dllpath when embedder sets module path with Py_SetPath versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 02:40:03 2017 From: report at bugs.python.org (Levi Sabah) Date: Fri, 10 Mar 2017 07:40:03 +0000 Subject: [New-bugs-announce] [issue29779] New environment variable PYTHONHISTORY Message-ID: <1489131603.26.0.135268292164.issue29779@psf.upfronthosting.co.za> Changes by Levi Sabah <0xl3vi at gmail.com>: ---------- nosy: 0xl3vi priority: normal pull_requests: 488 severity: normal status: open title: New environment variable PYTHONHISTORY versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 04:47:54 2017 From: report at bugs.python.org (Keyvan Hedayati) Date: Fri, 10 Mar 2017 09:47:54 +0000 Subject: [New-bugs-announce] [issue29780] Interpreter hang on self._epoll.poll(timeout, max_ev) Message-ID: <1489139274.42.0.877186515769.issue29780@psf.upfronthosting.co.za> New submission from Keyvan Hedayati: Hello, We have an issue with our application: it randomly hangs and doesn't respond to new requests. At first we thought the problem lay within our code, but after attaching to the hung process with gdb I couldn't find any code related to our application, so I thought it might be a Python bug. Here is the info extracted from gdb (Python 3.5.2): https://gist.github.com/k1-hedayati/96e28bf590c4392840650902cb5eceda We run multiple instances of our application and they are fine for a couple of days/hours, but suddenly one of them starts hanging and the others follow, and unfortunately we can't reproduce the problem.
I'll be glad to receive any advice on how to solve or debug this issue. ---------- components: asyncio messages: 289344 nosy: gvanrossum, k1.hedayati, yselivanov priority: normal severity: normal status: open title: Interpreter hang on self._epoll.poll(timeout, max_ev) type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 05:03:04 2017 From: report at bugs.python.org (Cory Benfield) Date: Fri, 10 Mar 2017 10:03:04 +0000 Subject: [New-bugs-announce] [issue29781] SSLObject.version returns incorrect value before handshake. Message-ID: <1489140184.21.0.804659104334.issue29781@psf.upfronthosting.co.za> New submission from Cory Benfield: The SSLObject object from the ssl module has a version() method that is undocumented. A reasonable assumption for the behaviour of that method is that it would follow the behaviour of the same method on SSLSocket(), which has the following documentation: > Return the actual SSL protocol version negotiated by the connection as > a string, or None if no secure connection is established. As of this > writing, possible return values include "SSLv2", "SSLv3", "TLSv1", > "TLSv1.1" and "TLSv1.2". Recent OpenSSL versions may define more return > values. However, SSLObject does not follow that behaviour:

Python 3.6.0 (default, Jan 18 2017, 18:08:34) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ctx = ssl.create_default_context()
>>> in_bio = ssl.MemoryBIO()
>>> out_bio = ssl.MemoryBIO()
>>> buffers = ctx.wrap_bio(in_bio, out_bio)
>>> buffers.version()
'TLSv1.2'

That is, an SSLObject that does not have a TLS session established will incorrectly report that it is using a TLS version. This method should return None in this case.
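On releases where this has been fixed, the repro turns into a quick check (a sketch; server_hostname is supplied because recent versions refuse wrap_bio() without it when hostname checking is enabled, and 'example.com' is illustrative and never actually contacted):

```python
import ssl

ctx = ssl.create_default_context()
obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                   server_hostname='example.com')
# No handshake has been performed, so no protocol version is negotiated.
assert obj.version() is None
```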
---------- assignee: christian.heimes components: SSL messages: 289346 nosy: Lukasa, christian.heimes priority: normal severity: normal status: open title: SSLObject.version returns incorrect value before handshake. versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 05:53:57 2017 From: report at bugs.python.org (Niklas Fiekas) Date: Fri, 10 Mar 2017 10:53:57 +0000 Subject: [New-bugs-announce] [issue29782] Use __builtin_clzl for bits_in_digit if available Message-ID: <1489143237.03.0.753738364063.issue29782@psf.upfronthosting.co.za> New submission from Niklas Fiekas: Baseline performance (9e6ac83acae):

$ ./python -m timeit "12345678 == 12345678.0"
5000000 loops, best of 5: 40 nsec per loop
$ ./python -m timeit "1 == 1.0"
10000000 loops, best of 5: 38.8 nsec per loop
$ ./python -m timeit "(1234578987654321).bit_length()"
10000000 loops, best of 5: 39.4 nsec per loop

Upcoming PR:

$ ./python -m timeit "12345678 == 12345678.0"
10000000 loops, best of 5: 34.3 nsec per loop
$ ./python -m timeit "1 == 1.0"
10000000 loops, best of 5: 34.4 nsec per loop
$ ./python -m timeit "(1234578987654321).bit_length()"
10000000 loops, best of 5: 36.4 nsec per loop

---------- components: Interpreter Core messages: 289353 nosy: niklasf priority: normal severity: normal status: open title: Use __builtin_clzl for bits_in_digit if available type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 09:17:29 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 10 Mar 2017 14:17:29 +0000 Subject: [New-bugs-announce] [issue29783] Modify codecs.open() to use the io module instead of codecs.StreamReaderWriter() Message-ID: <1489155449.02.0.609817047168.issue29783@psf.upfronthosting.co.za> New submission from STINNER Victor: The codecs.StreamReaderWriter() class
still has old unfixed issues like issue #12508 (open since 2011). This issue is even seen as a security vulnerability by the owasp-pysec project: https://github.com/ebranca/owasp-pysec/wiki/Unicode-string-silently-truncated I propose to modify codecs.open() to reuse the io module: call io.open() with newline=''. The io module is now battle-tested and handles many corner cases of incremental codecs with multibyte encodings well. With this change, codecs.open() cannot be used with non-text encodings... but I'm not sure that this feature ever worked in Python 3:

$ ./python -bb
Python 3.7.0a0
>>> import codecs
>>> f = codecs.open('test', 'w', encoding='rot13')
>>> f.write('hello')
TypeError: a bytes-like object is required, not 'str'
>>> f.write(b'hello')
TypeError: a bytes-like object is required, not 'dict'

The next step would be to deprecate the codecs.StreamReaderWriter class and codecs.open(). But my latest attempt to deprecate them was PEP 400 and it wasn't a full success, so I now prefer to move step by step :-)

Attached PR:
* Modify codecs.open() to use io.open()
* Remove "; use codecs.open() to handle arbitrary codecs" from the io.open() and _pyio.open() error messages
* Replace codecs.open() with open() at various places

---------- components: Unicode messages: 289362 nosy: ezio.melotti, haypo, lemburg, serhiy.storchaka priority: normal severity: normal status: open title: Modify codecs.open() to use the io module instead of codecs.StreamReaderWriter() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 09:45:51 2017 From: report at bugs.python.org (Maxime Buquet) Date: Fri, 10 Mar 2017 14:45:51 +0000 Subject: [New-bugs-announce] [issue29784] Erroneous link in shutil.copy description Message-ID: <1489157151.86.0.545519035584.issue29784@psf.upfronthosting.co.za> New submission from Maxime Buquet:
https://docs.python.org/3/library/shutil.html#shutil.copy The link to "copy()" in the description seems to be pointing to the copy module, but I suppose it was meant to point at shutil.copy. ---------- assignee: docs at python components: Documentation messages: 289370 nosy: docs at python, pep. priority: normal severity: normal status: open title: Erroneous link in shutil.copy description _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 09:52:38 2017 From: report at bugs.python.org (Maxime Buquet) Date: Fri, 10 Mar 2017 14:52:38 +0000 Subject: [New-bugs-announce] [issue29785] Registration link sent via email by the tracker is http Message-ID: <1489157558.36.0.800169235988.issue29785@psf.upfronthosting.co.za> New submission from Maxime Buquet: The link[1] sent via email by the tracker for registration confirmation is http, whereas https is already set up on the tracker itself. Would it be possible to change it to https? [1] http://bugs.python.org/?@action=confrego&otk=TOKEN ---------- components: Demos and Tools messages: 289372 nosy: pep. priority: normal severity: normal status: open title: Registration link sent via email by the tracker is http _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 10:15:39 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 10 Mar 2017 15:15:39 +0000 Subject: [New-bugs-announce] [issue29786] asyncio.wrap_future() is not documented Message-ID: <1489158939.82.0.933594898276.issue29786@psf.upfronthosting.co.za> New submission from STINNER Victor: The following asyncio function is not documented. Is this deliberate? The function is exported in the asyncio module.
def wrap_future(future, *, loop=None):
    """Wrap concurrent.futures.Future object."""

---------- assignee: docs at python components: Documentation, asyncio messages: 289376 nosy: docs at python, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: asyncio.wrap_future() is not documented versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 10:50:57 2017 From: report at bugs.python.org (Ulrich Petri) Date: Fri, 10 Mar 2017 15:50:57 +0000 Subject: [New-bugs-announce] [issue29787] Internal importlib frames visible when module imported by import_module throws exception Message-ID: <1489161057.85.0.765402774015.issue29787@psf.upfronthosting.co.za> New submission from Ulrich Petri: Importing a module that raises an exception on import through `importlib.import_module()` causes importlib to not strip its internal frames from the traceback. Minimal example:

--a.py--
import importlib
importlib.import_module("b")
--a.py--

--b.py--
raise Exception()
--b.py--

#~ python3.6 a.py
Traceback (most recent call last):
  File "a.py", line 3, in
    importlib.import_module("b")
  File "/Users/ulo/.pythonz/pythons/CPython-3.6.0/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 978, in _gcd_import
  File "", line 961, in _find_and_load
  File "", line 950, in _find_and_load_unlocked
  File "", line 655, in _load_unlocked
  File "", line 678, in exec_module
  File "", line 205, in _call_with_frames_removed
  File "/Users/ulo/t/b.py", line 1, in
    raise Exception()
Exception

---------- messages: 289381 nosy: ulope priority: normal severity: normal status: open title: Internal importlib frames visible when module imported by import_module throws exception type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From
report at bugs.python.org Fri Mar 10 11:13:44 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 10 Mar 2017 16:13:44 +0000 Subject: [New-bugs-announce] [issue29788] Add absolute_path option to tarfile, disabled by default Message-ID: <1489162424.66.0.632437535873.issue29788@psf.upfronthosting.co.za> New submission from STINNER Victor: I noticed that "python3 -m tarfile -x archive.tar" uses absolute paths by default, whereas the UNIX tar command doesn't by default. The UNIX tar command requires explicitly adding the --absolute-names (-P) option. I suggest adding a boolean absolute_path option to tarfile, disabled by default. Example to create such an archive. See that tar also removes "/" by default and requires -P to be passed explicitly:

$ cd $HOME # /home/haypo
$ echo TEST > test
$ tar -cf test.tar /home/haypo/test
tar: Removing leading `/' from member names
$ rm -f test.tar
$ tar -P -cf test.tar /home/haypo/test
$ rm -f test

Extracting such an archive using tar is safe *by default*:

$ mkdir z
$ cd z
$ tar -xf ~/test.tar
tar: Removing leading `/' from member names
$ find .
./home
./home/haypo
./home/haypo/test

Extracting such an archive using Python is unsafe:

$ python3 -m tarfile -e ~/test.tar
$ cat ~/test
TEST
$ pwd
/home/haypo/z

Python creates files outside the current directory, which is unsafe, whereas tar doesn't.
---------- messages: 289388 nosy: haypo priority: normal severity: normal status: open title: Add absolute_path option to tarfile, disabled by default type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 11:14:55 2017 From: report at bugs.python.org (STINNER Victor) Date: Fri, 10 Mar 2017 16:14:55 +0000 Subject: [New-bugs-announce] [issue29789] zipfile: Add absolute_path option, disabled by default Message-ID: <1489162495.01.0.911543004747.issue29789@psf.upfronthosting.co.za> New submission from STINNER Victor: Same issue as tarfile issue #29788, but on zipfile. I suggest adding a boolean absolute_path option to zipfile, disabled by default. ---------- components: Library (Lib) messages: 289389 nosy: haypo priority: normal severity: normal status: open title: zipfile: Add absolute_path option, disabled by default type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 15:53:03 2017 From: report at bugs.python.org (Ivan Anishchuk) Date: Fri, 10 Mar 2017 20:53:03 +0000 Subject: [New-bugs-announce] [issue29790] Optional use of /dev/random on linux Message-ID: <1489179183.06.0.602244031276.issue29790@psf.upfronthosting.co.za> New submission from Ivan Anishchuk: Right now the secrets module uses SystemRandom, which is hardcoded to use os.urandom(). That is fine for most users, but some have good hardware sources of entropy (or otherwise replenish the entropy pool), in which case it would be much better to use getrandom() with the GRND_RANDOM flag, i.e. to read from the /dev/random pool. Simply subclassing SystemRandom is not enough; the idea is to make it possible for every library and program to use the big entropy pool if it's available.
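For what it's worth, the per-program subclass mentioned above is small, which illustrates the mechanics even though it doesn't give the ecosystem-wide switch being asked for. A rough sketch (Linux-only; os.getrandom() and os.GRND_RANDOM exist since Python 3.6, and reads may block on kernels where the /dev/random pool can run low):

```python
import os
import random

class DevRandom(random.SystemRandom):
    """SystemRandom variant reading from the blocking /dev/random pool
    via getrandom(GRND_RANDOM).  Hypothetical illustration only."""

    @staticmethod
    def _randbytes(n):
        return os.getrandom(n, os.GRND_RANDOM)

    def random(self):
        # 53 random bits -> float in [0.0, 1.0); same construction as
        # SystemRandom.random(), but sourced from the /dev/random pool.
        return (int.from_bytes(self._randbytes(7), 'big') >> 3) * 2 ** -53

    def getrandbits(self, k):
        if k <= 0:
            raise ValueError('number of bits must be greater than zero')
        numbytes = (k + 7) // 8
        x = int.from_bytes(self._randbytes(numbytes), 'big')
        return x >> (numbytes * 8 - k)
```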
So I'm thinking it would be best to configure it with an environment variable, something like PYTHONTRUERANDOM or PYTHONDEVRANDOM. Admittedly, only a small subset of users would benefit from this, but the changes required are also small and I'm willing to do all the work here. Is there any reason this patch won't be accepted? Any preferences regarding the variable name? ---------- components: Library (Lib) messages: 289410 nosy: IvanAnishchuk priority: normal severity: normal status: open title: Optional use of /dev/random on linux type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 10 16:06:33 2017 From: report at bugs.python.org (Lucio Ricardo Montero Valenzuela) Date: Fri, 10 Mar 2017 21:06:33 +0000 Subject: [New-bugs-announce] [issue29791] print documentation: flush is also a keyword argument Message-ID: <1489179993.27.0.31165504732.issue29791@psf.upfronthosting.co.za> New submission from Lucio Ricardo Montero Valenzuela: In the print() function documentation (https://docs.python.org/3/library/functions.html#print), the first line says "Print objects to the text stream file, separated by sep and followed by end. sep, end and file, if present, must be given as keyword arguments.", but the function definition is said to be "print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)". Based on the Python user function definition grammar, the only way of passing a value to a non-star parameter that appears after a star-parameter is with the keyword (so the interpreter knows not to push the value into the star-parameter list/mapping). So the flush parameter can be set only by explicitly naming the keyword 'flush', isn't it? So the first line of the print() function documentation should say "Print objects to the text stream file, separated by sep and followed by end. sep, end, file and flush, if present, must be given as keyword arguments.".
Flush is a new parameter, so maybe you forgot to update this line of the documentation to include it. Best regards. ---------- assignee: docs at python components: Documentation messages: 289411 nosy: Lucio Ricardo Montero Valenzuela, docs at python priority: normal severity: normal status: open title: print documentation: flush is also a keyword argument type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 11 06:33:02 2017 From: report at bugs.python.org (David MacIver) Date: Sat, 11 Mar 2017 11:33:02 +0000 Subject: [New-bugs-announce] [issue29792] "Fatal Python error: Cannot recover from stack overflow." from pure Python code Message-ID: <1489231982.25.0.506172470023.issue29792@psf.upfronthosting.co.za> New submission from David MacIver: When run under Python 3.6.0 or 3.5.1 (and presumably other versions of Python 3) the attached code fails with "Fatal Python error: Cannot recover from stack overflow." then aborts with a core dump and an error code indicating it got a SIGABRT. On Python 2.7 it instead hangs indefinitely. Obviously this code is stupid and shouldn't be expected to do anything very reasonable - It's shrunk down from what was probably just a bug on my end in a larger example - but it seemed like it might be symptomatic of a more general class of problems. ---------- files: recursionerror.py messages: 289441 nosy: David MacIver priority: normal severity: normal status: open title: "Fatal Python error: Cannot recover from stack overflow." 
from pure Python code versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file46720/recursionerror.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 11 08:46:33 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 11 Mar 2017 13:46:33 +0000 Subject: [New-bugs-announce] [issue29793] Convert some builtin types constructors to Argument Clinic Message-ID: <1489239993.83.0.329707117156.issue29793@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The following PR converts some builtin types constructors to Argument Clinic:

complex.__new__
float.__new__
function.__new__
int.__new__
mappingproxy.__new__
module.__init__
property.__init__
structseq.__new__

---------- components: Interpreter Core messages: 289446 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: Convert some builtin types constructors to Argument Clinic type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 11 11:00:21 2017 From: report at bugs.python.org (ppperry) Date: Sat, 11 Mar 2017 16:00:21 +0000 Subject: [New-bugs-announce] [issue29794] Incorrect error message on invalid __class__ assignments Message-ID: <1489248021.29.0.604014111232.issue29794@psf.upfronthosting.co.za> New submission from ppperry: If you try to set the __class__ of a type which doesn't support "__class__" assignments, you get the error message:

TypeError: __class__ assignment only supported for heap types or ModuleType subclasses

However, the actual restriction doesn't require a subclass of "ModuleType"; the below code works:

import random
class M(type(random)): pass
random.__class__ = M

Thus the error message is incorrect.
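The same behaviour can be demonstrated without mutating the real random module; the sketch below uses a fresh module object instead (the exact TypeError wording varies between Python versions, so only the exception type is checked):

```python
import types

class M(types.ModuleType):
    pass

# Reassignment is accepted because M is a ModuleType subclass.
mod = types.ModuleType('example')
mod.__class__ = M
assert isinstance(mod, M)

# On an immutable built-in instance, the same assignment is rejected
# with the TypeError quoted in the report.
try:
    (42).__class__ = M
except TypeError:
    pass
```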
---------- components: Interpreter Core messages: 289448 nosy: ppperry priority: normal severity: normal status: open title: Incorrect error message on invalid __class__ assignments type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 11 14:25:49 2017 From: report at bugs.python.org (Max) Date: Sat, 11 Mar 2017 19:25:49 +0000 Subject: [New-bugs-announce] [issue29795] Clarify how to share multiprocessing primitives Message-ID: <1489260349.05.0.251290010989.issue29795@psf.upfronthosting.co.za> New submission from Max: It seems that many other people and I (judging from SO questions) are confused about whether it's ok to write this:

from multiprocessing import Process, Queue

q = Queue()

def f():
    q.put([42, None, 'hello'])

def main():
    p = Process(target=f)
    p.start()
    print(q.get())  # prints "[42, None, 'hello']"
    p.join()

if __name__ == '__main__':
    main()

It's not ok (it doesn't work on Windows, presumably because when it's pickled, the connection between the global queues in the two processes is lost; it works on Linux, because I guess fork keeps more information than pickle, so the connection is maintained). I thought it would be good to clarify in the docs that all the Queue() and Manager().* and other similar objects should be passed as parameters, not just defined as globals.
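For reference, a version of the example rewritten to pass the queue explicitly, which is the pattern the report asks the docs to recommend, might look like this sketch (on Windows the queue must travel to the child as an argument so the spawned process receives a working handle):

```python
from multiprocessing import Process, Queue

def f(q):
    # The queue is received as an argument, so the child process gets a
    # working handle to it under both the fork and spawn start methods.
    q.put([42, None, 'hello'])

def main():
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    result = q.get()
    p.join()
    return result

if __name__ == '__main__':
    print(main())
```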
---------- assignee: docs at python components: Documentation messages: 289454 nosy: docs at python, max priority: normal severity: normal status: open title: Clarify how to share multiprocessing primitives type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 11 20:01:35 2017 From: report at bugs.python.org (Zachary Ware) Date: Sun, 12 Mar 2017 01:01:35 +0000 Subject: [New-bugs-announce] [issue29796] test_weakref hangs on AppVeyor (2.7) Message-ID: <1489280495.74.0.987527414659.issue29796@psf.upfronthosting.co.za> New submission from Zachary Ware: See PR493 (https://ci.appveyor.com/project/python/cpython/build/2.7.13+.184) for an example. I'd rather not merge PR493, which adds AppVeyor to 2.7, until this is resolved. ---------- components: Windows messages: 289461 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: test needed status: open title: test_weakref hangs on AppVeyor (2.7) type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 12 00:45:09 2017 From: report at bugs.python.org (Max) Date: Sun, 12 Mar 2017 05:45:09 +0000 Subject: [New-bugs-announce] [issue29797] Deadlock with multiprocessing.Queue() Message-ID: <1489297509.89.0.0510120322322.issue29797@psf.upfronthosting.co.za> New submission from Max: Using multiprocessing.Queue() with several processes writing very fast results in a deadlock both on Windows and UNIX. 
For example, this code:

from multiprocessing import Process, Queue, Manager
import time, sys

def simulate(q, n_results):
    for i in range(n_results):
        time.sleep(0.01)
        q.put(i)

def main():
    n_workers = int(sys.argv[1])
    n_results = int(sys.argv[2])
    q = Queue()
    proc_list = [Process(target=simulate, args=(q, n_results), daemon=True)
                 for i in range(n_workers)]
    for proc in proc_list:
        proc.start()
    for i in range(5):
        time.sleep(1)
        print('current approximate queue size:', q.qsize())
        alive = [p.pid for p in proc_list if p.is_alive()]
        if alive:
            print(len(alive), 'processes alive; among them:', alive[:5])
        else:
            break
    for p in proc_list:
        p.join()
    print('final appr queue size', q.qsize())

if __name__ == '__main__':
    main()

hangs on Windows 10 (python 3.6) with 2 workers and 1000 results each, and on Ubuntu 16.04 (python 3.5) with 100 workers and 100 results each. The print out shows that the queue has reached the full size, but a bunch of processes are still alive. Presumably, they somehow manage to lock themselves out even though they don't depend on each other (must be in the implementation of Queue()):

current approximate queue size: 9984
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]
current approximate queue size: 10000
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]

The deadlock disappears once multiprocessing.Queue() is replaced with multiprocessing.Manager().Queue() - or at least I wasn't able to replicate it with a reasonable number of processes and results.
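This matches a pitfall that the multiprocessing programming guidelines already document under "Joining processes that use queues": a child that has put items on a plain multiprocessing.Queue() will not terminate until its buffered items are flushed to the underlying pipe, so joining before the queue is drained can hang. A scaled-down sketch of the safe ordering, using one sentinel per worker (names are mine):

```python
from multiprocessing import Process, Queue

def worker(q, n_results):
    for i in range(n_results):
        q.put(i)
    q.put(None)  # sentinel: this worker is done producing

def run(n_workers, n_results):
    q = Queue()
    procs = [Process(target=worker, args=(q, n_results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    results, done = [], 0
    while done < n_workers:      # drain the queue first...
        item = q.get()
        if item is None:
            done += 1
        else:
            results.append(item)
    for p in procs:              # ...and only then join the workers
        p.join()
    return results
```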
---------- components: Library (Lib) messages: 289479 nosy: max priority: normal severity: normal status: open title: Deadlock with multiprocessing.Queue() type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 12 01:13:31 2017 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 12 Mar 2017 06:13:31 +0000 Subject: [New-bugs-announce] [issue29798] Handle "git worktree" in "make patchcheck" Message-ID: <1489299211.77.0.0188090534059.issue29798@psf.upfronthosting.co.za> New submission from Nick Coghlan: While backporting issue 29656 to get "make patchcheck" to play nice with git PR branches, I discovered an incompatibility between the way "git worktree" works and the assumptions in "patchcheck.py". Specifically, in a worktree, ".git" is a file, rather than a directory:

$ cat .git
gitdir: /home/ncoghlan/devel/cpython/.git/worktrees/py27

So the current isdir() check should be relaxed to just an exists() check.
---------- assignee: ncoghlan components: Demos and Tools messages: 289481 nosy: ncoghlan priority: normal severity: normal status: open title: Handle "git worktree" in "make patchcheck" type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 12 08:49:16 2017 From: report at bugs.python.org (Jaysinh shukla) Date: Sun, 12 Mar 2017 12:49:16 +0000 Subject: [New-bugs-announce] [issue29799] Add tests for header API of 'urllib.request.Request' class Message-ID: <1489322956.53.0.812108141257.issue29799@psf.upfronthosting.co.za> Changes by Jaysinh shukla : ---------- components: Tests nosy: jaysinh.shukla priority: normal severity: normal status: open title: Add tests for header API of 'urllib.request.Request' class type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 12 17:10:14 2017 From: report at bugs.python.org (Michael Seifert) Date: Sun, 12 Mar 2017 21:10:14 +0000 Subject: [New-bugs-announce] [issue29800] functools.partial segfaults in repr when keywords attribute is abused Message-ID: <1489353014.2.0.721319606303.issue29800@psf.upfronthosting.co.za> New submission from Michael Seifert: It's possible to create a segfault when one (inappropriately) changes the functools.partial.keywords attribute manually.
A minimal example reproducing the segfault is:

>>> from functools import partial
>>> p = partial(int)
>>> p.keywords[1] = 10
>>> repr(p)

System: Windows 10 Python: 3.5.3, 3.6.0, master-branch ---------- components: Library (Lib) messages: 289510 nosy: MSeifert priority: normal severity: normal status: open title: functools.partial segfaults in repr when keywords attribute is abused type: crash versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 12 22:58:12 2017 From: report at bugs.python.org (shayan) Date: Mon, 13 Mar 2017 02:58:12 +0000 Subject: [New-bugs-announce] [issue29801] amazing! Message-ID: <1150825008.20170313055754@yahoo.com> New submission from shayan: Dear, You won't believe what I've just read, this is so amazing, read more at http://popular.wonderamau.tv/4948 Best, shili8_256 ---------- messages: 289525 nosy: SH4Y4N priority: normal severity: normal status: open title: amazing!
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 13 00:26:58 2017 From: report at bugs.python.org (Artem Smotrakov) Date: Mon, 13 Mar 2017 04:26:58 +0000 Subject: [New-bugs-announce] [issue29802] A possible null-pointer dereference in struct.s_unpack_internal() Message-ID: <1489379218.66.0.044857321803.issue29802@psf.upfronthosting.co.za> New submission from Artem Smotrakov: Attached struct_unpack_crash.py results to a null-pointer dereference in s_unpack_internal() function of _struct module: ASAN:SIGSEGV ================================================================= ==20245==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7facd2cea83a bp 0x000000000000 sp 0x7ffd0250f860 T0) #0 0x7facd2cea839 in s_unpack_internal /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1515 #1 0x7facd2ceab69 in Struct_unpack_impl /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1570 #2 0x7facd2ceab69 in unpack_impl /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:2192 #3 0x7facd2ceab69 in unpack /home/artem/projects/python/src/cpython-asan/Modules/clinic/_struct.c.h:215 #4 0x474397 in _PyMethodDef_RawFastCallKeywords Objects/call.c:618 #5 0x474397 in _PyCFunction_FastCallKeywords Objects/call.c:690 #6 0x42685f in call_function Python/ceval.c:4817 #7 0x42685f in _PyEval_EvalFrameDefault Python/ceval.c:3298 #8 0x54b164 in PyEval_EvalFrameEx Python/ceval.c:663 #9 0x54b164 in _PyEval_EvalCodeWithName Python/ceval.c:4173 #10 0x54b252 in PyEval_EvalCodeEx Python/ceval.c:4200 #11 0x54b252 in PyEval_EvalCode Python/ceval.c:640 #12 0x431e0e in run_mod Python/pythonrun.c:976 #13 0x431e0e in PyRun_FileExFlags Python/pythonrun.c:929 #14 0x43203b in PyRun_SimpleFileExFlags Python/pythonrun.c:392 #15 0x446354 in run_file Modules/main.c:338 #16 0x446354 in Py_Main Modules/main.c:809 #17 0x41df71 in main Programs/python.c:69 #18 0x7facd58ac82f in 
__libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f) #19 0x428728 in _start (/home/artem/projects/python/build/cpython-asan/bin/python3.7+0x428728) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /home/artem/projects/python/src/cpython-asan/Modules/_struct.c:1515 s_unpack_internal ==20245==ABORTING Looks like _struct implementation assumes that PyStructObject->s_codes cannot be null, but it may happen if a bytearray was passed to unpack(). PyStructObject->s_codes becomes null in a couple of places in _struct.c, but that's not the case. unpack() calls _PyArg_ParseStack() with cache_struct_converter() which maintains a cache. Even if unpack() was called incorrectly with a string as second parameter (see below), this value is going to be cached anyway. Next time, if the same format string is used, the value is going to be retrieved from the cache. But PyStructObject->s_codes is still not null in cache_struct_converter() function. If you watch "s_object" under gdb, you can see that "s_codes" becomes null here: PyBuffer_FillInfo (view=0x7fffffffd700, obj=obj at entry=0x7ffff7e50730, buf=0x8df478 <_PyByteArray_empty_string>, len=0, readonly=readonly at entry=0, flags=0) at Objects/abstract.c:647 647 view->format = NULL; (gdb) bt #0 PyBuffer_FillInfo (view=0x7fffffffd700, obj=obj at entry=0x7ffff7e50730, buf=0x8df478 <_PyByteArray_empty_string>, len=0, readonly=readonly at entry=0, flags=0) at Objects/abstract.c:647 #1 0x000000000046020c in bytearray_getbuffer (obj=0x7ffff7e50730, view=, flags=) at Objects/bytearrayobject.c:72 #2 0x0000000000560b0a in getbuffer (errmsg=, view=0x7fffffffd700, arg=0x7ffff7e50730) at Python/getargs.c:1380 #3 convertsimple (freelist=0x7fffffffd3b0, bufsize=256, msgbuf=0x7fffffffd4c0 "must be bytes-like object, not str", flags=2, p_va=0x0, p_format=, arg=0x7ffff7e50730) at Python/getargs.c:938 #4 convertitem (arg=0x7ffff7e50730, p_format=p_format at entry=0x7fffffffd3a8, p_va=p_va at 
entry=0x7fffffffd610, flags=flags at entry=2, levels=levels at entry=0x7fffffffd3c0, msgbuf=msgbuf at entry=0x7fffffffd4c0 "must be bytes-like object, not str", bufsize=256, freelist=0x7fffffffd3b0) at Python/getargs.c:596 #5 0x0000000000561d6f in vgetargs1_impl (compat_args=compat_args at entry=0x0, stack=stack at entry=0x61600004b520, nargs=2, format=format at entry=0x7ffff35d5c88 "O&y*:unpack", p_va=p_va at entry=0x7fffffffd610, flags=flags at entry=2) at Python/getargs.c:388 #6 0x00000000005639b0 in _PyArg_ParseStack_SizeT ( args=args at entry=0x61600004b520, nargs=, format=format at entry=0x7ffff35d5c88 "O&y*:unpack") at Python/getargs.c:163 #7 0x00007ffff35d2df8 in unpack (module=module at entry=0x7ffff7e523b8, args=args at entry=0x61600004b520, nargs=, kwnames=kwnames at entry=0x0) at /home/artem/projects/python/src/cpython-asan/Modules/clinic/_struct.c.h:207 #8 0x0000000000474398 in _PyMethodDef_RawFastCallKeywords (kwnames=0x0, nargs=140737352377272, args=0x61600004b520, self=0x7ffff7e523b8, method=0x7ffff37d94e0 ) at Objects/call.c:618 #9 _PyCFunction_FastCallKeywords (func=func at entry=0x7ffff7e53828, args=args at entry=0x61600004b520, nargs=nargs at entry=2, kwnames=kwnames at entry=0x0) at Objects/call.c:690 #10 0x0000000000426860 in call_function (kwnames=0x0, oparg=2, pp_stack=) at Python/ceval.c:4817 #11 _PyEval_EvalFrameDefault (f=, throwflag=) at Python/ceval.c:3298 #12 0x000000000054b165 in PyEval_EvalFrameEx (throwflag=0, f=0x61600004b398) at Python/ceval.c:663 #13 _PyEval_EvalCodeWithName (_co=_co at entry=0x7ffff7ed3ae0, globals=globals at entry=0x7ffff7f2f150, locals=locals at entry=0x7ffff7ed3ae0, args=args at entry=0x0, argcount=argcount at entry=0, kwnames=kwnames at entry=0x0, kwargs=0x8, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:4173 #14 0x000000000054b253 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, 
locals=locals at entry=0x7ffff7ed3ae0, globals=globals at entry=0x7ffff7f2f150, _co=_co at entry=0x7ffff7ed3ae0) at Python/ceval.c:4200 #15 PyEval_EvalCode (co=co at entry=0x7ffff7ed3ae0, globals=globals at entry=0x7ffff7f16288, locals=locals at entry=0x7ffff7f16288) at Python/ceval.c:640 #16 0x0000000000431e0f in run_mod (arena=0x7ffff7f2f150, flags=0x7fffffffdb60, locals=0x7ffff7f16288, globals=0x7ffff7f16288, filename=0x7ffff7e534b0, mod=0x625000021078) at Python/pythonrun.c:976 #17 PyRun_FileExFlags (fp=0x61600003cc80, filename_str=, start=, globals=0x7ffff7f16288, locals=0x7ffff7f16288, closeit=1, flags=0x7fffffffdb60) at Python/pythonrun.c:929 #18 0x000000000043203c in PyRun_SimpleFileExFlags (fp=0x61600003cc80, filename=, closeit=1, flags=0x7fffffffdb60) at Python/pythonrun.c:392 #19 0x0000000000446355 in run_file (p_cf=0x7fffffffdb60, filename=0x60800000bf20 L"struct_unpack_crash.py", fp=0x61600003cc80) at Modules/main.c:338 #20 Py_Main (argc=argc at entry=2, argv=argv at entry=0x60300000efe0) at Modules/main.c:809 #21 0x000000000041df72 in main (argc=2, argv=) at ./Programs/python.c:69 I am not sure if it should cache an object if a error occurred. But clearing the cache in case of error seems to fix this null-pointer dereference. Here is a patch (untested): diff --git a/Modules/clinic/_struct.c.h b/Modules/clinic/_struct.c.h index 71ac290..9573769 100644 --- a/Modules/clinic/_struct.c.h +++ b/Modules/clinic/_struct.c.h @@ -206,6 +206,7 @@ unpack(PyObject *module, PyObject **args, Py_ssize_t nargs, PyObject *kwnames) if (!_PyArg_ParseStack(args, nargs, "O&y*:unpack", cache_struct_converter, &s_object, &buffer)) { + _clearcache_impl(NULL); goto exit; } If this solution is okay, then _clearcache_impl() should probably be called in a couple of other unpack functions. 
---------- files: struct_unpack_crash.py messages: 289531 nosy: artem.smotrakov priority: normal severity: normal status: open title: A possible null-pointer dereference in struct.s_unpack_internal() type: crash versions: Python 3.7 Added file: http://bugs.python.org/file46722/struct_unpack_crash.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 13 02:15:46 2017 From: report at bugs.python.org (Xiang Zhang) Date: Mon, 13 Mar 2017 06:15:46 +0000 Subject: [New-bugs-announce] [issue29803] Remove some redundant ops in unicodeobject.c Message-ID: <1489385746.75.0.206550301215.issue29803@psf.upfronthosting.co.za> Changes by Xiang Zhang : ---------- components: Interpreter Core nosy: haypo, serhiy.storchaka, xiang.zhang priority: normal severity: normal stage: patch review status: open title: Remove some redundant ops in unicodeobject.c type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 13 09:50:54 2017 From: report at bugs.python.org (Iryna) Date: Mon, 13 Mar 2017 13:50:54 +0000 Subject: [New-bugs-announce] [issue29804] test_ctypes test_pass_by_value fails on arm64 (aarch64) architecture Message-ID: <1489413054.1.0.4012026352.issue29804@psf.upfronthosting.co.za> New submission from Iryna: I am trying to build Python 3.6.1rc1 on Fedora, and have the following test failing on arm64 (aarch64) architecture:

======================================================================
FAIL: test_pass_by_value (ctypes.test.test_structures.StructureTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/Python-3.6.1rc1/Lib/ctypes/test/test_structures.py", line 413, in test_pass_by_value
    self.assertEqual(s.first, 0xdeadbeef)
AssertionError: 195948557 != 3735928559
----------------------------------------------------------------------

The build log is attached. The test was added in this commit [1] as a fix for bpo-29565. Any idea what this can be related to? [1] https://github.com/python/cpython/commit/3cc5817cfaf5663645f4ee447eaed603d2ad290a ---------- components: Tests, ctypes files: Python3.6.1rc1_build_log.txt messages: 289539 nosy: ishcherb priority: normal severity: normal status: open title: test_ctypes test_pass_by_value fails on arm64 (aarch64) architecture versions: Python 3.6 Added file: http://bugs.python.org/file46724/Python3.6.1rc1_build_log.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 13 17:03:44 2017 From: report at bugs.python.org (Laurent Mazuel) Date: Mon, 13 Mar 2017 21:03:44 +0000 Subject: [New-bugs-announce] [issue29805] Pathlib.replace cannot move file to a different drive on Windows if filename different Message-ID: <1489439024.83.0.354569832809.issue29805@psf.upfronthosting.co.za> New submission from Laurent Mazuel: Trying to use Pathlib and Path.replace on Windows when the drives are different leads to an issue:

  File "D:\myscript.py", line 184, in update
    client_generated_path.replace(destination_folder)
  File "c:\program files (x86)\python35-32\Lib\pathlib.py", line 1273, in replace
    self._accessor.replace(self, target)
  File "c:\program files (x86)\python35-32\Lib\pathlib.py", line 377, in wrapped
    return strfunc(str(pathobjA), str(pathobjB), *args)
OSError: [WinError 17] The system cannot move the file to a different disk drive: 'C:\\MyFolder' -> 'D:\\MyFolderNewName'

This is a known situation of os.rename, and the workaround I found is to use shutil or to copy/delete manually in two steps (e.g.
http://stackoverflow.com/questions/21116510/python-oserror-winerror-17-the-system-cannot-move-the-file-to-a-different-d) When using Pathlib, it's not that easy to workaround using shutil (even if thanks to Brett Cannon now shutil accepts Path in Py3.6, not everybody has Py3.6). At least this should be documented with a recommendation for that situation. I love Pathlib and it's too bad my code becomes complicated when it was so simple :( ---------- components: IO, Windows messages: 289549 nosy: Laurent.Mazuel, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Pathlib.replace cannot move file to a different drive on Windows if filename different versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 13 23:18:20 2017 From: report at bugs.python.org (Anne Moroney) Date: Tue, 14 Mar 2017 03:18:20 +0000 Subject: [New-bugs-announce] [issue29806] Requesting version info with lowercase -v or -vv causes an import crash Message-ID: <1489461500.81.0.649560868679.issue29806@psf.upfronthosting.co.za> New submission from Anne Moroney: In trying to test the new feature in 3.6.0, $ python -VV # get more info than python -V or python --version I found several oddities. 1.On both Amazon Linux AMI Python 2.7.12 and also Anaconda Python 3.6.0, using lowercase v's causes a crash on some kind of import. 
1a. AWS first lines are:

[ec2-user at ip-172-31-2-101 ~]$ python -v
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# /usr/lib64/python2.7/site.pyc matches /usr/lib64/python2.7/site.py
import site # precompiled from /usr/lib64/python2.7/site.pyc

1b. Conda first lines are:

(py36aws)me:tool-aws me$ python -v
import _frozen_importlib # frozen
import _imp # builtin
import sys # builtin
import '_warnings' # //etc

1c. In both cases, after lots of stuff, must quit python with quit()

2. Anaconda does not provide more information. Is that expected?

(py36aws) $ python -VV
Python 3.6.0 :: Continuum Analytics, Inc.
(py36aws)$ python -V
Python 3.6.0 :: Continuum Analytics, Inc.
(py36aws)$ python --version
Python 3.6.0 :: Continuum Analytics, Inc.

---------- assignee: docs at python components: Documentation messages: 289561 nosy: AnneTheAgile, docs at python priority: normal severity: normal status: open title: Requesting version info with lowercase -v or -vv causes an import crash type: crash versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 14 02:18:04 2017 From: report at bugs.python.org (Steve Carter) Date: Tue, 14 Mar 2017 06:18:04 +0000 Subject: [New-bugs-announce] [issue29807] ArgParse page in library reference rewrite Message-ID: <1489472284.38.0.40207115242.issue29807@psf.upfronthosting.co.za> New submission from Steve Carter: Originally raised as https://github.com/python/pythondotorg/issues/1059 Although it's a reference page, it is clouded by too many examples and too little reference material. Moreover, the examples are not real-world applications of argument parsing. I propose removing the "process some integers" example, replacing it with something more typically gnu style, e.g., myapp.py [--quiet] [--log-level _n_] [--title=STR] {get | put} [FSPEC [, FSPEC...]].
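As an illustration of the proposed style, a parser for that usage line might be sketched as follows (option and command names are taken from the usage example above; the parser itself is my sketch, not content from the issue):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog='myapp.py')
    parser.add_argument('--quiet', action='store_true',
                        help='suppress normal output')
    parser.add_argument('--log-level', type=int, default=0, metavar='N',
                        help='set logging verbosity')
    parser.add_argument('--title', metavar='STR',
                        help='title string for the run')
    parser.add_argument('command', choices=['get', 'put'])
    parser.add_argument('fspec', nargs='*', metavar='FSPEC')
    return parser
```

A parse such as build_parser().parse_args(['--quiet', 'put', 'a.txt']) then exercises flags, a subcommand-like positional, and a variadic file list in one real-world-shaped example.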
This shows the user how to do many of the common command-line tasks. [I'm tentatively offering to do this, but I haven't yet found the content I need to revise.]
---------- assignee: docs at python components: Documentation messages: 289569 nosy: docs at python, sweavo priority: normal severity: normal status: open title: ArgParse page in library reference rewrite versions: Python 2.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 05:36:41 2017 From: report at bugs.python.org (=?utf-8?b?0JzQsNGA0Log0JrQvtGA0LXQvdCx0LXRgNCz?=) Date: Tue, 14 Mar 2017 09:36:41 +0000 Subject: [New-bugs-announce] [issue29808] SyslogHandler: should not raise exception in constructor if connection fails Message-ID: <1489484201.47.0.147684605376.issue29808@psf.upfronthosting.co.za>
New submission from Mark Korenberg: The syslog handler is already able to ignore temporary errors while sending logs, so it knows that the syslog server may be unreachable at the moment. But the constructor fails with an error if the initial connection cannot be made. I have fixed that -- now it will ignore such errors, and try to re-connect on every logging call, as it did before. The C version does the same.
---------- components: Library (Lib) messages: 289573 nosy: mmarkk priority: normal severity: normal status: open title: SyslogHandler: should not raise exception in constructor if connection fails type: behavior versions: Python 3.5, Python 3.6, Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 10:29:49 2017 From: report at bugs.python.org (Jason R. Coombs) Date: Tue, 14 Mar 2017 14:29:49 +0000 Subject: [New-bugs-announce] [issue29809] TypeError in traceback.print_exc - unicode does not have the buffer interface Message-ID: <1489501789.51.0.676200983775.issue29809@psf.upfronthosting.co.za>
New submission from Jason R.
Coombs: I'm writing a routine that captures exceptions and logs them to a database. In doing so, I encountered a situation where, when parsing a Unicode file that has an IndentationError (SyntaxError), print_exc will fail when it tries to render the unicode line. Here's a script that replicates the failure:

# coding: utf-8
from __future__ import unicode_literals
import io
import sys
import traceback

PY3 = sys.version_info > (3,)
print(sys.version)
buffer = io.StringIO() if PY3 else io.BytesIO()
try:
    args = str(''), 7, 2, ' // test'
    raise IndentationError('failed', args)
except Exception:
    traceback.print_exc(file=buffer)

And the output:

$ python2 test-unicode-tb.py
2.7.13 (default, Dec 24 2016, 21:20:02) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]
Traceback (most recent call last):
  File "test-unicode-tb.py", line 19, in <module>
    traceback.print_exc(file=buffer)
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/traceback.py", line 233, in print_exc
    print_exception(etype, value, tb, limit, file)
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/traceback.py", line 128, in print_exception
    _print(file, line, '')
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/traceback.py", line 13, in _print
    file.write(str+terminator)
TypeError: 'unicode' does not have the buffer interface

The same test runs without error on Python 3. It's surprising to me that I'm the first person to encounter this issue. Is it possible I'm abusing the tokenize module and a unicode value shouldn't be present in the args for the IndentationError?
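For comparison, the Python 3 behaviour the reporter mentions can be reproduced with a small Python-3-only sketch of the same shape:

```python
import io
import traceback

buffer = io.StringIO()
try:
    # IndentationError is a SyntaxError subclass; its second argument is
    # the (filename, lineno, offset, text) details tuple.
    details = ("", 7, 2, " // test")
    raise IndentationError("failed", details)
except Exception:
    traceback.print_exc(file=buffer)

print(buffer.getvalue())
```

On Python 3 the formatted traceback, including the unicode source text, is written to the StringIO without error.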
---------- messages: 289592 nosy: jason.coombs priority: normal severity: normal status: open title: TypeError in traceback.print_exc - unicode does not have the buffer interface versions: Python 2.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 12:17:49 2017 From: report at bugs.python.org (Alex Gaynor) Date: Tue, 14 Mar 2017 16:17:49 +0000 Subject: [New-bugs-announce] [issue29810] Rename ssl.Purpose.{CLIENT, SERVER}_AUTH Message-ID: <1489508269.56.0.751351378955.issue29810@psf.upfronthosting.co.za>
New submission from Alex Gaynor: The names are super misleading. First, they're written in a way that's the opposite of how people think about these things (CLIENT_AUTH -> server socket; SERVER_AUTH -> client socket). Second, they're misleading: you can have TLS which is *mutually* authenticated. Third, CLIENT_AUTH is very frequently used for a server socket where the client isn't authenticated (at the TLS layer) at all! A simple fix would be to add Purpose.{CLIENT,SERVER}_SOCKET and alias the old names to those values.
---------- messages: 289601 nosy: alex priority: normal severity: normal status: open title: Rename ssl.Purpose.{CLIENT,SERVER}_AUTH
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 17:36:22 2017 From: report at bugs.python.org (STINNER Victor) Date: Tue, 14 Mar 2017 21:36:22 +0000 Subject: [New-bugs-announce] [issue29811] Use FASTCALL in call.c callmethod() to avoid temporary tuple Message-ID: <1489527382.61.0.961347131551.issue29811@psf.upfronthosting.co.za>
New submission from STINNER Victor: call_method() of typeobject.c has been optimized to avoid a temporary method object and a temporary tuple in issue #29507. Optimizing callmethod() of call.c was already discussed on issue #29507 but no decision was taken.
Since call.c code is more complex, I created a new issue.
---------- messages: 289620 nosy: haypo, inada.naoki priority: normal severity: normal status: open title: Use FASTCALL in call.c callmethod() to avoid temporary tuple type: performance versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 19:03:41 2017 From: report at bugs.python.org (R. David Murray) Date: Tue, 14 Mar 2017 23:03:41 +0000 Subject: [New-bugs-announce] [issue29812] test for token.py, and consistency tests for tokenize.py Message-ID: <1489532621.9.0.449448187774.issue29812@psf.upfronthosting.co.za>
New submission from R. David Murray: http://bugs.python.org/issue24622 reminded me that a while back we added tests for the keyword module, which include a test that running the module produces the result that is checked in. The same thing could be done for the token.py module. And then we could add a cross-check test that tokenize.py has all the symbols defined as well.
---------- components: Tests keywords: easy messages: 289628 nosy: r.david.murray priority: normal severity: normal stage: needs patch status: open title: test for token.py, and consistency tests for tokenize.py type: enhancement versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Tue Mar 14 22:40:50 2017 From: report at bugs.python.org (Michael Seifert) Date: Wed, 15 Mar 2017 02:40:50 +0000 Subject: [New-bugs-announce] [issue29813] PyTuple_GetSlice documentation incorrect Message-ID: <1489545650.77.0.905284505898.issue29813@psf.upfronthosting.co.za>
New submission from Michael Seifert: The PyTuple_GetSlice documentation says it "Take a slice of the tuple pointed to by p from low to high and return it as a new tuple."
[0] However, in case the start is <= 0 and the stop is >= the tuple size, it doesn't return the promised "new tuple"; it just returns the tuple pointer after incrementing its refcount [1]. The behaviour is fine (it gave me a bit of a headache though); however, could a note/warning/sentence be included in the docs mentioning that special case? [0] https://docs.python.org/3/c-api/tuple.html#c.PyTuple_GetSlice [1] https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L414
---------- assignee: docs at python components: Documentation messages: 289632 nosy: MSeifert, docs at python priority: normal severity: normal status: open title: PyTuple_GetSlice documentation incorrect type: behavior
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 02:26:24 2017 From: report at bugs.python.org (Mateusz Bysiek) Date: Wed, 15 Mar 2017 06:26:24 +0000 Subject: [New-bugs-announce] [issue29814] parsing f-strings -- opening brace of expression gets duplicated when preceded by backslash Message-ID: <1489559184.67.0.191870108589.issue29814@psf.upfronthosting.co.za>
New submission from Mateusz Bysiek: with Python 3.6.0 and the following script:
```
#!/usr/bin/env python3.6
import ast
code1 = '''"\\{x}"'''
code2 = '''f"\\{x}"'''
tree1 = ast.parse(code1, mode='eval')
print(ast.dump(tree1))
tree2 = ast.parse(code2, mode='eval')
print(ast.dump(tree2))
```
I get the following output:
```
Expression(body=Str(s='\\{x}'))
Expression(body=JoinedStr(values=[Str(s='\\{'), FormattedValue(value=Name(id='x', ctx=Load()), conversion=-1, format_spec=None)]))
```
Therefore, the normal string is `'\\{x}'`. But the f-string has two parts: `'\\{'` and an expression `Name(id='x', ctx=Load())`. Where does the `{` in the string part of the f-string come from? I can't believe this is the intended behavior... Or, is it? When I escape the backslash once like above, what gets parsed is actually an unescaped backslash.
So this might just boil down to an inconsistency in parsing `\{` in normal vs. f-strings. I originally discovered this in typed_ast https://github.com/python/typed_ast/issues/34 but the behaviour of ast is identical, and since the developers of typed_ast aim at compatibility with ast, I bring this issue here.
---------- components: Library (Lib) messages: 289642 nosy: mbdevpl priority: normal severity: normal status: open title: parsing f-strings -- opening brace of expression gets duplicated when preceded by backslash type: behavior versions: Python 3.6
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 03:56:40 2017 From: report at bugs.python.org (Marcos Thomaz) Date: Wed, 15 Mar 2017 07:56:40 +0000 Subject: [New-bugs-announce] [issue29815] Fail at divide a negative integer number for a positive integer number Message-ID: <1489564600.52.0.574999326638.issue29815@psf.upfronthosting.co.za>
New submission from Marcos Thomaz: When dividing a negative integer by a positive integer, the result is wrong.
For example, in the operation:

a, b, c = -7, 2, 7
d = divmod(a, b)
print a//b, a%b, c[0], c // b, c%b

The values printed are -4 1 3 1
---------- components: Interpreter Core messages: 289647 nosy: marcosthomazs priority: normal severity: normal status: open title: Fail at divide a negative integer number for a positive integer number type: crash versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 04:55:17 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 15 Mar 2017 08:55:17 +0000 Subject: [New-bugs-announce] [issue29816] Get rid of C limitation for shift count in right shift Message-ID: <1489568117.43.0.447594281596.issue29816@psf.upfronthosting.co.za>
New submission from Serhiy Storchaka: Currently the value of the right operand of the right shift operator is limited by the C Py_ssize_t type.

>>> 1 >> 10**100
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t
>>> (-1) >> 10**100
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t
>>> 1 >> -10**100
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t
>>> (-1) >> -10**100
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C ssize_t

But this is an artificial limitation. Right shift can be extended to support arbitrary integers. `x >> very_large_value` should be 0 for non-negative x and -1 for negative x. `x >> negative_value` should raise ValueError.
>>> 1 >> 10
0
>>> (-1) >> 10
-1
>>> 1 >> -10
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: negative shift count
>>> (-1) >> -10
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: negative shift count

---------- components: Interpreter Core messages: 289650 nosy: Oren Milman, mark.dickinson, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Get rid of C limitation for shift count in right shift type: enhancement versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 05:47:30 2017 From: report at bugs.python.org (Jan) Date: Wed, 15 Mar 2017 09:47:30 +0000 Subject: [New-bugs-announce] [issue29817] File IO read, write, read causes garbage data write. Message-ID: <1489571250.73.0.412338383823.issue29817@psf.upfronthosting.co.za>
New submission from Jan: In Python 2.7.12, when reading, writing and subsequently reading again from a file, Python seems to write garbage. For example, when running this in the Python IDLE:

import os

testPath = r"myTestFile.txt"

## Make sure the file exists and it's empty
with open(testPath, "w") as tFile:
    tFile.write("")

print "Our Test File: ", os.path.abspath(testPath)

with open(testPath, "r+") as tFile:
    ## First we read the file
    data = tFile.read()
    ## Now we write some data
    tFile.write('Some Data')
    ## Now we read the file again
    tFile.read()

When now looking at the file, the data is the following:

Some Data @ sb d Z d d l m Z d d d ? ? YZ e d k r^ d d l m Z e d d d d e ?n d S( s9 Implement Idle Shell history mechanism with History ...

As mentioned in the comments on Stack Overflow (see link) this might be a buffer overrun but I am not sure. Also I guess this could be used as a security vulnerability...
http://stackoverflow.com/questions/40373457/python-r-read-write-read-writes-garbage-to-a-file?noredirect=1#comment72580538_40373457 ---------- components: IO, Interpreter Core, Windows messages: 289657 nosy: jan, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: File IO read, write, read causes garbage data write. type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 15 07:27:39 2017 From: report at bugs.python.org (Nick Coghlan) Date: Wed, 15 Mar 2017 11:27:39 +0000 Subject: [New-bugs-announce] [issue29818] Py_SetStandardStreamEncoding leads to a memory error in debug mode Message-ID: <1489577259.98.0.691600629889.issue29818@psf.upfronthosting.co.za> New submission from Nick Coghlan: For PEP 538, setting PYTHONIOENCODING turned out to have undesirable side effects on Python 2 instances in subprocesses, since Python 2 has no 'surrogateescape' error handler. So I switched to using the "Py_SetStandardStreamEncoding" API defined in http://bugs.python.org/issue16129 instead, but this turns out to have problematic interactions with the dynamic memory allocator management, so it fails with a fatal exception in debug mode. An example of the error can be seen here: https://travis-ci.org/python/cpython/jobs/211293576 The problem appears to be that between the allocation of the memory with `_PyMem_RawStrdup` in `Py_SetStandardStreamEncoding` and the release of that memory in `initstdio`, the active memory manager has changed (at least in a debug build), so the deallocation as part of the interpreter startup fails. 
That interpretation is based on this comment in Programs/python.c:
```
/* Force again malloc() allocator to release memory blocks allocated
   before Py_Main() */
(void)_PyMem_SetupAllocators("malloc");
```
The allocations in Py_SetStandardStreamEncoding happen before the call to Py_Main/Py_Initialize, but the deallocation happens in Py_Initialize. The "fix" I applied to the PEP branch was to make the default allocator conditional in Programs/python.c as well:
```
#ifdef Py_DEBUG
    (void)_PyMem_SetupAllocators("malloc_debug");
#else
    (void)_PyMem_SetupAllocators("malloc");
#endif
```
While that works (at least in the absence of a PYTHONMALLOC setting) it seems fragile. It would be nicer if there was a way for Py_SetStandardStreamEncoding to indicate which allocator should be used for the deallocation.
---------- messages: 289668 nosy: haypo, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Py_SetStandardStreamEncoding leads to a memory error in debug mode type: crash
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 11:11:06 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 15 Mar 2017 15:11:06 +0000 Subject: [New-bugs-announce] [issue29819] Avoid raising OverflowError in truncate() if possible Message-ID: <1489590666.79.0.55108836997.issue29819@psf.upfronthosting.co.za>
New submission from Serhiy Storchaka: os.truncate(), os.ftruncate() and the truncate() methods of file-like objects raise OverflowError when the argument is out of range of a certain C type. It would be better to extend the behavior of small integers to large integers. ValueError is raised for negative integers, and it should also be raised for negative integers out of range. Values larger than the current size should be ignored in BytesIO.truncate() and StringIO.truncate(). BytesIO.truncate() and StringIO.truncate() should never raise OverflowError.
Since the behavior of the underlying OS functions is OS- and FS-dependent, OverflowError can be raised for large integers in os.truncate(), os.ftruncate() and FileIO.truncate().
---------- components: IO, Library (Lib) messages: 289677 nosy: benjamin.peterson, serhiy.storchaka, stutzbach priority: normal severity: normal stage: needs patch status: open title: Avoid raising OverflowError in truncate() if possible type: enhancement versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 11:29:20 2017 From: report at bugs.python.org (Marco Buttu) Date: Wed, 15 Mar 2017 15:29:20 +0000 Subject: [New-bugs-announce] [issue29820] Broken link to "GUI Programming with Python: QT Edition" book Message-ID: <1489591760.97.0.349566849467.issue29820@psf.upfronthosting.co.za>
New submission from Marco Buttu: In [*] the link to "GUI Programming with Python: QT Edition" by Boudewijn Rempt does not work. I did not find an official web page for this book, and it is really outdated (2002), so maybe we can keep only the reference to Mark Summerfield's book about PyQt4. Let me know, and in that case I will open a PR.
[*] https://docs.python.org/3/library/othergui.html ---------- assignee: docs at python components: Documentation messages: 289678 nosy: docs at python, marco.buttu priority: normal severity: normal status: open title: Broken link to "GUI Programming with Python: QT Edition" book versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 15 12:35:14 2017 From: report at bugs.python.org (Oliver Etchebarne (drmad)) Date: Wed, 15 Mar 2017 16:35:14 +0000 Subject: [New-bugs-announce] [issue29821] importing module shutil executes file 'copy.py' Message-ID: <1489595714.25.0.323545213804.issue29821@psf.upfronthosting.co.za> New submission from Oliver Etchebarne (drmad): I didn't research this issue further. Create a file 'test.py', and write only 'import shutil'. Then create a file 'copy.py' in the same directory, and write something inside, like 'print ("OH NO")'. When you run test.py, 'copy.py' is executed, and prints the string. Tested with python 3.5 and 3.6. 
Works as expected (test.py doing nothing) in Python 2.7.
---------- messages: 289681 nosy: Oliver Etchebarne (drmad) priority: normal severity: normal status: open title: importing module shutil executes file 'copy.py' type: behavior versions: Python 3.5
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 13:06:09 2017 From: report at bugs.python.org (Nate Soares) Date: Wed, 15 Mar 2017 17:06:09 +0000 Subject: [New-bugs-announce] [issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__ Message-ID: <1489597569.75.0.0334116955024.issue29822@psf.upfronthosting.co.za>
New submission from Nate Soares: Here's an example test that fails:

def test_isabstract_during_init_subclass(self):
    from abc import ABCMeta, abstractmethod
    isabstract_checks = []

    class AbstractChecker(metaclass=ABCMeta):
        def __init_subclass__(cls):
            isabstract_checks.append(inspect.isabstract(cls))

    class AbstractClassExample(AbstractChecker):
        @abstractmethod
        def foo(self):
            pass

    class ClassExample(AbstractClassExample):
        def foo(self):
            pass

    self.assertEqual(isabstract_checks, [True, False])

To run the test, you'll need to be on a version of Python where bpo-29581 is fixed (e.g., a cpython branch with https://github.com/python/cpython/pull/527 merged) in order for __init_subclass__ to work with ABCMeta at all in the first place. I have a simple patch to inspect.isabstract that fixes this, and will make a PR shortly.
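The actual patch isn't shown here, but one way to approximate such a check without relying on the class's type flags (a rough sketch of the idea, not the submitter's patch) is to scan the class's visible attributes for anything still marked abstract, which works even before class creation has fully finished:

```python
import abc

def has_abstract_methods(cls):
    # Hypothetical helper: walk every attribute name visible on the class
    # and report whether any currently resolves to something abstract.
    # Unlike inspect.isabstract(), this does not depend on the
    # TPFLAGS_IS_ABSTRACT flag, which is only set once ABCMeta finishes
    # creating the class.
    names = set()
    for base in cls.__mro__:
        names.update(base.__dict__)
    return any(
        getattr(getattr(cls, name, None), "__isabstractmethod__", False)
        for name in names
    )

class Base(abc.ABC):
    @abc.abstractmethod
    def foo(self): ...

class Concrete(Base):
    def foo(self):
        return 42

print(has_abstract_methods(Base), has_abstract_methods(Concrete))  # True False
```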
---------- messages: 289682 nosy: So8res priority: normal severity: normal status: open title: inspect.isabstract does not work on abstract base classes during __init_subclass__ versions: Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Wed Mar 15 23:53:24 2017 From: report at bugs.python.org (Aleksey Bilogur) Date: Thu, 16 Mar 2017 03:53:24 +0000 Subject: [New-bugs-announce] [issue29823] Python guesses XSL mimetype when passed an XML file Message-ID: <1489636404.8.0.316801040698.issue29823@psf.upfronthosting.co.za>
New submission from Aleksey Bilogur: Copied over from an unanswered StackOverflow thread (http://stackoverflow.com/questions/42542433/why-does-python-mimetype-guess-xsl-when-passed-an-xml-file): When passed a file with a mimetype of application/xml, the Python std lib mimetypes.guess_extension method guesses an .xsl extension. This is actually hard-coded in. This seems wrong to me. But what do I know?
---------- components: Library (Lib) messages: 289705 nosy: Aleksey Bilogur priority: normal severity: normal status: open title: Python guesses XSL mimetype when passed an XML file versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 16 04:03:29 2017 From: report at bugs.python.org (Suphannee) Date: Thu, 16 Mar 2017 08:03:29 +0000 Subject: [New-bugs-announce] [issue29824] Hostname validation in SSL match_hostname() Message-ID: <1489651409.35.0.325138165365.issue29824@psf.upfronthosting.co.za>
New submission from Suphannee: 1. Attempting to match invalid hostnames is allowed. According to the domain name specification in RFC 1035, only alphanumeric characters, dots and hyphens are valid in a domain name.
We observe that the function match_hostname() in Lib/ssl.py allows other special characters (e.g., '=', '&') in a hostname when attempting to match it with the certificate commonName (CN)/subjectAltName DNS. An example would be matching hostname "example.a=.com" with certificate CN/DNS "example.a=.com" or CN/DNS "*.a=.example.com". Ensuring that CN/DNS entries with invalid characters are rejected will make the library more robust against attacks that utilize such characters. 2. Matching wildcards in a public suffix. As noted in section 7.2 of RFC 6125, some wildcard location specifications are not clear. We found that the function allows a wildcard over a public suffix in the certificate, and allows attempting to match it during hostname verification, e.g., it matches hostnames "google.com" and "example.com" with certificate CN/DNS "*.com". This is not an RFC violation, but we might benefit from implementing a check so that, for example, "*.one_label" is restricted. A better option would be to have a list of all TLDs and check against it. Thanks.
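A minimal sketch of the kind of pre-check suggested in point 1 (a hypothetical helper, not part of ssl.py; it enforces only RFC 1035's letter/digit/hyphen rule for hostnames, before any certificate matching is attempted):

```python
import re

# Hypothetical helper: reject hostnames containing characters outside
# RFC 1035's allowed set. Each label may contain letters, digits, and
# hyphens, must not start or end with a hyphen, and is at most 63 octets;
# the whole name is at most 253 octets.
_LABEL_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_rfc1035_hostname(hostname):
    if not hostname or len(hostname) > 253:
        return False
    labels = hostname.rstrip(".").split(".")
    return all(_LABEL_RE.match(label) for label in labels)

print(is_rfc1035_hostname("example.com"))     # valid
print(is_rfc1035_hostname("example.a=.com"))  # '=' is rejected
```

A check along these lines would make names like "example.a=.com" fail fast instead of ever reaching the CN/DNS matching logic.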
---------- assignee: christian.heimes components: SSL messages: 289708 nosy: christian.heimes, ssivakorn priority: normal severity: normal status: open title: Hostname validation in SSL match_hostname() type: enhancement
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 16 04:47:57 2017 From: report at bugs.python.org (LCatro) Date: Thu, 16 Mar 2017 08:47:57 +0000 Subject: [New-bugs-announce] [issue29825] PyFunction_New() not validate code object Message-ID: <1489654077.04.0.544465484801.issue29825@psf.upfronthosting.co.za>
New submission from LCatro: PyFunction_New() does not validate its code object argument, so we can make a string object fake a code object. This is the Python bytecode:

LOAD_CONST 'CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC\x41\x41\x41\x41'
MAKE_FUNCTION 0

In the source code, we can see that the string object is traced to variable v:

TARGET(MAKE_FUNCTION) {
    v = POP(); /* code object */  <= now it is a string object
    x = PyFunction_New(v, f->f_globals);  <= used here

and then the string object we made is passed into PyFunction_New():

PyFunction_New(PyObject *code, PyObject *globals)
{
    PyFunctionObject *op = PyObject_GC_New(PyFunctionObject, &PyFunction_Type);
    static PyObject *__name__ = 0;
    if (op != NULL) {  <= this only checks the newly allocated object pointer, not the Python type of the code argument (which should actually be TYPE_CODE)
        ..
        PyObject *doc;
        PyObject *consts;
        PyObject *module;
        op->func_weakreflist = NULL;
        Py_INCREF(code);
        op->func_code = code;
        Py_INCREF(globals);
        op->func_globals = globals;
        op->func_name = ((PyCodeObject *)code)->co_name;
        Py_INCREF(op->func_name);  <= this will increment an arbitrary address by one
        ..

The opcode MAKE_CLOSURE is similar:
TARGET(MAKE_CLOSURE) {
    v = POP(); /* code object */
    x = PyFunction_New(v, f->f_globals);

The PoC and crash details are in the attached file.
---------- components: Interpreter Core files: inc_by_one.rar messages: 289710 nosy: imso666 priority: normal severity: normal status: open title: PyFunction_New() not validate code object type: security versions: Python 2.7 Added file: http://bugs.python.org/file46728/inc_by_one.rar
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 16 05:35:22 2017 From: report at bugs.python.org (Marco Viscito) Date: Thu, 16 Mar 2017 09:35:22 +0000 Subject: [New-bugs-announce] [issue29826] " don't work on Mac Message-ID: <1489656922.89.0.360252821771.issue29826@psf.upfronthosting.co.za>
New submission from Marco Viscito: When typing the ' key or the " key in the IDLE Python application for macOS, the application crashes. I think it might have something to do with the beta version of Tcl/Tk (8.5.9), as Python says it is 'unstable'.
---------- files: Screen Shot 2017-03-16 at 09.34.26.png messages: 289711 nosy: Marco Viscito priority: normal severity: normal status: open title: " don't work on Mac type: crash versions: Python 3.6 Added file: http://bugs.python.org/file46729/Screen Shot 2017-03-16 at 09.34.26.png
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 16 10:27:55 2017 From: report at bugs.python.org (Marko Mavrinac) Date: Thu, 16 Mar 2017 14:27:55 +0000 Subject: [New-bugs-announce] [issue29827] os.path.exists() returns False for certain file name Message-ID: <1489674475.28.0.107711560644.issue29827@psf.upfronthosting.co.za>
New submission from Marko Mavrinac: I have two files in two different folders, both on the desktop. If I try using os.path.exists() on both of them, it returns True for one file and False for the other file.
After renaming the file that I got False from, I get True, and when I rename it back, it returns False again. The file name causing the problem is "testni.wav"; I assume it's anything starting with "test", but I'm sure you guys will know better. Thank you. A detailed description with screenshots can be seen here: http://stackoverflow.com/questions/42834408/os-path-exists-returning-false-for-one-and-true-for-another-file-both-files-e
---------- components: Windows messages: 289717 nosy: Marko Mavrinac, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.exists() returns False for certain file name type: behavior versions: Python 2.7
_______________________________________ Python tracker _______________________________________
From report at bugs.python.org Thu Mar 16 12:45:39 2017 From: report at bugs.python.org (Antoine Pitrou) Date: Thu, 16 Mar 2017 16:45:39 +0000 Subject: [New-bugs-announce] [issue29828] Allow registering after-fork initializers in multiprocessing Message-ID: <1489682739.34.0.223711216906.issue29828@psf.upfronthosting.co.za>
New submission from Antoine Pitrou: Currently, multiprocessing has hard-coded logic to re-seed the Python random generator (in the random module) whenever a process is forked. This is present in two places: `Popen._launch` in `popen_fork.py` and `serve_one` in `forkserver.py` (for the "fork" and "forkserver" spawn methods, respectively). However, other libraries would like to benefit from this mechanism. For example, Numpy has its own random number generator that would also benefit from re-seeding after fork(). Currently, this is solvable using multiprocessing.Pool, which has an `initializer` argument. However, concurrent.futures' ProcessPool does not offer such a facility; nor do other ways of launching child processes, such as (simply) instantiating a new Process object.
Therefore, I'd like to propose adding a new top-level function in multiprocessing (and also a new Context method) to register a new initializer function for use after fork(). That way, each library can add its own initializers if desired, freeing users from the burden of doing so in their applications. ---------- components: Library (Lib) messages: 289721 nosy: davin, pitrou, sbt priority: normal severity: normal stage: needs patch status: open title: Allow registering after-fork initializers in multiprocessing type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 16 12:58:21 2017 From: report at bugs.python.org (Steve Barnes) Date: Thu, 16 Mar 2017 16:58:21 +0000 Subject: [New-bugs-announce] [issue29829] Documentation lacks clear warning of subprocess issue with pythonw Message-ID: <1489683501.47.0.952186876903.issue29829@psf.upfronthosting.co.za> New submission from Steve Barnes: When running under pythonw, or pyinstaller with the -w flag, modules that use subprocess calls such as popen, run, etc. will crash if the default `stdout=None, stderr=None` behaviour is used rather than PIPE. This is an obscure problem which is very hard to debug yet there is no warning in the documentation on this. I would like to suggest adding a :warning:`stdout=None, stderr=None` must not be used in any of the calls in this module when running under pythonw due to the lack of sys.stdout & sys.stderr in that case. Please use `stdout=PIPE, stderr=PIPE` instead. A patch against the default branch would be: diff -r 4243df51fe43 Doc/library/subprocess.rst --- a/Doc/library/subprocess.rst Fri Feb 10 14:19:36 2017 +0100 +++ b/Doc/library/subprocess.rst Thu Mar 16 16:56:24 2017 +0000 @@ -33,6 +33,13 @@ function for all use cases it can handle. For more advanced use cases, the underlying :class:`Popen` interface can be used directly. +.. 
warning:: Do not use default parameters on Windows with pythonw. + + As pythonw deletes `sys.stdout` & `sys.stderr` the use of the default + parameters, `stdout=None, stderr=None,`, which defaults to being + `stdout=sys.stdout, stderr=sys.stderr,` may cause unexpected crashes + it is recommended to use `stdout=PIPE, stderr=PIPE,` instead. + The :func:`run` function was added in Python 3.5; if you need to retain compatibility with older versions, see the :ref:`call-function-trio` section. ---------- assignee: docs at python components: Documentation messages: 289722 nosy: Steve Barnes, docs at python priority: normal severity: normal status: open title: Documentation lacks clear warning of subprocess issue with pythonw type: behavior versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 16 18:35:12 2017 From: report at bugs.python.org (Manuel Jacob) Date: Thu, 16 Mar 2017 22:35:12 +0000 Subject: [New-bugs-announce] [issue29830] pyexpat.errors doesn't have __spec__ and __loader__ set Message-ID: <1489703712.88.0.588940526219.issue29830@psf.upfronthosting.co.za> New submission from Manuel Jacob: The same applies to pyexpat.model. It seems like pyexpat is the only builtin module which has submodules (errors, model). Normally, as I understand it, the module gets imported given a spec and the import machinery ensures that this spec ends up in the __spec__ attribute of the module. But in this case only pyexpat gets imported by importlib. The submodules are added when initializing the module. Also, importlib's BuiltinImporter assumes that a builtin module is never a package. Is this reasonable in this case? 
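A quick way to inspect the attributes in question is sketched below (printing rather than asserting the outcome, since a fixed interpreter would set both attributes):

```python
import pyexpat

# errors and model are submodules created while pyexpat initializes.
# On affected versions the attributes below are missing (None here);
# after a fix they would be a ModuleSpec and a loader, respectively.
for mod in (pyexpat.errors, pyexpat.model):
    print(mod.__name__,
          getattr(mod, "__spec__", None),
          getattr(mod, "__loader__", None))
```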
---------- messages: 289737 nosy: mjacob priority: normal severity: normal status: open title: pyexpat.errors doesn't have __spec__ and __loader__ set _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 16 21:43:15 2017 From: report at bugs.python.org (quanyechavshuo) Date: Fri, 17 Mar 2017 01:43:15 +0000 Subject: [New-bugs-announce] [issue29831] os.path.exists seems can not recgnize "~" Message-ID: <1489714995.82.0.360951614677.issue29831@psf.upfronthosting.co.za> New submission from quanyechavshuo: os.system is OK to recognize "~", but os.path.exists cannot recognize "~". E.g.: #1.py: import os os.system("ls -al ~/.zshrc") python3 1.py output: -rw-r--r-- 1 root wheel 5391 3 14 18:12 /var/root/.zshrc #2.py: import os a=os.path.exists("~/.zshrc") print(a) python3 2.py output: False ---------- messages: 289740 nosy: quanyechavshuo priority: normal severity: normal status: open title: os.path.exists seems can not recgnize "~" type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 02:43:17 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 06:43:17 +0000 Subject: [New-bugs-announce] [issue29832] Don't refer to getsockaddrarg in error messages Message-ID: <1489732997.44.0.201686346308.issue29832@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: getsockaddrarg() is an internal C function in the socket module implementation, used in a number of socket methods (bind(), connect(), connect_ex(), sendto(), sendmsg()) for creating a C sock_addr_t structure from a Python tuple. Error messages raised when an incorrect socket address argument is passed to these functions contain the name "getsockaddrarg", despite the fact that it is not directly exposed at the Python level, nor the name of a standard C function.
>>> import socket >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> s.bind(42) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: getsockaddrarg: AF_INET address must be tuple, not int >>> s.bind(()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: getsockaddrarg() takes exactly 2 arguments (0 given) I think that error messages shouldn't refer to the non-existent function "getsockaddrarg()". This issue is part of the more general issue28261. ---------- components: Extension Modules messages: 289745 nosy: Oren Milman, serhiy.storchaka priority: normal severity: normal status: open title: Don't refer to getsockaddrarg in error messages type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 04:31:01 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 08:31:01 +0000 Subject: [New-bugs-announce] [issue29833] Avoid raising OverflowError if possible Message-ID: <1489739461.89.0.615647789001.issue29833@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: OverflowError is usually caused by platform limitations. It is raised on the fence between Python and C, when converting a Python integer to a C integer type. On another platform the same input can be accepted, or can cause a ValueError if the value is out of range. I think we should avoid raising OverflowError if possible. If the function accepts only non-negative integers, it should raise the same ValueError, IndexError or OverflowError for -10**100 as for -1. If the function bounds an integer value to the range from 0 to 100, it should do this also for integers that don't fit in the C integer type. If a large argument means allocating an amount of memory that exceeds the address space, it should raise MemoryError rather than OverflowError. This principle is already supported in parts of the interpreter.
For example: >>> 'abc'[:10**100] 'abc' >>> 'abc'[-10**100:] 'abc' >>> bytes([10**100]) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: bytes must be in range(0, 256) >>> round(1.2, 10**100) 1.2 >>> round(1.2, -10**100) 0.0 >>> math.factorial(-10**100) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: factorial() not defined for negative values This is a meta-issue. Concrete changes will be made in sub-issues. ---------- components: Interpreter Core messages: 289747 nosy: Oren Milman, haypo, mark.dickinson, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Avoid raising OverflowError if possible type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 04:44:59 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 08:44:59 +0000 Subject: [New-bugs-announce] [issue29834] Raise ValueError rather of OverflowError in PyLong_AsUnsignedLong() Message-ID: <1489740299.73.0.174810615045.issue29834@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: OverflowError is raised when a Python integer doesn't fit in a C integer type due to platform limitations. Different platforms have different limits. But in PyLong_AsUnsignedLong() only the upper limit is platform-dependent. Negative integers are never accepted. PyLong_AsUnsignedLong() is used for values that can only be non-negative. I think that ValueError is more appropriate in this case than OverflowError.
---------- components: Interpreter Core messages: 289748 nosy: Oren Milman, serhiy.storchaka priority: normal severity: normal status: open title: Raise ValueError rather of OverflowError in PyLong_AsUnsignedLong() type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 08:08:00 2017 From: report at bugs.python.org (Oren Milman) Date: Fri, 17 Mar 2017 12:08:00 +0000 Subject: [New-bugs-announce] [issue29835] py_blake2*_new_impl produces inconsistent error messages, and raises OverflowError where ValueError might be better Message-ID: <1489752480.08.0.482670845607.issue29835@psf.upfronthosting.co.za> New submission from Oren Milman: 1. Currently, py_blake2s_new_impl and py_blake2b_new_impl might produce inconsistent error messages (note that the first one happens on platforms where sizeof(long) > 4): >>> hashlib.blake2b(leaf_size=1 << 32) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: leaf_size is too large >>> hashlib.blake2b(leaf_size=1 << 1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C unsigned long >>> hashlib.blake2b(depth=256) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: depth must be between 1 and 255 >>> hashlib.blake2b(depth=256 << 1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C long There are similar inconsistent error messages when the function receives big and small out-of-range values for other arguments, too. 2.
It might be better to raise a ValueError in the following cases: >>> hashlib.blake2b(leaf_size=-1) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: can't convert negative value to unsigned int >>> hashlib.blake2b(depth=-1 << 1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C long and maybe also in this case? >>> hashlib.blake2b(depth=1 << 1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C long This might be considered a sub-issue of #29833. Note that solving the issue for leaf_size might be easier after #29834 is resolved, as one could add something like the following to the if block of (leaf_size == (unsigned long) -1 && PyErr_Occurred()): err = PyErr_Occurred(); if (PyErr_GivenExceptionMatches(err, PyExc_OverflowError) || PyErr_GivenExceptionMatches(err, PyExc_ValueError)) { PyErr_SetString(err, "leaf_size must be between 0 and 2**32-1"); } However, depth and other arguments are parsed by py_blake*_new, which is generated by Argument Clinic, so ISTM that solving the issue for them might be harder. (This issue was created while working on #15988, as can be seen in the code review comments of PR 668.)
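The inconsistency is easy to probe without pinning down which exception a given build raises; the sketch below deliberately catches both candidate types, since the exact type and message vary by value and platform, which is the point of the report:

```python
import hashlib

# Probe several out-of-range leaf_size values. Depending on the value
# and the platform's integer widths, either OverflowError or ValueError
# may be raised, with differing messages.
for bad in (-1, 1 << 32, 1 << 1000):
    try:
        hashlib.blake2b(leaf_size=bad)
    except (ValueError, OverflowError) as exc:
        print(bad, type(exc).__name__, exc)
```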
---------- messages: 289757 nosy: Oren Milman, serhiy.storchaka priority: normal severity: normal status: open title: py_blake2*_new_impl produces inconsistent error messages, and raises OverflowError where ValueError might be better type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 11:35:26 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Fri, 17 Mar 2017 15:35:26 +0000 Subject: [New-bugs-announce] [issue29836] Remove nturl2path from test_sundry and amend its docstring Message-ID: <1489764926.92.0.485440810457.issue29836@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: After discussion on [1], this PR removes nturl2path from test_sundry and amends its docstring to include a note on how it is an implementation detail and tested elsewhere. ---------- messages: 289760 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Remove nturl2path from test_sundry and amend its docstring _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 11:40:22 2017 From: report at bugs.python.org (justin) Date: Fri, 17 Mar 2017 15:40:22 +0000 Subject: [New-bugs-announce] [issue29837] python3 pycopg2 import issue on solaris 10 Message-ID: <1489765222.46.0.96777133455.issue29837@psf.upfronthosting.co.za> New submission from justin: Hi, I have installed psycopg2 through pip3, but when I tried to import it, I got the following error. What could be the problem?
help> psycopg2 problem in psycopg2 - ImportError: ld.so.1: python3.3: fatal: relocation error: file /opt/csw/lib/python3.3/site-packages/psycopg2/_psycopg.so: symbol timeradd: referenced symbol not found ------------------------ thanks justin ---------- components: Build messages: 289763 nosy: juwang priority: normal severity: normal status: open title: python3 pycopg2 import issue on solaris 10 type: compile error versions: Python 3.3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 15:39:52 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 19:39:52 +0000 Subject: [New-bugs-announce] [issue29838] Check that sq_length and mq_length return non-negative result Message-ID: <1489779592.24.0.829471920216.issue29838@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The following PR adds several asserts checking that sq_length and mp_length either return a non-negative result or raise an exception. One assert was already present in PySequence_GetItem(). ---------- components: Interpreter Core messages: 289777 nosy: haypo, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Check that sq_length and mq_length return non-negative result type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 15:52:14 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 19:52:14 +0000 Subject: [New-bugs-announce] [issue29839] Avoid raising OverflowError in len() when __len__() returns negative large value Message-ID: <1489780334.51.0.943711507872.issue29839@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently, len() raises ValueError if __len__() returns a small negative integer, and OverflowError if __len__() returns a large negative integer. >>> class NegativeLen: ... 
def __len__(self): ... return -10 ... >>> len(NegativeLen()) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: __len__() should return >= 0 >>> class HugeNegativeLen: ... def __len__(self): ... return -sys.maxsize-10 ... >>> len(HugeNegativeLen()) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: cannot fit 'int' into an index-sized integer The proposed patch makes it always raise ValueError. ---------- components: Interpreter Core messages: 289779 nosy: Oren Milman, haypo, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Avoid raising OverflowError in len() when __len__() returns negative large value type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 16:27:26 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Mar 2017 20:27:26 +0000 Subject: [New-bugs-announce] [issue29840] Avoid raising OverflowError in bool() Message-ID: <1489782446.56.0.355449112547.issue29840@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently, bool() raises OverflowError if __bool__ is not defined and __len__ returns a large value. >>> class A: ... def __len__(self): ... return 1 << 1000 ... >>> bool(A()) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: cannot fit 'int' into an index-sized integer >>> bool(range(1<<1000)) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C ssize_t The proposed patch makes bool() return True if len() raises OverflowError. This is an alternative solution to issue28876.
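The bool()-falls-back-to-__len__ behaviour described above can be sketched with a class whose __len__ returns a huge value; whether bool() raises or (with the proposed change) returns True depends on the interpreter, so both outcomes are handled rather than assumed:

```python
class HugeLen:
    def __len__(self):
        # Far larger than sys.maxsize, so len() cannot represent it
        # as a C Py_ssize_t.
        return 1 << 1000

try:
    # No __bool__ is defined, so bool() falls back to __len__.
    result = bool(HugeLen())
except OverflowError as exc:
    result = exc
print(result)
```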
---------- components: Interpreter Core messages: 289781 nosy: mark.dickinson, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Avoid raising OverflowError in bool() type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 16:33:15 2017 From: report at bugs.python.org (Oren Milman) Date: Fri, 17 Mar 2017 20:33:15 +0000 Subject: [New-bugs-announce] [issue29841] errors raised by bytes and bytearray constructors for invalid size argument Message-ID: <1489782795.95.0.350894370649.issue29841@psf.upfronthosting.co.za> New submission from Oren Milman: Currently (on my Windows 10): >>> bytes(-1 << 1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: cannot fit 'int' into an index-sized integer >>> bytes(-1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: negative count >>> bytes(sys.maxsize + 1) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: cannot fit 'int' into an index-sized integer For the same size arguments, bytearray raises the same errors. Thus, in accordance with #29833 (this is a sub-issue of #29833), for each of the constructors of bytes and bytearray: 1. ValueErrors with the same error message should be raised for any negative size argument (big negative as well as small negative). 2. MemoryError should be raised for any size argument bigger than sys.maxsize. Moreover, currently: >>> bytes(sys.maxsize - 25) Traceback (most recent call last): File "<stdin>", line 1, in <module> MemoryError >>> bytes(sys.maxsize - 24) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: byte string is too large >>> bytes(sys.maxsize) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: byte string is too large For each of these size arguments, bytearray raises a MemoryError.
IMHO, to make the error messages more consistent, the constructor of bytes should raise a MemoryError for any too large size argument, as the constructor of bytearray already does. ---------- components: Interpreter Core messages: 289783 nosy: Oren Milman, serhiy.storchaka priority: normal severity: normal status: open title: errors raised by bytes and bytearray constructors for invalid size argument type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 17 21:32:56 2017 From: report at bugs.python.org (Josh Rosenberg) Date: Sat, 18 Mar 2017 01:32:56 +0000 Subject: [New-bugs-announce] [issue29842] Executor.map should not submit all futures prior to yielding any results Message-ID: <1489800776.56.0.575396558681.issue29842@psf.upfronthosting.co.za> New submission from Josh Rosenberg: As currently implemented, Executor.map is not particularly lazy. Specifically, if given huge argument iterables, it will not begin yielding results until all tasks have been submitted; if given an infinite input iterable, it will run out of memory before yielding a single result. This makes it unusable as a drop in replacement for plain map, which, being lazy, handles infinite iterables just fine, and produces results promptly. Proposed change makes Executor.map begin yielding results for large iterables without submitting every task up front. As a reasonable default, I have it submit a number of tasks equal to twice the number of workers, submitting a new task immediately after getting results for the next future in line, before yielding the result (to ensure the number of outstanding futures stays constant). A new keyword-only argument, prefetch, is provided to explicitly specify how many tasks should be queued above and beyond the number of workers. Working on submitting pull request now. 
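The lazier submission strategy described above can be sketched as a generator; this is not the actual PR, and both the `prefetch` sizing and the use of the private `_max_workers` attribute are illustrative assumptions:

```python
import collections
import itertools
from concurrent.futures import ThreadPoolExecutor

def lazy_map(executor, fn, iterable, prefetch=2):
    """Yield fn(x) in order, keeping only a bounded number of
    outstanding futures instead of submitting every task up front."""
    args = iter(iterable)
    # Assumed sizing: number of workers plus a small prefetch buffer.
    # (_max_workers is a private attribute, used here for brevity.)
    initial = executor._max_workers + prefetch
    futures = collections.deque(
        executor.submit(fn, a) for a in itertools.islice(args, initial))
    while futures:
        result = futures.popleft().result()
        # Top up with one new task per yielded result, keeping the
        # number of outstanding futures constant.
        for a in itertools.islice(args, 1):
            futures.append(executor.submit(fn, a))
        yield result

with ThreadPoolExecutor(max_workers=2) as ex:
    # Works on an infinite input, which the current Executor.map cannot
    # handle: it would try to submit every task before yielding.
    squares = lazy_map(ex, lambda x: x * x, itertools.count())
    print(list(itertools.islice(squares, 5)))  # [0, 1, 4, 9, 16]
```

Because the deque is consumed from the left and each `.result()` call blocks on the oldest future, results are yielded in input order, matching plain `map`.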
---------- components: Library (Lib) messages: 289789 nosy: josh.r priority: normal severity: normal status: open title: Executor.map should not submit all futures prior to yielding any results versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 05:56:32 2017 From: report at bugs.python.org (Oren Milman) Date: Sat, 18 Mar 2017 09:56:32 +0000 Subject: [New-bugs-announce] [issue29843] errors raised by ctypes.Array for invalid _length_ attribute Message-ID: <1489830992.7.0.11223013414.issue29843@psf.upfronthosting.co.za> New submission from Oren Milman: With regard to ctypes.Array, currently: >>> from ctypes import * >>> class T(Array): ... _type_ = c_int ... _length_ = -1 ... Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: array too large >>> class T(Array): ... _type_ = c_int ... _length_ = -1 << 1000 ... Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: The '_length_' attribute is too large Obviously, here the _length_ attribute is too small, not too large. Thus, the error messages should be changed to be more accurate (optimally, for any negative _length_, the error message should be the same). Moreover, in accordance with #29833 (this is a sub-issue of #29833), ValueError should be raised for any negative _length_ attribute (instead of OverflowError). Also, note that currently, when _length_ == 0, no error is raised. ISTM that a ctypes Array of length 0 is useless, so maybe we should raise a ValueError in this case too?
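For contrast, a well-formed Array subclass is sketched below, followed by the failure mode the report discusses; both exception types are caught, since which one is raised for a negative _length_ depends on the Python version:

```python
from ctypes import Array, c_int

class IntArray4(Array):
    _type_ = c_int
    _length_ = 4  # a valid, positive fixed length

arr = IntArray4(10, 20, 30, 40)
print(arr[2])  # 30

try:
    class Broken(Array):
        _type_ = c_int
        _length_ = -1  # the invalid case from the report
except (ValueError, OverflowError) as exc:
    # Versions from the report raise OverflowError ("array too large");
    # a ValueError would describe a negative length more accurately.
    print(type(exc).__name__)
```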
---------- components: ctypes messages: 289800 nosy: Oren Milman, serhiy.storchaka priority: normal severity: normal status: open title: errors raised by ctypes.Array for invalid _length_ attribute type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 06:03:53 2017 From: report at bugs.python.org (Paul "TBBle" Hampson) Date: Sat, 18 Mar 2017 10:03:53 +0000 Subject: [New-bugs-announce] [issue29844] Windows Python installers not installing DLL to System32/SysWOW64 Message-ID: <1489831433.22.0.028475249754.issue29844@psf.upfronthosting.co.za> New submission from Paul "TBBle" Hampson: As noted in https://github.com/python/cpython/tree/master/Tools/msi === When installed for all users, the following files are installed to either "%SystemRoot%\System32" or "%SystemRoot%\SysWOW64" as appropriate. For the current user, they are installed in the Python install directory. .\python3x.dll The core interpreter .\python3.dll The stable ABI reference === However, at least with the Python 3.5.3 and Python 3.6.0 installers from the official download page, even an all-users install puts the relevant DLLs in the installation directory instead. This is the both with the command-line option and checking the relevant box during installation. I've also confirmed that it happens whether you add Python to the path or not. The latter is my use-case as I have multiple versions of Python installed and use the Python Launcher for Windows to select a version to run or virtualenv to build. 
Looking at the source, I suspect this feature was completely lost when the MSI build system was rewritten in commit https://github.com/python/cpython/commit/bb24087a2cbfb186b540cc71a74ec8c39c1ebe3a (formerly https://hg.python.org/cpython/rev/e7dbef447157) for issue #23260, which removed all references to SystemFolder or System64Folder. ---------- messages: 289801 nosy: TBBle priority: normal severity: normal status: open title: Windows Python installers not installing DLL to System32/SysWOW64 versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 09:52:21 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 18 Mar 2017 13:52:21 +0000 Subject: [New-bugs-announce] [issue29845] Mark tests that use _testcapi as CPython-only Message-ID: <1489845141.37.0.776933976205.issue29845@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Most tests that use _testcapi are optional or marked as CPython-only. But there are a few tests that aren't. The proposed patch fixes this. ---------- components: Tests messages: 289812 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Mark tests that use _testcapi as CPython-only type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 10:35:07 2017 From: report at bugs.python.org (Adam Stewart) Date: Sat, 18 Mar 2017 14:35:07 +0000 Subject: [New-bugs-announce] [issue29846] ImportError: No module named _io Message-ID: <1489847707.41.0.897250288335.issue29846@psf.upfronthosting.co.za> New submission from Adam Stewart: I'm trying to build Python 2.7.13 with clang on macOS 10.12.3 with the Spack package manager, but it fails to build the _io module.
The exact error message from the build log can be seen here: https://github.com/LLNL/spack/issues/3478#issuecomment-287548431 This only seems to occur for me on macOS; I can't reproduce it on Linux. I checked my environment, but I don't have any Python-related environment variables, nor do I have any variables like DYLD_LIBRARY_PATH set that could cause problems. I'm a developer for the Spack package manager, so any problems that you help me solve will greatly benefit our community. I have attached the build log. Please let me know if there is any more information I can provide you with. ---------- components: Build files: spack-build.out messages: 289815 nosy: ajstewart priority: normal severity: normal status: open title: ImportError: No module named _io type: compile error versions: Python 2.7 Added file: http://bugs.python.org/file46734/spack-build.out _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 11:22:47 2017 From: report at bugs.python.org (Jelle Zijlstra) Date: Sat, 18 Mar 2017 15:22:47 +0000 Subject: [New-bugs-announce] [issue29847] Path takes and ignores **kwargs Message-ID: <1489850567.46.0.646194395205.issue29847@psf.upfronthosting.co.za> New submission from Jelle Zijlstra: pathlib.Path.__new__ takes **kwargs, but doesn't do anything with them (https://github.com/python/cpython/blob/master/Lib/pathlib.py#L979). This doesn't appear to be documented. This feature should presumably be either documented or removed (probably removed unless I'm missing some reason for having it). 
Brief discussion on a typeshed PR at https://github.com/python/typeshed/pull/991#discussion-diff-105813974R100 ---------- messages: 289817 nosy: Jelle Zijlstra, pitrou priority: normal severity: normal status: open title: Path takes and ignores **kwargs _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 11:51:37 2017 From: report at bugs.python.org (Decorater) Date: Sat, 18 Mar 2017 15:51:37 +0000 Subject: [New-bugs-announce] [issue29848] Cannot use Decorators of the same class that requires an instance of itself to change variables in that class. Message-ID: <1489852297.0.0.839012781566.issue29848@psf.upfronthosting.co.za> New submission from Decorater: So, many people rely on decorators in a class that accept an instance of that class, like so: class ExampleClass: """Example class with an example decorator that cannot be used.""" def __init__(self): self.list_of_items = [] def add_item(self, item): self.list_of_items.append(item) @self.add_item("test_item") def test_item(): print("Example function of ExampleClass that demonstrates the inability to use decorators with self passed to it.") Many people fall for this on classes and then they are like "Why is it not letting me do this?". As such, there has got to be a way to somehow support something like this in Python 3.7, as it could be useful on classes like this. The class above is an example; however, I know of a library out there that allows you to import from a file and also allows you to use the same thing (that is imported) that would be bound to "self.[whatever it is called in the class]". As such, people try to avoid that import and use the one in "self.[whatever it is called in the class]" to try to fit their needs (which ends up failing for them).
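The reason `@self.add_item(...)` cannot work: while the class body executes there is no instance (and no class) yet, so `self` is simply an undefined name at decoration time. A common workaround is to register functions with a plain class-level decorator as the body runs, then build per-instance state in `__init__`; the `register`/`registry` names below are illustrative, not from the report:

```python
class ExampleClass:
    registry = []  # filled in while the class body executes

    def register(func, _registry=registry):
        # At class-body time this is just a plain function, so it is
        # usable as a decorator: no instance is involved yet.
        _registry.append(func)
        return func

    def __init__(self):
        # Per-instance copy of everything registered at class level.
        self.list_of_items = list(self.registry)

    @register
    def test_item(self):
        return "test_item ran"

inst = ExampleClass()
print(len(inst.list_of_items))       # 1
print(inst.list_of_items[0](inst))   # test_item ran
```

The key design point is that registration happens once, at class creation, while anything that genuinely needs `self` is deferred to `__init__` or to the call site.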
---------- messages: 289818 nosy: Decorater priority: normal severity: normal status: open title: Cannot use Decorators of the same class that requires an instance of itself to change variables in that class. versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 14:59:56 2017 From: report at bugs.python.org (Xiang Zhang) Date: Sat, 18 Mar 2017 18:59:56 +0000 Subject: [New-bugs-announce] [issue29849] fix memory in import_from Message-ID: <1489863596.03.0.14582514818.issue29849@psf.upfronthosting.co.za> New submission from Xiang Zhang: import_from has suffered from a memory leak since #29546. I propose a PR to fix it. :-) ---------- messages: 289825 nosy: barry, brett.cannon, mbussonn, xiang.zhang priority: normal severity: normal stage: patch review status: open title: fix memory in import_from versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 18:02:19 2017 From: report at bugs.python.org (Gabriel POTTER) Date: Sat, 18 Mar 2017 22:02:19 +0000 Subject: [New-bugs-announce] [issue29850] file access, other drives Message-ID: <1489874539.85.0.804795402296.issue29850@psf.upfronthosting.co.za> New submission from Gabriel POTTER: If Python 3 is installed on another drive (for instance D:/), then it cannot access any C:/ files, but can access D:/ files. I use: open("C:/path/....") The same function did work under Python 2.7 but now doesn't anymore. That means that os.path.isfile, open(file)... do not work. Any help?
---------- components: Windows messages: 289830 nosy: Gabriel POTTER, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: file access, other drives type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 18 20:09:56 2017 From: report at bugs.python.org (Richard Cooper) Date: Sun, 19 Mar 2017 00:09:56 +0000 Subject: [New-bugs-announce] [issue29851] importlib.reload references None object Message-ID: <1489882196.37.0.503568145678.issue29851@psf.upfronthosting.co.za> New submission from Richard Cooper: importlib.reload doesn't work; it gives an error about NoneType having no name attribute. See the attached simple repro test case. When run, it yields the following [disappointing] result. I'm running Python 3.6.0_1 (installed from brew) on OSX 10.12.3 ``` iMac:python_package_loader cooper$ python3 bug.py module loaded Traceback (most recent call last): File "bug.py", line 14, in <module> importlib.reload(sys.modules[moduleName]) File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/__init__.py", line 166, in reload _bootstrap._exec(spec, module) File "<frozen importlib._bootstrap>", line 589, in _exec AttributeError: 'NoneType' object has no attribute 'name' ``` ---------- components: Library (Lib) files: bug.py messages: 289834 nosy: Richard Cooper priority: normal severity: normal status: open title: importlib.reload references None object type: crash Added file: http://bugs.python.org/file46737/bug.py
<1489922927.22.0.81107726334.issue29852@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Many methods in the io module accept int and None and convert the argument to Py_ssize_t. The proposed patch adds a common Argument Clinic converter for that case. The Py_ssize_t converter now takes the accept argument, which can be {int} (the default) or {int, NoneType}. In the latter case None is an acceptable value, meaning the default value is used. A similar converter was previously used locally in the io module; now it is also used in the mmap module. Examples: _io.BytesIO.read size: Py_ssize_t(accept={int, NoneType}) = -1 / _io.BytesIO.truncate size: Py_ssize_t(accept={int, NoneType}, c_default="self->pos") = None / ---------- components: Argument Clinic, IO messages: 289847 nosy: benjamin.peterson, larry, serhiy.storchaka, stutzbach priority: normal severity: normal stage: patch review status: open title: Argument Clinic: add common converter to Py_ssize_t that accepts None type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 19 11:09:05 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sun, 19 Mar 2017 15:09:05 +0000 Subject: [New-bugs-announce] [issue29853] Improve exception messages for remove and index methods Message-ID: <1489936145.67.0.822710231299.issue29853@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Currently, there's a discrepancy in the exception reporting for the `.index` and `.remove` methods of many objects: For arrays: array.remove(val) -> ValueError: array.remove(x): x not in list array.index(val) -> ValueError: array.index(x): x not in list Not only is always printing "x not in list" uninformative, it's also wrong, since it isn't a list.
For tuples: tuple.index(val) -> ValueError: tuple.index(x): x not in tuple For lists: list.remove(val) -> ValueError: list.remove(x): x not in list list.index(val) produces a more informative message: ValueError: <val> is not in list For deques: deque.remove(val) -> ValueError: deque.remove(x): x not in deque similarly to lists, `deque.index(val)` prints the actual argument supplied. I'm not sure if there's valid reasoning behind not providing the repr of the arguments in all `remove` methods but, if there isn't, I'd like to suggest changing all of them to use PyErr_Format and produce more informative messages: array.remove(val) -> ValueError: <val> is not in array array.index(val) -> ValueError: <val> is not in array tuple.index(val) -> ValueError: <val> is not in tuple list.remove(val) -> ValueError: <val> is not in list deque.remove(val) -> ValueError: <val> is not in deque ---------- messages: 289854 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Improve exception messages for remove and index methods type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 19 18:15:24 2017 From: report at bugs.python.org (Nir Soffer) Date: Sun, 19 Mar 2017 22:15:24 +0000 Subject: [New-bugs-announce] [issue29854] Segfault when readline history is more than 2 * history size Message-ID: <1489961724.11.0.51675334909.issue29854@psf.upfronthosting.co.za> New submission from Nir Soffer: GNU readline lets the user limit the history size by setting: $ cat ~/.inputrc set history-size 1000 So I cooked this test script: $ cat history.py from __future__ import print_function import readline readline.read_history_file(".history") print("current_history_length", readline.get_current_history_length()) print("history_length", readline.get_history_length()) print("history_get_item(1)", readline.get_history_item(1)) print("history_get_item(1000)",
readline.get_history_item(1000)) input() readline.write_history_file(".history") And this history file generator: $ cat make-history for i in range(2000): print("%04d" % i) Generating .history file with 2000 entries: $ python3 make-history > .history Finally running the test script: $ python3 history.py current_history_length 1000 history_length -1 history_get_item(1) None history_get_item(1000) None please crash Segmentation fault (core dumped) So we have a few issues here: - segfault - history_get_item returns None for both 1 and 1000 although we have 1000 items in history - history_length is always wrong (-1), instead of the expected value (1000), set in .inputrc Running with gdb we see: $ gdb python3 GNU gdb (GDB) Fedora 7.12.1-46.fc25 Copyright (C) 2017 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from python3...Reading symbols from /usr/lib/debug/usr/libexec/system-python.debug...done. done. (gdb) run history.py Starting program: /usr/bin/python3 history.py [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". current_history_length 1000 history_length -1 history_get_item(1) None history_get_item(1000) None crash? Program received signal SIGSEGV, Segmentation fault.
0x00007fffeff60fab in call_readline (sys_stdin=, sys_stdout=, prompt=) at /usr/src/debug/Python-3.5.2/Modules/readline.c:1281 1281 line = (const char *)history_get(length)->line; (gdb) list 1276 if (using_libedit_emulation) { 1277 /* handle older 0-based or newer 1-based indexing */ 1278 line = (const char *)history_get(length + libedit_history_start - 1)->line; 1279 } else 1280 #endif /* __APPLE__ */ 1281 line = (const char *)history_get(length)->line; 1282 else 1283 line = ""; 1284 if (strcmp(p, line)) 1285 add_history(p); So we assume that history_get(length) returns non-null when length > 0, but this assumption is not correct. In 2 other usages in Modules/readline.c, we validate that history_get() return value is not null before using it. If we change the .history contents to 1999 lines, we get: $ python3 make-history | head -1999 > .history $ python3 history.py current_history_length 1000 history_length -1 history_get_item(1) None history_get_item(1000) 0999 crash? $ wc -l .history 1000 .history $ head -1 .history 1000 $ tail -1 .history crash? So now it does not crash, but item 1 is still None. Trying again with history file with 1000 entries: $ python3 make-history | head -1000 > .history $ python3 history.py current_history_length 1000 history_length -1 history_get_item(1) 0000 history_get_item(1000) 0999 looks fine! $ wc -l .history 1000 .history $ head -1 history head: cannot open 'history' for reading: No such file or directory $ head -1 .history 0001 $ tail -1 .history looks fine! Finally trying with 1001 items: $ python3 make-history | head -1001 > .history $ python3 history.py current_history_length 1000 history_length -1 history_get_item(1) None history_get_item(1000) 0999 And item 1 is wrong. I got same results with python 2.7, 3.5 and master on fedora 25. 
The root cause seems to be a readline bug when the history file is bigger than the history-size in .inputrc, but I could not yet find the readline library documentation, so I don't know if the issue is incorrect usage of the readline APIs, or a bug in readline. ---------- components: Extension Modules messages: 289865 nosy: nirs priority: normal severity: normal status: open title: Segfault when readline history is more than 2 * history size versions: Python 2.7, Python 3.5, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 19 19:37:32 2017 From: report at bugs.python.org (assume_away) Date: Sun, 19 Mar 2017 23:37:32 +0000 Subject: [New-bugs-announce] [issue29855] The traceback compounding of RecursionError fails to work with __get__ Message-ID: <1489966652.17.0.834127602926.issue29855@psf.upfronthosting.co.za> New submission from assume_away: class Property: def __init__(self, getter): self.getter = getter def __get__(self, instance, cls): return self.getter(cls if instance is None else instance) class MyClass: @Property def something(cls): return cls.something Calling MyClass.something will show all 990+ RecursionError messages.
---------- messages: 289867 nosy: assume_away priority: normal severity: normal status: open title: The traceback compounding of RecursionError fails to work with __get__ type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 19 22:52:31 2017 From: report at bugs.python.org (Raphael McSinyx) Date: Mon, 20 Mar 2017 02:52:31 +0000 Subject: [New-bugs-announce] [issue29856] curses online documentation typo Message-ID: <1489978351.51.0.185284735277.issue29856@psf.upfronthosting.co.za> New submission from Raphael McSinyx: I think there is a typo in curses online documentation about key constants: https://docs.python.org/3.7/library/curses.html#constants The key KEY_SEXIT is described as `Shifted Dxit' while I think that should be `Shifted Exit'. ---------- assignee: docs at python components: Documentation messages: 289868 nosy: McSinyx, docs at python priority: normal severity: normal status: open title: curses online documentation typo versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 02:33:43 2017 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 20 Mar 2017 06:33:43 +0000 Subject: [New-bugs-announce] [issue29857] Provide `sys._raw_argv` for host application's command line arguments Message-ID: <1489991623.12.0.90529490385.issue29857@psf.upfronthosting.co.za> New submission from Nick Coghlan: Issue 14208 was ultimately resolved through an import system specific solution, with PEP 451 making the module name passed to `python -m` available as `__main__.__spec__.name`. 
However, there are other situations where it may be useful to offer an implementation-dependent attribute in the `sys` module that provides access to a copy of the host application's raw `argv` details, rather than the filtered `sys.argv` details that are left after the host application's command line processing has been completed. In the case of CPython, where `sys.argv` represents the arguments to the Python level __main__ function, `sys._raw_argv` would be a copy of the argv argument to the C level main() or wmain() function (as appropriate for the platform). ---------- messages: 289873 nosy: ncoghlan priority: normal severity: normal status: open title: Provide `sys._raw_argv` for host application's command line arguments type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 05:19:13 2017 From: report at bugs.python.org (anton-ryzhov) Date: Mon, 20 Mar 2017 09:19:13 +0000 Subject: [New-bugs-announce] [issue29858] inspect.signature includes bound argument for wrappers around decorated bound methods Message-ID: <1490001553.25.0.894325859627.issue29858@psf.upfronthosting.co.za> New submission from anton-ryzhov: If we wrap a bound method which is itself a wrapper around a function, `inspect.signature` will not do `skip_bound_arg`. It will use `inspect.unwrap` and go straight past the bound method, from the outer function to the inner one.
Reproduce: ``` import functools, inspect def decorator(func): @functools.wraps(func) def inner(*args): return func(*args) return inner class Foo(object): @decorator def bar(self, testarg): pass f = Foo() baz = decorator(f.bar) assert inspect.signature(baz) == inspect.signature(f.bar) ``` ---------- components: Library (Lib) messages: 289879 nosy: anton-ryzhov priority: normal severity: normal status: open title: inspect.signature includes bound argument for wrappers around decorated bound methods versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 08:08:19 2017 From: report at bugs.python.org (Daniel Birnstiel) Date: Mon, 20 Mar 2017 12:08:19 +0000 Subject: [New-bugs-announce] [issue29859] Return code of pthread_* in thread_pthread.h is not used for perror Message-ID: <1490011699.61.0.174999793884.issue29859@psf.upfronthosting.co.za> New submission from Daniel Birnstiel: Python/thread_pthread.h:145 defines the CHECK_STATUS macro used for printing error messages in case any of the calls fail. CHECK_STATUS uses perror for formatting an error message, which relies on the global errno being set (see man perror). Since the pthread functions return their status code instead of setting errno (which might not even work in threaded environments), no additional information is displayed. See for example the output produced by PyThread_release_lock: pthread_mutex_lock[3]: Undefined error: 0 pthread_cond_signal: Undefined error: 0 pthread_mutex_unlock[3]: Undefined error: 0 The correct solution would be to use strerror(status) in order to show the proper message.
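A Python-level sketch of what the reporter proposes (check_status here is a hypothetical stand-in for the C CHECK_STATUS macro, with os.strerror playing the role of C strerror):

```python
import errno
import os

def check_status(status, call_name):
    # Mirror the proposed C fix: format the message from the status code
    # the pthread_* call returned, instead of perror(), which reads the
    # unrelated global errno and prints "Undefined error: 0".
    if status != 0:
        return "%s: %s" % (call_name, os.strerror(status))
    return None
```

With this shape, check_status(errno.EINVAL, "pthread_mutex_lock") carries the actual text for EINVAL rather than a message derived from a stale errno.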
---------- components: Interpreter Core messages: 289884 nosy: Birne94 priority: normal severity: normal status: open title: Return code of pthread_* in thread_pthread.h is not used for perror type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 10:32:53 2017 From: report at bugs.python.org (Lord Anton Hvornum) Date: Mon, 20 Mar 2017 14:32:53 +0000 Subject: [New-bugs-announce] [issue29860] smtplib.py doesn't capitalize EHLO. Message-ID: <1490020373.29.0.308540627587.issue29860@psf.upfronthosting.co.za> New submission from Lord Anton Hvornum: ``` File "mail.py", line 9, in <module> smtp_server.starttls(context) File "/usr/lib/python3.6/smtplib.py", line 748, in starttls self.ehlo_or_helo_if_needed() File "/usr/lib/python3.6/smtplib.py", line 600, in ehlo_or_helo_if_needed (code, resp) = self.helo() File "/usr/lib/python3.6/smtplib.py", line 429, in helo (code, msg) = self.getreply() File "/usr/lib/python3.6/smtplib.py", line 393, in getreply raise SMTPServerDisconnected("Connection unexpectedly closed") ``` This happens due to the server expecting commands (like EHLO, STARTTLS) to be strictly upper-case. And when the SMTP command isn't, it drops us. This is a rare edge case since most mail servers handle shady client data in numerous different ways (such as gmail never sending QUIT for instance). I don't know of a work-around for this and the documentation states `EHLO` is being sent (https://docs.python.org/3/library/smtplib.html), so I guess the lib assumes that's the case as well. ---------- components: Library (Lib) messages: 289886 nosy: Lord Anton Hvornum priority: normal severity: normal status: open title: smtplib.py doesn't capitalize EHLO.
versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 14:14:14 2017 From: report at bugs.python.org (Antoine Pitrou) Date: Mon, 20 Mar 2017 18:14:14 +0000 Subject: [New-bugs-announce] [issue29861] multiprocessing Pool keeps objects (tasks, args, results) alive too long Message-ID: <1490033654.49.0.302428807584.issue29861@psf.upfronthosting.co.za> New submission from Antoine Pitrou: The various workers in multiprocessing.Pool keep a reference to the last encountered task or task result. This means some data may be kept alive even after the caller is done with them, as long as some other task doesn't clobber the relevant variables. Specifically, Pool._handle_tasks(), Pool._handle_results() and the toplevel worker() function fail to clear references at the end of each loop. Originally reported at https://github.com/dask/distributed/issues/956 ---------- components: Library (Lib) messages: 289894 nosy: davin, pitrou, sbt priority: normal severity: normal stage: needs patch status: open title: multiprocessing Pool keeps objects (tasks, args, results) alive too long type: resource usage versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 15:08:48 2017 From: report at bugs.python.org (Brett Cannon) Date: Mon, 20 Mar 2017 19:08:48 +0000 Subject: [New-bugs-announce] [issue29862] Fix grammar in importlib.reload() exception Message-ID: <1490036928.42.0.0658763709681.issue29862@psf.upfronthosting.co.za> New submission from Brett Cannon: https://github.com/python/cpython/blob/05f53735c8912f8df1077e897f052571e13c3496/Lib/importlib/__init__.py#L140 "reload() argument must be module" (missing the "a").
---------- assignee: brett.cannon components: Library (Lib) messages: 289901 nosy: brett.cannon priority: normal severity: normal status: open title: Fix grammar in importlib.reload() exception versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 20 18:33:19 2017 From: report at bugs.python.org (Brett Cannon) Date: Mon, 20 Mar 2017 22:33:19 +0000 Subject: [New-bugs-announce] [issue29863] Add a COMPACT constant to the json module Message-ID: <1490049199.07.0.61664177999.issue29863@psf.upfronthosting.co.za> New submission from Brett Cannon: In issue #29540 there was a suggestion to add a `compact` argument to json.dump() and json.dumps(). That was eventually rejected as adding complexity to an API that's already messy. But in GH-72 someone added a COMPACT constant to the json module which gets a similar effect as a `compact` argument but without expanding any APIs. Unfortunately I think the constant proposal got lost in discussion of adding the `compact` argument, so I'm opening a new issue to make a final decision as to whether we should accept/reject the COMPACT constant idea. ---------- components: Library (Lib) messages: 289905 nosy: brett.cannon, ezio.melotti, rhettinger priority: normal severity: normal status: open title: Add a COMPACT constant to the json module type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 02:02:13 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 21 Mar 2017 06:02:13 +0000 Subject: [New-bugs-announce] [issue29864] Misuse of Py_SIZE in dict.fromkey() Message-ID: <1490076133.0.0.154466123244.issue29864@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: In the dict.fromkeys() implementation, when a dict is passed, its size is determined by using the Py_SIZE macro.
This is not correct since PyDictObject is not a PyVarObject (but see issue28988). ---------- components: Interpreter Core messages: 289915 nosy: inada.naoki, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Misuse of Py_SIZE in dict.fromkey() type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 02:19:52 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 21 Mar 2017 06:19:52 +0000 Subject: [New-bugs-announce] [issue29865] Use PyXXX_GET_SIZE macros rather than Py_SIZE for concrete types Message-ID: <1490077192.4.0.970659730607.issue29865@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The proposed patch replaces Py_SIZE with PyXXX_GET_SIZE macros for concrete types. For details see https://mail.python.org/pipermail/python-dev/2017-March/147628.html . Py_SIZE is still used in concrete type implementations and when setting the new size: `Py_SIZE(obj) = newsize`. ---------- components: Extension Modules, Interpreter Core messages: 289916 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use PyXXX_GET_SIZE macros rather than Py_SIZE for concrete types type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 02:25:21 2017 From: report at bugs.python.org (Decorater) Date: Tue, 21 Mar 2017 06:25:21 +0000 Subject: [New-bugs-announce] [issue29866] Added datetime_diff to datetime.py. Message-ID: <1490077521.83.0.664563461017.issue29866@psf.upfronthosting.co.za> New submission from Decorater: The datetime_diff function compares two datetime objects and returns the time since the prior datetime object (based on the current datetime object) in a way that is readable by humans.
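The patch itself is attached to the tracker issue; as a rough illustration only, a helper with this shape could look like the following (the function name comes from the issue title, but the signature and output format here are guesses, not the submitted code):

```python
from datetime import datetime

def datetime_diff(current, prior):
    # Hypothetical sketch: describe the gap between two datetimes using
    # the largest whole unit (days, hours, minutes, seconds).
    seconds = int(abs((current - prior).total_seconds()))
    for name, size in (("day", 86400), ("hour", 3600),
                       ("minute", 60), ("second", 1)):
        count = seconds // size
        if count:
            return "%d %s%s" % (count, name, "" if count == 1 else "s")
    return "now"
```

Under these assumptions, comparing two datetimes a day apart would yield "1 day" rather than a raw timedelta repr.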
This is useful when one might need to compare two datetime objects, keeping one's codebase as small as possible (to ensure fast Python code), and to reduce 'hacks' in their code to make the comparison more readable by humans. The GitHub pull request comes with changes to datetime.rst as well. Note: This is currently targeting 3.7; however, if desired, it could be backported into 3.5 and 3.6. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 289917 nosy: Decorater, docs at python priority: normal severity: normal status: open title: Added datetime_diff to datetime.py. versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 03:53:03 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 21 Mar 2017 07:53:03 +0000 Subject: [New-bugs-announce] [issue29867] Add asserts in PyXXX_GET_SIZE macros Message-ID: <1490082783.53.0.325668446272.issue29867@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The proposed patch adds asserts for checking the type in the macros PyTuple_GET_SIZE, PyList_GET_SIZE and PySet_GET_SIZE. This can help to find misuse of these macros. Asserts are already used in the macros PyBytes_GET_SIZE, PyByteArray_GET_SIZE, PyUnicode_GET_SIZE and PyDict_GET_SIZE. See also the discussion on Python-Dev: https://mail.python.org/pipermail/python-dev/2017-March/147628.html . This change can break code that uses these macros for setting the size. For example, one place in odictobject.c. But I expect that such cases are rare. And all these macros are not in the limited API.
---------- components: Interpreter Core messages: 289927 nosy: haypo, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add asserts in PyXXX_GET_SIZE macros type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 18:59:06 2017 From: report at bugs.python.org (John Wiseman) Date: Tue, 21 Mar 2017 22:59:06 +0000 Subject: [New-bugs-announce] [issue29868] multiprocessing.dummy missing cpu_count Message-ID: <1490137146.46.0.712424049962.issue29868@psf.upfronthosting.co.za> New submission from John Wiseman: The documentation for the multiprocessing.dummy module says that it "replicates the API of multiprocessing." In Python 2.7, I can import multiprocessing.dummy and use the cpu_count function: $ python2 Python 2.7.12 (default, Oct 29 2016, 19:21:06) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import multiprocessing.dummy as mp >>> mp.cpu_count() 8 But in Python 3.6, multiprocessing.dummy is missing cpu_count: $ python3 Python 3.6.0 (default, Mar 21 2017, 13:27:21) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import multiprocessing.dummy as mp >>> mp.cpu_count() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'multiprocessing.dummy' has no attribute 'cpu_count' I would expect that multiprocessing.dummy would have cpu_count, since that function is available in multiprocessing, and it's there in Python 2.7. It looks like it was removed in commit 04842a8, "Remove unused or redundant imports in concurrent.futures and multiprocessing" (Florent Xicluna 5 years ago). Maybe the removal was inadvertent?
I realize there are several workarounds, but based on the documentation and the behavior of previous versions, I wouldn't have expected this breaking change. ---------- components: Library (Lib) messages: 289950 nosy: johnwiseman priority: normal severity: normal status: open type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 20:12:25 2017 From: report at bugs.python.org (Nevada Sanchez) Date: Wed, 22 Mar 2017 00:12:25 +0000 Subject: [New-bugs-announce] [issue29869] Underscores in numeric literals not supported in lib2to3. Message-ID: <1490141545.37.0.710126019998.issue29869@psf.upfronthosting.co.za> New submission from Nevada Sanchez: The following should work in Python 3.6 ``` from lib2to3.pgen2 import driver from lib2to3 import pytree from lib2to3 import pygram _GRAMMAR_FOR_PY3 = pygram.python_grammar_no_print_statement.copy() parser_driver = driver.Driver(_GRAMMAR_FOR_PY3, convert=pytree.convert) tree = parser_driver.parse_string('100_1\n', debug=False) ``` ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 289951 nosy: nevsan priority: normal severity: normal status: open title: Underscores in numeric literals not supported in lib2to3. type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 21:10:31 2017 From: report at bugs.python.org (Alexander Mohr) Date: Wed, 22 Mar 2017 01:10:31 +0000 Subject: [New-bugs-announce] [issue29870] ssl socket leak Message-ID: <1490145031.23.0.122177607626.issue29870@psf.upfronthosting.co.za> New submission from Alexander Mohr: When upgrading to 3.5.3 we noticed that the requests module was leaking memory rather quickly. This led to me logging the issue: https://github.com/kennethreitz/requests/issues/3933. 
After more investigation I've found that the leak is caused by the raw python SSL sockets. I've created a test file here: https://gist.github.com/thehesiod/ef79dd77e2df7a0a7893dfea6325d30a which allows you to reproduce the leak with raw python ssl socket (CLIENT_TYPE = ClientType.RAW), aiohttp or requests. They all leak in a similar way due to their use of the python SSL socket objects. I tried tracing the memory usage with tracemalloc but nothing interesting popped up so I believe this is a leak in the native code. A docker cloud image is available here: amohr/testing:stretch_request_leak based on: ``` FROM debian:stretch COPY request_https_leak.py /tmp/request_https_leak.py RUN apt-get update && \ apt-get install -y python3.5 python3-pip git RUN python3 -m pip install requests git+git://github.com/thehesiod/pyca.git@fix-py3#egg=calib setproctitle requests psutil ``` I believe this issue was introduced in python 3.5.3 as we're not seeing the leak with 3.5.2. Also I haven't verified yet if this happens on non-debian systems. I'll update if I have any more info. I believe 3.6 is similarly impacted but am not 100% certain yet. ---------- assignee: christian.heimes components: SSL messages: 289954 nosy: christian.heimes, thehesiod priority: normal severity: normal status: open title: ssl socket leak type: resource usage versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 21:33:16 2017 From: report at bugs.python.org (Josh Rosenberg) Date: Wed, 22 Mar 2017 01:33:16 +0000 Subject: [New-bugs-announce] [issue29871] Enable optimized locks on Windows Message-ID: <1490146396.76.0.76409234296.issue29871@psf.upfronthosting.co.za> New submission from Josh Rosenberg: Kristjan wrote improved locking primitives in #15038 that use the new (in Vista) SRWLock and Condition Variable APIs.
SRWLocks (used in exclusive mode only) replace Critical Sections, which are slower than SRWLocks and provide no features we use that might justify them. Condition Variables replace Semaphores, where the former is user mode (cheap) and the latter kernel mode (expensive). These changes remain disabled by default. Given that CPython dropped support for pre-Vista OSes in 3.5, I propose enabling the faster locking primitives by default. The PR I'll submit leaves the condition variable emulation code in the source so it's available to people who might try to build XP/WS03 compatible code; it just tweaks the define so it defaults to using the Vista+ APIs. Based on the numbers from #15038, we should expect to see a significant improvement in speed. ---------- components: Interpreter Core, Windows messages: 289955 nosy: josh.r, kristjan.jonsson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable optimized locks on Windows type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 22:50:12 2017 From: report at bugs.python.org (James Triveri) Date: Wed, 22 Mar 2017 02:50:12 +0000 Subject: [New-bugs-announce] [issue29872] My reply Message-ID: New submission from James Triveri: reply from james.triveri at gmail.com ---------- messages: 289962 nosy: jtrive84 priority: normal severity: normal status: open title: My reply _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 21 23:39:35 2017 From: report at bugs.python.org (Alex CHEN) Date: Wed, 22 Mar 2017 03:39:35 +0000 Subject: [New-bugs-announce] [issue29873] Need a look for return value checking [_elementtree.c] Message-ID: <1490153975.41.0.412355658191.issue29873@psf.upfronthosting.co.za> New submission from Alex CHEN: In file _elementtree.c our static code scanner has reported
this case, but I'm not sure whether there could be any problem; could you have a look? static PyObject* element_getattr(ElementObject* self, char* name) { PyObject* res; /* handle common attributes first */ if (strcmp(name, "tag") == 0) { res = self->tag; Py_INCREF(res); return res; } else if (strcmp(name, "text") == 0) { res = element_get_text(self); // is it possible that element_get_text could return NULL here? Py_INCREF(res); return res; } ---------- components: XML messages: 289965 nosy: alexc priority: normal severity: normal status: open title: Need a look for return value checking [_elementtree.c] type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 00:14:16 2017 From: report at bugs.python.org (Alex CHEN) Date: Wed, 22 Mar 2017 04:14:16 +0000 Subject: [New-bugs-announce] [issue29874] Need a look for return value checking [selectmodule.c] Message-ID: <1490156056.24.0.758776369262.issue29874@psf.upfronthosting.co.za> New submission from Alex CHEN: In file selectmodule.c our static code scanner has reported the following case: function set2list is liable to return NULL (if PyTuple_New failed). Is there any chance the NULL pointer would be dereferenced (Py_DECREF(fdlist) after set2list), or would it just raise a Python exception to handle the PyTuple_New error? static PyObject * select_select(PyObject *self, PyObject *args) { ...... if (n < 0) { PyErr_SetFromErrno(SelectError); } #endif else { /* any of these three calls can raise an exception. it's more convenient to test for this after all three calls... but is that acceptable?
*/ ifdlist = set2list(&ifdset, rfd2obj); // || <===== ofdlist = set2list(&ofdset, wfd2obj); // || efdlist = set2list(&efdset, efd2obj); // || if (PyErr_Occurred()) ret = NULL; else ret = PyTuple_Pack(3, ifdlist, ofdlist, efdlist); Py_DECREF(ifdlist); Py_DECREF(ofdlist); Py_DECREF(efdlist); ---------- messages: 289967 nosy: alexc priority: normal severity: normal status: open title: Need a look for return value checking [selectmodule.c] _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 01:09:16 2017 From: report at bugs.python.org (Igor) Date: Wed, 22 Mar 2017 05:09:16 +0000 Subject: [New-bugs-announce] [issue29875] IDLE quit unexpectedly Message-ID: <1490159356.31.0.65115458873.issue29875@psf.upfronthosting.co.za> New submission from Igor: Hi! I'm a newbie, both in this community and with Python. I downloaded Python today (March 22, 2017, version 3.6.1) and as I was following a tutorial on how to build my first program (the classical Hello World) with Python, the IDLE window closed and my Mac showed the following message: "IDLE quit unexpectedly". It happens every time I type the ' (single quotation mark). I've tried 10 times so far and it keeps happening. Thank you!
---------- assignee: terry.reedy components: IDLE messages: 289970 nosy: igorafm, terry.reedy priority: normal severity: normal status: open title: IDLE quit unexpectedly versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 01:52:24 2017 From: report at bugs.python.org (Alex CHEN) Date: Wed, 22 Mar 2017 05:52:24 +0000 Subject: [New-bugs-announce] [issue29876] Check for null return value [_elementtree.c : subelement] Message-ID: <1490161944.61.0.295678924414.issue29876@psf.upfronthosting.co.za> New submission from Alex CHEN: In file _elementtree.c our static code scanner has reported this case; I think this is a bit similar to http://bugs.python.org/issue29874 (returns NULL when NoMemory) static PyObject* subelement(PyObject* self, PyObject* args, PyObject* kw) { PyObject* elem; ElementObject* parent; PyObject* tag; PyObject* attrib = NULL; if (!PyArg_ParseTuple(args, "O!O|O!:SubElement", &Element_Type, &parent, &tag, &PyDict_Type, &attrib)) return NULL; if (attrib || kw) { attrib = (attrib) ? PyDict_Copy(attrib) : PyDict_New(); if (!attrib) return NULL; if (kw) PyDict_Update(attrib, kw); } else { Py_INCREF(Py_None); attrib = Py_None; } elem = element_new(tag, attrib); // <== element_new could return a NULL pointer; the following Py_DECREF(elem) would dereference a NULL pointer.
Py_DECREF(attrib); if (element_add_subelement(parent, elem) < 0) { Py_DECREF(elem); return NULL; } ---------- components: XML messages: 289972 nosy: alexc priority: normal severity: normal status: open title: Check for null return value [_elementtree.c : subelement] type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 01:57:16 2017 From: report at bugs.python.org (Dustin Spicuzza) Date: Wed, 22 Mar 2017 05:57:16 +0000 Subject: [New-bugs-announce] [issue29877] compileall fails with urandom error even if number of workers is 1 Message-ID: <1490162236.47.0.103711647228.issue29877@psf.upfronthosting.co.za> New submission from Dustin Spicuzza: Found on Python 3.6 on a low-resource platform (NI RoboRIO), it seems that this occurs only because the ProcessPoolExecutor is being imported. A proposed fix would only import ProcessPoolExecutor if -j > 1. Stacktrace follows: /usr/local/bin/python3 -m compileall -j 1 /home/lvuser/py ^CTraceback (most recent call last): File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/compileall.py", line 20, in from concurrent.futures import ProcessPoolExecutor File "/usr/local/lib/python3.6/concurrent/futures/__init__.py", line 17, in from concurrent.futures.process import ProcessPoolExecutor File "/usr/local/lib/python3.6/concurrent/futures/process.py", line 53, in import multiprocessing File "/usr/local/lib/python3.6/multiprocessing/__init__.py", line 16, in from . import context File "/usr/local/lib/python3.6/multiprocessing/context.py", line 5, in from . 
import process File "/usr/local/lib/python3.6/multiprocessing/process.py", line 311, in _current_process = _MainProcess() File "/usr/local/lib/python3.6/multiprocessing/process.py", line 298, in __init__ self._config = {'authkey': AuthenticationString(os.urandom(32)), ---------- components: Library (Lib) messages: 289973 nosy: virtuald priority: normal severity: normal status: open title: compileall fails with urandom error even if number of workers is 1 versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 02:15:11 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 22 Mar 2017 06:15:11 +0000 Subject: [New-bugs-announce] [issue29878] Add global instances of int 0 and 1 Message-ID: <1490163311.47.0.711325721357.issue29878@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: When C code needs to compare Python object with int 0 or add int 1 it either use local reference to PyLong_FromLong(0) and PyLong_FromLong(1) which should be decrefed just after use or module level global variable initialized and cleared during initializing and finalizing the module. Proposed patch adds global variables _PyLong_Zero and _PyLong_One for references to integer objects 0 and 1. This simplifies the code since no need to initialize local variables, check for error the result of PyLong_FromLong() and decref it after use. The patch decreases the total code size by 244 lines. That variables are only for internal use. User code should use PyLong_FromLong(0) and PyLong_FromLong(1). 
---------- components: Interpreter Core files: long-constants.diff keywords: patch messages: 289974 nosy: haypo, mark.dickinson, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add global instances of int 0 and 1 type: enhancement versions: Python 3.7 Added file: http://bugs.python.org/file46751/long-constants.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 10:11:25 2017 From: report at bugs.python.org (=?utf-8?q?Charles_Bouchard-L=C3=A9gar=C3=A9?=) Date: Wed, 22 Mar 2017 14:11:25 +0000 Subject: [New-bugs-announce] [issue29879] typing.Text not available in python 3.5.1 Message-ID: <1490191885.24.0.466307693119.issue29879@psf.upfronthosting.co.za> New submission from Charles Bouchard-Légaré: The real issue here is that this is not documented in Doc/library/typing.rst. ---------- assignee: docs at python components: Documentation messages: 289985 nosy: Charles Bouchard-Légaré, docs at python priority: normal severity: normal status: open title: typing.Text not available in python 3.5.1 versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 11:34:27 2017 From: report at bugs.python.org (pz) Date: Wed, 22 Mar 2017 15:34:27 +0000 Subject: [New-bugs-announce] [issue29880] python3.6 install readline, and then cpython exit Message-ID: <1490196867.91.0.906821746268.issue29880@psf.upfronthosting.co.za> New submission from pz:

python3.6 -m pip install readline

# python3.6
Python 3.6.0 (default, Mar 21 2017, 23:23:51)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
*** glibc detected *** python3.6: free(): invalid pointer: 0x00007f63bbb5b570 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x75f3e)[0x7f63babdaf3e]
/lib64/libc.so.6(+0x78d8d)[0x7f63babddd8d]
python3.6(PyOS_Readline+0x134)[0x44a433]
............

and after I uninstall readline, then enter cpython, the error does not occur:

# python3.6
Python 3.6.0 (default, Mar 21 2017, 23:23:51)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>>

---------- components: 2to3 (2.x to 3.x conversion tool) messages: 289991 nosy: pz priority: normal severity: normal status: open title: python3.6 install readline, and then cpython exit versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 12:43:29 2017 From: report at bugs.python.org (STINNER Victor) Date: Wed, 22 Mar 2017 16:43:29 +0000 Subject: [New-bugs-announce] [issue29881] Add a new private API for "static C variables" (_PyStaticVar) to clear them at exit Message-ID: <1490201009.64.0.544065179107.issue29881@psf.upfronthosting.co.za> New submission from STINNER Victor: When I read Serhiy Storchaka's idea in issue #29878, it reminded me of an old idea of writing a generalization of the _Py_IDENTIFIER() API. _Py_IDENTIFIER is an API to initialize a static string variable once. The _Py_Identifier structure has a "next" field to create a singly-linked list. It allows clearing all variables at exit in _PyUnicode_ClearStaticStrings(). I propose a similar API, but for any PyObject* object, to be able to clear all static variables at exit. It should help to release all memory in Py_Finalize() and have a safer Python finalization. See the attached pull request for the API itself. "Static variables" in C are variables with a limited scope: a single C file or a single function. It seems like the API can remove some lines of code.
Example of patch:

@@ -1452,14 +1450,14 @@ compiler_mod(struct compiler *c, mod_ty mod)
 {
     PyCodeObject *co;
     int addNone = 1;
-    static PyObject *module;
-    if (!module) {
-        module = PyUnicode_InternFromString("<module>");
-        if (!module)
-            return NULL;
+    _Py_STATICVAR(module);
+
+    if (_PY_STATICVAR_INIT(&module, PyUnicode_InternFromString("<module>"))) {
+        return 0;
     }
+
     /* Use 0 for firstlineno initially, will fixup in assemble(). */
-    if (!compiler_enter_scope(c, module, COMPILER_SCOPE_MODULE, mod, 0))
+    if (!compiler_enter_scope(c, module.obj, COMPILER_SCOPE_MODULE, mod, 0))
         return NULL;
     switch (mod->kind) {
     case Module_kind:

--

Drawbacks of the API:

* It adds one pointer per static variable, so it increases the memory footprint by 8 bytes per variable
* It requires writing "var.obj" instead of just "var" to access the Python object

The API doesn't try to remove duplicated objects. I consider that it's not an issue since functions like PyLong_FromLong() and PyUnicode_InternFromString("c string") already do it for us. Some functions create mutable variables, like PyImport_Import() which creates an empty list.

--

Note: Eric Snow proposed a solution for "solving multi-core Python":

* https://mail.python.org/pipermail/python-ideas/2015-June/034177.html
* http://ericsnowcurrently.blogspot.fr/2016/09/solving-mutli-core-python.html

I'm not sure if this API would help or not to implement such an idea, but Eric's project is experimental and wasn't taken into account when designing the API.
---------- components: Interpreter Core messages: 289999 nosy: haypo priority: normal severity: normal status: open title: Add a new private API for "static C variables" (_PyStaticVar) to clear them at exit type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 13:30:42 2017 From: report at bugs.python.org (Niklas Fiekas) Date: Wed, 22 Mar 2017 17:30:42 +0000 Subject: [New-bugs-announce] [issue29882] Add an efficient popcount method for integers Message-ID: <1490203842.58.0.276315183014.issue29882@psf.upfronthosting.co.za> New submission from Niklas Fiekas: An efficient popcount (something equivalent to bin(a).count("1")) would be useful for numerics, parsing binary formats, scientific applications and others.

DESIGN DECISIONS

* Is a popcount method useful enough?
* How to handle negative values?
* How should the method be named?

SURVEY

gmpy calls the operation popcount and returns -1/None for negative values:

>>> import gmpy2
>>> gmpy2.popcount(-10)
-1
>>> import gmpy
>>> gmpy.popcount(-10)

From the documentation [1]:

> If x < 0, the number of bits with value 1 is infinite
> so -1 is returned in that case.

(I am not a fan of the arbitrary return value).

The bitarray module has a count(value=True) method:

>>> from bitarray import bitarray
>>> bitarray(bin(123456789).strip("0b")).count()
16

Bitsets [2] exposes __len__.

There is an SSE4 POPCNT instruction. C compilers call the corresponding intrinsic popcnt or popcount (with some prefix and suffix) and they accept unsigned arguments.

Rust calls the operation count_ones [3]. Ones are counted in the binary representation of the *absolute* value. (I propose to do the same).
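For reference, a pure-Python fallback with the proposed semantics (count the ones in the binary representation of the absolute value) is tiny; the name bit_count here is just one of the naming candidates discussed below, not settled API:

```python
def bit_count(x):
    """Pure-Python popcount: ones in the binary representation of abs(x)."""
    return bin(abs(x)).count("1")

print(bit_count(123456789))  # 16
print(bit_count(-10))        # 2  (abs(-10) == 0b1010)
```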
Introducing popcount was previously considered here but closed for lack of a PEP or patch: http://bugs.python.org/issue722647

Sensible names could be bit_count, along the lines of the existing bit_length, or popcount for gmpy compatibility and to distinguish it more.

PERFORMANCE

$ ./python -m timeit "bin(123456789).count('1')" # equivalent
1000000 loops, best of 5: 286 nsec per loop
$ ./python -m timeit "(123456789).bit_count()" # fallback
5000000 loops, best of 5: 46.3 nsec per loop

[1] https://gmpy2.readthedocs.io/en/latest/mpz.html#mpz-functions
[2] https://pypi.python.org/pypi/bitsets/0.7.9
[3] https://doc.rust-lang.org/std/primitive.i64.html#method.count_ones

---------- components: Interpreter Core messages: 290003 nosy: mark.dickinson, niklasf priority: normal severity: normal status: open title: Add an efficient popcount method for integers type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 22 15:28:01 2017 From: report at bugs.python.org (Adam Meily) Date: Wed, 22 Mar 2017 19:28:01 +0000 Subject: [New-bugs-announce] [issue29883] asyncio: Windows Proactor Event Loop UDP Support Message-ID: <1490210881.18.0.553215827239.issue29883@psf.upfronthosting.co.za> New submission from Adam Meily: I am working on a Python 3.5 project that uses asyncio on a Windows system to poll both UDP and TCP connections. Multiple sources online say that the Windows Proactor event loop, which uses I/O Completion Ports, is considerably faster and more efficient than the default Selector event loop. I'm using both UDP and TCP connections, so I am stuck with the Selector event loop for the time being. I've seen the overhead of 128 open UDP/TCP connections on the Selector event loop to be near 85%, which I understand is entirely spent in Windows proprietary code and not the Python implementation. I'd like to take a shot at implementing UDP support in the IOCP event loop.
It looks like there will be a considerable amount of code shared between TCP and UDP IOCP, so I plan on implementing UDP support directly in _ProactorReadPipeTransport and _ProactorBaseWritePipeTransport. I should be able to do this by wrapping any TCP/UDP specific function calls in a check of:

if sock.type == socket.SOCK_DGRAM:
    # Do UDP stuff
elif sock.type == socket.SOCK_STREAM:
    # Do TCP stuff

My initial implementation plan is to:

- Call datagram_received() instead of data_received() when UDP data is available in _ProactorReadPipeTransport._loop_reading().
- Implement BaseProactorEventLoop._make_datagram_transport().
- Implement wrappers for WSAConnect, WSARecvFrom, and WSASendTo in _overlapped.
- Implement sendto() and recvfrom() in IocpProactor, which will use the new functions in _overlapped.
- Implement handling for UDP "connections" in IocpProactor.connect() to call WSAConnect(). WSAConnect() appears to always return immediately, so the function not supporting IOCP shouldn't be an issue. We can't use ConnectEx() for UDP because ConnectEx() is for connection-oriented sockets only.

My project is unfortunately tied to Python 3.5. So, if possible, I'd like to have UDP support merged into a v3.5 release. I can fork off of master instead of v3.5.3 if Python 3.5 support isn't an option.
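The dispatch described above can be shown standalone (a sketch only, with the transport machinery stubbed out; datagram_received/data_received are the asyncio protocol callback names, and dispatch_method is a hypothetical helper):

```python
import socket

def dispatch_method(sock_type):
    """Pick the protocol callback based on the socket type, as proposed
    for the shared proactor transport code (sketch, not the real transport)."""
    if sock_type == socket.SOCK_DGRAM:
        return "datagram_received"  # UDP path
    elif sock_type == socket.SOCK_STREAM:
        return "data_received"      # TCP path
    raise ValueError("unsupported socket type: %r" % (sock_type,))

print(dispatch_method(socket.SOCK_DGRAM))   # datagram_received
print(dispatch_method(socket.SOCK_STREAM))  # data_received
```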
---------- components: asyncio messages: 290010 nosy: Adam Meily, yselivanov priority: normal severity: normal status: open title: asyncio: Windows Proactor Event Loop UDP Support type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 01:43:49 2017 From: report at bugs.python.org (Christophe Zeitouny) Date: Thu, 23 Mar 2017 05:43:49 +0000 Subject: [New-bugs-announce] [issue29884] faulthandler does not properly restore sigaltstack during teardown Message-ID: <1490247829.54.0.456330369246.issue29884@psf.upfronthosting.co.za> New submission from Christophe Zeitouny: Looks like faulthandler is not properly tearing down its sigaltstack, causing potential double-free issues in any application that embeds the Python interpreter. I stumbled upon this when I enabled AddressSanitizer on my application, which sets up and tears down a Python interpreter instance at runtime. AddressSanitizer complained about a double-free of the stack_t::ss_sp memory region. After close inspection, here's what's happening: 1. When a new thread is created, AddressSanitizer sets up its own alternative stack by calling sigaltstack 2. Py_Initialize() is called, which initializes faulthandler, which sets up its own alternative stack, therefore overriding the one installed by AddressSanitizer 3. Py_Finalize() is called, which deinitializes faulthandler, which merely deletes the allocated stack region, but leaves the alternative stack installed. Any signal that occurs after this point will be using a memory region it doesn't own as stack. dangerous stuff. 4. The thread exits, at which point AddressSanitizer queries sigaltstack for the current alternative stack, blindly assumes that it's the same one that it installed, and attempts to free the allocated stack region. 
This causes a double-free issue. Regardless of the fact that AddressSanitizer should probably not blindly trust that the currently installed sigaltstack is the same one it installed earlier, the current code in faulthandler leaves the sigaltstack in a very bad state after finalizing. This means that the application that embeds the Python interpreter had better hope that no signals are raised after it calls Py_Finalize(). I have a patch that fixes this issue. faulthandler will save the previously installed alternative stack at initialization time. During deinitialization, it will query sigaltstack for the current stack. If it's the same as the one it installed, it will restore the saved previous stack. 'sigaltstack' just sounds like a badly designed API. There is essentially no way to use it 'correctly'. Here's how AddressSanitizer uses it (line 164): http://llvm.org/viewvc/llvm-project/compiler-rt/trunk/lib/sanitizer_common/sanitizer_posix_libcdep.cc?view=markup and here's how the Chrome browser uses it: https://chromium.googlesource.com/breakpad/breakpad/+/chrome_43/src/client/linux/handler/exception_handler.cc#149 Notice that my approach is closer to what Chrome does, but in the case where the installed stack is no longer ours, I don't disable whatever stack is installed. This is because I don't believe that will make much difference. Whoever switched out the stack could have saved our stack somewhere and planned on blindly restoring it upon exit. In that case, whatever we do would be overridden. Attached are a tiny reproducer for the issue and the complete analysis of what's reported by AddressSanitizer. I'll follow this up with a pull request for my changes. Thanks!
Chris ---------- components: Extension Modules files: python_failure.txt messages: 290025 nosy: haypo, tich priority: normal severity: normal status: open title: faulthandler does not properly restore sigaltstack during teardown type: enhancement Added file: http://bugs.python.org/file46754/python_failure.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 02:35:08 2017 From: report at bugs.python.org (Decorater) Date: Thu, 23 Mar 2017 06:35:08 +0000 Subject: [New-bugs-announce] [issue29885] Allow GMT timezones to be used in datetime. Message-ID: <1490250908.5.0.958278371805.issue29885@psf.upfronthosting.co.za> New submission from Decorater: I noticed that there is no way to convert local times to GMT when I realize that some other object (sometimes from a server) is using GMT and happens to be ahead of my current time. As such, it would be great if one could convert their current time, held in a datetime object, to GMT to see how much time has passed since something happened in their zone, if so desired (otherwise this can cause undesired or undefined consequences). ---------- components: Library (Lib) messages: 290027 nosy: Decorater priority: normal severity: normal status: open title: Allow GMT timezones to be used in datetime.
versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 11:14:44 2017 From: report at bugs.python.org (chrysn) Date: Thu, 23 Mar 2017 15:14:44 +0000 Subject: [New-bugs-announce] [issue29886] links between binascii.{un, }hexlify / bytes.{, to}hex Message-ID: <1490282084.27.0.505150099985.issue29886@psf.upfronthosting.co.za> New submission from chrysn: The functions binascii.{un,}hexlify and bytes.{,to}hex do almost the same thing (plus/minus details of whether they accept whitespace, and whether the hex-encoded data is accepted/returned as strings or bytes). I think that it would help users to point that out in the documentation, e.g. by adding a "Similar functionality is provided by the ... function." line at the end of each function's documentation. ---------- assignee: docs at python components: Documentation messages: 290050 nosy: chrysn, docs at python priority: normal severity: normal status: open title: links between binascii.{un,}hexlify / bytes.{,to}hex type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 12:57:10 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 23 Mar 2017 16:57:10 +0000 Subject: [New-bugs-announce] [issue29887] test_normalization doesn't work Message-ID: <1490288230.58.0.523545383484.issue29887@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: It needs to fetch http://www.pythontest.net/unicode/9.0.0/NormalizationTest.txt (8.0.0 in 3.5) but gets a 404 error.
---------- components: Tests messages: 290053 nosy: serhiy.storchaka priority: normal severity: normal status: open title: test_normalization doesn't work type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 14:04:23 2017 From: report at bugs.python.org (Kinebuchi Tomohiko) Date: Thu, 23 Mar 2017 18:04:23 +0000 Subject: [New-bugs-announce] [issue29888] The link referring to "Python download page" is broken Message-ID: <1490292263.44.0.884300917971.issue29888@psf.upfronthosting.co.za> New submission from Kinebuchi Tomohiko: The download page [1]_ contains a link intended to refer to the release page of the corresponding Python version [2]_. .. [1] `Download Python 2.7.13 Documentation `_ .. [2] e.g. `Python 2.7.8 Release `_ Although, this link is broken for three reasons. 1. Wrong template syntax `Present code `_::

{% trans download_page="https://www.python.org/download/releases/{{ release[:5] }}/" %}HTML Help (.chm) files are made available in the "Windows" section on the Python download page.{% endtrans %}

Fixed code::

{% trans download_page="https://www.python.org/download/releases/" + release[:5] + "/" %}HTML Help (.chm) files are made available in the "Windows" section on the Python download page.{% endtrans %}

2. Unexpected version number

The URL contains a Python version string (i.e. ``release[:5]``), but for Python 2.7.13, ``release[:5]`` evaluates to ``'2.7.1'``, which is obviously wrong as a version string.

3. Non-existent release pages for some versions

www.python.org has pages whose URLs are https://www.python.org/download/releases/<version>/ with <version> = 2.7.1--8, although it has no pages with <version> = 2.7.9 and so on. Is https://www.python.org/downloads/release/python-2713/ an appropriate page to refer to?

---------- assignee: docs at python components: Documentation messages: 290057 nosy: cocoatomo, docs at python priority: normal severity: normal status: open title: The link referring to "Python download page" is broken versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 17:30:44 2017 From: report at bugs.python.org (Thomas Knox) Date: Thu, 23 Mar 2017 21:30:44 +0000 Subject: [New-bugs-announce] [issue29889] test_asyncio fails always Message-ID: <1490304644.24.0.581268594683.issue29889@psf.upfronthosting.co.za> New submission from Thomas Knox: Downloaded Python 3.6.1 source code onto CentOS 7.3 (64 bit), Fedora 25 (64 bit), Ubuntu 16.10 (64 bit) and Raspberry Pi 8.0 (32 bit).
Configured with:

./configure --enable-shared --enable-optimizations --enable-loadable-sqlite-extensions --disable-ipv6 --with-system-expat --with-system-ffi --with-threads

On every platform, when running the profile generation code, test_asyncio fails with this error message:

0:06:45 [ 25/405] test_asyncio
Executing .start() done, defined at /home/pi/Source/Python-3.6.1/Lib/test/test_asyncio/test_pep492.py:150> result=None created at /home/pi/Source/Python-3.6.1/Lib/asyncio/base_events.py:446> took 2.106 seconds
0:13:11 [ 26/405] test_asyncore -- test_asyncio failed in 6 min 27 sec

---------- components: Tests messages: 290059 nosy: Thomas Knox priority: normal severity: normal status: open title: test_asyncio fails always type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 17:59:48 2017 From: report at bugs.python.org (Ilya Kulakov) Date: Thu, 23 Mar 2017 21:59:48 +0000 Subject: [New-bugs-announce] [issue29890] Constructor of ipaddress.IPv*Interface does not follow documentation Message-ID: <1490306388.92.0.69250411121.issue29890@psf.upfronthosting.co.za> New submission from Ilya Kulakov: As per the documentation, it should understand the same arguments as IPv*Network. Unfortunately, it does not recognize a netmask in string form.
Hence the following code will fail:

ipaddress.ip_interface(('192.168.1.10', '255.255.255.0'))

while the following will work:

ipaddress.ip_network(('192.168.1.10', '255.255.255.0'), strict=False)

---------- messages: 290062 nosy: Ilya.Kulakov priority: normal severity: normal status: open title: Constructor of ipaddress.IPv*Interface does not follow documentation type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 18:03:56 2017 From: report at bugs.python.org (Ezio Melotti) Date: Thu, 23 Mar 2017 22:03:56 +0000 Subject: [New-bugs-announce] [issue29891] urllib.request.Request accepts but doesn't check bytes headers Message-ID: <1490306636.78.0.894703905259.issue29891@psf.upfronthosting.co.za> New submission from Ezio Melotti: urllib.request.Request allows the user to create a request object like:

req = Request(url, headers={b'Content-Type': b'application/json'})

When calling urlopen(req, data), urllib will check if a 'Content-Type' header is present and fail to recognize b'Content-Type' because it's bytes. urllib will therefore add the default Content-Type 'application/x-www-form-urlencoded', and the request will then be sent with both Content-Types. This will result in difficult-to-debug errors because the server will sometimes pick one and sometimes the other, depending on the order. urllib should either reject bytes headers, or check for both bytes and strings. The docs also don't seem to specify that the headers should be strings.
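The core of the mix-up can be shown without any network traffic: a bytes key is simply not equal to the str key that urllib looks for in the header dict:

```python
# The header dict from the report, with bytes keys:
headers = {b'Content-Type': b'application/json'}

# urllib's header check effectively does a str-key lookup, so the
# bytes key goes unnoticed and a second, default Content-Type
# header ends up being added alongside it.
print('Content-Type' in headers)   # False
print(b'Content-Type' in headers)  # True
```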
---------- components: Library (Lib) messages: 290063 nosy: ezio.melotti, orsenthil priority: normal severity: normal stage: test needed status: open title: urllib.request.Request accepts but doesn't check bytes headers type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 23 19:46:18 2017 From: report at bugs.python.org (OSAMU NAKAMURA) Date: Thu, 23 Mar 2017 23:46:18 +0000 Subject: [New-bugs-announce] [issue29892] change statement for open() is splited into two part in middle of sentence. Message-ID: <1490312778.71.0.779732227064.issue29892@psf.upfronthosting.co.za> New submission from OSAMU NAKAMURA: In https://docs.python.org/3.6/library/functions.html#open , Following sentence is wrongly separated by extra asterisk. ``` FileExistsError is now raised if the file opened in exclusive creation mode ('x') already exists. ``` This mistake is introduced by https://github.com/python/cpython/commit/3929499914d47365ae744df312e16da8955c90ac#diff-30d76a3dc0c885f86917b7d307ccf279 ---------- assignee: docs at python components: Documentation messages: 290070 nosy: OSAMU.NAKAMURA, docs at python priority: normal severity: normal status: open title: change statement for open() is splited into two part in middle of sentence. versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 01:20:17 2017 From: report at bugs.python.org (Torrin Jones) Date: Fri, 24 Mar 2017 05:20:17 +0000 Subject: [New-bugs-announce] [issue29893] create_subprocess_exec doc doesn't match software Message-ID: <1490332817.83.0.92510042207.issue29893@psf.upfronthosting.co.za> New submission from Torrin Jones: The documentation for asyncio.create_subprocess_exec says this is the definition . . . 
asyncio.create_subprocess_exec(*args, stdin=None, stdout=None, stderr=None, loop=None, limit=None, **kwds) The actual definition is this . . . def create_subprocess_exec(program, *args, stdin=None, stdout=None, stderr=None, loop=None, limit=streams._DEFAULT_LIMIT, **kwds) Notice the first argument (program) at the start of the actual definition. ---------- components: asyncio messages: 290077 nosy: Torrin Jones, yselivanov priority: normal severity: normal status: open title: create_subprocess_exec doc doesn't match software versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 04:27:24 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 24 Mar 2017 08:27:24 +0000 Subject: [New-bugs-announce] [issue29894] Deprecate returning a subclass of complex from __complex__ Message-ID: <1490344044.5.0.0943427930901.issue29894@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: This is similar to issue26983, but complex() always returned exact complex. A deprecation warning is added just for uniformity with __float__ and __int__. 
---------- components: Interpreter Core messages: 290080 nosy: mark.dickinson, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Deprecate returning a subclass of complex from __complex__ type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 10:24:33 2017 From: report at bugs.python.org (Tommy Carpenter) Date: Fri, 24 Mar 2017 14:24:33 +0000 Subject: [New-bugs-announce] [issue29895] Distutils blows up with an incorrect pypirc, should be caught Message-ID: <1490365473.69.0.662081207985.issue29895@psf.upfronthosting.co.za> New submission from Tommy Carpenter: Full details and stacktrace are at: http://stackoverflow.com/questions/43001446/python-pypi-configparser-blowing-up-when-pointing-to-certain-repo/43001770#43001770 Essentially, I had an index-servers section that listed a repo, that was not listed in the remainder of the .pypirc file. Instead of distutils catching this, it blows up in an obscure ConfigParsing error. 
---------- components: Distutils messages: 290090 nosy: Tommy Carpenter, dstufft, merwok priority: normal severity: normal status: open title: Distutils blows up with an incorrect pypirc, should be caught type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 10:34:38 2017 From: report at bugs.python.org (Vasiliy Faronov) Date: Fri, 24 Mar 2017 14:34:38 +0000 Subject: [New-bugs-announce] [issue29896] ElementTree.fromstring raises undocumented UnicodeError Message-ID: <1490366078.02.0.633799101454.issue29896@psf.upfronthosting.co.za> New submission from Vasiliy Faronov:

>>> from xml.etree import ElementTree as ET
>>> ET.fromstring(b'<\xC4/>')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/xml/etree/ElementTree.py", line 1314, in XML
    parser.feed(text)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc4 in position 0: invalid continuation byte

The documentation for xml.etree.ElementTree does not mention that it can raise UnicodeError, only ParseError. I think that either the above error should be wrapped in a ParseError, or the documentation should be amended. This happens at least on 3.6, 3.5 and 2.7.
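Until the behavior is settled, a defensive caller can catch both exception types; UnicodeDecodeError is a subclass of ValueError, so the except clause below covers it (this wrapper is illustrative, not a proposed API):

```python
from xml.etree import ElementTree as ET

def fromstring_or_none(data):
    """Parse XML, treating both ParseError and decode errors as parse failures."""
    try:
        return ET.fromstring(data)
    except (ET.ParseError, ValueError):
        return None

print(fromstring_or_none(b'<\xC4/>'))      # None (undecodable/not-well-formed bytes)
print(fromstring_or_none(b'<root/>').tag)  # root
```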
---------- messages: 290091 nosy: vfaronov priority: normal severity: normal status: open title: ElementTree.fromstring raises undocumented UnicodeError type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 15:14:26 2017 From: report at bugs.python.org (Michael Seifert) Date: Fri, 24 Mar 2017 19:14:26 +0000 Subject: [New-bugs-announce] [issue29897] itertools.chain behaves strangly when copied with copy.copy Message-ID: <1490382866.15.0.806348806156.issue29897@psf.upfronthosting.co.za> New submission from Michael Seifert: When using `copy.copy` to copy an `itertools.chain` instance, the results can be weird. For example:

>>> from itertools import chain
>>> from copy import copy
>>> a = chain([1,2,3], [4,5,6])
>>> b = copy(a)
>>> next(a)  # looks okay
1
>>> next(b)  # jumps to the second iterable, not okay?
4
>>> tuple(a)
(2, 3)
>>> tuple(b)
(5, 6)

I don't really want to "copy.copy" such an iterator (I would either use `a, b = itertools.tee(a, 2)` or `b = a` depending on the use-case). This just came up because I investigated how Python's iterators behave when copied, deepcopied or pickled, because I want to make the iterators in my extension module behave similarly.
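As the report hints, itertools.tee is the supported way to get two independent iterators from a chain; a quick sketch of that workaround:

```python
from itertools import chain, tee

a = chain([1, 2, 3], [4, 5, 6])
a, b = tee(a)    # two independent iterators, unlike copy.copy(a)

print(next(a))   # 1
print(next(b))   # 1  -- b starts from the same position as a
print(tuple(a))  # (2, 3, 4, 5, 6)
print(tuple(b))  # (2, 3, 4, 5, 6)
```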
---------- components: Library (Lib) messages: 290106 nosy: MSeifert priority: normal severity: normal status: open title: itertools.chain behaves strangely when copied with copy.copy type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 24 18:34:03 2017 From: report at bugs.python.org (Eryk Sun) Date: Fri, 24 Mar 2017 22:34:03 +0000 Subject: [New-bugs-announce] [issue29898] PYTHONLEGACYWINDOWSIOENCODING isn't implemented Message-ID: <1490394843.09.0.467635996133.issue29898@psf.upfronthosting.co.za> New submission from Eryk Sun: The environment variable PYTHONLEGACYWINDOWSIOENCODING is documented here: https://docs.python.org/3/using/cmdline.html#envvar-PYTHONLEGACYWINDOWSIOENCODING but not actually implemented. Also, I think setting PYTHONIOENCODING to anything except UTF-8 should disable using io._WindowsConsoleIO. ---------- components: IO, Unicode, Windows messages: 290240 nosy: eryksun, ezio.melotti, haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: test needed status: open title: PYTHONLEGACYWINDOWSIOENCODING isn't implemented type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 01:19:41 2017 From: report at bugs.python.org (=?utf-8?b?a3lyZW4g5Y6f5a2Q5Za1?=) Date: Sat, 25 Mar 2017 05:19:41 +0000 Subject: [New-bugs-announce] [issue29899] zlib missing when --enable--optimizations option appended Message-ID: <1490419181.2.0.645690574744.issue29899@psf.upfronthosting.co.za> New submission from kyren 原子喵: I think it happens to all versions that recognize the optimizations option. At least I confirmed it with Python versions 3.4.6, 3.5.3 and 3.6.1: when `--enable-optimizations` is appended, `zlib` cannot be imported, `No module named zlib`.
I'm working on Ubuntu 14.04 by the way. I don't know if it's system specific. ---------- messages: 290465 nosy: kyren 原子喵 priority: normal severity: normal status: open title: zlib missing when --enable--optimizations option appended type: behavior versions: Python 3.3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 04:25:52 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 25 Mar 2017 08:25:52 +0000 Subject: [New-bugs-announce] [issue29900] Remove unneeded wrappers in pathlib Message-ID: <1490430352.62.0.808201029923.issue29900@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Since functions in the os module support path-like objects, the code of the pathlib module can be simplified. The wrappers that explicitly convert Path to str are no longer needed. ---------- components: Library (Lib) messages: 290468 nosy: pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Remove unneeded wrappers in pathlib type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 05:34:25 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 25 Mar 2017 09:34:25 +0000 Subject: [New-bugs-announce] [issue29901] Support path-like objects in zipapp Message-ID: <1490434465.71.0.616268921696.issue29901@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch simplifies support of pathlib.Path in zipapp. As a side effect zipapp now supports other path-like objects, not just pathlib.Path.
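To illustrate the path-like support both of these reports rely on (a minimal sketch, not from the original messages; the file name is made up):

```python
import os
import pathlib
import tempfile

# Since PEP 519, os-level functions accept path-like objects directly,
# which is why explicit str(path) conversion wrappers are unnecessary.
with tempfile.TemporaryDirectory() as tmp:
    p = pathlib.Path(tmp) / "example.txt"      # hypothetical file name
    fd = os.open(p, os.O_WRONLY | os.O_CREAT)  # no str(p) needed
    os.close(fd)
    assert os.path.exists(p)
    assert os.fspath(p) == str(p)
```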
---------- components: Library (Lib) messages: 290470 nosy: paul.moore, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Support path-like objects in zipapp type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 09:16:44 2017 From: report at bugs.python.org (Bruce Frederiksen) Date: Sat, 25 Mar 2017 13:16:44 +0000 Subject: [New-bugs-announce] [issue29902] copy breaks staticmethod Message-ID: <1490447804.79.0.661284691694.issue29902@psf.upfronthosting.co.za> New submission from Bruce Frederiksen: Doing a copy on a staticmethod breaks it:

Python 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from copy import copy
>>> def foo(): pass
...
>>> class bar: pass
...
>>> bar.x = staticmethod(foo)
>>> bar.x.__name__
'foo'
>>> bar.y = copy(staticmethod(foo))
>>> bar.y.__name__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: uninitialized staticmethod object

---------- components: Library (Lib) messages: 290481 nosy: dangyogi priority: normal severity: normal status: open title: copy breaks staticmethod type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 11:44:48 2017 From: report at bugs.python.org (Aviv Palivoda) Date: Sat, 25 Mar 2017 15:44:48 +0000 Subject: [New-bugs-announce] [issue29903] struct.Struct Addition Message-ID: <1490456688.43.0.618194339702.issue29903@psf.upfronthosting.co.za> New submission from Aviv Palivoda: I would like to suggest that the struct.Struct class should support addition.
For example you will be able to do:

>>> s1 = Struct(">L")
>>> s2 = Struct(">B")
>>> s3 = s1 + s2
>>> s3.format
b">LB"

---------- components: Extension Modules messages: 290486 nosy: mark.dickinson, meador.inge, palaviv priority: normal severity: normal status: open title: struct.Struct Addition type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 13:35:41 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sat, 25 Mar 2017 17:35:41 +0000 Subject: [New-bugs-announce] [issue29904] Fix a number of error message typos Message-ID: <1490463341.4.0.570341233776.issue29904@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Specifically, the list I've currently found in .py files:

- _pyio.py:
    ValueError("flush of closed file")
    ValueError("flush of closed file")
  "of" -> "on" for both.

- configparser.py:
    ValueError("Required argument `source' not given.")
    ValueError("Cannot specify both `filename' and `source'. ")
  fix the ` quotes on the argument names.

- windows_utils.py:
    ValueError("I/O operatioon on closed pipe")
  "operatioon" -> "operation"

- proactor_events.py, asynchat.py:
    TypeError('data argument must be byte-ish (%r)',
    raise TypeError('data argument must be byte-ish (%r)',
  AFAIK, "byte-ish" isn't used elsewhere; the author probably meant to go for "bytes-like object".

- _header_value_parser.py:
    errors.HeaderParseError("expected atom at a start of ")
  "at a start of " -> "at the start of "

- http/cookiejar.py:
    raise ValueError("filename must be string-like")
  I think "must be a str" was intended.
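Until such an addition exists, the proposed behaviour can be approximated by concatenating the format strings by hand (a sketch of the workaround, not the proposed API; it assumes Python 3.7+, where Struct.format is a str rather than bytes):

```python
import struct

s1 = struct.Struct(">L")
s2 = struct.Struct(">B")
# Strip the repeated byte-order prefix from the second format before
# concatenating, then build a new Struct from the combined string.
s3 = struct.Struct(s1.format + s2.format.lstrip(">"))

assert s3.size == s1.size + s2.size == 5
assert s3.unpack(s3.pack(1, 2)) == (1, 2)
```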
---------- messages: 290491 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Fix a number of error message typos versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 17:10:20 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sat, 25 Mar 2017 21:10:20 +0000 Subject: [New-bugs-announce] [issue29905] TypeErrors not formatting values correctly Message-ID: <1490476220.4.0.605506915687.issue29905@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Specifically, in both Lib/async/proactor_events.py and asynchat.py there's a comma where a % should be, thereby not formatting the value correctly:

TypeError('data argument must be byte-ish (%r)', type(data))
TypeError('data argument must be byte-ish (%r)', type(data))

The proposed fix is to change them to:

TypeError('data argument must be a bytes-like object, not %r' % type(data).__name__)
TypeError('data argument must be a bytes-like object, not %r' % type(data).__name__)

---------- components: Library (Lib) messages: 290499 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: TypeErrors not formatting values correctly type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 21:18:34 2017 From: report at bugs.python.org (Aron Bordin) Date: Sun, 26 Mar 2017 01:18:34 +0000 Subject: [New-bugs-announce] [issue29906] Add callback parameter to concurrent.futures.Executor.map Message-ID: <1490491114.58.0.0924318232997.issue29906@psf.upfronthosting.co.za> New submission from Aron Bordin: I'm facing some situations where it would be helpful to be able to add a default function callback when calling Executor.map. So, when making calls this way we could get the executor result easily.
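A similar effect is available today by combining submit() with add_done_callback() on each future (a sketch of the existing workaround, not the API being proposed):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def on_done(future):
    # Stand-in for the per-result callback the report asks map() to accept.
    results.append(future.result())

# One submit() + add_done_callback() per item, instead of a single
# callback argument on Executor.map.
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(pow, 2, n) for n in range(5)]
    for fut in futures:
        fut.add_done_callback(on_done)

assert sorted(results) == [1, 2, 4, 8, 16]
```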
I think that we could provide a callback parameter to the map function that adds the callable to each future (similar to add_done_callback). ---------- components: Library (Lib) messages: 290502 nosy: aron.bordin priority: normal severity: normal status: open title: Add callback parameter to concurrent.futures.Executor.map type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 22:26:08 2017 From: report at bugs.python.org (Robert Baker) Date: Sun, 26 Mar 2017 02:26:08 +0000 Subject: [New-bugs-announce] [issue29907] Unicode encoding failure Message-ID: <1490495168.15.0.580990833474.issue29907@psf.upfronthosting.co.za> New submission from Robert Baker: Using Python 2.7 (not IDLE) on Windows 10. I have tried to use a Python 2.7 program to print the name of Czech composer Antonín Dvořák. I remembered to add the "u" before the string, but regardless of whether I encode the caron-r as a literal character (pasted from Windows Character Map) or as \u0159, it gives the error that character 0159 is undefined. This is incorrect; that character has been defined as "lower case r with caron above" for several years now. (The interpreter has no problem with the ANSI characters in the string.)
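A guess at what is actually failing here (an illustrative sketch, not from the report): the \u0159 escape itself is valid, but printing it on a legacy Windows console implicitly encodes the string to a codepage with no ř, and it is that encode step which raises:

```python
# The escape builds the string fine; "undefined character" errors come
# from encoding it to a console codepage such as cp437 (the typical
# legacy US console codepage), which lacks U+0159.
s = u"Anton\u00edn Dvo\u0159\u00e1k"
assert s == u"Antonín Dvořák"

try:
    s.encode("cp437")
    failed = False
except UnicodeEncodeError:
    failed = True
assert failed
```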
---------- messages: 290503 nosy: Robert Baker priority: normal severity: normal status: open title: Unicode encoding failure type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 25 23:53:33 2017 From: report at bugs.python.org (Cameron Mckain) Date: Sun, 26 Mar 2017 03:53:33 +0000 Subject: [New-bugs-announce] [issue29908] Inconsistent crashing with an access violation Message-ID: <1490500413.47.0.654460616526.issue29908@psf.upfronthosting.co.za> New submission from Cameron Mckain: Almost every time I attempt to run my Django server ("python manage.py runserver") from PyCharm, python.exe crashes with the error "Unhandled exception at 0x7647BD9E (ucrtbase.dll) in python.exe: 0xC0000005: Access violation reading location 0x03BF8000." and "Process finished with exit code -1073741819 (0xC0000005)" is printed in the console. These errors only happen about 90% of the time. ---------- messages: 290504 nosy: Cameron Mckain priority: normal severity: normal status: open title: Inconsistent crashing with an access violation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 26 04:59:57 2017 From: report at bugs.python.org (Eric Hopper) Date: Sun, 26 Mar 2017 08:59:57 +0000 Subject: [New-bugs-announce] [issue29909] types.coroutine monkey patches original function Message-ID: <1490518797.37.0.740685001336.issue29909@psf.upfronthosting.co.za> New submission from Eric Hopper: The types.coroutine decorator for Python 3.6 (and I suspect for Python 3.6.1 as well) simply monkey patches the function it's passed and then returns it. This results in behavior that I found somewhat surprising.

import types

def bar():
    yield 5

foo = types.coroutine(bar)
foo is bar

So now both foo and bar are awaitable.
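The behaviour described above can be checked directly (a quick sketch; the assertions reflect CPython's implementation, which flips a code flag on the original function object rather than building a wrapper):

```python
import inspect
import types

def bar():
    yield 5

foo = types.coroutine(bar)

# The decorator returns the very same function object it was given...
assert foo is bar
# ...so calling either name now produces an awaitable generator.
assert inspect.isawaitable(bar())
```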
I wasn't really expecting this, and while it's minor, it also doesn't really seem like the right thing to do. ---------- components: asyncio messages: 290518 nosy: Omnifarious, yselivanov priority: normal severity: normal status: open title: types.coroutine monkey patches original function type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 26 13:25:01 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 26 Mar 2017 17:25:01 +0000 Subject: [New-bugs-announce] [issue29910] Ctrl-D eats a character on IDLE Message-ID: <1490549101.95.0.397312441336.issue29910@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Ctrl-D is bound to commenting out a block in IDLE. But additionally it deletes the first character of the line after the block. The default binding of Ctrl-D in a Text widget is deleting the character under the cursor. IDLE first comments out the selected block, and after that runs the default handler. Proposed patch fixes this issue and presumably other potential conflicts with default bindings. It just adds `return "break"` at the end of most event handlers.
---------- assignee: terry.reedy components: IDLE messages: 290539 nosy: serhiy.storchaka, terry.reedy priority: normal severity: normal stage: patch review status: open title: Ctrl-D eats a character on IDLE type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 26 14:09:34 2017 From: report at bugs.python.org (Christian Ullrich) Date: Sun, 26 Mar 2017 18:09:34 +0000 Subject: [New-bugs-announce] [issue29911] Uninstall command line in Windows registry does not uninstall Message-ID: <1490551774.28.0.407532543156.issue29911@psf.upfronthosting.co.za> New submission from Christian Ullrich: The Windows installation package registers a command line for uninstalling the package. Running this command line does not uninstall the package. The command line ends with "/modify". For uninstallation, it should be "/passive /uninstall". Windows provides for separate command lines for modifying and uninstalling packages to be set in the "Uninstall" subkey: - ModifyPath: Command line for modifying the package - UninstallString: Command line for removing the package By setting both keys, the ARP control panel will display separate buttons for the two operations. Having an uninstallation command line that does not do what it says, and in fact causes modal UI to be presented, also interferes with automated package management. Ceterum censeo: This bug would have been avoided by using MSI as the distribution package format, because "msiexec /qn /x [ProductCode]" would have been correct regardless of what the registry says, and even if the registry does not say anything because the Uninstall key (as well as the uninstaller executable itself) were actually deleted months ago as part of some expired user profile. See bug #25166. 
---------- components: Windows messages: 290544 nosy: Christian.Ullrich, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Uninstall command line in Windows registry does not uninstall type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 26 19:42:37 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sun, 26 Mar 2017 23:42:37 +0000 Subject: [New-bugs-announce] [issue29912] Overlapping tests between list_tests and seq_tests Message-ID: <1490571757.15.0.357120556689.issue29912@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: Seems the CommonTests class defined in list_tests duplicates the testing performed by seq_tests.CommonTests in the following functions: test_index, test_count Additionally, a part of test_imul from list_tests.CommonTests can be moved to seq_tests.CommonTests. (specifically, up until ` self.assertEqual(u, self.type2test([]))`). Am I missing some non-obvious thing here or can I safely remove the two test functions in list_tests.CommonTests and move (while also adding a super call) part of test_imul from list_tests.CommonTests to test_imul in seq_tests.CommonTests? 
Some links: [1a] seq_tests test_index: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/seq_tests.py#L363 [1b] list_tests test_index: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/list_tests.py#L376 [2a] seq_tests test_count: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/seq_tests.py#L344 [2b] list_tests test_count: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/list_tests.py#L357 [3a] seq_tests test_imul: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/seq_tests.py#L300 [3b] list_tests test_imul: https://github.com/python/cpython/blob/1e73dbbc29c96d0739ffef92db36f63aa1aa30da/Lib/test/list_tests.py#L550 ---------- components: Tests messages: 290550 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Overlapping tests between list_tests and seq_tests type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 03:23:35 2017 From: report at bugs.python.org (Sanjay) Date: Mon, 27 Mar 2017 07:23:35 +0000 Subject: [New-bugs-announce] [issue29913] ipadress compare_networks does not work according to documentation Message-ID: <1490599415.99.0.717154213111.issue29913@psf.upfronthosting.co.za> New submission from Sanjay: according to the docs compare_networks only checks the network address but the implementation is also taking the mask length into account. 
It returns '0' only if both the network address and the mask are equal, but this can be done with just an equality check (ip1 == ip2). Example:

>>> ip1 = ipaddress.ip_network("1.1.1.0/24")
>>> ip2 = ipaddress.ip_network("1.1.1.0/25")
>>> ip1.compare_networks(ip2)
-1
>>> ip1 == ip2
False
>>> ip1.network_address
IPv4Address('1.1.1.0')
>>> ip2.network_address
IPv4Address('1.1.1.0')

Shouldn't we ignore the mask length? I have tried it here: https://github.com/s-sanjay/cpython/commit/942073c1ebd29891e047b5e784750c2b6f74494a ---------- components: Library (Lib) messages: 290566 nosy: Sanjay priority: normal severity: normal status: open title: ipadress compare_networks does not work according to documentation type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 04:01:08 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 27 Mar 2017 08:01:08 +0000 Subject: [New-bugs-announce] [issue29914] Incorrect signatures of object.__reduce__() and object.__reduce_ex__() Message-ID: <1490601668.43.0.188654711703.issue29914@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The special method __reduce__() doesn't take arguments, and the special method __reduce_ex__() takes one mandatory argument. But the default implementations in the object class take one optional argument. This looks like an oversight. Proposed patch fixes the signatures of object.__reduce__() and object.__reduce_ex__().
---------- components: Interpreter Core messages: 290567 nosy: alexandre.vassalotti, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Incorrect signatures of object.__reduce__() and object.__reduce_ex__() type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 05:29:04 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Mar 2017 09:29:04 +0000 Subject: [New-bugs-announce] [issue29915] Drop Mac OS X Tiger support in Python 3.7? Message-ID: <1490606944.79.0.982783891513.issue29915@psf.upfronthosting.co.za> New submission from STINNER Victor: Hi, Last September I already proposed the same thing for Python 3.6 (issue #28099), but Ned Deily asked to keep OS X Tiger support. The Tiger buildbot has been failing for many years: http://buildbot.python.org/all/builders/x86%20Tiger%203.x/builds/479/steps/test/logs/stdio It doesn't seem like anyone takes care of this buildbot. Mac OS X Tiger was released in 2004 (13 years ago). The last update was 10.4.11: November 14, 2007 (9 years ago). "Support status: Unsupported as of September 2009, Safari support ended November 2010." https://en.wikipedia.org/wiki/Mac_OS_X_Tiger ---------- components: Build messages: 290570 nosy: haypo, ned.deily priority: normal severity: normal status: open title: Drop Mac OS X Tiger support in Python 3.7?
versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 06:26:54 2017 From: report at bugs.python.org (Michael Seifert) Date: Mon, 27 Mar 2017 10:26:54 +0000 Subject: [New-bugs-announce] [issue29916] No explicit documentation for PyGetSetDef and getter and setter C-API Message-ID: <1490610414.6.0.8634942266.issue29916@psf.upfronthosting.co.za> New submission from Michael Seifert: A copy of the struct definition can be found in the typeobject documentation [1]. There is also some explanation of the "closure" function pointer in the extending tutorial [2]. However, the struct isn't explicitly defined as "c:type", so the 6 links to it in the documentation go nowhere. I also submitted a pull request. [1] https://docs.python.org/3.6/c-api/typeobj.html#c.PyTypeObject.tp_getset [2] https://docs.python.org/3/extending/newtypes.html ---------- assignee: docs at python components: Documentation messages: 290574 nosy: MSeifert, docs at python priority: normal pull_requests: 742 severity: normal status: open title: No explicit documentation for PyGetSetDef and getter and setter C-API _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 06:31:12 2017 From: report at bugs.python.org (Michael Seifert) Date: Mon, 27 Mar 2017 10:31:12 +0000 Subject: [New-bugs-announce] [issue29917] Wrong link target in PyMethodDef documentation Message-ID: <1490610672.62.0.617621215351.issue29917@psf.upfronthosting.co.za> New submission from Michael Seifert: The `link`-target of the "type" struct member is the Python built-in "type". See [1]. I think it should not be a link at all.
[1] https://docs.python.org/3.7/c-api/structures.html#c.PyMemberDef ---------- assignee: docs at python components: Documentation messages: 290575 nosy: MSeifert, docs at python priority: normal severity: normal status: open title: Wrong link target in PyMethodDef documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 07:30:45 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 27 Mar 2017 11:30:45 +0000 Subject: [New-bugs-announce] [issue29918] Missed "const" modifiers in C API documentation Message-ID: <1490614245.98.0.401745534622.issue29918@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch adds missed "const" modifiers in C API documentation. ---------- assignee: docs at python components: Documentation messages: 290588 nosy: docs at python, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Missed "const" modifiers in C API documentation type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 09:05:13 2017 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Mar 2017 13:05:13 +0000 Subject: [New-bugs-announce] [issue29919] Remove unused imports found by pyflakes Message-ID: <1490619913.4.0.443050658966.issue29919@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached PR removes unused imports found by pyflakes. It also makes minor PEP 8 coding style fixes on the modified imports.
---------- messages: 290607 nosy: haypo priority: normal pull_requests: 746 severity: normal status: open title: Remove unused imports found by pyflakes versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 09:16:49 2017 From: report at bugs.python.org (Xavier Morel) Date: Mon, 27 Mar 2017 13:16:49 +0000 Subject: [New-bugs-announce] [issue29920] Document cgitb.text and cgitb.html Message-ID: <1490620609.02.0.405950190914.issue29920@psf.upfronthosting.co.za> New submission from Xavier Morel: Currently, cgitb documents the hook (enable) and somewhat unclearly the ability to dump the HTML traceback to stdout, but despite that being technically available it does not document the ability to dump the traceback to a string as either text or html. Possible further improvement: make ``cgitb.html`` and ``cgitb.text`` implicitly call `sys.exc_info()` if not given a parameter (much like `cgitb.handler` does). ---------- assignee: docs at python components: Documentation messages: 290608 nosy: docs at python, xmorel priority: normal pull_requests: 747 severity: normal status: open title: Document cgitb.text and cgitb.html type: enhancement versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 09:48:47 2017 From: report at bugs.python.org (m-parry) Date: Mon, 27 Mar 2017 13:48:47 +0000 Subject: [New-bugs-announce] [issue29921] datetime validation is stricter in 3.6.1 than previous versions Message-ID: <1490622527.46.0.936939014549.issue29921@psf.upfronthosting.co.za> New submission from m-parry: The change in issue #29100 - intended AFAICS simply to fix a regression in 3.6 - seems to have made datetime validation via certain code paths stricter than it was in 2.7 or 3.5. 
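For context (an illustrative sketch, not from the report): the bound the stricter validation now enforces is the documented MAXYEAR constructor limit, which year 30828 (the Windows FILETIME "never expires" sentinel) far exceeds:

```python
import datetime

# Year 30828 is far beyond the documented constructor bound, so the
# constructor-level check rejects it.
assert datetime.MAXYEAR == 9999

try:
    datetime.datetime(30828, 1, 1)
    raised = False
except ValueError:
    raised = True
assert raised
```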
I think it's the case that some routes via the C API now reject out of range values that were previously permitted. Even if this previous behaviour was incorrect, was it intentional to alter that in a maintenance release? Here's a quick example using pywin32:

---
> import getpass, sspi, sspicon, win32security
> client_name = getpass.getuser()
> auth_info = (client_name, 'wherever.com', None)
> pkg_info = win32security.QuerySecurityPackageInfo('Kerberos')
> win32security.AcquireCredentialsHandle(
>     client_name, pkg_info['Name'],
>     sspicon.SECPKG_CRED_OUTBOUND,
>     None, auth_info)
ValueError: year 30828 is out of range
---

Of course, this is probably a mishandling of the 'never expires' value returned by the Windows API in this case, and indeed I have also created a pywin32 ticket. However, I'm guessing that the linked issue wasn't supposed to break such code. ---------- components: Library (Lib) messages: 290609 nosy: m-parry priority: normal severity: normal status: open title: datetime validation is stricter in 3.6.1 than previous versions type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 13:24:22 2017 From: report at bugs.python.org (Tadhg McDonald-Jensen) Date: Mon, 27 Mar 2017 17:24:22 +0000 Subject: [New-bugs-announce] [issue29922] error message when __aexit__ is not async Message-ID: <1490635462.94.0.877144122619.issue29922@psf.upfronthosting.co.za> New submission from Tadhg McDonald-Jensen: When creating an asynchronous context manager, if the __aexit__ method is not labeled as async (so it returns None instead of a coroutine) the error has a generic error message:

TypeError: object NoneType can't be used in 'await' expression

Would it be possible to change this so it indicates that it was the context manager that was invalid, not an `await` statement?
Since the traceback points to the last statement of the with block it can create very confusing errors if the last statement was an await. Example:

import asyncio

class Test():
    async def __aenter__(self):
        print("aenter used")
        value = asyncio.Future()
        value.set_result(True)
        return value

    # FORGOT TO MARK AS async !!
    def __aexit__(self, *errors):
        print("aexit used")
        return None

async def my_test():
    async with Test() as x:
        print("inside async with, now awaiting on", x)
        await x

my_test().send(None)

Gives the output:

aenter used
inside async with, now awaiting on <Future finished result=True>
aexit used
Traceback (most recent call last):
  File ".../test.py", line 19, in <module>
    my_test().send(None)
  File ".../test.py", line 16, in my_test
    await x
TypeError: object NoneType can't be used in 'await' expression

Which indicates to me that `x` was None when it was await-ed for.
For example, taking the Quest example given in PEP 487 but simply adding the ABCMeta metaclass results in a runtime error:

```
import abc

class QuestBase(metaclass=abc.ABCMeta):
    # this is implicitly a @classmethod (see below for motivation)
    def __init_subclass__(cls, swallow, **kwargs):
        cls.swallow = swallow
        super().__init_subclass__(**kwargs)

class Quest(QuestBase, swallow="african"):
    pass

print(Quest.swallow)

Traceback (most recent call last):
  File "credentials.py", line 23, in <module>
    class Quest(QuestBase, swallow="african"):
TypeError: __new__() got an unexpected keyword argument 'swallow'
```

---------- components: Library (Lib) messages: 290641 nosy: Brian Petersen priority: normal severity: normal status: open title: PEP487 __init_subclass__ incompatible with abc.ABCMeta versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 27 16:55:22 2017 From: report at bugs.python.org (SylvainDe) Date: Mon, 27 Mar 2017 20:55:22 +0000 Subject: [New-bugs-announce] [issue29924] Useless argument in call to PyErr_Format Message-ID: <1490648122.87.0.778103510722.issue29924@psf.upfronthosting.co.za> New submission from SylvainDe: Very uninteresting issue I've found while looking at the code. In Objects/call.c, in _PyMethodDef_RawFastCallDict(PyMethodDef *method, PyObject *self, PyObject **arg...), we have:

no_keyword_error:
    PyErr_Format(PyExc_TypeError,
                 "%.200s() takes no keyword arguments",
                 method->ml_name, nargs);

The `nargs` argument seems pointless. This issue is mostly opened to have a record number to open a commit, but it raises a few questions:

- would it make sense to try to use GCC/Clang's logic around __attribute__ to have this kind of thing checked during compilation as much as possible?
- would it make sense to define very small functions wrapping some calls to `PyErr_Format` so that one can use a function with a very clear signature at (almost) no cost?
This would be especially relevant for errors raised in multiple places with the same message (the trio PyMethodDef *method / PyExc_TypeError / "%.200s() takes no keyword arguments" is a good candidate for this). I'd be happy to work on this, but I'm afraid it would correspond to something Raymond Hettinger asks newcomers not to do: "Don't be a picture straightener" ( https://speakerdeck.com/pybay2016/raymond-hettinger-keynote-core-developer-world ). I've filed the impacted version as 3.7 as there is no real impacted version from a user point of view.

----------
components: Argument Clinic
messages: 290645
nosy: SylvainDe, larry
priority: normal
severity: normal
status: open
title: Useless argument in call to PyErr_Format
type: enhancement
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 27 17:05:46 2017
From: report at bugs.python.org (STINNER Victor)
Date: Mon, 27 Mar 2017 21:05:46 +0000
Subject: [New-bugs-announce] [issue29925] test_uuid fails on OS X Tiger
Message-ID: <1490648746.25.0.479483160976.issue29925@psf.upfronthosting.co.za>

New submission from STINNER Victor:

http://buildbot.python.org/all/builders/x86%20Tiger%203.x/builds/363/steps/test/logs/stdio

======================================================================
FAIL: test_uuid1_safe (test.test_uuid.TestUUID)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_uuid.py", line 351, in test_uuid1_safe
    self.assertNotEqual(u.is_safe, uuid.SafeUUID.unknown)
AssertionError: ==

According to Ned Deily, the test started to fail with the commit 0b8432538acf45d7a605fe68648b4712e8d9cee3.

PR: https://github.com/python/cpython/pull/388

See also the issue #29915 (OS X Tiger).
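For context, the failing assertion exercises the new `is_safe` attribute added by that PR. A minimal sketch of what the test inspects (assuming a Python with `uuid.SafeUUID`; the actual value depends on the platform's libuuid, and "unknown" is exactly what the Tiger buildbot reported):

```python
import uuid

u = uuid.uuid1()
# is_safe reports whether the platform generated the UUID in a way that is
# safe for multiprocessing use; many platforms can only report "unknown".
assert u.is_safe in (uuid.SafeUUID.safe,
                     uuid.SafeUUID.unsafe,
                     uuid.SafeUUID.unknown)
```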
----------
components: Tests, macOS
messages: 290652
nosy: benjamin.peterson, haypo, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: test_uuid fails on OS X Tiger
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 27 18:51:11 2017
From: report at bugs.python.org (Mark)
Date: Mon, 27 Mar 2017 22:51:11 +0000
Subject: [New-bugs-announce] [issue29926] time.sleep ignores keyboard interrupt in IDLE
Message-ID: <1490655071.78.0.435370193059.issue29926@psf.upfronthosting.co.za>

New submission from Mark:

Consider the following code, typed interactively:

>>> import time
>>> time.sleep(1e6)

This will sleep for a bit over one and a half weeks. If this was typed in error, you may want to interrupt it. If using the command line, this is easy: just use Ctrl-C. If using IDLE, Ctrl-C has no effect. One could attempt to restart the shell with Ctrl-F6, which seems to work, but in fact the process remains in the background, hung until the timeout expires.

There are two obvious workarounds: one is to sleep in a separate thread, so as to avoid blocking the main thread, and the other is to use a loop with smaller sleep increments:

for ii in range(100000):
    time.sleep(10)

Now it only takes 10 seconds to interrupt a sleep. But these are both clumsy workarounds. They're so clumsy that I think I'm not going to use IDLE for this particular program and just use python -I. Would be nice if this were fixed.
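The chunked-sleep workaround described above can be wrapped into a helper; the function name is illustrative, not an IDLE or stdlib API:

```python
import time

def interruptible_sleep(total_seconds, increment=10):
    """Sleep in short increments so Ctrl-C is honoured within `increment` seconds."""
    deadline = time.monotonic() + total_seconds
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        # time.sleep returns promptly on KeyboardInterrupt, so the worst-case
        # wait before the interrupt is seen is one increment.
        time.sleep(min(increment, remaining))
```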
----------
assignee: terry.reedy
components: IDLE
messages: 290663
nosy: Mark, terry.reedy
priority: normal
severity: normal
status: open
title: time.sleep ignores keyboard interrupt in IDLE
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 27 22:23:00 2017
From: report at bugs.python.org (Kinebuchi Tomohiko)
Date: Tue, 28 Mar 2017 02:23:00 +0000
Subject: [New-bugs-announce] [issue29927] Unnecessary code in the c-api/exceptions.c
Message-ID: <1490667780.76.0.56540941039.issue29927@psf.upfronthosting.co.za>

New submission from Kinebuchi Tomohiko:

1. BufferError is PRE_INIT'ed twice, and also POST_INIT'ed twice.
2. The macros (PRE_INIT, POST_INIT and ADD_ERRNO) are used with trailing, unnecessary semicolons.

This unnecessary code has no semantic effect, but it is somewhat confusing.

----------
components: Interpreter Core
messages: 290678
nosy: cocoatomo
priority: normal
severity: normal
status: open
title: Unnecessary code in the c-api/exceptions.c
versions: Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 01:07:33 2017
From: report at bugs.python.org (Mariatta Wijaya)
Date: Tue, 28 Mar 2017 05:07:33 +0000
Subject: [New-bugs-announce] [issue29928] Add f-strings to Glossary
Message-ID: <1490677653.09.0.270979437774.issue29928@psf.upfronthosting.co.za>

New submission from Mariatta Wijaya:

The Glossary section should mention f-strings, starting in Python 3.6.
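For reference, a minimal example of the formatted string literals such a glossary entry would cover:

```python
# A formatted string literal (f-string) evaluates the expressions in braces
# at run time, with optional format specs after a colon.
name = "Python"
version = 3.6
entry = f"{name} {version} adds formatted string literals"
assert entry == "Python 3.6 adds formatted string literals"
# Format specs work inside the braces too:
assert f"{2 ** 10:>6}" == "  1024"
```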
----------
assignee: docs at python
components: Documentation
messages: 290682
nosy: Mariatta, docs at python
priority: normal
severity: normal
status: open
title: Add f-strings to Glossary
versions: Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 05:22:30 2017
From: report at bugs.python.org (Nick Coghlan)
Date: Tue, 28 Mar 2017 09:22:30 +0000
Subject: [New-bugs-announce] [issue29929] Idea: Make __main__ an implied package
Message-ID: <1490692950.43.0.231452338576.issue29929@psf.upfronthosting.co.za>

New submission from Nick Coghlan:

In just the last 24 hours, I've run across two cases where the default "the script directory is on sys.path" behaviour confused even experienced programmers:

1. a GitHub engineer thought the Python version in their Git-for-Windows bundle was broken because "from random import randint" failed (from a script called "random.py")

2. a Red Hat engineer was thoroughly confused when their systemd.py script was executed a second time when an unhandled exception was raised (Fedora's system Python is integrated with the ABRT crash reporter, and the excepthook implementation does "from systemd import journal" while dealing with an unhandled exception)

This isn't a new problem; we've known for a long time that people are regularly confused by this, and it earned a mention as one of my "Traps for the Unwary in Python's Import System": http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html#the-name-shadowing-trap

However, what's changed is that for the first time I think I see a potential way out of this: rather than injecting the script directory as sys.path[0], we could set it as "__main__.__path__ = []". Cross-version compatible code would then be written as:

if "__path__" in globals():
    from . import relative_module_name
else:
    import relative_module_name

This approach would effectively be a continuation of PEP 328 (which eliminated implicit relative imports from within packages) and PEP 366 (which allowed explicit relative imports from modules executed with the '-m' switch).

----------
components: Interpreter Core
messages: 290689
nosy: ncoghlan
priority: normal
severity: normal
status: open
title: Idea: Make __main__ an implied package
type: enhancement
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 05:45:54 2017
From: report at bugs.python.org (Metathink)
Date: Tue, 28 Mar 2017 09:45:54 +0000
Subject: [New-bugs-announce] [issue29930] asyncio.StreamWriter.drain raises an AssertionError under heavy use
Message-ID: <1490694354.71.0.0417616939951.issue29930@psf.upfronthosting.co.za>

New submission from Metathink:

While trying to break some code in a project using asyncio, I found that under certain circumstances, asyncio.StreamWriter.drain raises an AssertionError.

1. There must be a lot of concurrent uses of "await writer.drain()"
2. The server to which we send data must be public; no AssertionError occurs while connected to 127.0.0.1

Task exception was never retrieved
future: exception=AssertionError()>
Traceback (most recent call last):
  File "client.py", line 12, in flooding
    await writer.drain()
  File "/usr/local/lib/python3.6/asyncio/streams.py", line 333, in drain
    yield from self._protocol._drain_helper()
  File "/usr/local/lib/python3.6/asyncio/streams.py", line 208, in _drain_helper
    assert waiter is None or waiter.cancelled()
AssertionError

I don't know much about how the drain function works or how networking is handled by the OS, but I'm assuming that I've reached some OS limitation which triggers this AssertionError. I'm not sure how I'm supposed to handle that.
Am I supposed to add some throttling because I should not send too much data concurrently? Is this considered a bug? Any explanations are welcome.

Here are some minimal client and server examples if you want to try to reproduce it:
- Server: https://pastebin.com/SED89pwB
- Client: https://pastebin.com/ikJKHxi9

Also, I don't think this is limited to Python 3.6; I've found this old issue on aaugustin's websockets repo which looks the same: https://github.com/aaugustin/websockets/issues/16

----------
components: asyncio
messages: 290690
nosy: metathink, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.StreamWriter.drain raises an AssertionError under heavy use
type: behavior
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 06:37:20 2017
From: report at bugs.python.org (Sanjay)
Date: Tue, 28 Mar 2017 10:37:20 +0000
Subject: [New-bugs-announce] [issue29931] ipaddress.ip_interface __lt__ check seems to be broken
Message-ID: <1490697440.73.0.104969617886.issue29931@psf.upfronthosting.co.za>

New submission from Sanjay:

The less-than check for ip_interface behaves weirdly. I am not sure if this is by design. We are just comparing the network addresses, but when the network addresses are equal we should compare the IP addresses. The expectation is that if a < b is False then b <= a must be True:

>>> import ipaddress
>>> a = ipaddress.ip_interface("1.1.1.1/24")
>>> b = ipaddress.ip_interface("1.1.1.2/24")
>>> a < b
False
>>> b <= a
False
>>> a == b
False
>>>

This happens with both v4 and v6.

The tests were passing because in ComparisonTests we were testing with a prefix length of 32, which means the whole IP address became the network address. I have made a fix here: https://github.com/s-sanjay/cpython/commit/14975f58539308b7af5a1519705fb8cd95ad7951

I can add more tests and send a PR, but before that I wanted to confirm the behavior.
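A quick property check for the total ordering the report expects: with the proposed fix (compare the IP addresses when the network addresses tie), exactly one of <, ==, > should hold for any pair. This passes on interpreters where such a fix has landed:

```python
import ipaddress

a = ipaddress.ip_interface("1.1.1.1/24")
b = ipaddress.ip_interface("1.1.1.2/24")
# A consistent total ordering means exactly one of these is true:
assert sum([a < b, a == b, b < a]) == 1
# and <= must be the complement of the reversed strict comparison:
assert (a < b) != (b <= a)
```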
----------
components: Library (Lib)
messages: 290695
nosy: Sanjay, ncoghlan, pmoody, xiang.zhang
priority: normal
severity: normal
status: open
title: ipaddress.ip_interface __lt__ check seems to be broken
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 08:35:05 2017
From: report at bugs.python.org (SylvainDe)
Date: Tue, 28 Mar 2017 12:35:05 +0000
Subject: [New-bugs-announce] [issue29932] Missing word ("be") in error message ("first argument must a type object")
Message-ID: <1490704505.47.0.339319801351.issue29932@psf.upfronthosting.co.za>

New submission from SylvainDe:

Very uninteresting issue, but the error message should probably be "first argument must BE a type object" in `array__array_reconstructor_impl` in Modules/arraymodule.c. This was introduced with ad077154d0f305ee0ba5bf41d3cb47d1d9c43e7b. I'll handle this issue in the next few days.

----------
messages: 290701
nosy: SylvainDe
priority: normal
severity: normal
status: open
title: Missing word ("be") in error message ("first argument must a type object")
type: enhancement
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 09:18:59 2017
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 28 Mar 2017 13:18:59 +0000
Subject: [New-bugs-announce] [issue29933] asyncio: set_write_buffer_limits() doc doesn't specify unit of the parameters
Message-ID: <1490707139.27.0.838851028066.issue29933@psf.upfronthosting.co.za>

New submission from STINNER Victor:

The asyncio set_write_buffer_limits() documentation doesn't specify the unit of the high and low parameters.
Moreover, it would help to explain the effect of high and low better:

* pause_writing() is called when the buffer size becomes larger than or equal to high
* (if writing is paused) resume_writing() is called when the buffer size becomes smaller than or equal to low

----------
assignee: docs at python
components: Documentation, asyncio
keywords: easy
messages: 290712
nosy: docs at python, haypo, yselivanov
priority: normal
severity: normal
status: open
title: asyncio: set_write_buffer_limits() doc doesn't specify unit of the parameters
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 10:37:41 2017
From: report at bugs.python.org (Mert Bora Alper)
Date: Tue, 28 Mar 2017 14:37:41 +0000
Subject: [New-bugs-announce] [issue29934] % formatting fails to find formatting code in bytes type after a null byte
Message-ID: <1490711861.33.0.140733331075.issue29934@psf.upfronthosting.co.za>

New submission from Mert Bora Alper:

Hello,

In Python 3.6.0, % formatting fails to find the formatting code after a null byte in the bytes type. Example:

>>> "%s_\x00%s" % ("hello", "world")
'hello_\x00world'
>>> b"%s_\x00%s" % (b"hello", b"world")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: not all arguments converted during bytes formatting

In contrast, the exact same code works as expected in Python 3.5:

>>> "%s_\x00%s" % ("hello", "world")
'hello_\x00world'
>>> b"%s_\x00%s" % (b"hello", b"world")
b'hello_\x00world'

I used Python 3.6.0 that I installed using pyenv 1.0.8 on Kubuntu 16.04 x86_64.
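The expected (3.5-style) behaviour, which later bugfix releases restored, can be checked directly:

```python
# %-formatting should locate format codes even after an embedded null byte,
# matching the str behaviour.
assert "%s_\x00%s" % ("hello", "world") == "hello_\x00world"
assert b"%s_\x00%s" % (b"hello", b"world") == b"hello_\x00world"
```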
----------
messages: 290724
nosy: boramalper
priority: normal
severity: normal
status: open
title: % formatting fails to find formatting code in bytes type after a null byte
type: behavior
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 12:16:21 2017
From: report at bugs.python.org (George King)
Date: Tue, 28 Mar 2017 16:16:21 +0000
Subject: [New-bugs-announce] [issue29935] list and tuple index methods should accept None parameters
Message-ID: <1490717781.97.0.695968126549.issue29935@psf.upfronthosting.co.za>

New submission from George King:

As of Python 3.6, passing None to the start/end parameters of `list.index` and `tuple.index` raises the following exception: "slice indices must be integers or None or have an __index__ method".

This suggests that the intent is to support None as a valid input. This would be quite useful for the end parameter, where the sensible default is len(self) rather than a constant. Note also that str, bytes, and bytearray all support None.

I suggest that CPython be patched to support None for start/end. Otherwise, at the very least the exception message should be changed. Accepting None would make the optional start/end parameters for these methods more consistent across the types, which is especially helpful when using type annotations / checking.
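The inconsistency is easy to demonstrate: str already accepts None for start/end, while list rejects it (behaviour as observed on the versions discussed here):

```python
# str.index treats None as "use the default slice bound":
assert "abcb".index("b", None, None) == 1

# list.index rejects the same call with the slice-indices TypeError:
try:
    ["a", "b", "c", "b"].index("b", None, None)
except TypeError as exc:
    message = str(exc)
assert "slice indices" in message
```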
----------
messages: 290737
nosy: gwk
priority: normal
severity: normal
status: open
title: list and tuple index methods should accept None parameters

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 12:26:04 2017
From: report at bugs.python.org (Niklas Fiekas)
Date: Tue, 28 Mar 2017 16:26:04 +0000
Subject: [New-bugs-announce] [issue29936] Typo in __GNU*C*_MINOR__ guard affecting gcc 3.x
Message-ID: <1490718364.46.0.906285030804.issue29936@psf.upfronthosting.co.za>

New submission from Niklas Fiekas:

The patch in http://bugs.python.org/issue16881 disables the nicer macro for gcc 3.x due to a small typo. The build is not failing; the guard just unnecessarily evaluates to false.

----------
components: Interpreter Core
messages: 290738
nosy: niklasf
priority: normal
severity: normal
status: open
title: Typo in __GNU*C*_MINOR__ guard affecting gcc 3.x
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 13:43:18 2017
From: report at bugs.python.org (Mark Nolan)
Date: Tue, 28 Mar 2017 17:43:18 +0000
Subject: [New-bugs-announce] [issue29937] argparse mutex group should allow mandatory parameters
Message-ID: <1490722998.75.0.478597210318.issue29937@psf.upfronthosting.co.za>

New submission from Mark Nolan:

I see elsewhere, and from use, that a mutex group will not support mandatory positional parameters. TBH, I don't understand why this should be any different from any other option, but if it must, then I think it should follow the 'required' parameter of the mutex. So, it should be possible to have a mutex group where one option must be chosen and that option must have positional parameters.

(My first post here. Not sure of any other way to discuss with the argparse development group. Point me somewhere else if appropriate.)
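The current restriction is easy to see: argparse rejects any required action, which includes plain positionals, inside a mutually exclusive group. A small sketch:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--either")        # optionals are accepted
try:
    group.add_argument("positional")  # positionals are required by default
except ValueError as exc:
    message = str(exc)
assert "must be optional" in message
```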
----------
components: Library (Lib)
messages: 290750
nosy: Mark Nolan
priority: normal
severity: normal
status: open
title: argparse mutex group should allow mandatory parameters
type: enhancement
versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 28 20:56:07 2017
From: report at bugs.python.org (SLAPaper)
Date: Wed, 29 Mar 2017 00:56:07 +0000
Subject: [New-bugs-announce] [issue29938] subprocess.run calling bash on windows10 cause 0x80070057 error when capture stdout with PIPE
Message-ID: <1490748967.96.0.600752277215.issue29938@psf.upfronthosting.co.za>

New submission from SLAPaper:

import subprocess
print(subprocess.run("bash -c ls", shell=True, stdout=subprocess.PIPE,
                     encoding='utf_16_le').stdout)
# ??: 0x80070057
# error: 0x80070057

And the returncode is 4294967295.

OS: Simplified-Chinese Win10; Bash on Windows (Ubuntu 14.04 LTS). Python 3.5 and Python 3.6 produce the same issue. When stdout is not captured, the command works just fine and outputs the result onto the screen.

----------
components: Windows
messages: 290762
nosy: SLAPaper, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: subprocess.run calling bash on windows10 cause 0x80070057 error when capture stdout with PIPE
type: behavior
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 29 02:44:44 2017
From: report at bugs.python.org (Serhiy Storchaka)
Date: Wed, 29 Mar 2017 06:44:44 +0000
Subject: [New-bugs-announce] [issue29939] Compiler warning in _ctypes_test.c
Message-ID: <1490769884.15.0.305944990861.issue29939@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka:

A compiler warning was introduced by issue29565.
/home/serhiy/py/cpython/Modules/_ctypes/_ctypes_test.c: In function '_testfunc_large_struct_update_value':
/home/serhiy/py/cpython/Modules/_ctypes/_ctypes_test.c:53:42: warning: parameter 'in' set but not used [-Wunused-but-set-parameter]
 _testfunc_large_struct_update_value(Test in)
                                          ^

----------
components: ctypes
messages: 290771
nosy: serhiy.storchaka, vinay.sajip
priority: normal
severity: normal
stage: needs patch
status: open
title: Compiler warning in _ctypes_test.c
type: compile error
versions: Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 29 11:41:05 2017
From: report at bugs.python.org (Samwyse)
Date: Wed, 29 Mar 2017 15:41:05 +0000
Subject: [New-bugs-announce] [issue29940] Add follow_wrapped=True option to help()
Message-ID: <1490802065.05.0.49346265762.issue29940@psf.upfronthosting.co.za>

New submission from Samwyse:

The help(obj) function uses the type of obj to create its result. This is less than helpful when requesting help on a wrapped object. Since 3.5, inspect.signature() and inspect.Signature.from_callable() have a follow_wrapped option to get around similar issues. Adding the option to help() would prevent surprising behavior while still allowing the current behavior to be used when needed. See http://stackoverflow.com/a/17705456/603136 for more.
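The inspect precedent mentioned above looks like this; help() currently has no equivalent knob (the `logged` decorator is just an illustration):

```python
import functools
import inspect

def logged(func):
    @functools.wraps(func)  # sets wrapper.__wrapped__ = func
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def add(x, y):
    """Add two numbers."""
    return x + y

# follow_wrapped=True (the default) looks through __wrapped__:
assert str(inspect.signature(add, follow_wrapped=True)) == "(x, y)"
# follow_wrapped=False reports the wrapper itself:
assert str(inspect.signature(add, follow_wrapped=False)) == "(*args, **kwargs)"
```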
----------
components: Library (Lib)
messages: 290782
nosy: samwyse
priority: normal
severity: normal
status: open
title: Add follow_wrapped=True option to help()
type: enhancement
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 29 12:41:48 2017
From: report at bugs.python.org (Thomas Wouters)
Date: Wed, 29 Mar 2017 16:41:48 +0000
Subject: [New-bugs-announce] [issue29941] Confusion between asserts and Py_DEBUG
Message-ID: <1490805708.56.0.244134698101.issue29941@psf.upfronthosting.co.za>

New submission from Thomas Wouters:

There is a bit of confusion in the CPython source between Py_DEBUG and (C) asserts. By default Python builds without Py_DEBUG and without asserts (defining NDEBUG to disable them). Turning on Py_DEBUG also enables asserts. However, it *is* possible to turn on asserts *without* turning on Py_DEBUG, and at Google we routinely build CPython that way. (Doing this with the regular configure/make process can be done by setting CFLAGS=-UNDEBUG when running configure.) This happens to highlight two different problems:

- Code being defined in Py_DEBUG blocks but used in assertions: _PyDict_CheckConsistency() is defined in dictobject.c in an #ifdef Py_DEBUG block, but then used in an assert without a check for Py_DEBUG. This is a compile-time error.

- Assertions checking for things that are outside of CPython's control, like whether an exception is set before calling something that might clobber it. Generally speaking, assertions should be for internal invariants: things that should be a specific way, where it's an error in CPython itself when they are not (I think Tim Peters originally expressed this view of C asserts). For example, PyObject_Call() (and various other flavours of it) does 'assert(!PyErr_Occurred())', which is easily triggered, and the cause of which is not always apparent.
The second case is useful, mind you, as it exposes bugs in extension modules, but the way it does so is not very helpful (it displays no traceback), and if the intent is to only do this when Py_DEBUG is enabled, it would be better to check for that.

The attached PR fixes both issues. I think what our codebase does (enable assertions by default, without enabling Py_DEBUG) is useful, even when applied to CPython, and I would like CPython to keep working that way. However, if it's deemed more appropriate to make assertions only work in Py_DEBUG mode, that's fine too -- but please make it explicit, by making non-Py_DEBUG builds require NDEBUG.

----------
messages: 290784
nosy: Thomas Wouters, gregory.p.smith
priority: normal
pull_requests: 788
severity: normal
status: open
title: Confusion between asserts and Py_DEBUG
type: crash

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 29 13:09:09 2017
From: report at bugs.python.org (Thomas Wouters)
Date: Wed, 29 Mar 2017 17:09:09 +0000
Subject: [New-bugs-announce] [issue29942] Stack overflow in itertools.chain.from_iterable.
Message-ID: <1490807349.39.0.701880825332.issue29942@psf.upfronthosting.co.za>

New submission from Thomas Wouters:

itertools.chain.from_iterable (somewhat ironically) uses recursion to resolve the next iterator, which means it can run out of C stack when there's a long run of empty iterables. This is most obvious when building with low optimisation modes, or with Py_DEBUG enabled:

Python 3.7.0a0 (heads/master:c431854a09, Mar 29 2017, 10:03:50)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import itertools
>>> next(itertools.chain.from_iterable(() for unused in range(10000000)))
Segmentation fault (core dumped)

----------
messages: 290787
nosy: gregory.p.smith, twouters
priority: normal
pull_requests: 791
severity: normal
status: open
title: Stack overflow in itertools.chain.from_iterable.

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Mar 29 19:36:28 2017
From: report at bugs.python.org (Nathaniel Smith)
Date: Wed, 29 Mar 2017 23:36:28 +0000
Subject: [New-bugs-announce] [issue29943] PySlice_GetIndicesEx change broke ABI in 3.5 and 3.6 branches
Message-ID: <1490830588.09.0.152876220246.issue29943@psf.upfronthosting.co.za>

New submission from Nathaniel Smith:

In the process of fixing issue 27867, a new function PySlice_AdjustIndices was added, and PySlice_GetIndicesEx was converted into a macro that calls this new function. The patch was backported to both the 3.5 and 3.6 branches, was released in 3.6.1, and is currently slated to be released as part of 3.5.4.

Unfortunately, this breaks our normal ABI stability guarantees for micro releases: it means that if a C module that uses PySlice_GetIndicesEx is compiled against e.g. 3.6.1, then it cannot be imported on 3.6.0. This affects a number of high-profile packages (cffi, pandas, mpi4py, dnf, ...). The only workaround is that if you are distributing binary extension modules (e.g. wheels), then you need to be careful not to build against 3.6.1. It's not possible for a wheel to declare that it requires 3.6.1-or-better, because CPython normally follows the rule that we don't make these kinds of changes. Oops.

CC'ing Ned and Larry, because it's possible this should trigger a 3.6.2, and I think it's a blocker for 3.5.4. CC'ing Serhiy as the author of the original patch, since you probably have the best idea how this could be unwound with minimal breakage :-).
python-dev discussion: https://mail.python.org/pipermail/python-dev/2017-March/147707.html
Fedora bug: https://bugzilla.redhat.com/show_bug.cgi?id=1435135

----------
messages: 290796
nosy: larry, ned.deily, njs, serhiy.storchaka
priority: normal
severity: normal
status: open
title: PySlice_GetIndicesEx change broke ABI in 3.5 and 3.6 branches
versions: Python 3.5, Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 30 02:36:49 2017
From: report at bugs.python.org (assume_away)
Date: Thu, 30 Mar 2017 06:36:49 +0000
Subject: [New-bugs-announce] [issue29944] Argumentless super() calls do not work in classes constructed with type()
Message-ID: <1490855809.89.0.267666010996.issue29944@psf.upfronthosting.co.za>

New submission from assume_away:

The simplest example:

def mydec(cls):
    return type(cls.__name__, cls.__bases__, dict(cls.__dict__))

@mydec
class MyList(list):
    def extend(self, item):
        super(MyList, self).extend(item)

    def insert(self, index, object):
        super().insert(index, object)

>>> lst = MyList()
>>> lst.extend([2,3])
>>> lst.insert(0, 1)
TypeError: super(type, obj): obj must be an instance or subtype of type
>>> lst
[2, 3]

If this is intended behavior, at least the error message could be fixed.
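The root cause is that zero-argument super() relies on a hidden __class__ closure cell that still points at the original class after the type() call. A minimal reproduction:

```python
class MyList(list):
    def insert(self, index, obj):
        super().insert(index, obj)  # compiled against MyList's __class__ cell

# Recreating the class via type() does not update that closure cell:
Clone = type(MyList.__name__, MyList.__bases__, dict(MyList.__dict__))

lst = Clone()
try:
    lst.insert(0, 1)  # instance of Clone, but the cell still says MyList
except TypeError as exc:
    error = str(exc)
assert "super" in error
```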
----------
messages: 290823
nosy: assume_away
priority: normal
severity: normal
status: open
title: Argumentless super() calls do not work in classes constructed with type()
type: behavior
versions: Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 30 04:15:37 2017
From: report at bugs.python.org (webber)
Date: Thu, 30 Mar 2017 08:15:37 +0000
Subject: [New-bugs-announce] [issue29945] decode string:u"\ufffd" UnicodeEncodeError
Message-ID: <1490861737.98.0.649780144425.issue29945@psf.upfronthosting.co.za>

New submission from webber:

I use Python on Linux, version 2.7.13:

[root at localhost bin]# ./python2.7
Python 2.7.13 (default, Mar 30 2017, 00:54:08)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=u"\ufffd"
>>> a.decode("utf=8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/python2.7.13/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 0: ordinal not in range(128)

But the Windows version runs successfully!

----------
components: Unicode
messages: 290834
nosy: ezio.melotti, foxscheduler, haypo
priority: normal
severity: normal
status: open
title: decode string:u"\ufffd" UnicodeEncodeError
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 30 04:28:04 2017
From: report at bugs.python.org (Xiang Zhang)
Date: Thu, 30 Mar 2017 08:28:04 +0000
Subject: [New-bugs-announce] [issue29946] compiler warning "sqrtpi defined but not used"
Message-ID: <1490862484.88.0.697244394269.issue29946@psf.upfronthosting.co.za>

New submission from Xiang Zhang:

Ubuntu 16.10, GCC 6.2.0:

/home/angwer/repos/cpython/Modules/mathmodule.c:74:21: warning: 'sqrtpi' defined but not used [-Wunused-const-variable=]
 static const double sqrtpi = 1.772453850905516027298167483341145182798;

----------
components: Build
messages: 290836
nosy: xiang.zhang
priority: normal
severity: normal
status: open
title: compiler warning "sqrtpi defined but not used"
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Mar 30 08:08:53 2017
From: report at bugs.python.org (Dominic Mayers)
Date: Thu, 30 Mar 2017 12:08:53 +0000
Subject: [New-bugs-announce] [issue29947] In SocketServer, why not passing a factory instance for the RequestHandlerClass instead of the class itself?
Message-ID: <1490875733.83.0.750732897079.issue29947@psf.upfronthosting.co.za>

New submission from Dominic Mayers:

I am just curious to know whether anyone has considered the idea of passing a factory instance that returns RequestHandlerClass instances instead of directly passing the class. It may affect existing handlers that read non-local variables, but there should be a way to make the factory optional. The purpose is only aesthetic and a better organization of the code. I find it awkward to have to subclass the server every time we have a handler that needs special objects: a database connection, a socket connection to another party, etc. The server class should have a single purpose: accept a request and pass it to a handler. We should only need to subclass a server when we need to do that in a different way: TCP vs UDP, Unix vs INET, etc. The usage is simpler and more natural: instead of subclassing the server, we create a factory for the handler.

----------
components: Library (Lib)
messages: 290840
nosy: dominic108
priority: normal
severity: normal
status: open
title: In SocketServer, why not passing a factory instance for the RequestHandlerClass instead of the class itself?
type: enhancement versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 30 11:03:07 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 30 Mar 2017 15:03:07 +0000 Subject: [New-bugs-announce] [issue29948] DeprecationWarning when parse ElementTree with a doctype in 2.7 Message-ID: <1490886187.9.0.0310872110299.issue29948@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: DeprecationWarning is emitted when parse ElementTree with a doctype in 2.7. $ python2.7 -Wa Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import xml.etree.ElementTree as ET >>> ET.XML('text') /usr/lib/python2.7/xml/etree/ElementTree.py:1638: DeprecationWarning: This method of XMLParser is deprecated. Define doctype() method on the TreeBuilder target. DeprecationWarning, /usr/lib/python2.7/xml/etree/ElementTree.py:1638: DeprecationWarning: This method of XMLParser is deprecated. Define doctype() method on the TreeBuilder target. 
DeprecationWarning, ---------- assignee: serhiy.storchaka components: XML messages: 290846 nosy: eli.bendersky, scoder, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: DeprecationWarning when parse ElementTree with a doctype in 2.7 type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 30 14:09:13 2017 From: report at bugs.python.org (INADA Naoki) Date: Thu, 30 Mar 2017 18:09:13 +0000 Subject: [New-bugs-announce] [issue29949] sizeof set after set_merge() is doubled from 3.5 Message-ID: <1490897353.44.0.901506949839.issue29949@psf.upfronthosting.co.za> New submission from INADA Naoki: (original thread is https://mail.python.org/pipermail/python-list/2017-March/720391.html) https://github.com/python/cpython/commit/4897300276d870f99459c82b937f0ac22450f0b6 This commit doubles the allocated size of set objects created by set_merge(), which is used by the constructors of set and frozenset. $ /usr/bin/python3 Python 3.5.2+ (default, Sep 22 2016, 12:18:14) [GCC 6.2.0 20160927] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> s = set(range(10)) >>> sys.getsizeof(frozenset(s)) 736 $ python3 Python 3.6.0 (default, Dec 30 2016, 20:49:54) [GCC 6.2.0 20161005] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> import sys >>> s = set(range(10)) >>> sys.getsizeof(frozenset(s)) 1248 ---------- components: Interpreter Core keywords: 3.6regression messages: 290868 nosy: inada.naoki, rhettinger priority: normal severity: normal status: open title: sizeof set after set_merge() is doubled from 3.5 versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 30 18:32:35 2017 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Thu, 30 Mar 2017 22:32:35 +0000 Subject: [New-bugs-announce] [issue29950] Rename SlotWrapperType to WrapperDescriptorType Message-ID: <1490913155.68.0.921075997719.issue29950@psf.upfronthosting.co.za> New submission from Jim Fasarakis-Hilliard: The name SlotWrapperType was added in #29377, but it named the type based on the repr of the object instead of its type, as `type(object.__init__)` yields. I propose this be renamed to WrapperDescriptorType to avoid any unnecessary confusion down the line. ---------- components: Library (Lib) messages: 290883 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Rename SlotWrapperType to WrapperDescriptorType versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 30 20:03:07 2017 From: report at bugs.python.org (Michael Seifert) Date: Fri, 31 Mar 2017 00:03:07 +0000 Subject: [New-bugs-announce] [issue29951] PyArg_ParseTupleAndKeywords exception messages containing "function" Message-ID: <1490918587.4.0.0607667805061.issue29951@psf.upfronthosting.co.za> New submission from Michael Seifert: Some exceptions thrown by `PyArg_ParseTupleAndKeywords` refer to "function" or "this function" even when a function name was specified.
For example: >>> import bisect >>> bisect.bisect_right([1,2,3,4], 2, low=10) TypeError: 'low' is an invalid keyword argument for this function Wouldn't it be better to replace the "this function" part (when a name was given) with the actual function name? ---------- messages: 290885 nosy: MSeifert priority: normal severity: normal status: open title: PyArg_ParseTupleAndKeywords exception messages containing "function" type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 30 23:40:13 2017 From: report at bugs.python.org (Kinebuchi Tomohiko) Date: Fri, 31 Mar 2017 03:40:13 +0000 Subject: [New-bugs-announce] [issue29952] "keys and values" is preferred to "keys and elements" for name of dict constituent Message-ID: <1490931613.41.0.406216073022.issue29952@psf.upfronthosting.co.za> New submission from Kinebuchi Tomohiko: In the section "6.10.1. Value comparisons" [1]_:: Equality comparison of the keys and elements enforces reflexivity. would be Equality comparison of the keys and values enforces reflexivity. because we usually refer to an entry of a dict as a "key-value pair". ..
[1] https://docs.python.org/3.6/reference/expressions.html#value-comparisons ---------- assignee: docs at python components: Documentation messages: 290890 nosy: cocoatomo, docs at python priority: normal severity: normal status: open title: "keys and values" is preferred to "keys and elements" for name of dict constituent versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 31 09:44:25 2017 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 31 Mar 2017 13:44:25 +0000 Subject: [New-bugs-announce] [issue29953] Memory leak in the replace() method of datetime and time objects Message-ID: <1490967865.61.0.513039272437.issue29953@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: When an out-of-bounds keyword argument fold is passed to datetime.datetime.replace() or datetime.time.replace(), ValueError is raised and the just-allocated object is leaked. The proposed patch fixes the leaks. ---------- components: Extension Modules messages: 290913 nosy: belopolsky, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Memory leak in the replace() method of datetime and time objects type: resource usage versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 31 12:35:59 2017 From: report at bugs.python.org (Petr Zemek) Date: Fri, 31 Mar 2017 16:35:59 +0000 Subject: [New-bugs-announce] [issue29954] multiprocessing.Pool.__exit__() calls terminate() instead of close() Message-ID: <1490978159.71.0.789108610011.issue29954@psf.upfronthosting.co.za> New submission from Petr Zemek: multiprocessing.Pool.__exit__() calls terminate() instead of close(). Why? Wouldn't it be better (and more expected from a user's point of view) if it called close()?
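[Editor's note: a minimal sketch of the close()-then-join() shutdown the report prefers, for comparison with the current terminate()-on-exit behavior. It uses the thread-backed ThreadPool, which shares Pool's API, purely so the example runs without spawning worker processes.]

```python
from multiprocessing.pool import ThreadPool  # same API as Pool, thread-backed

def square(x):
    return x * x

# Today, `with Pool(...) as p:` calls terminate() on exit, which may kill
# tasks that are still running. The graceful alternative the report asks
# for looks like this when spelled out explicitly:
pool = ThreadPool(2)
try:
    results = pool.map(square, range(5))
finally:
    pool.close()  # stop accepting new work; in-flight tasks run to completion
    pool.join()   # block until every worker has exited
print(results)  # [0, 1, 4, 9, 16]
```

The explicit close()/join() pair is how callers get a graceful shutdown with the current context-manager semantics.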
Reasons: - Calling close() would wait until all tasks are completed before shutting down the pool instead of terminating them abruptly while some of them may still be running. - concurrent.futures.ProcessPoolExecutor.__exit__() calls shutdown(wait=True), which waits until all tasks are finished. In this regard, the behavior of Pool.__exit__() is inconsistent. See also this comment by Dan O'Reilly (http://bugs.python.org/msg242120), who expressed an identical concern two years ago. ---------- components: Library (Lib) messages: 290923 nosy: s3rvac priority: normal severity: normal status: open title: multiprocessing.Pool.__exit__() calls terminate() instead of close() type: behavior versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 31 13:18:07 2017 From: report at bugs.python.org (Skip Montanaro) Date: Fri, 31 Mar 2017 17:18:07 +0000 Subject: [New-bugs-announce] [issue29955] logging decimal point should come from locale Message-ID: <1490980687.61.0.0711345645767.issue29955@psf.upfronthosting.co.za> New submission from Skip Montanaro: The logging module hard-codes the decimal point for timestamps to be ",". It should use locale.localeconv()["decimal_point"] instead.
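[Editor's note: in Python 3 the comma comes from the Formatter.default_msec_format class attribute ('%s,%03d'); the sketch below shows the current behavior and a process-wide workaround by overriding that attribute. This is an illustration, not the locale-aware fix the report requests.]

```python
import logging

rec = logging.LogRecord("demo", logging.INFO, "demo.py", 1, "hi", None, None)
fmt = logging.Formatter()
print(fmt.formatTime(rec))  # e.g. '2017-03-31 13:18:07,610' -- comma before msecs

# Workaround until the locale is consulted: swap the separator globally.
logging.Formatter.default_msec_format = "%s.%03d"
print(logging.Formatter().formatTime(rec))  # e.g. '2017-03-31 13:18:07.610'
```

Because default_msec_format is a class attribute, the override affects every Formatter in the process, which is exactly why a locale-derived default would be cleaner.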
---------- components: Library (Lib) messages: 290927 nosy: skip.montanaro priority: normal severity: normal status: open title: logging decimal point should come from locale type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 31 16:08:37 2017 From: report at bugs.python.org (Alexander Belopolsky) Date: Fri, 31 Mar 2017 20:08:37 +0000 Subject: [New-bugs-announce] [issue29956] math.exp documentation is misleading Message-ID: <1490990917.06.0.404933995002.issue29956@psf.upfronthosting.co.za> New submission from Alexander Belopolsky: The math.exp(x) function is documented to "Return e**x". This is misleading because even in the simplest case, math.exp(x) is not the same as math.e ** x: >>> import math >>> math.exp(2) - math.e ** 2 8.881784197001252e-16 I suggest using e^x instead of e**x to distinguish between Python syntax and mathematical operation and change "Return e**x" to "Return e^x, the base-e exponential of x." ---------- assignee: docs at python components: Documentation messages: 290937 nosy: belopolsky, docs at python priority: normal severity: normal status: open title: math.exp documentation is misleading versions: Python 3.7 _______________________________________ Python tracker _______________________________________
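[Editor's note: an illustration of the discrepancy the report describes. math.exp evaluates the exponential function directly, while e ** x exponentiates the double closest to e, so the two spellings can differ by a few ULPs.]

```python
import math

x = 2.0
direct = math.exp(x)    # base-e exponential computed in one step
via_pow = math.e ** x   # pow() applied to the rounded constant math.e
print(direct - via_pow)  # tiny nonzero difference on most platforms

# The difference is a rounding artifact, but it is why documenting
# math.exp(x) as literally "Return e**x" can mislead readers into
# treating the two expressions as interchangeable.
```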