From report at bugs.python.org  Mon Feb 1 04:57:05 2016
From: report at bugs.python.org (Luke Schubert)
Date: Mon, 01 Feb 2016 09:57:05 +0000
Subject: [New-bugs-announce] [issue26255] symtable.Symbol.is_referenced() returns false for valid use
Message-ID: <1454320625.23.0.218294172463.issue26255@psf.upfronthosting.co.za>

New submission from Luke Schubert:

If the following function is saved in listcomp.py:

    def encode(inputLetters):
        code = {'C':'D', 'F':'E'}
        return set(code[x] for x in inputLetters)

and the following code is used to create a symtable for this function:

    import symtable

    if __name__ == "__main__":
        fileName = 'listcomp.py'
        f = open(fileName, 'r')
        source = f.read()
        table = symtable.symtable(source, fileName, 'exec')
        children = table.get_children()
        for childTable in children:
            symbols = childTable.get_symbols()
            for s in symbols:
                if not s.is_referenced():
                    print("Unused symbol '%s' in function '%s'"
                          % (s.get_name(), childTable.get_name()))

then is_referenced() returns false for the 'code' symbol.

If the following function is saved instead:

    def encode2(inputLetters):
        code = {'C':'D', 'F':'E'}
        return [code[x] for x in inputLetters]

then is_referenced() returns true for the 'code' symbol.

Possibly I'm misunderstanding what is_referenced() means, but I thought it should return true in both cases?

----------
messages: 259316
nosy: luke.schubert
priority: normal
severity: normal
status: open
title: symtable.Symbol.is_referenced() returns false for valid use
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Mon Feb 1 05:28:33 2016
From: report at bugs.python.org (Jurjen N.E. Bos)
Date: Mon, 01 Feb 2016 10:28:33 +0000
Subject: [New-bugs-announce] [issue26256] Fast decimalisation and conversion to other bases
Message-ID: <1454322513.41.0.794136944555.issue26256@psf.upfronthosting.co.za>

New submission from Jurjen N.E.
Bos:

Inspired by the recently discovered 49th Mersenne prime number, I wrote a module to do high-speed long/int to decimal conversion. I can now output the new Mersenne number in 18.5 minutes (instead of several hours) on my machine. For numbers longer than about 100000 bits, this routine is faster than str(number), thanks to the Karatsuba multiplication in CPython. The module supports all number bases 2 through 36, and is written in pure Python (both 2 and 3). There is a simple way to save more time by reusing the conversion object (saving about half the time for later calls).

My suggestion is to incorporate this into some library, since Python still lacks a routine to convert to an arbitrary number base. Ideally, it could be incorporated into the builtin str function, but this would need more work. When converting to C, it is recommended to optimize bases 4 and 32 the same way oct, hex and bin do (which isn't easily accessible from Python).

Hope you like it. At least, it was a lot of fun to write...

----------
components: Library (Lib)
files: fastStr.py
messages: 259317
nosy: jneb
priority: normal
severity: normal
status: open
title: Fast decimalisation and conversion to other bases
type: enhancement
versions: Python 2.7
Added file: http://bugs.python.org/file41770/fastStr.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Mon Feb 1 05:49:55 2016
From: report at bugs.python.org (Martin Panter)
Date: Mon, 01 Feb 2016 10:49:55 +0000
Subject: [New-bugs-announce] [issue26257] Eliminate buffer_tests.py
Message-ID: <1454323795.43.0.12298865479.issue26257@psf.upfronthosting.co.za>

New submission from Martin Panter:

This sort of follows on from other cleanup of test_bytes.py in Issue 19587. Currently buffer_tests defines one base test class, MixinBytesBufferCommonTests, which is used by test_bytes.py to test bytearray (but not bytes!).
However there are other tests defined in test_bytes.py, or string_tests.py, which are run, or could be run, for both bytearray and bytes. I haven't checked too closely, but I think all the methods in buffer_tests could be merged with other methods and collected into string_tests.BaseTest. Then they would get run for bytes, bytearray, str, and UserString.

The following methods are probably redundant with class string_tests.MixinStrUnicodeUserStringTest (only run for str and UserString). It looks like there is no bytes version of these tests:

* test_islower/upper/title/space/alpha/alnum/digit()
* test_title()
* test_splitlines()

test_capitalize() looks redundant with half of the method in string_tests.CommonTest (also only run for str and UserString). Again there doesn't seem to be a bytes version of it. I think the common part could be moved into BaseTest, and the unicode-specific part left where it is.

The following are probably also redundant with class string_tests.CommonTest:

* test_ljust/rjust/center()
* test_swapcase()
* test_zfill()

For ljust(), rjust() and center(), there are also tests run for bytes and bytearray in test_bytes.BaseBytesTest that could be merged at the same time. But swapcase() and zfill() don't seem to be currently tested for bytes.
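To sketch what the consolidated form might look like (the class names below are made up for illustration; string_tests drives its mixins with a per-class type2test attribute in just this way), one test body can be run for str, bytes and bytearray alike:

```python
import unittest

class CommonZfillTest:
    """Mixin in the spirit of Lib/test/string_tests.py: concrete
    subclasses set type2test, so one test body covers several types."""
    type2test = None

    def marshal(self, s):
        # Build the type under test from a str literal.
        if issubclass(self.type2test, (bytes, bytearray)):
            return self.type2test(s.encode('ascii'))
        return self.type2test(s)

    def test_zfill(self):
        self.assertEqual(self.marshal('42').zfill(5), self.marshal('00042'))
        self.assertEqual(self.marshal('-42').zfill(5), self.marshal('-0042'))

class StrZfillTest(CommonZfillTest, unittest.TestCase):
    type2test = str

class BytesZfillTest(CommonZfillTest, unittest.TestCase):
    type2test = bytes

class ByteArrayZfillTest(CommonZfillTest, unittest.TestCase):
    type2test = bytearray
```

A UserString variant could presumably be added the same way, which is what merging into string_tests.BaseTest would buy.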
----------
components: Tests
messages: 259318
nosy: martin.panter
priority: normal
severity: normal
stage: needs patch
status: open
title: Eliminate buffer_tests.py
type: enhancement
versions: Python 3.5, Python 3.6

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Mon Feb 1 07:21:28 2016
From: report at bugs.python.org (Ali Razmjoo)
Date: Mon, 01 Feb 2016 12:21:28 +0000
Subject: [New-bugs-announce] [issue26258] readline module for python 3.x on windows
Message-ID: <1454329288.94.0.0141088037674.issue26258@psf.upfronthosting.co.za>

New submission from Ali Razmjoo:

Hello, I am using Python 2.7.10 on Windows and there isn't any problem with the readline module, but it does not exist in Python 3.x on Windows. Is it possible to add it? How?

----------
messages: 259322
nosy: Ali Razmjoo
priority: normal
severity: normal
status: open
title: readline module for python 3.x on windows

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Mon Feb 1 11:06:20 2016
From: report at bugs.python.org (Jonas Brunsgaard)
Date: Mon, 01 Feb 2016 16:06:20 +0000
Subject: [New-bugs-announce] [issue26259] Memleak when repeated calls to asyncio.queue.Queue.get is performed, without push to queue.
Message-ID: <1454342780.58.0.210365622359.issue26259@psf.upfronthosting.co.za>

New submission from Jonas Brunsgaard:

When making repeated calls to queue.get, memory is building up and is not freed until queue.put is called. I wrote this little program to show my findings. The program performs a lot of calls to queue.get, and once every 60 seconds a queue.put is performed. Every 15 seconds the memory usage of dictionaries is printed to the console.
You can find the output below the program.

```
import asyncio

from pympler import muppy
from pympler import summary

q = asyncio.Queue()
loop = asyncio.get_event_loop()
closing = False

async def get_with_timeout():
    while not closing:
        try:
            task = asyncio.ensure_future(q.get())
            await asyncio.wait_for(task, 0.2)
        except asyncio.TimeoutError:
            pass

def mem_profiling():
    if not closing:
        types_ = muppy.filter(muppy.get_objects(), Type=dict)
        summary.print_(summary.summarize(types_))
        loop.call_later(15, mem_profiling)

def put():
    q.put_nowait(None)
    loop.call_later(60, put)

put()
tasks = [asyncio.ensure_future(get_with_timeout()) for _ in range(10000)]
mem_profiling()

try:
    loop.run_forever()
except KeyboardInterrupt:
    closing = True
    loop.run_until_complete(
        asyncio.ensure_future(asyncio.wait(tasks)))
finally:
    loop.close()
```

Output:

types | # objects | total size
======================================== | =========== | ============

_______________________________________

From report at bugs.python.org  Mon Feb 1 11:40:22 2016
From: report at bugs.python.org (Jim Jin)
Date: Mon, 01 Feb 2016 16:40:22 +0000
Subject: [New-bugs-announce] [issue26260] utf8 decoding inconsistency between P2 and P3
Message-ID: <1454344822.47.0.584077650452.issue26260@psf.upfronthosting.co.za>

New submission from Jim Jin:

PAYLOAD1 = b'\xce\xba\xe1\xbd\xb9\xcf\x83\xce\xbc\xce\xb5'
PAYLOAD2 = b'\xed\xa0\x80'
PAYLOAD3 = b'\x65\x64\x69\x74\x65\x64'
PAYLOAD = PAYLOAD1 + PAYLOAD2 + PAYLOAD3

PAYLOAD.decode('utf8') passes in P2.7.* and fails in P3.4.

Thank you for reading.
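For context: the middle three bytes encode the lone surrogate U+D800, which strict UTF-8 (RFC 3629) forbids, so Python 3's decoder rejects the payload that Python 2 let through. A sketch reproducing both behaviours in Python 3, using the standard 'surrogatepass' error handler:

```python
PAYLOAD1 = b'\xce\xba\xe1\xbd\xb9\xcf\x83\xce\xbc\xce\xb5'  # valid UTF-8 Greek text (5 chars)
PAYLOAD2 = b'\xed\xa0\x80'                                  # lone surrogate U+D800
PAYLOAD3 = b'\x65\x64\x69\x74\x65\x64'                      # ASCII "edited"
PAYLOAD = PAYLOAD1 + PAYLOAD2 + PAYLOAD3

# Python 3 is strict: UTF-8 may not contain encoded surrogates.
try:
    PAYLOAD.decode('utf-8')
except UnicodeDecodeError as exc:
    print(exc.reason)

# 'surrogatepass' reproduces Python 2's lenient result.
text = PAYLOAD.decode('utf-8', 'surrogatepass')
print(len(text))  # 12 characters, including the lone '\ud800'
```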
----------
components: Unicode
messages: 259329
nosy: ezio.melotti, haypo, jinz
priority: normal
severity: normal
status: open
title: utf8 decoding inconsistency between P2 and P3
type: enhancement
versions: Python 2.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Mon Feb 1 12:21:22 2016
From: report at bugs.python.org (Antti Haapala)
Date: Mon, 01 Feb 2016 17:21:22 +0000
Subject: [New-bugs-announce] [issue26261] NamedTemporaryFile documentation is vague about the `name` attribute
Message-ID: <1454347282.51.0.892341568817.issue26261@psf.upfronthosting.co.za>

New submission from Antti Haapala:

The documentation for NamedTemporaryFile is a bit vague. It says:

[--] That name can be retrieved from the name attribute of the file object. [--] The returned object is always a file-like object whose file attribute is the underlying true file object. This file-like object can be used in a with statement, just like a normal file.

That `file-like object` vs `true file object` made me assume that I need to do

    f = NamedTemporaryFile()
    f.file.name

to get the filename. This sort of worked, but I only later realized that `f.file.name` is actually the file descriptor number on Linux, i.e. an integer.

Thus I suggest that the one sentence be changed to "That name can be retrieved from the name attribute of the returned file-like object."
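A minimal demonstration of the trap (the exact type of f.file.name is platform-dependent; the integer-fd behaviour described here was observed on Linux):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile() as f:
    # The documented attribute: the path of the file on disk.
    print(f.name)
    assert os.path.exists(f.name)
    # f.file is the underlying file object; its own .name attribute is
    # NOT the path -- on Linux it is the integer file descriptor.
    print(f.file.name)
```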
---------- assignee: docs at python components: Documentation messages: 259334 nosy: docs at python, ztane priority: normal severity: normal status: open title: NamedTemporaryFile documentation is vague about the `name` attribute _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 1 12:50:09 2016 From: report at bugs.python.org (Zachary Ware) Date: Mon, 01 Feb 2016 17:50:09 +0000 Subject: [New-bugs-announce] [issue26262] Cannot compile with /fp:strict with MSVC Message-ID: <1454349009.19.0.118531537241.issue26262@psf.upfronthosting.co.za> New submission from Zachary Ware: As reported in msg258685 (issue25934), Python cannot be built with /fp:strict while using MSVC. As Mark showed in msg258686 and msg258689, there are three calculations that could be replaced with suitable constants to allow /fp:strict to be used. (Nosy list copied from #25934) ---------- components: Build, Windows keywords: easy messages: 259336 nosy: haypo, mark.dickinson, paul.moore, pitrou, python-dev, r.david.murray, serhiy.storchaka, skrah, steve.dower, tim.golden, tim.peters, zach.ware priority: low severity: normal stage: needs patch status: open title: Cannot compile with /fp:strict with MSVC type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 1 13:50:39 2016 From: report at bugs.python.org (Omer Katz) Date: Mon, 01 Feb 2016 18:50:39 +0000 Subject: [New-bugs-announce] [issue26263] Serialize array.array to JSON by default Message-ID: <1454352639.9.0.0979042510265.issue26263@psf.upfronthosting.co.za> New submission from Omer Katz: Is there a reason why the JSON module doesn't serialize array.array() instances by default? Currently you need to convert them to tuples but I'm confident that the C API for those types is enough to iterate over the array and serialize it to a JSON list. 
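In the meantime, the json module's `default` hook already handles this without an intermediate tuple (a sketch; `encode_arrays` is just an illustrative name):

```python
import array
import json

def encode_arrays(obj):
    # json.dumps calls this hook only for objects it cannot serialize itself.
    if isinstance(obj, array.array):
        return obj.tolist()
    raise TypeError("not JSON serializable: %r" % type(obj))

a = array.array('i', [1, 2, 3])
print(json.dumps({"data": a}, default=encode_arrays))  # {"data": [1, 2, 3]}
```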
We should support serializing arrays by default in my opinion. ---------- components: Interpreter Core messages: 259341 nosy: Omer.Katz priority: normal severity: normal status: open title: Serialize array.array to JSON by default type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 1 21:06:26 2016 From: report at bugs.python.org (Timo Furrer) Date: Tue, 02 Feb 2016 02:06:26 +0000 Subject: [New-bugs-announce] [issue26264] keyword.py missing async await Message-ID: <1454378786.53.0.925531991818.issue26264@psf.upfronthosting.co.za> New submission from Timo Furrer: I had a look at the *Lib/keyword.py* module. It seems like the auto generated *kwlist* is missing the *await* and *async* keywords. At least according to https://www.python.org/dev/peps/pep-0492/ they are called *keywords*. The keyword module generates the *kwlist* from *Python/graminit.c*. ---------- components: Library (Lib) messages: 259349 nosy: tuxtimo priority: normal severity: normal status: open title: keyword.py missing async await versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 10:12:49 2016 From: report at bugs.python.org (David Beck) Date: Tue, 02 Feb 2016 15:12:49 +0000 Subject: [New-bugs-announce] [issue26265] errors during static build on OSX Message-ID: <1454425969.86.0.641229022171.issue26265@psf.upfronthosting.co.za> New submission from David Beck: I'm working on an iMac (27" mid 2010) running OSX 10.11.3. I'm trying to build Python3.5.0 for use with pyqtdeploy. If I build python without specifying "--enable-universalsdk", I get multiple warnings "clang: warning: no such sysroot directory: '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk'" on trying to build PyQt5. 
If I specify "--enable-universalsdk" while building Python, I get an error "ld: symbol(s) not found for architecture i386" from the 'make' command. If I then specify "--with-universal-archs=64-bit", I get the output below, which includes a number of requests to report this here. David$ /Users/David/OpenSource/Python-3.5.0/configure --prefix /Users/David/OpenSource/PyQtDeploy/python --enable-universalsdk --with-universal-archs=64-bit checking build system type... x86_64-apple-darwin15.3.0 checking host system type... x86_64-apple-darwin15.3.0 checking for --enable-universalsdk... /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk checking for --with-universal-archs... 64-bit checking MACHDEP... darwin checking for --without-gcc... no Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /usr/bin/grep checking for --with-cxx-main=... no checking for g++... no configure: By default, distutils will build C++ extension modules with "g++". If this is not intended, then set CXX on the configure command line. checking for the platform triplet based on compiler characteristics... darwin checking for -Wl,--no-as-needed... no checking for egrep... /usr/bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... 
yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking whether it is safe to define __EXTENSIONS__... yes checking for --with-suffix... checking for case-insensitive build directory... yes checking LIBRARY... libpython$(VERSION)$(ABIFLAGS).a checking LINKCC... $(PURIFY) $(MAINCC) checking for GNU ld... no checking for inline... inline checking for --enable-shared... no checking for --enable-profiling... no checking LDLIBRARY... libpython$(VERSION)$(ABIFLAGS).a checking for ranlib... ranlib checking for ar... ar checking for readelf... no checking for python3.5... python3.5 checking for python3.5... (cached) python3.5 checking for a BSD-compatible install... /usr/bin/install -c checking for a thread-safe mkdir -p... ./install-sh -c -d checking for --with-pydebug... no checking whether gcc accepts and needs -fno-strict-aliasing... no checking if we can turn off gcc unused result warning... yes checking for -Werror=declaration-after-statement... yes checking if we can turn on gcc mixed sign comparison warning... yes checking if we can turn on gcc unreachable code warning... yes checking which compiler should be used... gcc checking which MACOSX_DEPLOYMENT_TARGET to use... 10.9 checking whether pthreads are available without options... no checking whether gcc accepts -Kpthread... no checking whether gcc accepts -Kthread... no checking whether gcc accepts -pthread... no checking whether g++ also accepts flags for thread support... no checking for ANSI C header files... (cached) yes checking asm/types.h usability... no checking asm/types.h presence... no checking for asm/types.h... no checking conio.h usability... no checking conio.h presence... no checking for conio.h... no checking direct.h usability... no checking direct.h presence... no checking for direct.h... 
no checking dlfcn.h usability... no checking dlfcn.h presence... yes configure: WARNING: dlfcn.h: present but cannot be compiled configure: WARNING: dlfcn.h: check for missing prerequisite headers? configure: WARNING: dlfcn.h: see the Autoconf documentation configure: WARNING: dlfcn.h: section "Present But Cannot Be Compiled" configure: WARNING: dlfcn.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for dlfcn.h... no checking errno.h usability... no checking errno.h presence... yes configure: WARNING: errno.h: present but cannot be compiled configure: WARNING: errno.h: check for missing prerequisite headers? configure: WARNING: errno.h: see the Autoconf documentation configure: WARNING: errno.h: section "Present But Cannot Be Compiled" configure: WARNING: errno.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for errno.h... no checking fcntl.h usability... no checking fcntl.h presence... yes configure: WARNING: fcntl.h: present but cannot be compiled configure: WARNING: fcntl.h: check for missing prerequisite headers? configure: WARNING: fcntl.h: see the Autoconf documentation configure: WARNING: fcntl.h: section "Present But Cannot Be Compiled" configure: WARNING: fcntl.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for fcntl.h... no checking grp.h usability... no checking grp.h presence... 
yes configure: WARNING: grp.h: present but cannot be compiled configure: WARNING: grp.h: check for missing prerequisite headers? configure: WARNING: grp.h: see the Autoconf documentation configure: WARNING: grp.h: section "Present But Cannot Be Compiled" configure: WARNING: grp.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for grp.h... no checking ieeefp.h usability... no checking ieeefp.h presence... no checking for ieeefp.h... no checking io.h usability... no checking io.h presence... no checking for io.h... no checking langinfo.h usability... no checking langinfo.h presence... yes configure: WARNING: langinfo.h: present but cannot be compiled configure: WARNING: langinfo.h: check for missing prerequisite headers? configure: WARNING: langinfo.h: see the Autoconf documentation configure: WARNING: langinfo.h: section "Present But Cannot Be Compiled" configure: WARNING: langinfo.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for langinfo.h... no checking libintl.h usability... no checking libintl.h presence... no checking for libintl.h... no checking process.h usability... no checking process.h presence... no checking for process.h... no checking pthread.h usability... no checking pthread.h presence... yes configure: WARNING: pthread.h: present but cannot be compiled configure: WARNING: pthread.h: check for missing prerequisite headers? 
configure: WARNING: pthread.h: see the Autoconf documentation configure: WARNING: pthread.h: section "Present But Cannot Be Compiled" configure: WARNING: pthread.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for pthread.h... no checking sched.h usability... no checking sched.h presence... yes configure: WARNING: sched.h: present but cannot be compiled configure: WARNING: sched.h: check for missing prerequisite headers? configure: WARNING: sched.h: see the Autoconf documentation configure: WARNING: sched.h: section "Present But Cannot Be Compiled" configure: WARNING: sched.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sched.h... no checking shadow.h usability... no checking shadow.h presence... no checking for shadow.h... no checking signal.h usability... no checking signal.h presence... yes configure: WARNING: signal.h: present but cannot be compiled configure: WARNING: signal.h: check for missing prerequisite headers? configure: WARNING: signal.h: see the Autoconf documentation configure: WARNING: signal.h: section "Present But Cannot Be Compiled" configure: WARNING: signal.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for signal.h... no checking for stdint.h... (cached) yes checking stropts.h usability... no checking stropts.h presence... no checking for stropts.h... no checking termios.h usability... no checking termios.h presence... 
yes configure: WARNING: termios.h: present but cannot be compiled configure: WARNING: termios.h: check for missing prerequisite headers? configure: WARNING: termios.h: see the Autoconf documentation configure: WARNING: termios.h: section "Present But Cannot Be Compiled" configure: WARNING: termios.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for termios.h... no checking for unistd.h... (cached) yes checking utime.h usability... no checking utime.h presence... yes configure: WARNING: utime.h: present but cannot be compiled configure: WARNING: utime.h: check for missing prerequisite headers? configure: WARNING: utime.h: see the Autoconf documentation configure: WARNING: utime.h: section "Present But Cannot Be Compiled" configure: WARNING: utime.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for utime.h... no checking poll.h usability... no checking poll.h presence... yes configure: WARNING: poll.h: present but cannot be compiled configure: WARNING: poll.h: check for missing prerequisite headers? configure: WARNING: poll.h: see the Autoconf documentation configure: WARNING: poll.h: section "Present But Cannot Be Compiled" configure: WARNING: poll.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for poll.h... no checking sys/devpoll.h usability... no checking sys/devpoll.h presence... no checking for sys/devpoll.h... no checking sys/epoll.h usability... 
no checking sys/epoll.h presence... no checking for sys/epoll.h... no checking sys/poll.h usability... no checking sys/poll.h presence... yes configure: WARNING: sys/poll.h: present but cannot be compiled configure: WARNING: sys/poll.h: check for missing prerequisite headers? configure: WARNING: sys/poll.h: see the Autoconf documentation configure: WARNING: sys/poll.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/poll.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/poll.h... no checking sys/audioio.h usability... no checking sys/audioio.h presence... no checking for sys/audioio.h... no checking sys/xattr.h usability... no checking sys/xattr.h presence... yes configure: WARNING: sys/xattr.h: present but cannot be compiled configure: WARNING: sys/xattr.h: check for missing prerequisite headers? configure: WARNING: sys/xattr.h: see the Autoconf documentation configure: WARNING: sys/xattr.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/xattr.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/xattr.h... no checking sys/bsdtty.h usability... no checking sys/bsdtty.h presence... no checking for sys/bsdtty.h... no checking sys/event.h usability... no checking sys/event.h presence... yes configure: WARNING: sys/event.h: present but cannot be compiled configure: WARNING: sys/event.h: check for missing prerequisite headers? 
configure: WARNING: sys/event.h: see the Autoconf documentation configure: WARNING: sys/event.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/event.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/event.h... no checking sys/file.h usability... no checking sys/file.h presence... yes configure: WARNING: sys/file.h: present but cannot be compiled configure: WARNING: sys/file.h: check for missing prerequisite headers? configure: WARNING: sys/file.h: see the Autoconf documentation configure: WARNING: sys/file.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/file.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/file.h... no checking sys/ioctl.h usability... no checking sys/ioctl.h presence... yes configure: WARNING: sys/ioctl.h: present but cannot be compiled configure: WARNING: sys/ioctl.h: check for missing prerequisite headers? configure: WARNING: sys/ioctl.h: see the Autoconf documentation configure: WARNING: sys/ioctl.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/ioctl.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/ioctl.h... no checking sys/kern_control.h usability... no checking sys/kern_control.h presence... yes configure: WARNING: sys/kern_control.h: present but cannot be compiled configure: WARNING: sys/kern_control.h: check for missing prerequisite headers? 
configure: WARNING: sys/kern_control.h: see the Autoconf documentation configure: WARNING: sys/kern_control.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/kern_control.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/kern_control.h... no checking sys/loadavg.h usability... no checking sys/loadavg.h presence... no checking for sys/loadavg.h... no checking sys/lock.h usability... no checking sys/lock.h presence... yes configure: WARNING: sys/lock.h: present but cannot be compiled configure: WARNING: sys/lock.h: check for missing prerequisite headers? configure: WARNING: sys/lock.h: see the Autoconf documentation configure: WARNING: sys/lock.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/lock.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/lock.h... no checking sys/mkdev.h usability... no checking sys/mkdev.h presence... no checking for sys/mkdev.h... no checking sys/modem.h usability... no checking sys/modem.h presence... no checking for sys/modem.h... no checking sys/param.h usability... no checking sys/param.h presence... yes configure: WARNING: sys/param.h: present but cannot be compiled configure: WARNING: sys/param.h: check for missing prerequisite headers? 
configure: WARNING: sys/param.h: see the Autoconf documentation configure: WARNING: sys/param.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/param.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/param.h... no checking sys/select.h usability... no checking sys/select.h presence... yes configure: WARNING: sys/select.h: present but cannot be compiled configure: WARNING: sys/select.h: check for missing prerequisite headers? configure: WARNING: sys/select.h: see the Autoconf documentation configure: WARNING: sys/select.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/select.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/select.h... no checking sys/sendfile.h usability... no checking sys/sendfile.h presence... no checking for sys/sendfile.h... no checking sys/socket.h usability... no checking sys/socket.h presence... yes configure: WARNING: sys/socket.h: present but cannot be compiled configure: WARNING: sys/socket.h: check for missing prerequisite headers? configure: WARNING: sys/socket.h: see the Autoconf documentation configure: WARNING: sys/socket.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/socket.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/socket.h... no checking sys/statvfs.h usability... no checking sys/statvfs.h presence... 
yes configure: WARNING: sys/statvfs.h: present but cannot be compiled configure: WARNING: sys/statvfs.h: check for missing prerequisite headers? configure: WARNING: sys/statvfs.h: see the Autoconf documentation configure: WARNING: sys/statvfs.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/statvfs.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/statvfs.h... no checking for sys/stat.h... (cached) yes checking sys/syscall.h usability... no checking sys/syscall.h presence... yes configure: WARNING: sys/syscall.h: present but cannot be compiled configure: WARNING: sys/syscall.h: check for missing prerequisite headers? configure: WARNING: sys/syscall.h: see the Autoconf documentation configure: WARNING: sys/syscall.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/syscall.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/syscall.h... no checking sys/sys_domain.h usability... no checking sys/sys_domain.h presence... yes configure: WARNING: sys/sys_domain.h: present but cannot be compiled configure: WARNING: sys/sys_domain.h: check for missing prerequisite headers? configure: WARNING: sys/sys_domain.h: see the Autoconf documentation configure: WARNING: sys/sys_domain.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/sys_domain.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/sys_domain.h... 
no checking sys/termio.h usability... no checking sys/termio.h presence... no checking for sys/termio.h... no checking sys/time.h usability... no checking sys/time.h presence... yes configure: WARNING: sys/time.h: present but cannot be compiled configure: WARNING: sys/time.h: check for missing prerequisite headers? configure: WARNING: sys/time.h: see the Autoconf documentation configure: WARNING: sys/time.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/time.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/time.h... no checking sys/times.h usability... no checking sys/times.h presence... yes configure: WARNING: sys/times.h: present but cannot be compiled configure: WARNING: sys/times.h: check for missing prerequisite headers? configure: WARNING: sys/times.h: see the Autoconf documentation configure: WARNING: sys/times.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/times.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/times.h... no checking for sys/types.h... (cached) yes checking sys/uio.h usability... no checking sys/uio.h presence... yes configure: WARNING: sys/uio.h: present but cannot be compiled configure: WARNING: sys/uio.h: check for missing prerequisite headers? 
configure: WARNING: sys/uio.h: see the Autoconf documentation configure: WARNING: sys/uio.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/uio.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/uio.h... no checking sys/un.h usability... no checking sys/un.h presence... yes configure: WARNING: sys/un.h: present but cannot be compiled configure: WARNING: sys/un.h: check for missing prerequisite headers? configure: WARNING: sys/un.h: see the Autoconf documentation configure: WARNING: sys/un.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/un.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/un.h... no checking sys/utsname.h usability... no checking sys/utsname.h presence... yes configure: WARNING: sys/utsname.h: present but cannot be compiled configure: WARNING: sys/utsname.h: check for missing prerequisite headers? configure: WARNING: sys/utsname.h: see the Autoconf documentation configure: WARNING: sys/utsname.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/utsname.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/utsname.h... no checking sys/wait.h usability... no checking sys/wait.h presence... yes configure: WARNING: sys/wait.h: present but cannot be compiled configure: WARNING: sys/wait.h: check for missing prerequisite headers? 
configure: WARNING: sys/wait.h: see the Autoconf documentation configure: WARNING: sys/wait.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/wait.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/wait.h... no checking pty.h usability... no checking pty.h presence... no checking for pty.h... no checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking sys/resource.h usability... no checking sys/resource.h presence... yes configure: WARNING: sys/resource.h: present but cannot be compiled configure: WARNING: sys/resource.h: check for missing prerequisite headers? configure: WARNING: sys/resource.h: see the Autoconf documentation configure: WARNING: sys/resource.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/resource.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sys/resource.h... no checking netpacket/packet.h usability... no checking netpacket/packet.h presence... no checking for netpacket/packet.h... no checking sysexits.h usability... no checking sysexits.h presence... yes configure: WARNING: sysexits.h: present but cannot be compiled configure: WARNING: sysexits.h: check for missing prerequisite headers? 
configure: WARNING: sysexits.h: see the Autoconf documentation configure: WARNING: sysexits.h: section "Present But Cannot Be Compiled" configure: WARNING: sysexits.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for sysexits.h... no checking bluetooth.h usability... no checking bluetooth.h presence... no checking for bluetooth.h... no checking bluetooth/bluetooth.h usability... no checking bluetooth/bluetooth.h presence... no checking for bluetooth/bluetooth.h... no checking linux/tipc.h usability... no checking linux/tipc.h presence... no checking for linux/tipc.h... no checking spawn.h usability... no checking spawn.h presence... yes configure: WARNING: spawn.h: present but cannot be compiled configure: WARNING: spawn.h: check for missing prerequisite headers? configure: WARNING: spawn.h: see the Autoconf documentation configure: WARNING: spawn.h: section "Present But Cannot Be Compiled" configure: WARNING: spawn.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for spawn.h... no checking util.h usability... no checking util.h presence... yes configure: WARNING: util.h: present but cannot be compiled configure: WARNING: util.h: check for missing prerequisite headers? configure: WARNING: util.h: see the Autoconf documentation configure: WARNING: util.h: section "Present But Cannot Be Compiled" configure: WARNING: util.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for util.h... 
no checking alloca.h usability... no checking alloca.h presence... yes configure: WARNING: alloca.h: present but cannot be compiled configure: WARNING: alloca.h: check for missing prerequisite headers? configure: WARNING: alloca.h: see the Autoconf documentation configure: WARNING: alloca.h: section "Present But Cannot Be Compiled" configure: WARNING: alloca.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for alloca.h... no checking endian.h usability... no checking endian.h presence... no checking for endian.h... no checking sys/endian.h usability... no checking sys/endian.h presence... no checking for sys/endian.h... no checking for dirent.h that defines DIR... no checking for sys/ndir.h that defines DIR... no checking for sys/dir.h that defines DIR... no checking for ndir.h that defines DIR... no checking for library containing opendir... no checking whether sys/types.h defines makedev... no checking for sys/mkdev.h... (cached) no checking sys/sysmacros.h usability... no checking sys/sysmacros.h presence... no checking for sys/sysmacros.h... no checking for net/if.h... no checking for linux/netlink.h... no checking for linux/can.h... no checking for linux/can/raw.h... no checking for linux/can/bcm.h... no checking for clock_t in time.h... yes checking for makedev... no checking for le64toh... no checking Solaris LFS bug... yes checking for mode_t... no checking for off_t... no checking for pid_t... no checking for size_t... no checking for uid_t in sys/types.h... yes checking for uint32_t... no checking for uint32_t... no checking for uint64_t... no checking for uint64_t... no checking for int32_t... no checking for int32_t... no checking for int64_t... no checking for int64_t... no checking for ssize_t... no checking for __uint128_t... no checking size of int... 
0 checking size of long... 0 checking size of void *... 0 checking size of short... 0 checking size of float... 0 checking size of double... 0 checking size of fpos_t... 0 checking size of size_t... 0 checking size of pid_t... 0 checking for long long support... no checking for long double support... no checking for _Bool support... no checking for uintptr_t... no checking size of off_t... 0 checking whether to enable large file support... no checking size of time_t... 0 checking for pthread_t... no checking for --enable-framework... no checking for dyld... always on for Darwin checking the extension of shared libraries... .so checking LDSHARED... $(CC) -bundle -undefined dynamic_lookup checking CCSHARED... checking LINKFORSHARED... -Wl,-stack_size,1000000 -framework CoreFoundation checking CFLAGSFORSHARED... checking SHLIBS... $(LIBS) checking for sendfile in -lsendfile... no checking for dlopen in -ldl... no checking for shl_load in -ldld... no checking for RAND_egd in -lcrypto... no checking for library containing sem_init... no checking for textdomain in -lintl... no checking aligned memory access is required... yes checking for --with-hash-algorithm... default checking for --with-address-sanitizer... no checking for t_open in -lnsl... no checking for socket in -lsocket... no checking for --with-libs... no checking for pkg-config... /opt/local/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for --with-system-expat... no checking for --with-system-ffi... no checking for --with-system-libmpdec... no checking for --enable-loadable-sqlite-extensions... no checking for --with-tcltk-includes... default checking for --with-tcltk-libs... default checking for --with-dbmliborder... checking for --with-signal-module... yes checking for --with-threads... yes checking for _POSIX_THREADS in unistd.h... yes checking for pthread_create in -lpthread... checking for pthread_detach... no checking for pthread_create in -lpthreads... 
no checking for pthread_create in -lc_r... no checking for __pthread_create_system in -lpthread... no checking for pthread_create in -lcma... no checking for usconfig in -lmpc... no checking for thr_create in -lthread... no checking if --enable-ipv6 is specified... no checking for CAN_RAW_FD_FRAMES... no checking for OSX 10.5 SDK or later... no checking for --with-doc-strings... yes checking for --with-tsc... no checking for --with-pymalloc... yes checking for --with-valgrind... no checking for dlopen... no checking DYNLOADFILE... dynload_stub.o checking MACHDEP_OBJS... none checking for alarm... no checking for accept4... no checking for setitimer... no checking for getitimer... no checking for bind_textdomain_codeset... no checking for chown... no checking for clock... no checking for confstr... no checking for ctermid... no checking for dup3... no checking for execv... no checking for faccessat... no checking for fchmod... no checking for fchmodat... no checking for fchown... no checking for fchownat... no checking for fexecve... no checking for fdopendir... no checking for fork... no checking for fpathconf... no checking for fstatat... no checking for ftime... no checking for ftruncate... no checking for futimesat... no checking for futimens... no checking for futimes... no checking for gai_strerror... no checking for getentropy... no checking for getgrouplist... no checking for getgroups... no checking for getlogin... no checking for getloadavg... no checking for getpeername... no checking for getpgid... no checking for getpid... no checking for getpriority... no checking for getresuid... no checking for getresgid... no checking for getpwent... no checking for getspnam... no checking for getspent... no checking for getsid... no checking for getwd... no checking for if_nameindex... no checking for initgroups... no checking for kill... no checking for killpg... no checking for lchmod... no checking for lchown... no checking for lockf... no checking for linkat... 
no checking for lstat... no checking for lutimes... no checking for mmap... no checking for memrchr... no checking for mbrtowc... no checking for mkdirat... no checking for mkfifo... no checking for mkfifoat... no checking for mknod... no checking for mknodat... no checking for mktime... no checking for mremap... no checking for nice... no checking for openat... no checking for pathconf... no checking for pause... no checking for pipe2... no checking for plock... no checking for poll... no checking for posix_fallocate... no checking for posix_fadvise... no checking for pread... no checking for pthread_init... no checking for pthread_kill... no checking for putenv... no checking for pwrite... no checking for readlink... no checking for readlinkat... no checking for readv... no checking for realpath... no checking for renameat... no checking for select... no checking for sem_open... no checking for sem_timedwait... no checking for sem_getvalue... no checking for sem_unlink... no checking for sendfile... no checking for setegid... no checking for seteuid... no checking for setgid... no checking for sethostname... no checking for setlocale... no checking for setregid... no checking for setreuid... no checking for setresuid... no checking for setresgid... no checking for setsid... no checking for setpgid... no checking for setpgrp... no checking for setpriority... no checking for setuid... no checking for setvbuf... no checking for sched_get_priority_max... no checking for sched_setaffinity... no checking for sched_setscheduler... no checking for sched_setparam... no checking for sched_rr_get_interval... no checking for sigaction... no checking for sigaltstack... no checking for siginterrupt... no checking for sigpending... no checking for sigrelse... no checking for sigtimedwait... no checking for sigwait... no checking for sigwaitinfo... no checking for snprintf... no checking for strftime... no checking for strlcpy... no checking for symlinkat... 
no checking for sync... no checking for sysconf... no checking for tcgetpgrp... no checking for tcsetpgrp... no checking for tempnam... no checking for timegm... no checking for times... no checking for tmpfile... no checking for tmpnam... no checking for tmpnam_r... no checking for truncate... no checking for uname... no checking for unlinkat... no checking for unsetenv... no checking for utimensat... no checking for utimes... no checking for waitid... no checking for waitpid... no checking for wait3... no checking for wait4... no checking for wcscoll... no checking for wcsftime... no checking for wcsxfrm... no checking for wmemcmp... no checking for writev... no checking for _getpty... no checking whether dirfd is declared... no checking for chroot... no checking for link... no checking for symlink... no checking for fchdir... no checking for fsync... no checking for fdatasync... no checking for epoll... no checking for epoll_create1... no checking for kqueue... no checking for prlimit... no checking for ctermid_r... no checking for flock declaration... no checking for getpagesize... no checking for broken unsetenv... yes checking for true... true checking for inet_aton in -lc... no checking for inet_aton in -lresolv... no checking for chflags... no checking for lchflags... no checking for inflateCopy in -lz... no checking for hstrerror... no checking for inet_aton... no checking for inet_pton... no checking for setgroups... no checking for openpty... no checking for openpty in -lutil... no checking for openpty in -lbsd... no checking for forkpty... no checking for forkpty in -lutil... no checking for forkpty in -lbsd... no checking for memmove... no checking for fseek64... no checking for fseeko... no checking for fstatvfs... no checking for ftell64... no checking for ftello... no checking for statvfs... no checking for dup2... no checking for strdup... no checking for getpgrp... no checking for setpgrp... (cached) no checking for gettimeofday... 
no checking for clock_gettime... no checking for clock_gettime in -lrt... no checking for clock_getres... no checking for clock_getres in -lrt... no checking for major... no checking for getaddrinfo... no checking for getnameinfo... no checking whether time.h and sys/time.h may both be included... no checking whether struct tm is in sys/time.h or time.h... sys/time.h checking for struct tm.tm_zone... no checking whether tzname is declared... no checking for tzname... no checking for struct stat.st_rdev... no checking for struct stat.st_blksize... no checking for struct stat.st_flags... no checking for struct stat.st_gen... no checking for struct stat.st_birthtime... no checking for struct stat.st_blocks... no checking for time.h that defines altzone... no checking whether sys/select.h and sys/time.h may both be included... no checking for addrinfo... no checking for sockaddr_storage... no checking whether char is unsigned... yes checking for an ANSI C-conforming const... no checking for working volatile... no checking for working signed char... no checking for prototypes... no checking for variable length prototypes and stdarg.h... no checking for socketpair... no checking if sockaddr has sa_len member... no checking whether va_list is an array... yes checking for gethostbyname_r... no checking for gethostbyname... no checking for __fpu_control... no checking for __fpu_control in -lieee... no checking for --with-fpectl... no checking for --with-libm=STRING... default LIBM="" checking for --with-libc=STRING... default LIBC="" checking for x64 gcc inline assembler... no checking whether C doubles are little-endian IEEE 754 binary64... no checking whether C doubles are big-endian IEEE 754 binary64... no checking whether C doubles are ARM mixed-endian IEEE 754 binary64... no checking whether we can use gcc inline assembler to get and set x87 control word... no checking whether we can use gcc inline assembler to get and set mc68881 fpcr... 
no checking for x87-style double rounding... yes checking for acosh... no checking for asinh... no checking for atanh... no checking for copysign... no checking for erf... no checking for erfc... no checking for expm1... no checking for finite... no checking for gamma... no checking for hypot... no checking for lgamma... no checking for log1p... no checking for log2... no checking for round... no checking for tgamma... no checking whether isinf is declared... no checking whether isnan is declared... no checking whether isfinite is declared... no checking whether tanh preserves the sign of zero... no checking whether POSIX semaphores are enabled... no checking for broken sem_getvalue... yes checking digit size for Python's longs... no value specified checking wchar.h usability... no checking wchar.h presence... yes configure: WARNING: wchar.h: present but cannot be compiled configure: WARNING: wchar.h: check for missing prerequisite headers? configure: WARNING: wchar.h: see the Autoconf documentation configure: WARNING: wchar.h: section "Present But Cannot Be Compiled" configure: WARNING: wchar.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for wchar.h... no checking for UCS-4 tcl... no /Users/David/OpenSource/Python-3.5.0/configure: line 14108: test: : integer expression expected no usable wchar_t found checking whether byte ordering is bigendian... yes checking ABIFLAGS... m checking SOABI... cpython-35m-darwin checking LDVERSION... $(VERSION)$(ABIFLAGS) checking whether right shift extends the sign bit... no checking for getc_unlocked() and friends... no checking how to link readline libs... none checking for rl_callback_handler_install in -lreadline... no checking for rl_pre_input_hook in -lreadline... 
no checking for rl_completion_display_matches_hook in -lreadline... no checking for rl_completion_matches in -lreadline... no checking for append_history in -lreadline... no checking for broken nice()... no checking for broken poll()... no checking for struct tm.tm_zone... (cached) no checking whether tzname is declared... (cached) no checking for tzname... (cached) no checking for working tzset()... no checking for tv_nsec in struct stat... no checking for tv_nsec2 in struct stat... no checking curses.h usability... no checking curses.h presence... yes configure: WARNING: curses.h: present but cannot be compiled configure: WARNING: curses.h: check for missing prerequisite headers? configure: WARNING: curses.h: see the Autoconf documentation configure: WARNING: curses.h: section "Present But Cannot Be Compiled" configure: WARNING: curses.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for curses.h... no checking ncurses.h usability... no checking ncurses.h presence... yes configure: WARNING: ncurses.h: present but cannot be compiled configure: WARNING: ncurses.h: check for missing prerequisite headers? configure: WARNING: ncurses.h: see the Autoconf documentation configure: WARNING: ncurses.h: section "Present But Cannot Be Compiled" configure: WARNING: ncurses.h: proceeding with the compiler's result configure: WARNING: ## -------------------------------------- ## configure: WARNING: ## Report this to http://bugs.python.org/ ## configure: WARNING: ## -------------------------------------- ## checking for ncurses.h... no checking for term.h... no checking whether mvwdelch is an expression... no checking whether WINDOW has _flags... no checking for is_term_resized... no checking for resize_term... no checking for resizeterm... 
no configure: checking for device files checking for /dev/ptmx... yes checking for /dev/ptc... no checking for %zd printf() format support... no checking for socklen_t... no checking for broken mbstowcs... yes checking for --with-computed-gotos... no value specified checking whether gcc supports computed gotos... no checking for build directories... done checking for -O2... yes checking for glibc _FORTIFY_SOURCE/memmove bug... yes checking for stdatomic.h... no checking for GCC >= 4.7 __atomic builtins... no checking for ensurepip... upgrade checking if the dirent structure of a d_type field... no checking for the Linux getrandom() syscall... no configure: creating ./config.status config.status: creating Makefile.pre config.status: creating Modules/Setup.config config.status: creating Misc/python.pc config.status: creating Misc/python-config.sh config.status: creating Modules/ld_so_aix config.status: creating pyconfig.h creating Modules/Setup creating Modules/Setup.local creating Makefile ---------- components: Macintosh messages: 259396 nosy: davidjamesbeck, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: errors during static build on OSX type: compile error versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 11:27:31 2016 From: report at bugs.python.org (Ethan Furman) Date: Tue, 02 Feb 2016 16:27:31 +0000 Subject: [New-bugs-announce] [issue26266] add classattribute to enum to handle non-Enum attributes Message-ID: <1454430451.09.0.888937739305.issue26266@psf.upfronthosting.co.za> New submission from Ethan Furman: The rules for what objects in an Enum become members and which do not are fairly straight-forward:

- __double_underscore__ do not (but is reserved for Python)
- _single_underscore_ do not (but is reserved for Enum itself)
- any descriptored object (such as functions) do not

Which means the proper way to add
constants/attributes to an Enum is to write a descriptor; but most folks don't think about that, and when the Enum is not working properly they (okay, and me :/ ) just add the double-underscore. This question has already come up a couple of times on StackOverflow:

- http://stackoverflow.com/q/17911188/208880
- http://stackoverflow.com/q/34465739/208880

While this doesn't come up very often, that just means it is even more likely that the attribute will be __double_underscored__ instead of descriptored. The solution is to have a descriptor in the Enum module for this case. While it would be possible to have several (constant-unless-mutable, constant-even-if-mutable, not-constant, possibly others), I think the not-constant one would be sufficient (aka a writable property), although I am not opposed to a constant-unless-mutable version as well. The not-constant version would look like this (I'll attach a patch later):

    class classattribute:

        def __init__(self, value):
            self.value = value

        def __get__(self, *args):
            return self.value

        def __set__(self, instance, value):
            self.value = value

        def __repr__(self):
            return '%s(%r)' % (self.__class__.__name__, self.value)

---------- assignee: ethan.furman messages: 259399 nosy: barry, eli.bendersky, ethan.furman priority: normal severity: normal status: open title: add classattribute to enum to handle non-Enum attributes type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 13:43:09 2016 From: report at bugs.python.org (Andrew Barnert) Date: Tue, 02 Feb 2016 18:43:09 +0000 Subject: [New-bugs-announce] [issue26267] UUID docs should say how to get "standard form" Message-ID: <1454438589.41.0.991764109365.issue26267@psf.upfronthosting.co.za> New submission from Andrew Barnert: Most real-world code that uses the UUID module wants either the standard format '{12345678-1234-5678-1234-567812345678}', or the same thing without the braces.
There are a number of different documented accessors, but none of them give you either of these. The simplest way I can think of to guarantee the standard format from what's documented is '{%08x-%04x-%04x-%02x%02x-%012x}' % u.fields. It might be nice to add accessors for standard form and braceless standard form, but that probably isn't necessary--as long as there's documentation saying that __str__ returns the braceless standard form. The example code does say that, but I don't think people can trust that a comment in an example is binding documentation--plus, plenty of people don't read the examples looking for more information about things that aren't documented. And I've seen people come up with buggy versions of the format string that miss leading zeros, or horrible things like repr(u)[6:42]. ---------- assignee: docs at python components: Documentation messages: 259414 nosy: abarnert, docs at python priority: normal severity: normal status: open title: UUID docs should say how to get "standard form" type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 14:24:17 2016 From: report at bugs.python.org (Zachary Ware) Date: Tue, 02 Feb 2016 19:24:17 +0000 Subject: [New-bugs-announce] [issue26268] Update python.org installers to use OpenSSL 1.0.2f Message-ID: <1454441057.55.0.0354413518189.issue26268@psf.upfronthosting.co.za> New submission from Zachary Ware: http://openssl.org/news/secadv/20160128.txt ---------- components: Build, Macintosh, Windows messages: 259422 nosy: ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Update python.org installers to use OpenSSL 1.0.2f versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 16:44:58 2016 From:
report at bugs.python.org (Patrik Dufresne) Date: Tue, 02 Feb 2016 21:44:58 +0000 Subject: [New-bugs-announce] [issue26269] zipfile should call lstat instead of stat if available Message-ID: <1454449498.64.0.544045383692.issue26269@psf.upfronthosting.co.za> New submission from Patrik Dufresne: To mirror the behavior in tarfile, zipfile should make a call to stat instead of lstat if available. Failure to do so resolved into an "IOError No such file or directory" if the filename is a symbolik link being broken. As such, the creation of the archive fail. ---------- components: Library (Lib) messages: 259436 nosy: Patrik Dufresne priority: normal severity: normal status: open title: zipfile should call lstat instead of stat if available versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 17:20:19 2016 From: report at bugs.python.org (Paulo Costa) Date: Tue, 02 Feb 2016 22:20:19 +0000 Subject: [New-bugs-announce] [issue26270] Support for read()/write()/select() on asyncio Message-ID: <1454451619.12.0.0280470843365.issue26270@psf.upfronthosting.co.za> New submission from Paulo Costa: I want to read from file descriptors from async coroutines. I currently use `loop.add_reader(fd, callback)` and `loop.remove_reader(fd)` It works, but IMO using callbacks feels completely alien in the middle of a coroutine. I suggest adding new APIs to handle this. e.g.: - asyncio.select(fd, mode) - asyncio.read(fd, num) - asyncio.write(fd, str) (It would be nice to support all kinds of IO operations, but these are certainly the most important) Using the current APIs, the async implementation of read() looks like this: async def async_read(fd. 
n):
    loop = asyncio.get_event_loop()
    future = asyncio.Future()
    def ready():
        future.set_result(os.read(fd, n))
        loop.remove_reader(fd)
    loop.add_reader(fd, ready)
    return future
---------- components: asyncio messages: 259438 nosy: Paulo Costa, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: Support for read()/write()/select() on asyncio type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 2 18:02:15 2016 From: report at bugs.python.org (Daniel Shaulov) Date: Tue, 02 Feb 2016 23:02:15 +0000 Subject: [New-bugs-announce] [issue26271] freeze.py makefile uses the wrong flags variables Message-ID: <1454454135.77.0.499921177416.issue26271@psf.upfronthosting.co.za> New submission from Daniel Shaulov: Tools/Freeze/makemakefile.py uses CFLAGS, LDFLAGS and CPPFLAGS instead of the PY_ prefixed versions. This makes flags passed to ./configure ineffective. The patch makes the trivial fix of adding PY_ when needed. ---------- components: Build files: pyflags.patch keywords: patch messages: 259444 nosy: Daniel Shaulov priority: normal severity: normal status: open title: freeze.py makefile uses the wrong flags variables type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41785/pyflags.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 03:48:27 2016 From: report at bugs.python.org (Pengyu Chen) Date: Wed, 03 Feb 2016 08:48:27 +0000 Subject: [New-bugs-announce] [issue26272] `zipfile.ZipFile` fails reading a file object in specific version(s) Message-ID: <1454489307.52.0.490426016128.issue26272@psf.upfronthosting.co.za> New submission from Pengyu Chen: When passing a file object to `zipfile.ZipFile` for reading, it fails in Python 3.5.1, while the same code works in Python 2.7.11.
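[Editor's note: a plausible cause, not stated in the report, is that in Python 3 the file object handed to ZipFile must be opened in binary mode ('rb'); a text-mode handle yields str where zipfile's end-of-archive scan expects bytes. A self-contained sketch of the working binary path:]

```python
import io
import zipfile

# Guess at the cause (not confirmed in the report): ZipFile needs a
# binary-mode file object in Python 3.  Here a BytesIO stands in for a
# file opened with open('testfile.zip', 'rb').
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("testfile", "test message\n")

buf.seek(0)
with zipfile.ZipFile(buf, "r") as z:  # binary file object: works
    print(z.namelist())  # ['testfile']
```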
Also loading that specific file directly from a file path works in Python 3.5.1. (Please see the sample testing code below for details) [pengyu at GLaDOS tmp]$ echo "test message" > testfile [pengyu at GLaDOS tmp]$ zip testfile.zip testfile updating: testfile (stored 0%) [pengyu at GLaDOS tmp]$ file testfile.zip testfile.zip: Zip archive data, at least v1.0 to extract [pengyu at GLaDOS tmp]$ python -c "a = open('testfile.zip'); import zipfile; b = zipfile.ZipFile(a, 'r')" Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.5/zipfile.py", line 1026, in __init__ self._RealGetContents() File "/usr/lib/python3.5/zipfile.py", line 1093, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file [pengyu at GLaDOS tmp]$ python2 -c "a = open('testfile.zip'); import zipfile; b = zipfile.ZipFile(a, 'r')" [pengyu at GLaDOS tmp]$ python -c "import zipfile; b = zipfile.ZipFile('testfile.zip', 'r')" [pengyu at GLaDOS tmp]$ python --version Python 3.5.1 [pengyu at GLaDOS tmp]$ python2 --version Python 2.7.11 ---------- components: Library (Lib) messages: 259459 nosy: Pengyu Chen priority: normal severity: normal status: open title: `zipfile.ZipFile` fails reading a file object in specific version(s) type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 03:54:44 2016 From: report at bugs.python.org (Omar Sandoval) Date: Wed, 03 Feb 2016 08:54:44 +0000 Subject: [New-bugs-announce] [issue26273] Expose TCP_CONGESTION and TCP_USER_TIMEOUT to the socket module Message-ID: <1454489684.49.0.598051421532.issue26273@psf.upfronthosting.co.za> New submission from Omar Sandoval: The socket module is missing a couple of TCP socket options: TCP_CONGESTION was added to Linux in v2.6.13 and TCP_USER_TIMEOUT was added in v2.6.37. These should be exposed.
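[Editor's note: once exposed, the option would be used like any other socket option. A hedged sketch follows; the fallback value 13 is TCP_CONGESTION from Linux's headers and is an assumption on non-Linux systems, where the call is expected to fail and is guarded.]

```python
import socket

# Use socket.TCP_CONGESTION if this Python already exposes it (as the
# patch proposes); otherwise fall back to the Linux header value 13.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

def current_congestion_algorithm():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # The buffer form of getsockopt returns the algorithm name as a
        # NUL-padded byte string, e.g. b'cubic\x00...'.
        raw = s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode(errors="replace")
    except OSError:
        return None  # option unsupported on this platform
    finally:
        s.close()

print(current_congestion_algorithm())
```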
---------- components: Library (Lib) files: socket_tcp_options.patch keywords: patch messages: 259460 nosy: Omar Sandoval priority: normal severity: normal status: open title: Expose TCP_CONGESTION and TCP_USER_TIMEOUT to the socket module type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41787/socket_tcp_options.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 04:36:08 2016 From: report at bugs.python.org (Florin Papa) Date: Wed, 03 Feb 2016 09:36:08 +0000 Subject: [New-bugs-announce] [issue26274] Add CPU affinity to perf.py Message-ID: <1454492168.34.0.115069633072.issue26274@psf.upfronthosting.co.za> New submission from Florin Papa: Hi all, This is Florin Papa from the Dynamic Scripting Languages Optimizations Team from Intel Corporation. The patch submitted adds an affinity feature to the Grand Unified Python Benchmarks suite to allow running benchmarks on a given CPU/set of CPUs. It is implemented for Linux and uses the taskset command to bond a command to the CPUs specified. This minimizes run to run variation, as we can get considerable differences in measured performance from running a benchmark on different cores. The taskset command receives a CPU mask that specifies which cores in the system will be used for the command. 
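[Editor's note: for reference, the pinning that taskset provides can also be requested from inside Python on Linux via os.sched_setaffinity; this is a standalone sketch, independent of the patch, and not what perf.py itself does.]

```python
import os

# Pin the current process to CPU 0, mirroring "taskset 0x1 <cmd>".
# os.sched_setaffinity/os.sched_getaffinity exist only on Linux, so the
# calls are guarded for portability.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})    # mask 0x1 -> processor #0
    print(os.sched_getaffinity(0))  # {0}
else:
    print("CPU affinity not supported on this platform")
```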
Example: python perf.py --affinity=0x1 will use processor #0 python perf.py --affinity=0x3 will use processors #0 and #1 Thank you, Florin ---------- components: Benchmarks files: affinity.patch keywords: patch messages: 259464 nosy: brett.cannon, florin.papa, pitrou priority: normal severity: normal status: open title: Add CPU affinity to perf.py versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file41788/affinity.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 05:38:49 2016 From: report at bugs.python.org (STINNER Victor) Date: Wed, 03 Feb 2016 10:38:49 +0000 Subject: [New-bugs-announce] [issue26275] perf.py: bm_regex_v8 doesn't seem reliable even with --rigorous Message-ID: <1454495929.85.0.601853644385.issue26275@psf.upfronthosting.co.za> New submission from STINNER Victor: Hi, I'm working on some optimization projects like FAT Python (PEP 509: issue #26058, PEP 510: issue #26098, and PEP 511: issue #26145) and faster memory allocators (issue #26249).
I have the *feeling* that perf.py output is not reliable even if it takes more than 20 minutes :-/ Maybe because Yury told that I must use -r (--rigorous) :-) Example with 5 runs of "python3 perf.py ../default/python ../default/python.orig -b regex_v8": --------------- Report on Linux smithers 4.3.3-300.fc23.x86_64 #1 SMP Tue Jan 5 23:31:01 UTC 2016 x86_64 x86_64 Total CPU cores: 8 ### regex_v8 ### Min: 0.043237 -> 0.050196: 1.16x slower Avg: 0.043714 -> 0.050574: 1.16x slower Significant (t=-19.83) Stddev: 0.00171 -> 0.00174: 1.0178x larger ### regex_v8 ### Min: 0.042774 -> 0.051420: 1.20x slower Avg: 0.043843 -> 0.051874: 1.18x slower Significant (t=-14.46) Stddev: 0.00351 -> 0.00176: 2.0009x smaller ### regex_v8 ### Min: 0.042673 -> 0.048870: 1.15x slower Avg: 0.043726 -> 0.050474: 1.15x slower Significant (t=-8.74) Stddev: 0.00283 -> 0.00467: 1.6513x larger ### regex_v8 ### Min: 0.044029 -> 0.049445: 1.12x slower Avg: 0.044564 -> 0.049971: 1.12x slower Significant (t=-13.97) Stddev: 0.00175 -> 0.00211: 1.2073x larger ### regex_v8 ### Min: 0.042692 -> 0.049084: 1.15x slower Avg: 0.044295 -> 0.050725: 1.15x slower Significant (t=-7.00) Stddev: 0.00421 -> 0.00494: 1.1745x larger --------------- I'm only care of the "Min", IMHO it's the most interesting information here. The slowdown is betwen 12% and 20%, for me it's a big difference. It looks like some benchmarks have very short iterations compare to others. For example, bm_json_v2 takes around 3 seconds, whereas bm_regex_v8 only takes less than 0.050 second (50 ms). $ python3 performance/bm_json_v2.py -n 3 --timer perf_counter 3.310384973010514 3.3116717970115133 3.3077902760123834 $ python3 performance/bm_regex_v8.py -n 3 --timer perf_counter 0.0670697659952566 0.04515827298746444 0.045114840992027894 Do you think that bm_regex_v8 is reliable? I see that there is an "iteration scaling" to use run the benchmarks with more iterations. Maybe we can start to increase the "iteration scaling" for bm_regex_v8? 
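[Editor's note: one way to reduce this sensitivity is to make the iteration count adaptive instead of fixed. A hypothetical sketch of such a calibration loop; the names and thresholds are illustrative, not perf.py's actual code.]

```python
import time

def calibrate(func, min_time=0.1, max_cal_time=1.0):
    """Double the loop count until one timed batch of func() runs for at
    least min_time seconds; stop doubling once the next batch would
    likely exceed max_cal_time.  (Hypothetical sketch only.)"""
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        elapsed = time.perf_counter() - t0
        if elapsed >= min_time or elapsed * 2 >= max_cal_time:
            return loops, elapsed
        loops *= 2

loops, elapsed = calibrate(lambda: sum(range(1000)))
print(loops, elapsed)
```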
Instead of a fixed number of iterations, we should redesign benchmarks to use time. For example, one iteration must take at least 100 ms and should not take more than 1 second (but take longer to get more reliable results). Then the benchmark is responsible for adjusting its internal parameters. I used this design for my "benchmark.py" script which is written to get "reliable" microbenchmarks: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py?fileviewer=file-view-default The script is based on time and calibrates a benchmark. It also uses the *effective* resolution of the clock used by the benchmark to calibrate the benchmark. I will maybe work on such a patch, but it would be good to know first your opinion on such a change. I guess that we should use the base python to calibrate the benchmark and then pass the same parameters to the modified python. ---------- components: Benchmarks messages: 259469 nosy: brett.cannon, haypo, pitrou, yselivanov priority: normal severity: normal status: open title: perf.py: bm_regex_v8 doesn't seem reliable even with --rigorous type: performance versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 06:39:10 2016 From: report at bugs.python.org (Mark Shannon) Date: Wed, 03 Feb 2016 11:39:10 +0000 Subject: [New-bugs-announce] [issue26276] Inconsistent behaviour of PEP 3101 formatting between versions Message-ID: <1454499550.19.0.0724839464997.issue26276@psf.upfronthosting.co.za> New submission from Mark Shannon: In Python 2.7.6 and 3.2.3: >>> "{ {{ 0} }}".format(**{' {{ 0} }': 'X'}) 'X' In Python 3.4.3: >>> "{ {{ 0} }}".format(**{' {{ 0} }': 'X'}) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: unexpected '{' in field name I think the problem is that PEP 3101 is under-specified w.r.t. non-identifier characters in format units.
---------- components: Interpreter Core messages: 259473 nosy: Mark Shannon priority: normal severity: normal status: open title: Inconsistent behaviour of PEP 3101 formatting between versions type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 07:25:04 2016 From: report at bugs.python.org (flying sheep) Date: Wed, 03 Feb 2016 12:25:04 +0000 Subject: [New-bugs-announce] [issue26277] Allow zipapp to target modules Message-ID: <1454502304.42.0.453397075525.issue26277@psf.upfronthosting.co.za> New submission from flying sheep: currently, zipapp's notion of __main__.py is very different from the usual. the root directory of a zipapp is not a module, therefore __init__.py isn't used and relative imports from __main__.py don't work. i propose that the zipapp functionality is amended/changed to 1. allow more than one target directory (bundle multiple modules) 2. allow creation of a __main__.py that contains a runpy-powered module runner so maybe we could specify the module itself as entry point to get this behavior: zipapp -m my_module dependency.py my_module should create
my_module.pyz
├── __main__.py       (contains: runpy.run_module('my_module'))
├── dependency.py
└── my_module
    ├── __init__.py
    └── __main__.py   (contains: from . import ...)
or there would be another option to specify it.
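[Editor's note: a hedged sketch of the stub such a "zipapp -m" option might generate; its entire job would be to delegate to runpy. A throwaway module named demo_mod (hypothetical) is created so the snippet is self-contained.]

```python
import os
import runpy
import shutil
import sys
import tempfile

# Create a throwaway module standing in for the bundled "my_module".
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_mod.py"), "w") as f:
    f.write("GREETING = 'hello from ' + __name__\n")
sys.path.insert(0, d)

# The entire body of the generated __main__.py would be the next line:
globs = runpy.run_module("demo_mod", run_name="__main__")
print(globs["GREETING"])  # hello from __main__

sys.path.remove(d)
shutil.rmtree(d)
```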
---------- components: Extension Modules messages: 259476 nosy: flying sheep priority: normal severity: normal status: open title: Allow zipapp to target modules type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 10:57:36 2016 From: report at bugs.python.org (=?utf-8?q?S=C3=BCmer_Cip?=) Date: Wed, 03 Feb 2016 15:57:36 +0000 Subject: [New-bugs-announce] [issue26278] BaseTransport.close() does not trigger connection_lost() Message-ID: <1454515056.72.0.0290408111031.issue26278@psf.upfronthosting.co.za> New submission from Sümer Cip: Hi all, We have implemented a TCP server based on asyncio, and while doing some regression tests we randomly see the following error: 1) Client connects to the server. 2) Client is closed ungracefully (without sending a FIN, e.g. the cable is unplugged) 3) We have a custom PING handler that sends a PING and waits for a PONG message. 4) After a while, we time out on the PING and call close() on the Transport object. Now, most of the time the above just works fine, but at some point, somehow connection_lost() NEVER gets called even though we call close() on the socket. As this issue happens very randomly I don't have any asyncio logs for it. But can you think of any scenario that might lead to this? It seems we have outgoing data in the TCP buffer when this happens, which is why close() does not call connection_lost() immediately, but why it never calls it is a mystery to me. Could it be the following: 1) we call close() and is_closing is set to true; we have outgoing data so we return. 2) Then a subsequent write occurs, ConnectionResetError is raised and this calls _force_close(), but as we have previously set is_closing to True, connection_lost() does not get called.
Above is just a very trivial idea which is probably not the case; I did not spend too much time on the code. Thanks, ---------- components: asyncio messages: 259487 nosy: Sümer.Cip, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: BaseTransport.close() does not trigger connection_lost() type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 11:02:51 2016 From: report at bugs.python.org (Ioannis Aslanidis) Date: Wed, 03 Feb 2016 16:02:51 +0000 Subject: [New-bugs-announce] [issue26279] Possible bug in time library Message-ID: <1454515371.52.0.00863631935711.issue26279@psf.upfronthosting.co.za> New submission from Ioannis Aslanidis: There seems to be a bug in the time library when performing the following conversion: """ In [8]: mytime=time.strptime(str([2015, 53, 0]), '[%Y, %U, %w]') In [9]: mytime Out[9]: time.struct_time(tm_year=2016, tm_mon=1, tm_mday=3, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=6, tm_yday=368, tm_isdst=-1) In [10]: time.strftime('%Y %U', mytime) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ----> 1 time.strftime('%Y %U', mytime) ValueError: day of year out of range """ As you can observe, tm_yday got a value of 368 instead of 3. It seems that the C function is not properly subtracting 365 or 366 days when converting from week 53 of a particular year, even though such a week is valid according to ISO 8601 [1] and the Python documentation [2], which explicitly says: "The first week of the year is the week with the year's first Thursday in it".
[1] https://en.wikipedia.org/wiki/ISO_8601#Week_dates [2] https://docs.python.org/2/library/time.html ---------- components: Library (Lib) messages: 259488 nosy: iaslan priority: normal severity: normal status: open title: Possible bug in time library type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 12:02:18 2016 From: report at bugs.python.org (Yury Selivanov) Date: Wed, 03 Feb 2016 17:02:18 +0000 Subject: [New-bugs-announce] [issue26280] ceval: Optimize [] operation similarly to CPython 2.7 Message-ID: <1454518938.55.0.360626721568.issue26280@psf.upfronthosting.co.za> New submission from Yury Selivanov: See also issue #21955 ---------- components: Interpreter Core messages: 259492 nosy: haypo, yselivanov, zbyrne priority: normal severity: normal stage: needs patch status: open title: ceval: Optimize [] operation similarly to CPython 2.7 type: performance versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 16:11:00 2016 From: report at bugs.python.org (Brett Cannon) Date: Wed, 03 Feb 2016 21:11:00 +0000 Subject: [New-bugs-announce] [issue26281] Clear sys.path_importer_cache from importlib.invalidate_caches() Message-ID: <1454533860.47.0.453601441471.issue26281@psf.upfronthosting.co.za> New submission from Brett Cannon: Would it make sense to clear sys.path_importer_cache when someone calls importlib.invalidate_caches()? It is a cache after all. 
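[Editor's note: today that clearing has to be done by hand next to invalidate_caches(); a minimal sketch of the combined call the issue asks for.]

```python
import importlib
import sys

# What callers must do today: invalidate the finder caches AND clear
# the path importer cache themselves.  The proposal is for the first
# call to imply the second.
importlib.invalidate_caches()
sys.path_importer_cache.clear()
print(len(sys.path_importer_cache))  # 0
```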
---------- components: Library (Lib) messages: 259518 nosy: brett.cannon, eric.snow, ncoghlan priority: normal severity: normal stage: test needed status: open title: Clear sys.path_importer_cache from importlib.invalidate_caches() type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 16:44:16 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 03 Feb 2016 21:44:16 +0000 Subject: [New-bugs-announce] [issue26282] Add support for partial keyword arguments in extension functions Message-ID: <1454535856.96.0.444049158428.issue26282@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently extension functions either accept only positional-only arguments (PyArg_ParseTuple), or keyword arguments (PyArg_ParseTupleAndKeywords). While adding support passing by keywords looks good for some arguments, for other arguments it doesn't make much sense. For example "encoding" and "errors" arguments for str or "base" argument for int are examples of good keyword arguments, but it is hard to choose good name for the first argument. I suggest to allow to add the support of keyword arguments only for the part of arguments, while left other arguments positional-only. This issue consists from two stages: 1. Allow PyArg_ParseTupleAndKeywords to accept empty string "" as keywords and interpret this as positional-only argument. 2. Make Argument Clinic to generate code for partial keyword arguments. The syntax already supports this: "/" separates positional-only arguments from keyword arguments. 
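[Editor's note: the asymmetry is visible from pure Python today: int's constructor takes "base" by keyword, while many C-implemented functions such as str.startswith are positional-only.]

```python
# int() already accepts a keyword argument for its second parameter...
print(int("ff", base=16))  # 255

# ...but str.startswith, like many C-implemented functions, rejects
# keyword arguments entirely:
try:
    "x".startswith(prefix="x")
except TypeError as exc:
    print("TypeError:", exc)
```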
---------- messages: 259521 nosy: larry, martin.panter, serhiy.storchaka priority: normal severity: normal status: open title: Add support for partial keyword arguments in extension functions type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 3 21:19:28 2016 From: report at bugs.python.org (=?utf-8?b?5by15Lyv6Kqg?=) Date: Thu, 04 Feb 2016 02:19:28 +0000 Subject: [New-bugs-announce] [issue26283] zipfile can not handle the path build by os.path.join() Message-ID: <1454552368.89.0.734714115407.issue26283@psf.upfronthosting.co.za> New submission from 張伯誠: I think the built-in library zipfile.py does not correctly handle paths built by os.path.join(). For example, assuming we have a zipfile named Test.zip which contains a member xml/hello.xml, if you want to extract the member, then you have to use 'xml/hello.xml'; if using os.path.join('xml','hello.xml'), you will get an error. Platform: Windows7, Python3.4
>>> import zipfile,os
>>> f=zipfile.ZipFile("Test.zip",'r')
>>> f.extract('xml/hello.xml','.') # OK.
>>> f.extract(os.path.join('xml','hello.xml'),'.') # does not work.
If we fix zipfile.py, inside the method getinfo(self, name) of class ZipFile: before:
    def getinfo(self, name):
        """Return the instance of ZipInfo given 'name'."""
        info = self.NameToInfo.get(name)
        if info is None:
            raise KeyError(
                'There is no item named %r in the archive' % name)
        return info
after:
    def getinfo(self, name):
        """Return the instance of ZipInfo given 'name'."""
        if os.sep=='\\' and os.sep in name:
            name=name.replace('\\','/')
        info = self.NameToInfo.get(name)
        if info is None:
            raise KeyError(
                'There is no item named %r in the archive' % name)
        return info
Then this line works!
>>> f.extract(os.path.join('xml','hello.xml'),'.') # OK!
of course, this line also works:
>>> f.extract('xml/hello.xml','.') # also OK!
I think it is a bug. Why?
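[Editor's note: the proposed normalization can be demonstrated without an archive. This helper is a standalone restatement of the report's suggestion, not a patch to zipfile itself.]

```python
import os

def to_zip_name(path):
    # ZIP member names always use forward slashes, so a path built with
    # os.path.join must be converted before the NameToInfo lookup.
    if os.sep != "/" and os.sep in path:
        path = path.replace(os.sep, "/")
    return path

print(to_zip_name("xml/hello.xml"))                   # xml/hello.xml
print(to_zip_name(os.path.join("xml", "hello.xml")))  # xml/hello.xml
```

On POSIX the second call is a no-op; on Windows it converts the backslash form, so both spellings reach getinfo() in the canonical format.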
Let's take a closer look at the method:
    info = self.NameToInfo.get(name)
The keys of NameToInfo are always in the format "xxx/yyy" according to the documentation of class ZipInfo:
    def __init__(self, filename="NoName", date_time=(1980,1,1,0,0,0)):
        # This is used to ensure paths in generated ZIP files always use
        # forward slashes as the directory separator, as required by the
        # ZIP format specification.
        if os.sep != "/" and os.sep in filename:
            filename = filename.replace(os.sep, "/")
Hence the method getinfo(self, name) of class ZipFile always gets a KeyError for a path built by os.path.join('xxx','yyy'). Thanks for reading! Bocheng. ---------- components: Library (Lib) messages: 259524 nosy: 張伯誠 priority: normal severity: normal status: open title: zipfile can not handle the path build by os.path.join() type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 09:28:01 2016 From: report at bugs.python.org (Stefan Krah) Date: Thu, 04 Feb 2016 14:28:01 +0000 Subject: [New-bugs-announce] [issue26284] FIx telco benchmark Message-ID: <1454596081.41.0.621997504219.issue26284@psf.upfronthosting.co.za> New submission from Stefan Krah: The telco benchmark is unstable. It needs some of Victor's changes from #26275 and probably a larger data set: http://speleotrove.com/decimal/expon180-1e6b.zip is too big for _pydecimal, but the one that is used is probably too small for _decimal.
---------- components: Benchmarks messages: 259569 nosy: brett.cannon, haypo, pitrou, skrah, yselivanov priority: normal severity: normal stage: needs patch status: open title: FIx telco benchmark type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 10:56:28 2016 From: report at bugs.python.org (Alecsandru Patrascu) Date: Thu, 04 Feb 2016 15:56:28 +0000 Subject: [New-bugs-announce] [issue26285] Garbage collection of unused input sections from CPython binaries Message-ID: <1454601388.85.0.405033419332.issue26285@psf.upfronthosting.co.za> New submission from Alecsandru Patrascu: Hi all, This is Alecsandru from the Dynamic Scripting Languages Optimization Team at Intel Corporation. I would like to submit a patch that enables garbage collection of unused input sections from the CPython2 and CPython3 binaries, by using the "--gc-sections" linker flag, which decides which input sections are used by examining symbols and relocations. In order for this to work, GCC must place each function or data item into its own section in the output file, thus dedicated flags are used. With this technique, an average of 1% is gained in both interpreters, with a few small regressions. Steps: ====== 1. Get the CPython source codes hg clone https://hg.python.org/cpython cpython cd cpython hg update 2.7 (for CPython2) 2. 
Build the binary:
   a) Default:
      ./configure
      make
   b) Unused input sections patch:
      Copy the attached patch files
      hg import --no-commit cpython3-deadcode-v01.patch (for CPython3)
      hg import --no-commit cpython2-deadcode-v01.patch (for CPython2)
      ./configure
      make

Hardware and OS Configuration
=============================
Hardware: Intel XEON (Haswell-EP) 18 Cores
BIOS settings:
   Intel Turbo Boost Technology: false
   Hyper-Threading: false
OS: Ubuntu 14.04.3 LTS Server
OS configuration:
   Address Space Layout Randomization (ASLR) disabled to reduce run to run variation by
   echo 0 > /proc/sys/kernel/randomize_va_space
   CPU frequency set fixed at 2.6GHz
GCC version: GCC version 4.9.2
Benchmark: Grand Unified Python Benchmark from https://hg.python.org/benchmarks/

Measurements and Results
========================
CPython2 and CPython3 sample results, measured using GUPB on a Haswell platform, can be viewed in Tables 1 and 2. In the first column (Benchmark) you can see the benchmark name and in the second (%S) the speedup compared with the default version; a higher value is better.

Table 1. CPython3 results:

Benchmark            %S
-----------------------
telco                11
etree_parse           7
call_simple           6
etree_iterparse       5
regex_v8              4
meteor_contest        3
etree_process         3
call_method_unknown   3
json_dump_v2          3
formatted_logging     2
hexiom2               2
chaos                 2
richards              2
django_v3             2
nbody                 2
etree_generate        2
pickle_list           1
go                    1
nqueens               1
call_method           1
mako_v2               1
raytrace              1
chameleon_v2          1
silent_logging        0
fastunpickle          0
2to3                  0
float                 0
regex_effbot          0
pidigits              0
json_load             0
simple_logging        0
normal_startup        0
startup_nosite        0
fastpickle            0
tornado_http          0
regex_compile         0
fannkuch              0
spectral_norm         0
pickle_dict           0
unpickle_list         0
call_method_slots     0
pathlib              -2
unpack_sequence      -2

Table 2.
CPython2 results:

Benchmark            %S
-----------------------
simple_logging        4
formatted_logging     3
slowpickle            2
silent_logging        2
pickle_dict           1
chameleon_v2          1
hg_startup            1
pickle_list           1
call_method_unknown   1
pidigits              1
regex_effbot          1
regex_v8              1
html5lib              0
normal_startup        0
regex_compile         0
etree_parse           0
spambayes             0
html5lib_warmup       0
unpack_sequence       0
richards              0
rietveld              0
startup_nosite        0
raytrace              0
etree_iterparse       0
json_dump_v2          0
fastpickle            0
slowspitfire          0
slowunpickle          0
call_simple           0
float                 0
2to3                  0
bzr_startup           0
json_load             0
hexiom2               0
chaos                 0
unpickle_list         0
call_method_slots     0
tornado_http          0
fastunpickle          0
etree_process         0
spectral_norm         0
meteor_contest        0
pybench               0
go                    0
etree_generate        0
mako_v2               0
django_v3             0
fannkuch              0
nbody                 0
nqueens               0
telco                -1
call_method          -2
pathlib              -3

Thank you, Alecsandru ---------- components: Build files: cpython2-deadcode-v01.patch keywords: patch messages: 259572 nosy: alecsandru.patrascu priority: normal severity: normal status: open title: Garbage collection of unused input sections from CPython binaries type: performance versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file41805/cpython2-deadcode-v01.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 16:08:00 2016 From: report at bugs.python.org (Jim Jewett) Date: Thu, 04 Feb 2016 21:08:00 +0000 Subject: [New-bugs-announce] [issue26286] dis module: coroutine opcode documentation clarity Message-ID: <1454620080.04.0.670513123841.issue26286@psf.upfronthosting.co.za> New submission from Jim Jewett: https://docs.python.org/3/library/dis.html includes a section describing the various opcodes. Current documentation: """ Coroutine opcodes GET_AWAITABLE Implements TOS = get_awaitable(TOS), where get_awaitable(o) returns o if o is a coroutine object or a generator object with the CO_ITERABLE_COROUTINE flag, or resolves o.__await__. GET_AITER Implements TOS = get_awaitable(TOS.__aiter__()).
See GET_AWAITABLE for details about get_awaitable GET_ANEXT Implements PUSH(get_awaitable(TOS.__anext__())). See GET_AWAITABLE for details about get_awaitable BEFORE_ASYNC_WITH Resolves __aenter__ and __aexit__ from the object on top of the stack. Pushes __aexit__ and result of __aenter__() to the stack. SETUP_ASYNC_WITH Creates a new frame object. """ (1) There is a PUSH macro in ceval.c, but no PUSH bytecode. I spent a few minutes trying to figure out what a PUSH command was, and how the GET_ANEXT differed from TOS = get_awaitable(TOS.__anext__()) which would match the bytecodes right above it. After looking at ceval.c, I think GET_ANEXT is the only such bytecode to leave the original TOS in place, but I'm not certain about that. Please be explicit. (Unless they are the same, in which case, please use the same wording.) (2) The coroutine bytecode instructions should have a "New in 3.5" marker, as the GET_YIELD_FROM_ITER does. It might make sense to just place the mark under Coroutine opcodes section header and say it applies to all of them, instead of marking each individual opcode. (3) The GET_AITER and GET_ANEXT descriptions do not show the final period. Opcodes such as INPLACE_LSHIFT also end with a code quote, but still include a (not-marked-as-code) final period. (4) Why does SETUP_ASYNC_WITH talk about frames? Is there actually a python frame involved, or is this another bytecode "block", similar to that used for except and finally? 
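[Editor's note: readers checking any of this can inspect the opcodes directly; the exact opcode set varies by Python version, so no particular output is promised.]

```python
import dis

# Compile (but never run) a coroutine so dis can list its opcodes.
# The "resource" parameter and its fetch() method are placeholders.
async def example(resource):
    async with resource as r:   # async-with setup opcodes
        return await r.fetch()  # GET_AWAITABLE on the awaited value

names = {ins.opname for ins in dis.get_instructions(example)}
print(sorted(n for n in names if "ASYNC" in n or "AWAIT" in n))
```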
---------- assignee: yselivanov components: Documentation messages: 259595 nosy: Jim.Jewett, yselivanov priority: normal severity: normal stage: needs patch status: open title: dis module: coroutine opcode documentation clarity versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 20:21:25 2016 From: report at bugs.python.org (Petr Viktorin) Date: Fri, 05 Feb 2016 01:21:25 +0000 Subject: [New-bugs-announce] [issue26287] Core dump in f-string with lambda and format specification Message-ID: <1454635285.76.0.309799685579.issue26287@psf.upfronthosting.co.za> New submission from Petr Viktorin: Evaluating the expression f"{(lambda: 0):x}" crashes Python. $ ./python Python 3.6.0a0 (default, Feb 5 2016, 02:14:48) [GCC 5.3.1 20151207 (Red Hat 5.3.1-2)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> f"{(lambda: 0):x}" Fatal Python error: Python/ceval.c:3576 object at 0x7f6b42f21338 has negative ref count -2604246222170760230 Traceback (most recent call last): File "", line 1, in TypeError: non-empty format string passed to object.__format__ Aborted (core dumped) ---------- messages: 259609 nosy: encukou, eric.smith priority: normal severity: normal status: open title: Core dump in f-string with lambda and format specification versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 20:55:00 2016 From: report at bugs.python.org (Yury Selivanov) Date: Fri, 05 Feb 2016 01:55:00 +0000 Subject: [New-bugs-announce] [issue26288] Optimize PyLong_AsDouble for single-digit longs Message-ID: <1454637300.25.0.0795862049025.issue26288@psf.upfronthosting.co.za> New submission from Yury Selivanov: The attached patch drastically speeds up PyLong_AsDouble for single digit longs: -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + 
(x+10)*(x-30)" with patch: 0.414 without: 0.612 spectral_norm: 1.05x faster. The results are even better when paired with patch from issue #21955. ---------- components: Interpreter Core files: as_double.patch keywords: patch messages: 259615 nosy: haypo, pitrou, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Optimize PyLong_AsDouble for single-digit longs versions: Python 3.6 Added file: http://bugs.python.org/file41812/as_double.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 21:07:54 2016 From: report at bugs.python.org (Yury Selivanov) Date: Fri, 05 Feb 2016 02:07:54 +0000 Subject: [New-bugs-announce] [issue26289] Optimize floor division for ints Message-ID: <1454638074.35.0.0871962001647.issue26289@psf.upfronthosting.co.za> New submission from Yury Selivanov: The attached patch optimizes floor division for ints. ### spectral_norm ### Min: 0.319087 -> 0.289172: 1.10x faster Avg: 0.322564 -> 0.294319: 1.10x faster Significant (t=21.71) Stddev: 0.00249 -> 0.01277: 5.1180x larger -m timeit -s "x=22331" "x//2;x//3;x//4;x//5;x//6;x//7;x//8;x/99;x//100;" with patch: 0.298 without: 0.515 ---------- components: Interpreter Core files: floor_div.patch keywords: patch messages: 259617 nosy: haypo, pitrou, serhiy.storchaka, yselivanov priority: normal severity: normal stage: patch review status: open title: Optimize floor division for ints type: performance versions: Python 3.6 Added file: http://bugs.python.org/file41813/floor_div.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 21:30:26 2016 From: report at bugs.python.org (Don Hatch) Date: Fri, 05 Feb 2016 02:30:26 +0000 Subject: [New-bugs-announce] [issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering Message-ID: 
<1454639426.39.0.789860369492.issue26290@psf.upfronthosting.co.za> New submission from Don Hatch: Iterating over input using either 'for line in fileinput.input():' or 'for line in sys.stdin:' has the following unexpected behavior: no matter how many lines of input the process reads, the loop body is not entered until either (1) at least 8193 chars have been read and at least one of them was a newline, or (2) EOF is read (i.e. the read() system call returns zero bytes). The behavior I expect instead is what "for line in iter(sys.stdin.readline, ''):" does: that is, the loop body is entered for the first time as soon as a newline or EOF is read. Furthermore strace reveals that this well-behaved alternative code does sensible input buffering, in the sense that the underlying system call being made is read(0,buf,8192), thereby allowing it to get as many characters as are available on input, up to 8192 of them, to be buffered and used in subsequent loop iterations. This is familiar and sensible behavior, and is what I think of as "input buffering". I anticipate there will be responses to this bug report of the form "this is documented behavior; the fileinput and sys.stdin iterators do input buffering". To that, I say: no, these iterators' unfriendly behavior is *not* input buffering in any useful sense; my impression is that someone may have implemented what they thought the words "input buffering" meant, but if so, they really botched it. This bug is most noticeable and harmful when using a filter written in python to filter the output of an ongoing process that may have long pauses between lines of output; e.g. running "tail -f" on a log file. In this case, the python filter spends a lot of time in a state where it is paused without reason, having read many input lines that it has not yet processed. 
If there is any suspicion that the delayed output is due to the previous program in the pipeline buffering its output instead, strace can be used on the python filter process to confirm that its input lines are in fact being read in a timely manner. This is certainly true if the previous process in the pipeline is "tail -f", at least on my ubuntu linux system. To demonstrate the bug, run each of the following from the bash command line. This was observed using bash 4.3.11(1), python 2.7.6, and python 3.4.3, on ubuntu 14.04 linux.

----------------------------------------------
{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import fileinput,sys\nfor line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in sys.stdin: sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import fileinput,sys\nfor line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in sys.stdin: sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to prompt
----------------------------------------------

Notice the 'for line in sys.stdin:' behavior is apparently fixed in python 3.4.
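The well-behaved readline variant can also be checked from within Python itself, without strace: spawn the filter as a child process, send it a single line while keeping the pipe open, and confirm the echo comes back before EOF. A minimal sketch of that scenario (the child code mirrors the `iter(sys.stdin.readline, "")` form above):

```python
import subprocess
import sys

# Child runs the well-behaved readline-iterator filter from the report.
filter_code = (
    "import sys\n"
    "for line in iter(sys.stdin.readline, ''):\n"
    "    sys.stdout.write('line: ' + line)\n"
    "    sys.stdout.flush()\n"
)

child = subprocess.Popen(
    [sys.executable, "-u", "-c", filter_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

child.stdin.write(b"a\n")
child.stdin.flush()
echoed = child.stdout.readline()   # arrives promptly, well before EOF
print(echoed)                      # b'line: a\n'

child.stdin.close()                # EOF ends the child's loop
child.wait()
```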
So the matrix of behavior observed above can be summarized as follows:

                                           2.7   3.4
for line in fileinput.input():             BAD   BAD
for line in sys.stdin:                     BAD   GOOD
for line in iter(sys.stdin.readline, ""):  GOOD  GOOD

Note that adding '-u' to the python args makes no difference in behavior, in any of the above 6 command lines. Finally, if I insert "strace -T" before "python" in each of the 6 command lines above, it confirms that the python process is reading the 3 lines of input immediately in all cases, in a single read(..., ..., 4096 or 8192) which seems reasonable. ---------- components: Library (Lib) messages: 259619 nosy: Don Hatch priority: normal severity: normal status: open title: fileinput and 'for line in sys.stdin' do strange mockery of input buffering type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 4 21:54:53 2016 From: report at bugs.python.org (good.bad) Date: Fri, 05 Feb 2016 02:54:53 +0000 Subject: [New-bugs-announce] [issue26291] Floating-point arithmetic Message-ID: <1454640893.5.0.780875316779.issue26291@psf.upfronthosting.co.za> New submission from good.bad:

print(1 - 0.8)
0.19999999999999996
print(1 - 0.2)
0.8

why not 0.2?
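This is expected behaviour rather than a bug: the literals are converted to binary floating point, and 0.8 has no exact binary representation. A short sketch with the standard `fractions` module shows what is actually stored:

```python
from fractions import Fraction

# The double nearest to 0.8 is slightly ABOVE 0.8...
print(Fraction(0.8))         # 3602879701896397/4503599627370496
print(f"{0.8:.20f}")         # 0.80000000000000004441

# ...so 1 - 0.8 lands slightly BELOW 0.2, and repr() shows it:
print(f"{1 - 0.8:.20f}")     # 0.19999999999999995559
print(1 - 0.8)               # 0.19999999999999996

# 1 - 0.2 is also inexact, but the result rounds to exactly the double
# that stores 0.8, so the comparison (and the printout) look clean:
print(1 - 0.2 == 0.8)        # True
```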
---------- messages: 259622 nosy: goodbad priority: normal severity: normal status: open title: Floating-point arithmetic versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 03:44:20 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 05 Feb 2016 08:44:20 +0000 Subject: [New-bugs-announce] [issue26292] Raw I/O writelines() broken Message-ID: <1454661860.28.0.473572264835.issue26292@psf.upfronthosting.co.za> New submission from STINNER Victor: Copy of Antoine Pitrou's email (sent in 2012 ;-): https://mail.python.org/pipermail/python-dev/2012-August/121396.html Hello, I was considering a FileIO.writelines() implementation based on writev() and I noticed that the current RawIO.writelines() implementation is broken: RawIO.write() can return a partial write but writelines() ignores the result and happily proceeds to the next iterator item (and None is returned at the end). (it's probably broken with non-blocking streams too, for the same reason) In the spirit of RawIO.write(), I think RawIO.writelines() could return the number of bytes written (allowing for partial writes). Regards Antoine. 
-- Software development and contracting: http://pro.pitrou.net ---------- components: IO messages: 259640 nosy: haypo priority: normal severity: normal status: open title: Raw I/O writelines() broken versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 03:52:35 2016 From: report at bugs.python.org (spoo) Date: Fri, 05 Feb 2016 08:52:35 +0000 Subject: [New-bugs-announce] [issue26293] Embedded zipfile fields dependent on absolute position Message-ID: <1454662355.53.0.48718125489.issue26293@psf.upfronthosting.co.za> New submission from spoo: Example: from zipfile import ZipFile with open('a.zipp', 'wb') as base: base.write(b'old\n') with ZipFile(base, 'a') as myzip: myzip.write('eggs.txt') If the embedded zip portion of the file is extracted (first four bytes deleted), some fields will be incorrect in the resultant file - commenting out line 3 produces a file that can serve as a comparison. These differences cause issues opening with some zip library implementations. My best guess is that this is related to this line: https://github.com/python/cpython/blob/master/Lib/zipfile.py#L1459 ---------- components: Library (Lib) messages: 259642 nosy: spoo priority: normal severity: normal status: open title: Embedded zipfile fields dependent on absolute position type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 04:00:58 2016 From: report at bugs.python.org (Frank Millman) Date: Fri, 05 Feb 2016 09:00:58 +0000 Subject: [New-bugs-announce] [issue26294] Queue().unfinished_tasks not in docs - deliberate? Message-ID: <1454662858.22.0.0628824582163.issue26294@psf.upfronthosting.co.za> New submission from Frank Millman: dir(queue.Queue()) shows an attribute 'unfinished_tasks'. 
It appears to be the counter referred to in the docs to 'join()', but it is not documented itself. I would like to make use of it, but I don't know if it is part of the official API for this module. Please advise. ---------- assignee: docs at python components: Documentation messages: 259645 nosy: docs at python, frankmillman priority: normal severity: normal status: open title: Queue().unfinished_tasks not in docs - deliberate? versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 04:15:07 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 05 Feb 2016 09:15:07 +0000 Subject: [New-bugs-announce] [issue26295] Random failures when running test suite in parallel (-m test -j0) caused by test_regrtest Message-ID: <1454663707.24.0.201658170139.issue26295@psf.upfronthosting.co.za> New submission from STINNER Victor: test_regrtest creates temporary test files called test_regrtest_pid_xxx.py in Lib/test/. The problem is that some tests like test___all__ and test_zipfile haves test relying on the list of Lib/test/test_*.py. When tests are run in parallel, test_regrtest can creates temporary test_regrtest_pid_xxx.py files, test_zipfile sees them, test_regrtest removes them, and then test_zipfiles fails. The best would be to write these temporary files into a temporary directory, but I failed to fix Lib/test/regrtest.py to load tests from a different directory. In theory, Python 3 supports packages with files splitted into multiple directories, in practice it doesn't seem to work :-p Maybe test_regrtest should use a test submodule like Lib/test/temp_regrtest/ ? test_regrtest started to create temporary test_xxx.py files since issue #25220. (Other changes to test_regrtest: issues #18174, #22806, #25260, #25306, #25369, #25373, #25694). 
======================================================================
ERROR: test_all (test.test___all__.AllTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/haypo/prog/python/default/Lib/test/test___all__.py", line 102, in test_all
    with open(path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/haypo/prog/python/default/Lib/test/test_regrtest_25743_noop20.py'
----------------------------------------------------------------------

---------- components: Tests messages: 259647 nosy: brett.cannon, haypo priority: normal severity: normal status: open title: Random failures when running test suite in parallel (-m test -j0) caused by test_regrtest versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 09:00:12 2016 From: report at bugs.python.org (Mats Luspa) Date: Fri, 05 Feb 2016 14:00:12 +0000 Subject: [New-bugs-announce] [issue26296] colorys rgb_to_hls algorithm error Message-ID: <1454680812.05.0.172060571572.issue26296@psf.upfronthosting.co.za> New submission from Mats Luspa: In the colorsys library function rgb_to_hls the algorithm is not implemented quite correctly. According to the algorithm, the correct implementation should be (the error is in the condition r == maxc).
In the current code g _______________________________________ From report at bugs.python.org Fri Feb 5 12:34:07 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 05 Feb 2016 17:34:07 +0000 Subject: [New-bugs-announce] [issue26297] Move constant folding to AST level Message-ID: <1454693647.41.0.887597709812.issue26297@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: For using more efficient bytecode (either specialized 8-bit opcodes or 16-bit opcodes) we need to move some optimizations from bytecode level to AST level, since LOAD_CONST variants could have variable size. Now with the Constant node this should be easy. ---------- components: Interpreter Core messages: 259680 nosy: abarnert, benjamin.peterson, brett.cannon, georg.brandl, haypo, ncoghlan, serhiy.storchaka, yselivanov priority: normal severity: normal stage: needs patch status: open title: Move constant folding to AST level type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 12:43:14 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 05 Feb 2016 17:43:14 +0000 Subject: [New-bugs-announce] [issue26298] Split ceval.c into small files Message-ID: <1454694194.19.0.667828891375.issue26298@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached patch splits the huge "switch (opcode)" of ceval.c into smaller ceval_xxx.h files. New files:

 93 Python/ceval_stack.h
142 Python/ceval_condjump.h
155 Python/ceval_misc.h
162 Python/ceval_fast.h
180 Python/ceval_module.h
238 Python/ceval_ctx.h
249 Python/ceval_func.h
262 Python/ceval_iter.h
268 Python/ceval_build.h
384 Python/ceval_number.h

Maybe we can put more files per .h file, maybe less. I don't really care. It will allow keeping the code readable even with new optimizations like the issue #21955. What do you think?
---------- files: split_ceval.patch keywords: patch messages: 259682 nosy: haypo, pitrou, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Split ceval.c into small files versions: Python 3.6 Added file: http://bugs.python.org/file41827/split_ceval.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 13:40:05 2016 From: report at bugs.python.org (Samwyse) Date: Fri, 05 Feb 2016 18:40:05 +0000 Subject: [New-bugs-announce] [issue26299] wsgiref.util FileWrapper raises ValueError: I/O operation on closed file. Message-ID: <1454697605.85.0.988308856795.issue26299@psf.upfronthosting.co.za> New submission from Samwyse: While developing, I am using wsgiref.simple_server. Using to serve static content isn't working. The attached program demonstrates the issue. Run it and connect to http://127.0.0.1:8000/. You will see three buttons. Clicking on 'wsgi.filewrapper' causes the FileWrapper class found in wsgiref.util to be used, while clicking on 'PEP 0333' uses code suggested in PEP 0333 (https://www.python.org/dev/peps/pep-0333/#id36). Both of these fail. Clicking on 'slurp' causes the entire file to loaded and returned in a list. This works. When an application returns an instance of environ['wsgi.file_wrapper'], or creates it's own iterator, an error is raised during the initial read of the file. Reading the entire file and returning a single-element list (the 'slurp' technique) works, but is impractical for larger files. I've tested this with both Python 2.7.9 and 3.4.3 under Windows 7 (my laptop) and 3.4.3 under Alpine Linux (a Docker instance). ---------- components: Library (Lib) files: wsgitest.py messages: 259685 nosy: samwyse priority: normal severity: normal status: open title: wsgiref.util FileWrapper raises ValueError: I/O operation on closed file. 
versions: Python 2.7, Python 3.4 Added file: http://bugs.python.org/file41828/wsgitest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 17:17:44 2016 From: report at bugs.python.org (Andrew Barnert) Date: Fri, 05 Feb 2016 22:17:44 +0000 Subject: [New-bugs-announce] [issue26300] "unpacked" bytecode Message-ID: <1454710664.23.0.316699327961.issue26300@psf.upfronthosting.co.za> New submission from Andrew Barnert: Currently, the compiler starts with a list of arrays of instructions, packs them to 1/3/6-bytes-apiece bytecodes, fixes up all the jumps, and then calls PyCode_Optimize on the result. This makes the peephole optimizer much more complicated. Assuming PEP 511 is accepted, it will also make plug-in bytecode optimizers much more complicated (and probably wasteful--they'll each be repeating the same work to re-do the fixups). The simplest alternative (as suggested by Serhiy on -ideas) is to expose an "unpacked" bytecode to the optimizer (in the code parameter and return value and lnotab_obj in-out parameter for PyCode_Optimize, and similarly for PEP 511) where each instruction takes a fixed 4 bytes. This is much easier to process. After the optimizer returns, the compiler packs opcodes into the usual 1/3/6-byte format, removing NOPs, retargeting jumps, and adjusting the lnotab as it goes. (Note that it already pretty much has code to do all of this except the NOP removal; it's just doing it before the optimizer instead of after.) Negatives: * Arguments can now only go up to 2**23 instead of 2**31. I don't think that's a problem (has anyone ever created a code object with 4 million instructions?). * A bit more work for the compiler; we'd need to test to make sure there's no measurable performance impact. 
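At the Python level, the idea can be sketched with the `dis` module: each instruction becomes one fixed-width (offset, opname, arg) entry, regardless of how many bytes the packed form needs, which is what makes an unpacked stream easy for an optimizer to scan. (`PyCode_Pack`/`PyCode_Unpack` above are proposed names; this sketch only mirrors the concept.)

```python
import dis

def unpack(code):
    """Fixed-width view of a code object: one (offset, opname, arg)
    triple per instruction, independent of the packed encoding."""
    return [(i.offset, i.opname, i.arg) for i in dis.get_instructions(code)]

def f(x):
    return x + 1

for offset, opname, arg in unpack(f.__code__):
    print(offset, opname, arg)
```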
We could also expose this functionality through C API PyCode_Pack/Unpack and Python dis.pack_code/unpack_code functions (and also make the dis module know how to parse unpacked code), which would allow import hooks, post-processing decorators, etc. to be simplified as well. This would remove some, but not all, of the need for things like byteplay. I think this may be worth doing, but I'm not sure until I see how complicated it is. We could even allow code objects with unpacked bytecode to be executed, but I think that's unnecessary complexity. Nobody should want to do that intentionally, and if an optimizer lets such code escape by accident, a SystemError is fine. MRAB implied an alternative: exposing some slightly-higher-level label-based format. That would be even nicer to work with. But it's also more complicated for the compiler and for the API, and I think it's already easy enough to handle jumps with fixed-width instructions. ---------- components: Interpreter Core messages: 259693 nosy: abarnert, benjamin.peterson, georg.brandl, haypo, pitrou, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: "unpacked" bytecode type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 5 20:40:39 2016 From: report at bugs.python.org (STINNER Victor) Date: Sat, 06 Feb 2016 01:40:39 +0000 Subject: [New-bugs-announce] [issue26301] ceval.c: reintroduce fast-path for list[index] in BINARY_SUBSCR Message-ID: <1454722839.37.0.315889112887.issue26301@psf.upfronthosting.co.za> New submission from STINNER Victor: Copy of msg222985 by Raymond Hettinger from issue #21955: "There also used to be a fast path for binary subscriptions with integer indexes. I would like to see that performance regression fixed if it can be done cleanly." 
---------- messages: 259708 nosy: haypo, rhettinger, yselivanov priority: normal severity: normal status: open title: ceval.c: reintroduce fast-path for list[index] in BINARY_SUBSCR type: performance versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 6 00:18:50 2016 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 06 Feb 2016 05:18:50 +0000 Subject: [New-bugs-announce] [issue26302] cookies module allows commas in keys Message-ID: <1454735930.3.0.423422750364.issue26302@psf.upfronthosting.co.za> New submission from Jason R. Coombs: Commas aren't legal characters in cookie keys, yet in Python 3.5, they're allowed:

>>> bool(http.cookies._is_legal_key(','))
True

The issue lies in the use of _LegalChars constructing a regular expression. "Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems." The issue arises in this line:

_is_legal_key = re.compile('[%s]+' % _LegalChars).fullmatch

which was added in 88e1151e8e0242 referencing issue2211. The problem is that in a regular expression, and in a character class in particular, the '-' character has a special meaning if not the first character in the class, which is "span all characters between the leading and following characters". As a result, the pattern has the unintended effect of including the comma in the pattern:

>>> http.cookies._is_legal_key.__self__
re.compile("[abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$%&'*+-.^_`|~:]+")
>>> pattern = _
>>> pattern.fullmatch(',')
<_sre.SRE_Match object; span=(0, 1), match=','>
>>> ord('+')
43
>>> ord('.')
46
>>> ''.join(map(chr, range(43,47)))
'+,-.'

That's how the comma crept in. This issue is the underlying cause of https://bitbucket.org/cherrypy/cherrypy/issues/1405/testcookies-fails-on-python-35 and possibly other cookie-related bugs in Python.
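The accidental range is easy to isolate with a toy pattern; the three-character class below stands in for the tail of `_LegalChars`:

```python
import re

# Inside a character class, '+-.' is parsed as the RANGE '+' (43)
# through '.' (46), which silently admits ',' (44) as well.
buggy = re.compile(r"[+-.]+")
fixed = re.compile(r"[+\-.]+")   # escaping the dash keeps it literal

print(buggy.fullmatch(","))      # a match object: the comma slips in
print(fixed.fullmatch(","))      # None: the comma is rejected
```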
While I jest about regular expressions, I like the implementation. It just needs to account for the extraneous comma, perhaps by escaping the dash:

_is_legal_key = re.compile('[%s]+' % _LegalChars.replace('-', '\\-')).fullmatch

Also, regression tests for keys containing invalid characters should be added as well. ---------- keywords: 3.5regression messages: 259718 nosy: jason.coombs, serhiy.storchaka priority: normal severity: normal status: open title: cookies module allows commas in keys versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 6 10:54:22 2016 From: report at bugs.python.org (kernc) Date: Sat, 06 Feb 2016 15:54:22 +0000 Subject: [New-bugs-announce] [issue26303] Shared execution context between doctests in a module Message-ID: <1454774062.04.0.526927951264.issue26303@psf.upfronthosting.co.za> New submission from kernc: The doctest execution context documentation [0] says the tests get shallow *copies* of module's globals, so one test can't mingle with results of another. This makes it impossible to make literate modules such as:

"""
This module is about reusable doctests context.

Examples
--------
Let's prepare something the later examples can work with:

>>> import foo
>>> result = foo.Something()
2
"""

class Bar:
    """
    Class about something.

    >>> bar = Bar(foo)
    >>> bar.uses(foo)
    True
    """
    def baz(self):
        """
        Returns 3.

        >>> result + bar.baz()
        5
        """
        return 3

I.e. one has to instantiate everything in every single test. The documentation says one can pass their own globals as `globs=your_dict`, but it doesn't mention the dict is *cleared* after the test run.
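A dict subclass whose copy() returns itself and whose clear() is a no-op makes the globals survive both of those operations; a minimal demonstration using doctest's programmatic API (class name and test strings here are illustrative):

```python
import doctest

class SharedGlobs(dict):
    """Globals that survive doctest's copy() and clear() calls, so a
    name bound in one doctest stays visible in the next."""
    def copy(self):
        return self
    def clear(self):
        pass

globs = SharedGlobs()
parser = doctest.DocTestParser()
runner = doctest.DocTestRunner()

first = parser.get_doctest(">>> result = 2\n", globs, "first", None, 0)
second = parser.get_doctest(">>> result + 3\n5\n", globs, "second", None, 0)

runner.run(first)
runner.run(second)        # sees `result` bound by the first doctest
print(runner.failures)    # 0
```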
Please acknowledge the use case of doctests in a module sharing their environment and results sometimes legitimately exists, and to make it future-compatible, please amend the final paragraph of the relevant part of documentation [0] like so:

You can force use of your own dict as the execution context by passing `globs=your_dict` to `testmod()` or `testfile()` instead, e.g., to have all doctests in a module use the _same_ execution context (sharing variables), define a context like so:

class Context(dict):
    def copy(self):
        return self
    def clear(self):
        pass

and use it, optionally prepopulated with `M`'s globals:

doctest.testmod(module, globs=Context(module.__dict__.copy()))

Thank you! [0]: https://docs.python.org/3/library/doctest.html#what-s-the-execution-context ---------- assignee: docs at python components: Documentation messages: 259731 nosy: docs at python, kernc priority: normal severity: normal status: open title: Shared execution context between doctests in a module type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 02:55:32 2016 From: report at bugs.python.org (Martin Panter) Date: Mon, 08 Feb 2016 07:55:32 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue26304=5D_Fix_=E2=80=9Callow?= =?utf-8?q?s_to_=3Cverb=3E=E2=80=9D_in_documentation?= Message-ID: <1454918132.41.0.149238322531.issue26304@psf.upfronthosting.co.za> New submission from Martin Panter: This patch changes instances of “. . . allows to <verb>” to “. . . allows <verb>ing” or similar. I understand the original form is not correct English grammar, although the equivalent is apparently valid in some other languages. As a native English speaker it feels awkward to me, although the meaning is clear. This question & answer seem to back me up, but I thought I should get a quick review or second opinion before changing everything.
---------- assignee: docs at python components: Documentation files: allows-to.patch keywords: patch messages: 259827 nosy: docs at python, martin.panter priority: normal severity: normal stage: patch review status: open title: Fix “allows to <verb>” in documentation versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41845/allows-to.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 06:08:31 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 08 Feb 2016 11:08:31 +0000 Subject: [New-bugs-announce] [issue26305] Make Argument Clinic to generate PEP 7 conforming code Message-ID: <1454929711.55.0.893733572519.issue26305@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch makes Argument Clinic generate C code with curly braces as strongly preferred by PEP 7. ---------- components: Demos and Tools files: clinic_pep7_braces.patch keywords: patch messages: 259836 nosy: brett.cannon, larry, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Make Argument Clinic to generate PEP 7 conforming code type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41849/clinic_pep7_braces.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 07:21:53 2016 From: report at bugs.python.org (Jack Hargreaves) Date: Mon, 08 Feb 2016 12:21:53 +0000 Subject: [New-bugs-announce] [issue26306] Can't create abstract tuple Message-ID: <1454934113.76.0.551150355835.issue26306@psf.upfronthosting.co.za> New submission from Jack Hargreaves: When creating an abstract class, subclassing tuple causes the check for instantiation of an abstract class to be bypassed.
See the associated stackoverflow question -- http://stackoverflow.com/questions/35267954/mix-in-of-abstract-class-and-namedtuple

from abc import abstractmethod, ABCMeta

class AbstactClass(tuple, metaclass=ABCMeta):
    @abstractmethod
    def some_method(self):
        pass

# following should throw a TypeError, but doesn't
AbstactClass()

---------- messages: 259839 nosy: Jack Hargreaves priority: normal severity: normal status: open title: Can't create abstract tuple type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 08:28:26 2016 From: report at bugs.python.org (=?utf-8?b?zqfPgc6uz4PPhM6/z4IgzpPOtc+Jz4HOs86vzr/PhSAoQ2hyaXN0b3MgR2Vv?= =?utf-8?q?rgiou=29?=) Date: Mon, 08 Feb 2016 13:28:26 +0000 Subject: [New-bugs-announce] [issue26307] no PGO for built-in modules with `make profile-opt` Message-ID: <1454938106.72.0.648611247693.issue26307@psf.upfronthosting.co.za> New submission from Χρήστος Γεωργίου (Christos Georgiou): (related to issue #24915) I discovered that `make profile-opt` does not use the profile information for the builtin-modules (e.g. arraymodule or _pickle) because in the `profile-opt` target there is the following sequence of actions:

...
$(MAKE) build_all_merge_profile
@echo "Rebuilding with profile guided optimizations:"
$(MAKE) clean
$(MAKE) build_all_use_profile
...

The action `$(MAKE) clean` performs an `rm -rf build`, destroying among other things all the *.gcda files generated for the built-in modules. On my Linux system with gcc, a kludge to `Makefile.pre.in` that works is:

...
@echo "Rebuilding with profile guided optimizations:"
find build -name \*.gcda -print | cpio -o >_modules.gcda.cpio # XXX
$(MAKE) clean
cpio -id <_modules.gcda.cpio # XXX
$(MAKE) build_all_use_profile
...
but, like I said, it's a kludge and it's POSIX-only (-print0 can be avoided since I believe it's guaranteed that there will be no whitespace-containing-filenames). Now, if this road is to be taken, I believe the most cross-platform method available to save the profile-generated files is to use a custom python script (at this point, the local profile-generating python executable is functional) and make a tar file, which will be untared back. However, I don't like it much. I'm willing to provide any necessary patches if someone gives me a "proper" roadmap as to how to handle this issue. ---------- components: Build messages: 259842 nosy: tzot priority: normal severity: normal status: open title: no PGO for built-in modules with `make profile-opt` type: performance versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 10:07:00 2016 From: report at bugs.python.org (Georg Sauthoff) Date: Mon, 08 Feb 2016 15:07:00 +0000 Subject: [New-bugs-announce] [issue26308] Solaris 10 build issues Message-ID: <1454944020.19.0.240398292442.issue26308@psf.upfronthosting.co.za> New submission from Georg Sauthoff: When building on Solaris 10 I had to patch Modules/_posixsubprocess.c -> dirfd issues Modules/socketmodule.c -> sethostname declaration setup.py -> ncurses detection See the attached patch for details. I built it like this: CC=gcc CXX=g++ LDFLAGS="-m64 -L/opt/csw/lib/64 -R/opt/csw/lib/64" CPPFLAGS="-I/opt/csw/include -I/opt/csw/include/ncursesw" CFLAGS="-m64 -D_XOPEN_SOURCE=600 -std=gnu99" CXXFLAGS="-m64" ./configure --prefix=/usr/local/python-3.5.1 Note that /opt/csw/ is popular open source repository for Solaris 10; it is the first place to go to get relatively recent stuff like gcc 4.9 etc. The first 2 hunks of the patch aren't very controversial, I guess. The Solaris-ncurses detection can be done in several ways, of course. 
Also, one could change the ncurses package build such that the global CFLAGS (especially -D_XOPEN_SOURCE=600) are picked up. Another variation - the configure could try to detect if _XOPEN_SOURCE=600 is supported and - in case it is - automatically set it. ---------- components: Build files: python-3.5.1-solaris10.diff keywords: patch messages: 259852 nosy: gms priority: normal severity: normal status: open title: Solaris 10 build issues type: compile error versions: Python 3.5 Added file: http://bugs.python.org/file41852/python-3.5.1-solaris10.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 11:58:33 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Mon, 08 Feb 2016 16:58:33 +0000 Subject: [New-bugs-announce] [issue26309] socketserver.BaseServer._handle_request_noblock() don't shutdwon request if verify_request is False Message-ID: <1454950713.49.0.94493423375.issue26309@psf.upfronthosting.co.za> New submission from Aviv Palivoda: When socketserver.BaseServer.verify_request() returns False, we do not call shutdown_request(). If we take TCPServer as an example, we call get_request(), thus calling socket.accept() and creating a new socket, but we never call shutdown_request() to close the unused socket.
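The fix being described can be checked against a stub server: with an extra `shutdown_request()` call in the rejection branch, a request refused by `verify_request()` is closed instead of leaked. The class below is a simplified stand-in for `socketserver.BaseServer` with the patched handler logic included (the method names are real; the bodies are illustrative):

```python
class FakeServer:
    """Minimal stand-in for socketserver.BaseServer with a patched
    _handle_request_noblock(): rejected requests are shut down too."""

    def __init__(self, accept):
        self.accept = accept
        self.shut_down = []

    def get_request(self):
        return "sock", ("127.0.0.1", 1234)   # pretend accept() succeeded

    def verify_request(self, request, client_address):
        return self.accept

    def process_request(self, request, client_address):
        pass

    def handle_error(self, request, client_address):
        pass

    def shutdown_request(self, request):
        self.shut_down.append(request)

    def _handle_request_noblock(self):
        try:
            request, client_address = self.get_request()
        except OSError:
            return
        if self.verify_request(request, client_address):
            try:
                self.process_request(request, client_address)
            except Exception:
                self.handle_error(request, client_address)
                self.shutdown_request(request)
        else:
            # the patch: close the socket for rejected requests as well
            self.shutdown_request(request)

rejecting = FakeServer(accept=False)
rejecting._handle_request_noblock()
print(rejecting.shut_down)   # ['sock'] - no leaked socket
```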
---------- components: Library (Lib) files: socketserver-shutdown-if-verify-false.patch keywords: patch messages: 259861 nosy: palaviv priority: normal severity: normal status: open title: socketserver.BaseServer._handle_request_noblock() don't shutdwon request if verify_request is False type: resource usage versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41855/socketserver-shutdown-if-verify-false.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 13:06:44 2016 From: report at bugs.python.org (Marien) Date: Mon, 08 Feb 2016 18:06:44 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue26310=5D_Fix_typo_=E2=80=9C?= =?utf-8?q?variariables=E2=80=9D_in_socketserver=2Epy?= Message-ID: <1454954804.39.0.970337231827.issue26310@psf.upfronthosting.co.za> New submission from Marien: This patch fixes a typo in socketserver.py ---------- assignee: docs at python components: Documentation files: fix-typo-variariables.patch keywords: patch messages: 259869 nosy: docs at python, marienfr priority: normal severity: normal status: open title: Fix typo “variariables” in socketserver.py Added file: http://bugs.python.org/file41858/fix-typo-variariables.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 13:44:43 2016 From: report at bugs.python.org (=?utf-8?b?UmHDumwgTsO6w7FleiBkZSBBcmVuYXM=?=) Date: Mon, 08 Feb 2016 18:44:43 +0000 Subject: [New-bugs-announce] [issue26311] Typo in documentation for xml.parsers.expat Message-ID: <1454957083.56.0.365121544419.issue26311@psf.upfronthosting.co.za> New submission from Raúl Núñez de Arenas: At https://docs.python.org/3.5/library/pyexpat.html#module-xml.parsers.expat.model the docs say "Content modules are described using nested tuples." It should say "Content models are described using nested tuples."
I've checked the docs for version 3.6.0a and the typo is there, too. ---------- components: XML messages: 259873 nosy: Raúl Núñez de Arenas priority: normal severity: normal status: open title: Typo in documentation for xml.parsers.expat versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 14:39:33 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 08 Feb 2016 19:39:33 +0000 Subject: [New-bugs-announce] [issue26312] Raise SystemError on programmical errors in PyArg_Parse*() Message-ID: <1454960373.85.0.989575322137.issue26312@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently, programming errors in the use of PyArg_ParseTuple() raise SystemError, but some programming errors in the use of PyArg_ParseTupleAndKeywords() raise RuntimeError instead. I think that SystemError is the correct exception type. The proposed patch replaces RuntimeError with SystemError in PyArg_ParseTupleAndKeywords(). This change shouldn't break any code (except CPython's tests for PyArg_ParseTupleAndKeywords()), because this exception is never raised when the PyArg_Parse*() functions are used correctly.
---------- components: Interpreter Core files: pyarg_parse_error.patch keywords: patch messages: 259877 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Raise SystemError on programmical errors in PyArg_Parse*() type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41859/pyarg_parse_error.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 15:09:48 2016 From: report at bugs.python.org (Jonathan Kamens) Date: Mon, 08 Feb 2016 20:09:48 +0000 Subject: [New-bugs-announce] [issue26313] ssl.py _load_windows_store_certs fails if windows cert store is empty Message-ID: <1454962188.81.0.985392423363.issue26313@psf.upfronthosting.co.za> New submission from Jonathan Kamens: In ssl.py:

    def _load_windows_store_certs(self, storename, purpose):
        certs = bytearray()
        for cert, encoding, trust in enum_certificates(storename):
            # CA certs are never PKCS#7 encoded
            if encoding == "x509_asn":
                if trust is True or purpose.oid in trust:
                    certs.extend(cert)
        self.load_verify_locations(cadata=certs)
        return certs

The line right before the return statement will raise an exception if certs is empty. It should be protected with "if certs:" as it is elsewhere in this file. ---------- components: Windows messages: 259880 nosy: Jonathan Kamens, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ssl.py _load_windows_store_certs fails if windows cert store is empty versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 15:42:46 2016 From: report at bugs.python.org (Gregory P.
Smith) Date: Mon, 08 Feb 2016 20:42:46 +0000 Subject: [New-bugs-announce] [issue26314] interned strings are stored in a dict, a set would use less memory Message-ID: <1454964166.73.0.652183434251.issue26314@psf.upfronthosting.co.za> New submission from Gregory P. Smith: The implementation of string interning uses a dict [1]. It would consume less memory and be a bit simpler if it used a set. Identifier strings in a program are interned. If you have a large program with a lot of code, this makes for a large dictionary. Experimenting with changing this to use a set on 2.7 found ~22k savings on an interactive interpreter startup. Measuring it on a huge application showed a few hundred k saved. [1]: https://hg.python.org/cpython/file/3.5/Objects/unicodeobject.c#l1579 ---------- messages: 259885 nosy: gregory.p.smith, nnorwitz priority: low severity: normal stage: needs patch status: open title: interned strings are stored in a dict, a set would use less memory type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 17:31:15 2016 From: report at bugs.python.org (Yury Selivanov) Date: Mon, 08 Feb 2016 22:31:15 +0000 Subject: [New-bugs-announce] [issue26315] Optimize mod division for ints Message-ID: <1454970675.84.0.301967594836.issue26315@psf.upfronthosting.co.za> New submission from Yury Selivanov: The attached patch implements fast path for modulo division of single digit longs. 
Some timeit micro-benchmarks:

    -m timeit -s "x=22331" "x%2;x%3;x%4;x%5;x%6;x%7;x%8;x%99;x%100;"

    with patch:    0.213 usec
    without patch: 0.602 usec

---------- assignee: yselivanov components: Interpreter Core files: mod_div.patch keywords: patch messages: 259897 nosy: haypo, serhiy.storchaka, yselivanov priority: normal severity: normal stage: patch review status: open title: Optimize mod division for ints type: performance versions: Python 3.6 Added file: http://bugs.python.org/file41861/mod_div.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 21:32:33 2016 From: report at bugs.python.org (Martin Panter) Date: Tue, 09 Feb 2016 02:32:33 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue26316=5D_Probable_typo_in_A?= =?utf-8?b?cmcgQ2xpbmlj4oCZcyBsaW5lYXJfZm9ybWF0KCk=?= Message-ID: <1454985153.85.0.932055658559.issue26316@psf.upfronthosting.co.za> New submission from Martin Panter: The curly bracket separator is assigned to ‘curl’, but then the previous ‘curly’ variable is tested: https://hg.python.org/cpython/annotate/3.5/Tools/clinic/clinic.py#l202

    name, curl, trailing = trailing.partition('}')
    if not curly or name not in kwargs:
        ...

I presume the fix is to assign to ‘curly’, but I haven’t had a chance to figure out how to test it yet.
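The suspected typo reduces to plain str.partition() semantics, independent of Argument Clinic itself (a minimal sketch):

```python
# str.partition('}') returns (head, separator, tail); the clinic code binds the
# separator to 'curl' but then tests a stale 'curly' variable
name, curl, trailing = "spam}rest".partition('}')
print(name, curl, trailing)  # spam } rest

# with no closing brace the separator slot is the empty string, which is
# presumably the condition the 'if not curly' test was meant to catch
name2, curl2, trailing2 = "nobrace".partition('}')
print(repr(curl2))  # ''
```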
---------- components: Argument Clinic messages: 259908 nosy: larry, martin.panter priority: normal severity: normal stage: needs patch status: open title: Probable typo in Arg Clinic?s linear_format() versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 8 23:36:24 2016 From: report at bugs.python.org (Robert P Fischer) Date: Tue, 09 Feb 2016 04:36:24 +0000 Subject: [New-bugs-announce] [issue26317] Build Problem with GCC + Macintosh OS X 10.11 El Capitain Message-ID: <1454992584.65.0.198275659625.issue26317@psf.upfronthosting.co.za> New submission from Robert P Fischer: Changes to OS X 10.11 render GCC's Objective-C compiler useless. However, I want to compile the main part of Python in GCC (because my C++ / Fortran Cython modules use GCC). I tried to build Python (via MacPorts) using Clang for Objective-C and GCC for C/C++. The environment upon running ./configure included: CC='/Users/rpfische/macports/mpgompi-4.9.3/bin/gcc-mp-4.9' CXX='/Users/rpfische/macports/mpgompi-4.9.3/bin/gcc-mp-4.9' OBJC='/usr/bin/clang' OBJCXX='/usr/bin/clang++' HOWEVER... the build still tried to use GCC to compile Objective-C, and failed miserably: :info:destroot /Users/rpfische/macports/mpgompi-4.9.3/bin/gcc-mp-4.9 -pipe -Os -arch x86_64 -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -o FileSettings.o -c ./FileSettings.m :info:destroot /usr/include/objc/NSObject.h:22:4: error: unknown type name 'instancetype' :info:destroot - (instancetype)self; :info:destroot ^ ... Log file attached. 
---------- components: Build files: log messages: 259914 nosy: Robert P Fischer priority: normal severity: normal status: open title: Build Problem with GCC + Macintosh OS X 10.11 El Capitain versions: Python 3.4 Added file: http://bugs.python.org/file41865/log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 01:30:24 2016 From: report at bugs.python.org (=?utf-8?b?0JzQsNGA0Log0JrQvtGA0LXQvdCx0LXRgNCz?=) Date: Tue, 09 Feb 2016 06:30:24 +0000 Subject: [New-bugs-announce] [issue26318] `io.open(fd, ...).name` returns numeric fd instead of None Message-ID: <1454999424.95.0.397759883327.issue26318@psf.upfronthosting.co.za> New submission from ???? ?????????: `io.open(fd, ...).name` returns numeric fd instead of None. This lead to some nasty bugs. In order to bring consistency and make that predictable, please make `.name` for that case to return None. (and document it) ---------- components: IO, Library (Lib) messages: 259915 nosy: mmarkk priority: normal severity: normal status: open title: `io.open(fd, ...).name` returns numeric fd instead of None type: behavior versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 05:37:56 2016 From: report at bugs.python.org (j w) Date: Tue, 09 Feb 2016 10:37:56 +0000 Subject: [New-bugs-announce] [issue26319] Check recData size before unpack in zipfile Message-ID: <1455014276.42.0.95606517598.issue26319@psf.upfronthosting.co.za> New submission from j w: Encountered on version: 2.7.3 Exception message: "error: unpack requires a string argument of length 22" Stack trace: ... 
elif zipfile.is_zipfile(_file):
>   File "/usr/lib/python2.7/zipfile.py", line 152, in is_zipfile
>     result = _check_zipfile(fp)
>   File "/usr/lib/python2.7/zipfile.py", line 135, in _check_zipfile
>     if _EndRecData(fp):
>   File "/usr/lib/python2.7/zipfile.py", line 238, in _EndRecData
>     endrec = list(struct.unpack(structEndArchive, recData))

Check the size of recData before unpacking.

...
237:    recData = data[start:start+sizeEndCentDir]
238:    endrec = list(struct.unpack(structEndArchive, recData))

---------- components: Extension Modules messages: 259922 nosy: j w priority: normal severity: normal status: open title: Check recData size before unpack in zipfile type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 08:16:36 2016 From: report at bugs.python.org (Rory McCann) Date: Tue, 09 Feb 2016 13:16:36 +0000 Subject: [New-bugs-announce] [issue26320] Web documentation for 2.7 has unreadable highlights in Table of Contents Message-ID: <1455023796.48.0.712407093962.issue26320@psf.upfronthosting.co.za> New submission from Rory McCann: Using Firefox 41.0.2 on Ubuntu 14.04, most tables of contents in the online docs have unreadable highlighted sections, where modules/methods are named. See https://docs.python.org/2/library/datetime.html for a good example, or the attached screenshot. I think this highlighting has appeared recently. The 3.x versions are fine, as is 2.6.
---------- assignee: docs at python components: Documentation files: Screenshot from 2016-02-09 13:15:43.png messages: 259930 nosy: Rory McCann, docs at python priority: normal severity: normal status: open title: Web documentation for 2.7 has unreadable highlights in Table of Contents type: enhancement versions: Python 2.7 Added file: http://bugs.python.org/file41867/Screenshot from 2016-02-09 13:15:43.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 10:08:56 2016 From: report at bugs.python.org (Andrew Page) Date: Tue, 09 Feb 2016 15:08:56 +0000 Subject: [New-bugs-announce] [issue26321] datetime.strptime fails to parse AM/PM correctly Message-ID: <1455030536.64.0.408673098232.issue26321@psf.upfronthosting.co.za> New submission from Andrew Page:

##
## It appears that strptime is ignoring the AM/PM field
##
from datetime import datetime

d1 = datetime.strptime("1:00 PM", "%H:%M %p")
d2 = datetime.strptime("1:00 AM", "%H:%M %p")

d1.hour, d2.hour
(1, 1)   # d1 should be 13

d1 == d2
True     # and these should not be equal

---------- components: Library (Lib) messages: 259936 nosy: aepage priority: normal severity: normal status: open title: datetime.strptime fails to parse AM/PM correctly versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 15:48:20 2016 From: report at bugs.python.org (Guido van Rossum) Date: Tue, 09 Feb 2016 20:48:20 +0000 Subject: [New-bugs-announce] [issue26322] Missing docs for typing.Set Message-ID: <1455050900.92.0.506743123977.issue26322@psf.upfronthosting.co.za> New submission from Guido van Rossum: The typing docs don't seem to mention Set (which is a bit of an anomaly, since it corresponds to builtins.set, not to collections.abc.Set -- the latter is typing.AbstractSet). Also, AbstractSet is mentioned twice.
(And the second occurrence somehow doesn't have a "paragraph" link.) All this should be easy to fix. ---------- assignee: docs at python components: Documentation keywords: easy messages: 259954 nosy: docs at python, gvanrossum priority: normal severity: normal status: open title: Missing docs for typing.Set versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 19:00:45 2016 From: report at bugs.python.org (Amit Saha) Date: Wed, 10 Feb 2016 00:00:45 +0000 Subject: [New-bugs-announce] [issue26323] Add a assert_called() method for mock objects Message-ID: <1455062445.23.0.2044896551.issue26323@psf.upfronthosting.co.za> New submission from Amit Saha: Would a patch for adding a assert_called() method to mocked objects be welcome for inclusion? We do have a assert_not_called() method, so I think this may be a good idea. Please let me know and I will work on it. ---------- components: Library (Lib) messages: 259960 nosy: Amit.Saha priority: normal severity: normal status: open title: Add a assert_called() method for mock objects _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 9 19:08:02 2016 From: report at bugs.python.org (Antoine Pitrou) Date: Wed, 10 Feb 2016 00:08:02 +0000 Subject: [New-bugs-announce] [issue26324] sum() incorrect on negative zeros Message-ID: <1455062882.26.0.0214307271238.issue26324@psf.upfronthosting.co.za> New submission from Antoine Pitrou: >>> sum([-0.0,-0.0]) 0.0 ---------- components: Interpreter Core messages: 259961 nosy: eric.smith, lemburg, mark.dickinson, pitrou, stutzbach priority: low severity: normal status: open title: sum() incorrect on negative zeros type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org 
Wed Feb 10 05:14:02 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 10 Feb 2016 10:14:02 +0000 Subject: [New-bugs-announce] [issue26325] Add helper to check that no ResourceWarning is emitted Message-ID: <1455099242.2.0.566872742177.issue26325@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Victor proposed in issue25994 to use special context manager to check that no ResourceWarning is emitted. I think that this helper can be useful in other tests. It saves 3 lines for every use. Proposed patch adds it in test.support. ---------- components: Tests files: test_support_check_no_resource_warning.patch keywords: patch messages: 260000 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add helper to check that no ResourceWarning is emitted type: enhancement versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41883/test_support_check_no_resource_warning.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 05:40:19 2016 From: report at bugs.python.org (=?utf-8?q?Andreas_R=C3=B6hler?=) Date: Wed, 10 Feb 2016 10:40:19 +0000 Subject: [New-bugs-announce] [issue26326] Named entity "vertical line" missed in 2.7 htmlentitydefs.py Message-ID: <56BB13AF.2030906@online.de> New submission from Andreas R?hler: Shouldn't a named entity "vbar" or "vline" be part of dict in htmlentitydefs.py? 
Thanks, Andreas ---------- messages: 260003 nosy: andreas.roehler priority: normal severity: normal status: open title: Named entity "vertical line" missed in 2.7 htmlentitydefs.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 07:59:49 2016 From: report at bugs.python.org (Sebastian Bank) Date: Wed, 10 Feb 2016 12:59:49 +0000 Subject: [New-bugs-announce] [issue26327] File > Save in IDLE shell window not working Message-ID: <1455109189.98.0.269293952823.issue26327@psf.upfronthosting.co.za> New submission from Sebastian Bank: Under Python 2.7.11 (Win 7), saving of the IDLE shell output produces no file if the output contains non-ASCII characters, e.g. after doing (before this, it does work): >>> print u'sp\xe4m' sp?m >>> When saving (generally), the cursor also moves to the next line. Maybe the default file type for the shell save dialog(s) can be changed from 'Python Files (*.py, *.pyw)' to the other entry 'Text files (*.txt)' as the resulting file will normally not be a valid Python file (e.g. due to '>>>' prompts). ---------- components: IDLE messages: 260008 nosy: xflr6 priority: normal severity: normal status: open title: File > Save in IDLE shell window not working type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 08:03:44 2016 From: report at bugs.python.org (Laurent Bigonville) Date: Wed, 10 Feb 2016 13:03:44 +0000 Subject: [New-bugs-announce] [issue26328] shutil._copyxattr() function shouldn't fail if setting security.selinux xattr fails Message-ID: <1455109424.72.0.659631056233.issue26328@psf.upfronthosting.co.za> New submission from Laurent Bigonville: Hi, In the _copyxattr() function from the shutil module, the function throw an exception as soon as the copy of an attribute is failing. 
But setting the "security.selinux" xattr will likely fail if SELinux is in enforcing mode. If this function is supposed to behave like cp -a, failing to set the security.selinux xattr shouldn't be fatal. See: https://bugs.python.org/issue14082 and https://bugs.python.org/msg157903 ---------- components: Library (Lib) messages: 260009 nosy: bigon priority: normal severity: normal status: open title: shutil._copyxattr() function shouldn't fail if setting security.selinux xattr fails type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 09:58:05 2016 From: report at bugs.python.org (Fred Rolland) Date: Wed, 10 Feb 2016 14:58:05 +0000 Subject: [New-bugs-announce] [issue26329] os.path.normpath("//") returns // Message-ID: <1455116285.45.0.915370101527.issue26329@psf.upfronthosting.co.za> New submission from Fred Rolland: Hi, os.path.normpath("//") returns '//'; I would expect it to be '/'.

>>> os.path.normpath("//")
'//'
>>> os.path.normpath("///")
'/'
>>> os.path.normpath("////")
'/'

---------- components: Library (Lib) messages: 260016 nosy: Fred Rolland priority: normal severity: normal status: open title: os.path.normpath("//") returns // versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 10:11:17 2016 From: report at bugs.python.org (Giampaolo Rodola') Date: Wed, 10 Feb 2016 15:11:17 +0000 Subject: [New-bugs-announce] [issue26330] shutil.disk_usage() on Windows can't properly handle unicode Message-ID: <1455117077.45.0.531847730839.issue26330@psf.upfronthosting.co.za> New submission from Giampaolo Rodola': On Python 3.4, Windows 7:

>>> import shutil, os
>>> path = 'psuugxik1s0?'
>>> os.stat(path)
os.stat_result(st_mode=33206, st_ino=6755399441249628, st_dev=3158553679, st_nlink=1, st_uid=0, st_gid=0, st_size=27136, st_atime=1455116789, st_mtime=1455116789, st_ctime=1455116789)
>>>
>>> shutil.disk_usage(path)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\python34\lib\shutil.py", line 989, in disk_usage
    total, free = nt._getdiskusage(path)
NotADirectoryError: [WinError 267] The directory name is invalid
>>>

---------- messages: 260017 nosy: giampaolo.rodola priority: normal severity: normal status: open title: shutil.disk_usage() on Windows can't properly handle unicode versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 12:50:24 2016 From: report at bugs.python.org (Georg Brandl) Date: Wed, 10 Feb 2016 17:50:24 +0000 Subject: [New-bugs-announce] [issue26331] Tokenizer: allow underscores for grouping in numeric literals Message-ID: <1455126624.48.0.163167904941.issue26331@psf.upfronthosting.co.za> New submission from Georg Brandl: As discussed on python-ideas: https://mail.python.org/pipermail/python-ideas/2016-February/038354.html

The rules are: Underscores are allowed anywhere in numeric literals, except:

* at the beginning of a literal (obviously)
* at the end of a literal
* directly after a dot (since the underscore could start an attribute name)
* directly after a sign in exponents (for consistency with leading signs)
* in the middle of the "0x", "0o" or "0b" base specifiers

Currently this only touches literals, not the inputs of int() or float(). Whether they should accept this syntax is debatable (I'd vote no). Otherwise missing: doc updates. Review question: is PyMem_RawStrdup/RawFree the right API to use here?
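For illustration, here are literals that are valid under the proposed rules. These require an interpreter that implements the proposal (it later landed in Python 3.6 as PEP 515), so running the sketch assumes a 3.6+ interpreter:

```python
# grouping underscores in numeric literals, per the rules above
assert 1_000_000 == 1000000
assert 0b_1010 == 10           # allowed: after the complete base specifier
assert 3.141_592 == 3.141592
assert 1_0.0e1_0 == 10.0e10

# int()/float() acceptance was left open in the proposal; the final 3.6
# feature does accept underscores in these inputs too
assert int("1_000") == 1000
print("all literals accepted")
```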
---------- components: Interpreter Core files: numeric_underscores.diff keywords: patch messages: 260026 nosy: georg.brandl priority: normal severity: normal stage: patch review status: open title: Tokenizer: allow underscores for grouping in numeric literals type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41888/numeric_underscores.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 13:36:38 2016 From: report at bugs.python.org (JK) Date: Wed, 10 Feb 2016 18:36:38 +0000 Subject: [New-bugs-announce] [issue26332] OSError: exception: access violation writing <...> (Windows 10 x64, Python 3.5.1) Message-ID: <1455129398.68.0.407875580594.issue26332@psf.upfronthosting.co.za> New submission from JK: Hi! I came across this mysterious bug the other week, and I haven't yet found any resolution to it. In my case, I am using a proprietary 3rd party C/C++ DLL for functionality within Python, however I've come across a few others who have run into the same issue with an open source project. My traceback is essentially: return cppapi.DoSomething(hAlpha, pzString1, pzString2); OSError: exception: access violation writing 0x00000000576EE908 However, you can see a very similar scenario taking place here too: https://github.com/asweigart/pyperclip/issues/25 (However, I did not see a bug report filed here on behalf of them) This MIGHT be related to http://bugs.python.org/issue20160 but I am not certain. Thanks! 
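Access violations raised from ctypes calls are most often caused by an argument-type or calling-convention mismatch between the Python caller and the DLL, rather than by the interpreter itself. Since the cppapi DLL above is proprietary, the sketch below uses the C runtime's strlen as a stand-in to show the argtypes/restype declarations that prevent such mismatches (the library-name resolution is an assumption about the host platform):

```python
import ctypes
import ctypes.util

# locate the platform C runtime; fall back to msvcrt on Windows
libc_name = ctypes.util.find_library("c") or "msvcrt"
libc = ctypes.CDLL(libc_name)

# without these declarations ctypes guesses int-sized arguments, which is
# exactly the kind of mismatch that produces access violations on 64-bit
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"access violation"))  # 16
```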
---------- components: Windows, ctypes messages: 260030 nosy: jk, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: OSError: exception: access violation writing <...> (Windows 10 x64, Python 3.5.1) versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 14:03:45 2016 From: report at bugs.python.org (Aaron Halfaker) Date: Wed, 10 Feb 2016 19:03:45 +0000 Subject: [New-bugs-announce] [issue26333] Multiprocessing imap hangs when generator input errors Message-ID: <1455131025.64.0.640942257663.issue26333@psf.upfronthosting.co.za> New submission from Aaron Halfaker: multiprocessing.imap will hang and not raise an error if an error occurs in the generator that is being mapped over. I'd expect the error to be raised and/or the process to fail. For example, run the following code in python 2.7 or 3.4:

from multiprocessing import Pool

def add_one(v):
    return v + 1

pool = Pool(processes=2)
values = ["1", "2", "3", "4", "foo", "5", "6", "7", "8"]
value_iter = (int(v) for v in values)
for new_val in pool.imap(add_one, value_iter):
    print(new_val)

And output should look something like this:

$ python demo_hanging.py
2
3
4
5
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.4/multiprocessing/pool.py", line 378, in _handle_tasks
    for i, task in enumerate(taskseq):
  File "/usr/lib/python3.4/multiprocessing/pool.py", line 286, in <genexpr>
    self._taskqueue.put((((result._job, i, func, (x,), {})
  File "demo_hanging.py", line 9, in <genexpr>
    value_iter = (int(v) for v in values)
ValueError: invalid literal for int() with base 10: 'foo'

The script will then hang indefinitely.
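The underlying problem is that the exception fires in Pool's internal task-feeder thread, which has no way to hand it back to the consuming loop. A caller-side workaround is to move the fragile conversion into the worker function, so the exception travels back through imap() instead of killing the feeder. This sketch uses ThreadPool only to stay self-contained; the Pool behavior is the same:

```python
from multiprocessing.pool import ThreadPool

def add_one(raw):
    # the int() conversion now happens in the worker, so a bad value raises
    # in the consuming loop instead of killing the task-feeder thread
    return int(raw) + 1

pool = ThreadPool(2)
values = ["1", "2", "3", "4", "foo", "5"]
results = []
try:
    for new_val in pool.imap(add_one, values):
        results.append(new_val)
except ValueError:
    pass  # reached when 'foo' is processed; no hang
pool.terminate()
print(results)  # [2, 3, 4, 5]
```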
---------- components: Library (Lib) messages: 260032 nosy: Aaron Halfaker priority: normal severity: normal status: open title: Multiprocessing imap hangs when generator input errors type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 14:52:44 2016 From: report at bugs.python.org (Nicholas Chammas) Date: Wed, 10 Feb 2016 19:52:44 +0000 Subject: [New-bugs-announce] [issue26334] bytes.translate() doesn't take keyword arguments; docs suggests it does Message-ID: <1455133964.48.0.745384045329.issue26334@psf.upfronthosting.co.za> New submission from Nicholas Chammas: The docs for `bytes.translate()` [0] show the following signature:

```
bytes.translate(table[, delete])
```

However, calling this method with keyword arguments yields:

```
>>> b''.translate(table='la table', delete=b'delete')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: translate() takes no keyword arguments
```

I'm guessing other methods have this same issue. (e.g. `str.translate()`) Do the docs need to be updated, or should these methods be updated to accept keyword arguments, or something else?
[0] https://docs.python.org/3/library/stdtypes.html#bytes.translate ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 260034 nosy: Nicholas Chammas, docs at python priority: normal severity: normal status: open title: bytes.translate() doesn't take keyword arguments; docs suggests it does versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 17:55:37 2016 From: report at bugs.python.org (Jakub Stasiak) Date: Wed, 10 Feb 2016 22:55:37 +0000 Subject: [New-bugs-announce] [issue26335] Make mmap.write return the number of bytes written like other write methods Message-ID: <1455144937.95.0.515520937166.issue26335@psf.upfronthosting.co.za> New submission from Jakub Stasiak: Since mmap objects are said to "behave like both bytearray and like file objects" I believe it's appropriate for the mmap.write() method to return the number of bytes written like write() of other file objects/interfaces I could find in the standard library for consistency reasons: https://docs.python.org/3/library/io.html#io.BufferedIOBase.write https://docs.python.org/3/library/io.html#io.BufferedWriter.write https://docs.python.org/3/library/io.html#io.RawIOBase.write https://docs.python.org/3/library/io.html#io.TextIOBase.write Why I believe this would be useful: code that writes to file objects and tests the number of bytes/characters written right now will likely fail when it's passed a mmap object because its write() method returns None. With this patch applied it'll work transparently. Please find proposed patch attached, I included information about the exception type in the documentation as it seems fitting (apologies for generating the patch using Git, I'll generate using Mercurial if necessary). 
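The requested contract, sketched against an anonymous mapping. The return value shown assumes the proposed behavior (which shipped in Python 3.6); before that change write() returned None:

```python
import mmap

buf = mmap.mmap(-1, 16)   # 16-byte anonymous mapping
n = buf.write(b"hello")   # proposed: 5, like other file-like write() methods
buf.seek(0)
data = buf.read(5)
print(n, data)  # 5 b'hello'
buf.close()
```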
---------- components: IO, Library (Lib) keywords: patch messages: 260053 nosy: jstasiak priority: normal severity: normal status: open title: Make mmap.write return the number of bytes written like other write methods type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41890/mmap_write_return_count.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 10 23:05:27 2016 From: report at bugs.python.org (Jonathan Goble) Date: Thu, 11 Feb 2016 04:05:27 +0000 Subject: [New-bugs-announce] [issue26336] Expose regex bytecode as attribute of compiled pattern object Message-ID: <1455163527.75.0.996534901196.issue26336@psf.upfronthosting.co.za> New submission from Jonathan Goble: Once a regular expression is compiled with `obj = re.compile()`, it would be nice to have access to the raw bytecode, probably as `obj.code` or `obj.bytecode`, so it can be explored programmatically. Currently, regex bytecode is only stored in a C struct and not exposed to Python code; the only way to examine the compiled version is to pass the `re.DEBUG` flag to `re.compile()`, which prints only to stdout and outputs not the finished bytecode, but a "pretty-printed" intermediate representation useless for programmatic analysis. This is basically requesting the equivalent of the `co_code` attribute of the code object returned by the built-in `compile()`, but for regular expression objects instead of Python code objects. Given that the bytecode can actually be multi-byte integers, `regexobj.bytecode` should return a list (perhaps even just the same list passed to the C function?) or an `array.array()` instance, rather than a bytestring. 
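Until such an attribute exists, the only way in is through the undocumented sre compiler internals. The module locations below are CPython implementation details that have moved between versions, so this is strictly a best-effort sketch of the introspection being requested:

```python
try:
    from re import _compiler as compiler, _parser as parser  # CPython >= 3.11
except ImportError:
    import sre_compile as compiler  # older CPython
    import sre_parse as parser

parsed = parser.parse(r"ab+")
code = compiler._code(parsed, 0)  # a plain list of ints: the raw program
print(all(isinstance(op, int) for op in code))  # True
```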
---------- components: Library (Lib), Regular Expressions messages: 260072 nosy: Jonathan Goble, ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Expose regex bytecode as attribute of compiled pattern object type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 00:55:17 2016 From: report at bugs.python.org (Ramin Farajpour Cami) Date: Thu, 11 Feb 2016 05:55:17 +0000 Subject: [New-bugs-announce] [issue26337] Bypass imghdr module determines the type of image Message-ID: <1455170117.9.0.169068464747.issue26337@psf.upfronthosting.co.za> New submission from Ramin Farajpour Cami:

import imghdr
imghdr.what('phppng.png')

output: 'png'

If a script payload is embedded in a .png or .jpg file, the output is: ValueError: invalid \x escape

Hexdump:

root at Ramin:~# hexdump -C phppng.png
00000000  89 50 4e 47 0d 0a 1a 0a  00 00 00 0d 49 48 44 52  |.PNG........IHDR|
00000010  00 00 00 20 00 00 00 20  08 02 00 00 00 fc 18 ed  |... ... ........|
00000020  a3 00 00 00 09 70 48 59  73 00 00 0e c4 00 00 0e  |.....pHYs.......|
00000030  c4 01 95 2b 0e 1b 00 00  00 60 49 44 41 54 48 89  |...+.....`IDATH.|
00000040  63 5c 3c 3f 3d 24 5f 47  45 54 5b 30 5d 28 24 5f  |c\<?=$_GET[0]($_|
00000060  73 5e 37 93 fc 8f 8b db  7e 5f d3 7d aa 27 f7 f1  |s^7.....~_.}.'..|
00000070  e3 c9 bf 5f ef 06 7c b2  30 30 63 d9 b9 67 fd d9  |..._..|.00c..g..|
00000080  3d 1b ce 32 8c 82 51 30  0a 46 c1 28 18 05 a3 60  |=..2..Q0.F.(...`|
00000090  14 8c 82 51 30 0a 86 0d  00 00 81 b2 1b 02 07 78  |...Q0..........x|
000000a0  0d 0c 00 00 00 00 49 45  4e 44 ae 42 60 82        |......IEND.B`.|
000000ae

---------- components: Library (Lib) files: phppng.png messages: 260074 nosy: Ramin Farajpour Cami priority: normal severity: normal status: open title: Bypass imghdr module determines the type of image type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file41891/phppng.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 09:37:34 2016 From: report at bugs.python.org (Sebastien Bourdeauducq) Date: Thu, 11 Feb 2016 14:37:34 +0000 Subject: [New-bugs-announce] [issue26338] remove duplicate bind addresses in create_server Message-ID: <1455201454.94.0.572810493705.issue26338@psf.upfronthosting.co.za> New submission from Sebastien Bourdeauducq: https://github.com/python/asyncio/issues/315 New patch attached.
---------- components: asyncio files: asyncio_norebind.diff keywords: patch messages: 260108 nosy: gvanrossum, haypo, sebastien.bourdeauducq, yselivanov priority: normal severity: normal status: open title: remove duplicate bind addresses in create_server versions: Python 3.5 Added file: http://bugs.python.org/file41898/asyncio_norebind.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 10:02:01 2016 From: report at bugs.python.org (sherpa) Date: Thu, 11 Feb 2016 15:02:01 +0000 Subject: [New-bugs-announce] [issue26339] Python rk0.3b1 KeyError: 'config_argparse_rel_path' Message-ID: <1455202921.22.0.411034223357.issue26339@psf.upfronthosting.co.za> New submission from sherpa: Hi, we use python 2.7.3 on Linux SuSe Enterprise 11 x86 64 bits. After installing the python module rk0.3b1, we have this error when rk parse the configuration file: KeyError: 'config_argparse_rel_path' after this function dict.__getitem__(self, key) You will find below the debugger trace: # python -m pdb /opt/python-2.7.3/bin/rk > /opt/python-2.7.3/bin/rk(3)() -> __requires__ = 'rk==0.3b1' (Pdb) cont Traceback (most recent call last): File "/opt/python-2.7.3/lib/python2.7/pdb.py", line 1314, in main pdb._runscript(mainpyfile) File "/opt/python-2.7.3/lib/python2.7/pdb.py", line 1233, in _runscript self.run(statement) File "/opt/python-2.7.3/lib/python2.7/bdb.py", line 387, in run exec cmd in globals, locals File "", line 1, in File "/opt/python-2.7.3/bin/rk", line 3, in __requires__ = 'rk==0.3b1' File "build/bdist.linux-x86_64/egg/rk/rk.py", line 253, in main create_dictionaries() File "build/bdist.linux-x86_64/egg/rk/rk.py", line 25, in create_dictionaries config_argparse_rel_path = config["config_argparse_rel_path"] File "/opt/python-2.7.3/lib/python2.7/site-packages/configobj.py", line 554, in __getitem__ val = dict.__getitem__(self, key) KeyError: 'config_argparse_rel_path' Uncaught exception. 
Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /opt/python-2.7.3/lib/python2.7/site-packages/configobj.py(554)__getitem__()
-> val = dict.__getitem__(self, key)
(Pdb)

----------
components: Installation
files: pdb_rk0.3b1.txt
messages: 260109
nosy: sherpa
priority: normal
severity: normal
status: open
title: Python rk0.3b1 KeyError: 'config_argparse_rel_path'
type: behavior
versions: Python 2.7
Added file: http://bugs.python.org/file41899/pdb_rk0.3b1.txt

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Feb 11 12:05:42 2016
From: report at bugs.python.org (vs)
Date: Thu, 11 Feb 2016 17:05:42 +0000
Subject: [New-bugs-announce] [issue26340] modal dialog with transient method; parent window fails to iconify
Message-ID: <1455210342.9.0.230352859774.issue26340@psf.upfronthosting.co.za>

New submission from vs:

Consider the following code implementing a custom modal dialog:

    from Tkinter import *

    def dialog():
        win = Toplevel()
        Label(win, text='Modal Dialog').pack()
        win.transient(win.master)
        win.focus_set()
        win.grab_set()
        win.wait_window()

    root = Tk()
    Button(root, text='Custom Dialog', command=dialog).pack()
    root.mainloop()

In Python 2.7.3, the parent window behaves as expected when the modal dialog is active. That is, the parent window responds to minimize, maximize, and close events. The modal dialog window iconifies and closes together with the parent window. If a user presses Show Desktop button in Windows OS, the parent and the dialog iconify together and can be restored from the Taskbar. However, in more recent Python releases (I tested 2.7.8, 2.7.11 and 3.5.1), the parent window does not respond to any of the three window commands. If the modal dialog is open and the user presses Show Desktop, both windows iconify, but they CANNOT be restored or closed from the Taskbar. The only way to close such an application is to kill it through the Task Manager.
---------- components: Tkinter, Windows messages: 260114 nosy: paul.moore, steve.dower, tim.golden, vs, zach.ware priority: normal severity: normal status: open title: modal dialog with transient method; parent window fails to iconify type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 14:22:36 2016 From: report at bugs.python.org (Yury Selivanov) Date: Thu, 11 Feb 2016 19:22:36 +0000 Subject: [New-bugs-announce] [issue26341] Implement free-list for single-digit longs Message-ID: <1455218556.74.0.414585930827.issue26341@psf.upfronthosting.co.za> New submission from Yury Selivanov: The attached patch implements a free-list for single-digit longs. We already have free lists for many fundamental types, such as floats & unicode. The patch improves performance in micro-benchmarks by 10-20%. It'll also lessen memory fragmentation issues. == Benchmarks == ### spectral_norm ### Min: 0.268018 -> 0.245042: 1.09x faster Avg: 0.289548 -> 0.257861: 1.12x faster Significant (t=18.82) Stddev: 0.01004 -> 0.00640: 1.5680x smaller -m timeit -s "loops=tuple(range(1000))" "for x in loops: x+x" with patch: 34.5 usec without patch: 45.9 usec == Why only single-digit? == I've also a patch that implements free-lists for 1-digit, 2-digits and 3-digits longs, and collects statistics on them. 
It looks like we only want to optimize 1-digit longs: * 2to3 benchmark (the first number is the number of all N-digit longs, the second number is the number of longs created via free-list): ===> d1_longs = 142384 124759 ===> d2_longs = 6872 6264 ===> d3_longs = 2907 2834 * richards: ===> d1_longs = 219630 219033 ===> d2_longs = 1455 1096 ===> d3_longs = 630 627 * spectral_norm: ===> d1_longs = 133928432 133927838 ===> d2_longs = 1471 1113 ===> d3_longs = 630 627 ---------- assignee: yselivanov components: Interpreter Core files: long_fl.patch keywords: patch messages: 260124 nosy: haypo, mark.dickinson, serhiy.storchaka, yselivanov priority: normal severity: normal stage: patch review status: open title: Implement free-list for single-digit longs type: performance versions: Python 3.6 Added file: http://bugs.python.org/file41900/long_fl.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 14:33:58 2016 From: report at bugs.python.org (Yury Selivanov) Date: Thu, 11 Feb 2016 19:33:58 +0000 Subject: [New-bugs-announce] [issue26342] Faster bit ops for single-digit positive longs Message-ID: <1455219238.31.0.545521585495.issue26342@psf.upfronthosting.co.za> New submission from Yury Selivanov: This patch implements a fast path for &, |, and ^ bit operations for single-digit positive longs. We already have fast paths for ~, and pretty much every other long op. 
-m timeit -s "x=21827623" "x&2;x&2;x&2;x&333;x&3;x&3;x&4444;x&4" with patch: 0.181 without patch: 0.403 -m timeit -s "x=21827623" "x|21222;x|23;x|2;x|333;x|3;x|3;x|4444;x|4" with patch: 0.241 without patch: 0.342 -m timeit -s "x=21827623" "x^21222;x^23;x^2;x^333;x^3;x^3;x^4444;x^4" with patch: 0.241 without patch: 0.332 ---------- assignee: yselivanov components: Interpreter Core files: fast_bits.patch keywords: patch messages: 260126 nosy: haypo, mark.dickinson, serhiy.storchaka, yselivanov priority: normal severity: normal stage: patch review status: open title: Faster bit ops for single-digit positive longs type: performance Added file: http://bugs.python.org/file41901/fast_bits.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 17:11:23 2016 From: report at bugs.python.org (Gustavo Goretkin) Date: Thu, 11 Feb 2016 22:11:23 +0000 Subject: [New-bugs-announce] [issue26343] os.O_CLOEXEC not available on OS X Message-ID: <1455228683.06.0.527873241791.issue26343@psf.upfronthosting.co.za> New submission from Gustavo Goretkin: I am on OS X 10.9.5 Python 3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.O_CLOEXEC Traceback (most recent call last): File "", line 1, in AttributeError: module 'os' has no attribute 'O_CLOEXEC' I checked on my system $ man 2 open | grep CLOEXEC O_CLOEXEC mark as close-on-exec The O_CLOEXEC flag causes the file descriptor to be marked as close-on- exec, setting the FD_CLOEXEC flag. 
The state of the file descriptor I first noticed this on an anaconda distribution of python, but it looks like it is also present on the 3.5 .dmg file on https://www.python.org/downloads/ ---------- components: Macintosh messages: 260135 nosy: Gustavo Goretkin, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: os.O_CLOEXEC not available on OS X versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 17:39:43 2016 From: report at bugs.python.org (Wolfgang Richter) Date: Thu, 11 Feb 2016 22:39:43 +0000 Subject: [New-bugs-announce] [issue26344] `sys.meta_path` Skipped for Packages with Non-Standard Suffixed `__init__` Files Message-ID: <1455230383.33.0.948611163745.issue26344@psf.upfronthosting.co.za> New submission from Wolfgang Richter: My understanding of `sys.meta_path` is that it is supposed to allow customized loading of Python modules and packages. In fact the `importlib` machinery appears to have support for identifying packages with `__init__` files with non-standard suffixes: https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap_external.py#L645 https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap_external.py#L1233 However, I find that when I `import wolftest` inside a folder with structure: ./ /wolftest /__init__.wolf /something.wolf None of my sys.meta_path finders are called at all, and instead a namespace is returned. I was wondering why the `import` statement appears to short-circuit and not check with `sys.meta_path` handlers in this case? 
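Issue 26344 above concerns `sys.meta_path` finders being skipped. For reference, a minimal finder/loader pair shows the interface involved when such a hook *is* consulted; the module name `wolfdemo` and its `answer` attribute are made up for this demo:

```python
import importlib.abc
import importlib.util
import sys
import types

class WolfLoader(importlib.abc.Loader):
    """Loader that synthesizes a module entirely in memory."""
    def create_module(self, spec):
        mod = types.ModuleType(spec.name)
        mod.answer = 42          # hypothetical attribute for the demo
        return mod

    def exec_module(self, module):
        pass                     # nothing further to execute

class WolfFinder(importlib.abc.MetaPathFinder):
    """Records every name it is asked about; claims only 'wolfdemo'."""
    def __init__(self):
        self.queried = []

    def find_spec(self, fullname, path, target=None):
        self.queried.append(fullname)
        if fullname == 'wolfdemo':
            return importlib.util.spec_from_loader(fullname, WolfLoader())
        return None              # defer to the remaining finders

finder = WolfFinder()
sys.meta_path.insert(0, finder)  # meta path finders run before path finders

import wolfdemo
print(wolfdemo.answer)           # -> 42
```

In the normal flow, every entry on `sys.meta_path` is asked via `find_spec` before the path-based machinery runs, which is what makes the reported short-circuit surprising.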
---------- components: Library (Lib) files: PyMetaPath.tar.gz messages: 260138 nosy: Wolfgang Richter priority: normal severity: normal status: open title: `sys.meta_path` Skipped for Packages with Non-Standard Suffixed `__init__` Files type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file41902/PyMetaPath.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 20:06:05 2016 From: report at bugs.python.org (Egor Tensin) Date: Fri, 12 Feb 2016 01:06:05 +0000 Subject: [New-bugs-announce] [issue26345] Extra newline appended to UTF-8 strings on Windows Message-ID: <1455239165.25.0.227087852631.issue26345@psf.upfronthosting.co.za> New submission from Egor Tensin: I've come across an issue of Python 3.5.1 appending an extra newline when print()ing non-ASCII strings on Windows. This only happens when the active "code page" is set UTF-8 in cmd.exe: >chcp Active code page: 65001 Now, if I try to print an ASCII character (e.g. LATIN CAPITAL LETTER A), everything works fine: >python -c "print(chr(0x41))" A > But if I try to print something a little less common (GREEK CAPITAL LETTER ALPHA), something weird happens: >python -c "print(chr(0x391))" ? > For another example, let's try to print CYRILLIC CAPITAL LETTER A: >python -c "print(chr(0x410))" ? > This only happens if the current code page is UTF-8 though. If I change it to something that can represent those characters, everything seems to be working fine. For example, the Greek letter: >chcp 1252 Active code page: 1253 >python -c "print(chr(0x391))" ? > And the Cyrillic letter: >chcp 1251 Active code page: 1251 >python -c "print(chr(0x410))" ? > This also happens if one tries to print a string with a funny character somewhere in it. Sometimes it's even worse: >python -c "print('??????!')" ??????! ??! 
> Look, guys, I know what a mess Unicode handling on Windows is, and I'm not even sure it's Python's fault, I just wanted to make sure I'm not delusional and not making stuff up. Can somebody at least confirm this? Thank you. I'm using x86-64 version of Python 3.5.1 on Windows 8.1. ---------- components: Unicode, Windows messages: 260153 nosy: Egor Tensin, ezio.melotti, haypo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Extra newline appended to UTF-8 strings on Windows type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 21:12:33 2016 From: report at bugs.python.org (Anthony Tuininga) Date: Fri, 12 Feb 2016 02:12:33 +0000 Subject: [New-bugs-announce] [issue26346] PySequenceMethods documentation missing sq_slice and sq_ass_slice Message-ID: <1455243153.76.0.688486976715.issue26346@psf.upfronthosting.co.za> New submission from Anthony Tuininga: These methods are completely missing from the documentation found here: https://docs.python.org/3/c-api/typeobj.html ---------- assignee: docs at python components: Documentation messages: 260154 nosy: atuining, docs at python priority: normal severity: normal status: open title: PySequenceMethods documentation missing sq_slice and sq_ass_slice versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 11 22:33:09 2016 From: report at bugs.python.org (Frederick Wagner) Date: Fri, 12 Feb 2016 03:33:09 +0000 Subject: [New-bugs-announce] [issue26347] BoundArguments.apply_defaults doesn't handle empty arguments Message-ID: <1455247989.9.0.159596346531.issue26347@psf.upfronthosting.co.za> New submission from Frederick Wagner: [First-time contributor, feedback appreciated.] Found and fixed some unexpected behavior in inspect.BoundArguments.apply_defaults. 
Simplest explanation is the following test that I added:

    # Make sure a no-args binding still acquires proper defaults.
    def foo(a='spam'):
        pass
    sig = inspect.signature(foo)
    ba = sig.bind()
    ba.apply_defaults()
    self.assertEqual(list(ba.arguments.items()), [('a', 'spam')])

I've included the patch file; is there anything else I can do?

----------
components: Library (Lib), Tests
files: 0001-Make-apply_defaults-work-for-empty-arguments.patch
keywords: patch
messages: 260157
nosy: Frederick Wagner
priority: normal
severity: normal
status: open
title: BoundArguments.apply_defaults doesn't handle empty arguments
type: behavior
versions: Python 3.5, Python 3.6
Added file: http://bugs.python.org/file41906/0001-Make-apply_defaults-work-for-empty-arguments.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 12 08:08:59 2016
From: report at bugs.python.org (Dan McCombs)
Date: Fri, 12 Feb 2016 13:08:59 +0000
Subject: [New-bugs-announce] [issue26348] activate.fish sets VENV prompt incorrectly
Message-ID: <1455282539.52.0.733684318611.issue26348@psf.upfronthosting.co.za>

New submission from Dan McCombs:

Currently, the activate.fish VENV script sets fish's prompt to be prepended with the literal string "__VENV_PROMPT__" rather than the contents of the $__VENV_PROMPT__ variable as intended. The attached patch simply adds the missing "$" to the variable in the conditional test and in its usage, so the prompt is only set if the variable is non-empty, rather than testing whether the literal string "__VENV_PROMPT__" is non-empty as it does right now. This results in the prompt being correctly prepended with "(my_actual_venv_name)".
----------
components: Library (Lib)
files: activate-fish.patch
keywords: patch
messages: 260175
nosy: Dan McCombs
priority: normal
severity: normal
status: open
title: activate.fish sets VENV prompt incorrectly
type: behavior
versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6
Added file: http://bugs.python.org/file41907/activate-fish.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 12 08:27:33 2016
From: report at bugs.python.org (Thomas Führinger)
Date: Fri, 12 Feb 2016 13:27:33 +0000
Subject: [New-bugs-announce] [issue26349] Ship python35.lib with the embedded distribution, please
Message-ID: <1455283653.75.0.831667056004.issue26349@psf.upfronthosting.co.za>

New submission from Thomas Führinger:

I would like to use the new embedded distribution and load-time link to python35.dll. It would make things a lot easier if you could also include python35.lib in the file python-3.5.1-embed-win32.zip.

----------
components: Installation
messages: 260176
nosy: Thomas Führinger
priority: normal
severity: normal
status: open
title: Ship python35.lib with the embedded distribution, please
type: enhancement
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 12 09:01:43 2016
From: report at bugs.python.org (Giampaolo Rodola')
Date: Fri, 12 Feb 2016 14:01:43 +0000
Subject: [New-bugs-announce] [issue26350] Windows: signal doc should state certain signals can't be registered
Message-ID: <1455285703.05.0.44078516142.issue26350@psf.upfronthosting.co.za>

New submission from Giampaolo Rodola':

I'm not sure whether this is a bug with the signal.signal doc or with the function itself; anyway, here goes. On UNIX I'm able to register a signal handler for SIGTERM which is executed when the signal is received.
On Windows I'm able to register it but it will never be executed:

    import os, signal

    def foo(*args):
        print("foo")  # this never gets printed on Windows

    signal.signal(signal.SIGTERM, foo)
    os.kill(os.getpid(), signal.SIGTERM)

I think the signal module doc should be clearer about this. In detail: if it is possible to execute a function on SIGTERM, it should explain how. If not (and AFAIK it's not possible), it should state that "signal.signal(signal.SIGTERM, foo)" on Windows is a NOOP. Note: I'm probably missing something, but the same thing applies for SIGINT and possibly all other signals, so I'm not sure why Windows has signal.signal in the first place. What's its use case on Windows?

----------
messages: 260179
nosy: giampaolo.rodola
priority: normal
severity: normal
status: open
title: Windows: signal doc should state certain signals can't be registered

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 12 12:47:32 2016
From: report at bugs.python.org (Steven D'Aprano)
Date: Fri, 12 Feb 2016 17:47:32 +0000
Subject: [New-bugs-announce] [issue26351] Occasionally check for Ctrl-C in long-running operations like sum
Message-ID: <1455299252.18.0.421982887455.issue26351@psf.upfronthosting.co.za>

New submission from Steven D'Aprano:

There are a few operations such as summing or unpacking infinite iterators where the interpreter can become unresponsive and ignore Ctrl-C KeyboardInterrupt.
Guido suggests that such places should occasionally check for signals: https://mail.python.org/pipermail/python-ideas/2016-February/038426.html ---------- components: Interpreter Core messages: 260189 nosy: steven.daprano priority: normal severity: normal status: open title: Occasionally check for Ctrl-C in long-running operations like sum type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 12 16:10:50 2016 From: report at bugs.python.org (Matt Hooks) Date: Fri, 12 Feb 2016 21:10:50 +0000 Subject: [New-bugs-announce] [issue26352] getpass incorrectly displays password prompt on stderr on fallback Message-ID: <1455311450.52.0.299309264077.issue26352@psf.upfronthosting.co.za> New submission from Matt Hooks: When calling getpass.getpass(), certain circumstances cause it to fallback to getpass.fallback_getpass, such as when swapping out sys.stdin for another object in a unit test. In such a circumstance, fallback_getpass may be called with stream=None when getpass itself was called without a stream. fallback_getpass needs a stream to write the "Warning: Password input may be echoed" warning to, and reasonably chooses stderr. However, this choice persists down into getpass._raw_input, where the user is then shown the password prompt on stderr as well. Instead of on stderr, the user should get the password prompt on stdout. tl;dr: Some calls to getpass.getpass result in a password prompt on stderr. Bad behavior: password prompt displayed over stderr. Expected behavior: password prompt should be on stdout. Found in 3.4, looks like it's the same in 3.5. I will attach a patch in a few moments, after I get my dev environment sorted out. 
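The behaviour issue 26352 proposes can be sketched with a stream-parameterized stand-in: the echo warning goes to the error stream, while the prompt itself goes to standard output. This is an illustration, not the actual `getpass` code; `fallback_prompt` is a hypothetical name:

```python
import io
import sys

def fallback_prompt(prompt='Password: ', out=None, err=None, reader=None):
    """Illustrative fallback: warn on stderr, but prompt on stdout."""
    out = out or sys.stdout
    err = err or sys.stderr
    reader = reader or sys.stdin
    err.write('Warning: Password input may be echoed.\n')
    out.write(prompt)        # prompt on stdout, per the proposed fix
    out.flush()
    return reader.readline().rstrip('\n')

out, err = io.StringIO(), io.StringIO()
pw = fallback_prompt(out=out, err=err, reader=io.StringIO('hunter2\n'))
print(pw)                    # -> hunter2
```

Separating the two streams this way also keeps the prompt visible when a caller redirects stderr.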
---------- components: Library (Lib) messages: 260202 nosy: Matt Hooks priority: normal severity: normal status: open title: getpass incorrectly displays password prompt on stderr on fallback type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 12 17:08:12 2016 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 12 Feb 2016 22:08:12 +0000 Subject: [New-bugs-announce] [issue26353] IDLE: Saving Shell should not add \n Message-ID: <1455314892.8.0.4844402262.issue26353@psf.upfronthosting.co.za> New submission from Terry J. Reedy: When one saves the IDLE Shell window, at least when the cursor is at the prompt, \n is added, so that >>> | changes to >>> | This seems wrong. It does not happen in editor windows. I should check Output Windows. ---------- messages: 260205 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Saving Shell should not add \n type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 12 19:19:24 2016 From: report at bugs.python.org (Magesh Kumar) Date: Sat, 13 Feb 2016 00:19:24 +0000 Subject: [New-bugs-announce] [issue26354] re.I does not work as expected Message-ID: <1455322764.68.0.816067707271.issue26354@psf.upfronthosting.co.za> New submission from Magesh Kumar: I am in the process of re.sub the tag with empty string from a xml output line. If "re.I" is used, I am not able to remove the complete tag. 
======================================================================== >>> a 'ype="str">falseDefaultMulticastClient>> b = re.sub('\<\/?item(\s+type="dict")?\>', '', a, re.I) >>> b 'ype="str">falseDefaultMulticastClient>> b = re.sub('\<\/?item(\s+type="dict")?\>', '', a) >>> b 'ype="str">falseDefaultMulticastClient _______________________________________ From report at bugs.python.org Sat Feb 13 03:52:19 2016 From: report at bugs.python.org (Nick Coghlan) Date: Sat, 13 Feb 2016 08:52:19 +0000 Subject: [New-bugs-announce] [issue26355] Emit major version based canonical URLs for docs Message-ID: <1455353539.4.0.541474968967.issue26355@psf.upfronthosting.co.za> New submission from Nick Coghlan: Based on a recent comment on one of the mailing lists, I spent a bit of time looking into canonical URLs and their implications for how search engines treat versioned documentation. The most relevant resource for that in relation to the CPython docs appears to be this page on ReadTheDocs: http://docs.readthedocs.org/en/latest/canonical.html I sometimes encounter direct links to particular 3.x versions in search results, so I'm wondering if it might be desirable to set up the Python 3 docs to report their canonical URL as "/3/", and the Python 2 docs as "/2/". 
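Issue 26354 above ("re.I does not work as expected") is most likely not a regex bug but the classic pitfall of passing `re.I` as `re.sub()`'s fourth *positional* argument, which is `count`, not `flags`. A sketch with a reconstructed sample string (the original XML in the archive is garbled, so this input is illustrative):

```python
import re

s = '<item type="dict">a</item><item>b</item><item>c</item>'

# re.I is the integer 2, so passed positionally it lands in the
# `count` parameter: only the first two matches are replaced.
wrong = re.sub(r'</?item(\s+type="dict")?>', '', s, re.I)

# Passing flags by keyword behaves as intended.
right = re.sub(r'</?item(\s+type="dict")?>', '', s, flags=re.I)

print(wrong)   # -> a<item>b</item><item>c</item>
print(right)   # -> abc
```

This matches the reporter's symptom exactly: with `re.I` in the call, the tag is only partially removed; without it, removal is complete.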
---------- messages: 260227 nosy: georg.brandl, ncoghlan priority: normal severity: normal stage: needs patch status: open title: Emit major version based canonical URLs for docs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 13 15:49:20 2016 From: report at bugs.python.org (Haroun Rauni) Date: Sat, 13 Feb 2016 20:49:20 +0000 Subject: [New-bugs-announce] [issue26356] Registration Message-ID: New submission from Haroun Rauni: Yes ---------- messages: 260249 nosy: harounrauni priority: normal severity: normal status: open title: Registration _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 13 16:17:53 2016 From: report at bugs.python.org (=?utf-8?q?Andr=C3=A9_Caron?=) Date: Sat, 13 Feb 2016 21:17:53 +0000 Subject: [New-bugs-announce] [issue26357] asyncio.wait loses coroutine return value Message-ID: <1455398273.27.0.148553648829.issue26357@psf.upfronthosting.co.za> New submission from Andr? Caron: When the asyncio.wait() function with coroutine objects as inputs, it is impossbible to extract the results reliably. This issue arises because asyncio.wait() returns an unordered set of futures against which membership tests of coroutine objects always fail. The issue is made even worse by the fact that coroutine objects cannot be awaited multiple times (see https://bugs.python.org/issue25887). Any await expression on the coroutine object after the call to asyncio.wait() returns None, regardless of the coroutine's return value. (See attached `asyncio-wait-coroutine.py` for an example of both these issues.) In the worst case, multiple inputs are coroutine objects and the set of returned futures contains return values but there is no way to determine which result corresponds to which coroutine call. 
To work around this issue, callers need to explicitly use asyncio.ensure_future() on coroutine objects before calling asyncio.wait(). (See comment in `asyncio-wait-coroutine.py` for an example "fix"). Note that, in general, it is not possible to know ahead of time whether all inputs to asyncio.wait() are coroutines or futures. Furthermore, the fact that a given third-party library function is implemented as a regular function that returns a Future or a proper coroutine is an implementation decision which may not be part of the public interface. Even if it is, the inputs to asyncio.wait() may come from complex code paths and it may be difficult to verify that all of them end up producing a Future. As a consequence, the only reliable way to recover all results from asyncio.wait() is to explicitly call asyncio.ensure_future() on each of the inputs. When doing so, both the membership test against the `done` set and the await expressions work as expected. Quickly, there are several possible solutions: - allow programs to await coroutine multiple times; - make the set membership test of a coroutine object succeed; or - change support for coroutine objects as inputs to asyncio.wait(): ** update documentation for asyncio.wait() to explain this limitation; or ** explicitly reject coroutine objects; or ** warn when passing coroutine objects as inputs -- unless wrapped. Related issue: https://bugs.python.org/issue25887 proposes a patch to change the behaviour of awaiting a coroutine object multiple times in order to produce a RuntimeError on all awaits after the 1st. While that change in behaviour would make it easier to diagnose the loss of the return value, it does not fix this issue. ---------- components: asyncio files: asyncio-wait-coroutine.py messages: 260250 nosy: Andr? 
Caron, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: asyncio.wait loses coroutine return value versions: Python 3.5 Added file: http://bugs.python.org/file41914/asyncio-wait-coroutine.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 13 22:37:46 2016 From: report at bugs.python.org (Antti Haapala) Date: Sun, 14 Feb 2016 03:37:46 +0000 Subject: [New-bugs-announce] [issue26358] mmap.mmap.__iter__ is broken (yields bytes instead of ints) Message-ID: <1455421066.19.0.307034706486.issue26358@psf.upfronthosting.co.za> New submission from Antti Haapala: Just noticed when answering a question on StackOverflow (http://stackoverflow.com/q/35387843/918959) that on Python 3 iterating over a mmap object yields individual bytes as bytes objects, even though iterating over slices, indexing and so on gives ints Example: import mmap with open('test.dat', 'rb') as f: mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) for b in mm: print(b) # prints for example b'A' instead of 65 mm.close() I believe this should be fixed for the sake of completeness - the documentation says that "Memory-mapped file objects behave like both bytearray and like file objects." - however the current behaviour is neither like a bytearray nor like a file object, and quite confusing. Similarly the `in` operator seems to be broken; one could search for space using `32 in bytesobj`, which would work for slices but not for the whole mmap object. 
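The bytearray-like behaviour issue 26358 refers to is easy to demonstrate for indexing and slicing (the temporary file here is just scaffolding; the iteration behaviour itself is the subject of the report, so it is not asserted):

```python
import mmap
import os
import tempfile

# Create a small file to map; the contents are arbitrary.
fd, path = tempfile.mkstemp()
os.write(fd, b'ABC')

mm = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
first_item = mm[0]      # integer indexing yields an int, like bytearray
first_slice = mm[0:1]   # slicing yields a bytes object
whole = mm[:]
mm.close()
os.close(fd)
os.unlink(path)

print(first_item, first_slice, whole)   # -> 65 b'A' b'ABC'
```

Given that `mm[0]` is an int while iteration yields 1-byte bytes objects, membership tests such as `65 in mm` behave inconsistently with `65 in mm[:]`, which is the inconsistency the report describes.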
---------- messages: 260261 nosy: ztane priority: normal severity: normal status: open title: mmap.mmap.__iter__ is broken (yields bytes instead of ints) type: behavior versions: Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 14 05:28:13 2016 From: report at bugs.python.org (Alecsandru Patrascu) Date: Sun, 14 Feb 2016 10:28:13 +0000 Subject: [New-bugs-announce] [issue26359] CPython build options for out-of-the box performance Message-ID: <1455445693.01.0.984013993837.issue26359@psf.upfronthosting.co.za> New submission from Alecsandru Patrascu: Hello, This is Alecsandru from the Dynamic Scripting Languages Optimization Team at Intel Corporation. I would like to submit a patch as a response to the discussion thread opened in Python-Dev (https://mail.python.org/pipermail/python-dev/2016-February/143215.html), regarding the way CPython is built, mainly the options that are available to the programmers. Analyzing the CPython ecosystem we can see that there are a lot of users that just download the sources and hit the commands "./configure", "make" and "make install" once and then continue using it with their Python scripts. One of the problems with this workflow it that the users do not benefit from the entire optimization features that are existing in the build system, such as PGO and LTO. Therefore, I propose a workflow, like the following. Assume some work has to be done into the CPython interpreter, a developer can do the following steps: A. Implementation and debugging phase. 1. The command "./configure PYDIST=debug" is ran once. It will enable the Py_DEBUG, -O0 and -g flags 2. The command "make" is ran once or multiple times B. Testing the implementation from step A, in a pre-release environment 1. The command "./configure PYDIST=devel" is ran once. 
It will disable the Py_DEBUG flags and will enable the -O3 and -g flags, and it is just like the current implementation in CPython 2. The command "make" is ran once or multiple times C. For any other CPython usage, for example distributing the interpreter, installing it inside an operating system, or just the majority of users who are not CPython developers and only want to compile it once and use it as-is: 1. The command "./configure" is ran once. Alternatively, the command "./configure PYDIST=release" can be used. It will disable all debugging functionality, enable the -O3 flag and will enable PGO and LTO. 2. The command "make" is ran once Thank you, Alecsandru ---------- components: Build files: cpython2_pgo_default-v01.patch keywords: patch messages: 260271 nosy: alecsandru.patrascu priority: normal severity: normal status: open title: CPython build options for out-of-the box performance type: performance versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41918/cpython2_pgo_default-v01.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 14 06:38:44 2016 From: report at bugs.python.org (Mark Dickinson) Date: Sun, 14 Feb 2016 11:38:44 +0000 Subject: [New-bugs-announce] [issue26360] Deadlock in thread.join Message-ID: <1455449924.98.0.400633064774.issue26360@psf.upfronthosting.co.za> New submission from Mark Dickinson: On OS X (10.9.5), I'm getting an apparent deadlock in the following simple Python script: #-------------------------------------------------------- import itertools import threading def is_prime(n): return n >= 2 and all(n % d for d in xrange(2, n)) def count_primes_in_range(start, stop): return sum(is_prime(n) for n in xrange(start, stop)) def main(): threads = [ threading.Thread( target=count_primes_in_range, args=(12500*i, 12500*(i+1)) ) for i in xrange(8) ] for thread in threads: thread.start() for thread in threads: 
thread.join() if __name__ == '__main__': for i in itertools.count(): print "Iteration: ", i main() #-------------------------------------------------------- Each iteration takes around 60 seconds, and I typically get a deadlock within the first 5 iterations. It looks as though the deadlock happens during the "thread.join", at a stage where some of the threads have already completed and been joined. The code hangs with no CPU activity, and a backtrace (attached) shows that all the background threads are waiting for the GIL, while the main thread is in a blocking `thread.join` call. I've attached a gdb-generated stack trace. I was unable to reproduce this with a debug build of Python. I *have* reproduced with a normal build of Python, and on various Python 2.7 executables from 3rd party sources (Apple, Macports, Enthought Canopy). I've also not yet managed to reproduce on Python 3, but I haven't tried that hard. I suspect it's a Python 2-only problem, though. (And yes, this is horrible code that doesn't make much sense. It's actually a cut-down example from a "how not to do it" part of a course on concurrency. Nevertheless, it shouldn't be deadlocking.) ---------- components: Interpreter Core files: hang_backtrace.txt messages: 260273 nosy: mark.dickinson priority: normal severity: normal status: open title: Deadlock in thread.join type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file41920/hang_backtrace.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 14 18:09:13 2016 From: report at bugs.python.org (Samuel Ainsworth) Date: Sun, 14 Feb 2016 23:09:13 +0000 Subject: [New-bugs-announce] [issue26361] lambda in dict comprehension is broken Message-ID: <1455491353.47.0.497218626994.issue26361@psf.upfronthosting.co.za> New submission from Samuel Ainsworth: If we construct a dict as follows, x = {t: lambda x: x * t for t in range(5)} then `x[0](1)` evaluates to 4! 
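This is Python's standard late-binding closure behaviour rather than a dict-comprehension bug: every lambda closes over the variable t itself, not its value at creation time, so all of them see the final value of t. A minimal sketch of the behaviour and of the usual default-argument workaround:

```python
# Late binding: each lambda looks up t when *called*, after the
# comprehension has finished, so every entry multiplies by 4.
late = {t: (lambda x: x * t) for t in range(5)}
print(late[0](1))  # 4

# Workaround: bind the current value of t as a default argument.
early = {t: (lambda x, t=t: x * t) for t in range(5)}
print(early[0](1))  # 0
print(early[3](2))  # 6
```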
Tested on 2.7 and 3.5. ---------- messages: 260287 nosy: Samuel.Ainsworth priority: normal severity: normal status: open title: lambda in dict comprehension is broken type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 14 21:11:11 2016 From: report at bugs.python.org (Ben Finney) Date: Mon, 15 Feb 2016 02:11:11 +0000 Subject: [New-bugs-announce] [issue26362] Approved API for creating a temporary file path Message-ID: <1455502271.6.0.558905453707.issue26362@psf.upfronthosting.co.za> New submission from Ben Finney: The security issues of `tempfile.mktemp` are clear when the return value is used to create a filesystem entry. The documentation and docstrings (and even some comments on past issues) are correct to deprecate its use for that purpose. The function has a use which does not have security implications: generating test data. When a test case wants to generate unpredictable, unique, valid filesystem paths - and *never access those paths* on the filesystem - the `tempfile.mktemp` function is right there and is very useful. The `tempfile._RandomNameSequence` class would also be useful, but its name also makes clear that it is not part of the library public API. Please make that functionality available for the purpose of *only* generating filesystem paths as `tempfile._RandomNameSequence` does, in a public, supported, non-deprecated API.
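A minimal illustration of the use the report describes: mktemp() only computes a candidate path and touches nothing on disk (which is also exactly why opening the returned path later is racy):

```python
import os.path
import tempfile

# mktemp() returns a unique-looking path but creates no filesystem
# entry; this is safe only if the path is never actually opened.
path = tempfile.mktemp(suffix=".txt")
print(path)
assert not os.path.exists(path)
```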
---------- components: Library (Lib) messages: 260291 nosy: bignose priority: normal severity: normal status: open title: Approved API for creating a temporary file path type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 05:35:57 2016 From: report at bugs.python.org (Xavier Combelle) Date: Mon, 15 Feb 2016 10:35:57 +0000 Subject: [New-bugs-announce] [issue26363] builtins propagation is misleading described in exec and eval documentation Message-ID: <1455532557.86.0.044307885123.issue26363@psf.upfronthosting.co.za> Changes by Xavier Combelle : ---------- assignee: docs at python components: Documentation nosy: docs at python, xcombelle priority: normal severity: normal status: open title: builtins propagation is misleading described in exec and eval documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 06:01:54 2016 From: report at bugs.python.org (Barry Scott) Date: Mon, 15 Feb 2016 11:01:54 +0000 Subject: [New-bugs-announce] [issue26364] pip uses colour in messages that does not work on white terminals Message-ID: <1455534114.36.0.799410529747.issue26364@psf.upfronthosting.co.za> New submission from Barry Scott: pip3 (3.5 on Mac OS X) is outputting a message in yellow that I can barely see on a white background terminal. "You are using pip version 7.1.2, however version 8.0.2 is available. You should consider upgrading via the 'pip install --upgrade pip' command." I suggest that pip only uses text output. But if the pip developers insist on using colour then set both foreground and background colours. 
---------- messages: 260307 nosy: barry.scott priority: normal severity: normal status: open title: pip uses colour in messages that does not work on white terminals type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 08:25:05 2016 From: report at bugs.python.org (Ben Kummer) Date: Mon, 15 Feb 2016 13:25:05 +0000 Subject: [New-bugs-announce] [issue26365] ntpath.py Error in Windows Message-ID: <1455542705.33.0.354690816824.issue26365@psf.upfronthosting.co.za> New submission from Ben Kummer: ntpath.py throws an error in Python2.7.11 Windows Code snippet: product_dir ="/zope/eggs43" my_tuple= os.path.split(product_dir)[:-1] product_prefix = os.path.join(my_tuple ) The same code works in python 2.7.11 under Linux Traceback: C:\zope\selenium_test>c:\Python27\python.exe python_bug_reproduce.py Traceback (most recent call last): File "python_bug_reproduce.py", line 10, in main() File "python_bug_reproduce.py", line 7, in main product_prefix = os.path.join(my_tuple ) File "c:\Python27\lib\ntpath.py", line 90, in join return result_drive + result_path TypeError: cannot concatenate 'str' and 'tuple' objects code to reproduce: #!/usr/bin/python import os def main(): product_dir ="/zope/eggs43" my_tuple= os.path.split(product_dir)[:-1] product_prefix = os.path.join(my_tuple ) if __name__ == "__main__": main() ---------- components: Windows files: python_bug_reproduce.py messages: 260310 nosy: ben.kummer, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ntpath.py Error in Windows type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file41928/python_bug_reproduce.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 10:49:19 2016 From: report at bugs.python.org (Tony R.)
Date: Mon, 15 Feb 2016 15:49:19 +0000 Subject: [New-bugs-announce] [issue26366] Use ".. versionadded" over ".. versionchanged" where appropriate Message-ID: <1455551359.29.0.702516130467.issue26366@psf.upfronthosting.co.za> New submission from Tony R.: In the documentation, I noticed several uses of ``.. versionchanged::`` that described things which had been added. I love Python, and its documentation, and I wanted to contribute. So, I figured a low-risk contribution would be to change ``.. versionchanged::`` to ".. versionadded" where appropriate. (I also tweaked the descriptions accordingly. E.g., "Added the *x* argument" became "The *x* argument", because it's unnecessary to say "added" in a description under ".. versionadded".) I did also make a few unrelated tweaks along the way--all very minor, and related to phrasing or formatting. Please let me know if I can do anything else to get this merged! ---------- assignee: docs at python components: Documentation files: _docs-version-markup.patch keywords: patch messages: 260313 nosy: Tony R., docs at python priority: normal severity: normal status: open title: Use ".. versionadded" over ".. versionchanged" where appropriate type: enhancement Added file: http://bugs.python.org/file41929/_docs-version-markup.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 19:59:01 2016 From: report at bugs.python.org (Manuel Jacob) Date: Tue, 16 Feb 2016 00:59:01 +0000 Subject: [New-bugs-announce] [issue26367] importlib.__import__ does not fail for invalid relative import Message-ID: <1455584341.33.0.058581047341.issue26367@psf.upfronthosting.co.za> New submission from Manuel Jacob: Python 3.6.0a0 (default:6c6f7dff597b, Feb 16 2016, 01:24:51) [GCC 5.3.0] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib >>> importlib.__import__('array', globals(), locals(), level=1) >>> __import__('array', globals(), locals(), level=1) Traceback (most recent call last): File "", line 1, in ImportError: attempted relative import with no known parent package Instead of failing, importlib.__import__ returns a module with a wrong name. This happens with both built-in modules and pure python modules. However it fails when replacing 'array' with 'time' (this seems to be related to whether the module is in Modules/Setup.dist). ---------- messages: 260338 nosy: brett.cannon, eric.snow, mjacob, ncoghlan priority: normal severity: normal status: open title: importlib.__import__ does not fail for invalid relative import type: behavior versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 15 20:44:15 2016 From: report at bugs.python.org (Ryan Stuart) Date: Tue, 16 Feb 2016 01:44:15 +0000 Subject: [New-bugs-announce] [issue26368] grammatical error in documentation Message-ID: <1455587055.9.0.191263906998.issue26368@psf.upfronthosting.co.za> New submission from Ryan Stuart: The note for 18.5.5.1. Stream functions is missing a word. It should read "Note The top-level functions in this module are meant **as** convenience wrappers only; there's really nothing special there, and if they don't do exactly what you want, feel free to copy their code."
---------- assignee: docs at python components: Documentation files: asyncio-stream_docs.patch keywords: patch messages: 260339 nosy: Ryan Stuart, docs at python priority: normal severity: normal status: open title: grammatical error in documentation versions: Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41931/asyncio-stream_docs.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 16 06:58:58 2016 From: report at bugs.python.org (Ben Spiller) Date: Tue, 16 Feb 2016 11:58:58 +0000 Subject: [New-bugs-announce] [issue26369] doc for unicode.decode and str.encode is unnecessarily confusing Message-ID: <1455623938.04.0.928922751587.issue26369@psf.upfronthosting.co.za> New submission from Ben Spiller: It's well known that lots of people struggle writing correct programs using non-ascii strings in python 2.x, but I think one of the main reasons for this could be very easily fixed with a small addition to the documentation for str.encode and unicode.decode, which is currently quite vague. The decode/encode methods really make most sense when called on a unicode string i.e. unicode.encode() to produce a byte string, or on a byte string e.g. str.decode() to produce a unicode object from a byte string. However, the additional presence of the opposite methods str.encode() and unicode.decode() is quite confusing, and a frequent source of errors - e.g. calling str.encode('utf-8') first DECODES the str object (which might already be in utf8) to a unicode string **using the default encoding of "ascii"** (!) before ENCODING to a utf-8 byte str as requested, which of course will fail at the first stage with the classic error "UnicodeDecodeError: 'ascii' codec can't decode byte" if there are any non-ascii chars present. 
It's unfortunate that this initial decode/encode stage ignores both the "encoding" argument (used only for the subsequent encode/decode) and the "errors" argument (commonly used when the programmer is happy with a best-effort conversion e.g. for logging purposes). Anyway, given this behaviour, a lot of time would be saved by a simple sentence on the doc for str.encode()/unicode.decode() essentially warning people that those methods aren't that useful and they probably really intended to use str.decode()/unicode.encode() - the current doc gives absolutely no clue about this extra stage which ignores the input arguments and uses 'ascii' and 'strict'. It might also be worth stating in the documentation that the pattern (u.encode(encoding) if isinstance(u, unicode) else u) can be helpful for cases where you unavoidably have to deal with both kinds of input, since calling str.encode is such a bad idea. In an ideal world I'd love to see the implementation of str.encode/unicode.decode changed to be more useful (i.e. instead of using ascii, it would be more logical and useful to use the passed-in encoding to perform the initial decode/encode, and the passed-in 'errors' value). I wasn't sure if that change would be accepted so for now I'm proposing better documentation of the existing behaviour as a second-best.
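The report concerns Python 2, where str.encode() performs a hidden ascii decode first. Under Python 3 the pair is symmetric (str.encode / bytes.decode), but the codec error it describes is easy to reproduce by decoding with the wrong codec; a small Python 3 sketch of the explicit, unambiguous direction:

```python
# Encoding text with an explicit codec, then decoding it back, is the
# direction the report recommends documenting clearly.
data = "caf\u00e9".encode("utf-8")      # str -> bytes
assert data == b"caf\xc3\xa9"

try:
    data.decode("ascii")                # wrong codec: the classic error
except UnicodeDecodeError as exc:
    print("UnicodeDecodeError:", exc.reason)

print(data.decode("utf-8"))             # café
# A best-effort variant, as with the 'errors' argument discussed above:
print(data.decode("ascii", errors="replace"))
```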
---------- assignee: docs at python components: Documentation messages: 260359 nosy: benspiller, docs at python priority: normal severity: normal status: open title: doc for unicode.decode and str.encode is unnecessarily confusing type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 16 09:02:05 2016 From: report at bugs.python.org (Dima Tisnek) Date: Tue, 16 Feb 2016 14:02:05 +0000 Subject: [New-bugs-announce] [issue26370] shelve filename inconsistent between platforms Message-ID: <1455631325.94.0.590014562337.issue26370@psf.upfronthosting.co.za> New submission from Dima Tisnek: shelve.open("foo.db") creates "foo.db" on Linux and "foo.db.db" on OSX. Something to that effect is even documented: """d = shelve.open(filename) # open, with (g)dbm filename -- no suffix""" and """As a side-effect, an extension may be added to the filename and more than one file may be created.""" Still, it's super-quirky, it's almost as if the message was "don't use shelve." Some ways out: * ValueError if "."
in basename(filename) # a hammer * ValueError if filename.endswith((".db", ".gdbm", ...)) # block only known extensions * strip extension if it's known that underlying library is going to add it * patch underlying library to always use filename verbatim ---------- components: Extension Modules messages: 260362 nosy: Dima.Tisnek priority: normal severity: normal status: open title: shelve filename inconsistent between platforms type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 16 15:47:24 2016 From: report at bugs.python.org (NGG) Date: Tue, 16 Feb 2016 20:47:24 +0000 Subject: [New-bugs-announce] [issue26371] asynchat.async_chat and asyncore.dispatcher_with_send are not thread-safe Message-ID: <1455655644.91.0.36699489167.issue26371@psf.upfronthosting.co.za> New submission from NGG: The initiate_send() method in both asynchat.async_chat and asyncore.dispatcher_with_send is not a thread-safe function, but it can be called from multiple threads. For example if asyncore.loop() runs in a background thread, and the main thread sends a message then both threads will call this. It can result in sending (part of) the message multiple times or not sending part of it.
Possible solution: asyncore.dispatcher_with_send.out_buffer and asynchat.async_chat.producer_fifo should be guarded with a lock (it should be locked even in writable()) ---------- components: Library (Lib) messages: 260367 nosy: ngg priority: normal severity: normal status: open title: asynchat.async_chat and asyncore.dispatcher_with_send are not thread-safe type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 16 20:28:32 2016 From: report at bugs.python.org (Memeplex) Date: Wed, 17 Feb 2016 01:28:32 +0000 Subject: [New-bugs-announce] [issue26372] Popen.communicate not ignoring BrokenPipeError Message-ID: <1455672512.72.0.551664124622.issue26372@psf.upfronthosting.co.za> New submission from Memeplex: When not using a timeout, communicate will raise a BrokenPipeError if the command returns an error code. For example: sp = subprocess.Popen('cat --not-an-option', shell=True, stdin=subprocess.PIPE) time.sleep(.2) sp.communicate(b'hi\n') This isn't consistent with the behavior of communicate with a timeout, which doesn't raise the exception. Moreover, there is even a comment near the point of the exception stating that communicate must ignore BrokenPipeError: def _stdin_write(self, input): if input: try: self.stdin.write(input) except BrokenPipeError: # communicate() must ignore broken pipe error pass .... self.stdin.close() Obviously, the problem is that self.stdin.close() is outside the except clause. 
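One possible shape of the fix (a standalone sketch, not CPython's actual patch) is to guard the close() call the same way the write is guarded:

```python
def stdin_write(stdin, input):
    """Write input to a child's stdin, ignoring broken pipes.

    Standalone sketch of subprocess.Popen._stdin_write with the
    close() call also guarded, as the report suggests.
    """
    if input:
        try:
            stdin.write(input)
        except BrokenPipeError:
            pass  # communicate() must ignore broken pipe errors
    try:
        stdin.close()
    except BrokenPipeError:
        pass
```

With this shape, a child that exits before reading its stdin no longer propagates BrokenPipeError out of communicate(), matching the behaviour of the with-timeout code path.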
---------- components: Library (Lib) messages: 260373 nosy: memeplex priority: normal severity: normal status: open title: Popen.communicate not ignoring BrokenPipeError type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 06:30:21 2016 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Feb 2016 11:30:21 +0000 Subject: [New-bugs-announce] [issue26373] asyncio: add support for async context manager on streams? Message-ID: <1455708621.94.0.460576626878.issue26373@psf.upfronthosting.co.za> New submission from STINNER Victor: While working on the issue #24911 (Context manager of socket.socket is not documented), I recalled that asyncio objects don't support context manager. With the PEP 492, it becomes possible to add support for async context manager for a few asyncio objects. I suggest to start with StreamWriter. Usually, my rationale to decide which object should implement context manager is to check if the destructor can emit ResourceWarning. In asyncio, ResourceWarning is now emitted by transport and event loop destructors: https://docs.python.org/dev/library/asyncio-dev.html#close-transports-and-event-loops ---------- components: asyncio messages: 260394 nosy: gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: asyncio: add support for async context manager on streams? type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 07:26:30 2016 From: report at bugs.python.org (F.D. Sacerdoti) Date: Wed, 17 Feb 2016 12:26:30 +0000 Subject: [New-bugs-announce] [issue26374] concurrent_futures Executor.map semantics better specified in docs Message-ID: <1455711990.13.0.290959896018.issue26374@psf.upfronthosting.co.za> New submission from F.D. 
Sacerdoti: Hello, My colleague and I have both written parallel executors for the concurrent.futures module, and are having an argument, as described in the dialog below. To resolve, I would like to add "order of results is undefined" to disambiguate the docs for "map(func, *iterables, timeout=None)". DISCUSSION Q: Correct semantics to return results out of order? JH: No, breaks API as stated Rebut: order is undefined, concurrent.futures specifies map() returns an iterator, where builtin map returns a list. Q: Does it break the spirit of the module? A: No, I believe one of the best things about doing things async is the dataflow model: do the next thing as soon as its inputs are ready. Q: Should we hold up the caller in all cases when there are stragglers, i.e. elements that compute slower? A: No, the interface should allow both modes. def james_map(exe, fn, *args): return iter( sorted( list( exe.map( fn, *args ) ) ) ) ---------- messages: 260396 nosy: F.D. Sacerdoti priority: normal severity: normal status: open title: concurrent_futures Executor.map semantics better specified in docs type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 08:39:17 2016 From: report at bugs.python.org (Michal Niklas) Date: Wed, 17 Feb 2016 13:39:17 +0000 Subject: [New-bugs-announce] [issue26375] New versions of Python hangs on imaplib.IMAP4_SSL() Message-ID: <1455716357.67.0.858374272161.issue26375@psf.upfronthosting.co.za> New submission from Michal Niklas: I have an application that imports emails from client IMAP4 mailboxes on home.pl (I think it is a popular provider in Poland). It worked very well up to Python 2.7.9 but with version 2.7.10 it hangs on read() in imaplib.IMAP4_SSL(). On my Fedora 23 I have Python 3.4.3 and 2.7.10.
With Python 3.4.3 such code works as expected: [mn] python3 -c "import imaplib; x=imaplib.IMAP4_SSL('imap.home.pl', 993); print('finish', x)" finish But with Python 2.7.10 it hangs. I pressed Ctrl-C after few seconds: [mn] python -c "import imaplib; x=imaplib.IMAP4_SSL('imap.home.pl', 993); print('finish', x)" ^CTraceback (most recent call last): File "", line 1, in File "/usr/lib64/python2.7/imaplib.py", line 1166, in __init__ IMAP4.__init__(self, host, port) File "/usr/lib64/python2.7/imaplib.py", line 202, in __init__ typ, dat = self.capability() File "/usr/lib64/python2.7/imaplib.py", line 374, in capability typ, dat = self._simple_command(name) File "/usr/lib64/python2.7/imaplib.py", line 1088, in _simple_command return self._command_complete(name, self._command(name, *args)) File "/usr/lib64/python2.7/imaplib.py", line 910, in _command_complete typ, data = self._get_tagged_response(tag) File "/usr/lib64/python2.7/imaplib.py", line 1017, in _get_tagged_response self._get_response() File "/usr/lib64/python2.7/imaplib.py", line 929, in _get_response resp = self._get_line() File "/usr/lib64/python2.7/imaplib.py", line 1027, in _get_line line = self.readline() File "/usr/lib64/python2.7/imaplib.py", line 1189, in readline return self.file.readline() File "/usr/lib64/python2.7/socket.py", line 451, in readline data = self._sock.recv(self._rbufsize) File "/usr/lib64/python2.7/ssl.py", line 734, in recv return self.read(buflen) File "/usr/lib64/python2.7/ssl.py", line 621, in read v = self._sslobj.read(len or 1024) KeyboardInterrupt You can also test it with docker to see that it worked with older versions of Python: [mn] sudo docker run --rm -it python:2.7.9 python -c "import imaplib; x=imaplib.IMAP4_SSL('imap.home.pl', 993); print('finish', x)" ('finish', ) [mn] sudo docker run --rm -it python:2.7.10 python -c "import imaplib; x=imaplib.IMAP4_SSL('imap.home.pl', 993); print('finish', x)" ^CTraceback (most recent call last): File "", line 1, in File 
"/usr/local/lib/python2.7/imaplib.py", line 1166, in __init__ IMAP4.__init__(self, host, port) File "/usr/local/lib/python2.7/imaplib.py", line 202, in __init__ typ, dat = self.capability() File "/usr/local/lib/python2.7/imaplib.py", line 374, in capability typ, dat = self._simple_command(name) File "/usr/local/lib/python2.7/imaplib.py", line 1088, in _simple_command return self._command_complete(name, self._command(name, *args)) File "/usr/local/lib/python2.7/imaplib.py", line 910, in _command_complete typ, data = self._get_tagged_response(tag) File "/usr/local/lib/python2.7/imaplib.py", line 1017, in _get_tagged_response self._get_response() File "/usr/local/lib/python2.7/imaplib.py", line 929, in _get_response resp = self._get_line() File "/usr/local/lib/python2.7/imaplib.py", line 1027, in _get_line line = self.readline() File "/usr/local/lib/python2.7/imaplib.py", line 1189, in readline return self.file.readline() File "/usr/local/lib/python2.7/socket.py", line 451, in readline data = self._sock.recv(self._rbufsize) File "/usr/local/lib/python2.7/ssl.py", line 734, in recv return self.read(buflen) File "/usr/local/lib/python2.7/ssl.py", line 621, in read v = self._sslobj.read(len or 1024) KeyboardInterrupt The same way I tested that it hangs with Python 3.4.4. I think it is a bug because it hangs instead of raising an exception with helpful information. Maybe it is caused by removing RC4 cipher from new versions of Python? How can I workaround this problem? 
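As a workaround for the hang itself (not a fix for the underlying handshake problem), a global socket timeout turns the blocked read into an exception; a sketch, with a placeholder host name:

```python
import socket

# With a default timeout set, the blocked SSL read inside IMAP4_SSL's
# constructor raises socket.timeout instead of hanging forever.
socket.setdefaulttimeout(30)

# import imaplib
# imaplib.IMAP4_SSL('imap.example.invalid', 993)  # now fails instead
# of blocking indefinitely (host above is a placeholder)
```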
---------- messages: 260398 nosy: mniklas priority: normal severity: normal status: open title: New versions of Python hangs on imaplib.IMAP4_SSL() type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 10:00:45 2016 From: report at bugs.python.org (Sam Yeager) Date: Wed, 17 Feb 2016 15:00:45 +0000 Subject: [New-bugs-announce] [issue26376] Tkinter root window won't close if packed. Message-ID: <1455721245.86.0.235285513904.issue26376@psf.upfronthosting.co.za> New submission from Sam Yeager: Using the following code, the root window will not close properly when the close icon is clicked: from tkinter import * rootWin = Tk() l = Label(rootWin, text="foo") l.pack() Similar issue occurs with Tk.grid(). OS: Mac OS X 10.10.5 Python IDE: IDLE 3.4.4 tkinter.TkVersion: 8.5 tkinter.TclVersion: 8.5 ActiveTcl: 8.6.4 ---------- components: IDLE, Macintosh, Tkinter files: Screen Shot 2016-02-17 at 10.00.21 AM.png messages: 260400 nosy: Sam Yeager, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Tkinter root window won't close if packed. type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file41942/Screen Shot 2016-02-17 at 10.00.21 AM.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 10:07:00 2016 From: report at bugs.python.org (Sam Yeager) Date: Wed, 17 Feb 2016 15:07:00 +0000 Subject: [New-bugs-announce] [issue26377] Tkinter dialogs will not close if root window not packed. 
Message-ID: <1455721620.42.0.0590186130956.issue26377@psf.upfronthosting.co.za> New submission from Sam Yeager: Using the following code, the messagebox will not close, leaving it on top of all other open windows: from tkinter import * rootWin = Tk() messagebox.showinfo("Title", "foo") If the root window contains a widget (Label, Entry, Button, etc.), the dialog can close. Similar results have been obtained with filedialog. OS: Mac OS X 10.10.5 Python IDE: IDLE 3.4.4 tkinter.TkVersion: 8.5 tkinter.TclVersion: 8.5 ActiveTcl: 8.6.4 ---------- components: IDLE, Macintosh, Tkinter messages: 260401 nosy: Sam Yeager, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Tkinter dialogs will not close if root window not packed. versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 16:43:04 2016 From: report at bugs.python.org (David Rager) Date: Wed, 17 Feb 2016 21:43:04 +0000 Subject: [New-bugs-announce] [issue26378] Typo in regex documentation Message-ID: <1455745384.28.0.629898893574.issue26378@psf.upfronthosting.co.za> New submission from David Rager: In the following sentence, "few" should probably be "fewer." "Repetitions such as * are greedy; when repeating a RE, the matching engine will try to repeat it as many times as possible. If later portions of the pattern don?t match, the matching engine will then back up and try again with few repetitions." https://docs.python.org/2/howto/regex.html Thanks for such a great language and documentation! My apologies if this isn't actually a typo. 
---------- assignee: docs at python components: Documentation messages: 260411 nosy: David Rager, docs at python priority: normal severity: normal status: open title: Typo in regex documentation type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 17 23:08:20 2016 From: report at bugs.python.org (Joe Jevnik) Date: Thu, 18 Feb 2016 04:08:20 +0000 Subject: [New-bugs-announce] [issue26379] zlib decompress as_bytearray flag Message-ID: <1455768500.67.0.379084833673.issue26379@psf.upfronthosting.co.za> New submission from Joe Jevnik: Adds a keyword only flag to zlib.decompress and zlib.Decompress.decompress which marks that the return type should be a bytearray instead of bytes. The use case for this is reading compressed data into a mutable array to do further operations without requiring that the user copy the data first. Often, if a user is choosing to compress the data there might be a real cost to copying the uncompressed bytes into a mutable buffer. The impetus for this change comes from a patch for Pandas (https://github.com/pydata/pandas/pull/12359). I have also worked on a similar feature for blosc, another popular compression library for python (https://github.com/Blosc/python-blosc/pull/107). My hope is to fix the hacky solution presented in the pandas patch by supporting this feature directly in the compression libraries. As a side note: this is my first time using the argument clinic and I love it. It was so simple to add this argument, thank you very much for developing such an amazing tool! 
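The copy the patch is trying to avoid can be seen today: zlib.decompress() returns immutable bytes, so mutation requires materialising a second full-size buffer first. The as_bytearray keyword shown in the comment is the patch's proposal, not an existing stdlib argument:

```python
import zlib

payload = zlib.compress(b"abc" * 1000)

# Status quo: decompress to bytes, then copy into a mutable buffer.
buf = bytearray(zlib.decompress(payload))   # one extra full-size copy
buf[0] = ord("x")
assert bytes(buf[:4]) == b"xbca"

# Proposed (hypothetical until the patch lands):
# buf = zlib.decompress(payload, as_bytearray=True)
```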
---------- components: Extension Modules files: zlib.patch keywords: patch messages: 260424 nosy: llllllllll priority: normal severity: normal status: open title: zlib decompress as_bytearray flag type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41944/zlib.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 01:24:05 2016 From: report at bugs.python.org (Demian Brecht) Date: Thu, 18 Feb 2016 06:24:05 +0000 Subject: [New-bugs-announce] [issue26380] Add an http method enum Message-ID: <1455776645.01.0.0347855430312.issue26380@psf.upfronthosting.co.za> New submission from Demian Brecht: Super minor annoyance that I've had over multiple projects is seeing either hard coded strings being used (which is a bit of a no-no in terms of best practices) or each project defining its own set of constants for http methods. Why not just include a standard set in http as is done for status codes? 
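A sketch of what such an enum could look like, modelled on http.HTTPStatus; the class name and value set here are illustrative, not an existing stdlib API at the time of this report:

```python
from enum import Enum

class HTTPMethod(str, Enum):
    """Request methods from RFC 7231 and RFC 5789 (hypothetical)."""
    GET = "GET"
    HEAD = "HEAD"
    POST = "POST"
    PUT = "PUT"
    DELETE = "DELETE"
    CONNECT = "CONNECT"
    OPTIONS = "OPTIONS"
    TRACE = "TRACE"
    PATCH = "PATCH"

# Because it subclasses str, it drops into string-based APIs unchanged:
assert HTTPMethod.GET == "GET"
print(HTTPMethod.POST.value)  # POST
```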
---------- components: Library (Lib) keywords: needs review messages: 260431 nosy: demian.brecht, r.david.murray priority: normal severity: normal status: open title: Add an http method enum versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 07:27:14 2016 From: report at bugs.python.org (Serhiy Int) Date: Thu, 18 Feb 2016 12:27:14 +0000 Subject: [New-bugs-announce] [issue26381] Add 'geo' URI scheme (RFC 5870) to urllib.parse.uses_params Message-ID: <1455798434.45.0.376625800279.issue26381@psf.upfronthosting.co.za> Changes by Serhiy Int : ---------- components: Library (Lib) nosy: Serhiy Int priority: normal severity: normal status: open title: Add 'geo' URI scheme (RFC 5870) to urllib.parse.uses_params versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 09:26:51 2016 From: report at bugs.python.org (Catalin Gabriel Manciu) Date: Thu, 18 Feb 2016 14:26:51 +0000 Subject: [New-bugs-announce] [issue26382] List object memory allocator Message-ID: <1455805611.43.0.0364707113583.issue26382@psf.upfronthosting.co.za> New submission from Catalin Gabriel Manciu: Hi All, This is Catalin from the Server Scripting Languages Optimization Team at Intel Corporation. I would like to submit a patch that replaces the 'malloc' allocator used by the list object (Objects/listobject.c) with the small object allocator (obmalloc.c) and simplifies the 'list_resize' function by removing a redundant check and properly handling resizing to zero. Replacing PyMem_* calls with PyObject_* inside the list implementation is beneficial because many PyMem_* calls are made for requesting sizes that are better handled by the small object allocator. 
For example, when running Tools/pybench.py -w 1, a total of 48,295,840 allocation requests are made by the list implementation (either by using 'PyMem_MALLOC' directly or by calling 'PyMem_RESIZE'), out of which 42,581,993 (88%) are requesting sizes that can be handled by the small object allocator (they're equal to or less than 512 bytes in size). The changes to 'list_resize' were made in order to further improve performance by removing a redundant check and handling the 'resize to zero' case separately. The 'empty' state of a list is suggested by the 'PyList_New' function as having the 'ob_item' pointer NULL and the 'ob_size' and 'allocated' members equal with 0. Previously, when being called with zero as a size parameter, 'list_resize' would set 'ob_size' and 'allocated' to zero, but it would also call 'PyMem_RESIZE' which, by its design, would call 'realloc' with a size of 1, thus going through the process of allocating an unnecessary 1 byte and setting the 'ob_item' pointer with the newly obtained address. The proposed implementation just deletes the buffer pointed to by 'ob_item' and sets 'ob_size', 'allocated' and 'ob_item' to zero when receiving a 'resize to zero' request.

Hardware and OS Configuration
=============================
Hardware: Intel XEON (Haswell-EP) 36 Cores / Intel XEON (Broadwell-EP) 36 Cores
BIOS settings:
  Intel Turbo Boost Technology: false
  Hyper-Threading: false
OS: Ubuntu 14.04.2 LTS
OS configuration:
  Address Space Layout Randomization (ASLR) disabled to reduce run to run variation by:
    echo 0 > /proc/sys/kernel/randomize_va_space
  CPU frequency set fixed at 2.3GHz
GCC version: GCC version 5.1.0
Benchmark: Grand Unified Python Benchmark from https://hg.python.org/benchmarks/

Measurements and Results
========================
A. Repository:
GUPB Benchmark:
  hg id : 9923b81a1d34 tip
  hg --debug id -i : 9923b81a1d346891f179f57f8780f86dcf5cf3b9
CPython3:
  hg id : 733a902ac816 tip
  hg id -r 'ancestors(.) and tag()': 737efcadf5a6 (3.4) v3.4.4
  hg --debug id -i : 733a902ac816bd5b7b88884867ae1939844ba2c5
CPython2:
  hg id : 5715a6d9ff12 (2.7)
  hg id -r 'ancestors(.) and tag()': 6d1b6a68f775 (2.7) v2.7.11
  hg --debug id -i : 5715a6d9ff12053e81f7ad75268ac059b079b351

B. Results:
CPython2 and CPython3 sample results, measured on a Haswell and a Broadwell platform, can be viewed in Tables 1, 2, 3 and 4. The first column (Benchmark) is the benchmark name and the second (%D) is the speedup in percent compared with the unpatched version.

Table 1. CPython 3 results on Intel XEON (Haswell-EP) @ 2.3 GHz

Benchmark               %D
----------------------------------
unpickle_list        20.27
regex_effbot          6.07
fannkuch              5.87
mako_v2               5.19
meteor_contest        4.31
simple_logging        3.98
nqueens               3.40
json_dump_v2          3.14
fastpickle            2.16
django_v3             2.03
tornado_http          1.90
pathlib               1.84
fastunpickle          1.81
call_simple           1.75
nbody                 1.60
etree_process         1.58
go                    1.54
call_method_unknown   1.53
2to3                  1.26
telco                 1.04
etree_generate        1.02
json_load             0.85
etree_parse           0.81
call_method_slots     0.73
etree_iterparse       0.68
call_method           0.65
normal_startup        0.63
silent_logging        0.56
chameleon_v2          0.56
pickle_list           0.52
regex_compile         0.50
hexiom2               0.47
pidigits              0.39
startup_nosite        0.17
pickle_dict           0.00
unpack_sequence       0.00
formatted_logging    -0.06
raytrace             -0.06
float                -0.18
richards             -0.37
spectral_norm        -0.51
chaos                -0.65
regex_v8             -0.72

Table 2. CPython 3 results on Intel XEON (Broadwell-EP) @ 2.3 GHz

Benchmark               %D
----------------------------------
unpickle_list        15.75
nqueens               5.24
mako_v2               5.17
unpack_sequence       4.44
fannkuch              4.42
nbody                 3.25
meteor_contest        2.86
regex_effbot          2.45
json_dump_v2          2.44
django_v3             2.26
call_simple           2.09
tornado_http          1.74
regex_compile         1.40
regex_v8              1.16
spectral_norm         0.89
2to3                  0.76
chameleon_v2          0.70
telco                 0.70
normal_startup        0.64
etree_generate        0.61
etree_process         0.55
hexiom2               0.51
json_load             0.51
call_method_slots     0.48
formatted_logging     0.33
call_method           0.28
startup_nosite       -0.02
fastunpickle         -0.02
pidigits             -0.20
etree_parse          -0.23
etree_iterparse      -0.27
richards             -0.30
silent_logging       -0.36
pickle_list          -0.42
simple_logging       -0.82
float                -0.91
pathlib              -0.99
go                   -1.16
raytrace             -1.16
chaos                -1.26
fastpickle           -1.72
call_method_unknown  -2.94
pickle_dict          -4.73

Table 3. CPython 2 results on Intel XEON (Haswell-EP) @ 2.3 GHz

Benchmark               %D
----------------------------------
unpickle_list        15.89
json_load            11.53
fannkuch              7.90
mako_v2               7.01
meteor_contest        4.21
nqueens               3.81
fastunpickle          3.56
django_v3             2.91
call_simple           2.72
call_method_slots     2.45
slowpickle            2.23
call_method           2.21
html5lib_warmup       1.90
chaos                 1.89
html5lib              1.81
regex_v8              1.81
tornado_http          1.66
2to3                  1.56
json_dump_v2          1.49
nbody                 1.38
rietveld              1.26
formatted_logging     1.12
regex_compile         0.99
spambayes             0.92
pickle_list           0.87
normal_startup        0.82
pybench               0.74
slowunpickle          0.71
raytrace              0.67
startup_nosite        0.59
float                 0.47
hexiom2               0.46
slowspitfire          0.46
pidigits              0.44
etree_process         0.44
etree_generate        0.37
go                    0.27
telco                 0.24
regex_effbot          0.12
etree_iterparse       0.06
bzr_startup           0.04
richards              0.03
etree_parse           0.00
unpack_sequence       0.00
call_method_unknown  -0.26
pathlib              -0.57
fastpickle           -0.64
silent_logging       -0.94
simple_logging       -1.10
chameleon_v2         -1.25
pickle_dict          -1.67
spectral_norm        -3.25

Table 4. CPython 2 results on Intel XEON (Broadwell-EP) @ 2.3 GHz

Benchmark               %D
----------------------------------
unpickle_list        15.44
json_load            11.11
fannkuch              7.55
meteor_contest        5.51
mako_v2               4.94
nqueens               3.49
html5lib_warmup       3.15
html5lib              2.78
call_simple           2.35
silent_logging        2.33
json_dump_v2          2.14
startup_nosite        2.09
bzr_startup           1.93
fastunpickle          1.93
slowspitfire          1.91
regex_v8              1.79
rietveld              1.74
pybench               1.59
nbody                 1.57
regex_compile         1.56
pathlib               1.51
tornado_http          1.33
normal_startup        1.21
2to3                  1.14
chaos                 1.00
spambayes             0.85
etree_process         0.73
pickle_list           0.70
float                 0.69
hexiom2               0.51
slowpickle            0.44
call_method_unknown   0.42
slowunpickle          0.37
pickle_dict           0.25
etree_parse           0.20
go                    0.19
django_v3             0.12
call_method_slots     0.12
spectral_norm         0.05
call_method           0.01
unpack_sequence       0.00
raytrace             -0.08
pidigits             -0.11
richards             -0.16
etree_generate       -0.23
regex_effbot         -0.26
telco                -0.28
simple_logging       -0.32
etree_iterparse      -0.38
formatted_logging    -0.50
fastpickle           -1.08
chameleon_v2         -1.74

---------- components: Interpreter Core files: listobject_CPython3.patch keywords: patch messages: 260459 nosy: catalin.manciu priority: normal severity: normal status: open title: List object memory allocator type: performance versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file41953/listobject_CPython3.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 09:30:26 2016 From: report at bugs.python.org (Florin Papa) Date: Thu, 18 Feb 2016 14:30:26 +0000 Subject: [New-bugs-announce] [issue26383] number of decimal places in csv output Message-ID: <1455805826.32.0.222173600977.issue26383@psf.upfronthosting.co.za> New submission from Florin Papa: Hi, This is Florin Papa from the Dynamic Scripting Languages Optimizations Team at Intel Corporation.
This patch checks whether a benchmark measurement that will be included in the csv output has its first three decimal places equal to 0 and, if so, outputs more decimal places. The problem is that each measurement in the csv output has only 6 decimal places (the default for the "%f" format), while some benchmarks, like unpack_sequence, will output values of the magnitude 0.0000xx (first four decimal places 0). This truncation of the original value can lead to incorrect results. ---------- components: Benchmarks files: decimal_csv_output.csv messages: 260460 nosy: brett.cannon, florin.papa, pitrou priority: normal severity: normal status: open title: number of decimal places in csv output type: behavior versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file41955/decimal_csv_output.csv _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 10:50:52 2016 From: report at bugs.python.org (Berker Peksag) Date: Thu, 18 Feb 2016 15:50:52 +0000 Subject: [New-bugs-announce] [issue26384] UnboundLocalError in socket._sendfile_use_sendfile Message-ID: <1455810652.46.0.743197751136.issue26384@psf.upfronthosting.co.za> New submission from Berker Peksag: I noticed this while working on issue 16915:

Traceback (most recent call last):
  ...
  File "/home/berker/projects/cpython/default/Lib/socket.py", line 262, in _sendfile_use_sendfile
    raise _GiveupOnSendfile(err)  # not a regular file
UnboundLocalError: local variable 'err' referenced before assignment

Here's a patch.
---------- components: Library (Lib) files: socket_unboundlocalerror.diff keywords: patch messages: 260464 nosy: berker.peksag priority: normal severity: normal stage: patch review status: open title: UnboundLocalError in socket._sendfile_use_sendfile type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41956/socket_unboundlocalerror.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 11:17:52 2016 From: report at bugs.python.org (Eugene Viktorov) Date: Thu, 18 Feb 2016 16:17:52 +0000 Subject: [New-bugs-announce] [issue26385] the call of tempfile.NamedTemporaryFile fails and leaves a file on the disk Message-ID: <1455812272.44.0.420327480876.issue26385@psf.upfronthosting.co.za> New submission from Eugene Viktorov: When calling tempfile.NamedTemporaryFile with mode='wr' or with any other wrong value for "mode", it raises a ValueError and silently leaves the just-created file behind on the file system. ---------- components: Library (Lib) files: temp_file.py messages: 260466 nosy: Eugene Viktorov priority: normal severity: normal status: open title: the call of tempfile.NamedTemporaryFile fails and leaves a file on the disk type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file41957/temp_file.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 11:30:08 2016 From: report at bugs.python.org (George) Date: Thu, 18 Feb 2016 16:30:08 +0000 Subject: [New-bugs-announce] [issue26386] tkinter - Treeview - .selection_add and selection_toggle Message-ID: <1455813008.22.0.795281140329.issue26386@psf.upfronthosting.co.za> New submission from George: Ids with spaces in them cause a crash when using the .selection* methods of Treeview. Version of ttk.py "0.3.1" dated 12/6/2015.
The traceback line numbers are 1415 then 1395. Either of these lines of code, where the item id is "2009 Report.pdf", crashes:

allParents = oTree.get_children()
for id in allParents:
    oTree.selection_add(id)
    # oTree.selection_toggle(id)

These two lines of workaround code do work, however:

oTree.selection_add('"' + id + '"')
# oTree.selection_toggle('"' + id + '"')

Note that so far all other places dealing with the item ids have no issue when there are spaces in them. ---------- components: Tkinter messages: 260469 nosy: gbarnabic priority: normal severity: normal status: open title: tkinter - Treeview - .selection_add and selection_toggle type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 12:59:33 2016 From: report at bugs.python.org (Filipp Andjelo) Date: Thu, 18 Feb 2016 17:59:33 +0000 Subject: [New-bugs-announce] [issue26387] Race condition in sqlite module Message-ID: <1455818373.79.0.020961668279.issue26387@psf.upfronthosting.co.za> New submission from Filipp Andjelo: A race condition in sqlite close/dealloc crashes the application with a double free(). The pointer is set to NULL outside of the mutexed zone, so if close and dealloc follow each other very shortly, the application crashes. Please see the attached patch.
---------- components: Library (Lib) files: sqlite_connection.c.patch keywords: patch messages: 260472 nosy: scorp priority: normal severity: normal status: open title: Race condition in sqlite module type: crash versions: Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41958/sqlite_connection.c.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 17:37:01 2016 From: report at bugs.python.org (Mike Kaplinskiy) Date: Thu, 18 Feb 2016 22:37:01 +0000 Subject: [New-bugs-announce] [issue26388] Disabling changing sys.argv[0] with runpy.run_module(...alter_sys=True) Message-ID: <1455835021.39.0.133039555561.issue26388@psf.upfronthosting.co.za> New submission from Mike Kaplinskiy: For the purposes of pex (https://github.com/pantsbuild/pex), it would be useful to allow calling run_module without sys.argv[0] changing. In general, this behavior is useful if the script intends to re-exec itself (so it needs to know the original arguments that it was started with). To make run_module more useful in general, I propose adding an `argv` parameter that has the following semantics:
- (default) If set to a special value runpy.INHERIT (similar to subprocess.STDOUT), produces the current behavior of just changing sys.argv[0].
- If set to None, does not touch sys.argv at all.
- If set to an iterable, executes `sys.argv = [module.__file__] + list(argv)`, and properly reverts it once the module is done executing.
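The three proposed modes could be emulated today with a context manager wrapped around runpy.run_module(). Everything below, including the INHERIT sentinel, is a sketch of the proposal, not an existing runpy API:

```python
import contextlib
import sys

INHERIT = object()  # hypothetical sentinel, mirroring the proposal

@contextlib.contextmanager
def argv_for_module(module_file, argv=INHERIT):
    saved = sys.argv
    try:
        if argv is None:
            pass  # leave sys.argv completely untouched
        elif argv is INHERIT:
            # current run_module behavior: only argv[0] changes
            sys.argv = [module_file] + saved[1:]
        else:
            sys.argv = [module_file] + list(argv)
        yield
    finally:
        sys.argv = saved  # properly revert once the module is done
```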
---------- components: Library (Lib) messages: 260483 nosy: Mike Kaplinskiy priority: normal severity: normal status: open title: Disabling changing sys.argv[0] with runpy.run_module(...alter_sys=True) type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 17:59:10 2016 From: report at bugs.python.org (Brett Cannon) Date: Thu, 18 Feb 2016 22:59:10 +0000 Subject: [New-bugs-announce] [issue26389] Expand traceback module API to accept just an exception as an argument Message-ID: <1455836350.85.0.0162824201978.issue26389@psf.upfronthosting.co.za> New submission from Brett Cannon: When reading https://docs.python.org/3/library/traceback.html#traceback.print_exception I noticed that everything takes a traceback or a set of exception type, instance, and traceback. But since Python 3.0 all of that information is contained in a single exception object. So there's no reason to expand the APIs in the traceback module that take an exception to just take an exception instance and infer the exception type and grab the exception from exception instance itself. 
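Everything needed is indeed already on the instance. A small wrapper showing how the (type, value, traceback) triple can be derived from a single exception object (the proposal is for the traceback functions to accept the instance directly):

```python
import traceback

def format_exception_instance(exc):
    # The full triple is recoverable from `exc` alone.
    return ''.join(
        traceback.format_exception(type(exc), exc, exc.__traceback__))

try:
    1 / 0
except ZeroDivisionError as exc:
    report = format_exception_instance(exc)
```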
---------- components: Library (Lib) messages: 260485 nosy: brett.cannon priority: low severity: normal stage: test needed status: open title: Expand traceback module API to accept just an exception as an argument versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 18 19:00:57 2016 From: report at bugs.python.org (Daan Bakker) Date: Fri, 19 Feb 2016 00:00:57 +0000 Subject: [New-bugs-announce] [issue26390] hashlib's pbkdf2_hmac documentation "rounds" does not match source Message-ID: <1455840057.88.0.0946194046166.issue26390@psf.upfronthosting.co.za> New submission from Daan Bakker: The documentation for pbkdf2_hmac at https://docs.python.org/3/library/hashlib.html uses the "rounds" keyword: hashlib.pbkdf2_hmac(name, password, salt, rounds, dklen=None) However, the actual source code uses "iterations". Probably no one has noticed it before because no error is raised when the number of iterations is given as a positional argument.
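The mismatch is easy to demonstrate: `iterations` is accepted as a keyword, while the documented `rounds` is rejected:

```python
import hashlib

# The keyword actually implemented is `iterations`:
dk = hashlib.pbkdf2_hmac('sha256', b'password', b'salt', iterations=100000)

# The documented keyword `rounds` raises TypeError:
try:
    hashlib.pbkdf2_hmac('sha256', b'password', b'salt', rounds=100000)
except TypeError:
    print('rounds is not a valid keyword')
```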
See below for an example:

from typing import Generic, TypeVar

T = TypeVar('T')

class Foo(Generic[T]):
    def __init__(self, value: T):
        self.value = value

Bar = Foo[str]

foo = Foo('foo')
bar = Bar('bar')

print(type(foo), end=' ')
print(foo.value)
print(type(bar), end=' ')
print(bar.value)  # AttributeError

I would expect Foo[str], Foo[int], etc. to be equivalent to Foo at run-time. If this is not the case, it might deserve an explicit mention in the docs. At the moment, the behaviour is confusing because an instance of Foo is returned that does not have any of its attributes set. ---------- messages: 260519 nosy: Kai Wohlfahrt priority: normal severity: normal status: open title: Specialized sub-classes of Generic never call __init__ type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 19 12:00:17 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Fri, 19 Feb 2016 17:00:17 +0000 Subject: [New-bugs-announce] [issue26392] socketserver.BaseServer.close_server should stop serve_forever Message-ID: <1455901217.04.0.582313607927.issue26392@psf.upfronthosting.co.za> New submission from Aviv Palivoda: Currently, if you call server_close you only close the socket. If we call serve_forever and then call server_close without calling shutdown, the serve_forever loop keeps running. Before the selectors module was used for the polling, we would have had an exception thrown from the select (the socket fd is -1) in serve_forever. IMO you should be able to call server_close at any time and expect it to stop serve_forever. Maybe even add a block option to server_close that will wait on serve_forever if it's running (waiting for issue 12463 to resolve before doing this). Added a patch that closes serve_forever if server_close is called.
---------- components: Library (Lib) files: socketserver_close_stop_serve_forever.patch keywords: patch messages: 260524 nosy: palaviv priority: normal severity: normal status: open title: socketserver.BaseServer.close_server should stop serve_forever versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file41971/socketserver_close_stop_serve_forever.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 19 14:32:21 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Fri, 19 Feb 2016 19:32:21 +0000 Subject: [New-bugs-announce] [issue26393] random.shuffled Message-ID: <1455910341.76.0.914915111625.issue26393@psf.upfronthosting.co.za> New submission from Aviv Palivoda: I am suggesting adding random.shuffled to the random module. It is very similar to the shuffle function, but returns a new shuffled list instead of shuffling in place, mirroring the relationship between sorted and list.sort. As you can see in the patch, the shuffled function just returns random.sample(x, len(x)), as this is what I usually do when I want to get back a shuffled list.
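As described in the report, the helper is a one-liner over random.sample; a sketch of the proposed function:

```python
import random

def shuffled(population):
    """Return a new list with the elements of `population` in random order."""
    return random.sample(population, len(population))

data = [1, 2, 3, 4, 5]
mixed = shuffled(data)
# `data` is left untouched, mirroring sorted() vs. list.sort()
```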
---------- components: Library (Lib) files: random-shuffled.patch keywords: patch messages: 260529 nosy: mark.dickinson, palaviv, rhettinger priority: normal severity: normal status: open title: random.shuffled type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41976/random-shuffled.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 20 11:17:11 2016 From: report at bugs.python.org (Michael Herold) Date: Sat, 20 Feb 2016 16:17:11 +0000 Subject: [New-bugs-announce] [issue26394] argparse: Add set_values() function to complement set_defaults() Message-ID: <1455985031.95.0.488150837583.issue26394@psf.upfronthosting.co.za> New submission from Michael Herold: argparse has at least three features to set defaults (default=, set_defaults(), argument_default=). However, there is no feature to set the values of arguments. The difference is, that a required argument has to be specified, also if a default value exists. Thus, a clean way to set values from config files or env variables does not exist without making all arguments optional. See for example , where it becomes clear that no general solution for all actions exists. As a result, people also start to mess around with the args parameter of parse_args(). Even ConfigArgParse (used by Let's Encrypt) seems to fail in doing this properly. So please add a set_values() function, similar to set_defaults(). The predefined value should be treated as if it had been entered on the command line. If the value has also been supplied via the command line, the predefined value is overwritten. 
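The gap is easy to reproduce: a default on a required argument does not make it optional, so config-file or environment values cannot simply be injected via set_defaults(). The argument names below are hypothetical:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('host')              # required positional argument
parser.set_defaults(host='db.example')   # a default exists, but...

args = parser.parse_args(['db1'])        # supplying it works
try:
    parser.parse_args([])                # ...omitting it is still an error
except SystemExit:
    print('required argument missing despite the default')
```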
---------- components: Library (Lib) messages: 260568 nosy: quabla priority: normal severity: normal status: open title: argparse: Add set_values() function to complement set_defaults() type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 20 17:50:29 2016 From: report at bugs.python.org (Simon Bernier St-Pierre) Date: Sat, 20 Feb 2016 22:50:29 +0000 Subject: [New-bugs-announce] [issue26395] asyncio does not support yielding from recvfrom (socket/udp) Message-ID: <1456008629.58.0.297729770488.issue26395@psf.upfronthosting.co.za> New submission from Simon Bernier St-Pierre: I want to receive data on a UDP socket that was bound, without blocking the event loop. I've looked through the asyncio docs, and I haven't found a way of doing that using the coroutine API (yield from/await). There is a sock_recv method on BaseEventLoop which is a coroutine, it seems like sock_recvfrom was never implemented. I don't have a patch for this right now, I wanted to know what people thought of adding support for this. 
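Until a sock_recvfrom coroutine exists, one workaround is to push the blocking recvfrom() into the default executor; a minimal sketch:

```python
import asyncio
import socket

async def udp_recvfrom(loop, sock, nbytes):
    # Runs the blocking recvfrom() in a worker thread so the event
    # loop stays free; returns the usual (data, address) pair.
    return await loop.run_in_executor(None, sock.recvfrom, nbytes)
```

This does not scale the way a real selector-based sock_recvfrom would, but it keeps the loop responsive in the meantime.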
---------- components: asyncio messages: 260580 nosy: Simon Bernier St-Pierre, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: asyncio does not support yielding from recvfrom (socket/udp) versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 20 19:44:28 2016 From: report at bugs.python.org (Brett Cannon) Date: Sun, 21 Feb 2016 00:44:28 +0000 Subject: [New-bugs-announce] [issue26396] Create json.JSONType Message-ID: <1456015468.57.0.0882861590724.issue26396@psf.upfronthosting.co.za> New submission from Brett Cannon: See https://github.com/python/typing/issues/182 for the full details, but it should be: JSONType = t.Union[str, int, float, bool, None, t.Dict[str, t.Any], t.List[t.Any]] ---------- assignee: brett.cannon components: Library (Lib) messages: 260587 nosy: brett.cannon, gvanrossum priority: low severity: normal stage: needs patch status: open title: Create json.JSONType type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 20 19:59:08 2016 From: report at bugs.python.org (Brett Cannon) Date: Sun, 21 Feb 2016 00:59:08 +0000 Subject: [New-bugs-announce] [issue26397] Tweak importlib Example of importlib.import_module() to use importlib.util.module_from_spec() Message-ID: <1456016348.28.0.973001051865.issue26397@psf.upfronthosting.co.za> New submission from Brett Cannon: The example uses `spec.loader.create_module()` where it should be using `util.module_from_spec(spec)`.
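The pattern the example should show, using a stdlib module name here purely for illustration:

```python
import importlib.util

spec = importlib.util.find_spec('json')         # any importable module
module = importlib.util.module_from_spec(spec)  # not spec.loader.create_module()
spec.loader.exec_module(module)
```

module_from_spec() also handles the common case where the loader's create_module() returns None, which is why it is the recommended call.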
---------- assignee: brett.cannon components: Documentation messages: 260589 nosy: brett.cannon priority: normal severity: normal stage: needs patch status: open title: Tweak importlib Example of importlib.import_module() to use importlib.util.module_from_spec() versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 04:45:59 2016 From: report at bugs.python.org (Dhiraj) Date: Sun, 21 Feb 2016 09:45:59 +0000 Subject: [New-bugs-announce] [issue26398] cgi.escape() Can Lead To XSS and HTMLi Vulnerabilities Message-ID: <1456047959.51.0.622191858918.issue26398@psf.upfronthosting.co.za> New submission from Dhiraj: The predefined function cgi.escape() can lead to XSS or HTMLi in every version of Python. Example:

import cgi
test = "

Vulnerable

"

cgi.escape(test) works properly: all the characters are escaped correctly. Example 2:

import cgi
test2 = ' " '
cgi.escape(test2)

does not work as expected: the '"' character is not escaped, and this may cause XSS or HTMLi. Please find the attachments below (PFA). The Python security expert says: " - The behavior of the cgi.escape() function is not a bug. It works exactly as documented in the Python documentation, https://docs.python.org/2/library/cgi.html#cgi.escape - By default the cgi.escape() function only escapes the three chars '<', '>' and '&'. The double quote char '"' is not quoted unless you call cgi.escape() with quote=True. The default mode is suitable for escaping blocks of text that may contain HTML." He says that with quote=True it is not vulnerable. Example:

cgi.escape('

"ä"

', quote=True)

But many website developers and many popular companies forget to pass quote=True, and this may cause XSS and HTMLi. In my opinion quote=True should be the default in cgi.escape(); then it would not be vulnerable. I hope this will be patched soon and updated. Thank you (PFA). Dhiraj Mishra ---------- assignee: docs at python components: Documentation files: CGI.ESCAPE_2.png messages: 260600 nosy: DhirajMishra, docs at python priority: normal severity: normal status: open title: cgi.escape() Can Lead To XSS and HTMLi Vulnerabilities versions: Python 3.6 Added file: http://bugs.python.org/file41982/CGI.ESCAPE_2.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 05:45:20 2016 From: report at bugs.python.org (Acid) Date: Sun, 21 Feb 2016 10:45:20 +0000 Subject: [New-bugs-announce] [issue26399] -2+1 Message-ID: <1456051520.47.0.797070893394.issue26399@psf.upfronthosting.co.za> Changes by Acid : ---------- nosy: Acid priority: normal severity: normal status: open title: -2+1 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 09:39:43 2016 From: report at bugs.python.org (giumas) Date: Sun, 21 Feb 2016 14:39:43 +0000 Subject: [New-bugs-announce] [issue26400] SyntaxError when running Python 2.7 interpreter with subprocess.call Message-ID: <1456065583.71.0.292979306373.issue26400@psf.upfronthosting.co.za> New submission from giumas: On Windows, I am getting a `SyntaxError` when I try to input commands after having launched a Python 2.7.x interpreter with `subprocess.call`.
This is a minimal example:

import os
import subprocess

def python_env_path(python_path):
    env = os.environ.copy()
    python_scripts = os.path.join(python_path, "Scripts")
    python_bin = os.path.join(python_path, "Library", "bin")
    path_env = "%s;%s;%s;" % (python_path, python_scripts, python_bin)
    env['PATH'] = path_env.encode()
    return env

def open_python_prompt(python_path):
    env = python_env_path(python_path)
    prc = subprocess.call(["start", "python"], shell=True, cwd=python_path, env=env)
    if prc != 0:
        print("Unable to open a Python prompt")
        return False
    return True

open_python_prompt("C:\Py27x64")

When I try to enter any simple command in the interpreter, I get:

>>> a = 0
  File "", line 1
    a = 0
        ^
SyntaxError: invalid syntax

I did not find SO questions that solve my issue. The same code works fine with Python 3.x. The same Python installation works fine if I open a shell and call the interpreter using a batch file. ---------- components: Extension Modules, Windows messages: 260612 nosy: giumas, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: SyntaxError when running Python 2.7 interpreter with subprocess.call type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 10:35:43 2016 From: report at bugs.python.org (=?utf-8?b?UmHDumwgTsO6w7FleiBkZSBBcmVuYXM=?=) Date: Sun, 21 Feb 2016 15:35:43 +0000 Subject: [New-bugs-announce] [issue26401] Error in documentation for "compile" built-in function Message-ID: <1456068943.64.0.547733660691.issue26401@psf.upfronthosting.co.za> New submission from Raúl Núñez de Arenas: According to the documentation, if the 'compile' built-in function encounters NUL bytes in the compiled source, it raises TypeError, but this is not true:

>>> source = '\u0000'
>>> compile(source, '', 'single')
Traceback (most recent call last):
  File "", line 1, in
ValueError: source code string cannot
contain null bytes It raises ValueError, not TypeError. And IMHO, it's the proper exception to raise... ---------- assignee: docs at python components: Documentation messages: 260613 nosy: Ra?l N??ez de Arenas, docs at python priority: normal severity: normal status: open title: Error in documentation for "compile" built-in function type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 11:03:36 2016 From: report at bugs.python.org (Jelte Fennema) Date: Sun, 21 Feb 2016 16:03:36 +0000 Subject: [New-bugs-announce] [issue26402] Regression in Python 3.5 http.client, raises RemoteDisconnected seemingly randomly. Message-ID: <1456070616.55.0.333319744792.issue26402@psf.upfronthosting.co.za> New submission from Jelte Fennema: I've been developing an application which uses fuse as in interface to an xmlrpc API. I developed it with python 3.4 and it worked fine. When I used python 3.5 it started randomly raising a specific error when requesting info using the xmlrpc API. The error in question is the RemoteDisconnected error which was added in 3.5. 
An example stack trace is:

Traceback (most recent call last):
  File "/home/jelte/fun/easyfuse/easyfuse/utils.py", line 20, in _convert_error_to_fuse_error
    yield
  File "/home/jelte/fun/easyfuse/easyfuse/filesystem.py", line 276, in children
    self.refresh_children()
  File "dokuwikifuse.py", line 139, in refresh_children
    pages = dw.pages.list(self.full_path, depth=self.full_depth + 2)
  File "/home/jelte/fun/dokuwikifuse/venv/src/dokuwiki-master/dokuwiki.py", line 102, in list
    return self._dokuwiki.send('dokuwiki.getPagelist', namespace, options)
  File "/home/jelte/fun/dokuwikifuse/venv/src/dokuwiki-master/dokuwiki.py", line 55, in send
    return method(*args)
  File "/usr/lib64/python3.5/xmlrpc/client.py", line 1091, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.5/xmlrpc/client.py", line 1431, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.5/xmlrpc/client.py", line 1133, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.5/xmlrpc/client.py", line 1146, in single_request
    resp = http_conn.getresponse()
  File "/usr/lib64/python3.5/http/client.py", line 1174, in getresponse
    response.begin()
  File "/usr/lib64/python3.5/http/client.py", line 282, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.5/http/client.py", line 251, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

The program in question can be found here: https://github.com/JelteF/dokuwikifuse The bug can be triggered by running the program as described in the README and doing a couple of file system operations which make requests. These are things like `ls wiki/*` or calling `head wiki/*.doku`. ---------- components: IO messages: 260617 nosy: JelteF priority: normal severity: normal status: open title: Regression in Python 3.5 http.client, raises RemoteDisconnected seemingly randomly.
type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 14:28:45 2016 From: report at bugs.python.org (desbma) Date: Sun, 21 Feb 2016 19:28:45 +0000 Subject: [New-bugs-announce] [issue26403] Don't call sendto in DatagramRequestHandler if there is nothing to send Message-ID: <1456082925.77.0.767922099224.issue26403@psf.upfronthosting.co.za> New submission from desbma: When using socketserver to create a simple server for Unix Domain sockets (see server_dgram.py), and when sending data with a client that immediately shuts down (without waiting for a response, on Linux I test with 'echo data | nc -Uu -w 0 /tmp/s.socket') I get this exception: Exception happened during processing of request from /tmp/nc.XXXXsuGc1C Traceback (most recent call last): File "/usr/lib/python3.4/socketserver.py", line 617, in process_request_thread self.finish_request(request, client_address) File "/usr/lib/python3.4/socketserver.py", line 344, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/lib/python3.4/socketserver.py", line 675, in __init__ self.finish() File "/usr/lib/python3.4/socketserver.py", line 752, in finish self.socket.sendto(self.wfile.getvalue(), self.client_address) FileNotFoundError: [Errno 2] No such file or directory The attached patch fixes this by checking if there is something to send before calling sendto. Also I am wondering if we should catch FileNotFoundError (and possibly other exceptions) here, because with TCP or UDP, the server does not raise any exception if client is disconnected when finish is called in the handler. 
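The guard the patch proposes can be sketched as follows. This is a hedged illustration, not the actual patch: the subclass name `SafeDatagramHandler` and the fake socket are invented here for demonstration, while the real fix modifies `socketserver.DatagramRequestHandler.finish()` directly.

```python
import io
import socketserver

class _FakeSocket:
    """Stand-in for a datagram socket; records sendto() calls."""
    def __init__(self):
        self.sent = []
    def sendto(self, data, addr):
        self.sent.append((data, addr))

class SafeDatagramHandler(socketserver.DatagramRequestHandler):
    """Hypothetical subclass showing the proposed guard: only reply
    when the handler actually wrote something to wfile."""
    def finish(self):
        data = self.wfile.getvalue()
        if data:  # skip sendto entirely for an empty reply
            self.socket.sendto(data, self.client_address)

# Exercise finish() directly, without running a real server.
h = SafeDatagramHandler.__new__(SafeDatagramHandler)
h.socket = _FakeSocket()
h.client_address = '/tmp/client.sock'

h.wfile = io.BytesIO()            # nothing written -> no sendto call
h.finish()
assert h.socket.sent == []

h.wfile = io.BytesIO(b'reply')    # something written -> exactly one sendto
h.finish()
assert h.socket.sent == [(b'reply', '/tmp/client.sock')]
```

With this guard, a client socket path that has already disappeared can no longer trigger the FileNotFoundError for handlers that send no response.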
Thank you ---------- components: Macintosh files: server_dgram.py messages: 260632 nosy: desbma, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Don't call sendto in DatagramRequestHandler if there is nothing to send versions: Python 3.5 Added file: http://bugs.python.org/file41990/server_dgram.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 15:55:10 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Sun, 21 Feb 2016 20:55:10 +0000 Subject: [New-bugs-announce] [issue26404] socketserver context manager Message-ID: <1456088110.26.0.0505883466277.issue26404@psf.upfronthosting.co.za> New submission from Aviv Palivoda: As Martin commented on my patch at issue 26392, socketserver.server_close is like file close. That made me think that we should add a context manager to socketserver. ---------- components: Library (Lib) files: socketserver_context_manager.patch keywords: patch messages: 260639 nosy: martin.panter, palaviv priority: normal severity: normal status: open title: socketserver context manager type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file41993/socketserver_context_manager.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 16:07:13 2016 From: report at bugs.python.org (rapolas) Date: Sun, 21 Feb 2016 21:07:13 +0000 Subject: [New-bugs-announce] [issue26405] tkinter askopenfilename doubleclick issue on windows Message-ID: <1456088833.09.0.473464639317.issue26405@psf.upfronthosting.co.za> New submission from rapolas: The issue is that a double-click passes a click down to the parent window, and if you happen to double-click to select a file directly above some button, that button gets pressed.
Here is the code to demonstrate this issue:

import tkinter as tk
from tkinter import filedialog, ttk

class MainWindow(ttk.Frame):
    def __init__(self, root, *args, **kwargs):
        super().__init__(root, *args, **kwargs)
        self.pack()
        btnoptions = {'expand':True, 'fill': 'both'}
        btn = ttk.Button(self, text='Select', command=self.ask_openfile)
        btn.pack(**btnoptions)

    def ask_openfile(self):
        filename = filedialog.askopenfilename()
        return filename

if __name__=='__main__':
    root = tk.Tk()
    root.geometry('600x300')
    MainWindow(root).pack(expand=True, fill='both', side='top')
    root.mainloop()

I observed the same behavior on two different computers, one running win10, another win7, both using the latest python 3.5.1 installation from python.org ---------- components: Tkinter messages: 260640 nosy: rapolas priority: normal severity: normal status: open title: tkinter askopenfilename doubleclick issue on windows type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 21 22:33:13 2016 From: report at bugs.python.org (A. Jesse Jiryu Davis) Date: Mon, 22 Feb 2016 03:33:13 +0000 Subject: [New-bugs-announce] [issue26406] getaddrinfo is thread-safe on NetBSD and OpenBSD Message-ID: <1456111993.44.0.164668159059.issue26406@psf.upfronthosting.co.za> New submission from A. Jesse Jiryu Davis: In socketmodule.c we lock around getaddrinfo calls on platforms where getaddrinfo is believed not to be thread-safe. We've verified that it *is* thread-safe, and therefore stopped locking around it, on FreeBSD 5.3+ (#1288833) and Mac OS X 10.5+ (#25924). This ticket intends to do the same for OpenBSD and NetBSD. OpenBSD 5.4 fixed getaddrinfo's thread safety and announced it 2013-11-01, "getaddrinfo(3) is now thread-safe": http://www.openbsd.org/plus54.html NetBSD's fix is older and less publicized.
Since ancient times NetBSD's getaddrinfo.c included a comment, "Thread safe-ness must be checked", and the getaddrinfo(3) man page had the same warning as other BSDs, "The implementation of getaddrinfo is not thread-safe." On 2004-05-27 Christos Zoulas committed with the comment "make yp stuff re-entrant", fixing obvious problems like static variables in getaddrinfo: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/net/getaddrinfo.c.diff?r1=1.71&r2=1.72&only_with_tag=MAIN That change was released with NetBSD 3.0, and that alone might convince us to stop locking around getaddrinfo. Later, on 2006-07-18, between NetBSD 3 and 4, Zoulas deleted the comment "Thread safe-ness must be checked" from the source, with the message "Remove comments that do not reflect reality anymore": http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/net/getaddrinfo.c.diff?r1=1.82&r2=1.83&only_with_tag=MAIN The same day, he removed the man page warning: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/net/getaddrinfo.3.diff?r1=1.43&r2=1.44&only_with_tag=MAIN NetBSD 4.0 was released 2007-12-19. ---------- messages: 260655 nosy: emptysquare, gvanrossum, martin.panter, ned.deily, python-dev, ronaldoussoren, yselivanov priority: normal severity: normal status: open title: getaddrinfo is thread-safe on NetBSD and OpenBSD versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 07:38:53 2016 From: report at bugs.python.org (=?utf-8?q?Ilja_Everil=C3=A4?=) Date: Mon, 22 Feb 2016 12:38:53 +0000 Subject: [New-bugs-announce] [issue26407] csv.writer.writerows masks exceptions from __iter__ Message-ID: <1456144733.82.0.26880413097.issue26407@psf.upfronthosting.co.za> New submission from Ilja Everilä: When passing a class implementing the dunder __iter__ that raises to csv.writer.writerows, it hides the original exception and raises a TypeError instead.
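For comparison, when one exception is raised while another is being handled in pure Python, the interpreter records the original as `__context__`. The report below shows that the C-level writerows() loses this. A minimal sketch of the normal chaining behavior (the class and messages mirror the reporter's example):

```python
class X:
    def __iter__(self):
        raise RuntimeError("I'm hidden")

# Pure-Python code that raises a TypeError while the RuntimeError is
# still active keeps the original reachable via __context__:
try:
    try:
        list(X())
    except RuntimeError:
        raise TypeError("argument must be iterable")
except TypeError as exc:
    caught = exc

assert isinstance(caught.__context__, RuntimeError)  # implicit chaining
assert caught.__cause__ is None                      # no explicit 'from'
```

This implicit chaining (PEP 3134) is what the reporter expected to see on the TypeError raised by writerows().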
In the raised TypeError the __context__ and __cause__ both are None. For example:

>>> class X:
...     def __iter__(self):
...         raise RuntimeError("I'm hidden")
...
>>> x = X()
>>> list(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __iter__
RuntimeError: I'm hidden
>>> import csv
>>> csv.writer(open('foo.csv', 'w', newline='')).writerows(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: writerows() argument must be iterable
>>> try:
...     csv.writer(open('foo.csv', 'w', newline='')).writerows(x)
... except TypeError as e:
...     e_ = e
>>> e_.__context__ is None
True
>>> e_.__cause__ is None
True

It would seem that for reasons unknown to me either PyObject_GetIter loses the exception context or the call to PyErr_SetString in csv_writerows hides it. ---------- messages: 260673 nosy: Ilja Everilä priority: normal severity: normal status: open title: csv.writer.writerows masks exceptions from __iter__ type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 08:24:39 2016 From: report at bugs.python.org (thefourtheye) Date: Mon, 22 Feb 2016 13:24:39 +0000 Subject: [New-bugs-announce] [issue26408] pep-8 requires few corrections Message-ID: <1456147479.45.0.358658656612.issue26408@psf.upfronthosting.co.za> New submission from thefourtheye: 1. It doesn't have the Reference 4 used anywhere in the doc. 2. The `if` condition is not properly ended in the Programming Recommendation section. 3. Apart from that, a few grammatical changes are necessary, I believe.
---------- assignee: docs at python components: Documentation files: mywork.patch keywords: patch messages: 260676 nosy: docs at python, thefourtheye priority: normal severity: normal status: open title: pep-8 requires few corrections type: enhancement Added file: http://bugs.python.org/file42005/mywork.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 08:36:26 2016 From: report at bugs.python.org (Aivar Annamaa) Date: Mon, 22 Feb 2016 13:36:26 +0000 Subject: [New-bugs-announce] [issue26409] Support latest Tcl/Tk on future versions of Mac installer Message-ID: <1456148186.12.0.689143317593.issue26409@psf.upfronthosting.co.za> New submission from Aivar Annamaa: Currently the Mac installer can create only a Tk 8.5-compatible Tkinter, even if Tcl/Tk 8.6 is installed. Unofficial distributions don't seem to have problems with Tk 8.6 on Mac. I think the official installer should be upgraded as well. The best option for users would be bundling Tcl/Tk 8.6 with the installer, just like the Windows installer does. ---------- components: Installation, Macintosh, Tkinter messages: 260677 nosy: Aivar.Annamaa, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Support latest Tcl/Tk on future versions of Mac installer type: enhancement versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 10:15:16 2016 From: report at bugs.python.org (Devyn Johnson) Date: Mon, 22 Feb 2016 15:15:16 +0000 Subject: [New-bugs-announce] [issue26410] "incompatible pointer type" while compiling Python3.5.1 Message-ID: <1456154116.48.0.737124559195.issue26410@psf.upfronthosting.co.za> New submission from Devyn Johnson: When compiling Python 3.5.1 on Ubuntu 15.10 (64-bit), I see three "incompatible pointer type" warnings (copy-pasted below). I understand that they can be ignored.
However, from my experience with programming (in C STD-2011), "weird bugs" and other odd behavior disappears when warnings like this are fixed. Environment flags (such as CFLAGS) and the configure settings do not appear to influence that warning (i.e. my settings are irrelevant to this bug report). I hope that this bug report is found to be helpful. Also, thank you Python-developers for making and maintaining Python as open-source software. Python/ceval_gil.h: In function drop_gil: Python/ceval_gil.h:181:9: warning: initialization from incompatible pointer type [-Wincompatible-pointer-types] _Py_atomic_store_relaxed(&gil_last_holder, tstate); ^ Python/ceval_gil.h: In function take_gil: Python/ceval_gil.h:243:9: warning: initialization from incompatible pointer type [-Wincompatible-pointer-types] _Py_atomic_store_relaxed(&gil_last_holder, tstate); ^ Python/pystate.c: In function PyThreadState_Swap: Python/pystate.c:509:5: warning: initialization from incompatible pointer type [-Wincompatible-pointer-types] _Py_atomic_store_relaxed(&_PyThreadState_Current, newts); ---------- components: Build messages: 260685 nosy: Devyn Johnson priority: normal severity: normal status: open title: "incompatible pointer type" while compiling Python3.5.1 type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 10:22:59 2016 From: report at bugs.python.org (Devyn Johnson) Date: Mon, 22 Feb 2016 15:22:59 +0000 Subject: [New-bugs-announce] [issue26411] Suggestion concerning compile-time warnings Message-ID: <1456154579.82.0.723903517664.issue26411@psf.upfronthosting.co.za> New submission from Devyn Johnson: I understand that compile-time warnings can typically be ignored. However, from my experience with programming (C STD-2011, for instance), "weird bugs", non-easily-replicable bugs, and odd behaviors disappear when warnings like this are fixed. 
I also understand that it will be time-consuming to fix each and every minor warning. I have also noticed (in my own coding projects) that fixing all warnings generated by -Wextra (and the many other warning flags) allows the compiler to more easily apply various optimizations. ---------- components: Interpreter Core messages: 260686 nosy: Devyn Johnson priority: normal severity: normal status: open title: Suggestion concerning compile-time warnings type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 12:07:59 2016 From: report at bugs.python.org (Payden Comer) Date: Mon, 22 Feb 2016 17:07:59 +0000 Subject: [New-bugs-announce] [issue26412] Segmentation Fault: 11 Message-ID: <1456160879.85.0.401353388041.issue26412@psf.upfronthosting.co.za> New submission from Payden Comer: A "Segmentation Fault: 11" crash occurs on OS X El Capitan when calling `pysodium.sodium.crypto_box_beforenm(clientpub, serverprivatekey)` with the arguments None and serverprivatekey; a traceback disallowing a NoneType argument would be expected instead. Crash log attached.
---------- components: Macintosh files: Python_2016-02-22-110201_Shuros-MacBook-Pro.crash messages: 260690 nosy: ned.deily, ronaldoussoren, thecheater887 priority: normal severity: normal status: open title: Segmentation Fault: 11 type: crash versions: Python 2.7 Added file: http://bugs.python.org/file42006/Python_2016-02-22-110201_Shuros-MacBook-Pro.crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 17:00:20 2016 From: report at bugs.python.org (Mike) Date: Mon, 22 Feb 2016 22:00:20 +0000 Subject: [New-bugs-announce] [issue26413] python 3.5.1 uses wrong registry in system-wide installation Message-ID: <1456178420.73.0.449144850072.issue26413@psf.upfronthosting.co.za> New submission from Mike: The installer for python 3.5.1 (observed with the x64-86 executable installer, assumed to happen with all installers) allows users to install python either just for themselves or do a system-wide installation (provided they have sufficient privileges). However, when selecting a system-wide installation, the uninstall information is registered to a key under HKEY_CURRENT_USER. The result of this is that any user can run python 3.5.1; however, the entry in the "uninstall programs" list shows only for the original user that installed it. This is in contrast to previous versions of python (e.g. 3.4.4) where any user could uninstall it (provided they have sufficient privileges). It is also in contrast to pylauncher of the same version (i.e. 3.5.1) which properly registers itself under HKEY_LOCAL_MACHINE when selected to be installed for all users. 
The key in question is at this path: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall\{b8440650-9dbe-4b7d-8167-6e0e3dcdf5d0} I believe it should be here: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\{b8440650-9dbe-4b7d-8167-6e0e3dcdf5d0} ---------- components: Installation messages: 260700 nosy: mray priority: normal severity: normal status: open title: python 3.5.1 uses wrong registry in system-wide installation type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 17:50:17 2016 From: report at bugs.python.org (John Beck) Date: Mon, 22 Feb 2016 22:50:17 +0000 Subject: [New-bugs-announce] [issue26414] os.defpath too permissive Message-ID: <1456181417.61.0.91952600054.issue26414@psf.upfronthosting.co.za> New submission from John Beck: A bug has been filed against Solaris' internal version of Python, which is largely the same (including in this case) as the base version we get from python.org. The bug is that os.defpath starts with ':' and thus any Python script run with a null PATH environment variable will use the current working directory as its first entry. This is generally considered to be bad practice, and especially dangerous for anyone running with root privileges on a Unix box. So we intend to change Solaris' version of Python to elide this, i.e., to apply the attached patch to our 2.7 version and comparable patches to our 3.4 and 3.5 versions As a precaution, I queried the security list before filing this bug, asking: * Is this intentional? (Seems like it but I couldn't find any documentation to confirm.) * If so, why? (Feel free to point me to any docs I missed.) * If it is intentional, and we were to change our version anyway, do you know of any gotchas we should look out for? There were no regressions when I ran the Python test suite. 
and got the following reply: --- From: Guido van Rossum Date: Sat, 20 Feb 2016 09:29:11 -0800 Subject: Re: [PSRT] os.defpath too permissive Wow. That looks like something really old. I think you can just file an issue with a patch for this at bugs.python.org. I agree that it should be fixed. I don't think there are many users that would be vulnerable, nor do I think that much code would break; the only use in the stdlib has os.environ.get("PATH", os.defpath) so in all practical cases it would get the user's $PATH variable (which is presumably safe) anyway. --- So I am now filing this bug as suggested. ---------- components: Library (Lib) files: 2.7-defpath.patch keywords: patch messages: 260703 nosy: jbeck priority: normal severity: normal status: open title: os.defpath too permissive versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42010/2.7-defpath.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 18:06:20 2016 From: report at bugs.python.org (A. Skrobov) Date: Mon, 22 Feb 2016 23:06:20 +0000 Subject: [New-bugs-announce] [issue26415] Out of memory, trying to parse a 35MB dict Message-ID: <1456182380.09.0.81920177296.issue26415@psf.upfronthosting.co.za> New submission from A. Skrobov: I have a one-line module that assigns a tuple->int dictionary: holo_table = {(0, 0, 0, 0, 0, 0, 1, 41, 61, 66, 89): 9, (0, 0, 0, 70, 88, 98, 103, 131, 147, 119, 93): 4, [35MB skipped], (932, 643, 499, 286, 326, 338, 279, 200, 280, 262, 115): 5} When I try to import this module, Python grinds 100% of my CPU for like half an hour, then ultimately crashes with a MemoryError. How much memory does it need to parse 35MB of data, of a rather simple structure? Attaching the module, zipped to 10MB. ---------- components: Interpreter Core files: crash.zip messages: 260704 nosy: A. 
Skrobov priority: normal severity: normal status: open title: Out of memory, trying to parse a 35MB dict type: resource usage versions: Python 2.7 Added file: http://bugs.python.org/file42011/crash.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 18:06:45 2016 From: report at bugs.python.org (Brett Cannon) Date: Mon, 22 Feb 2016 23:06:45 +0000 Subject: [New-bugs-announce] [issue26416] Deprecate the regex_v8, telco, and spectral_norm benchmarks Message-ID: <1456182405.89.0.810278260076.issue26416@psf.upfronthosting.co.za> New submission from Brett Cannon: In the thread at https://mail.python.org/pipermail/speed/2016-February/000272.html it came up that the regex_v8, telco, and spectral_norm benchmarks are all very inconsistent. That means they should be deprecated. ---------- components: Benchmarks messages: 260705 nosy: brett.cannon, pitrou priority: normal severity: normal stage: needs patch status: open title: Deprecate the regex_v8, telco, and spectral_norm benchmarks type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 22 23:49:12 2016 From: report at bugs.python.org (Ned Deily) Date: Tue, 23 Feb 2016 04:49:12 +0000 Subject: [New-bugs-announce] [issue26417] Default IDLE 2.7.11 configuration files are out-of-sync on OS X framework installs Message-ID: <1456202952.38.0.718952125759.issue26417@psf.upfronthosting.co.za> New submission from Ned Deily: On OS X framework installs, the Mac-specific sub-makefiles do some important tailoring of IDLE's config-extensions.def and config-main.def files, among other things changing some Tk events for more appropriate keyboard bindings (e.g.
" _______________________________________ From report at bugs.python.org Tue Feb 23 04:47:20 2016 From: report at bugs.python.org (renlifeng) Date: Tue, 23 Feb 2016 09:47:20 +0000 Subject: [New-bugs-announce] [issue26418] multiprocessing.pool.ThreadPool eats up memories Message-ID: <1456220840.03.0.74121216833.issue26418@psf.upfronthosting.co.za> New submission from renlifeng: If func creates lots objects and appends them to a list, and runs over and over, pool.map(func...) will eventually eat up all memories. Cleaning the list at the end of func does not help. One can reproduce by running the attached file. By contrast, after replacing ThreadPool with the theading module (see the commented out lines), memory usage will not grow continuously. By the way, I used what's in Debian stretch, i.e. python 2.7.11 and 3.5.1. ---------- components: Library (Lib) files: cvtest5.py messages: 260716 nosy: renlifeng priority: normal severity: normal status: open title: multiprocessing.pool.ThreadPool eats up memories versions: Python 2.7, Python 3.5 Added file: http://bugs.python.org/file42012/cvtest5.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 23 09:24:34 2016 From: report at bugs.python.org (Tatsunori Uchino) Date: Tue, 23 Feb 2016 14:24:34 +0000 Subject: [New-bugs-announce] [issue26420] IDEL for Python 3.5.1 for x64 Windows exits when pasted a string with non-BMP characters Message-ID: <1456237474.93.0.941468320226.issue26420@psf.upfronthosting.co.za> New submission from Tatsunori Uchino: On Windows Server 2012 R2 with Update(which is compatible with Windows 8.1 Update 1), I copied a character U+20BB7 and pasted it on IDLE. After that, IDLE suddenly exited with no messages, notices, or errors. Then, I tried to input some non-BMP charactors by the Google Japanese IME other than ? such as U+1F444, U+1F468 and found that the IDLE exited as the same way. 
I suppose that these phenomena could be due to a failure when treating surrogate pairs in input or pasted strings. ---------- components: IDLE messages: 260736 nosy: Tats.U. priority: normal severity: normal status: open title: IDEL for Python 3.5.1 for x64 Windows exits when pasted a string with non-BMP characters type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 23 09:49:58 2016 From: report at bugs.python.org (yuriy_levchenko) Date: Tue, 23 Feb 2016 14:49:58 +0000 Subject: [New-bugs-announce] [issue26421] string_richcompare invalid check Py_NotImplemented Message-ID: <1456238998.85.0.782373518975.issue26421@psf.upfronthosting.co.za> New submission from yuriy_levchenko: I have an object with the flag Py_TPFLAGS_STRING_SUBCLASS. In stringobject.c (line 1192), in the function string_richcompare, we have the check PyString_Check, but: #define PyString_Check(op) \ PyType_FastSubclass(Py_TYPE(op), Py_TPFLAGS_STRING_SUBCLASS) I successfully pass this check, but my type is not PyStringObject. Maybe this check needs to be replaced with PyString_CheckExact? ---------- messages: 260738 nosy: yuriy_levchenko priority: normal severity: normal status: open title: string_richcompare invalid check Py_NotImplemented type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 23 16:25:46 2016 From: report at bugs.python.org (John Taylor) Date: Tue, 23 Feb 2016 21:25:46 +0000 Subject: [New-bugs-announce] [issue26422] printing 1e23 and up is incorrect Message-ID: <1456262746.09.0.528501038499.issue26422@psf.upfronthosting.co.za> New submission from John Taylor: The print statement does not display accurate results.
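For context on the values the report below shows: `1e23` is a float literal, so `%d` formats the exact integer value of the nearest IEEE-754 double rather than 10**23. This can be checked directly (a minimal sketch; the large constant matches the value printed in the report):

```python
# 10**23 needs more bits than the 53-bit mantissa of an IEEE-754 double,
# so the float literal 1e23 stores the nearest representable value.
assert int(1e22) == 10**22                       # 10**22 is still exact
assert int(1e23) == 99999999999999991611392      # nearest double to 10**23
assert int(1e23) != 10**23
assert "%d" % 1e23 == "99999999999999991611392"  # %d prints the exact float value
```

This points at float representation rather than a defect in print() itself; using integer literals (10**23) avoids the rounding.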
code: print("%35d" % (1e21)) print("%35d" % (1e22)) print("%35d" % (1e23)) print("%35d" % (1e24)) print("%35d" % (1e25)) print("%35d" % (1e26)) print("%35d" % (1e27)) print("%35d" % (1e28)) print("%35d" % (1e29)) print("%35d" % (1e30)) print("%35d" % (1e31)) result: 1000000000000000000000 10000000000000000000000 99999999999999991611392 999999999999999983222784 10000000000000000905969664 100000000000000004764729344 1000000000000000013287555072 9999999999999999583119736832 99999999999999991433150857216 1000000000000000019884624838656 9999999999999999635896294965248 Platforms: Windows 10 x64, Python 3.5.1 Debian 8 (jessie), Python 3.5.1 ---------- messages: 260745 nosy: jftuga priority: normal severity: normal status: open title: printing 1e23 and up is incorrect type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 23 18:04:48 2016 From: report at bugs.python.org (Dave Hibbitts) Date: Tue, 23 Feb 2016 23:04:48 +0000 Subject: [New-bugs-announce] [issue26423] __len__() returns 32 bit int on windows leading to overflows Message-ID: <1456268688.08.0.284414420897.issue26423@psf.upfronthosting.co.za> New submission from Dave Hibbitts: __len__() always returns an int which on windows machines is tied to the size of a c long and is always 32 bits even if it's compiled for 64 bit. len() however returns an int for values less than sys.maxint and a long above that. Returning an int in __len__() causes it to return negative lengths for objects of size greater than sys.maxint, below you can see a quick test on how to reproduce it. And here's an explanation from \u\Rhomboid on Reddit of why we believe the issue happens. "You'll only see that on Windows. The issue is that, confusingly, the range of the Python int type is tied to the range of the C long type. 
On Windows long is always 32 bits even on x64 systems, whereas on Unix systems it's the native machine word size. You can confirm this by checking sys.maxint, which will be 2**31 - 1 even with a 64 bit interpreter on Windows. The difference in behavior of foo.__len__ vs len(foo) is that the former goes through an attribute lookup which goes through the slot lookup stuff, finally ending in Python/typeobject.c:wrap_lenfunc(). The error is casting Py_ssize_t to long, which truncates on Windows x64 as Py_ssize_t is a proper signed 64 bit integer. And then it compounds the injury by creating a Python int object with PyInt_FromLong(), so this is hopelessly broken. In the case of len(foo), you end up in Python/bltinmodule.c:builtin_len() which skips all the attribute lookup stuff and uses the object protocol directly, calling PyObject_Size() and creating a Python object of the correct type via PyInt_FromSsize_t() which figures out whether a Python int or long is necessary. This is definitely a bug that should be reported. In 3.x the int/long distinction is gone and all integers are Python longs, but the bogus cast to a C long still exists in wrap_lenfunc(): return PyLong_FromLong((long)res); That means the bug still exists even though the reason for its existence is gone! Oops. That needs to be updated to get rid of the cast and call PyLong_FromSsize_t()." Python 2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Jul 2 2014, 15:12:11) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Anaconda is brought to you by Continuum Analytics. 
Please check out: http://continuum.io/thanks and https://binstar.org

>>> a = 'a'*2500000000
>>> a.__len__()
-1794967296
>>> len(a)
2500000000L
>>> a = [1]*2500000000
>>> len(a)
2500000000L
>>> a.__len__()
-1794967296

---------- components: Windows messages: 260749 nosy: Dave Hibbitts, georg.brandl, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: __len__() returns 32 bit int on windows leading to overflows type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 04:03:25 2016 From: report at bugs.python.org (MARTA Roggero) Date: Wed, 24 Feb 2016 09:03:25 +0000 Subject: [New-bugs-announce] [issue26424] QPyNullVariant Message-ID: <1456304605.68.0.937952212287.issue26424@psf.upfronthosting.co.za> New submission from MARTA Roggero: Good morning, I use the latest version of QGIS (Lyon) on Mac and have set the default Python version 2.7. Moreover, I have installed all the required Python modules and GDAL.
It was working before, but now, when I click on the "Browse" button in Vector-> Geoprocessing->Union, it returns the following error (actually, it returns this error whenever I click on any "browse" button in QGIS): Error during execution of Python code : TypeError: QgsEncodingFileDialog(QWidget parent=None, QString caption=QString(), QString directory=QString(), QString filter=QString(), QString encoding=QString()): argument 3 has unexpected type 'QPyNullVariant' Traceback (most recent call last): File "/Applications/QGIS.app/Contents/Resources/python/plugins/fTools/tools/doGeoprocessing.py", line 125, in outFile (self.shapefileName, self.encoding) = ftools_utils.saveDialog(self) File "/Applications/QGIS.app/Contents/Resources/python/plugins/fTools/tools/ftools_utils.py", line 327, in saveDialog fileDialog = QgsEncodingFileDialog(parent, QCoreApplication.translate("fTools", "Save output shapefile"), dirName, filtering, encode) TypeError: QgsEncodingFileDialog(QWidget parent=None, QString caption=QString(), QString directory=QString(), QString filter=QString(), QString encoding=QString()): argument 3 has unexpected type 'QPyNullVariant' Python version: 2.7.10 (default, Oct 23 2015, 18:05:06) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] QGIS version: 2.12.1-Lyon Lyon, exported I cannot enter the path manually. It does not allow me to click in the blank field.
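A common workaround for this class of error is to coerce a possibly-null Qt value to a plain string before passing it on. A hedged, Qt-free sketch of the idea follows; the helper name `as_text` is invented here, and the actual fix belongs in the QGIS/fTools plugin code named in the traceback rather than in Python itself:

```python
def as_text(value):
    """Return a plain string for Qt-ish values; None and non-string
    null markers (such as PyQt4's QPyNullVariant) become ''."""
    if value is None or not isinstance(value, str):
        return ''
    return value

# A directory argument sanitized this way is always an acceptable string.
assert as_text(None) == ''
assert as_text('') == ''
assert as_text('/home/user/output.shp') == '/home/user/output.shp'
assert as_text(object()) == ''   # object() stands in for QPyNullVariant
```

Applied to the traceback above, passing `as_text(dirName)` instead of `dirName` would avoid the "argument 3 has unexpected type 'QPyNullVariant'" TypeError.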
Thanks for every answer, Marta ---------- components: Macintosh messages: 260772 nosy: MARTA Roggero, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: QPyNullVariant type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 05:14:39 2016 From: report at bugs.python.org (Konrad) Date: Wed, 24 Feb 2016 10:14:39 +0000 Subject: [New-bugs-announce] [issue26425] 'TypeError: object of type 'NoneType' has no len()' in 'splitdrive' Message-ID: <1456308879.88.0.0764113291168.issue26425@psf.upfronthosting.co.za> New submission from Konrad: Hello, when trying to build the 'bsddb3' extension (from https://pypi.python.org/pypi/bsddb3/6.1.1) which is necessary for Gramps ( https://gramps-project.org/wiki/index.php?title=Install_latest_BSDDB )I get the error 'TypeError: object of type 'NoneType' has no len()'. MinGW32 which I've installed for this purpose seems to work. 
Here is the Traceback: Detected Berkeley DB version 6.0 from db.h running build running build_py running build_ext building 'bsddb3._pybsddb' extension c:\temp\mingw\bin\gcc.exe -mdll -O -Wall -IC:/Temp/BerkelyDB/include/ -Ic:\temp\ python34\include -Ic:\temp\python34\include -c Modules/_bsddb.c -o build\temp.wi n32-3.4\Release\modules\_bsddb.o writing build\temp.win32-3.4\Release\modules\_pybsddb.def Traceback (most recent call last): File "setup3.py", line 527, in 'Programming Language :: Python :: 3.4', File "c:\temp\python34\lib\distutils\core.py", line 148, in setup dist.run_commands() File "c:\temp\python34\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "c:\temp\python34\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "c:\temp\python34\lib\distutils\command\build.py", line 126, in run self.run_command(cmd_name) File "c:\temp\python34\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\temp\python34\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "c:\temp\python34\lib\site-packages\setuptools\command\build_ext.py", line 49, in run _build_ext.run(self) File "c:\temp\python34\lib\distutils\command\build_ext.py", line 339, in run self.build_extensions() File "c:\temp\python34\lib\distutils\command\build_ext.py", line 448, in build_extensions self.build_extension(ext) File "c:\temp\python34\lib\site-packages\setuptools\command\build_ext.py", line 174, in build_extension _build_ext.build_extension(self, ext) File "c:\temp\python34\lib\distutils\command\build_ext.py", line 535, in build_extension target_lang=language) File "c:\temp\python34\lib\distutils\ccompiler.py", line 717, in link_shared_object extra_preargs, extra_postargs, build_temp, target_lang) File "c:\temp\python34\lib\distutils\cygwinccompiler.py", line 248, in link target_lang) File "c:\temp\python34\lib\distutils\unixccompiler.py", line 157, in link libraries) File 
"c:\temp\python34\lib\distutils\ccompiler.py", line 1092, in gen_lib_options opt = compiler.runtime_library_dir_option(dir) File "c:\temp\python34\lib\distutils\unixccompiler.py", line 224, in runtime_library_dir_option compiler = os.path.basename(sysconfig.get_config_var("CC")) File "c:\temp\python34\lib\ntpath.py", line 246, in basename return split(p)[1] File "c:\temp\python34\lib\ntpath.py", line 217, in split d, p = splitdrive(p) File "c:\temp\python34\lib\ntpath.py", line 159, in splitdrive if len(p) > 1: TypeError: object of type 'NoneType' has no len() This behavior possibly relates to other issues (http://bugs.python.org/issue21336, http://bugs.python.org/issue22587) but I'm not sure about it. Sorry for my poor understanding of these things. Thanks, Konrad ---------- components: Windows messages: 260783 nosy: Konrad, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: 'TypeError: object of type 'NoneType' has no len()' in 'splitdrive' type: compile error versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 08:37:42 2016 From: report at bugs.python.org (Jakub Wilk) Date: Wed, 24 Feb 2016 13:37:42 +0000 Subject: [New-bugs-announce] [issue26426] email examples: incorrect use of email.headerregistry.Address Message-ID: <1456321062.13.0.278509193885.issue26426@psf.upfronthosting.co.za> New submission from Jakub Wilk: https://docs.python.org/3/library/email-examples.html#examples-using-the-provisional-api contains the following code: from email.headerregistry import Address ... msg['From'] = Address("Pepé Le Pew", "pepe at example.com") msg['To'] = (Address("Penelope Pussycat", "penelope at example.com"), Address("Fabrette Pussycat", "fabrette at example.com")) But Address takes just the username, not the whole email address, as the second argument. So this should be written as: msg['From'] = Address("Pepé
Le Pew", "pepe", "example.com") ... or: msg['From'] = Address("Pepé Le Pew", addr_spec="pepe at example.com") ... ---------- assignee: docs at python components: Documentation messages: 260796 nosy: docs at python, jwilk priority: normal severity: normal status: open title: email examples: incorrect use of email.headerregistry.Address _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 09:46:26 2016 From: report at bugs.python.org (Victor Halperin) Date: Wed, 24 Feb 2016 14:46:26 +0000 Subject: [New-bugs-announce] [issue26427] w* format in PyArg_ParseTupleAndKeywords for optional argument Message-ID: <1456325186.34.0.457786949535.issue26427@psf.upfronthosting.co.za> New submission from Victor Halperin: Two functions read the format string in Python/getargs.c: convertitem() for present arguments and skipitem() for missing ones. They must agree on the format; in fact, a comment to this effect is written right above convertitem(). Nevertheless, skipitem() only allows the '*' modifier with the 's' and 'z' formats, unlike convertitem(), which correctly processes 'w*'. As a result, 'w*' works if an array argument is supplied, and fails if it is omitted. Suggested fix: add a c == 'w' comparison to the line with *format == '*' in skipitem().
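The asymmetry described here can be modelled in miniature. The following is a pure-Python toy with hypothetical names; the real logic lives in the C functions in Python/getargs.c:

```python
# Toy model of the convertitem()/skipitem() mismatch: one side accepts
# the '*' modifier after 'w', the other only after 's' and 'z'.
def convertitem(fmt):
    # used for arguments that were supplied; accepts 'w*'
    return fmt in ('s*', 'z*', 'w*')

def skipitem(fmt):
    # used for omitted optional arguments; mirrors the reported bug:
    # '*' is only recognised after 's' and 'z'
    if fmt.endswith('*') and fmt[0] not in 'sz':
        raise SystemError("bad format string: %r" % fmt)
    return True

assert convertitem('w*')   # buffer argument supplied: parses fine
try:
    skipitem('w*')         # buffer argument omitted: blows up
except SystemError as e:
    print(e)               # → bad format string: 'w*'
```

The suggested one-line fix corresponds to teaching skipitem() that 'w' may also be followed by '*'.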
---------- components: Interpreter Core files: getargs.c messages: 260807 nosy: VHalperin priority: normal severity: normal status: open title: w* format in PyArg_ParseTupleAndKeywords for optional argument type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file42021/getargs.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 11:40:45 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 24 Feb 2016 16:40:45 +0000 Subject: [New-bugs-announce] [issue26428] The range for xrange() is too narrow Message-ID: <1456332045.09.0.811032800198.issue26428@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The xrange() object works with integers in the range of C long, not Py_ssize_t. Thus the idiomatic expression xrange(len(seq)) can fail for real sequences if sys.maxint < sys.maxsize (e.g on 64-bit Windows). Proposed patch changes the xrange() implementation to use Py_ssize_t instead of C long. ---------- components: Interpreter Core files: xrange_py_ssize_t.patch keywords: patch messages: 260816 nosy: benjamin.peterson, haypo, mark.dickinson, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: The range for xrange() is too narrow type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file42022/xrange_py_ssize_t.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 12:46:08 2016 From: report at bugs.python.org (Chaitanya Mannem) Date: Wed, 24 Feb 2016 17:46:08 +0000 Subject: [New-bugs-announce] [issue26429] os.path.dirname returns empty string instead of "." 
when file is in current directory Message-ID: <1456335968.05.0.295325544141.issue26429@psf.upfronthosting.co.za> New submission from Chaitanya Mannem: Don't know if this is for Windows compatibility or whatever, but I think it makes more sense if os.path.dirname would return "." if the file passed in was in the current directory. ---------- components: Library (Lib) messages: 260821 nosy: Chaitanya Mannem priority: normal severity: normal status: open title: os.path.dirname returns empty string instead of "." when file is in current directory versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 14:13:36 2016 From: report at bugs.python.org (quetzal) Date: Wed, 24 Feb 2016 19:13:36 +0000 Subject: [New-bugs-announce] [issue26430] quote marks problem on loaded file Message-ID: <1456341216.28.0.891493532956.issue26430@psf.upfronthosting.co.za> New submission from quetzal: It is impossible to match a string pattern containing a quotation mark in an opened file:

[code]
file = '''it's an opened file made of "strings"'''
if '''"string"''' in file:
____print('ok')
else:
____print('none')
[/code]

In IDLE this works with no difficulties... but if the file comes from a file = open(fileadress).read(), the same code returns nothing...
It does not find the pattern string, apparently just because of the double quote marks (I've not tried with other marks). For me it's a bug, because it works well if there are no quotation marks. ---------- messages: 260826 nosy: quetzal priority: normal severity: normal status: open title: quote marks problem on loaded file type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 24 22:37:08 2016 From: report at bugs.python.org (Greg McCoy) Date: Thu, 25 Feb 2016 03:37:08 +0000 Subject: [New-bugs-announce] [issue26431] string template substitute tests Message-ID: <1456371428.77.0.845294692706.issue26431@psf.upfronthosting.co.za> New submission from Greg McCoy: Wrote tests for string template substitute. ---------- components: Tests files: mywork.patch keywords: patch messages: 260837 nosy: Greg McCoy priority: normal severity: normal status: open title: string template substitute tests type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42023/mywork.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 03:16:04 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Feb 2016 08:16:04 +0000 Subject: [New-bugs-announce] [issue26432] Add partial.kwargs Message-ID: <1456388164.52.0.992584911868.issue26432@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Most Python classes that expose a dictionary of keyword arguments as an attribute name this attribute "kwargs": concurrent.futures.process._CallItem, concurrent.futures.process._WorkItem, concurrent.futures.thread._WorkItem, inspect.BoundArguments, sched.Event, threading.Timer, unittest.mock._patch, weakref.finalize._Info. The only exceptions are contextlib._GeneratorContextManager with the "kwds" attribute and functools.partial and functools.partialmethod with the "keywords"
attribute. The proposed patch adds the "kwargs" alias to the "keywords" attribute in functools.partial and functools.partialmethod. There are precedents for adding aliases in the stdlib. ---------- components: Library (Lib) files: partial_kwargs.patch keywords: patch messages: 260843 nosy: gvanrossum, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add partial.kwargs type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42024/partial_kwargs.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 04:48:39 2016 From: report at bugs.python.org (=?utf-8?q?Thomas_G=C3=BCttler?=) Date: Thu, 25 Feb 2016 09:48:39 +0000 Subject: [New-bugs-announce] [issue26433] urllib.urlencode() does not explain how to handle unicode Message-ID: <1456393719.44.0.187138588042.issue26433@psf.upfronthosting.co.za> New submission from Thomas Güttler: The current docs for Python 2 don't explain how to handle unicode: https://docs.python.org/2/library/urllib.html#urllib.urlencode It seems to be a common problem.
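For contrast: the report concerns the Python 2 documentation, but in Python 3 urllib.parse.urlencode accepts str values directly and percent-encodes their UTF-8 bytes by default, so a short sketch of the py3 behaviour may be useful background:

```python
from urllib.parse import urlencode

# Python 3: str values are encoded as UTF-8 before percent-encoding.
print(urlencode({'q': 'caf\u00e9'}))  # → q=caf%C3%A9

# In Python 2 the equivalent needed manual encoding first, roughly:
# urllib.urlencode({'q': u'caf\xe9'.encode('utf-8')})
```

The Python 2 docs could document exactly that manual-encoding step.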
See http://stackoverflow.com/questions/6480723/urllib-urlencode-doesnt-like-unicode-values-how-about-this-workaround It would be nice to document the most pythonic way to handle unicode in urllib.urlencode() ---------- assignee: docs at python components: Documentation messages: 260845 nosy: Thomas Güttler, docs at python priority: normal severity: normal status: open title: urllib.urlencode() does not explain how to handle unicode versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 04:56:47 2016 From: report at bugs.python.org (Marc Schlaich) Date: Thu, 25 Feb 2016 09:56:47 +0000 Subject: [New-bugs-announce] [issue26434] multiprocessing cannot spawn grandchild from a Windows service Message-ID: <1456394207.49.0.123218861266.issue26434@psf.upfronthosting.co.za> New submission from Marc Schlaich: This is a follow-up of #5162. There are some occasions where you can still run into this issue. One example is if you want to spawn a new multiprocessing.Process as a child of a multiprocessing.Process: pythonservice.exe - multiprocessing.Process - multiprocessing.Process (does not start!) Attached is a test case.
If you run this in pywin32 service debug mode, you see that the process crashes with:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python27\lib\multiprocessing\forking.py", line 380, in main
    prepare(preparation_data)
  File "C:\Python27\lib\multiprocessing\forking.py", line 503, in prepare
    file, path_name, etc = imp.find_module(main_name, dirs)
ImportError: No module named PythonService

In get_preparation_data is the following state:

WINSERVICE: False
WINEXE: False
_python_exe: C:\Python27\python.exe

And so you get as preparation data:

{'authkey': '...', 'sys_path': [...], 'name': 'test', 'orig_dir': '...', 'sys_argv': ['C:\\Python27\\lib\\site-packages\\win32\\PythonService.exe'], 'main_path': 'C:\\Python27\\lib\\site-packages\\win32\\PythonService.exe', 'log_to_stderr': False}

A workaround for me is patching `get_preparation_data` as follows:

import multiprocessing.forking

_org_get_preparation_data = multiprocessing.forking.get_preparation_data

def _get_preparation_data(*args):
    data = _org_get_preparation_data(*args)
    main_path = data.get('main_path')
    if main_path is not None and main_path.endswith('exe'):
        data.pop('main_path')
    return data

multiprocessing.forking.get_preparation_data = _get_preparation_data

BTW, the test case does not run on Python 3.5, but starting the service manually did work. So this is probably a Python 2 issue only.
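Outside of a service, the nested pattern itself is unremarkable; a minimal sketch of the process tree that fails under pythonservice.exe (this reproduces the shape only, not the service environment):

```python
import multiprocessing as mp

def grandchild(q):
    q.put('grandchild alive')

def child(q):
    # a Process spawning another Process: the tree
    # pythonservice.exe -> Process -> Process from the report
    p = mp.Process(target=grandchild, args=(q,))
    p.start()
    p.join()

if __name__ == '__main__':
    q = mp.Queue()
    p = mp.Process(target=child, args=(q,))
    p.start()
    p.join()
    print(q.get())
```

Run normally this prints the grandchild's message; under a service wrapper on Python 2, the grandchild dies in prepare() as shown in the traceback above.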
---------- components: Library (Lib), Windows files: test_mp_service.py messages: 260846 nosy: paul.moore, schlamar, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: multiprocessing cannot spawn grandchild from a Windows service type: crash versions: Python 2.7 Added file: http://bugs.python.org/file42025/test_mp_service.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 05:09:00 2016 From: report at bugs.python.org (Jakub Stasiak) Date: Thu, 25 Feb 2016 10:09:00 +0000 Subject: [New-bugs-announce] [issue26435] Fix versionadded/versionchanged documentation directives Message-ID: <1456394940.93.0.16954708831.issue26435@psf.upfronthosting.co.za> New submission from Jakub Stasiak: A double colon seems to be required for a directive to work; please find a patch attached. ---------- assignee: docs at python components: Documentation files: typos.patch keywords: patch messages: 260848 nosy: docs at python, jstasiak priority: normal severity: normal status: open title: Fix versionadded/versionchanged documentation directives type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42027/typos.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 07:17:37 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Feb 2016 12:17:37 +0000 Subject: [New-bugs-announce] [issue26436] Add the regex-dna benchmark Message-ID: <1456402657.17.0.694301948635.issue26436@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The proposed patch adds the regex-dna benchmark from The Computer Language Benchmarks Game (http://benchmarksgame.alioth.debian.org/). This is an artificial but well-known benchmark. The patch is based on the regex-dna Python 3 #5 program and the fasta Python 3 #3 program (for generating input).
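As a flavour of the workload this benchmark exercises (a tiny stand-in; the real code is in the attached bm_regex_dna.patch and runs over generated fasta input), the core step counts occurrences of variant patterns in a DNA string with the re module:

```python
import re

# Tiny stand-in sequence instead of the generated fasta input.
seq = 'agggtaaatttaccct'

# Two of the benchmark's variant patterns.
variants = ['agggtaaa|tttaccct', '[cgt]gggtaaa|tttaccc[acg]']
for v in variants:
    print(v, len(re.findall(v, seq, re.IGNORECASE)))
```

On this toy sequence the first pattern matches twice and the second not at all; the benchmark repeats this kind of scan over megabytes of sequence data.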
---------- components: Benchmarks files: bm_regex_dna.patch keywords: patch messages: 260854 nosy: brett.cannon, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add the regex-dna benchmark type: enhancement Added file: http://bugs.python.org/file42028/bm_regex_dna.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 08:51:59 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Thu, 25 Feb 2016 13:51:59 +0000 Subject: [New-bugs-announce] [issue26437] asyncio create_server() not always accepts the 'port' parameter as str Message-ID: <1456408319.2.0.804697554696.issue26437@psf.upfronthosting.co.za> New submission from Xavier de Gaye: create_server() used to accept the 'port' parameter as a string before in all cases (until last december at least). The following session shows the difference in behavior when the listening address is INADDR_ANY and '127.0.0.1': ============================ $ python Python 3.6.0a0 (default:47fa003aa9f1, Feb 24 2016, 13:09:02) [GCC 5.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> from asyncio import *
>>> loop = get_event_loop()
>>> coro = loop.create_server(Protocol(), '', '12345')
>>> loop.run_until_complete(coro)
<Server sockets=[..., ...]>
>>> coro = loop.create_server(Protocol(), '127.0.0.1', '12345')
>>> loop.run_until_complete(coro)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 373, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.6/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/local/lib/python3.6/asyncio/tasks.py", line 240, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 946, in create_server
    sock.bind(sa)
TypeError: an integer is required (got type str)
============================
IMHO Python should consistently either accept 'port' as str or require 'port' as int. ---------- components: Library (Lib) messages: 260857 nosy: giampaolo.rodola, gvanrossum, haypo, pitrou, xdegaye, yselivanov priority: normal severity: normal status: open title: asyncio create_server() not always accepts the 'port' parameter as str type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 08:57:57 2016 From: report at bugs.python.org (heha37) Date: Thu, 25 Feb 2016 13:57:57 +0000 Subject: [New-bugs-announce] [issue26438] Complete your registration to Python tracker -- key4g5ti2VWPYCbHm4wBLtl92LNQ9nzndqi Message-ID: New submission from heha37: is ok ---Original--- From: "Python tracker" Date: 2016/2/25 21:55:19 To: "zhanghanqun"; Subject: Complete your registration to Python tracker -- key4g5ti2VWPYCbHm4wBLtl92LNQ9nzndqi To complete your registration of the user "heha37" with Python tracker, please do one of the following: - send a reply to report at bugs.python.org and maintain the subject line as is (the reply's additional "Re:" is ok), - or visit the following URL:
http://bugs.python.org/?@action=confrego&otk=4g5ti2VWPYCbHm4wBLtl92LNQ9nzndqi ---------- messages: 260858 nosy: heha37 priority: normal severity: normal status: open title: Complete your registration to Python tracker -- key4g5ti2VWPYCbHm4wBLtl92LNQ9nzndqi _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 10:10:23 2016 From: report at bugs.python.org (Michael Felt) Date: Thu, 25 Feb 2016 15:10:23 +0000 Subject: [New-bugs-announce] [issue26439] ctypes.util.find_library fails ALWAYS when gcc is not used Message-ID: <1456413023.18.0.874865601403.issue26439@psf.upfronthosting.co.za> New submission from Michael Felt: I have successful enough with python 2.7.10 (for building cloud-init), including it finding openssl libraries during the installation od setuptools (before installing pip). I have also been able to assemble saltstack - BUT - salt-master and salt-minion fail to start because ctypes.util.find_library() always returns 'None'. ======== EXCERPT ==== File "/opt/lib/python2.7/site-packages/salt/crypt.py", line 37, in import salt.utils.rsax931 File "/opt/lib/python2.7/site-packages/salt/utils/rsax931.py", line 69, in libcrypto = _init_libcrypto() File "/opt/lib/python2.7/site-packages/salt/utils/rsax931.py", line 47, in _init_libcrypto libcrypto = _load_libcrypto() File "/opt/lib/python2.7/site-packages/salt/utils/rsax931.py", line 40, in _load_libcrypto raise OSError('Cannot locate OpenSSL libcrypto') OSError: Cannot locate OpenSSL libcrypto ======= I built python using the IBM compiler, and my images do not have /sbin/ldconfig installed so the assumption that /sbin/ldconfig is always installed is "a bug". 
in the util.py file the code reached is:

else:

    def _findSoname_ldconfig(name):
        import struct
        if struct.calcsize('l') == 4:
            machine = os.uname()[4] + '-32'
        else:
            machine = os.uname()[4] + '-64'
        mach_map = {
            'x86_64-64': 'libc6,x86-64',
            'ppc64-64': 'libc6,64bit',
            'sparc64-64': 'libc6,64bit',
            's390x-64': 'libc6,64bit',
            'ia64-64': 'libc6,IA-64',
            }
        abi_type = mach_map.get(machine, 'libc6')

        # XXX assuming GLIBC's ldconfig (with option -p)
        expr = r'\s+(lib%s\.[^\s]+)\s+\(%s' % (re.escape(name), abi_type)
        f = os.popen('/sbin/ldconfig -p 2>/dev/null')
        try:
            data = f.read()
        finally:
            f.close()
        res = re.search(expr, data)
        if not res:
            return None
        return res.group(1)

    def find_library(name):
        return _findSoname_ldconfig(name) or _get_soname(_findLib_gcc(name))

(I have not researched _get_soname or _findLib_gcc, but neither of these "feels right": AIX, by default, does not end library archives with .so; archives end with .a and archive members frequently end with .so.) That this has not been reported more frequently may be because Python programmers are avoiding ctypes when portability is essential. I hope that, just as for Solaris - where an alternate program is used - AIX can have a block guarded by os.name == "posix" and sys.platform.startswith("aix"), so that if ldconfig is not available the command /usr/bin/dump could be used instead, and/or LIBPATH could be searched (when not empty), and/or ldd (for programs). Ideally, /sbin/ldconfig would not be needed at all!
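A minimal sketch of the kind of fallback suggested here (a hypothetical helper, not actual stdlib code): search the LIBPATH directories for lib<name>.a archives instead of assuming /sbin/ldconfig exists:

```python
import os

def find_library_aix(name):
    """Hypothetical AIX fallback for ctypes.util.find_library():
    scan LIBPATH for lib<name>.a archives rather than shelling
    out to /sbin/ldconfig, which AIX does not ship."""
    search = os.environ.get('LIBPATH', '/usr/lib:/lib')
    for d in search.split(':'):
        candidate = os.path.join(d, 'lib%s.a' % name)
        if os.path.exists(candidate):
            return candidate
    return None

print(find_library_aix('no-such-library'))  # → None
```

A fuller version could additionally run /usr/bin/dump -H on each candidate archive (as in the output below) to pick the right 32-bit or 64-bit member.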
dump output: root at x064:[/data/prj/gnu/bashRC1-4.4]dump -Xany -H /opt/bin/python /opt/bin/python: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x000005ac 0x000035e3 0x0000006e #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000005 0x00030ee4 0x00006772 0x00030f52 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/vac/lib:/usr/lib:/lib 1 libc.a shr.o 2 libpthreads.a shr_xpg5.o 3 libpthreads.a shr_comm.o 4 libdl.a shr.o root at x064:[/usr/bin]dump -Xany -H /usr/lib/libcrypto.a /usr/lib/libcrypto.a[libcrypto.so]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000ff9 0x0000498a 0x00000038 #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x0004f1f0 0x00014636 0x0004f228 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr.o 2 libpthreads.a shr_xpg5.o /usr/lib/libcrypto.a[libcrypto.so.0.9.8]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000c4e 0x00004312 0x00000038 #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x00044c48 0x0000f236 0x00044c80 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr.o 2 libpthreads.a shr_xpg5.o /usr/lib/libcrypto.a[libcrypto.so.1.0.0]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000ff9 0x0000498a 0x00000038 #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x0004f1f0 0x00014636 0x0004f228 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr.o 2 libpthreads.a shr_xpg5.o /usr/lib/libcrypto.a[libcrypto64.so]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000ff2 0x00004987 0x0000003e #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x00061758 0x00014e03 0x00061796 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr_64.o 2 libpthreads.a shr_xpg5_64.o 
/usr/lib/libcrypto.a[libcrypto64.so.0.9.8]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000c47 0x0000430d 0x0000003e #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x000557b0 0x0000f9e7 0x000557ee ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr_64.o 2 libpthreads.a shr_xpg5_64.o /usr/lib/libcrypto.a[libcrypto64.so.1.0.0]: ***Loader Section*** Loader Header Information VERSION# #SYMtableENT #RELOCent LENidSTR 0x00000001 0x00000ff2 0x00004987 0x0000003e #IMPfilID OFFidSTR LENstrTBL OFFstrTBL 0x00000003 0x00061758 0x00014e03 0x00061796 ***Import File Strings*** INDEX PATH BASE MEMBER 0 /usr/lib:/lib 1 libc.a shr_64.o 2 libpthreads.a shr_xpg5_64.o root at x064:[/usr/bin] root at x064:[/usr/bin]echo $? 0 ldd outpath: root at x064:[/usr/bin]ldd /opt/bin/python /opt/bin/python needs: /usr/lib/libc.a(shr.o) /usr/lib/libpthreads.a(shr_xpg5.o) /usr/lib/libpthreads.a(shr_comm.o) /usr/lib/libdl.a(shr.o) /unix /usr/lib/libcrypt.a(shr.o) root at x064:[/usr/bin]ldd /usr/lib/libcrypto.a ldd: /usr/lib/libcrypto.a: File is an archive. root at x064:[/usr/bin]echo $? 2 ---------- components: ctypes messages: 260861 nosy: Michael.Felt priority: normal severity: normal status: open title: ctypes.util.find_library fails ALWAYS when gcc is not used type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 11:49:19 2016 From: report at bugs.python.org (Bill Lee) Date: Thu, 25 Feb 2016 16:49:19 +0000 Subject: [New-bugs-announce] [issue26440] tarfile._FileInFile.seekable is broken in stream mode Message-ID: <1456418959.65.0.0948342534374.issue26440@psf.upfronthosting.co.za> New submission from Bill Lee: Description =========== With a file object, retrieved by the `extractfile` method of a TarFile object opened in stream mode, calling its `seekable` method will raise an AttributeError. 
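The fix this report goes on to propose amounts to giving the stream wrapper a seekable() method; an illustrative sketch (tarfile._Stream is a private class, so a stand-in is used here):

```python
class StreamSketch:
    """Stand-in for tarfile._Stream (hypothetical, for illustration)."""

    def seekable(self):
        # Stream-mode ('r|') tar members cannot seek backwards,
        # so report non-seekable instead of raising AttributeError.
        return False

print(StreamSketch().seekable())  # → False
```

With such a method present, code that probes file objects via seekable() would get False rather than a crash.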
How to Reproduce
================

cat > seekable.py << EOF
import sys
import tarfile
tar = tarfile.open(fileobj=sys.stdin.buffer, mode='r|')
contentFile = tar.extractfile(tar.next())
print(contentFile.seekable())
EOF
tar -cf test.tar seekable.py
python seekable.py < test.tar

Traceback
=========

Traceback (most recent call last):
  File "seekable.py", line 5, in <module>
    print(contentFile.seekable())
  File "/usr/local/lib/python3.5/tarfile.py", line 649, in seekable
    return self.fileobj.seekable()

How to Fix
==========

I think that adding a seekable() method, which always returns False, to tarfile._Stream will work. ---------- components: Library (Lib) messages: 260866 nosy: Bill Lee priority: normal severity: normal status: open title: tarfile._FileInFile.seekable is broken in stream mode type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 25 17:48:00 2016 From: report at bugs.python.org (Martin Panter) Date: Thu, 25 Feb 2016 22:48:00 +0000 Subject: [New-bugs-announce] [issue26441] email.charset: to_splittable and from_splittable are not there anymore! Message-ID: <1456440480.97.0.346838980393.issue26441@psf.upfronthosting.co.za> New submission from Martin Panter: In /Doc/library/email.charset.rst, the descriptions of the to_splittable() and from_splittable() methods are disabled. The comment saying they are not there is presumably referring to the fact that the methods were removed from the Python 3 implementation. They were removed in r57697, and the documentation was adjusted in revision f65de36f185c. Perhaps the two descriptions should just be deleted from the documentation? ---------- components: email messages: 260883 nosy: barry, martin.panter, r.david.murray priority: normal severity: normal status: open title: email.charset: to_splittable and from_splittable are not there anymore!
versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 06:31:39 2016 From: report at bugs.python.org (ProgVal) Date: Fri, 26 Feb 2016 11:31:39 +0000 Subject: [New-bugs-announce] [issue26442] Doc refers to xmlrpc.client but means xmlrpc.server Message-ID: <1456486299.23.0.764792855205.issue26442@psf.upfronthosting.co.za> New submission from ProgVal: The docs of xmlrpc.server and xmlrpc.client both warn about XML vulnerabilities. However, both say "The xmlrpc.client module is not secure", whereas the page for xmlrpc.server should say xmlrpc.server. ---------- assignee: docs at python components: Documentation messages: 260892 nosy: Valentin.Lorentz, docs at python priority: normal severity: normal status: open title: Doc refers to xmlrpc.client but means xmlrpc.server versions: Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 10:19:33 2016 From: report at bugs.python.org (=?utf-8?q?Martin_Hundeb=C3=B8ll?=) Date: Fri, 26 Feb 2016 15:19:33 +0000 Subject: [New-bugs-announce] [issue26443] cross building extensions picks up host headers Message-ID: <1456499973.95.0.823052816124.issue26443@psf.upfronthosting.co.za> New submission from Martin Hundebøll: When cross-building Python, the building of extensions is called with -I/usr/include and -L/usr/lib. This makes some extensions fail to compile due to picking up inline assembly from host headers (e.g. _socket[1]). I have fixed this locally by applying the attached patch, but I cannot tell if that would make other builds fail.
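The shape of the fix can be sketched as filtering out bare host directories that do not live under the target sysroot (a hypothetical helper; the actual change is in the attached include-dirs.patch):

```python
def drop_host_dirs(dirs, sysroot):
    """Hypothetical cross-compile guard: discard host header/library
    directories such as /usr/include unless they sit under the
    cross-toolchain's sysroot."""
    host_dirs = ('/usr/include', '/usr/lib')
    return [d for d in dirs
            if d not in host_dirs or d.startswith(sysroot)]

dirs = ['/usr/include', '/sysroot/usr/include', './Include']
print(drop_host_dirs(dirs, '/sysroot'))  # → ['/sysroot/usr/include', './Include']
```

Only sysroot-relative and in-tree include paths survive, which is exactly what prevents the host's inline assembly (byteswap-16.h below) from reaching the cross compiler.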
[1] log output: building '_socket' extension arm-cortexa9neon-linux-gnueabi-gcc -fPIC -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/stage/machine/usr/lib/libffi-3.2.1/include -O2 -fexpensive-optimizations -fomit-frame-pointer -frename-registers -Werror=declaration-after-statement -I./Include -I/usr/include -I. -IInclude -I/home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/stage/cross/bin/../arm-cortexa9neon-linux-gnueabi/sysroot/usr/include -I/home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Include -I/home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1 -c /home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Modules/socketmodule.c -o build/temp.linux-arm-3.5/home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Modules/socketmodule.o In file included from /usr/include/bits/byteswap.h:35:0, from /usr/include/endian.h:60, from /usr/include/bits/string2.h:51, from /usr/include/string.h:635, from ./Include/Python.h:30, from /home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Modules/socketmodule.c:95: /home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Modules/socketmodule.c: In function 'socket_getservbyport': /usr/include/bits/byteswap-16.h:31:5: error: invalid 'asm': invalid operand for code 'w' __asm__ ("rorw $8, %w0" \ ^ /usr/include/netinet/in.h:403:21: note: in expansion of macro '__bswap_16' # define htons(x) __bswap_16 (x) ^ /home/mnhu/projects/pil/oe/tmp/work/machine/arm-cortexa9neon-linux-gnueabi/python-3.5.1/src/Python-3.5.1/Modules/socketmodule.c:4861:24: note: in 
expansion of macro 'htons' sp = getservbyport(htons((short)port), proto); ^ ---------- components: Cross-Build files: include-dirs.patch keywords: patch messages: 260894 nosy: hundeboll priority: normal severity: normal status: open title: cross building extensions picks up host headers type: compile error versions: Python 2.7, Python 3.5 Added file: http://bugs.python.org/file42032/include-dirs.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 11:58:49 2016 From: report at bugs.python.org (Ismail s) Date: Fri, 26 Feb 2016 16:58:49 +0000 Subject: [New-bugs-announce] [issue26444] Fix 2 typos on ElementTree docs Message-ID: <1456505929.01.0.140412198941.issue26444@psf.upfronthosting.co.za> New submission from Ismail s: 'incrementall' has been changed to 'incrementally' (and text reflowed). 'keywword' has been changed to 'keyword'. ---------- assignee: docs at python components: Documentation files: work.patch keywords: patch messages: 260897 nosy: Ismail s, docs at python priority: normal severity: normal status: open title: Fix 2 typos on ElementTree docs type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42033/work.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 13:26:33 2016 From: report at bugs.python.org (glep) Date: Fri, 26 Feb 2016 18:26:33 +0000 Subject: [New-bugs-announce] [issue26445] setup.py sdist mishandles package_dir option Message-ID: <1456511193.21.0.2161734334.issue26445@psf.upfronthosting.co.za> New submission from glep: Suppose I have a setup.py with the option ... packages=['package', 'package.utils'], package_dir={'package.utils': '../utils'}, ... as would arise if ../utils was a package shared between several projects ('package', 'package1', ...).
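Spelled out, that configuration might look like the following (a sketch with a hypothetical project layout; note that package_dir is a dict mapping a package name to its source directory):

```python
# Hypothetical layout assumed:
#   project/setup.py      <- this file
#   project/package/      <- the main package
#   utils/                <- shared code one level up, exposed as package.utils
config = dict(
    name="package",
    version="0.1",
    packages=["package", "package.utils"],
    # package_dir maps package names to source directories.
    package_dir={"package.utils": "../utils"},
)

# A real setup.py would finish with:
#   from distutils.core import setup
#   setup(**config)
```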
I would expect the source distribution created by 'python setup.py sdist' to create an archive with the following structure: / |-- package/ |-- package/utils And this is indeed what *bdist* does. BUT *sdist* copies '../utils' in 'package/../utils' instead of 'package/utils', with the result that utils is outside of the distribution altogether. The issue is referenced in a couple of StackOverflow posts that have attracted little attention so far, for example: http://stackoverflow.com/questions/35510972/inconsistent-behaviour-of-bdist-vs-sdist-when-distributing-a-python-package ---------- components: Distutils messages: 260904 nosy: dstufft, eric.araujo, glep priority: normal severity: normal status: open title: setup.py sdist mishandles package_dir option type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 14:24:27 2016 From: report at bugs.python.org (Brett Cannon) Date: Fri, 26 Feb 2016 19:24:27 +0000 Subject: [New-bugs-announce] [issue26446] Mention in the devguide that core devs are expected to follow the PSF CoC Message-ID: <1456514667.83.0.674412678031.issue26446@psf.upfronthosting.co.za> New submission from Brett Cannon: It should be mentioned in the devguide that receiving one's core developer privileges includes following the PSF CoC (https://www.python.org/psf/codeofconduct/). 
---------- components: Devguide messages: 260907 nosy: brett.cannon, ezio.melotti, willingc priority: normal severity: normal stage: needs patch status: open title: Mention in the devguide that core devs are expected to follow the PSF CoC _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 15:44:07 2016 From: report at bugs.python.org (=?utf-8?b?SmVyb2Qg4oCcajNyZOKAnSBHYXduZQ==?=) Date: Fri, 26 Feb 2016 20:44:07 +0000 Subject: [New-bugs-announce] [issue26447] rstrip() is pilfering my 'p' Message-ID: <1456519447.91.0.0851895930382.issue26447@psf.upfronthosting.co.za> New submission from Jerod “j3rd” Gawne: Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32 In[4]: str = 'aaaaaaap.py' In[5]: print(str.rstrip('.py')) aaaaaaa In[6]: str = 'aaaaaaap.pdf' In[7]: print(str.rstrip('.pdf')) aaaaaaa In[8]: str = 'aaaaaaab.pdf' In[9]: print(str.rstrip('.pdf')) aaaaaaab In[10]: str = 'apapapab.pdf' In[11]: print(str.rstrip('.pdf')) apapapab In[12]: str = 'apapapap.pdf' In[13]: print(str.rstrip('.pdf')) apapapa what's with the 'p' pilfering? In[14]: str = 'apapapab.bdf' In[15]: print(str.rstrip('.bdf')) apapapa In[16]: print(str.rstrip(r'.bdf')) apapapa In[18]: print(str.rstrip('\.bdf')) apapapa Actually though, it's grabbing an additional character before the '.' the same as the one after. ---------- messages: 260908 nosy: Jerod “j3rd”
Gawne priority: normal severity: normal status: open title: rstrip() is pilfering my 'p' type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 26 22:54:10 2016 From: report at bugs.python.org (Eric Fahlgren) Date: Sat, 27 Feb 2016 03:54:10 +0000 Subject: [New-bugs-announce] [issue26448] dis.findlabels ignores EXTENDED_ARG Message-ID: <1456545250.68.0.0525262950792.issue26448@psf.upfronthosting.co.za> New submission from Eric Fahlgren: When trying out dis.dis on some synthetically long functions, I noted that spurious branch targets were being generated in the output. First one is at address 8: 157 0 LOAD_CONST 1 (1) 3 DUP_TOP 4 STORE_FAST 0 (z) 7 DUP_TOP >> 8 STORE_FAST 1 (a) 11 DUP_TOP I dug into findlabels and noticed that it pays no attention to EXTENDED_ARG. The fix is pretty simple, basically copy pasta from dis._get_instructions_bytes, at line 369, in the 3.5.1 release code add all the "extended_arg" bits: extended_arg = 0 while i < n: op = code[i] i = i+1 if op >= HAVE_ARGUMENT: arg = code[i] + code[i+1]*256 + extended_arg extended_arg = 0 i = i+2 if op == EXTENDED_ARG: extended_arg = arg*65536 label = -1 ---------- components: Library (Lib) messages: 260913 nosy: eric.fahlgren priority: normal severity: normal status: open title: dis.findlabels ignores EXTENDED_ARG type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 27 04:15:41 2016 From: report at bugs.python.org (Martijn Pieters) Date: Sat, 27 Feb 2016 09:15:41 +0000 Subject: [New-bugs-announce] [issue26449] Tutorial on Python Scopes and Namespaces uses confusing 'read-only' terminology Message-ID: <1456564541.09.0.218031979167.issue26449@psf.upfronthosting.co.za> New submission from Martijn Pieters: From the 9.2.
Python Scopes and Namespaces section: > If a name is declared global, then all references and assignments go directly to the middle scope containing the module's global names. To rebind variables found outside of the innermost scope, the nonlocal statement can be used; if not declared nonlocal, those variables are read-only (an attempt to write to such a variable will simply create a new local variable in the innermost scope, leaving the identically named outer variable unchanged). This terminology is extremely confusing to newcomers; see https://stackoverflow.com/questions/35667757/read-only-namespace-in-python for an example. Variables are never read-only. The parent scope name simply is *not visible*, which is an entirely different concept. Can this section be re-written to not use the term 'read-only'? ---------- messages: 260933 nosy: mjpieters priority: normal severity: normal status: open title: Tutorial on Python Scopes and Namespaces uses confusing 'read-only' terminology _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 27 21:31:06 2016 From: report at bugs.python.org (Alex LordThorsen) Date: Sun, 28 Feb 2016 02:31:06 +0000 Subject: [New-bugs-announce] [issue26450] make html fails on OSX Message-ID: <1456626666.41.0.889799405397.issue26450@psf.upfronthosting.co.za> New submission from Alex LordThorsen: Did a fresh hg clone of the python 3.6 code base (3.6.0a0) on OSX 10.10.5 and made a modification to a library rst. I went to build my changes and ran $ make html sphinx-build -b html -d build/doctrees -D latex_paper_size= . build/html make: sphinx-build: No such file or directory make: *** [build] Error 1 I looked at https://docs.python.org/devguide/documenting.html#building-doc to see about what I can do about this failure and didn't see anything OSX specific. $ sphinx-build -b html . build/html Works, however.
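One plausible diagnosis (an assumption, not confirmed in the report) is that make runs a shell whose PATH lacks sphinx-build even though an interactive shell finds it. That can be checked directly:

```shell
# Look up sphinx-build the way make(1) would, via PATH.
if command -v sphinx-build >/dev/null 2>&1; then
    status="found"
else
    # Assumed remedy; the exact install command depends on the setup.
    status="missing (try: pip install --user sphinx, then re-run make html)"
fi
echo "sphinx-build: $status"
```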
---------- components: Build messages: 260954 nosy: Alex.Lord priority: normal severity: normal status: open title: make html fails on OSX versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 27 22:18:04 2016 From: report at bugs.python.org (Alex LordThorsen) Date: Sun, 28 Feb 2016 03:18:04 +0000 Subject: [New-bugs-announce] [issue26451] CSV documentation doesn't open with an example Message-ID: <1456629484.08.0.443868302354.issue26451@psf.upfronthosting.co.za> New submission from Alex LordThorsen: Had a friend get stuck on the CSV documentation. They didn't know what a CSV was (to start with) and couldn't find an example that made sense to them. They went to other sources to figure out how to read CSV files in the end. I made this patch in the hope that starting out with a very minimal example file format followed by an example Python read will make landing on the CSV docs easier to follow for new programmers.
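For reference, the sort of minimal opener being argued for could be as small as this (illustrative data; not taken from the attached patch):

```python
import csv
import io

# A tiny CSV document: a header row followed by two data rows.
text = "name,language\nGuido,Python\nDennis,C\n"

# csv.DictReader yields one dict per row, keyed by the header fields.
rows = list(csv.DictReader(io.StringIO(text)))
for row in rows:
    print(row["name"], "writes", row["language"])
```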
---------- assignee: docs at python components: Documentation files: csv_documentation.patch keywords: patch messages: 260960 nosy: Alex.LordThorsen, docs at python priority: normal severity: normal status: open title: CSV documentation doesn't open with an example versions: Python 3.6 Added file: http://bugs.python.org/file42040/csv_documentation.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 28 01:15:49 2016 From: report at bugs.python.org (Greg Price) Date: Sun, 28 Feb 2016 06:15:49 +0000 Subject: [New-bugs-announce] [issue26452] Wrong line number attributed to comprehension expressions Message-ID: <1456640149.75.0.85936592027.issue26452@psf.upfronthosting.co.za> New submission from Greg Price: In a multi-line list comprehension (or dict or set comprehension), the code for the main expression of the comprehension is wrongly attributed to the *last* line of the comprehension, which might be several lines later. This makes for quite baffling tracebacks when an exception occurs -- for example this program: ``` def f(): return [j for i in range(3) if i] f() ``` produces (with CPython from current `default`): ``` Traceback (most recent call last): File "foo.py", line 15, in f() File "foo.py", line 3, in f for i in range(3) File "foo.py", line 4, in if i] NameError: name 'j' is not defined ``` showing the line `if i]`, which has nothing to do with the error and gives very little hint as to where the exception is being raised. Disassembly confirms that the line numbers on the code object are wrong: ``` 2 0 BUILD_LIST 0 3 LOAD_FAST 0 (.0) >> 6 FOR_ITER 18 (to 27) 3 9 STORE_FAST 1 (i) 4 12 LOAD_FAST 1 (i) 15 POP_JUMP_IF_FALSE 6 18 LOAD_GLOBAL 0 (j) 21 LIST_APPEND 2 24 JUMP_ABSOLUTE 6 >> 27 RETURN_VALUE ``` The `LOAD_GLOBAL` instruction for `j` is attributed to line 4, when it should be line 2. 
A similar issue affects multi-line function calls, which get attributed to a line in the last argument. This is less often so seriously confusing because the function called is right there as the next frame down on the stack, but it's much more common and it makes the traceback look a little off -- I've noticed this as a minor annoyance for years, before the more serious comprehension issue got my attention. Historically, line numbers were constrained to be wrong in these ways because the line-number table `co_lnotab` on a code object required its line numbers to increase monotonically -- and the code for the main expression of a comprehension comes after all the `for` and `if` clauses, so it can't get a line number earlier than theirs. Victor Stinner's recent work in https://hg.python.org/cpython/rev/775b74e0e103 lifted that restriction in the `co_lnotab` data structure, so it's now just a matter of actually entering the correct line numbers there. I have a draft patch to do this, attached here. It fixes the issue both for comprehensions and function calls, and includes tests. Things I'd still like to do before considering the patch ready: * There are a couple of bits of logic that I knock out that can probably be simplified further. * While I'm looking at this, there are several other forms of expression and statement that have or probably have similar issues, and I'll want to go and review them too to either fix or determine that they're fine. The ones I've thought of are included in the draft test file, either as actual tests (with their current answers) or TODO comments for me to investigate. Comments very welcome on the issue and my draft patch, and meanwhile I'll continue with the further steps mentioned above. Thanks to Benjamin Peterson for helping diagnose this issue with me when we ran into a confusing traceback that ran through a comprehension. 
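The misattribution can also be observed without dis, by walking the traceback of a failing comprehension to see which source line the expression is charged to (a sketch; the exact line reported depends on the interpreter version):

```python
# Reproduce the report: a multi-line comprehension whose main expression
# raises NameError, then inspect which source line the innermost
# traceback frame attributes the failure to.
src = (
    "def f():\n"
    "    return [j\n"
    "        for i in range(3)\n"
    "        if i]\n"
    "f()\n"
)
lineno = None
try:
    exec(compile(src, "<demo>", "exec"))
except NameError as exc:
    tb = exc.__traceback__
    while tb.tb_next is not None:  # walk to the innermost frame
        tb = tb.tb_next
    lineno = tb.tb_lineno
# Buggy interpreters report the comprehension's last line ('if i]');
# fixed ones report the line of the expression itself ('return [j').
print("failing expression attributed to line", lineno)
```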
---------- components: Interpreter Core files: lines.diff keywords: patch messages: 260966 nosy: Greg Price priority: normal severity: normal status: open title: Wrong line number attributed to comprehension expressions type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42042/lines.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 28 02:16:59 2016 From: report at bugs.python.org (Antony Lee) Date: Sun, 28 Feb 2016 07:16:59 +0000 Subject: [New-bugs-announce] [issue26453] SystemError on invalid numpy.ndarray / Path operation Message-ID: <1456643819.34.0.521319001507.issue26453@psf.upfronthosting.co.za> New submission from Antony Lee: Running ``` from pathlib import Path import numpy as np np.arange(300000) / Path("foo") ``` raises ``` TypeError: argument should be a path or str object, not During handling of the above exception, another exception occurred: SystemError: returned a result with an error set During handling of the above exception, another exception occurred: SystemError: returned a result with an error set During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmp/foo.py", line 3, in np.arange(300000) / Path("foo") File "/usr/lib/python3.5/pathlib.py", line 879, in __rtruediv__ return self._from_parts([key] + self._parts) File "/usr/lib/python3.5/pathlib.py", line 637, in _from_parts self = object.__new__(cls) SystemError: returned a result with an error set ``` Note that this does NOT appear for small arrays; I haven't determined the threshold. Crossposted as https://github.com/numpy/numpy/issues/7360. 
---------- components: Interpreter Core messages: 260967 nosy: Antony.Lee priority: normal severity: normal status: open title: SystemError on invalid numpy.ndarray / Path operation versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 28 08:06:15 2016 From: report at bugs.python.org (yuriy_levchenko) Date: Sun, 28 Feb 2016 13:06:15 +0000 Subject: [New-bugs-announce] [issue26454] add support string that are not inherited from PyStringObject Message-ID: <1456664775.79.0.943467327671.issue26454@psf.upfronthosting.co.za> New submission from yuriy_levchenko: I have my own string object based on COW (https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Copy-on-write). I think I need to add the flag Py_TPFLAGS_STRING_SUBCLASS (https://bugs.python.org/issue26421), but this is only for types based on PyStringObject. ---------- messages: 260974 nosy: yuriy_levchenko priority: normal severity: normal status: open title: add support string that are not inherited from PyStringObject type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 28 14:50:26 2016 From: report at bugs.python.org (Michel Desmoulin) Date: Sun, 28 Feb 2016 19:50:26 +0000 Subject: [New-bugs-announce] [issue26455] Inconsistent behavior with KeyboardInterrupt and asyncio futures Message-ID: <1456689026.99.0.0359024900858.issue26455@psf.upfronthosting.co.za> New submission from Michel Desmoulin: If you trigger KeyboardInterrupt in a coroutine and catch it, the program terminates cleanly: import asyncio async def bar(): raise KeyboardInterrupt loop = asyncio.get_event_loop() try: loop.run_until_complete(bar()) except KeyboardInterrupt: print("It's ok") finally: loop.stop() loop.close() This outputs: It's ok However, if you wrap the coroutine in a Task, you will get a mixed behavior: try: task = asyncio.ensure_future(bar())
loop.run_until_complete(task) except KeyboardInterrupt: print("It's ok") This outputs: It's ok Task exception was never retrieved future: exception=KeyboardInterrupt()> Traceback (most recent call last): File "ki_bug.py", line 10, in loop.run_until_complete(main_future) File "/usr/lib/python3.5/asyncio/base_events.py", line 325, in run_until_complete self.run_forever() File "/usr/lib/python3.5/asyncio/base_events.py", line 295, in run_forever self._run_once() File "/usr/lib/python3.5/asyncio/base_events.py", line 1258, in _run_once handle._run() File "/usr/lib/python3.5/asyncio/events.py", line 125, in _run self._callback(*self._args) File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step result = coro.send(None) File "ki_bug.py", line 5, in bar raise KeyboardInterrupt KeyboardInterrupt We have several contradictory behaviors: the KeyboardInterrupt is raised, and captured by the future (since you can do task.exception() to suppress the stack trace) but also caught by the except clause while the program is allowed to continue, yet still the stack trace is displayed and eventually the program return code will be 0. It's very confusing. ---------- components: asyncio messages: 260984 nosy: Michel Desmoulin, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: Inconsistent behavior with KeyboardInterrupt and asyncio futures type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 01:29:11 2016 From: report at bugs.python.org (Martin Panter) Date: Mon, 29 Feb 2016 06:29:11 +0000 Subject: [New-bugs-announce] [issue26456] import _tkinter + TestForkInThread leaves zombie with stalled thread Message-ID: <1456727351.62.0.83247971741.issue26456@psf.upfronthosting.co.za> New submission from Martin Panter: After running the 2.7 test suite many times, my Linux OS's memory slowly gets eaten up.
It seems to be because of zombie Python processes that never get cleaned up unless I kill them explicitly. I never get this problem with the Python 3 test suite. I narrowed it down to running test_tcl followed by test_thread, and then narrowed it even further to importing _tkinter and running TestForkInThread.test_forkinthread(). Now I have it minimized to the following: $ ./python -c 'import _tkinter, thread, os; thread.start_new_thread(os.fork, ())' A process is left behind listed with the “defunct” or Z (zombie) status. However it has a child thread; maybe this is why it does not automatically get cleaned up. Extract from “htop”: PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 1 root 20 0 35412 4528 3448 S 0.0 0.2 0:01.25 /sbin/init 12615 vadmium 20 0 0 0 0 Z 0.0 0.0 0:00.00 ?? python 12616 vadmium 20 0 142M 5952 2220 S 0.0 0.3 0:00.00 ? ?? ./python -c import _tkinter, thread, os; thread.start_new_thread(os.fork, ()) $ sudo strace -p 12616 Process 12616 attached - interrupt to quit select(4, [3], [], [], NULL^C Process 12616 detached $ ls -l /proc/12616/fd total 0 lrwx------ 1 vadmium users 64 Feb 29 05:57 0 -> /dev/pts/1 lrwx------ 1 vadmium users 64 Feb 29 05:57 1 -> /dev/pts/1 lrwx------ 1 vadmium users 64 Feb 29 05:57 2 -> /dev/pts/1 lr-x------ 1 vadmium users 64 Feb 29 05:57 3 -> pipe:[946176] lr-x------ 1 vadmium users 64 Feb 29 05:57 4 -> pipe:[946321] l-wx------ 1 vadmium users 64 Feb 29 05:57 5 -> pipe:[946176] $ pacman -Q systemd glibc systemd 222-1 glibc 2.22-4 ---------- components: Tests, Tkinter messages: 260993 nosy: martin.panter priority: normal severity: normal status: open title: import _tkinter + TestForkInThread leaves zombie with stalled thread type: resource usage versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 01:29:44 2016 From: report at bugs.python.org (feng liang) Date: Mon, 29 Feb 2016 06:29:44 +0000 Subject:
[New-bugs-announce] [issue26457] Error in ipaddress.address_exclude function Message-ID: <1456727384.52.0.87644034474.issue26457@psf.upfronthosting.co.za> New submission from feng liang: When I ran the example for the ipaddress.address_exclude function from the 3.5.1 documentation: >>> n1 = ip_network('192.0.2.0/28') >>> n2 = ip_network('192.0.2.1/32') >>> list(n1.address_exclude(n2)) I got: Traceback (most recent call last): File "", line 1, in File "C:\Python 3.5\lib\ipaddress.py", line 794, in address_exclude s1, s2 = s1.subnets() ValueError: not enough values to unpack (expected 2, got 1) ---------- components: Library (Lib) messages: 260994 nosy: out priority: normal severity: normal status: open title: Error in ipaddress.address_exclude function type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 05:26:28 2016 From: report at bugs.python.org (Albert Freeman) Date: Mon, 29 Feb 2016 10:26:28 +0000 Subject: [New-bugs-announce] [issue26458] Is the default value assignment of a function parameter evaluated multiple times if it is Parameter=None Message-ID: <1456741588.21.0.674537261973.issue26458@psf.upfronthosting.co.za> New submission from Albert Freeman: def f(a, L=[]): L.append(a) return L Seems to behave differently to def f(a, L=None): L = [] L.append(a) return L Which behaves the same (as far as I noticed) as the below code in the documentation (In the tutorial under 4. More Control Flow Tools) def f(a, L=None): if L is None: L = [] L.append(a) return L I am using CPython 3.5.1, what is the point of "if L is None:" in the lowermost above example? And why is None treated differently to []?
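The difference described above can be demonstrated directly: the `L=[]` default is evaluated once, at function definition time, and then shared between calls, while the `None` sentinel leads to a fresh list per call. A small sketch (hypothetical function names):

```python
def f_shared(a, L=[]):      # default list created once, at 'def' time
    L.append(a)
    return L

def f_fresh(a, L=None):     # 'if L is None' guards a per-call fresh list
    if L is None:
        L = []
    L.append(a)
    return L

print(f_shared(1), f_shared(2))  # the same list object accumulates values
print(f_fresh(1), f_fresh(2))    # two independent single-element lists
```

The point of the `is None` test is that a version which assigns `L = []` unconditionally also makes a fresh list each call, but it silently discards any list the caller passes in; the guard keeps the caller-supplied list usable.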
---------- assignee: docs at python components: Documentation messages: 261000 nosy: docs at python, tocretpa priority: normal severity: normal status: open title: Is the default value assignment of a function parameter evaluated multiple times if it is Parameter=None versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 09:32:02 2016 From: report at bugs.python.org (Maciej Fijalkowski) Date: Mon, 29 Feb 2016 14:32:02 +0000 Subject: [New-bugs-announce] [issue26459] Windows build instructions are very inaccurate Message-ID: <1456756322.44.0.126364143286.issue26459@psf.upfronthosting.co.za> New submission from Maciej Fijalkowski: I've tried following the dev guide (still not successful) to compile a debug version of cpython 2.7 and a couple of issues that I ran into: * The VS2010 vs VS2008 confusion - the docs say "most versions before 3.3 use VS2008" - what does it mean by "most"? The current cpython trunk seems to work only on 2010 (with a variety of fun errors).
* VS2010 is hard to download, as is 2008 - direct links would help * nowhere is it mentioned that you need to run stuff from the VS console * the readme and the devguide disagree on a few points - readme seems to be better, but also not ideal * the docs don't say how to get svn.exe (that is, install tortoiseHG, but then select extra tools from somewhere) * the build seems to require perl, despite claiming it's not needed Other things are misleading too, but fixing all of the above would be a massive step forward ---------- assignee: docs at python components: Documentation messages: 261009 nosy: docs at python, fijall priority: normal severity: normal status: open title: Windows build instructions are very inaccurate versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 13:02:35 2016 From: report at bugs.python.org (Sriram Rajagopalan) Date: Mon, 29 Feb 2016 18:02:35 +0000 Subject: [New-bugs-announce] [issue26460] datetime.strptime without a year fails on Feb 29 Message-ID: <1456768955.02.0.995801288981.issue26460@psf.upfronthosting.co.za> New submission from Sriram Rajagopalan: $ python Python 3.5.1 (default, Dec 7 2015, 12:58:09) [GCC 5.2.0] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> >>> >>> >>> import time >>> >>> time.strptime("Feb 29", "%b %d") time.struct_time(tm_year=1900, tm_mon=2, tm_mday=29, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=60, tm_isdst=-1) >>> >>> >>> import datetime >>> >>> datetime.datetime.strptime("Feb 29", "%b %d") Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.5/_strptime.py", line 511, in _strptime_datetime return cls(*args) ValueError: day is out of range for month The same issue is seen in all versions of Python ---------- components: Library (Lib) messages: 261014 nosy: Sriram Rajagopalan priority: normal severity: normal status: open title: datetime.strptime without a year fails on Feb 29 type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 18:19:10 2016 From: report at bugs.python.org (Maciej Fijalkowski) Date: Mon, 29 Feb 2016 23:19:10 +0000 Subject: [New-bugs-announce] [issue26461] PyInterpreterState_Head(), PyThreadState_Next() etc can't be sanely used Message-ID: <1456787950.96.0.423186176416.issue26461@psf.upfronthosting.co.za> New submission from Maciej Fijalkowski: All the internal uses of this API guard everything with HEAD_LOCK/HEAD_UNLOCK that's not exposed. 
It's not safe to traverse the whole API without holding those locks and those locks are static and module local ---------- messages: 261030 nosy: fijall priority: normal severity: normal status: open title: PyInterpreterState_Head(), PyThreadState_Next() etc can't be sanely used _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 18:58:50 2016 From: report at bugs.python.org (Julien) Date: Mon, 29 Feb 2016 23:58:50 +0000 Subject: [New-bugs-announce] [issue26462] Patch to enhance literal block language declaration Message-ID: <1456790330.71.0.936150839602.issue26462@psf.upfronthosting.co.za> New submission from Julien: Hi, As I don't like warnings, and sphinx-doc was verbose about "Could not parse literal_block as "python3". highlighting skipped.", I fixed most of them. Bonus: It's graphically better, as an example the XML block here: https://docs.python.org/3.5/library/pyexpat.html is now nicely colored: http://www.afpy.org/doc/python/3.5/library/pyexpat.html ---------- assignee: docs at python components: Documentation files: literal_blocks_languages.patch keywords: patch messages: 261035 nosy: docs at python, sizeof priority: normal severity: normal status: open title: Patch to enhance literal block language declaration type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file42050/literal_blocks_languages.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 29 21:50:54 2016 From: report at bugs.python.org (Nicholas Chammas) Date: Tue, 01 Mar 2016 02:50:54 +0000 Subject: [New-bugs-announce] [issue26463] asyncio-related (?) segmentation fault Message-ID: <1456800654.16.0.873933997584.issue26463@psf.upfronthosting.co.za> New submission from Nicholas Chammas: Python 3.5.1, OS X 10.11.3. I have an application that uses asyncio and Cryptography (via the AsyncSSH library). 
Cryptography has some parts written in C, I believe. I'm testing my application by sending a keyboard interrupt while 2 tasks are working. My application doesn't clean up after itself correctly, so I get these warnings about pending tasks being destroyed, but I don't think I should ever be getting segfaults. I am able to consistently get this segfault by interrupting my application at roughly the same point. I'm frankly intimidated by the segfault (it's been many years since I dug into one), but the most likely culprits are either Python or Cryptography since they're the only components of my application that have parts written in C, as far as I know. I'm willing to help boil this down to something more minimal with some help. Right now I just have the repro at this branch of my application (which isn't too helpful for people other than myself): https://github.com/nchammas/flintrock/pull/77 Basically, launch a cluster on EC2, and as soon as one task reports that SSH is online, interrupt Flintrock with Control + C. You'll get this segfault. ---------- components: Macintosh, asyncio files: segfault.txt messages: 261036 nosy: Nicholas Chammas, gvanrossum, haypo, ned.deily, ronaldoussoren, yselivanov priority: normal severity: normal status: open title: asyncio-related (?) segmentation fault type: crash versions: Python 3.5 Added file: http://bugs.python.org/file42051/segfault.txt _______________________________________ Python tracker _______________________________________