From storchaka at gmail.com Tue Mar 1 06:14:01 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 1 Mar 2016 13:14:01 +0200 Subject: [Python-Dev] pickle and copy discrepancy Message-ID:

The pickle and copy modules use the same protocol. They reconstruct an object from the data returned by its __reduce_ex__/__reduce__ method, but they do it in different and incompatible ways. In the general case the result of __reduce__ includes:

1) The class of the object and arguments to __new__().
2) The state passed to __setstate__() (or a dict of attributes and possibly a tuple of __slots__ values).
3) An iterator of list items that should be appended to the object by calling extend() or append().
4) An iterator of key-value pairs that should be set in the object by calling update() or __setitem__().

The difference is that the copy module sets the object's state before adding items and key-value pairs, but the pickle module sets the object's state after adding items and key-value pairs. If append() or __setitem__() depend on the state of the object, pickling is incompatible with copying. The behaviour of copy was changed in issue1100562 [1] (see also issue1099746 [2]), but this caused a problem with other classes (see issue10131 [3]).

Changing either pickle or copy will certainly break existing code. But keeping the current discrepancy leaves existing code incorrect and makes it too hard to write correct code that works with both pickle and copy. Most existing code for which this matters is probably already incorrect, and the behaviour of the default reducing method for dict/list subclasses is not documented [4]. We should choose in which direction to break backward compatibility.

The behaviour of the copy module looks more natural: it lets most naive implementations (as in [2]) work correctly. The pickle module is more widely used, so breaking it could cause more harm. On the other hand, the order of object reconstruction is determined at pickling time, so already-pickled data will still be reconstructed in the old order; the change would only affect new pickles.

[1] http://bugs.python.org/issue1100562
[2] http://bugs.python.org/issue1099746
[3] http://bugs.python.org/issue10131
[4] http://bugs.python.org/issue4712

From cournape at gmail.com Tue Mar 1 06:37:01 2016 From: cournape at gmail.com (David Cournapeau) Date: Tue, 1 Mar 2016 11:37:01 +0000 Subject: [Python-Dev] PEP 514: Python environment registration in the Windows Registry In-Reply-To: <56B65F0C.2070403@python.org> References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> Message-ID:

Hi Steve,

I have looked into this PEP to see what we need to do on the Enthought side of things. I have a few questions:

1. Is it recommended to follow this for any Python version we may provide, or just for new versions (3.6 and above)? Most of our customers still heavily use 2.7, and I wonder whether backporting this to 2.7 would cause more trouble than it is worth.

2. The main issue for us in practice has been the `PythonPath` entry as used to build `sys.path`. I understand this is not the point of the PEP, but would it make sense to give more precise recommendations for 3rd party providers there?

IIUC, PEP 514 would recommend that we do the following:

1. Use HKLM for "system install" or HKCU for "user install" as the root key
2. Register under "\Software\Python\Enthought"
3. We should patch our pythons to look in 2. and not in "\Software\Python\PythonCore", especially for `sys.path` construction.
4.
When a python from enthought is installed, it should never register anything in the key defined in 2. Is this correct ? I am not clear about 3., especially on what should be changed. I know that for 2.7, we need to change PC\getpathp.c for sys.path, but are there any other places where the registry is used by python itself ? Thanks for working on this, David On Sat, Feb 6, 2016 at 9:01 PM, Steve Dower wrote: > I've posted an updated version of this PEP that should soon be visible at > https://www.python.org/dev/peps/pep-0514. > > Leaving aside the fact that the current implementation of Python relies on > *other* information in the registry (that is not specified in this PEP), > I'm still looking for feedback or concerns from developers who are likely > to create or use the keys that are described here. > > ---------------- > > PEP: 514 > Title: Python registration in the Windows registry > Version: $Revision$ > Last-Modified: $Date$ > Author: Steve Dower > Status: Draft > Type: Informational > Content-Type: text/x-rst > Created: 02-Feb-2016 > Post-History: 02-Feb-2016 > > Abstract > ======== > > This PEP defines a schema for the Python registry key to allow third-party > installers to register their installation, and to allow applications to > detect > and correctly display all Python environments on a user's machine. No > implementation changes to Python are proposed with this PEP. > > Python environments are not required to be registered unless they want to > be > automatically discoverable by external tools. > > The schema matches the registry values that have been used by the official > installer since at least Python 2.5, and the resolution behaviour matches > the > behaviour of the official Python releases. > > Motivation > ========== > > When installed on Windows, the official Python installer creates a > registry key > for discovery and detection by other applications. This allows tools such > as > installers or IDEs to automatically detect and display a user's Python > installations. > > Third-party installers, such as those used by distributions, typically > create > identical keys for the same purpose. Most tools that use the registry to > detect > Python installations only inspect the keys used by the official installer. > As a > result, third-party installations that wish to be discoverable will > overwrite > these values, resulting in users "losing" their Python installation. > > By describing a layout for registry keys that allows third-party > installations > to register themselves uniquely, as well as providing tool developers > guidance > for discovering all available Python installations, these collisions > should be > prevented. > > Definitions > =========== > > A "registry key" is the equivalent of a file-system path into the > registry. Each > key may contain "subkeys" (keys nested within keys) and "values" (named and > typed attributes attached to a key). > > ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in > user, > and this user can generally read and write all settings under this root. > > ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, > any > user can read these settings but only administrators can modify them. It is > typical for values under ``HKEY_CURRENT_USER`` to take precedence over > those in > ``HKEY_LOCAL_MACHINE``. 
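As an illustration of that precedence rule, a discovery tool might enumerate both roots along the following lines. This is only a rough sketch (not part of the PEP) using the stdlib winreg module, and it ignores the Wow6432Node view described next:

    import winreg

    _ROOTS = [
        (winreg.HKEY_LOCAL_MACHINE, "machine"),
        (winreg.HKEY_CURRENT_USER, "user"),    # read last, so it wins
    ]

    def _subkeys(key):
        # Yield the names of all subkeys of an open registry key.
        index = 0
        while True:
            try:
                yield winreg.EnumKey(key, index)
            except OSError:
                return
            index += 1

    def discover_environments():
        # Map (Company, Tag) pairs to the scope they were found in,
        # letting per-user entries shadow per-machine ones.
        environments = {}
        for root, scope in _ROOTS:
            try:
                python_key = winreg.OpenKey(root, r"Software\Python")
            except OSError:
                continue
            with python_key:
                for company in _subkeys(python_key):
                    with winreg.OpenKey(python_key, company) as company_key:
                        for tag in _subkeys(company_key):
                            environments[(company, tag)] = scope
        return environments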
> > On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a > special key > that 32-bit processes transparently read and write to rather than > accessing the > ``Software`` key directly. > > Structure > ========= > > We consider there to be a single collection of Python environments on a > machine, > where the collection may be different for each user of the machine. There > are > three potential registry locations where the collection may be stored > based on > the installation options of each environment:: > > HKEY_CURRENT_USER\Software\Python\\ > HKEY_LOCAL_MACHINE\Software\Python\\ > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\\ > > Environments are uniquely identified by their Company-Tag pair, with two > options > for conflict resolution: include everything, or give priority to user > preferences. > > Tools that include every installed environment, even where the Company-Tag > pairs > match, should ensure users can easily identify whether the registration was > per-user or per-machine. > > Tools that give priority to user preferences must ignore values from > ``HKEY_LOCAL_MACHINE`` when a matching Company-Tag pair exists is in > ``HKEY_CURRENT_USER``. > > Official Python releases use ``PythonCore`` for Company, and the value of > ``sys.winver`` for Tag. Other registered environments may use any values > for > Company and Tag. Recommendations are made in the following sections. > > Python environments are not required to register themselves unless they > want to > be automatically discoverable by external tools. > > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit > builds in > ``sys.winver``. As a result, it is possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters. > > To ensure backwards compatibility, applications should treat environments > listed > under the following two registry keys as distinct, even when the Tag > matches:: > > HKEY_LOCAL_MACHINE\Software\Python\PythonCore\ > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\ > > Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct > from > both of the above keys, potentially resulting in three environments > discovered > using the same Tag. Alternatively, a tool may determine whether the > per-user > environment is 64-bit or 32-bit and give it priority over the per-machine > environment, resulting in a maximum of two discovered environments. > > It is not possible to detect side-by-side installations of both 64-bit and > 32-bit versions of Python prior to 3.5 when they have been installed for > the > current user. Python 3.5 and later always uses different Tags for 64-bit > and > 32-bit versions. > > Environments registered under other Company names must use distinct Tags to > support side-by-side installations. There is no backwards compatibility > allowance. > > Company > ------- > > The Company part of the key is intended to group related environments and > to > ensure that Tags are namespaced appropriately. The key name should be > alphanumeric without spaces and likely to be unique. For example, a > trademarked > name, a UUID, or a hostname would be appropriate:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 > HKEY_CURRENT_USER\Software\Python\www.example.com > > The company name ``PyLauncher`` is reserved for the PEP 397 launcher > (``py.exe``). 
It does not follow this convention and should be ignored by > tools. > > If a string value named ``DisplayName`` exists, it should be used to > identify > the environment category to users. Otherwise, the name of the key should be > used. > > If a string value named ``SupportUrl`` exists, it may be displayed or > otherwise > used to direct users to a web site related to the environment. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > (Default) = (value not set) > DisplayName = "Example Corp" > SupportUrl = "http://www.example.com" > > Tag > --- > > The Tag part of the key is intended to uniquely identify an environment > within > those provided by a single company. The key name should be alphanumeric > without > spaces and stable across installations. For example, the Python language > version, a UUID or a partial/complete hash would be appropriate; an integer > counter that increases for each new environment may not:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 > > If a string value named ``DisplayName`` exists, it should be used to > identify > the environment to users. Otherwise, the name of the key should be used. > > If a string value named ``SupportUrl`` exists, it may be displayed or > otherwise > used to direct users to a web site related to the environment. > > If a string value named ``Version`` exists, it should be used to identify > the > version of the environment. This is independent from the version of Python > implemented by the environment. > > If a string value named ``SysVersion`` exists, it must be in ``x.y`` or > ``x.y.z`` format matching the version returned by ``sys.version_info`` in > the > interpreter. Otherwise, if the Tag matches this format it is used. If not, > the > Python version is unknown. > > Note that each of these values is recommended, but optional. A complete > example > may look like this:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 > (Default) = (value not set) > DisplayName = "Distro 3" > SupportUrl = "http://www.example.com/distro-3" > Version = "3.0.12345.0" > SysVersion = "3.6.0" > > InstallPath > ----------- > > Beneath the environment key, an ``InstallPath`` key must be created. This > key is > always named ``InstallPath``, and the default value must match > ``sys.prefix``:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath > (Default) = "C:\ExampleCorpPy36" > > If a string value named ``ExecutablePath`` exists, it must be a path to the > ``python.exe`` (or equivalent) executable. Otherwise, the interpreter > executable > is assumed to be called ``python.exe`` and exist in the directory > referenced by > the default value. > > If a string value named ``WindowedExecutablePath`` exists, it must be a > path to > the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed > interpreter executable is assumed to be called ``pythonw.exe`` and exist > in the > directory referenced by the default value. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\InstallPath > (Default) = "C:\ExampleDistro30" > ExecutablePath = "C:\ExampleDistro30\ex_python.exe" > WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" > > Help > ---- > > Beneath the environment key, a ``Help`` key may be created. This key is > always > named ``Help`` if present and has no default value. 
> > Each subkey of ``Help`` specifies a documentation file, tool, or URL > associated > with the environment. The subkey may have any name, and the default value > is a > string appropriate for passing to ``os.startfile`` or equivalent. > > If a string value named ``DisplayName`` exists, it should be used to > identify > the help file to users. Otherwise, the key name should be used. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help > Python\ > (Default) = "C:\ExampleDistro30\python36.chm" > DisplayName = "Python Documentation" > Extras\ > (Default) = "http://www.example.com/tutorial" > DisplayName = "Example Distro Online Tutorial" > > Other Keys > ---------- > > Some other registry keys are used for defining or inferring search paths > under > certain conditions. A third-party installation is permitted to define > these keys > under their Company-Tag key, however, the interpreter must be modified and > rebuilt in order to read these values. > > Copyright > ========= > > This document has been placed in the public domain. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/cournape%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Mar 1 08:24:08 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 1 Mar 2016 13:24:08 +0000 Subject: [Python-Dev] PEP 514: Python environment registration in the Windows Registry In-Reply-To: References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> Message-ID: On 1 March 2016 at 11:37, David Cournapeau wrote: > I am not clear about 3., especially on what should be changed. I know that > for 2.7, we need to change PC\getpathp.c for sys.path, but are there any > other places where the registry is used by python itself ? My understanding from the earlier discussion was that you should not patch Python at all. The sys.path building via PythonPath is not covered by the PEP and you should continue as at present. The new keys are all for informational purposes - your installer should write to them, and read them if looking for your installations. But the Python interpreter itself should not know or care about your new keys. Steve can probably clarify better than I can, but that's how I recall it being intended to work. Paul From ethan at stoneleaf.us Tue Mar 1 11:34:09 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 01 Mar 2016 08:34:09 -0800 Subject: [Python-Dev] pickle and copy discrepancy In-Reply-To: References: Message-ID: <56D5C481.4020406@stoneleaf.us> On 03/01/2016 03:14 AM, Serhiy Storchaka wrote: > The difference is that the copy module sets object's state before adding > items and key-value pairs, but the pickle module sets object's state > after adding items and key-value pairs. If append() or __setitem__() > depend on the state of the object, the pickling is incompatible with the > copying. Aren't there tests to ensure the unpickled/copied object are identical to the original object? Under which circumstances would they be different? 
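For example, would a (purely hypothetical) class like the following be affected, where __setitem__ consults state that __init__ and __setstate__ are expected to provide?

    import copy, pickle

    class LimitedDict(dict):
        def __init__(self, maxsize):
            super().__init__()
            self.maxsize = maxsize

        def __setitem__(self, key, value):
            if len(self) >= self.maxsize:      # needs self.maxsize to exist
                raise ValueError("dict is full")
            super().__setitem__(key, value)

    d = LimitedDict(10)
    d["spam"] = 42

    copy.copy(d)                   # works: copy restores the state first
    pickle.loads(pickle.dumps(d))  # AttributeError, because pickle replays
                                   # the items before setting the state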
-- ~Ethan~ From steve.dower at python.org Tue Mar 1 12:46:30 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 1 Mar 2016 09:46:30 -0800 Subject: [Python-Dev] PEP 514: Python environment registration in the Windows Registry In-Reply-To: References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> Message-ID: <56D5D576.9010307@python.org> On 01Mar2016 0524, Paul Moore wrote: > On 1 March 2016 at 11:37, David Cournapeau wrote: >> I am not clear about 3., especially on what should be changed. I know that >> for 2.7, we need to change PC\getpathp.c for sys.path, but are there any >> other places where the registry is used by python itself ? > > My understanding from the earlier discussion was that you should not > patch Python at all. The sys.path building via PythonPath is not > covered by the PEP and you should continue as at present. The new keys > are all for informational purposes - your installer should write to > them, and read them if looking for your installations. But the Python > interpreter itself should not know or care about your new keys. > > Steve can probably clarify better than I can, but that's how I recall > it being intended to work. > Paul Yes, the intention was to not move sys.path building out of the PythonCore key. It's solely about discovery by external tools. If you want to patch your own distribution to move the paths you are welcome to do that - there is only one string literal in getpathp.c that needs to be updated - but it's not a requirement and I deliberately avoided making a recommendation either way. (Though as discussed earlier in the thread, I'm very much in favour of deprecating and removing any use of the registry by the runtime itself in 3.6+, but still working out the implications of that.) Cheers, Steve From cournape at gmail.com Tue Mar 1 14:37:45 2016 From: cournape at gmail.com (David Cournapeau) Date: Tue, 1 Mar 2016 19:37:45 +0000 Subject: [Python-Dev] PEP 514: Python environment registration in the Windows Registry In-Reply-To: <56D5D576.9010307@python.org> References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> Message-ID: On Tue, Mar 1, 2016 at 5:46 PM, Steve Dower wrote: > On 01Mar2016 0524, Paul Moore wrote: > >> On 1 March 2016 at 11:37, David Cournapeau wrote: >> >>> I am not clear about 3., especially on what should be changed. I know >>> that >>> for 2.7, we need to change PC\getpathp.c for sys.path, but are there any >>> other places where the registry is used by python itself ? >>> >> >> My understanding from the earlier discussion was that you should not >> patch Python at all. The sys.path building via PythonPath is not >> covered by the PEP and you should continue as at present. The new keys >> are all for informational purposes - your installer should write to >> them, and read them if looking for your installations. But the Python >> interpreter itself should not know or care about your new keys. >> >> Steve can probably clarify better than I can, but that's how I recall >> it being intended to work. >> Paul >> > > Yes, the intention was to not move sys.path building out of the PythonCore > key. It's solely about discovery by external tools. > Right. 
For us, continuing to populate sys.path from the registry "owned" by the python.org official installers is more and more untenable, because every distribution writes there, and this is especially problematic when you have both 32-bit and 64-bit distributions on the same machine.

> If you want to patch your own distribution to move the paths you are > welcome to do that - there is only one string literal in getpathp.c that > needs to be updated - but it's not a requirement and I deliberately avoided > making a recommendation either way. (Though as discussed earlier in the > thread, I'm very much in favour of deprecating and removing any use of the > registry by the runtime itself in 3.6+, but still working out the > implications of that.)

Great, I just wanted to make sure removing it ourselves does not put us in a corner or further away from where Python itself is going.

Would it make sense to indicate in the PEP that doing so is allowed (neither recommended nor frowned upon)?

David

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From steve.dower at python.org Tue Mar 1 15:25:57 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 1 Mar 2016 12:25:57 -0800 Subject: [Python-Dev] PEP 514: Python environment registration in the Windows Registry In-Reply-To: References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> Message-ID: <56D5FAD5.2030904@python.org>

On 01Mar2016 1137, David Cournapeau wrote:
> If you want to patch your own distribution to move the paths you are > welcome to do that - there is only one string literal in getpathp.c > that needs to be updated - but it's not a requirement and I > deliberately avoided making a recommendation either way. (Though as > discussed earlier in the thread, I'm very much in favour of > deprecating and removing any use of the registry by the runtime > itself in 3.6+, but still working out the implications of that.)
>
> Great, I just wanted to make sure removing it ourselves does not put us in > a corner or further away from where Python itself is going.
>
> Would it make sense to indicate in the PEP that doing so is allowed > (neither recommended nor frowned upon)?

I was hoping not, but I suspect the question will come up again, so best to address it once in the official doc.

Cheers, Steve

From storchaka at gmail.com Wed Mar 2 04:15:39 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 2 Mar 2016 11:15:39 +0200 Subject: [Python-Dev] pickle and copy discrepancy In-Reply-To: <56D5C481.4020406@stoneleaf.us> References: <56D5C481.4020406@stoneleaf.us> Message-ID:

On 01.03.16 18:34, Ethan Furman wrote:
> On 03/01/2016 03:14 AM, Serhiy Storchaka wrote:
>> The difference is that the copy module sets object's state before adding >> items and key-value pairs, but the pickle module sets object's state >> after adding items and key-value pairs. If append() or __setitem__() >> depend on the state of the object, the pickling is incompatible with the >> copying.
>
> Aren't there tests to ensure the unpickled/copied object are identical > to the original object?

We have no pickle/copy tests for every class. And of course we can't test third-party classes. But even if we write tests and they fail, what do we do? The problem is that for some classes pickle and copy contradict each other: an implementation that works with copy doesn't work with pickle, or vice versa.

> Under which circumstances would they be different?
If append() or __setitem__() depend on the state or change the state. See examples in issue1099746 and issue10131. From barry at python.org Wed Mar 2 10:02:55 2016 From: barry at python.org (Barry Warsaw) Date: Wed, 2 Mar 2016 10:02:55 -0500 Subject: [Python-Dev] Accepted: PEP 493 (HTTPS verification migration tools for Python 2.7) Message-ID: <20160302100255.69c3029e@subdivisions.wooz.org> As BDFL-Delegate, I am officially accepting PEP 493. Congratulations Nick, Robert, and MAL. I want to personally thank Nick for taking my concerns into consideration, and re-working the PEP to resolve them. Thanks also to everyone who contributed to the discussion. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From aixtools at gmail.com Wed Mar 2 06:50:15 2016 From: aixtools at gmail.com (Michael Felt) Date: Wed, 02 Mar 2016 12:50:15 +0100 Subject: [Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values In-Reply-To: <56D5FAD5.2030904@python.org> References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> <56D5FAD5.2030904@python.org> Message-ID: <56D6D377.5000707@gmail.com> Hello all, 1) There are many lists to choose from - if this is the wrong one for questions about packaging - please forgive me, and point me in the right direction. 2) Normally, I have just packaged python, and then moved on. However, recently I have been asked to help with packaging an 'easier to install' python by people using cloud-init, and more recently people wanting to use salt-stack (on AIX). FYI: I have been posting about my complete failure to build 2.7.11 ( http://bugs.python.org/issue26466) - so, what I am testing is based on 2.7.10 - which built easily for me. Going through the 'base documentation' I saw a reference to both sys.argv and sys.path. atm, I am looking for an easy way to get the program name (e.g., /opt/bin/python, versus ./python). I have my reasons (basically, looking for a compiled-in library search path to help with http://bugs.python.org/issue26439) Looking on two platforms (AIX, my build, and debian for power) I am surprised that sys.argv is empty in both cases, and sys.path returns /opt/lib/python27.zip with AIX, but not with debian. root at x064:[/data/prj/aixtools/python/python-2.7.10]/opt/bin/python Python 2.7.10 (default, Nov 3 2015, 14:36:51) [C] on aix5 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.argv [''] >>> sys.path ['', '/opt/lib/python27.zip', '/opt/lib/python2.7', '/opt/lib/python2.7/plat-aix5', '/opt/lib/python2.7/lib-tk', '/opt/lib/python2.7/lib-old', '/opt/lib/python2.7/lib-dynload', '/opt/lib/python2.7/site-packages'] michael at ipv4:~$ python Python 2.7.9 (default, Mar 1 2015, 13:01:00) [GCC 4.9.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import sys >>> sys.argv [''] >>> sys.path ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-powerpc-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7'] And I guess I would be interested in getting '/opt/lib/python2.7/dist-packages' in there as well (or learn a way to later add it for pre-compiled packages such as cloud-init AND that those would also look 'first' in /opt/lib/python2.7/dist-packages/cloud-init for modules added to support cloud-init - should I so choose (mainly in case of compatibility issues between say cloud-init and salt-stack that have common modules BUT may have conflicts) - Hopefully never needed for that reason, but it might also simplify packaging applications that depend on python. Many thanks for your time and pointers into the documentation, It is a bit daunting :) Michael From ericsnowcurrently at gmail.com Wed Mar 2 12:10:50 2016 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 2 Mar 2016 10:10:50 -0700 Subject: [Python-Dev] Accepted: PEP 493 (HTTPS verification migration tools for Python 2.7) In-Reply-To: <20160302100255.69c3029e@subdivisions.wooz.org> References: <20160302100255.69c3029e@subdivisions.wooz.org> Message-ID: On Wed, Mar 2, 2016 at 8:02 AM, Barry Warsaw wrote: > As BDFL-Delegate, I am officially accepting PEP 493. > > Congratulations Nick, Robert, and MAL. I want to personally thank Nick for > taking my concerns into consideration, and re-working the PEP to resolve > them. Thanks also to everyone who contributed to the discussion. Yeah, congrats! And thanks for taking on something that (in my mind) isn't the most exciting thing to work on. :) -eric From brett at python.org Wed Mar 2 12:19:46 2016 From: brett at python.org (Brett Cannon) Date: Wed, 02 Mar 2016 17:19:46 +0000 Subject: [Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values In-Reply-To: <56D6D377.5000707@gmail.com> References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> <56D5FAD5.2030904@python.org> <56D6D377.5000707@gmail.com> Message-ID: On Wed, 2 Mar 2016 at 09:12 Michael Felt wrote: > Hello all, > > 1) There are many lists to choose from - if this is the wrong one for > questions about packaging - please forgive me, and point me in the right > direction. > So in this instance you're after python-list since this is a general support question. But since I have an answer for you... > > 2) Normally, I have just packaged python, and then moved on. However, > recently I have been asked to help with packaging an 'easier to install' > python by people using cloud-init, and more recently people wanting to > use salt-stack (on AIX). > > FYI: I have been posting about my complete failure to build 2.7.11 ( > http://bugs.python.org/issue26466) - so, what I am testing is based on > 2.7.10 - which built easily for me. > > Going through the 'base documentation' I saw a reference to both > sys.argv and sys.path. atm, I am looking for an easy way to get the > program name (e.g., /opt/bin/python, versus ./python). 
> I have my reasons (basically, looking for a compiled-in library search > path to help with http://bugs.python.org/issue26439) > https://docs.python.org/2.7/library/sys.html#sys.executable > > Looking on two platforms (AIX, my build, and debian for power) I am > surprised that sys.argv is empty in both cases, and sys.path returns > /opt/lib/python27.zip with AIX, but not with debian. > Did you actually build your version of Python on Debian? If not then do realize that Debian patches their version of CPython, so it wouldn't shock me if they stripped out the code that adds the zip file to sys.path. IOW don't trust the pre-installed CPython to act the same as one that is built from source. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at python.org Wed Mar 2 12:45:16 2016 From: thomas at python.org (Thomas Wouters) Date: Wed, 2 Mar 2016 09:45:16 -0800 Subject: [Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values In-Reply-To: <56D6D377.5000707@gmail.com> References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> <56D5FAD5.2030904@python.org> <56D6D377.5000707@gmail.com> Message-ID: On Wed, Mar 2, 2016 at 3:50 AM, Michael Felt wrote: > Hello all, > > 1) There are many lists to choose from - if this is the wrong one for > questions about packaging - please forgive me, and point me in the right > direction. > It's hard to say where this belongs best, but python-list would probably have done as well. > > 2) Normally, I have just packaged python, and then moved on. However, > recently I have been asked to help with packaging an 'easier to install' > python by people using cloud-init, and more recently people wanting to use > salt-stack (on AIX). > > FYI: I have been posting about my complete failure to build 2.7.11 ( > http://bugs.python.org/issue26466) - so, what I am testing is based on > 2.7.10 - which built easily for me. > > Going through the 'base documentation' I saw a reference to both sys.argv > and sys.path. atm, I am looking for an easy way to get the program name > (e.g., /opt/bin/python, versus ./python). > I have my reasons (basically, looking for a compiled-in library search > path to help with http://bugs.python.org/issue26439) > I think the only way to get at the compiled-in search path is to recreate it based on the compiled-in prefix, which you can get through distutils. Python purposely only uses the compiled-in path as the last resort. Instead, it searches for its home relative to the executable and adds a set of directories relative to its home (if they exist). It's not clear to me why you're focusing on these differences, as (as I describe below) they are immaterial. > Looking on two platforms (AIX, my build, and debian for power) I am > surprised that sys.argv is empty in both cases, and sys.path returns > /opt/lib/python27.zip with AIX, but not with debian. > When you run python interactively, sys.argv[0] will be '', yes. Since you're not launching a program, there's nothing else to set it to. 'python' (or the path to the executable) wouldn't be the right thing to set it to, because python itself isn't a Python program :) The actual python executable is sys.executable, not sys.argv[0], but you shouldn't usually care about that, either. If you want to know where to install things, distutils is the thing to use. 
If you want to know where Python thinks it's installed (for debugging purposes only, really), sys.prefix will tell you. > > root at x064:[/data/prj/aixtools/python/python-2.7.10]/opt/bin/python > Python 2.7.10 (default, Nov 3 2015, 14:36:51) [C] on aix5 > Type "help", "copyright", "credits" or "license" for more information. > >>> import sys > >>> sys.argv > [''] > >>> sys.path > ['', '/opt/lib/python27.zip', '/opt/lib/python2.7', > '/opt/lib/python2.7/plat-aix5', '/opt/lib/python2.7/lib-tk', > '/opt/lib/python2.7/lib-old', '/opt/lib/python2.7/lib-dynload', > '/opt/lib/python2.7/site-packages'] > > michael at ipv4:~$ python > Python 2.7.9 (default, Mar 1 2015, 13:01:00) > [GCC 4.9.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import sys > >>> sys.argv > [''] > >>> sys.path > ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-powerpc-linux-gnu', > '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', > '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', > '/usr/lib/python2.7/dist-packages', > '/usr/lib/python2.7/dist-packages/PILcompat', > '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7'] > In sys.path, you're seeing the difference between a vanilla Python and Debian's patched Python. Vanilla Python adds $prefix/lib/python27.zip to sys.path unconditionally, whereas Debian removes it when it doesn't exist. Likewise, the dist-packages directory is a local modification by Debian; in vanilla Python it's called 'site-packages' instead. The subdirectories in dist-packages that you see in the Debian case are added by .pth files installed in $prefix -- third-party packages, in other words, adding their own directories to the module search path. > > And I guess I would be interested in getting > '/opt/lib/python2.7/dist-packages' in there as well (or learn a way to > later add it for pre-compiled packages such as cloud-init AND that those > would also look 'first' in /opt/lib/python2.7/dist-packages/cloud-init for > modules added to support cloud-init - should I so choose (mainly in case of > compatibility issues between say cloud-init and salt-stack that have common > modules BUT may have conflicts) - Hopefully never needed for that reason, > but it might also simplify packaging applications that depend on python. > A vanilla Python (or non-Debian-built python, even) has no business looking in dist-packages. It should just use site-packages. Third-party packages shouldn't care whether they're installed in site-packages or dist-packages, and instead should use distutils one way or another (if not by having an actual setup.py that uses distutils or setuptools, then at least by querying distutils for the installation directory the way python-config does). > > Many thanks for your time and pointers into the documentation, It is a bit > daunting :) > > Michael > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/thomas%40python.org > -- Thomas Wouters Hi! I'm an email virus! Think twice before sending your email to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
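Concretely, that query is only a few lines (a minimal sketch of the sort of lookup python-config performs):

    import sys
    from distutils import sysconfig

    print(sys.executable)                             # the interpreter binary itself
    print(sys.prefix)                                 # where Python thinks it is installed
    print(sysconfig.get_python_lib())                 # pure-Python package directory
    print(sysconfig.get_python_lib(plat_specific=1))  # directory for compiled extensions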
URL: From ncoghlan at gmail.com Thu Mar 3 02:56:58 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Mar 2016 17:56:58 +1000 Subject: [Python-Dev] Accepted: PEP 493 (HTTPS verification migration tools for Python 2.7) In-Reply-To: References: <20160302100255.69c3029e@subdivisions.wooz.org> Message-ID: On 3 March 2016 at 03:10, Eric Snow wrote: > On Wed, Mar 2, 2016 at 8:02 AM, Barry Warsaw wrote: >> As BDFL-Delegate, I am officially accepting PEP 493. >> >> Congratulations Nick, Robert, and MAL. I want to personally thank Nick for >> taking my concerns into consideration, and re-working the PEP to resolve >> them. Thanks also to everyone who contributed to the discussion. > > Yeah, congrats! And thanks for taking on something that (in my mind) > isn't the most exciting thing to work on. :) I got to spend work time on it, which greatly eases the pain :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Mar 3 02:58:31 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Mar 2016 17:58:31 +1000 Subject: [Python-Dev] Accepted: PEP 493 (HTTPS verification migration tools for Python 2.7) In-Reply-To: <20160302100255.69c3029e@subdivisions.wooz.org> References: <20160302100255.69c3029e@subdivisions.wooz.org> Message-ID: On 3 March 2016 at 01:02, Barry Warsaw wrote: > As BDFL-Delegate, I am officially accepting PEP 493. > > Congratulations Nick, Robert, and MAL. I want to personally thank Nick for > taking my concerns into consideration, and re-working the PEP to resolve > them. Thanks also to everyone who contributed to the discussion. Thank you! I'll commit the reference implementation after I've added documentation and given folks a chance to take a look at that. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jonathan.qin at cname.org.cn Thu Mar 3 10:52:11 2016 From: jonathan.qin at cname.org.cn (Jonathan Qin) Date: Thu, 3 Mar 2016 23:52:11 +0800 Subject: [Python-Dev] Confirm: About python Registration Message-ID: <20160303235218614127@cname.org.cn> Dear Manager, (Please forward this to your CEO, because this is urgent. Thanks!) This is Jonathan Qin---the manager of domain name registration and solution center in China. On February 29th, 2016, we received an application from Baiyao Holdings Ltd requested ?python? as their internet keyword and China (CN) domain names. But after checking it, we find this name conflict with your company name or trademark. In order to deal with this matter better, it?s necessary to send an email to you and confirm whether this company is your distributor or business partner in China? Kind regards Jonathan Qin Manager Cname (Head Office) 8008, Tianan Building, No. 1399 Jinqiao Road, Shanghai 200120 Tel: +86-21-6191-8696 Mobile: +86-1377-4400-340 Fax:+86-21-6191-8697 www.cname.org.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Mar 4 12:08:37 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 4 Mar 2016 18:08:37 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160304170837.2AF0A56678@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-02-26 - 2016-03-04) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5435 ( -2) closed 32802 (+38) total 38237 (+36) Open issues with patches: 2370 Issues opened (26) ================== #19450: Bug in sqlite in Windows binaries http://bugs.python.org/issue19450 reopened by schlamar #25910: Fixing links in documentation http://bugs.python.org/issue25910 reopened by georg.brandl #26434: multiprocessing cannot spawn grandchild from a Windows service http://bugs.python.org/issue26434 reopened by schlamar #26446: Mention in the devguide that core dev stuff falls under the PS http://bugs.python.org/issue26446 opened by brett.cannon #26448: dis.findlabels ignores EXTENDED_ARG http://bugs.python.org/issue26448 opened by eric.fahlgren #26449: Tutorial on Python Scopes and Namespaces uses confusing 'read- http://bugs.python.org/issue26449 opened by mjpieters #26450: make html fails on OSX http://bugs.python.org/issue26450 opened by Alex.LordThorsen #26451: CSV documentation doesn't open with an example http://bugs.python.org/issue26451 opened by Alex.LordThorsen #26452: Wrong line number attributed to comprehension expressions http://bugs.python.org/issue26452 opened by Greg Price #26454: add support string that are not inherited from PyStringObject http://bugs.python.org/issue26454 opened by yuriy_levchenko #26455: Inconsistent behavior with KeyboardInterrupt and asyncio futur http://bugs.python.org/issue26455 opened by Michel Desmoulin #26456: import _tkinter + TestForkInThread leaves zombie with stalled http://bugs.python.org/issue26456 opened by martin.panter #26459: Windows build instructions are very inaccurate http://bugs.python.org/issue26459 opened by fijall #26460: datetime.strptime without a year fails on Feb 29 http://bugs.python.org/issue26460 opened by Sriram Rajagopalan #26461: PyInterpreterState_Head(), PyThreadState_Next() etc can't be s http://bugs.python.org/issue26461 opened by fijall #26462: Patch to enhance literal block language declaration http://bugs.python.org/issue26462 opened by sizeof #26465: Upgrade OpenSSL shipped with python installers http://bugs.python.org/issue26465 opened by alex #26466: could not build python 2.7.11 on AIX http://bugs.python.org/issue26466 opened by Michael.Felt #26467: Add async magic method support to unittest.mock.Mock http://bugs.python.org/issue26467 opened by brett.cannon #26468: shutil.copy2 raises OSError if filesystem doesn't support chmo http://bugs.python.org/issue26468 opened by Vojt??ch Pachol #26470: Make OpenSSL module compatible with OpenSSL 1.1.0 http://bugs.python.org/issue26470 opened by christian.heimes #26471: load_verify_locations(cadata) should load AUX ASN.1 to support http://bugs.python.org/issue26471 opened by christian.heimes #26475: Misleading debugging output for verbose regular expressions http://bugs.python.org/issue26475 opened by serhiy.storchaka #26477: typing forward references and module attributes http://bugs.python.org/issue26477 opened by mjpieters #26479: Init documentation typo "may be return" > "may NOT be returned http://bugs.python.org/issue26479 opened by Samuel Colvin #26480: add a flag that will not give the set a sys.stdin http://bugs.python.org/issue26480 opened by yuriy_levchenko Most recent 15 issues with no replies (15) ========================================== #26480: add a flag that will not give the set a sys.stdin http://bugs.python.org/issue26480 #26475: Misleading debugging output for verbose regular expressions http://bugs.python.org/issue26475 #26471: load_verify_locations(cadata) should load AUX ASN.1 to support 
http://bugs.python.org/issue26471 #26467: Add async magic method support to unittest.mock.Mock http://bugs.python.org/issue26467 #26455: Inconsistent behavior with KeyboardInterrupt and asyncio futur http://bugs.python.org/issue26455 #26454: add support string that are not inherited from PyStringObject http://bugs.python.org/issue26454 #26452: Wrong line number attributed to comprehension expressions http://bugs.python.org/issue26452 #26451: CSV documentation doesn't open with an example http://bugs.python.org/issue26451 #26441: email.charset: to_splittable and from_splittable are not there http://bugs.python.org/issue26441 #26433: urllib.urlencode() does not explain how to handle unicode http://bugs.python.org/issue26433 #26418: multiprocessing.pool.ThreadPool eats up memories http://bugs.python.org/issue26418 #26396: Create json.JSONType http://bugs.python.org/issue26396 #26391: Specialized sub-classes of Generic never call __init__ http://bugs.python.org/issue26391 #26383: benchmarks (perf.py): number of decimal places in csv output http://bugs.python.org/issue26383 #26373: asyncio: add support for async context manager on streams? http://bugs.python.org/issue26373 Most recent 15 issues waiting for review (15) ============================================= #26475: Misleading debugging output for verbose regular expressions http://bugs.python.org/issue26475 #26462: Patch to enhance literal block language declaration http://bugs.python.org/issue26462 #26456: import _tkinter + TestForkInThread leaves zombie with stalled http://bugs.python.org/issue26456 #26452: Wrong line number attributed to comprehension expressions http://bugs.python.org/issue26452 #26451: CSV documentation doesn't open with an example http://bugs.python.org/issue26451 #26448: dis.findlabels ignores EXTENDED_ARG http://bugs.python.org/issue26448 #26443: cross building extensions picks up host headers http://bugs.python.org/issue26443 #26436: Add the regex-dna benchmark http://bugs.python.org/issue26436 #26434: multiprocessing cannot spawn grandchild from a Windows service http://bugs.python.org/issue26434 #26432: Add partial.kwargs http://bugs.python.org/issue26432 #26423: Integer overflow in wrap_lenfunc() on 64-bit build of Windows http://bugs.python.org/issue26423 #26414: os.defpath too permissive http://bugs.python.org/issue26414 #26404: socketserver context manager http://bugs.python.org/issue26404 #26403: Catch FileNotFoundError in socketserver.DatagramRequestHandler http://bugs.python.org/issue26403 #26394: Have argparse provide ability to require a fallback value be p http://bugs.python.org/issue26394 Top 10 most discussed issues (10) ================================= #26448: dis.findlabels ignores EXTENDED_ARG http://bugs.python.org/issue26448 20 msgs #19475: Add timespec optional flag to datetime isoformat() to choose t http://bugs.python.org/issue19475 16 msgs #26466: could not build python 2.7.11 on AIX http://bugs.python.org/issue26466 10 msgs #25702: Link Time Optimizations support for GCC and CLANG http://bugs.python.org/issue25702 8 msgs #26362: Approved API for creating a temporary file path http://bugs.python.org/issue26362 7 msgs #26436: Add the regex-dna benchmark http://bugs.python.org/issue26436 6 msgs #25910: Fixing links in documentation http://bugs.python.org/issue25910 5 msgs #26439: ctypes.util.find_library fails when ldconfig/glibc not availab http://bugs.python.org/issue26439 5 msgs #26449: Tutorial on Python Scopes and Namespaces uses confusing 'read- http://bugs.python.org/issue26449 5 msgs 
#26450: make html fails on OSX http://bugs.python.org/issue26450 5 msgs Issues closed (36) ================== #6294: Improve shutdown exception ignored message http://bugs.python.org/issue6294 closed by martin.panter #13492: ./configure --with-system-ffi=LIBFFI-PATH http://bugs.python.org/issue13492 closed by berker.peksag #13573: csv.writer uses str() for floats instead of repr() http://bugs.python.org/issue13573 closed by rhettinger #17580: ctypes: ARM hardfloat argument corruption calling functions wi http://bugs.python.org/issue17580 closed by berker.peksag #17873: _ctypes/libffi missing bits for aarch64 support http://bugs.python.org/issue17873 closed by berker.peksag #22176: update internal libffi copy to 3.1, introducing AArch64 and PO http://bugs.python.org/issue22176 closed by berker.peksag #22836: Broken "Exception ignored in:" message on exceptions in __repr http://bugs.python.org/issue22836 closed by martin.panter #23041: csv needs more quoting rules http://bugs.python.org/issue23041 closed by berker.peksag #24421: Race condition compiling Modules/_math.c http://bugs.python.org/issue24421 closed by martin.panter #25647: Return of asyncio.coroutine from asyncio.coroutine doesn't wor http://bugs.python.org/issue25647 closed by yselivanov #25888: awaiting on coroutine that is being awaited should be an error http://bugs.python.org/issue25888 closed by yselivanov #25918: AssertionError in lib2to3 on 2.7.11 Windows http://bugs.python.org/issue25918 closed by schlamar #26221: awaiting asyncio.Future swallows StopIteration http://bugs.python.org/issue26221 closed by yselivanov #26246: Code output toggle button uses removed jQuery method http://bugs.python.org/issue26246 closed by ezio.melotti #26335: Make mmap.write return the number of bytes written like other http://bugs.python.org/issue26335 closed by berker.peksag #26338: remove duplicate bind addresses in create_server http://bugs.python.org/issue26338 closed by yselivanov #26347: BoundArguments.apply_defaults doesn't handle empty arguments http://bugs.python.org/issue26347 closed by yselivanov #26385: the call of tempfile.NamedTemporaryFile fails and leaves a fil http://bugs.python.org/issue26385 closed by SilentGhost #26393: random.shuffled http://bugs.python.org/issue26393 closed by rhettinger #26420: IDLE for Python 3.5.1 for x64 Windows exits when pasted a stri http://bugs.python.org/issue26420 closed by terry.reedy #26421: string_richcompare invalid check Py_NotImplemented http://bugs.python.org/issue26421 closed by serhiy.storchaka #26442: Doc refers to xmlrpc.client but means xmlrpc.server http://bugs.python.org/issue26442 closed by python-dev #26444: Fix 2 typos on ElementTree docs http://bugs.python.org/issue26444 closed by python-dev #26445: setup.py sdist mishandles package_dir option http://bugs.python.org/issue26445 closed by glep #26447: rstrip() is pilfering my 'p' http://bugs.python.org/issue26447 closed by ethan.furman #26453: SystemError on invalid numpy.ndarray / Path operation http://bugs.python.org/issue26453 closed by haypo #26457: Error in ipaddress.address_exclude function http://bugs.python.org/issue26457 closed by serhiy.storchaka #26458: Is the default value assignment of a function parameter evalua http://bugs.python.org/issue26458 closed by ezio.melotti #26463: asyncio-related (?) 
segmentation fault http://bugs.python.org/issue26463 closed by Nicholas Chammas #26464: str.translate() unexpectedly duplicates characters http://bugs.python.org/issue26464 closed by haypo #26469: Bug in ConfigParser when setting new values in extended interp http://bugs.python.org/issue26469 closed by Michael Jacob #26472: Infinite loop http://bugs.python.org/issue26472 closed by christian.heimes #26473: Python 3.5 not run http://bugs.python.org/issue26473 closed by SilentGhost #26474: ctypes: Memory leak at malloc_closure.c http://bugs.python.org/issue26474 closed by brett.cannon #26476: Constness in _PyErr_BadInternalCall http://bugs.python.org/issue26476 closed by serhiy.storchaka #26478: dict views don't implement subtraction correctly http://bugs.python.org/issue26478 closed by python-dev From daniel.shaulov at gmail.com Sat Mar 5 14:24:46 2016 From: daniel.shaulov at gmail.com (Daniel Shaulov) Date: Sat, 5 Mar 2016 21:24:46 +0200 Subject: [Python-Dev] Requesting review for the patch in issue26271 Message-ID: Hi, issue26271 has a patch attached that fixes it. Can someone please review it? It is a very small and straightforward patch. (pinging here as suggested in the devguide - https://docs.python.org/devguide/patch.html#reviewing) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Mar 5 16:04:20 2016 From: brett at python.org (Brett Cannon) Date: Sat, 05 Mar 2016 21:04:20 +0000 Subject: [Python-Dev] Requesting review for the patch in issue26271 In-Reply-To: References: Message-ID: FYI I've reviewed Daniel's patch -- which LGTM -- and assigned it to myself to make sure it gets committed (although if someone has time to commit the patch today they can feel free). On Sat, 5 Mar 2016 at 11:27 Daniel Shaulov wrote: > Hi, > > issue26271 has a patch attached that fixes it. Can someone please review > it? It is a very small and straightforward patch. > > (pinging here as suggested in the devguide - > https://docs.python.org/devguide/patch.html#reviewing) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From facundobatista at gmail.com Sun Mar 6 20:07:31 2016 From: facundobatista at gmail.com (Facundo Batista) Date: Sun, 6 Mar 2016 22:07:31 -0300 Subject: [Python-Dev] Web redirect to PyAr Message-ID: Hello! Sending mail here because I really don't know where this is handled :) At some point in the past we had a redirect from python.org/ar (note the slash, not a point) to a PyAr site. Now it's not working, it 404s. It should be redirected to python.org.ar (note the point). Thanks! -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From nad at python.org Mon Mar 7 13:53:27 2016 From: nad at python.org (Ned Deily) Date: Mon, 07 Mar 2016 13:53:27 -0500 Subject: [Python-Dev] Web redirect to PyAr References: Message-ID: In article , Facundo Batista wrote: > Sending mail here because I really don't know where this is handled :) > > At some point in the past we had a redirect from python.org/ar (note > the slash, not a point) to a PyAr site. > > Now it's not working, it 404s. > > It should be redirected to python.org.ar (note the point). 
These days, python.org website issues are being tracked at: https://github.com/python/pythondotorg/issues Suggest opening an issue there. -- Ned Deily, nad at python.org From berker.peksag at gmail.com Mon Mar 7 14:08:49 2016 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Mon, 7 Mar 2016 21:08:49 +0200 Subject: [Python-Dev] Web redirect to PyAr In-Reply-To: References: Message-ID: On Mon, Mar 7, 2016 at 3:07 AM, Facundo Batista wrote: > Hello! > > Sending mail here because I really don't know where this is handled :) > > At some point in the past we had a redirect from python.org/ar (note > the slash, not a point) to a PyAr site. > > Now it's not working, it 404s. > > It should be redirected to python.org.ar (note the point). Can you try again? It should be redirected to python.org.ar now. --Berker From facundobatista at gmail.com Mon Mar 7 16:40:39 2016 From: facundobatista at gmail.com (Facundo Batista) Date: Mon, 7 Mar 2016 18:40:39 -0300 Subject: [Python-Dev] Web redirect to PyAr In-Reply-To: References: Message-ID: On Mon, Mar 7, 2016 at 4:08 PM, Berker Peksa? wrote: >> At some point in the past we had a redirect from python.org/ar (note >> the slash, not a point) to a PyAr site. >> >> Now it's not working, it 404s. >> >> It should be redirected to python.org.ar (note the point). > > Can you try again? It should be redirected to python.org.ar now. Wonderful, it works! Thanks Berker!! Regards, -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From michael at felt.demon.nl Tue Mar 8 03:49:38 2016 From: michael at felt.demon.nl (Michael Felt) Date: Tue, 8 Mar 2016 08:49:38 +0000 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL Message-ID: <56DE9222.1030407@felt.demon.nl> As a relative newcomer I may have missed a long previous discussion re: linking with OpenSSL and/or LibreSSL. In an ideal world this would be rtl linking, i.e., underlying complexities of *SSL libraries are hidden from applications. In short, when I saw this http://bugs.python.org/issue26465 Title: Upgrade OpenSSL shipped with python installers, it reminded me I need to start looking at LibreSSL again - and that, if not already done - might be something "secure" for python as well. Michael From hasan.diwan at gmail.com Tue Mar 8 09:55:01 2016 From: hasan.diwan at gmail.com (Hasan Diwan) Date: Tue, 8 Mar 2016 06:55:01 -0800 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: <56DE9222.1030407@felt.demon.nl> References: <56DE9222.1030407@felt.demon.nl> Message-ID: On 8 March 2016 at 00:49, Michael Felt wrote: > As a relative newcomer I may have missed a long previous discussion re: > linking with OpenSSL and/or LibreSSL. > In an ideal world this would be rtl linking, i.e., underlying complexities > of *SSL libraries are hidden from applications. > > In short, when I saw this http://bugs.python.org/issue26465 Title: > Upgrade OpenSSL shipped with python installers, it reminded me I need to > start looking at LibreSSL again - and that, if not already done - might be > something "secure" for python as well. > According to the libressl website, one of the projects primary goals is to remain "backwards-compatible with OpenSSL", which is to say, to either have code work without changes or to fail gracefully when it uses the deprecated bits. It does seem it ships with OpenBSD. 
There is an issue open on bugs to address whatever incompatibilities remain between LibreSSL and OpenSSL[1]. Perhaps you might want to take a look at that? -- H 1. https://bugs.python.org/issue23177 > > Michael > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com > -- OpenPGP: http://hasan.d8u.us/gpg.asc Sent from my mobile device Envoyé de mon portable -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Mar 9 09:54:40 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 9 Mar 2016 15:54:40 +0100 Subject: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance In-Reply-To: References: <56B3254F.7020605@egenix.com> <56B34A1E.4010501@egenix.com> <56B35AB5.5090308@egenix.com> Message-ID: 2016-02-08 15:18 GMT+01:00 Victor Stinner : >> Perhaps if you add some guards somewhere :-) > > We have runtime checks but only implemented in debug mode for efficiency. > > By the way, I proposed once to add an environment variable to allow to > enable these checks without having to recompile Python. Since the PEP > 445, it became easy to implement this. What do you think? > https://www.python.org/dev/peps/pep-0445/#add-a-new-pydebugmalloc-environment-variable Ok, I wrote a patch to implement a new PYTHONMALLOC environment variable: http://bugs.python.org/issue26516 PYTHONMALLOC=debug installs debug hooks to: * detect API violations, ex: PyObject_Free() called on a buffer allocated by PyMem_Malloc() * detect writes before the start of the buffer (buffer underflow) * detect writes after the end of the buffer (buffer overflow) https://docs.python.org/dev/c-api/memory.html#c.PyMem_SetupDebugHooks The main advantage of this variable is that you don't have to recompile Python in debug mode to benefit from these checks. Recompiling Python in debug mode requires recompiling *all* extension modules, since the debug ABI is incompatible. When I played with tracemalloc on Python 2 ( http://pytracemalloc.readthedocs.org/ ), I had such issues; it was very annoying with non-trivial extension modules like PyQt or PyGTK. With PYTHONMALLOC, you don't have to recompile extension modules anymore! With tracemalloc and PYTHONMALLOC=debug, we will have a complete tool suite to "debug memory"! My motivation for PYTHONMALLOC=debug is to detect API violations to prepare my change to the PyMem_Malloc() allocator ( http://bugs.python.org/issue26249 ), but also to help users detect bugs. It's common that users report a bug ("Python crashed") but have no idea what is responsible for the crash. I hope that detection of buffer underflows & overflows will help them find bugs in their own extension modules. Moreover, I added PYTHONMALLOC=malloc to ease the use of external memory debuggers on Python. By default, Python uses the pymalloc allocator for PyObject_Malloc(), which raises a lot of false positives in Valgrind. We even have a configure option (--with-valgrind) and a Valgrind suppression file to be able to skip these false alarms in Valgrind. IMHO PYTHONMALLOC=malloc is a simpler option for using Valgrind (or other tools).
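To make the intended usage concrete, here is a minimal sketch (assuming the variable is spelled exactly as in the issue26516 patch; "my_script.py" is only a placeholder):

    PYTHONMALLOC=debug python my_script.py              # install the debug hooks on a normal release build
    PYTHONMALLOC=malloc valgrind python my_script.py    # force plain libc malloc so Valgrind avoids pymalloc false alarms

Either command should work on a stock release build, without rebuilding Python or any extension module.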
Victor From mamfelt at gmail.com Wed Mar 9 12:29:22 2016 From: mamfelt at gmail.com (Michael Felt) Date: Wed, 9 Mar 2016 17:29:22 +0000 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: References: <56DE9222.1030407@felt.demon.nl> Message-ID: <56E05D72.8040408@gmail.com> Can look at it. There has been a lot of discussion, iirc, between OpenSSL and LibreSSL re: version identification. Thx for the reference. On 08-Mar-16 14:55, Hasan Diwan wrote: > > On 8 March 2016 at 00:49, Michael Felt > wrote: > > As a relative newcomer I may have missed a long previous > discussion re: linking with OpenSSL and/or LibreSSL. > In an ideal world this would be rtl linking, i.e., underlying > complexities of *SSL libraries are hidden from applications. > > In short, when I saw this http://bugs.python.org/issue26465 Title: > Upgrade OpenSSL shipped with python installers, it reminded me I > need to start looking at LibreSSL again - and that, if not already > done - might be something "secure" for python as well. > > > According to the libressl website, one of the projects primary goals > is to remain "backwards-compatible with OpenSSL", which is to say, to > either have code work without changes or to fail gracefully when it > uses the deprecated bits. It does seem it ships with OpenBSD. There is > an issue open on bugs to address whatever incompatibilities remain > between LibreSSL and OpenSSL[1]. Perhaps you might want to take a look > at that? -- H > 1. https://bugs.python.org/issue23177 > > > Michael > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com > > > > > -- > OpenPGP: http://hasan.d8u.us/gpg.asc > Sent from my mobile device > Envoy? de mon portable -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Mar 9 12:54:18 2016 From: brett at python.org (Brett Cannon) Date: Wed, 09 Mar 2016 17:54:18 +0000 Subject: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance In-Reply-To: References: <56B3254F.7020605@egenix.com> <56B34A1E.4010501@egenix.com> <56B35AB5.5090308@egenix.com> Message-ID: On Wed, 9 Mar 2016 at 06:57 Victor Stinner wrote: > 2016-02-08 15:18 GMT+01:00 Victor Stinner : > >> Perhaps if you add some guards somewhere :-) > > > > We have runtime checks but only implemented in debug mode for efficiency. > > > > By the way, I proposed once to add an environment variable to allow to > > enable these checks without having to recompile Python. Since the PEP > > 445, it became easy to implement this. What do you think? > > > https://www.python.org/dev/peps/pep-0445/#add-a-new-pydebugmalloc-environment-variable > > Ok, I wrote a patch to implement a new PYTHONMALLOC environment variable: > > http://bugs.python.org/issue26516 > > PYTHONMALLOC=debug installs debug hooks to: > > * detect API violations, ex: PyObject_Free() called on a buffer > allocated by PyMem_Malloc() > * detect write before the start of the buffer (buffer underflow) > * detect write after the end of the buffer (buffer overflow) > > https://docs.python.org/dev/c-api/memory.html#c.PyMem_SetupDebugHooks > > The main advantage of this variable is that you don't have to > recompile Python in debug mode to benefit of these checks. > I just wanted to say this all sounds awesome! 
Thanks for all the hard work on making our memory management story easier to work with, Victor. -Brett > > Recompiling Python in debug mode requires to recompile *all* > extensions modules since the debug ABI is incompatible. When I played > with tracemalloc on Python 2 ( http://pytracemalloc.readthedocs.org/ > ), I had such issues, it was very annoying with non-trivial extension > modules like PyQt or PyGTK. With PYTHONMALLOC, you don't have to > recompile extension modules anymore! > > > With tracemalloc and PYTHONMALLOC=debug, we will have a complete tool > suite to "debug memory"! > > My motivation for PYTHONMALLOC=debug is to detect API violations to > prepare my change on PyMem_Malloc() allocator ( > http://bugs.python.org/issue26249 ), but also to help users to detect > bugs. > > It's common that users report a bug: "Python crashed", but have no > idea of the responsible of the crash. I hope that detection of buffer > underflow & overflow will help them to detect bugs in their own > extension modules. > > > Moreover, I added PYTHONMALLOC=malloc to ease the use of external > memory debugger on Python. By default, Python uses pymalloc allocator > for PyObject_Malloc() which raises a lot of false positive in > Valgrind. We even have a configuration (--with-valgrind) and a > Valgrind suppressino file to be able to skip these false alarms in > Valgrind. IMHO PYTHONMALLOC=malloc is a simpler option to use Valgrind > (or other tools). > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irene.papuc at toptal.com Thu Mar 10 07:01:03 2016 From: irene.papuc at toptal.com (Irina Papuc) Date: Thu, 10 Mar 2016 04:01:03 -0800 Subject: [Python-Dev] Connecting Message-ID: <56e161ff5105a_58e64acaf40_489_49@production-outreach-worker-pensive-mummy-6575.mail> Hi Python, My name is Irene Papuc, and I work for a technical publication known as the Toptal Engineering blog. Our blog covers a variety of topics, across many programming languages and dev/design. I have noticed we share similar areas of focus in our topics including python, which I found here. We?ve recently published an article called "Python Multithreading Tutorial: Concurrency and Parallelism", which has a similar focus on python. Would you be interested in sharing this with your community by publishing it on your site? In addition, what other topics make sense, and are you open to long-term collaboration? Best wishes, Irina Irina Papuc | Toptal |?irene.papuc at toptal.com?| Toptal - Hire the Top 3% of Freelance Developers If you'd like me to stop sending you emails, please click here -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Mar 11 12:08:39 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 11 Mar 2016 18:08:39 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160311170839.3DECA5626B@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-03-04 - 2016-03-11) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5454 (+19) closed 32842 (+40) total 38296 (+59) Open issues with patches: 2373 Issues opened (39) ================== #16851: Hint about correct ismethod and isfunction usage http://bugs.python.org/issue16851 reopened by haypo #26481: unittest discovery process not working without .py source file http://bugs.python.org/issue26481 opened by stefan #26483: docs unclear on difference between str.isdigit() and str.isdec http://bugs.python.org/issue26483 opened by ethan.furman #26488: hashlib command line interface http://bugs.python.org/issue26488 opened by palaviv #26491: Defer DECREFs until enum object is in a consistent state for r http://bugs.python.org/issue26491 opened by rhettinger #26492: Exhausted array iterator should left exhausted http://bugs.python.org/issue26492 opened by serhiy.storchaka #26493: Bad formatting in WinError 193 when using subprocess.check_cal http://bugs.python.org/issue26493 opened by Ra??l N????ez de Arenas #26494: Double deallocation on iterator exhausting http://bugs.python.org/issue26494 opened by serhiy.storchaka #26495: super() does not work in nested functions, genexps, listcomps, http://bugs.python.org/issue26495 opened by ztane #26496: Exhausted deque iterator should free a reference to a deque http://bugs.python.org/issue26496 opened by serhiy.storchaka #26499: http.client.IncompleteRead from HTTPResponse read after part r http://bugs.python.org/issue26499 opened by maubp #26500: Document of star operator missing. It must be documented for b http://bugs.python.org/issue26500 opened by janonymous #26503: argparse with required field , not having new line separator i http://bugs.python.org/issue26503 opened by mohankumar #26506: hex() documentation: mention "%x" % int http://bugs.python.org/issue26506 opened by haypo #26507: Use highest pickle protocol in multiprocessing http://bugs.python.org/issue26507 opened by ebehn #26509: spurious ConnectionAbortedError logged on Windows http://bugs.python.org/issue26509 opened by sebastien.bourdeauducq #26510: [argparse] Add required argument to add_subparsers http://bugs.python.org/issue26510 opened by memeplex #26511: Add link to id() built-in in comparison operator documentation http://bugs.python.org/issue26511 opened by Mike Vertolli #26512: Vocabulary: Using "integral" in library/stdtypes.html http://bugs.python.org/issue26512 opened by sizeof #26513: platform.win32_ver() broken in 2.7.11 http://bugs.python.org/issue26513 opened by Florian Roth #26515: Update extending/embedding docs to new way to build modules in http://bugs.python.org/issue26515 opened by brett.cannon #26516: Add PYTHONMALLOC env var and add support for malloc debug hook http://bugs.python.org/issue26516 opened by haypo #26517: Crash in installer http://bugs.python.org/issue26517 opened by jools #26519: Cython doesn't work anymore on Python 3.6 http://bugs.python.org/issue26519 opened by haypo #26523: multiprocessing ThreadPool is untested http://bugs.python.org/issue26523 opened by pitrou #26524: document what config directory is used for http://bugs.python.org/issue26524 opened by jbeck #26525: Documentation of ord(c) easy to misread http://bugs.python.org/issue26525 opened by gladman #26526: In parsermodule.c, replace over 2KLOC of hand-crafted validati http://bugs.python.org/issue26526 opened by A. 
Skrobov #26527: CGI library - Using unicode in header fields http://bugs.python.org/issue26527 opened by Olivier.Le.Moign #26528: NameError for built in function open when re-raising stored ex http://bugs.python.org/issue26528 opened by reidfaiv #26530: tracemalloc: add C API to manually track/untrack memory alloca http://bugs.python.org/issue26530 opened by haypo #26531: KeyboardInterrupt while in input() not catchable on Windows 10 http://bugs.python.org/issue26531 opened by jecanne #26533: logging.config does not allow disable_existing_loggers=True http://bugs.python.org/issue26533 opened by Grazfather x #26534: subprocess.check_output with shell=True ignores the timeout http://bugs.python.org/issue26534 opened by Daniel Shaulov #26535: Minor typo in the docs for struct.unpack http://bugs.python.org/issue26535 opened by Antony.Lee #26536: Add the SIO_LOOPBACK_FAST_PATH option to socket.ioctl http://bugs.python.org/issue26536 opened by Daniel Stokes #26537: ConfigParser has optionxform, but not sectionxform http://bugs.python.org/issue26537 opened by Anaphory #26538: regrtest: setup_tests() must not replace module.__path__ (_Nam http://bugs.python.org/issue26538 opened by haypo #26539: frozen executables should have an empty path http://bugs.python.org/issue26539 opened by Daniel Shaulov Most recent 15 issues with no replies (15) ========================================== #26539: frozen executables should have an empty path http://bugs.python.org/issue26539 #26537: ConfigParser has optionxform, but not sectionxform http://bugs.python.org/issue26537 #26536: Add the SIO_LOOPBACK_FAST_PATH option to socket.ioctl http://bugs.python.org/issue26536 #26534: subprocess.check_output with shell=True ignores the timeout http://bugs.python.org/issue26534 #26533: logging.config does not allow disable_existing_loggers=True http://bugs.python.org/issue26533 #26526: In parsermodule.c, replace over 2KLOC of hand-crafted validati http://bugs.python.org/issue26526 #26524: document what config directory is used for http://bugs.python.org/issue26524 #26515: Update extending/embedding docs to new way to build modules in http://bugs.python.org/issue26515 #26511: Add link to id() built-in in comparison operator documentation http://bugs.python.org/issue26511 #26491: Defer DECREFs until enum object is in a consistent state for r http://bugs.python.org/issue26491 #26488: hashlib command line interface http://bugs.python.org/issue26488 #26481: unittest discovery process not working without .py source file http://bugs.python.org/issue26481 #26480: add a flag that will not give the set a sys.stdin http://bugs.python.org/issue26480 #26471: load_verify_locations(cadata) should load AUX ASN.1 to support http://bugs.python.org/issue26471 #26467: Add async magic method support to unittest.mock.Mock http://bugs.python.org/issue26467 Most recent 15 issues waiting for review (15) ============================================= #26539: frozen executables should have an empty path http://bugs.python.org/issue26539 #26538: regrtest: setup_tests() must not replace module.__path__ (_Nam http://bugs.python.org/issue26538 #26536: Add the SIO_LOOPBACK_FAST_PATH option to socket.ioctl http://bugs.python.org/issue26536 #26535: Minor typo in the docs for struct.unpack http://bugs.python.org/issue26535 #26534: subprocess.check_output with shell=True ignores the timeout http://bugs.python.org/issue26534 #26530: tracemalloc: add C API to manually track/untrack memory alloca http://bugs.python.org/issue26530 #26523: multiprocessing 
ThreadPool is untested http://bugs.python.org/issue26523 #26516: Add PYTHONMALLOC env var and add support for malloc debug hook http://bugs.python.org/issue26516 #26509: spurious ConnectionAbortedError logged on Windows http://bugs.python.org/issue26509 #26499: http.client.IncompleteRead from HTTPResponse read after part r http://bugs.python.org/issue26499 #26494: Double deallocation on iterator exhausting http://bugs.python.org/issue26494 #26492: Exhausted array iterator should left exhausted http://bugs.python.org/issue26492 #26491: Defer DECREFs until enum object is in a consistent state for r http://bugs.python.org/issue26491 #26488: hashlib command line interface http://bugs.python.org/issue26488 #26462: Patch to enhance literal block language declaration http://bugs.python.org/issue26462 Top 10 most discussed issues (10) ================================= #26249: Change PyMem_Malloc to use pymalloc allocator http://bugs.python.org/issue26249 18 msgs #26506: hex() documentation: mention "%x" % int http://bugs.python.org/issue26506 12 msgs #25652: collections.UserString.__rmod__() raises NameError http://bugs.python.org/issue25652 8 msgs #26415: Fragmentation of the heap memory in the Python parser http://bugs.python.org/issue26415 8 msgs #26516: Add PYTHONMALLOC env var and add support for malloc debug hook http://bugs.python.org/issue26516 8 msgs #26531: KeyboardInterrupt while in input() not catchable on Windows 10 http://bugs.python.org/issue26531 8 msgs #26247: Document Chrome/Chromium for python2.7 http://bugs.python.org/issue26247 7 msgs #26513: platform.win32_ver() broken in 2.7.11 http://bugs.python.org/issue26513 6 msgs #22367: Add open_file_descriptor parameter to fcntl.lockf() (use the n http://bugs.python.org/issue22367 5 msgs #26483: docs unclear on difference between str.isdigit() and str.isdec http://bugs.python.org/issue26483 5 msgs Issues closed (38) ================== #2202: urllib2 fails against IIS 6.0 (No support for MD5-sess auth) http://bugs.python.org/issue2202 closed by berker.peksag #15068: fileinput requires two EOF when reading stdin http://bugs.python.org/issue15068 closed by serhiy.storchaka #16465: dict creation performance regression http://bugs.python.org/issue16465 closed by serhiy.storchaka #17940: extra code in argparse.py http://bugs.python.org/issue17940 closed by berker.peksag #21034: Python docs reference the Distribute package which has been de http://bugs.python.org/issue21034 closed by berker.peksag #24324: Remove -Wunreachable-code flag http://bugs.python.org/issue24324 closed by ned.deily #24852: Python 3.5.0rc1 "HOWTO Use Python in the web" needs fix http://bugs.python.org/issue24852 closed by berker.peksag #25718: itertools.accumulate __reduce__/__setstate__ bug http://bugs.python.org/issue25718 closed by serhiy.storchaka #25907: Documentation i18n: Added trans tags in sphinx templates http://bugs.python.org/issue25907 closed by sizeof #26015: Add new tests for pickling iterators of mutable sequences http://bugs.python.org/issue26015 closed by serhiy.storchaka #26167: Improve copy.copy speed for built-in types (list/set/dict) http://bugs.python.org/issue26167 closed by serhiy.storchaka #26177: tkinter: Canvas().keys returns empty strings. 
http://bugs.python.org/issue26177 closed by serhiy.storchaka #26443: cross building extensions picks up host headers http://bugs.python.org/issue26443 closed by hundeboll #26456: import _tkinter + TestForkInThread leaves zombie with stalled http://bugs.python.org/issue26456 closed by martin.panter #26465: Upgrade OpenSSL shipped with python installers http://bugs.python.org/issue26465 closed by steve.dower #26466: could not build python 2.7.11 on AIX http://bugs.python.org/issue26466 closed by Michael.Felt #26475: Misleading debugging output for verbose regular expressions http://bugs.python.org/issue26475 closed by serhiy.storchaka #26482: Restore pickling recursive dequeues http://bugs.python.org/issue26482 closed by serhiy.storchaka #26484: Broken table in /2.7/library/sets.html#set-objects http://bugs.python.org/issue26484 closed by gregory.p.smith #26485: Missing newline, raising a warning, in /Doc/license.rst http://bugs.python.org/issue26485 closed by berker.peksag #26486: Backport some tests to 2.7 http://bugs.python.org/issue26486 closed by serhiy.storchaka #26487: Machine value for fat PowerPC build http://bugs.python.org/issue26487 closed by ned.deily #26489: dictionary unpacking operator in dict expression http://bugs.python.org/issue26489 closed by berker.peksag #26490: Leading ???0??? allowed, only for decimal zero http://bugs.python.org/issue26490 closed by ethan.furman #26497: Documentation - HOWTO Use Python in the web paragraph on Turbo http://bugs.python.org/issue26497 closed by berker.peksag #26498: _io.so flat namespace http://bugs.python.org/issue26498 closed by bugbee #26501: bytes splitlines() method returns strings without decoding http://bugs.python.org/issue26501 closed by zach.ware #26502: traceback.extract_tb breaks compatibility by returning FrameSu http://bugs.python.org/issue26502 closed by berker.peksag #26504: tclErr: invalid command name "PyAggImagePhoto" http://bugs.python.org/issue26504 closed by r mosher #26505: [PATCH] Spelling of ANY in the license of Modules/getaddrinfo. http://bugs.python.org/issue26505 closed by ned.deily #26508: Infinite crash leading to DoS http://bugs.python.org/issue26508 closed by haypo #26514: Object defines '__ne__' as 'not __eq__' if '__ne__' is not imp http://bugs.python.org/issue26514 closed by eryksun #26518: AttributeError: 'module' object has no attribute '_handlerList http://bugs.python.org/issue26518 closed by berker.peksag #26520: asyncio.new_event_loop() hangs http://bugs.python.org/issue26520 closed by gvanrossum #26521: add `extend_enum` to Enum http://bugs.python.org/issue26521 closed by ethan.furman #26522: pickle.whichmodule(object.__new__, None) = 'email.MIMEAudio' http://bugs.python.org/issue26522 closed by serhiy.storchaka #26529: urllib.request: ftplib's cwd() can do harm http://bugs.python.org/issue26529 closed by Vadim Markovtsev #26532: build fails with address sanitizer http://bugs.python.org/issue26532 closed by brett.cannon From michael at felt.demon.nl Fri Mar 11 07:14:30 2016 From: michael at felt.demon.nl (Michael Felt) Date: Fri, 11 Mar 2016 13:14:30 +0100 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20160304170837.2AF0A56678@psf.upfronthosting.co.za> References: <20160304170837.2AF0A56678@psf.upfronthosting.co.za> Message-ID: <56E2B6A6.2050809@felt.demon.nl> I guess I should have never changed the title - apparently the tracker loses track - there are more than 5 messages. 
On 2016-03-04 18:08, Python tracker wrote: > #26439: ctypes.util.find_library fails when ldconfig/glibc not availab > http://bugs.python.org/issue26439 5 msgs And, while I do not want to ping the list in a rude way: I submitted a patch - not perfect of course (it seems to work as expected stand-alone, but not in a 'build' attempt, whereas a previous 'half' patch that was the 'work in progress' did build). As I hope to have time in the coming days to dig further, some hints on how to debug the failed 'build' moment during 'make install' would be greatly appreciated. Basically, the make install ends with: ... Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xml/sax/xmlreader.py ... Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xmllib.py ... Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xmlrpclib.py ... Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/zipfile.py ... make: 1254-004 The error code from the last command is 1. Stop. root at x064:[/data/prj/aixtools/python/python-2.7.11.2] So, my question: how do I make the 'compile' of /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/zipfile.py more verbose? I tried "make V=1 DESTDIR=/var/aixtools/aixtools/python/2.7.11.2 install", but the output was identical. Thanks, Michael From anais_carolina_09 at hotmail.com Fri Mar 11 14:27:06 2016 From: anais_carolina_09 at hotmail.com (anais timaure) Date: Fri, 11 Mar 2016 14:57:06 -0430 Subject: [Python-Dev] Request for Pronouncement: PEP 441 - Improving Python ZIP Application Support Message-ID: Please, I want to download this program for my Windows 8 phone. Anais'Timaure -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Mar 11 17:09:41 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 11 Mar 2016 17:09:41 -0500 Subject: [Python-Dev] Use utf-8 charset for tracker summaries? Message-ID: The weekly 'Summary of Python tracker Issues' ('tracker' should be capitalized if 'Issues' is) starts with Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable Names sometimes have non-ascii chars, and they do not get properly displayed for me with Thunderbird on Windows. For instance, Raúl Núñez de Arenas (Raúl Núñez de Arenas) is transmitted as Ra=C3=BAl N=C3=BA=C3=B1ez de= Arenas and displayed as Ra??l N????ez de Arenas I am rather sure that this does not happen with email sent to me by the tracker itself. -- Terry Jan Reedy From vadmium+py at gmail.com Fri Mar 11 17:38:26 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Fri, 11 Mar 2016 22:38:26 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: Hi Russell. Sorry for the minor ~1 month delay in replying :) I have been doing some experimenting to see what is involved in cross-compiling Python (Native host = Linux, target = Windows via mingw and some patches). So I have a slightly better understanding of the problem than before. On 16 February 2016 at 01:41, Russell Keith-Magee wrote: > In order to build for a host platform, you have to compile for a local > platform first - for example, to compile an iOS ARM64 binary, you have to > compile for OS X x86_64 first. This gives you a local platform version of > Python you can use when building the iOS version. > > Early in the Makefile, the variable PYTHON_FOR_BUILD is set.
This points at > the CPU-local version of Python that can be invoked, which is used for > module builds, and for compiling the standard library source code. This is > set by ?host and ?build flags to configure, plus the use of CC and LDFLAGS > environment variables to point at the compiler and libraries for the > platform you?re compiling for, and a PATH variable that provides the local > platform?s version of Python. So far I haven?t succeeded with my Min GW cross build and am temporarily giving up due to incompatibilities. But my attempts looked a bit like this: make clean # Work around confusion with existing in-source build mkdir native (cd native/ && ../configure) make -C native/ Parser/pgen mkdir mingw (cd mingw/ && ../configure --host=i486-mingw32 --build=x86) make -C mingw/ PGEN=../native/Parser/pgen Actually it was not as smooth as the above commands, because pgen tends to get overwritten with a cross-compiled version. Perhaps we could add a PGEN_FOR_BUILD override, like HOSTPGEN in the patch used at . > There are two places where special handling is required: the compilation and > execution of the parser generator, and _freeze_importlib. In both cases, the > tool needs to be compiled for the local platform, and then executed. > Historically (i.e., Py3.4 and earlier), this has been done by spawning a > child MAKE to compile the tool; this runs the compilation phase with the > local CPU environment, before returning to the master makefile and executing > the tool. By spawning the child MAKE, you get a ?clean? environment, so the > tool is built natively. However, as I understand it, it causes problems with > parallel builds due to race conditions on build rules. The change in > Python3.5 simplified the rule so that child MAKE calls weren?t used, but > that means that pgen and _freeze_importlib are compiled for ARM64, so they > won?t run on the local platform. You suggest that the child Make command happened to compile pgen etc natively, rather than with the cross compiler. But my understanding is that when you invoke $(MAKE), all the environment variables, configure settings, etc, including the cross compiler, would be inherited by the child. Would it be more correct to say instead that in 3.4 you did a separate native build step, precompiling pgen and _freeze_importlib for the native build host? Then you hoped that the child Make was _not_ invoked in the cross-compilation stage and your precompiled executables would not be rebuilt? > As best as I can work out, the solution is to: > > (1) Include the parser generator and _freeze_importlib as part of the > artefacts of local platform. That way, you could use the version of pgen and > _freeze_importlib that was compiled as part of the local platform build. At > present, pgen and _freeze_importlib are used during the build process, but > aren?t preserved at the end of the build; or I don?t understand. After I run Make, it looks like I get working executables leftover at Programs/_freeze_importlib and Parser/pgen. Do you mean to install these programs with ?make install? or something? > (2) Include some concept of the ?local compiler? in the build process, which > can be used to compile pgen and _freeze_importlib; or On the surface solution (2) sounds like the ideal fix. But I guess the local native compiler might also require a separate set of CPPFLAGS, pyconfig.h settings etc. In other words it is sounding like a whole separate ?configure? run. I am thinking it might be simplest to just require this native ?configure? 
run to be done manually. From russell at keith-magee.com Fri Mar 11 18:16:29 2016 From: russell at keith-magee.com (Russell Keith-Magee) Date: Sat, 12 Mar 2016 07:16:29 +0800 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: On Sat, Mar 12, 2016 at 6:38 AM, Martin Panter wrote: > Hi Russell. Sorry for the minor ~1 month delay in replying :) > > I have been doing some experimenting to see what is involved in > cross-compiling Python (Native host = Linux, target = Windows via > mingw and some patches). So I have a slightly better understanding of > the problem than before. > > On 16 February 2016 at 01:41, Russell Keith-Magee > wrote: > > In order to build for a host platform, you have to compile for a local > > platform first - for example, to compile an iOS ARM64 binary, you have to > > compile for OS X x86_64 first. This gives you a local platform version of > > Python you can use when building the iOS version. > > > > Early in the Makefile, the variable PYTHON_FOR_BUILD is set. This points > at > > the CPU-local version of Python that can be invoked, which is used for > > module builds, and for compiling the standard library source code. This > is > > set by ?host and ?build flags to configure, plus the use of CC and > LDFLAGS > > environment variables to point at the compiler and libraries for the > > platform you?re compiling for, and a PATH variable that provides the > local > > platform?s version of Python. > > So far I haven?t succeeded with my Min GW cross build and am > temporarily giving up due to incompatibilities. But my attempts looked > a bit like this: > > make clean # Work around confusion with existing in-source build > mkdir native > (cd native/ && ../configure) > make -C native/ Parser/pgen > mkdir mingw > (cd mingw/ && ../configure --host=i486-mingw32 --build=x86) > make -C mingw/ PGEN=../native/Parser/pgen > > Actually it was not as smooth as the above commands, because pgen > tends to get overwritten with a cross-compiled version. Perhaps we > could add a PGEN_FOR_BUILD override, like HOSTPGEN in the patch used > at < > https://wayback.archive.org/web/20160131224915/http://randomsplat.com/id5-cross-compiling-python-for-embedded-linux.html > >. > > That might fix the pgen problem, but _freeze_importlib still remains. I suppose the same thing might be possible for _freeze_importlib as well? > There are two places where special handling is required: the compilation > and > > execution of the parser generator, and _freeze_importlib. In both cases, > the > > tool needs to be compiled for the local platform, and then executed. > > Historically (i.e., Py3.4 and earlier), this has been done by spawning a > > child MAKE to compile the tool; this runs the compilation phase with the > > local CPU environment, before returning to the master makefile and > executing > > the tool. By spawning the child MAKE, you get a ?clean? environment, so > the > > tool is built natively. However, as I understand it, it causes problems > with > > parallel builds due to race conditions on build rules. The change in > > Python3.5 simplified the rule so that child MAKE calls weren?t used, but > > that means that pgen and _freeze_importlib are compiled for ARM64, so > they > > won?t run on the local platform. > > You suggest that the child Make command happened to compile pgen etc > natively, rather than with the cross compiler. 
But my understanding is > that when you invoke $(MAKE), all the environment variables, configure > settings, etc, including the cross compiler, would be inherited by the > child. > > Would it be more correct to say instead that in 3.4 you did a separate > native build step, precompiling pgen and _freeze_importlib for the > native build host? Then you hoped that the child Make was _not_ > invoked in the cross-compilation stage and your precompiled > executables would not be rebuilt? > Yes - as far as I can make out (with my admittedly hazy understanding), that appears to be what is going on. Although it?s not that I ?hoped? the build wouldn?t happen on the second pass - it was the behavior that was previously relied, and on was altered. > > As best as I can work out, the solution is to: > > > > (1) Include the parser generator and _freeze_importlib as part of the > > artefacts of local platform. That way, you could use the version of pgen > and > > _freeze_importlib that was compiled as part of the local platform build. > At > > present, pgen and _freeze_importlib are used during the build process, > but > > aren?t preserved at the end of the build; or > > I don?t understand. After I run Make, it looks like I get working > executables leftover at Programs/_freeze_importlib and Parser/pgen. Do > you mean to install these programs with ?make install? or something? > Making them part of the installable artefacts would be one option, but they don?t have to be installed, just preserved. For example, as a nasty hack, I?ve been able to use this approach to get the build working for 3.5. After the native build, I copy _freeze_importlib to a ?safe? location. I then copy it back into place prior to the target build. It works, but it?s in no way suitable for a final build. > > (2) Include some concept of the ?local compiler? in the build process, > which > > can be used to compile pgen and _freeze_importlib; or > > On the surface solution (2) sounds like the ideal fix. But I guess the > local native compiler might also require a separate set of CPPFLAGS, > pyconfig.h settings etc. In other words it is sounding like a whole > separate ?configure? run. I am thinking it might be simplest to just > require this native ?configure? run to be done manually. > That run is going to happen anyway, since you have to compile and build for the native platform. Yours, Russ Magee %-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadmium+py at gmail.com Fri Mar 11 19:48:07 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Sat, 12 Mar 2016 00:48:07 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: On 11 March 2016 at 23:16, Russell Keith-Magee wrote: > > On Sat, Mar 12, 2016 at 6:38 AM, Martin Panter wrote: >> make clean # Work around confusion with existing in-source build >> mkdir native >> (cd native/ && ../configure) >> make -C native/ Parser/pgen >> mkdir mingw >> (cd mingw/ && ../configure --host=i486-mingw32 --build=x86) >> make -C mingw/ PGEN=../native/Parser/pgen >> >> Actually it was not as smooth as the above commands, because pgen >> tends to get overwritten with a cross-compiled version. Perhaps we >> could add a PGEN_FOR_BUILD override, like HOSTPGEN in the patch used >> at >> . >> > That might fix the pgen problem, but _freeze_importlib still remains. I > suppose the same thing might be possible for _freeze_importlib as well? Yes. 
I never got up to it failing in my experiments, but I think I would propose a FREEZE_IMPORTLIB override variable for that too. >> Would it be more correct to say instead that in 3.4 you did a separate >> native build step, precompiling pgen and _freeze_importlib for the >> native build host? Then you hoped that the child Make was _not_ >> invoked in the cross-compilation stage and your precompiled >> executables would not be rebuilt? > > > Yes - as far as I can make out (with my admittedly hazy understanding), that > appears to be what is going on. Although it?s not that I ?hoped? the build > wouldn?t happen on the second pass - it was the behavior that was previously > relied, and on was altered. Do you have a copy/patch/link/etc to the actual commands that you relied on? It?s hard to guess exactly what you were doing that broke without this information. >> > As best as I can work out, the solution is to: >> > >> > (1) Include the parser generator and _freeze_importlib as part of the >> > artefacts of local platform. That way, you could use the version of pgen >> > and >> > _freeze_importlib that was compiled as part of the local platform build. >> > At >> > present, pgen and _freeze_importlib are used during the build process, >> > but >> > aren?t preserved at the end of the build; or >> >> I don?t understand. After I run Make, it looks like I get working >> executables leftover at Programs/_freeze_importlib and Parser/pgen. Do >> you mean to install these programs with ?make install? or something? > > > Making them part of the installable artefacts would be one option, but they > don?t have to be installed, just preserved. What commands are you running that cause them to not be preserved at the end of the build? > For example, as a nasty hack, I?ve been able to use this approach to get the > build working for 3.5. After the native build, I copy _freeze_importlib to a > ?safe? location. I then copy it back into place prior to the target build. > It works, but it?s in no way suitable for a final build. > >> >> > (2) Include some concept of the ?local compiler? in the build process, >> > which >> > can be used to compile pgen and _freeze_importlib; or >> >> On the surface solution (2) sounds like the ideal fix. But I guess the >> local native compiler might also require a separate set of CPPFLAGS, >> pyconfig.h settings etc. In other words it is sounding like a whole >> separate ?configure? run. I am thinking it might be simplest to just >> require this native ?configure? run to be done manually. > > > That run is going to happen anyway, since you have to compile and build for > the native platform. From ezio.melotti at gmail.com Sat Mar 12 01:42:55 2016 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Sat, 12 Mar 2016 08:42:55 +0200 Subject: [Python-Dev] Use utf-8 charset for tracker summaries? In-Reply-To: References: Message-ID: On Sat, Mar 12, 2016 at 12:09 AM, Terry Reedy wrote: > The weeky 'Summariy of Python tracker Issues' ('tracker' should be > capitalized if 'Issues' is) starts with > > Content-Type: text/plain; charset="us-ascii" > Content-Transfer-Encoding: quoted-printable > > Names sometimes have not-ascii chars, and they do not get properly displayed > for me with Thunderbird on Windows. For instance, > Ra?l N??ez de Arenas (Ra?l N??ez de Arenas) > is transmitted as > Ra=C3=BAl N=C3=BA=C3=B1ez de= Arenas > and displayed as > Ra??l N????ez de Arenas > This already looks like UTF-8 -- you should be able to verify this by manually selecting UTF-8 as encoding from the menu. 
If the Content-Type still uses us-ascii though, it should be fixed to specify UTF-8 instead. Best Regards, Ezio Melotti > I am rather sure that this does not happen with email sent to me by the tracker itself. > > -- > Terry Jan Reedy > From tjreedy at udel.edu Sat Mar 12 03:56:25 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 12 Mar 2016 03:56:25 -0500 Subject: [Python-Dev] Use utf-8 charset for tracker summaries? In-Reply-To: References: Message-ID: On 3/12/2016 1:42 AM, Ezio Melotti wrote: > On Sat, Mar 12, 2016 at 12:09 AM, Terry Reedy wrote: >> The weekly 'Summary of Python tracker Issues' ('tracker' should be >> capitalized if 'Issues' is) starts with >> >> Content-Type: text/plain; charset="us-ascii" >> Content-Transfer-Encoding: quoted-printable >> >> Names sometimes have non-ascii chars, and they do not get properly displayed >> for me with Thunderbird on Windows. For instance, >> Raúl Núñez de Arenas (Raúl Núñez de Arenas) >> is transmitted as >> Ra=C3=BAl N=C3=BA=C3=B1ez de= Arenas >> and displayed as >> Ra??l N????ez de Arenas >> > > This already looks like UTF-8 -- you should be able to verify this by > manually selecting UTF-8 as encoding from the menu. I found the per-message setting and selecting utf-8 fixed it. > If the Content-Type still uses us-ascii though, it should be fixed to > specify UTF-8 instead. Email from the tracker specifies utf-8. Thunderbird must assume Latin-1 for non-ascii in a message specified as ascii. -- Terry Jan Reedy From steve at holdenweb.com Sat Mar 12 14:46:33 2016 From: steve at holdenweb.com (Steve Holden) Date: Sat, 12 Mar 2016 19:46:33 +0000 Subject: [Python-Dev] [Webmaster] Python installation problem In-Reply-To: <56e4293e.d247620a.75c50.ffffef11@mx.google.com> References: <56e4293e.d247620a.75c50.ffffef11@mx.google.com> Message-ID: Hi Sumit, We get a lot of these inquiries. Yes, you can generally assume that Python will be available for the current versions of Linux, Windows and MacOS because the developers generally use those platforms for development work. It is certainly fine on Windows 10, so you should have no problems. You might find https://www.londonappdeveloper.com/setting-up-your-windows-10-system-for-python-development-pydev-eclipse-python/ helpful in setting up your environment. I have forwarded this query to the developers list, as I noticed that the Windows releases page (https://www.python.org/downloads/windows/) tells the reader a lot about what's available for Windows but doesn't mention which versions of Windows are supported. I won't guarantee the developers will have time to reply, but at least they know that this would be useful information. Thanks for your interest in Python. Clicking the links for the latest downloads on the home page of python.org is the traditional way to get started. Good luck! regards Steve Steve Holden On Sat, Mar 12, 2016 at 2:34 PM, Sumit Mandal wrote: > Sir/Madam > > > > I wanted to know if Python runs on Windows 10 or not and if it does how > can I install it and get it running? > > > > Thanking you > > Yours sincerely > > Sumit Mandal > > > > Sent from Mail for > Windows 10 > > > > _______________________________________________ > Webmaster mailing list > Webmaster at python.org > https://mail.python.org/mailman/listinfo/webmaster > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From aixtools at gmail.com Sun Mar 13 07:06:26 2016 From: aixtools at gmail.com (Michael Felt) Date: Sun, 13 Mar 2016 12:06:26 +0100 Subject: [Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values In-Reply-To: References: <56B18CD7.2010409@python.org> <56B1B73C.1030204@sdamon.com> <56B39E26.3060407@sdamon.com> <56B65F0C.2070403@python.org> <56D5D576.9010307@python.org> <56D5FAD5.2030904@python.org> <56D6D377.5000707@gmail.com> Message-ID: <56E549B2.8020702@gmail.com> On 2016-03-02 18:45, Thomas Wouters wrote: > On Wed, Mar 2, 2016 at 3:50 AM, Michael Felt wrote: > >> Hello all, >> >> 1) There are many lists to choose from - if this is the wrong one for >> questions about packaging - please forgive me, and point me in the right >> direction. >> > It's hard to say where this belongs best, but python-list would probably > have done as well. > > >> 2) Normally, I have just packaged python, and then moved on. However, >> recently I have been asked to help with packaging an 'easier to install' >> python by people using cloud-init, and more recently people wanting to use >> salt-stack (on AIX). >> >> FYI: I have been posting about my complete failure to build 2.7.11 ( >> http://bugs.python.org/issue26466) - so, what I am testing is based on >> 2.7.10 - which built easily for me. >> >> Going through the 'base documentation' I saw a reference to both sys.argv >> and sys.path. atm, I am looking for an easy way to get the program name >> (e.g., /opt/bin/python, versus ./python). >> I have my reasons (basically, looking for a compiled-in library search >> path to help with http://bugs.python.org/issue26439) >> > I think the only way to get at the compiled-in search path is to recreate > it based on the compiled-in prefix, which you can get through distutils. > Python purposely only uses the compiled-in path as the last resort. > Instead, it searches for its home relative to the executable and adds a set > of directories relative to its home (if they exist). > > It's not clear to me why you're focusing on these differences, as (as I > describe below) they are immaterial. > > >> Looking on two platforms (AIX, my build, and debian for power) I am >> surprised that sys.argv is empty in both cases, and sys.path returns >> /opt/lib/python27.zip with AIX, but not with debian. >> > When you run python interactively, sys.argv[0] will be '', yes. Since > you're not launching a program, there's nothing else to set it to. 'python' > (or the path to the executable) wouldn't be the right thing to set it to, > because python itself isn't a Python program :) > > The actual python executable is sys.executable, not sys.argv[0], but you > shouldn't usually care about that, either. If you want to know where to > install things, distutils is the thing to use. If you want to know where > Python thinks it's installed (for debugging purposes only, really), > sys.prefix will tell you. > > >> root at x064:[/data/prj/aixtools/python/python-2.7.10]/opt/bin/python >> Python 2.7.10 (default, Nov 3 2015, 14:36:51) [C] on aix5 >> Type "help", "copyright", "credits" or "license" for more information. 
>>>>> import sys >>>>> sys.argv >> [''] >>>>> sys.path >> ['', '/opt/lib/python27.zip', '/opt/lib/python2.7', >> '/opt/lib/python2.7/plat-aix5', '/opt/lib/python2.7/lib-tk', >> '/opt/lib/python2.7/lib-old', '/opt/lib/python2.7/lib-dynload', >> '/opt/lib/python2.7/site-packages'] >> >> michael at ipv4:~$ python >> Python 2.7.9 (default, Mar 1 2015, 13:01:00) >> [GCC 4.9.2] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >>>>> import sys >>>>> sys.argv >> [''] >>>>> sys.path >> ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-powerpc-linux-gnu', >> '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', >> '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', >> '/usr/lib/python2.7/dist-packages', >> '/usr/lib/python2.7/dist-packages/PILcompat', >> '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7'] >> > In sys.path, you're seeing the difference between a vanilla Python and > Debian's patched Python. Vanilla Python adds $prefix/lib/python27.zip to > sys.path unconditionally, whereas Debian removes it when it doesn't exist. > Likewise, the dist-packages directory is a local modification by Debian; in > vanilla Python it's called 'site-packages' instead. The subdirectories in > dist-packages that you see in the Debian case are added by .pth files > installed in $prefix -- third-party packages, in other words, adding their > own directories to the module search path. > > >> And I guess I would be interested in getting >> '/opt/lib/python2.7/dist-packages' in there as well (or learn a way to >> later add it for pre-compiled packages such as cloud-init AND that those >> would also look 'first' in /opt/lib/python2.7/dist-packages/cloud-init for >> modules added to support cloud-init - should I so choose (mainly in case of >> compatibility issues between say cloud-init and salt-stack that have common >> modules BUT may have conflicts) - Hopefully never needed for that reason, >> but it might also simplify packaging applications that depend on python. >> > A vanilla Python (or non-Debian-built python, even) has no business looking > in dist-packages. It should just use site-packages. Third-party packages > shouldn't care whether they're installed in site-packages or dist-packages, > and instead should use distutils one way or another (if not by having an > actual setup.py that uses distutils or setuptools, then at least by > querying distutils for the installation directory the way python-config > does). > > >> Many thanks for your time and pointers into the documentation, It is a bit >> daunting :) >> >> Michael >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/thomas%40python.org >> > > Many Thanks Thomas for the extensive answer. My question is to help me understand where to look for default libraries in order to work on a patch. That has come a long way, but it is stuck now with/by something else I need to learn to debug (to find where the non-zero exit status comes from during a build). As a packager I hope to be as 'vanilla' as possible from the python perspective. Another time (another list!) 
I shall ask about what goes into python.zip Michael From vadmium+py at gmail.com Sun Mar 13 22:41:20 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Mon, 14 Mar 2016 02:41:20 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: On 13 March 2016 at 01:13, Russell Keith-Magee wrote: > The patches that I've uploaded to Issue23670 [1] show a full cross-platform > [1] http://bugs.python.org/issue23670 > build process. After you apply that patch, the iOS directory contains a > meta-Makefile that manages the build process. Thanks very much for pointing that out. This has helped me understand a lot more things. Only now do I realize that the four files generated by pgen and _freeze_importlib are actually already committed into the Mercurial repository: Include/graminit.h Python/graminit.c Python/importlib.h Python/importlib_external.h A question for other Python developers: Why are these generated files stored in the repository? The graminit ones seem to have been there since forever (1990). It seems the importlib ones were there due to a bootstrapping problem, but now that is solved. Antoine said he kept them in the repository on purpose, but I want to know why. If we ignore the cross compiling use case, would there be any other consequences of removing these generated files from the repository? E.g. would it affect the Windows build process? I have two possible solutions in mind: either remove the generated files from the repository and always build them, or keep them but do not automatically regenerate them every build. Since they are generated files, not source files, I would prefer to remove them, but I want to know the consequences first. > On Sat, Mar 12, 2016 at 8:48 AM, Martin Panter wrote: >> On 11 March 2016 at 23:16, Russell Keith-Magee >> wrote: >> > >> > On Sat, Mar 12, 2016 at 6:38 AM, Martin Panter >> > wrote: >> >> I don't understand. After I run Make, it looks like I get working >> >> executables leftover at Programs/_freeze_importlib and Parser/pgen. Do >> >> you mean to install these programs with "make install" or something? >> > >> > >> > Making them part of the installable artefacts would be one option, but >> > they >> > don't have to be installed, just preserved. >> >> What commands are you running that cause them to not be preserved at >> the end of the build? > > > I don't know - this is where I hit the end of my understanding of the build > process. All I know for certain is that 3.4.2 doesn't have this problem; > 3.5.1 does, and Issue22359 (fixed in [3]) is the source of the problem. A > subsequent fix [4] introduced an additional problem with _freeze_importlib. > > [3] https://hg.python.org/cpython/rev/565b96093ec8 > [4] https://hg.python.org/cpython/rev/02e3bf65b2f8 After my realization about the generated files, I think I can solve this one. Before the changes you identified, the build process probably thought the generated files were up to date, so it didn't need to cross-compile pgen or _freeze_importlib. If the generated file timestamps were out of date (e.g. depending on the order they are checked out or extracted), the first native build stage would have fixed them up. From greg at krypto.org Sun Mar 13 23:04:08 2016 From: greg at krypto.org (Gregory P. 
Smith) Date: Mon, 14 Mar 2016 03:04:08 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: On Sun, Mar 13, 2016 at 7:41 PM Martin Panter wrote: > On 13 March 2016 at 01:13, Russell Keith-Magee > wrote: > > The patches that I've uploaded to Issue23670 [1] show a full > cross-platform > > [1] http://bugs.python.org/issue23670 > > build process. After you apply that patch, the iOS directory contains a > > meta-Makefile that manages the build process. > > Thanks very much for pointing that out. This has helped me understand > a lot more things. Only now do I realize that the four files generated > by pgen and _freeze_importlib are actually already committed into the > Mercurial repository: > > Include/graminit.h > Python/graminit.c > Python/importlib.h > Python/importlib_external.h > > A question for other Python developers: Why are these generated files > stored in the repository? The graminit ones seem to have been there > since forever (1990). It seems the importlib ones were there due to a > bootstrapping problem, but now that is solved. Antoine > said he kept them in > the repository on purpose, but I want to know why. > > If we ignore the cross compiling use case, would there be any other > consequences of removing these generated files from the repository? > E.g. would it affect the Windows build process? > > I have two possible solutions in mind: either remove the generated > files from the repository and always build them, or keep them but do > not automatically regenerate them every build. Since they are > generated files, not source files, I would prefer to remove them, but > I want to know the consequences first. > They should not be regenerated every build, if they are, that seems like a bug in the makefile to me (or else the timestamp checks that make does vs how your code checkout happened). Having them checked in is convenient for cross builds as it is one less thing that needs a build-host-arch build. my 2 cents, -gps > > > On Sat, Mar 12, 2016 at 8:48 AM, Martin Panter > wrote: > >> On 11 March 2016 at 23:16, Russell Keith-Magee > > >> wrote: > >> > > >> > On Sat, Mar 12, 2016 at 6:38 AM, Martin Panter > >> > wrote: > >> >> I don't understand. After I run Make, it looks like I get working > >> >> executables leftover at Programs/_freeze_importlib and Parser/pgen. > Do > >> >> you mean to install these programs with "make install" or something? > >> > > >> > > >> > Making them part of the installable artefacts would be one option, but > >> > they > >> > don't have to be installed, just preserved. > >> > >> What commands are you running that cause them to not be preserved at > >> the end of the build? > > > > > > I don't know - this is where I hit the end of my understanding of the > build > > process. All I know for certain is that 3.4.2 doesn't have this problem; > > 3.5.1 does, and Issue22359 (fixed in [3]) is the source of the problem. A > > subsequent fix [4] introduced an additional problem with > _freeze_importlib. > > > > [3] https://hg.python.org/cpython/rev/565b96093ec8 > > [4] https://hg.python.org/cpython/rev/02e3bf65b2f8 > > After my realization about the generated files, I think I can solve > this one. Before the changes you identified, the build process > probably thought the generated files were up to date, so it didn't > need to cross-compile pgen or _freeze_importlib. If the generated file > timestamps were out of date (e.g. 
depending on the order they are > checked out or extracted), the first native build stage would have > fixed them up. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Mon Mar 14 00:20:06 2016 From: steve.dower at python.org (Steve Dower) Date: Sun, 13 Mar 2016 21:20:06 -0700 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: Simply removing them would break the Windows build, and it may not be easily fixable as the dependency system we use doesn't allow building a project twice. Currently we fail the build if a generated file changes and then the user has to trigger a second build with the new file, but typically they're fine and the first build succeeds. Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Martin Panter" Sent: ?3/?13/?2016 19:43 To: "Russell Keith-Magee" ; "python-dev" Cc: "antoine at python.org" Subject: Re: [Python-Dev] Bug in build system for cross-platform builds On 13 March 2016 at 01:13, Russell Keith-Magee wrote: > The patches that I've uploaded to Issue23670 [1] show a full cross-platform > [1] http://bugs.python.org/issue23670 > build process. After you apply that patch, the iOS directory contains a > meta-Makefile that manages the build process. Thanks very much for pointing that out. This has helped me understand a lot more things. Only now do I realize that the four files generated by pgen and _freeze_importlib are actually already committed into the Mercurial repository: Include/graminit.h Python/graminit.c Python/importlib.h Python/importlib_external.h A question for other Python developers: Why are these generated files stored in the repository? The graminit ones seem to have been there since forever (1990). It seems the importlib ones were there due to a bootstrapping problem, but now that is solved. Antoine said he kept them in the repository on purpose, but I want to know why. If we ignore the cross compiling use case, would there be any other consequences of removing these generated files from the repository? E.g. would it affect the Windows build process? I have two possible solutions in mind: either remove the generated files from the repository and always build them, or keep them but do not automatically regenerate them every build. Since they are generated files, not source files, I would prefer to remove them, but I want to know the consequences first. > On Sat, Mar 12, 2016 at 8:48 AM, Martin Panter wrote: >> On 11 March 2016 at 23:16, Russell Keith-Magee >> wrote: >> > >> > On Sat, Mar 12, 2016 at 6:38 AM, Martin Panter >> > wrote: >> >> I don't understand. After I run Make, it looks like I get working >> >> executables leftover at Programs/_freeze_importlib and Parser/pgen. Do >> >> you mean to install these programs with "make install" or something? >> > >> > >> > Making them part of the installable artefacts would be one option, but >> > they >> > don't have to be installed, just preserved. >> >> What commands are you running that cause them to not be preserved at >> the end of the build? > > > I don't know - this is where I hit the end of my understanding of the build > process. 
All I know for certain is that 3.4.2 doesn't have this problem; > 3.5.1 does, and Issue22359 (fixed in [3]) is the source of the problem. A > subsequent fix [4] introduced an additional problem with _freeze_importlib. > > [3] https://hg.python.org/cpython/rev/565b96093ec8 > [4] https://hg.python.org/cpython/rev/02e3bf65b2f8 After my realization about the generated files, I think I can solve this one. Before the changes you identified, the build process probably thought the generated files were up to date, so it didn't need to cross-compile pgen or _freeze_importlib. If the generated file timestamps were out of date (e.g. depending on the order they are checked out or extracted), the first native build stage would have fixed them up. _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Mar 14 08:41:18 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 14 Mar 2016 22:41:18 +1000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: On 14 March 2016 at 13:04, Gregory P. Smith wrote: > They should not be regenerated every build, if they are, that seems like a > bug in the makefile to me (or else the timestamp checks that make does vs > how your code checkout happened). Having them checked in is convenient for > cross builds as it is one less thing that needs a build-host-arch build. It's also two less things to go wrong for folks just wanting to work on the 99.9% of CPython that is neither the Grammar file nor importlib._bootstrap. I'm trying to remember the problem I was solving in making freezeimportlib.o explicitly depend on the Makefile (while cursing past me for not writing it down in the commit message), and I think the issue was that it wasn't correctly picking up changes to the builtin module list. If that's right, then fixing the dependency to be on "$(srcdir)/Makefile.pre.in" instead of the generated Makefile should keep it from getting confused in the cross-compilation scenario, and also restore the behaviour of using the checked in copy rather than rebuilding it just because you ran "./configure". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rdmurray at bitdance.com Mon Mar 14 09:26:14 2016 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 14 Mar 2016 09:26:14 -0400 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: Message-ID: <20160314132616.03455B14029@webabinitio.net> On Mon, 14 Mar 2016 03:04:08 -0000, "Gregory P. Smith" wrote: > On Sun, Mar 13, 2016 at 7:41 PM Martin Panter wrote: > > > On 13 March 2016 at 01:13, Russell Keith-Magee > > wrote: > > > The patches that I've uploaded to Issue23670 [1] show a full > > cross-platform > > > [1] http://bugs.python.org/issue23670 > > > build process. After you apply that patch, the iOS directory contains a > > > meta-Makefile that manages the build process. > > > > Thanks very much for pointing that out. This has helped me understand > > a lot more things. 
Only now do I realize that the four files generated > > by pgen and _freeze_importlib are actually already committed into the > > Mercurial repository: > > > > Include/graminit.h > > Python/graminit.c > > Python/importlib.h > > Python/importlib_external.h > > > > A question for other Python developers: Why are these generated files > > stored in the repository? The graminit ones seem to have been there > > since forever (1990). It seems the importlib ones were there due to a > > bootstrapping problem, but now that is solved. Antoine > > said he kept them in > > the repository on purpose, but I want to know why. > > > > If we ignore the cross compiling use case, would there be any other > > consequences of removing these generated files from the repository? > > E.g. would it affect the Windows build process? > > > > I have two possible solutions in mind: either remove the generated > > files from the repository and always build them, or keep them but do > > not automatically regenerate them every build. Since they are > > generated files, not source files, I would prefer to remove them, but > > I want to know the consequences first. > > > > They should not be regenerated every build, if they are, that seems like a > bug in the makefile to me (or else the timestamp checks that make does vs > how your code checkout happened). Having them checked in is convenient for > cross builds as it is one less thing that needs a build-host-arch build. The repo-timestamp problem is addressed by the 'make touch' target. And yes, checking in these platform-independent artifacts is very intentional: less to build, fewer external dependencies in the build process...you don't need to *have* python to *build* python, which you would have to if they were not checked in. --David From xdegaye at gmail.com Mon Mar 14 12:34:31 2016 From: xdegaye at gmail.com (Xavier de Gaye) Date: Mon, 14 Mar 2016 17:34:31 +0100 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: <20160314132616.03455B14029@webabinitio.net> References: <20160314132616.03455B14029@webabinitio.net> Message-ID: <56E6E817.1040108@gmail.com> On 03/14/2016 02:26 PM, R. David Murray wrote: > > The repo-timestamp problem is addressed by the 'make touch' target. > > And yes, checking in these platform-independent artifacts is very > intentional: less to build, fewer external dependencies in the build > process...you don't need to *have* python to *build* python, which you > would have to if they were not checked in. Changeset c2a53aa27cad [1] was commited in issue 22359 [2] to remove incorrect uses of recursive make. The changeset added executable binaries as prerequisites to the existing rules (Python/importlib.h and $(GRAMMAR_H)). This broke cross-compilation: * the executables do not exist and must be cross-compiled * then the Python/importlib.h or $(GRAMMAR_H) target recipes must be run since the prerequisites are newer * but the executables cannot run on the build system Actually the files need not be re-generated as their timestamps have been setup for that purpose with 'make touch'. So a solution to the problem introduced by this changeset when cross-compiling could be to remove the binaries as prerequisites of these rules and include the recipe of their corresponding rules, the one used to build the executable, into the recipes of the original rule. Also IMHO the Programs/_freeze_importlib.c can be used instead of Programs/_freeze_importlib.o as a prerequisite in the Python/importlib.h rule. 
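The idea behind 'make touch' is simply to bump the timestamps of the checked-in generated files so that make considers them newer than their sources after a fresh checkout. A minimal sketch of that idea in Python (the real target is driven by the dependency information in .hgtouch; the file-to-source mapping below is illustrative only, not the authoritative list):

    import os
    import time

    # Checked-in generated files and the sources they are derived from
    # (paths are illustrative, assumed for this sketch).
    generated = {
        "Include/graminit.h": ["Grammar/Grammar"],
        "Python/graminit.c": ["Grammar/Grammar"],
        "Python/importlib.h": ["Lib/importlib/_bootstrap.py"],
        "Python/importlib_external.h": ["Lib/importlib/_bootstrap_external.py"],
    }

    now = time.time()
    for output, sources in generated.items():
        newest_source = max(os.path.getmtime(src) for src in sources)
        if os.path.getmtime(output) < newest_source:
            # Mark the checked-in file as newer than its sources so a fresh
            # checkout does not trigger a rebuild of pgen or
            # _freeze_importlib just to regenerate it.
            os.utime(output, (now, now))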
[1] https://hg.python.org/cpython/rev/c2a53aa27cad/ [2] http://bugs.python.org/issue22359 Xavier From xdegaye at gmail.com Mon Mar 14 15:28:26 2016 From: xdegaye at gmail.com (Xavier de Gaye) Date: Mon, 14 Mar 2016 20:28:26 +0100 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: <56E6E817.1040108@gmail.com> References: <20160314132616.03455B14029@webabinitio.net> <56E6E817.1040108@gmail.com> Message-ID: <56E710DA.1060807@gmail.com> On 03/14/2016 05:34 PM, Xavier de Gaye wrote: > Changeset c2a53aa27cad [1] was commited in issue 22359 [2] to remove incorrect > uses of recursive make. The changeset added executable binaries as > prerequisites to the existing rules (Python/importlib.h and $(GRAMMAR_H)). > This broke cross-compilation: > * the executables do not exist and must be cross-compiled > * then the Python/importlib.h or $(GRAMMAR_H) target recipes must be run since > the prerequisites are newer > * but the executables cannot run on the build system > > Actually the files need not be re-generated as their timestamps have been > setup for that purpose with 'make touch'. So a solution to the problem > introduced by this changeset when cross-compiling could be to remove the > binaries as prerequisites of these rules and include the recipe of their > corresponding rules, the one used to build the executable, into the recipes of > the original rule. Also IMHO the Programs/_freeze_importlib.c can be used > instead of Programs/_freeze_importlib.o as a prerequisite in the > Python/importlib.h rule. > > [1] https://hg.python.org/cpython/rev/c2a53aa27cad/ > [2] http://bugs.python.org/issue22359 The pgen dependencies are lost when following my previous suggestion, which is wrong. I have uploaded a patch at issue 22359 that uses a conditional to change the rules, based on whether a cross-compilation is being run. Xavier From victor.stinner at gmail.com Mon Mar 14 19:08:25 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Mar 2016 00:08:25 +0100 Subject: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance In-Reply-To: References: <56B3254F.7020605@egenix.com> <56B34A1E.4010501@egenix.com> <56B35AB5.5090308@egenix.com> Message-ID: 2016-03-09 18:54 GMT+01:00 Brett Cannon : >>> https://docs.python.org/dev/c-api/memory.html#c.PyMem_SetupDebugHooks >> >> The main advantage of this variable is that you don't have to >> recompile Python in debug mode to benefit of these checks. > > I just wanted to say this all sounds awesome! Thanks for all the hard work > on making our memory management story easier to work with, Victor. You're welcome. I pushed my patch adding PYTHONMALLOC environment variable: https://docs.python.org/dev/whatsnew/3.6.html#pythonmalloc-environment-variable Please test PYTHONMALLOC=debug and PYTHONMALLOC=malloc with your favorite application. I also adjusted code (like code handling PYTHONMALLOCSTATS env var) to be able to use debug checks in all cases. For example, debug hooks are now also installed by default when Python is configured in debug mode without pymalloc support. Victor From victor.stinner at gmail.com Mon Mar 14 19:19:11 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Mar 2016 00:19:11 +0100 Subject: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance In-Reply-To: <56BDDEA3.2060702@egenix.com> References: <56B3254F.7020605@egenix.com> <56B34A1E.4010501@egenix.com> <56B35AB5.5090308@egenix.com> <56BDDEA3.2060702@egenix.com> Message-ID: 2016-02-12 14:31 GMT+01:00 M.-A. 
Lemburg : >>> If your program has bugs, you can use a debug build of Python 3.5 to >>> detect misusage of the API. > > Yes, but people don't necessarily do this, e.g. I have > for a very long time ignored debug builds completely > and when I started to try them, I found that some of the > things I had been doing with e.g. free list implementations > did not work in debug builds. I just added support for debug hooks on Python memory allocators on Python compiled in *release* mode. Set the environment variable PYTHONMALLOC to debug to try with Python 3.6. I added a check on PyObject_Malloc() debug hook to ensure that the function is called with the GIL held. I opened an issue to add a similar check on PyMem_Malloc(): https://bugs.python.org/issue26563 > Yes, but those are part of the stdlib. You'd need to check > a few C extensions which are not tested as part of the stdlib, > e.g. numpy, scipy, lxml, pillow, etc. (esp. ones which implement custom > types in C since these will often need the memory management > APIs). > > It may also be a good idea to check wrapper generators such > as cython, swig, cffi, etc. I ran the test suite of numpy, lxml, Pillow and cryptography (used cffi). I found a bug in numpy. numpy calls PyMem_Malloc() without holding the GIL: https://github.com/numpy/numpy/pull/7404 Except of this bug, all other tests pass with PyMem_Malloc() using pymalloc and all debug checks. Victor From greg at krypto.org Mon Mar 14 19:40:14 2016 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 14 Mar 2016 23:40:14 +0000 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: <56E05D72.8040408@gmail.com> References: <56DE9222.1030407@felt.demon.nl> <56E05D72.8040408@gmail.com> Message-ID: Don't forget BoringSSL. On Wed, Mar 9, 2016 at 9:30 AM Michael Felt wrote: > Can look at it. There has been a lot of discussion, iirc, between OpenSSL > and LibreSSL re: version identification. > Thx for the reference. > > > On 08-Mar-16 14:55, Hasan Diwan wrote: > > > On 8 March 2016 at 00:49, Michael Felt wrote: > >> As a relative newcomer I may have missed a long previous discussion re: >> linking with OpenSSL and/or LibreSSL. >> In an ideal world this would be rtl linking, i.e., underlying >> complexities of *SSL libraries are hidden from applications. >> >> In short, when I saw this http://bugs.python.org/issue26465 Title: >> Upgrade OpenSSL shipped with python installers, it reminded me I need to >> start looking at LibreSSL again - and that, if not already done - might be >> something "secure" for python as well. >> > > According to the libressl website, one of the projects primary goals is to > remain "backwards-compatible with OpenSSL", which is to say, to either > have code work without changes or to fail gracefully when it uses the > deprecated bits. It does seem it ships with OpenBSD. There is an issue open > on bugs to address whatever incompatibilities remain between LibreSSL and > OpenSSL[1]. Perhaps you might want to take a look at that? -- H > 1. https://bugs.python.org/issue23177 > >> >> Michael >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com >> > > > > -- > OpenPGP: http://hasan.d8u.us/gpg.asc > Sent from my mobile device > Envoy? 
de mon portable > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Mar 14 19:56:46 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 14 Mar 2016 16:56:46 -0700 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: References: <56DE9222.1030407@felt.demon.nl> <56E05D72.8040408@gmail.com> Message-ID: Should people outside google pay attention to boringssl? The first thing it says on the website is: "Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don?t recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability." On Mon, Mar 14, 2016 at 4:40 PM, Gregory P. Smith wrote: > Don't forget BoringSSL. > > On Wed, Mar 9, 2016 at 9:30 AM Michael Felt wrote: >> >> Can look at it. There has been a lot of discussion, iirc, between OpenSSL >> and LibreSSL re: version identification. >> Thx for the reference. >> >> >> On 08-Mar-16 14:55, Hasan Diwan wrote: >> >> >> On 8 March 2016 at 00:49, Michael Felt wrote: >>> >>> As a relative newcomer I may have missed a long previous discussion re: >>> linking with OpenSSL and/or LibreSSL. >>> In an ideal world this would be rtl linking, i.e., underlying >>> complexities of *SSL libraries are hidden from applications. >>> >>> In short, when I saw this http://bugs.python.org/issue26465 Title: >>> Upgrade OpenSSL shipped with python installers, it reminded me I need to >>> start looking at LibreSSL again - and that, if not already done - might be >>> something "secure" for python as well. >> >> >> According to the libressl website, one of the projects primary goals is to >> remain "backwards-compatible with OpenSSL", which is to say, to either have >> code work without changes or to fail gracefully when it uses the deprecated >> bits. It does seem it ships with OpenBSD. There is an issue open on bugs to >> address whatever incompatibilities remain between LibreSSL and OpenSSL[1]. >> Perhaps you might want to take a look at that? -- H >> 1. https://bugs.python.org/issue23177 >>> >>> >>> Michael >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com >> >> >> >> >> -- >> OpenPGP: http://hasan.d8u.us/gpg.asc >> Sent from my mobile device >> Envoy? de mon portable >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/njs%40pobox.com > -- Nathaniel J. Smith -- https://vorpus.org From greg at krypto.org Mon Mar 14 20:06:50 2016 From: greg at krypto.org (Gregory P. 
Smith) Date: Tue, 15 Mar 2016 00:06:50 +0000 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: References: <56DE9222.1030407@felt.demon.nl> <56E05D72.8040408@gmail.com> Message-ID: On Mon, Mar 14, 2016 at 4:56 PM Nathaniel Smith wrote: > Should people outside google pay attention to boringssl? The first > thing it says on the website is: > > "Although BoringSSL is an open source project, it is not intended for > general use, as OpenSSL is. We don?t recommend that third parties > depend upon it. Doing so is likely to be frustrating because there are > no guarantees of API or ABI stability." > Heh, good point. I guess not. :) > On Mon, Mar 14, 2016 at 4:40 PM, Gregory P. Smith wrote: > > Don't forget BoringSSL. > > > > On Wed, Mar 9, 2016 at 9:30 AM Michael Felt wrote: > >> > >> Can look at it. There has been a lot of discussion, iirc, between > OpenSSL > >> and LibreSSL re: version identification. > >> Thx for the reference. > >> > >> > >> On 08-Mar-16 14:55, Hasan Diwan wrote: > >> > >> > >> On 8 March 2016 at 00:49, Michael Felt wrote: > >>> > >>> As a relative newcomer I may have missed a long previous discussion re: > >>> linking with OpenSSL and/or LibreSSL. > >>> In an ideal world this would be rtl linking, i.e., underlying > >>> complexities of *SSL libraries are hidden from applications. > >>> > >>> In short, when I saw this http://bugs.python.org/issue26465 Title: > >>> Upgrade OpenSSL shipped with python installers, it reminded me I need > to > >>> start looking at LibreSSL again - and that, if not already done - > might be > >>> something "secure" for python as well. > >> > >> > >> According to the libressl website, one of the projects primary goals is > to > >> remain "backwards-compatible with OpenSSL", which is to say, to either > have > >> code work without changes or to fail gracefully when it uses the > deprecated > >> bits. It does seem it ships with OpenBSD. There is an issue open on > bugs to > >> address whatever incompatibilities remain between LibreSSL and > OpenSSL[1]. > >> Perhaps you might want to take a look at that? -- H > >> 1. https://bugs.python.org/issue23177 > >>> > >>> > >>> Michael > >>> _______________________________________________ > >>> Python-Dev mailing list > >>> Python-Dev at python.org > >>> https://mail.python.org/mailman/listinfo/python-dev > >>> Unsubscribe: > >>> > https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com > >> > >> > >> > >> > >> -- > >> OpenPGP: http://hasan.d8u.us/gpg.asc > >> Sent from my mobile device > >> Envoy? de mon portable > >> > >> > >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> https://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: > >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > https://mail.python.org/mailman/options/python-dev/njs%40pobox.com > > > > > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vadmium+py at gmail.com Mon Mar 14 20:49:36 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 15 Mar 2016 00:49:36 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: <20160314132616.03455B14029@webabinitio.net> References: <20160314132616.03455B14029@webabinitio.net> Message-ID: On 14 March 2016 at 13:26, R. David Murray wrote: > On Mon, 14 Mar 2016 03:04:08 -0000, "Gregory P. Smith" wrote: >> On Sun, Mar 13, 2016 at 7:41 PM Martin Panter wrote: >> > Include/graminit.h >> > Python/graminit.c >> > Python/importlib.h >> > Python/importlib_external.h >> > >> > A question for other Python developers: Why are these generated files >> > stored in the repository? [. . .] >> >> They should not be regenerated every build, if they are, that seems like a >> bug in the makefile to me (or else the timestamp checks that make does vs >> how your code checkout happened). The reason the current Python 3 build regenerates some files, is because of the makefile prerequisites. For example, Include/graminit.h currently depends on Parser/pgen, which needs to be compiled for the native build host. >> Having them checked in is convenient for >> cross builds as it is one less thing that needs a build-host-arch build. > > [. . .] > And yes, checking in these platform-independent artifacts is very > intentional: less to build, fewer external dependencies in the build > process...you don't need to *have* python to *build* python, which you > would have to if they were not checked in. Okay so it sounds like the generated files (more listed in .hgtouch) have to stay. Reasons given: * Some need Python to generate them (bootstrap problem) * Relied on by Windows build system * General convenience (less build steps, less prerequisites, less things to go wrong) One more idea I am considering is to decouple the makefile rules from the main build. So to update the generated files you would have to run a separate command like ?make graminit? or ?make frozen?. The normal build would never regenerate them; although perhaps it could still result in an error or warning if they appear out of date. From jim.baker at python.org Mon Mar 14 21:08:00 2016 From: jim.baker at python.org (Jim Baker) Date: Mon, 14 Mar 2016 19:08:00 -0600 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: References: <56DE9222.1030407@felt.demon.nl> <56E05D72.8040408@gmail.com> Message-ID: I have no vested interest in this, other than the continuing work we have done to make Jython compatible with OpenSSL's model, warts and all. But the fact that BoringSSL cleans up the OpenSSL API ( https://boringssl.googlesource.com/boringssl/+/HEAD/PORTING.md), at the cost of possible backwards breaking API changes looks reasonable. I suppose there is some risk - perhaps the maintainers will decide that returning 1 should mean OK, but that's not going to happen, is it. The real issue here is that no direct exposure of BoringSSL to other packages. I don't think that happens with CPython. (Ironically it happens with Jython, due to how signed jars poorly interact with shading/Java namespace remapping.) Maintaining security means dealing with the inevitable churn. Did I mention Jython's support of Python-compatible SSL? I think I did :p - Jim On Mon, Mar 14, 2016 at 6:06 PM, Gregory P. Smith wrote: > > On Mon, Mar 14, 2016 at 4:56 PM Nathaniel Smith wrote: > >> Should people outside google pay attention to boringssl? 
The first >> thing it says on the website is: >> >> "Although BoringSSL is an open source project, it is not intended for >> general use, as OpenSSL is. We don?t recommend that third parties >> depend upon it. Doing so is likely to be frustrating because there are >> no guarantees of API or ABI stability." >> > > Heh, good point. I guess not. :) > > >> On Mon, Mar 14, 2016 at 4:40 PM, Gregory P. Smith >> wrote: >> > Don't forget BoringSSL. >> > >> > On Wed, Mar 9, 2016 at 9:30 AM Michael Felt wrote: >> >> >> >> Can look at it. There has been a lot of discussion, iirc, between >> OpenSSL >> >> and LibreSSL re: version identification. >> >> Thx for the reference. >> >> >> >> >> >> On 08-Mar-16 14:55, Hasan Diwan wrote: >> >> >> >> >> >> On 8 March 2016 at 00:49, Michael Felt wrote: >> >>> >> >>> As a relative newcomer I may have missed a long previous discussion >> re: >> >>> linking with OpenSSL and/or LibreSSL. >> >>> In an ideal world this would be rtl linking, i.e., underlying >> >>> complexities of *SSL libraries are hidden from applications. >> >>> >> >>> In short, when I saw this http://bugs.python.org/issue26465 Title: >> >>> Upgrade OpenSSL shipped with python installers, it reminded me I need >> to >> >>> start looking at LibreSSL again - and that, if not already done - >> might be >> >>> something "secure" for python as well. >> >> >> >> >> >> According to the libressl website, one of the projects primary goals >> is to >> >> remain "backwards-compatible with OpenSSL", which is to say, to either >> have >> >> code work without changes or to fail gracefully when it uses the >> deprecated >> >> bits. It does seem it ships with OpenBSD. There is an issue open on >> bugs to >> >> address whatever incompatibilities remain between LibreSSL and >> OpenSSL[1]. >> >> Perhaps you might want to take a look at that? -- H >> >> 1. https://bugs.python.org/issue23177 >> >>> >> >>> >> >>> Michael >> >>> _______________________________________________ >> >>> Python-Dev mailing list >> >>> Python-Dev at python.org >> >>> https://mail.python.org/mailman/listinfo/python-dev >> >>> Unsubscribe: >> >>> >> https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com >> >> >> >> >> >> >> >> >> >> -- >> >> OpenPGP: http://hasan.d8u.us/gpg.asc >> >> Sent from my mobile device >> >> Envoy? de mon portable >> >> >> >> >> >> _______________________________________________ >> >> Python-Dev mailing list >> >> Python-Dev at python.org >> >> https://mail.python.org/mailman/listinfo/python-dev >> >> Unsubscribe: >> >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org >> > >> > >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> > https://mail.python.org/mailman/options/python-dev/njs%40pobox.com >> > >> >> >> >> -- >> Nathaniel J. Smith -- https://vorpus.org >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/jbaker%40zyasoft.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Tue Mar 15 00:32:58 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 15 Mar 2016 14:32:58 +1000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: <20160314132616.03455B14029@webabinitio.net> Message-ID: On 15 March 2016 at 10:49, Martin Panter wrote: > One more idea I am considering is to decouple the makefile rules from > the main build. So to update the generated files you would have to run > a separate command like ?make graminit? or ?make frozen?. The normal > build would never regenerate them; although perhaps it could still > result in an error or warning if they appear out of date. Some of them used to work that way and it's an incredible PITA when you actually *are* working on one of the affected bits of the interpreter - working on those parts is rare, so if there are special cases to remember, you can pretty much guarantee we'll have forgotten them by the time we work on that piece again. However, it would be worth reviewing the explicit dependencies on "Makefile" and see whether they could be replaced by dependencies on Makefile.pre.in instead. I'm confident that will work just fine for the importlib bootstrapping, and I suspect it will work for the other pregenerated-and-checked-in files as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vadmium+py at gmail.com Tue Mar 15 01:15:47 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 15 Mar 2016 05:15:47 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: <20160314132616.03455B14029@webabinitio.net> Message-ID: On 15 March 2016 at 04:32, Nick Coghlan wrote: > On 15 March 2016 at 10:49, Martin Panter wrote: >> One more idea I am considering is to decouple the makefile rules from >> the main build. So to update the generated files you would have to run >> a separate command like ?make graminit? or ?make frozen?. The normal >> build would never regenerate them; although perhaps it could still >> result in an error or warning if they appear out of date. > > Some of them used to work that way and it's an incredible PITA when > you actually *are* working on one of the affected bits of the > interpreter - working on those parts is rare, so if there are special > cases to remember, you can pretty much guarantee we'll have forgotten > them by the time we work on that piece again. Perhaps if we wrapped them all up in a common ?make regenerate? target, so it is only one special case to remember? Maybe you could include other stuff like ?make clinic? in that as well. Or you could include the special commands in the warning messages. > However, it would be worth reviewing the explicit dependencies on > "Makefile" and see whether they could be replaced by dependencies on > Makefile.pre.in instead. I'm confident that will work just fine for > the importlib bootstrapping, and I suspect it will work for the other > pregenerated-and-checked-in files as well. The problem is not the reference to Makefile. The graminit files do not depend on Makefile. The bigger problem is that the checked-in files depend on compiled programs. This is a summary of the current rules for importlib: _freeze_importlib.o: _freeze_importlib.c Makefile _freeze_importlib: _freeze_importlib.o [. . .] $(LINKCC) [. . .] 
importlib_external.h: _bootstrap_external.py _freeze_importlib
        _freeze_importlib _bootstrap_external.py importlib_external.h

importlib.h: _bootstrap.py _freeze_importlib
        _freeze_importlib _bootstrap.py importlib.h

So importlib.h depends on the _freeze_importlib compiled program (and only indirectly on Makefile). The makefile says we have to compile _freeze_importlib before checking if importlib.h is up to date. Gnu Make has order-only prerequisites, which it seems we could abuse to do most of what we want. But (1) I'm not sure if we can restrict ourselves to Gnu Make, and (2) it is a horrible hack and would always compile _freeze_importlib even if it is never run.

From ncoghlan at gmail.com Tue Mar 15 04:04:38 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 15 Mar 2016 18:04:38 +1000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: <20160314132616.03455B14029@webabinitio.net> Message-ID:

On 15 March 2016 at 15:15, Martin Panter wrote: > The problem is not the reference to Makefile. The graminit files do > not depend on Makefile. The bigger problem is that the checked-in > files depend on compiled programs. This is a summary of the current > rules for importlib:
>
> _freeze_importlib.o: _freeze_importlib.c Makefile
>
> _freeze_importlib: _freeze_importlib.o [. . .]
>         $(LINKCC) [. . .]
>
> importlib_external.h: _bootstrap_external.py _freeze_importlib
>         _freeze_importlib _bootstrap_external.py importlib_external.h
>
> importlib.h: _bootstrap.py _freeze_importlib
>         _freeze_importlib _bootstrap.py importlib.h
>
> So importlib.h depends on the _freeze_importlib compiled program (and > only indirectly on Makefile). The makefile says we have to compile > _freeze_importlib before checking if importlib.h is up to date.

Ah, I understand now - the fundamental problem is with a checked in file depending on a non-checked-in file, so if you clean out all the native build artifacts when cross-compiling, the makefile will attempt to create target versions of all the helper utilities (pgen, _freeze_importlib, argument clinic, etc).

Would it help to have a "make bootstrap" target that touched all the checked in generated files with build dependencies on non-checked-in files, but only after first touching the expected locations of the built binaries they depend on? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From cory at lukasa.co.uk Tue Mar 15 06:21:11 2016 From: cory at lukasa.co.uk (Cory Benfield) Date: Tue, 15 Mar 2016 10:21:11 +0000 Subject: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL In-Reply-To: References: <56DE9222.1030407@felt.demon.nl> <56E05D72.8040408@gmail.com> Message-ID: <99974E63-56B5-42EF-90B3-CDF314394D5B@lukasa.co.uk>

> On 15 Mar 2016, at 01:08, Jim Baker wrote: > > I have no vested interest in this, other than the continuing work we have done to make Jython compatible with OpenSSL's model, warts and all. > > But the fact that BoringSSL cleans up the OpenSSL API (https://boringssl.googlesource.com/boringssl/+/HEAD/PORTING.md), at the cost of possible backwards breaking API changes looks reasonable. I suppose there is some risk - perhaps the maintainers will decide that returning 1 should mean OK, but that's not going to happen, is it. The real issue here is that no direct exposure of BoringSSL to other packages. I don't think that happens with CPython.
> (Ironically it happens with Jython, due to how signed jars poorly interact with shading/Java namespace remapping.)
>
> Maintaining security means dealing with the inevitable churn. Did I mention Jython's support of Python-compatible SSL? I think I did :p

It is *possible* to support BoringSSL: curl does. However, the BoringSSL developers *really* only target Chromium when they consider the possibility of breakage, so it costs curl quite a bit of development time[0]. curl accepts that cost because it supports every TLS stack under the sun: given that CPython currently supports exactly one, widening it to two is a very big risk indeed.

Cory

[0]: See https://github.com/curl/curl/issues/275, https://github.com/curl/curl/pull/524, https://github.com/curl/curl/pull/640

From vadmium+py at gmail.com Tue Mar 15 06:40:35 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 15 Mar 2016 10:40:35 +0000 Subject: [Python-Dev] Bug in build system for cross-platform builds In-Reply-To: References: <20160314132616.03455B14029@webabinitio.net> Message-ID:

On 15 March 2016 at 08:04, Nick Coghlan wrote: > On 15 March 2016 at 15:15, Martin Panter wrote:
>> _freeze_importlib.o: _freeze_importlib.c Makefile
>>
>> _freeze_importlib: _freeze_importlib.o [. . .]
>>         $(LINKCC) [. . .]
>>
>> importlib_external.h: _bootstrap_external.py _freeze_importlib
>>         _freeze_importlib _bootstrap_external.py importlib_external.h
>>
>> importlib.h: _bootstrap.py _freeze_importlib
>>         _freeze_importlib _bootstrap.py importlib.h
>
> Ah, I understand now - the fundamental problem is with a checked in > file depending on a non-checked-in file, so if you clean out all the > native build artifacts when cross-compiling, the makefile will attempt > to create target versions of all the helper utilities (pgen, > _freeze_importlib, argument clinic, etc).
>
> Would it help to have a "make bootstrap" target that touched all the > checked in generated files with build dependencies on non-checked-in > files, but only after first touching the expected locations of the > built binaries they depend on?

That sounds similar to 'make touch', with a couple of differences. One trouble I foresee is the conflict with shared prerequisites. E.g. 'make bootstrap' would have to create some dummy object files as prerequisites of the pgen program, but we would first have to build others, e.g. Parser/acceler.o, properly for the main Python library. It all feels way too complicated to me. The Python build system is complicated enough as it is.

Maybe it is simplest to just add something in the spirit of Xavier's suggested patch. This would mean that we keep the generated files checked in (to help with Windows and cross compiled builds), we keep the current rules that force normal makefile builds to blindly regenerate the files, but we add some flag or configure.ac check to disable this regeneration if desired.

From guido at python.org Tue Mar 15 16:30:08 2016 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Mar 2016 13:30:08 -0700 Subject: [Python-Dev] What does a double coding cookie mean? Message-ID:

I came across a file that had two different coding cookies -- one on the first line and one on the second. CPython uses the first, but mypy happens to use the second. I couldn't find anything in the spec or docs ruling out the second interpretation.
Does anyone have a suggestion (apart from following CPython)? Reference: https://github.com/python/mypy/issues/1281 -- --Guido van Rossum (python.org/~guido) From python at mrabarnett.plus.com Tue Mar 15 16:53:34 2016 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 15 Mar 2016 20:53:34 +0000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: <56E8764E.7040501@mrabarnett.plus.com> On 2016-03-15 20:30, Guido van Rossum wrote: > I came across a file that had two different coding cookies -- one on > the first line and one on the second. CPython uses the first, but mypy > happens to use the second. I couldn't find anything in the spec or > docs ruling out the second interpretation. Does anyone have a > suggestion (apart from following CPython)? > > Reference: https://github.com/python/mypy/issues/1281 > I think it should follow CPython. As I see it, CPython allows it to be on the second line because the first line might be needed for the shebang. If the first two lines both had an encoding, and then you inserted a shebang line, the second one would be ignored anyway. From brett at python.org Tue Mar 15 17:04:57 2016 From: brett at python.org (Brett Cannon) Date: Tue, 15 Mar 2016 21:04:57 +0000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On Tue, 15 Mar 2016 at 13:31 Guido van Rossum wrote: > I came across a file that had two different coding cookies -- one on > the first line and one on the second. CPython uses the first, but mypy > happens to use the second. I couldn't find anything in the spec or > docs ruling out the second interpretation. Does anyone have a > suggestion (apart from following CPython)? > > Reference: https://github.com/python/mypy/issues/1281 I think the spirit of PEP 263 is for the first specified encoding to win as the support of two lines is to support shebangs and not multiple encodings :) . I also think the fact that tokenize.detect_encoding() doesn't automatically read two lines from its input also suggests the intent is "first encoding wins" (and that is the semantics of the function). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon+python-dev at unequivocal.co.uk Tue Mar 15 16:58:01 2016 From: jon+python-dev at unequivocal.co.uk (Jon Ribbens) Date: Tue, 15 Mar 2016 20:58:01 +0000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: <20160315205801.GS4951@unequivocal.co.uk> On Tue, Mar 15, 2016 at 01:30:08PM -0700, Guido van Rossum wrote: > I came across a file that had two different coding cookies -- one on > the first line and one on the second. CPython uses the first, but mypy > happens to use the second. I couldn't find anything in the spec or > docs ruling out the second interpretation. Does anyone have a > suggestion (apart from following CPython)? > > Reference: https://github.com/python/mypy/issues/1281 If it helps, what 'vim' appears to do is to read the first 'n' lines in order and then last 'n' lines in reverse order, stopping if the second stage reaches a line already processed by the first stage. So with 'modelines=5', the following file: /* vim: set ts=1: */ /* vim: set ts=2: */ /* vim: set ts=3: */ /* vim: set ts=4: */ /* vim: set sw=5 ts=5: */ /* vim: set ts=6: */ /* vim: set ts=7: */ /* vim: set ts=8: */ sets sw=5 and ts=6. Obviously CPython shouldn't be going through all that palaver! 
But it would be a bit more vim-like to use the second line rather than the first if both lines have the cookie. Take that as you will - I'm not saying being 'vim-like' is an inherent virtue ;-) From python at mrabarnett.plus.com Tue Mar 15 17:23:33 2016 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 15 Mar 2016 21:23:33 +0000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56E8764E.7040501@mrabarnett.plus.com> References: <56E8764E.7040501@mrabarnett.plus.com> Message-ID: <56E87D55.8020004@mrabarnett.plus.com> On 2016-03-15 20:53, MRAB wrote: > On 2016-03-15 20:30, Guido van Rossum wrote: >> I came across a file that had two different coding cookies -- one on >> the first line and one on the second. CPython uses the first, but mypy >> happens to use the second. I couldn't find anything in the spec or >> docs ruling out the second interpretation. Does anyone have a >> suggestion (apart from following CPython)? >> >> Reference: https://github.com/python/mypy/issues/1281 >> > I think it should follow CPython. > > As I see it, CPython allows it to be on the second line because the > first line might be needed for the shebang. > > If the first two lines both had an encoding, and then you inserted a > shebang line, the second one would be ignored anyway. > A further thought: is mypy just assuming that the first line contains the shebang? If there's only one encoding line, and it's the first line, does mypy still get it right? From guido at python.org Tue Mar 15 20:28:05 2016 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Mar 2016 17:28:05 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: I agree that the spirit of the PEP is to stop at the first coding cookie found. Would it be okay if I updated the PEP to clarify this? I'll definitely also update the docs. On Tue, Mar 15, 2016 at 2:04 PM, Brett Cannon wrote: > > > On Tue, 15 Mar 2016 at 13:31 Guido van Rossum wrote: >> >> I came across a file that had two different coding cookies -- one on >> the first line and one on the second. CPython uses the first, but mypy >> happens to use the second. I couldn't find anything in the spec or >> docs ruling out the second interpretation. Does anyone have a >> suggestion (apart from following CPython)? >> >> Reference: https://github.com/python/mypy/issues/1281 > > > I think the spirit of PEP 263 is for the first specified encoding to win as > the support of two lines is to support shebangs and not multiple encodings > :) . I also think the fact that tokenize.detect_encoding() doesn't > automatically read two lines from its input also suggests the intent is > "first encoding wins" (and that is the semantics of the function). -- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Tue Mar 15 23:53:17 2016 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 16 Mar 2016 14:53:17 +1100 Subject: [Python-Dev] What does a double coding cookie mean? References: Message-ID: <85h9g7kriq.fsf@benfinney.id.au> Guido van Rossum writes: > I agree that the spirit of the PEP is to stop at the first coding > cookie found. Would it be okay if I updated the PEP to clarify this? > I'll definitely also update the docs. +1, it never occurred to me that the specification could mean otherwise. On reflection I can't see a good reason for it to mean otherwise. -- \ ?Alternative explanations are always welcome in science, if | `\ they are better and explain more. 
Alternative explanations that | _o__) explain nothing are not welcome.? ?Victor J. Stenger, 2001-11-05 | Ben Finney From terri at toybox.ca Wed Mar 16 01:59:08 2016 From: terri at toybox.ca (Terri Oda) Date: Tue, 15 Mar 2016 22:59:08 -0700 Subject: [Python-Dev] Really sketchy Core Python GSoC 2016 ideas page up Message-ID: <56E8F62C.1000101@toybox.ca> Hey all, I've put up an incredibly sketchy, terrible ideas page up for Summer of Code with core python: https://wiki.python.org/moin/SummerOfCode/2016/python-core I'm pretty much the worst person to do this since I'm always swamped with admin stuff and particularly out of touch with what's needed in Core Python around this time of year, so I'm counting on you all to tell me that it's terrible and how to fix it. :) If you don't already have edit privileges on the python wiki and need them for this, let me know your wiki username and I can get you set up. (Or similarly, just tell me what needs fixing and make it my problem.) We're also still looking for more volunteers who can help mentor students. Let me know if this interests you -- we have a few folk who can do higher level code reviews, but we need more day-to-day mentors who can help students keep on track and help them figure out who to ask for help if they get stuck. I can also find folk who will provide mentoring for mentors if you'd like to try but don't think you could be a primary mentor without help! If you're interested in mentoring, email gsoc-admins at python.org and we can get you the link to sign up. And if you emailed before and didn't get a response or haven't found a group to work with, feel free to email again! Terri From storchaka at gmail.com Wed Mar 16 02:03:37 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 16 Mar 2016 08:03:37 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On 15.03.16 22:30, Guido van Rossum wrote: > I came across a file that had two different coding cookies -- one on > the first line and one on the second. CPython uses the first, but mypy > happens to use the second. I couldn't find anything in the spec or > docs ruling out the second interpretation. Does anyone have a > suggestion (apart from following CPython)? > > Reference: https://github.com/python/mypy/issues/1281 There is similar question. If a file has two different coding cookies on the same line, what should win? Currently the last cookie wins, in CPython parser, in the tokenize module, in IDLE, and in number of other code. I think this is a bug. From rosuav at gmail.com Wed Mar 16 02:07:53 2016 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 16 Mar 2016 17:07:53 +1100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On Wed, Mar 16, 2016 at 5:03 PM, Serhiy Storchaka wrote: > On 15.03.16 22:30, Guido van Rossum wrote: >> >> I came across a file that had two different coding cookies -- one on >> the first line and one on the second. CPython uses the first, but mypy >> happens to use the second. I couldn't find anything in the spec or >> docs ruling out the second interpretation. Does anyone have a >> suggestion (apart from following CPython)? >> >> Reference: https://github.com/python/mypy/issues/1281 > > > There is similar question. If a file has two different coding cookies on the > same line, what should win? Currently the last cookie wins, in CPython > parser, in the tokenize module, in IDLE, and in number of other code. I > think this is a bug. 
Why would you ever have two coding cookies in a file? Surely this should be either an error, or ill-defined (ie parsers are allowed to pick whichever they like, including raising)? ChrisA From jcgoble3 at gmail.com Wed Mar 16 02:29:05 2016 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Wed, 16 Mar 2016 02:29:05 -0400 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On Wed, Mar 16, 2016 at 2:07 AM, Chris Angelico wrote: > Why would you ever have two coding cookies in a file? Surely this > should be either an error, or ill-defined (ie parsers are allowed to > pick whichever they like, including raising)? > > ChrisA +1. If multiple coding cookies are found, and all do not agree, I would expect an error to be raised. That it apparently does not raise an error currently is surprising to me. (If multiple coding cookies are found but do agree, perhaps raising a warning would be a good idea.) From v+python at g.nevcal.com Wed Mar 16 02:34:44 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Tue, 15 Mar 2016 23:34:44 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: <56E8FE84.5070108@g.nevcal.com> On 3/15/2016 11:07 PM, Chris Angelico wrote: > On Wed, Mar 16, 2016 at 5:03 PM, Serhiy Storchaka wrote: >> On 15.03.16 22:30, Guido van Rossum wrote: >>> I came across a file that had two different coding cookies -- one on >>> the first line and one on the second. CPython uses the first, but mypy >>> happens to use the second. I couldn't find anything in the spec or >>> docs ruling out the second interpretation. Does anyone have a >>> suggestion (apart from following CPython)? >>> >>> Reference: https://github.com/python/mypy/issues/1281 >> >> There is similar question. If a file has two different coding cookies on the >> same line, what should win? Currently the last cookie wins, in CPython >> parser, in the tokenize module, in IDLE, and in number of other code. I >> think this is a bug. > Why would you ever have two coding cookies in a file? Surely this > should be either an error, or ill-defined (ie parsers are allowed to > pick whichever they like, including raising)? > > ChrisA From the PEP 263: > To define a source code encoding, a magic comment must > be placed into the source files either as first or second > line in the file, such as: So clearly there is only one magic comment. "either" the first or second line, not both. Both, therefore, should be an error. From the PEP 263: > More precisely, the first or second line must match the regular > expression "coding[:=]\s*([-\w.]+)". The first group of this > expression is then interpreted as encoding name. If the encoding > is unknown to Python, an error is raised during compilation. There > must not be any Python statement on the line that contains the > encoding declaration. Clearly the regular expression would only match the first of multiple cookies on the same line, so the first one should always win... but there should only be one, from the first PEP quote "a magic comment". Glenn -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Wed Mar 16 03:09:08 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 16 Mar 2016 09:09:08 +0200 Subject: [Python-Dev] What does a double coding cookie mean? 
In-Reply-To: <56E8FE84.5070108@g.nevcal.com> References: <56E8FE84.5070108@g.nevcal.com> Message-ID: On 16.03.16 08:34, Glenn Linderman wrote: > From the PEP 263: > >> More precisely, the first or second line must match the regular >> expression "coding[:=]\s*([-\w.]+)". The first group of this >> expression is then interpreted as encoding name. If the encoding >> is unknown to Python, an error is raised during compilation. There >> must not be any Python statement on the line that contains the >> encoding declaration. > > Clearly the regular expression would only match the first of multiple > cookies on the same line, so the first one should always win... but > there should only be one, from the first PEP quote "a magic comment". "The first group of this expression" means the first regular expression group. Only the part between parenthesis "([-\w.]+)" is interpreted as encoding name, not all expression. From storchaka at gmail.com Wed Mar 16 03:14:05 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 16 Mar 2016 09:14:05 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On 16.03.16 02:28, Guido van Rossum wrote: > I agree that the spirit of the PEP is to stop at the first coding > cookie found. Would it be okay if I updated the PEP to clarify this? > I'll definitely also update the docs. Could you please also update the regular expression in PEP 263 to "^[ \t\v]*#.*?coding[:=][ \t]*([-.a-zA-Z0-9]+)"? Coding cookie must be in comment, only the first occurrence in the line must be taken to account (here is a bug in CPython), encoding name must be ASCII, and there must not be any Python statement on the line that contains the encoding declaration. [1] [1] https://bugs.python.org/issue18873 From v+python at g.nevcal.com Wed Mar 16 03:46:04 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Wed, 16 Mar 2016 00:46:04 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E8FE84.5070108@g.nevcal.com> Message-ID: <56E90F3C.4000701@g.nevcal.com> On 3/16/2016 12:09 AM, Serhiy Storchaka wrote: > On 16.03.16 08:34, Glenn Linderman wrote: >> From the PEP 263: >> >>> More precisely, the first or second line must match the regular >>> expression "coding[:=]\s*([-\w.]+)". The first group of this >>> expression is then interpreted as encoding name. If the encoding >>> is unknown to Python, an error is raised during compilation. There >>> must not be any Python statement on the line that contains the >>> encoding declaration. >> >> Clearly the regular expression would only match the first of multiple >> cookies on the same line, so the first one should always win... but >> there should only be one, from the first PEP quote "a magic comment". > > "The first group of this expression" means the first regular > expression group. Only the part between parenthesis "([-\w.]+)" is > interpreted as encoding name, not all expression. Sure. But there is no mention anywhere in the PEP of more than one being legal: just more than one position for it, EITHER line 1 or line 2. So while the regular expression mentioned is not anchored, to allow variation in syntax between emacs and vim, "must match the regular expression" doesn't imply "several times", and when searching for a regular expression that might not be anchored, one typically expects to find the first. Glenn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mal at egenix.com Wed Mar 16 03:59:31 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 16 Mar 2016 08:59:31 +0100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: <56E91263.2050900@egenix.com> On 16.03.2016 01:28, Guido van Rossum wrote: > I agree that the spirit of the PEP is to stop at the first coding > cookie found. Would it be okay if I updated the PEP to clarify this? > I'll definitely also update the docs. +1 The only reason to read up to two lines was to address the use of the shebang on Unix, not to be able to define two competing source code encodings :-) > On Tue, Mar 15, 2016 at 2:04 PM, Brett Cannon wrote: >> >> >> On Tue, 15 Mar 2016 at 13:31 Guido van Rossum wrote: >>> >>> I came across a file that had two different coding cookies -- one on >>> the first line and one on the second. CPython uses the first, but mypy >>> happens to use the second. I couldn't find anything in the spec or >>> docs ruling out the second interpretation. Does anyone have a >>> suggestion (apart from following CPython)? >>> >>> Reference: https://github.com/python/mypy/issues/1281 >> >> >> I think the spirit of PEP 263 is for the first specified encoding to win as >> the support of two lines is to support shebangs and not multiple encodings >> :) . I also think the fact that tokenize.detect_encoding() doesn't >> automatically read two lines from its input also suggests the intent is >> "first encoding wins" (and that is the semantics of the function). > > > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Mar 16 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-03-07: Released eGenix pyOpenSSL 0.13.14 ... http://egenix.com/go89 2016-02-19: Released eGenix PyRun 2.1.2 ... http://egenix.com/go88 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From storchaka at gmail.com Wed Mar 16 03:59:55 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 16 Mar 2016 09:59:55 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56E90F3C.4000701@g.nevcal.com> References: <56E8FE84.5070108@g.nevcal.com> <56E90F3C.4000701@g.nevcal.com> Message-ID: On 16.03.16 09:46, Glenn Linderman wrote: > On 3/16/2016 12:09 AM, Serhiy Storchaka wrote: >> On 16.03.16 08:34, Glenn Linderman wrote: >>> From the PEP 263: >>> >>>> More precisely, the first or second line must match the regular >>>> expression "coding[:=]\s*([-\w.]+)". The first group of this >>>> expression is then interpreted as encoding name. If the encoding >>>> is unknown to Python, an error is raised during compilation. There >>>> must not be any Python statement on the line that contains the >>>> encoding declaration. >>> >>> Clearly the regular expression would only match the first of multiple >>> cookies on the same line, so the first one should always win... but >>> there should only be one, from the first PEP quote "a magic comment". >> >> "The first group of this expression" means the first regular >> expression group. 
Only the part between parenthesis "([-\w.]+)" is >> interpreted as encoding name, not all expression. > > Sure. But there is no mention anywhere in the PEP of more than one > being legal: just more than one position for it, EITHER line 1 or line > 2. So while the regular expression mentioned is not anchored, to allow > variation in syntax between emacs and vim, "must match the regular > expression" doesn't imply "several times", and when searching for a > regular expression that might not be anchored, one typically expects to > find the first. Actually "must match the regular expression" is not correct, because re.match() implies anchoring at the start. I have proposed more correct regular expression in other branch of this thread. From waseem.tabraze at gmail.com Wed Mar 16 11:14:58 2016 From: waseem.tabraze at gmail.com (Wasim Thabraze) Date: Wed, 16 Mar 2016 20:44:58 +0530 Subject: [Python-Dev] Interested in the GSoC idea 'Roundup - GitHub integration' Message-ID: Hello everyone, I am Wasim Thabraze, a Computer Science Undergraduate. I have thoroughly gone through the Core-Python GSoC ideas page and have narrowed down my choices to the project 'Improving Roundup GitHub integration'. I have experience in building stuff that are connected to GitHub. Openflock (http://www.openflock.co) is one of such products that I developed. Can someone please help me in knowing more about the project? I wanted to know how and where GitHub should be integrated in the https://bugs.python.org I hope I can code with Core Python this summer. Regards, Wasim www.thabraze.me github.com/waseem18 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Mar 16 11:59:16 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 16 Mar 2016 11:59:16 -0400 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: On 3/16/2016 3:14 AM, Serhiy Storchaka wrote: > On 16.03.16 02:28, Guido van Rossum wrote: >> I agree that the spirit of the PEP is to stop at the first coding >> cookie found. Would it be okay if I updated the PEP to clarify this? >> I'll definitely also update the docs. > > Could you please also update the regular expression in PEP 263 to > "^[ \t\v]*#.*?coding[:=][ \t]*([-.a-zA-Z0-9]+)"? > > Coding cookie must be in comment, only the first occurrence in the line > must be taken to account (here is a bug in CPython), encoding name must > be ASCII, and there must not be any Python statement on the line that > contains the encoding declaration. [1] > > [1] https://bugs.python.org/issue18873 Also, I think there should be one 'official' function somewhere in the stdlib to get and return the encoding declaration. The patch for the issue above had to make the same change in four places other than tests, a violent violation of DRY. -- Terry Jan Reedy From guido at python.org Wed Mar 16 20:05:32 2016 From: guido at python.org (Guido van Rossum) Date: Wed, 16 Mar 2016 17:05:32 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56E91263.2050900@egenix.com> References: <56E91263.2050900@egenix.com> Message-ID: On Wed, Mar 16, 2016 at 12:59 AM, M.-A. Lemburg wrote: > The only reason to read up to two lines was to address the use of > the shebang on Unix, not to be able to define two competing > source code encodings :-) I know. 
I was just surprised that the PEP was sufficiently vague about it that
when I found that mypy picked the second if there were two, I couldn't
prove to myself that it was violating the PEP. I'd rather clarify the
PEP than rely on the reasoning presented earlier here.

I don't like erroring out when there are two different cookies on two
lines; I feel that the spirit of the PEP is to read up to two lines
until a cookie is found, whichever comes first.

I will update the regex in the PEP too (or change the wording to avoid "match").

I'm not sure what to do if there are two cookies on one line. If
CPython currently picks the latter we may want to preserve that
behavior.

Should we recommend that everyone use tokenize.detect_encoding()?

--
--Guido van Rossum (python.org/~guido)

From victor.stinner at gmail.com Wed Mar 16 20:15:31 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 17 Mar 2016 01:15:31 +0100
Subject: [Python-Dev] GSoC: looking for a student to help on FAT Python
Message-ID:

Hi,

I am now looking for a Google Summer of Code (GSoC) student to help me
on my FAT Python project, a new static optimizer for CPython 3.6 using
specialization with guards.

The FAT Python project is already fully functional, the code is written
and tested. I need help to implement new efficient optimizations to
"finish" the project and prove that my design really allows running
applications faster.

FAT Python project: https://faster-cpython.readthedocs.org/fat_python.html
fatoptimizer module: https://fatoptimizer.readthedocs.org/
Slides of my talk at FOSDEM: https://github.com/haypo/conf/raw/master/2016-FOSDEM/fat_python.pdf

The "fatoptimizer" optimizer is written in pure Python. I'm looking for
a student who knows compilers, especially static optimizations like
loop unrolling and function inlining. For concrete tasks, take a look
at the TODO list: https://fatoptimizer.readthedocs.org/en/latest/todo.html

Hurry up students! The deadline is in 1 week! (Sorry, I'm late for my project...)

--
PSF GSoC, Python core projects: https://wiki.python.org/moin/SummerOfCode/2016/python-core
All PSF GSoC projects: https://wiki.python.org/moin/SummerOfCode/2016
GSOC: https://developers.google.com/open-source/gsoc/

Victor

From victor.stinner at gmail.com Wed Mar 16 20:29:33 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 17 Mar 2016 01:29:33 +0100
Subject: [Python-Dev] Make the warnings module extensible
Message-ID:

Hi,

I have an API question for you. I would like to add a new parameter to
the showwarning() function of the warnings module. Problem: it's not
possible to do that without breaking backward compatibility (when an
application replaces warnings.showwarning(), the warnings module allows
and promotes that).

I proposed a patch to add a new showmsg() function which takes a
warnings.WarningMessage object: https://bugs.python.org/issue26568

The design is inspired by the logging module and its logging.LogRecord
class. The warnings.WarningMessage class already exists. Since it's a
class, it's easy to add new attributes without breaking the API.

- If warnings.showwarning() is replaced by an application, this
  function will be called in practice to log the warning.
- If warnings.showmsg() is replaced, again, this function will be
  called in practice.
- If both functions are replaced, showmsg() will be called (replaced
  showwarning() is ignored).

I'm not sure about function names: showmsg() and formatmsg(). Maybe:
showwarnmsg() and formatwarnmsg()? Bikeshedding.... fight!
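For illustration, here is a minimal sketch of how an application could
hook the proposed API. The showwarnmsg() name and the single-argument
signature are assumptions taken from this proposal, not an existing
interface; warnings.WarningMessage and its attributes already exist:

    import sys
    import warnings

    def my_showwarnmsg(msg):
        # 'msg' is a warnings.WarningMessage instance carrying message,
        # category, filename and lineno (plus any new attributes we add).
        sys.stderr.write("%s:%s: %s: %s\n" % (
            msg.filename, msg.lineno, msg.category.__name__, msg.message))

    # Hypothetical new hook from this proposal; today applications can
    # only replace warnings.showwarning(message, category, filename,
    # lineno, ...).
    warnings.showwarnmsg = my_showwarnmsg

New attributes (for example a traceback of the allocation site) would
then reach the hook without changing its signature.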
The final goal is to log the traceback where the destroyed object was allocated when a ResourceWarning warning is logged: https://bugs.python.org/issue26567 Adding a new parameter to warnings make the implementation much more simple and gives more freedom to the logger to decide how to format the warning. Victor From guido at python.org Wed Mar 16 20:29:58 2016 From: guido at python.org (Guido van Rossum) Date: Wed, 16 Mar 2016 17:29:58 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: I've updated the PEP. Please review. I decided not to update the Unicode howto (the thing is too obscure). Serhiy, you're probably in a better position to fix the code looking for cookies to pick the first one if there are two on the same line (or do whatever you think should be done there). Should we recommend that everyone use tokenize.detect_encoding()? On Wed, Mar 16, 2016 at 5:05 PM, Guido van Rossum wrote: > On Wed, Mar 16, 2016 at 12:59 AM, M.-A. Lemburg wrote: >> The only reason to read up to two lines was to address the use of >> the shebang on Unix, not to be able to define two competing >> source code encodings :-) > > I know. I was just surprised that the PEP was sufficiently vague about > it that when I found that mypy picked the second if there were two, I > couldn't prove to myself that it was violating the PEP. I'd rather > clarify the PEP than rely on the reasoning presented earlier here. > > I don't like erroring out when there are two different cookies on two > lines; I feel that the spirit of the PEP is to read up to two lines > until a cookie is found, whichever comes first. > > I will update the regex in the PEP too (or change the wording to avoid "match"). > > I'm not sure what to do if there are two cooking on one line. If > CPython currently picks the latter we may want to preserve that > behavior. > > Should we recommend that everyone use tokenize.detect_encoding()? > > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From v+python at g.nevcal.com Wed Mar 16 21:54:02 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Wed, 16 Mar 2016 18:54:02 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: <56EA0E3A.1090809@g.nevcal.com> On 3/16/2016 5:29 PM, Guido van Rossum wrote: > I've updated the PEP. Please review. I decided not to update the > Unicode howto (the thing is too obscure). Serhiy, you're probably in a > better position to fix the code looking for cookies to pick the first > one if there are two on the same line (or do whatever you think should > be done there). > > Should we recommend that everyone use tokenize.detect_encoding()? > > On Wed, Mar 16, 2016 at 5:05 PM, Guido van Rossum wrote: >> On Wed, Mar 16, 2016 at 12:59 AM, M.-A. Lemburg wrote: >>> The only reason to read up to two lines was to address the use of >>> the shebang on Unix, not to be able to define two competing >>> source code encodings :-) >> I know. I was just surprised that the PEP was sufficiently vague about >> it that when I found that mypy picked the second if there were two, I >> couldn't prove to myself that it was violating the PEP. I'd rather >> clarify the PEP than rely on the reasoning presented earlier here. Oh sure. Updating the PEP is the best way forward. 
But the reasoning, although from somewhat vague specifications, seems sound enough to declare that it meant "find the first cookie in the first two lines". Which is what you've said in the update, although not quite that tersely. It now leaves no room for ambiguous interpretations. >> >> I don't like erroring out when there are two different cookies on two >> lines; I feel that the spirit of the PEP is to read up to two lines >> until a cookie is found, whichever comes first. The only reason for an error would be to alert people that had depended on the bugs, or misinterpretations. Personally, I think if they haven't converted to UTF-8 by now, they've got bigger problems than this change. >> >> I will update the regex in the PEP too (or change the wording to avoid "match"). >> >> I'm not sure what to do if there are two cooking on one line. If >> CPython currently picks the latter we may want to preserve that >> behavior. >> >> Should we recommend that everyone use tokenize.detect_encoding()? >> >> -- >> --Guido van Rossum (python.org/~guido) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Thu Mar 17 04:34:37 2016 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 17 Mar 2016 17:34:37 +0900 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: <22250.27677.692000.625010@turnbull.sk.tsukuba.ac.jp> Guido van Rossum writes: > > Should we recommend that everyone use tokenize.detect_encoding()? +1 From storchaka at gmail.com Thu Mar 17 08:04:06 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 14:04:06 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On 17.03.16 02:29, Guido van Rossum wrote: > I've updated the PEP. Please review. I decided not to update the > Unicode howto (the thing is too obscure). Serhiy, you're probably in a > better position to fix the code looking for cookies to pick the first > one if there are two on the same line (or do whatever you think should > be done there). http://bugs.python.org/issue26581 > Should we recommend that everyone use tokenize.detect_encoding()? Likely. However the interface of tokenize.detect_encoding() is not very simple. From mal at egenix.com Thu Mar 17 09:14:33 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 17 Mar 2016 14:14:33 +0100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: <56EAADB9.8090005@egenix.com> On 17.03.2016 01:29, Guido van Rossum wrote: > I've updated the PEP. Please review. I decided not to update the > Unicode howto (the thing is too obscure). Serhiy, you're probably in a > better position to fix the code looking for cookies to pick the first > one if there are two on the same line (or do whatever you think should > be done there). Thanks, will do. > Should we recommend that everyone use tokenize.detect_encoding()? I'd prefer a separate utility for this somewhere, since tokenize.detect_encoding() is not available in Python 2. I've attached an example implementation with tests, which works in Python 2.7 and 3. > On Wed, Mar 16, 2016 at 5:05 PM, Guido van Rossum wrote: >> On Wed, Mar 16, 2016 at 12:59 AM, M.-A. 
Lemburg wrote: >>> The only reason to read up to two lines was to address the use of >>> the shebang on Unix, not to be able to define two competing >>> source code encodings :-) >> >> I know. I was just surprised that the PEP was sufficiently vague about >> it that when I found that mypy picked the second if there were two, I >> couldn't prove to myself that it was violating the PEP. I'd rather >> clarify the PEP than rely on the reasoning presented earlier here. I suppose it's a rather rare case, since it's the first time that I heard about anyone thinking that a possible second line could be picked - after 15 years :-) >> I don't like erroring out when there are two different cookies on two >> lines; I feel that the spirit of the PEP is to read up to two lines >> until a cookie is found, whichever comes first. >> >> I will update the regex in the PEP too (or change the wording to avoid "match"). >> >> I'm not sure what to do if there are two cooking on one line. If >> CPython currently picks the latter we may want to preserve that >> behavior. >> >> Should we recommend that everyone use tokenize.detect_encoding()? >> >> -- >> --Guido van Rossum (python.org/~guido) > > > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Mar 17 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-03-07: Released eGenix pyOpenSSL 0.13.14 ... http://egenix.com/go89 2016-02-19: Released eGenix PyRun 2.1.2 ... http://egenix.com/go88 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: detect_source_encoding.py Type: text/x-python Size: 2159 bytes Desc: not available URL: From storchaka at gmail.com Thu Mar 17 10:02:55 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 16:02:55 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56EAADB9.8090005@egenix.com> References: <56E91263.2050900@egenix.com> <56EAADB9.8090005@egenix.com> Message-ID: On 17.03.16 15:14, M.-A. Lemburg wrote: > On 17.03.2016 01:29, Guido van Rossum wrote: >> Should we recommend that everyone use tokenize.detect_encoding()? > > I'd prefer a separate utility for this somewhere, since > tokenize.detect_encoding() is not available in Python 2. > > I've attached an example implementation with tests, which works > in Python 2.7 and 3. Sorry, but this code doesn't match the behaviour of Python interpreter, nor other tools. I suggest to backport tokenize.detect_encoding() (but be aware that the default encoding in Python 2 is ASCII, not UTF-8). From guido at python.org Thu Mar 17 10:55:51 2016 From: guido at python.org (Guido van Rossum) Date: Thu, 17 Mar 2016 07:55:51 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On Thu, Mar 17, 2016 at 5:04 AM, Serhiy Storchaka wrote: >> Should we recommend that everyone use tokenize.detect_encoding()? > > Likely. 
However the interface of tokenize.detect_encoding() is not very > simple. I just found that out yesterday. You have to give it a readline() function, which is cumbersome if all you have is a (byte) string and you don't want to split it on lines just yet. And the readline() function raises SyntaxError when the encoding isn't right. I wish there were a lower-level helper that just took a line and told you what the encoding in it was, if any. Then the rest of the logic can be handled by the caller (including the logic of trying up to two lines). -- --Guido van Rossum (python.org/~guido) From brett at python.org Thu Mar 17 11:37:32 2016 From: brett at python.org (Brett Cannon) Date: Thu, 17 Mar 2016 15:37:32 +0000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On Thu, 17 Mar 2016 at 07:56 Guido van Rossum wrote: > On Thu, Mar 17, 2016 at 5:04 AM, Serhiy Storchaka > wrote: > >> Should we recommend that everyone use tokenize.detect_encoding()? > > > > Likely. However the interface of tokenize.detect_encoding() is not very > > simple. > > I just found that out yesterday. You have to give it a readline() > function, which is cumbersome if all you have is a (byte) string and > you don't want to split it on lines just yet. And the readline() > function raises SyntaxError when the encoding isn't right. I wish > there were a lower-level helper that just took a line and told you > what the encoding in it was, if any. Then the rest of the logic can be > handled by the caller (including the logic of trying up to two lines). > Since this is for mypy my guess is you only want to know the encoding, but if you're simply trying to decode bytes of syntax then importilb.util.decode_source() will handle that for you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Thu Mar 17 12:50:59 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 18:50:59 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On 17.03.16 16:55, Guido van Rossum wrote: > On Thu, Mar 17, 2016 at 5:04 AM, Serhiy Storchaka wrote: >>> Should we recommend that everyone use tokenize.detect_encoding()? >> >> Likely. However the interface of tokenize.detect_encoding() is not very >> simple. > > I just found that out yesterday. You have to give it a readline() > function, which is cumbersome if all you have is a (byte) string and > you don't want to split it on lines just yet. And the readline() > function raises SyntaxError when the encoding isn't right. I wish > there were a lower-level helper that just took a line and told you > what the encoding in it was, if any. Then the rest of the logic can be > handled by the caller (including the logic of trying up to two lines). The simplest way to detect encoding of bytes string: lines = data.splitlines() encoding = tokenize.detect_encoding(iter(lines).__next__)[0] If you don't want to split all data on lines, the most efficient way in Python 3.5 is: encoding = tokenize.detect_encoding(io.BytesIO(data).readline)[0] In Python 3.5 io.BytesIO(data) has constant complexity. In older versions for detecting encoding without copying data or splitting all data on lines you should write line iterator. 
For example: def iterlines(data): start = 0 while True: end = data.find(b'\n', start) + 1 if not end: break yield data[start:end] start = end yield data[start:] encoding = tokenize.detect_encoding(iterlines(data).__next__)[0] or it = (m.group() for m in re.finditer(b'.*\n?', data)) encoding = tokenize.detect_encoding(it.__next__) I don't know what approach is more efficient. From mal at egenix.com Thu Mar 17 13:23:00 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 17 Mar 2016 18:23:00 +0100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> <56EAADB9.8090005@egenix.com> Message-ID: <56EAE7F4.7060801@egenix.com> On 17.03.2016 15:02, Serhiy Storchaka wrote: > On 17.03.16 15:14, M.-A. Lemburg wrote: >> On 17.03.2016 01:29, Guido van Rossum wrote: >>> Should we recommend that everyone use tokenize.detect_encoding()? >> >> I'd prefer a separate utility for this somewhere, since >> tokenize.detect_encoding() is not available in Python 2. >> >> I've attached an example implementation with tests, which works >> in Python 2.7 and 3. > > Sorry, but this code doesn't match the behaviour of Python interpreter, > nor other tools. I suggest to backport tokenize.detect_encoding() (but > be aware that the default encoding in Python 2 is ASCII, not UTF-8). Yes, I got the default for Python 3 wrong. I'll fix that. Thanks for the note. What other aspects are different than what Python implements ? -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Mar 17 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-03-07: Released eGenix pyOpenSSL 0.13.14 ... http://egenix.com/go89 2016-02-19: Released eGenix PyRun 2.1.2 ... http://egenix.com/go88 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From storchaka at gmail.com Thu Mar 17 13:53:07 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 19:53:07 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56EAE7F4.7060801@egenix.com> References: <56E91263.2050900@egenix.com> <56EAADB9.8090005@egenix.com> <56EAE7F4.7060801@egenix.com> Message-ID: On 17.03.16 19:23, M.-A. Lemburg wrote: > On 17.03.2016 15:02, Serhiy Storchaka wrote: >> On 17.03.16 15:14, M.-A. Lemburg wrote: >>> On 17.03.2016 01:29, Guido van Rossum wrote: >>>> Should we recommend that everyone use tokenize.detect_encoding()? >>> >>> I'd prefer a separate utility for this somewhere, since >>> tokenize.detect_encoding() is not available in Python 2. >>> >>> I've attached an example implementation with tests, which works >>> in Python 2.7 and 3. >> >> Sorry, but this code doesn't match the behaviour of Python interpreter, >> nor other tools. I suggest to backport tokenize.detect_encoding() (but >> be aware that the default encoding in Python 2 is ASCII, not UTF-8). > > Yes, I got the default for Python 3 wrong. I'll fix that. Thanks > for the note. > > What other aspects are different than what Python implements ? 1. 
If there is a BOM and coding cookie, the source encoding is "utf-8-sig". 2. If there is a BOM and coding cookie is not 'utf-8', this is an error. 3. If the first line is not blank or comment line, the coding cookie is not searched in the second line. 4. Encoding name should be canonized. "UTF8", "utf8", "utf_8" and "utf-8" is the same encoding (and all are changed to "utf-8-sig" with BOM). 5. There isn't the limit of 400 bytes. Actually there is a bug with handling long lines in current code, but even with this bug the limit is larger. 6. I made a mistake in the regular expression, missed the underscore. tokenize.detect_encoding() is the closest imitation of the behavior of Python interpreter. From brett at python.org Thu Mar 17 14:19:14 2016 From: brett at python.org (Brett Cannon) Date: Thu, 17 Mar 2016 18:19:14 +0000 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: Where did this PEP leave off? Anything blocking its acceptance? On Sat, 13 Feb 2016 at 00:49 Georg Brandl wrote: > Hi all, > > after talking to Guido and Serhiy we present the next revision > of this PEP. It is a compromise that we are all happy with, > and a relatively restricted rule that makes additions to PEP 8 > basically unnecessary. > > I think the discussion has shown that supporting underscores in > the from-string constructors is valuable, therefore this is now > added to the specification section. > > The remaining open question is about the reverse direction: do > we want a string formatting modifier that adds underscores as > thousands separators? > > cheers, > Georg > > ----------------------------------------------------------------- > > PEP: 515 > Title: Underscores in Numeric Literals > Version: $Revision$ > Last-Modified: $Date$ > Author: Georg Brandl, Serhiy Storchaka > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 10-Feb-2016 > Python-Version: 3.6 > Post-History: 10-Feb-2016, 11-Feb-2016 > > Abstract and Rationale > ====================== > > This PEP proposes to extend Python's syntax and number-from-string > constructors so that underscores can be used as visual separators for > digit grouping purposes in integral, floating-point and complex number > literals. > > This is a common feature of other modern languages, and can aid > readability of long literals, or literals whose value should clearly > separate into parts, such as bytes or words in hexadecimal notation. > > Examples:: > > # grouping decimal numbers by thousands > amount = 10_000_000.0 > > # grouping hexadecimal addresses by words > addr = 0xDEAD_BEEF > > # grouping bits into nibbles in a binary literal > flags = 0b_0011_1111_0100_1110 > > # same, for string conversions > flags = int('0b_1111_0000', 2) > > > Specification > ============= > > The current proposal is to allow one underscore between digits, and > after base specifiers in numeric literals. The underscores have no > semantic meaning, and literals are parsed as if the underscores were > absent. 
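(For concreteness, a few literals the rule above would and would not
accept -- expected behaviour under the proposal, not something current
CPython does:

    1_000_000        # accepted: one underscore between digits
    0x_FF_FF         # accepted: also allowed right after a base specifier
    1_000_000.000_1  # accepted: works in the fractional part too
    int('1_000')     # accepted: the constructors follow the same rule
    1__000           # rejected: consecutive underscores
    100_             # rejected: trailing underscore
    10_.5            # rejected: underscore not between digits
)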
> > Literal Grammar > --------------- > > The production list for integer literals would therefore look like > this:: > > integer: decinteger | bininteger | octinteger | hexinteger > decinteger: nonzerodigit (["_"] digit)* | "0" (["_"] "0")* > bininteger: "0" ("b" | "B") (["_"] bindigit)+ > octinteger: "0" ("o" | "O") (["_"] octdigit)+ > hexinteger: "0" ("x" | "X") (["_"] hexdigit)+ > nonzerodigit: "1"..."9" > digit: "0"..."9" > bindigit: "0" | "1" > octdigit: "0"..."7" > hexdigit: digit | "a"..."f" | "A"..."F" > > For floating-point and complex literals:: > > floatnumber: pointfloat | exponentfloat > pointfloat: [digitpart] fraction | digitpart "." > exponentfloat: (digitpart | pointfloat) exponent > digitpart: digit (["_"] digit)* > fraction: "." digitpart > exponent: ("e" | "E") ["+" | "-"] digitpart > imagnumber: (floatnumber | digitpart) ("j" | "J") > > Constructors > ------------ > > Following the same rules for placement, underscores will be allowed in > the following constructors: > > - ``int()`` (with any base) > - ``float()`` > - ``complex()`` > - ``Decimal()`` > > > Prior Art > ========= > > Those languages that do allow underscore grouping implement a large > variety of rules for allowed placement of underscores. In cases where > the language spec contradicts the actual behavior, the actual behavior > is listed. ("single" or "multiple" refer to allowing runs of > consecutive underscores.) > > * Ada: single, only between digits [8]_ > * C# (open proposal for 7.0): multiple, only between digits [6]_ > * C++14: single, between digits (different separator chosen) [1]_ > * D: multiple, anywhere, including trailing [2]_ > * Java: multiple, only between digits [7]_ > * Julia: single, only between digits (but not in float exponent parts) > [9]_ > * Perl 5: multiple, basically anywhere, although docs say it's > restricted to one underscore between digits [3]_ > * Ruby: single, only between digits (although docs say "anywhere") > [10]_ > * Rust: multiple, anywhere, except for between exponent "e" and digits > [4]_ > * Swift: multiple, between digits and trailing (although textual > description says only "between digits") [5]_ > > > Alternative Syntax > ================== > > Underscore Placement Rules > -------------------------- > > Instead of the relatively strict rule specified above, the use of > underscores could be limited. As we seen from other languages, common > rules include: > > * Only one consecutive underscore allowed, and only between digits. > * Multiple consecutive underscores allowed, but only between digits. > * Multiple consecutive underscores allowed, in most positions except > for the start of the literal, or special positions like after a > decimal point. > > The syntax in this PEP has ultimately been selected because it covers > the common use cases, and does not allow for syntax that would have to > be discouraged in style guides anyway. > > A less common rule would be to allow underscores only every N digits > (where N could be 3 for decimal literals, or 4 for hexadecimal ones). > This is unnecessarily restrictive, especially considering the > separator placement is different in different cultures. > > Different Separators > -------------------- > > A proposed alternate syntax was to use whitespace for grouping. > Although strings are a precedent for combining adjoining literals, the > behavior can lead to unexpected effects which are not possible with > underscores. 
Also, no other language is known to use this rule, > except for languages that generally disregard any whitespace. > > C++14 introduces apostrophes for grouping (because underscores > introduce ambiguity with user-defined literals), which is not > considered because of the use in Python's string literals. [1]_ > > > Open Proposals > ============== > > It has been proposed [11]_ to extend the number-to-string formatting > language to allow ``_`` as a thousans separator, where currently only > ``,`` is supported. This could be used to easily generate code with > more readable literals. > > > Implementation > ============== > > A preliminary patch that implements the specification given above has > been posted to the issue tracker. [12]_ > > > References > ========== > > .. [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3499.html > > .. [2] http://dlang.org/spec/lex.html#integerliteral > > .. [3] http://perldoc.perl.org/perldata.html#Scalar-value-constructors > > .. [4] http://doc.rust-lang.org/reference.html#number-literals > > .. [5] > > https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/LexicalStructure.html > > .. [6] https://github.com/dotnet/roslyn/issues/216 > > .. [7] > > https://docs.oracle.com/javase/7/docs/technotes/guides/language/underscores-literals.html > > .. [8] http://archive.adaic.com/standards/83lrm/html/lrm-02-04.html#2.4 > > .. [9] > > http://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/ > > .. [10] > http://ruby-doc.org/core-2.3.0/doc/syntax/literals_rdoc.html#label-Numbers > > .. [11] > https://mail.python.org/pipermail/python-dev/2016-February/143283.html > > .. [12] http://bugs.python.org/issue26331 > > > Copyright > ========= > > This document has been placed in the public domain. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mal at egenix.com Thu Mar 17 14:35:02 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 17 Mar 2016 19:35:02 +0100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> <56EAADB9.8090005@egenix.com> <56EAE7F4.7060801@egenix.com> Message-ID: <56EAF8D6.1050306@egenix.com> On 17.03.2016 18:53, Serhiy Storchaka wrote: > On 17.03.16 19:23, M.-A. Lemburg wrote: >> On 17.03.2016 15:02, Serhiy Storchaka wrote: >>> On 17.03.16 15:14, M.-A. Lemburg wrote: >>>> On 17.03.2016 01:29, Guido van Rossum wrote: >>>>> Should we recommend that everyone use tokenize.detect_encoding()? >>>> >>>> I'd prefer a separate utility for this somewhere, since >>>> tokenize.detect_encoding() is not available in Python 2. >>>> >>>> I've attached an example implementation with tests, which works >>>> in Python 2.7 and 3. >>> >>> Sorry, but this code doesn't match the behaviour of Python interpreter, >>> nor other tools. I suggest to backport tokenize.detect_encoding() (but >>> be aware that the default encoding in Python 2 is ASCII, not UTF-8). >> >> Yes, I got the default for Python 3 wrong. I'll fix that. Thanks >> for the note. >> >> What other aspects are different than what Python implements ? > > 1. If there is a BOM and coding cookie, the source encoding is "utf-8-sig". 
Ok, that makes sense (even though it's not mandated by the PEP; the utf-8-sig codec didn't exist yet). > 2. If there is a BOM and coding cookie is not 'utf-8', this is an error. It's an error for Python, but why should a detection function always raise an error for this case ? It would probably be a good idea to have an errors parameter to leave this to the use to decide. Same for unknown encodings. > 3. If the first line is not blank or comment line, the coding cookie is > not searched in the second line. Hmm, the PEP does allow having the coding cookie in the second line, even if the first line is not a comment. Perhaps that's not really needed. > 4. Encoding name should be canonized. "UTF8", "utf8", "utf_8" and > "utf-8" is the same encoding (and all are changed to "utf-8-sig" with BOM). Well, that's cosmetics :-) The codec system will take care of this when needed. > 5. There isn't the limit of 400 bytes. Actually there is a bug with > handling long lines in current code, but even with this bug the limit is > larger. I think it's a reasonable limit, since shebang lines may only be 127 long on at least Linux (and probably several other Unix systems as well). But just in case, I made this configurable :-) > 6. I made a mistake in the regular expression, missed the underscore. I added it. > tokenize.detect_encoding() is the closest imitation of the behavior of > Python interpreter. Probably, but that doesn't us on Python 2, right ? I'll upload the script to github later today or tomorrow to continue development. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Mar 17 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-03-07: Released eGenix pyOpenSSL 0.13.14 ... http://egenix.com/go89 2016-02-19: Released eGenix PyRun 2.1.2 ... http://egenix.com/go88 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From guido at python.org Thu Mar 17 15:11:04 2016 From: guido at python.org (Guido van Rossum) Date: Thu, 17 Mar 2016 12:11:04 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On Thu, Mar 17, 2016 at 9:50 AM, Serhiy Storchaka wrote: > On 17.03.16 16:55, Guido van Rossum wrote: >> >> On Thu, Mar 17, 2016 at 5:04 AM, Serhiy Storchaka >> wrote: >>>> >>>> Should we recommend that everyone use tokenize.detect_encoding()? >>> >>> >>> Likely. However the interface of tokenize.detect_encoding() is not very >>> simple. >> >> >> I just found that out yesterday. You have to give it a readline() >> function, which is cumbersome if all you have is a (byte) string and >> you don't want to split it on lines just yet. And the readline() >> function raises SyntaxError when the encoding isn't right. I wish >> there were a lower-level helper that just took a line and told you >> what the encoding in it was, if any. Then the rest of the logic can be >> handled by the caller (including the logic of trying up to two lines). 
> > > The simplest way to detect encoding of bytes string: > > lines = data.splitlines() > encoding = tokenize.detect_encoding(iter(lines).__next__)[0] This will raise SyntaxError if the encoding is unknown. That needs to be caught in mypy's case and then it needs to get the line number from the exception. I tried this and it was too painful, so now I've just changed the regex that mypy uses to use non-eager matching (https://github.com/python/mypy/commit/b291998a46d580df412ed28af1ba1658446b9fe5). > If you don't want to split all data on lines, the most efficient way in > Python 3.5 is: > > encoding = tokenize.detect_encoding(io.BytesIO(data).readline)[0] > > In Python 3.5 io.BytesIO(data) has constant complexity. Ditto with the SyntaxError though. > In older versions for detecting encoding without copying data or splitting > all data on lines you should write line iterator. For example: > > def iterlines(data): > start = 0 > while True: > end = data.find(b'\n', start) + 1 > if not end: > break > yield data[start:end] > start = end > yield data[start:] > > encoding = tokenize.detect_encoding(iterlines(data).__next__)[0] > > or > > it = (m.group() for m in re.finditer(b'.*\n?', data)) > encoding = tokenize.detect_encoding(it.__next__) > > I don't know what approach is more efficient. Having my own regex was simpler. :-( -- --Guido van Rossum (python.org/~guido) From storchaka at gmail.com Thu Mar 17 16:03:09 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 22:03:09 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On 17.03.16 21:11, Guido van Rossum wrote: > I tried this and it was too painful, so now I've just > changed the regex that mypy uses to use non-eager matching > (https://github.com/python/mypy/commit/b291998a46d580df412ed28af1ba1658446b9fe5). \s* matches newlines. {0,1}? is the same as ??. From storchaka at gmail.com Thu Mar 17 16:09:14 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 17 Mar 2016 22:09:14 +0200 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: On 17.03.16 21:11, Guido van Rossum wrote: > This will raise SyntaxError if the encoding is unknown. That needs to > be caught in mypy's case and then it needs to get the line number from > the exception. Good point. "lineno" and "offset" attributes of SyntaxError is set to None by tokenize.detect_encoding() and to 0 by CPython interpreter. They should be set to useful values. From michael at felt.demon.nl Thu Mar 17 18:31:20 2016 From: michael at felt.demon.nl (Michael Felt) Date: Thu, 17 Mar 2016 23:31:20 +0100 Subject: [Python-Dev] bitfields - short - and xlc compiler Message-ID: <56EB3038.60402@felt.demon.nl> a) hope this is not something you expect to be on -list, if so - my apologies! Getting this message (here using c99 as compiler name, but same issue with xlc as compiler name) c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude -I./Include -I/data/prj/aixtools/python/python-2.7.11.2/Include -I/data/prj/aixtools/python/python-2.7.11.2 -c /data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field M must be of type signed int, unsigned int or int. 
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field N must be of type signed int, unsigned int or int. "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field O must be of type signed int, unsigned int or int. "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field P must be of type signed int, unsigned int or int. "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field Q must be of type signed int, unsigned int or int. "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field R must be of type signed int, unsigned int or int. "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", line 387.5: 1506-009 (S) Bit field S must be of type signed int, unsigned int or int. for: struct BITS { int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; }; in short xlC v11 does not like short (xlC v7 might have accepted it, but "32-bit machines were common then". I am guessing that 16-bit is not well liked on 64-bit hw now. reference for xlC v7, where short was (apparently) still accepted: http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm I am taking this is from xlC v7 documentation from the URL, not because I know it personally. So - my question: if "short" is unacceptable for POWER, or maybe only xlC (not tried with gcc) - how terrible is this, and is it possible to adjust the test so - the test is accurate? I am going to modify the test code so it is struct BITS { signed int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; }; And see what happens - BUT - what does this have for impact on python - assuming that "short" bitfields are not supported? p.s. not submitting this a bug (now) as it may just be that "you" consider it a bug in xlC to not support (signed) short bit fields. p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I was trying to force them to signed with -qbitfields=signed - and I still got messages. So, going back to defaults. From v+python at g.nevcal.com Thu Mar 17 19:54:35 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 17 Mar 2016 16:54:35 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E8FE84.5070108@g.nevcal.com> <56E90F3C.4000701@g.nevcal.com> Message-ID: <56EB43BB.5080504@g.nevcal.com> On 3/16/2016 12:59 AM, Serhiy Storchaka wrote: > On 16.03.16 09:46, Glenn Linderman wrote: >> On 3/16/2016 12:09 AM, Serhiy Storchaka wrote: >>> On 16.03.16 08:34, Glenn Linderman wrote: >>>> From the PEP 263: >>>> >>>>> More precisely, the first or second line must match the regular >>>>> expression "coding[:=]\s*([-\w.]+)". The first group of this >>>>> expression is then interpreted as encoding name. If the encoding >>>>> is unknown to Python, an error is raised during compilation. >>>>> There >>>>> must not be any Python statement on the line that contains the >>>>> encoding declaration. >>>> >>>> Clearly the regular expression would only match the first of multiple >>>> cookies on the same line, so the first one should always win... but >>>> there should only be one, from the first PEP quote "a magic comment". 
>>> >>> "The first group of this expression" means the first regular >>> expression group. Only the part between parenthesis "([-\w.]+)" is >>> interpreted as encoding name, not all expression. >> >> Sure. But there is no mention anywhere in the PEP of more than one >> being legal: just more than one position for it, EITHER line 1 or line >> 2. So while the regular expression mentioned is not anchored, to allow >> variation in syntax between emacs and vim, "must match the regular >> expression" doesn't imply "several times", and when searching for a >> regular expression that might not be anchored, one typically expects to >> find the first. > > Actually "must match the regular expression" is not correct, because > re.match() implies anchoring at the start. I have proposed more > correct regular expression in other branch of this thread. "match" doesn't imply anchoring at the start. "re.match()" does (and as a result is very confusing to newbies to Python re, that have used other regexp systems). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Mar 17 20:54:12 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 17 Mar 2016 17:54:12 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56EB43BB.5080504@g.nevcal.com> References: <56E8FE84.5070108@g.nevcal.com> <56E90F3C.4000701@g.nevcal.com> <56EB43BB.5080504@g.nevcal.com> Message-ID: <56EB51B4.6030105@stoneleaf.us> On 03/17/2016 04:54 PM, Glenn Linderman wrote: > On 3/16/2016 12:59 AM, Serhiy Storchaka wrote: >> Actually "must match the regular expression" is not correct, because >> re.match() implies anchoring at the start. I have proposed more >> correct regular expression in other branch of this thread. > > "match" doesn't imply anchoring at the start. "re.match()" does (and as > a result is very confusing to newbies to Python re, that have used other > regexp systems). It still confuses me from time to time. :( -- ~Ethan~ From michael at felt.demon.nl Thu Mar 17 20:56:51 2016 From: michael at felt.demon.nl (Michael Felt) Date: Fri, 18 Mar 2016 01:56:51 +0100 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <56EB3038.60402@felt.demon.nl> References: <56EB3038.60402@felt.demon.nl> Message-ID: <56EB5253.10807@felt.demon.nl> Update: Is this going to be impossible? test_short fails om AIX when using xlC in any case. How terrible is this? ====================================================================== FAIL: test_shorts (ctypes.test.test_bitfields.C_Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", line 48, in test_shorts self.assertEqual((name, i, getattr(b, name)), (name, i, func(byref(b), name))) AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) First differing element 2: -1 1 - ('M', 1, -1) ? 
- + ('M', 1, 1) ---------------------------------------------------------------------- Ran 440 tests in 1.538s FAILED (failures=1, skipped=91) Traceback (most recent call last): File "./Lib/test/test_ctypes.py", line 15, in test_main() File "./Lib/test/test_ctypes.py", line 12, in test_main run_unittest(unittest.TestSuite(suites)) File "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", line 1428, in run_unittest _run_suite(suite) File "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", line 1411, in _run_suite raise TestFailed(err) test.test_support.TestFailed: Traceback (most recent call last): File "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", line 48, in test_shorts self.assertEqual((name, i, getattr(b, name)), (name, i, func(byref(b), name))) AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) First differing element 2: -1 1 - ('M', 1, -1) ? - + ('M', 1, 1) On 17-Mar-16 23:31, Michael Felt wrote: > a) hope this is not something you expect to be on -list, if so - my > apologies! > > Getting this message (here using c99 as compiler name, but same issue > with xlc as compiler name) > c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude > -I./Include -I/data/prj/aixtools/python/python-2.7.11.2/Include > -I/data/prj/aixtools/python/python-2.7.11.2 -c > /data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c > -o > build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field M must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field N must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field O must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field P must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field Q must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field R must be of type signed int, > unsigned int or int. > "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", > line 387.5: 1506-009 (S) Bit field S must be of type signed int, > unsigned int or int. > > for: > > struct BITS { > int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; > short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; > }; > > in short xlC v11 does not like short (xlC v7 might have accepted it, > but "32-bit machines were common then". I am guessing that 16-bit is > not well liked on 64-bit hw now. > > reference for xlC v7, where short was (apparently) still accepted: > http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm > > > I am taking this is from xlC v7 documentation from the URL, not > because I know it personally. > > So - my question: if "short" is unacceptable for POWER, or maybe only > xlC (not tried with gcc) - how terrible is this, and is it possible to > adjust the test so - the test is accurate? 
> > I am going to modify the test code so it is > struct BITS { > signed int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; > unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; > }; > > And see what happens - BUT - what does this have for impact on python > - assuming that "short" bitfields are not supported? > > p.s. not submitting this a bug (now) as it may just be that "you" > consider it a bug in xlC to not support (signed) short bit fields. > > p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I > was trying to force them to signed with -qbitfields=signed - and I > still got messages. So, going back to defaults. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/register%40felt.demon.nl > From python at mrabarnett.plus.com Thu Mar 17 21:35:05 2016 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 18 Mar 2016 01:35:05 +0000 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <56EB5253.10807@felt.demon.nl> References: <56EB3038.60402@felt.demon.nl> <56EB5253.10807@felt.demon.nl> Message-ID: <56EB5B49.80706@mrabarnett.plus.com> On 2016-03-18 00:56, Michael Felt wrote: > Update: > Is this going to be impossible? > From what I've been able to find out, the C89 standard limits bitfields to int, signed int and unsigned int, and the C99 standard added _Bool, although some compilers allow other integer types too. It looks like your compiler doesn't allow those additional types. > test_short fails om AIX when using xlC in any case. How terrible is this? > > ====================================================================== > FAIL: test_shorts (ctypes.test.test_bitfields.C_Test) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", > line 48, in test_shorts > self.assertEqual((name, i, getattr(b, name)), (name, i, > func(byref(b), name))) > AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) > > First differing element 2: > -1 > 1 > > - ('M', 1, -1) > ? - > > + ('M', 1, 1) > > ---------------------------------------------------------------------- > Ran 440 tests in 1.538s > > FAILED (failures=1, skipped=91) > Traceback (most recent call last): > File "./Lib/test/test_ctypes.py", line 15, in > test_main() > File "./Lib/test/test_ctypes.py", line 12, in test_main > run_unittest(unittest.TestSuite(suites)) > File > "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", > line 1428, in run_unittest > _run_suite(suite) > File > "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", > line 1411, in _run_suite > raise TestFailed(err) > test.test_support.TestFailed: Traceback (most recent call last): > File > "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", > line 48, in test_shorts > self.assertEqual((name, i, getattr(b, name)), (name, i, > func(byref(b), name))) > AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) > > First differing element 2: > -1 > 1 > > - ('M', 1, -1) > ? - > > + ('M', 1, 1) > > > > > On 17-Mar-16 23:31, Michael Felt wrote: >> a) hope this is not something you expect to be on -list, if so - my >> apologies! 
>> >> Getting this message (here using c99 as compiler name, but same issue >> with xlc as compiler name) >> c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude >> -I./Include -I/data/prj/aixtools/python/python-2.7.11.2/Include >> -I/data/prj/aixtools/python/python-2.7.11.2 -c >> /data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c >> -o >> build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field M must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field N must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field O must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field P must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field Q must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field R must be of type signed int, >> unsigned int or int. >> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >> line 387.5: 1506-009 (S) Bit field S must be of type signed int, >> unsigned int or int. >> >> for: >> >> struct BITS { >> int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; >> short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; >> }; >> >> in short xlC v11 does not like short (xlC v7 might have accepted it, >> but "32-bit machines were common then". I am guessing that 16-bit is >> not well liked on 64-bit hw now. >> >> reference for xlC v7, where short was (apparently) still accepted: >> http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm >> >> >> I am taking this is from xlC v7 documentation from the URL, not >> because I know it personally. >> >> So - my question: if "short" is unacceptable for POWER, or maybe only >> xlC (not tried with gcc) - how terrible is this, and is it possible to >> adjust the test so - the test is accurate? >> >> I am going to modify the test code so it is >> struct BITS { >> signed int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; >> unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; >> }; >> >> And see what happens - BUT - what does this have for impact on python >> - assuming that "short" bitfields are not supported? >> >> p.s. not submitting this a bug (now) as it may just be that "you" >> consider it a bug in xlC to not support (signed) short bit fields. >> >> p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I >> was trying to force them to signed with -qbitfields=signed - and I >> still got messages. So, going back to defaults. 
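For readers following the thread, the Python side of the failing test declares the same layout through ctypes, where the third element of each _fields_ tuple is the bit width. A minimal sketch (modelled on Lib/ctypes/test/test_bitfields.py, not copied from it) looks like this:

    from ctypes import Structure, c_int, c_short

    class BITS(Structure):
        _fields_ = [("A", c_int, 1), ("B", c_int, 2), ("C", c_int, 3),
                    ("D", c_int, 4), ("E", c_int, 5), ("F", c_int, 6),
                    ("G", c_int, 7), ("H", c_int, 8), ("I", c_int, 9),
                    ("M", c_short, 1), ("N", c_short, 2), ("O", c_short, 3),
                    ("P", c_short, 4), ("Q", c_short, 5), ("R", c_short, 6),
                    ("S", c_short, 7)]

    b = BITS()
    b.M = 1
    print(b.M)   # -1 if the 1-bit field is treated as signed,
                 #  1 if it is treated as unsigned -- the mismatch seen
                 #  in the test_shorts assertion quoted above

That is why the failing assertion compares getattr(b, name) with the value returned by the C helper function: the two sides disagree whenever the compiler and ctypes make different signedness choices for short bit fields.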
>> From abarnert at yahoo.com Fri Mar 18 00:57:38 2016 From: abarnert at yahoo.com (Andrew Barnert) Date: Thu, 17 Mar 2016 21:57:38 -0700 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <56EB5B49.80706@mrabarnett.plus.com> References: <56EB3038.60402@felt.demon.nl> <56EB5253.10807@felt.demon.nl> <56EB5B49.80706@mrabarnett.plus.com> Message-ID: <036959DA-7006-4FF0-A984-7840E543CB7E@yahoo.com> On Mar 17, 2016, at 18:35, MRAB wrote: > >> On 2016-03-18 00:56, Michael Felt wrote: >> Update: >> Is this going to be impossible? > From what I've been able to find out, the C89 standard limits bitfields to int, signed int and unsigned int, and the C99 standard added _Bool, although some compilers allow other integer types too. It looks like your compiler doesn't allow those additional types. Yeah, C99 (6.7.2.1) allows "a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type", and same for C11. This means that a compiler could easily allow an implementation-defined type that's identical to and interconvertible with short, say "i16", to be used in bitfields, but not short itself. And yet, gcc still allows short "even in strictly conforming mode" (4.9), and it looks like Clang and Intel do the same. Meanwhile, MSVC specifically says it's illegal ("The type-specifier for the declarator must be unsigned int, signed int, or int") but then defines the semantics (you can't have a 17-bit short, bit fields act as the underlying type when accessed, alignment is forced to a boundary appropriate for the underlying type). They do mention that allowing char and long types is a Microsoft extension, but still nothing about short, even though it's used in most of the examples on the page. Anyway, is the question what ctypes should do? If a platform's compiler allows "short M: 1", especially if it has potentially different alignment than "int M: 1", ctypes on that platform had better make ("M", c_short, 1) match the former, right? So it sounds like you need some configure switch to test that your compiler doesn't allow short bit fields, so your ctypes build at least skips that part of _ctypes_test.c and test_bitfields.py, and maybe even doesn't allow them in Python code. >> test_short fails om AIX when using xlC in any case. How terrible is this? >> >> ====================================================================== >> FAIL: test_shorts (ctypes.test.test_bitfields.C_Test) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", >> line 48, in test_shorts >> self.assertEqual((name, i, getattr(b, name)), (name, i, >> func(byref(b), name))) >> AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) >> >> First differing element 2: >> -1 >> 1 >> >> - ('M', 1, -1) >> ? 
- >> >> + ('M', 1, 1) >> >> ---------------------------------------------------------------------- >> Ran 440 tests in 1.538s >> >> FAILED (failures=1, skipped=91) >> Traceback (most recent call last): >> File "./Lib/test/test_ctypes.py", line 15, in >> test_main() >> File "./Lib/test/test_ctypes.py", line 12, in test_main >> run_unittest(unittest.TestSuite(suites)) >> File >> "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", >> line 1428, in run_unittest >> _run_suite(suite) >> File >> "/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", >> line 1411, in _run_suite >> raise TestFailed(err) >> test.test_support.TestFailed: Traceback (most recent call last): >> File >> "/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", >> line 48, in test_shorts >> self.assertEqual((name, i, getattr(b, name)), (name, i, >> func(byref(b), name))) >> AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1) >> >> First differing element 2: >> -1 >> 1 >> >> - ('M', 1, -1) >> ? - >> >> + ('M', 1, 1) >> >> >> >> >>> On 17-Mar-16 23:31, Michael Felt wrote: >>> a) hope this is not something you expect to be on -list, if so - my >>> apologies! >>> >>> Getting this message (here using c99 as compiler name, but same issue >>> with xlc as compiler name) >>> c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude >>> -I./Include -I/data/prj/aixtools/python/python-2.7.11.2/Include >>> -I/data/prj/aixtools/python/python-2.7.11.2 -c >>> /data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c >>> -o >>> build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field M must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field N must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field O must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field P must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field Q must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field R must be of type signed int, >>> unsigned int or int. >>> "/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", >>> line 387.5: 1506-009 (S) Bit field S must be of type signed int, >>> unsigned int or int. >>> >>> for: >>> >>> struct BITS { >>> int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; >>> short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; >>> }; >>> >>> in short xlC v11 does not like short (xlC v7 might have accepted it, >>> but "32-bit machines were common then". I am guessing that 16-bit is >>> not well liked on 64-bit hw now. >>> >>> reference for xlC v7, where short was (apparently) still accepted: >>> http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm >>> >>> >>> I am taking this is from xlC v7 documentation from the URL, not >>> because I know it personally. 
>>> >>> So - my question: if "short" is unacceptable for POWER, or maybe only >>> xlC (not tried with gcc) - how terrible is this, and is it possible to >>> adjust the test so - the test is accurate? >>> >>> I am going to modify the test code so it is >>> struct BITS { >>> signed int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9; >>> unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7; >>> }; >>> >>> And see what happens - BUT - what does this have for impact on python >>> - assuming that "short" bitfields are not supported? >>> >>> p.s. not submitting this a bug (now) as it may just be that "you" >>> consider it a bug in xlC to not support (signed) short bit fields. >>> >>> p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I >>> was trying to force them to signed with -qbitfields=signed - and I >>> still got messages. So, going back to defaults. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/abarnert%40yahoo.com From status at bugs.python.org Fri Mar 18 13:08:40 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 18 Mar 2016 18:08:40 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160318170840.8F81A568C1@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-03-11 - 2016-03-18) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5459 ( +5) closed 32885 (+43) total 38344 (+48) Open issues with patches: 2375 Issues opened (36) ================== #15660: Clarify 0 prefix for width specifier in str.format doc, http://bugs.python.org/issue15660 reopened by terry.reedy #22758: Regression in Python 3.2 cookie parsing http://bugs.python.org/issue22758 reopened by berker.peksag #25934: ICC compiler: ICC treats denormal floating point numbers as 0. 
http://bugs.python.org/issue25934 reopened by zach.ware #26270: Support for read()/write()/select() on asyncio http://bugs.python.org/issue26270 reopened by gvanrossum #26481: unittest discovery process not working without .py source file http://bugs.python.org/issue26481 reopened by rbcollins #26541: Add stop_after parameter to setup() http://bugs.python.org/issue26541 opened by memeplex #26543: imaplib noop Debug http://bugs.python.org/issue26543 opened by Stephen.Evans #26544: platform.libc_ver() returns incorrect version number http://bugs.python.org/issue26544 opened by Thomas.Waldmann #26545: os.walk is limited by python's recursion limit http://bugs.python.org/issue26545 opened by Thomas.Waldmann #26546: Provide translated french translation on docs.python.org http://bugs.python.org/issue26546 opened by sizeof #26547: Undocumented use of the term dictproxy in vars() documentation http://bugs.python.org/issue26547 opened by sizeof #26549: co_stacksize is calculated from unoptimized code http://bugs.python.org/issue26549 opened by ztane #26550: documentation minor issue : "Step back: WSGI" section from "HO http://bugs.python.org/issue26550 opened by Alejandro Soini #26552: Failing ensure_future still creates a Task http://bugs.python.org/issue26552 opened by gordon #26553: Write HTTP in uppercase http://bugs.python.org/issue26553 opened by Sudheer Satyanarayana #26554: PC\bdist_wininst\install.c: Missing call to fclose() http://bugs.python.org/issue26554 opened by maddin200 #26556: Update expat to 2.2.1 http://bugs.python.org/issue26556 opened by christian.heimes #26557: dictviews methods not present on shelve objects http://bugs.python.org/issue26557 opened by Michael Crouch #26559: logging.handlers.MemoryHandler flushes on shutdown but not rem http://bugs.python.org/issue26559 opened by David Escott #26560: Error in assertion in wsgiref.handlers.BaseHandler.start_respo http://bugs.python.org/issue26560 opened by inglesp #26565: [ctypes] Add value attribute to non basic pointers. 
http://bugs.python.org/issue26565 opened by memeplex #26566: Failures on FreeBSD CURRENT buildbot http://bugs.python.org/issue26566 opened by haypo #26567: ResourceWarning: Use tracemalloc to display the traceback wher http://bugs.python.org/issue26567 opened by haypo #26568: Add a new warnings.showmsg() function taking a warnings.Warnin http://bugs.python.org/issue26568 opened by haypo #26571: turtle regression in 3.5 http://bugs.python.org/issue26571 opened by Ellison Marks #26574: replace_interleave can be optimized for single character byte http://bugs.python.org/issue26574 opened by Josh Snider #26576: Tweak wording of decorator docos http://bugs.python.org/issue26576 opened by Rosuav #26577: inspect.getclosurevars returns incorrect variable when using c http://bugs.python.org/issue26577 opened by Ryan Fox #26578: Bad BaseHTTPRequestHandler response when using HTTP/0.9 http://bugs.python.org/issue26578 opened by xiang.zhang #26579: Support pickling slots in subclasses of common classes http://bugs.python.org/issue26579 opened by serhiy.storchaka #26581: Double coding cookie http://bugs.python.org/issue26581 opened by serhiy.storchaka #26582: asyncio documentation links to wrong CancelledError http://bugs.python.org/issue26582 opened by awilfox #26584: pyclbr module needs to be more flexible on loader support http://bugs.python.org/issue26584 opened by eric.snow #26585: Use html.escape to replace _quote_html in http.server http://bugs.python.org/issue26585 opened by xiang.zhang #26586: Simple enhancement to BaseHTTPRequestHandler http://bugs.python.org/issue26586 opened by xiang.zhang #26587: Possible duplicate entries in sys.path if .pth files are used http://bugs.python.org/issue26587 opened by tds333 Most recent 15 issues with no replies (15) ========================================== #26584: pyclbr module needs to be more flexible on loader support http://bugs.python.org/issue26584 #26582: asyncio documentation links to wrong CancelledError http://bugs.python.org/issue26582 #26581: Double coding cookie http://bugs.python.org/issue26581 #26579: Support pickling slots in subclasses of common classes http://bugs.python.org/issue26579 #26577: inspect.getclosurevars returns incorrect variable when using c http://bugs.python.org/issue26577 #26571: turtle regression in 3.5 http://bugs.python.org/issue26571 #26565: [ctypes] Add value attribute to non basic pointers. 
http://bugs.python.org/issue26565 #26559: logging.handlers.MemoryHandler flushes on shutdown but not rem http://bugs.python.org/issue26559 #26556: Update expat to 2.2.1 http://bugs.python.org/issue26556 #26554: PC\bdist_wininst\install.c: Missing call to fclose() http://bugs.python.org/issue26554 #26550: documentation minor issue : "Step back: WSGI" section from "HO http://bugs.python.org/issue26550 #26546: Provide translated french translation on docs.python.org http://bugs.python.org/issue26546 #26543: imaplib noop Debug http://bugs.python.org/issue26543 #26541: Add stop_after parameter to setup() http://bugs.python.org/issue26541 #26539: frozen executables should have an empty path http://bugs.python.org/issue26539 Most recent 15 issues waiting for review (15) ============================================= #26586: Simple enhancement to BaseHTTPRequestHandler http://bugs.python.org/issue26586 #26585: Use html.escape to replace _quote_html in http.server http://bugs.python.org/issue26585 #26581: Double coding cookie http://bugs.python.org/issue26581 #26579: Support pickling slots in subclasses of common classes http://bugs.python.org/issue26579 #26576: Tweak wording of decorator docos http://bugs.python.org/issue26576 #26574: replace_interleave can be optimized for single character byte http://bugs.python.org/issue26574 #26568: Add a new warnings.showmsg() function taking a warnings.Warnin http://bugs.python.org/issue26568 #26567: ResourceWarning: Use tracemalloc to display the traceback wher http://bugs.python.org/issue26567 #26560: Error in assertion in wsgiref.handlers.BaseHandler.start_respo http://bugs.python.org/issue26560 #26553: Write HTTP in uppercase http://bugs.python.org/issue26553 #26547: Undocumented use of the term dictproxy in vars() documentation http://bugs.python.org/issue26547 #26546: Provide translated french translation on docs.python.org http://bugs.python.org/issue26546 #26539: frozen executables should have an empty path http://bugs.python.org/issue26539 #26536: Add the SIO_LOOPBACK_FAST_PATH option to socket.ioctl http://bugs.python.org/issue26536 #26535: Minor typo in the docs for struct.unpack http://bugs.python.org/issue26535 Top 10 most discussed issues (10) ================================= #26530: tracemalloc: add C API to manually track/untrack memory alloca http://bugs.python.org/issue26530 12 msgs #26512: Vocabulary: Using "integral" in library/stdtypes.html http://bugs.python.org/issue26512 8 msgs #26553: Write HTTP in uppercase http://bugs.python.org/issue26553 8 msgs #13305: datetime.strftime("%Y") not consistent for years < 1000 http://bugs.python.org/issue13305 7 msgs #26576: Tweak wording of decorator docos http://bugs.python.org/issue26576 7 msgs #22359: Remove incorrect uses of recursive make http://bugs.python.org/issue22359 4 msgs #22625: When cross-compiling, don???t try to execute binaries http://bugs.python.org/issue22625 4 msgs #23214: BufferedReader.read1(size) signature incompatible with Buffere http://bugs.python.org/issue23214 4 msgs #24959: unittest swallows part of stack trace when raising AssertionEr http://bugs.python.org/issue24959 4 msgs #26585: Use html.escape to replace _quote_html in http.server http://bugs.python.org/issue26585 4 msgs Issues closed (45) ================== #5505: sys.stdin.read() doesn't return after first EOF on Windows http://bugs.python.org/issue5505 closed by berker.peksag #16181: cookielib.http2time raises ValueError for invalid date. 
http://bugs.python.org/issue16181 closed by berker.peksag #17603: AC_LIBOBJ replacement of fileblocks http://bugs.python.org/issue17603 closed by martin.panter #17758: test_site fails when the user does not have a home directory http://bugs.python.org/issue17758 closed by haypo #18320: python installation is broken if prefix is overridden on an in http://bugs.python.org/issue18320 closed by ned.deily #20556: Use specific asserts in threading tests http://bugs.python.org/issue20556 closed by serhiy.storchaka #20589: pathlib.owner() and pathlib.group() raise ImportError on Windo http://bugs.python.org/issue20589 closed by berker.peksag #23606: ctypes.util.find_library("c") no longer makes sense http://bugs.python.org/issue23606 closed by steve.dower #23718: strptime() can produce invalid date with negative year day http://bugs.python.org/issue23718 closed by serhiy.storchaka #24918: Docs layout bug http://bugs.python.org/issue24918 closed by ezio.melotti #25320: unittest loader.py TypeError when code directory contains a so http://bugs.python.org/issue25320 closed by rbcollins #25638: Verify the etree_parse and etree_iterparse benchmarks are work http://bugs.python.org/issue25638 closed by brett.cannon #25687: Error during test case and tearDown http://bugs.python.org/issue25687 closed by ezio.melotti #25959: tkinter - PhotoImage.zoom() causes segfault http://bugs.python.org/issue25959 closed by terry.reedy #26079: Build with Visual Studio 2015 using PlatformToolset=v120 http://bugs.python.org/issue26079 closed by steve.dower #26102: access violation in PyErrFetch if tcur==null in PyGILState_Rel http://bugs.python.org/issue26102 closed by haypo #26247: Document Chrome/Chromium for python2.7 http://bugs.python.org/issue26247 closed by ezio.melotti #26313: ssl.py _load_windows_store_certs fails if windows cert store i http://bugs.python.org/issue26313 closed by steve.dower #26314: interned strings are stored in a dict, a set would use less me http://bugs.python.org/issue26314 closed by rhettinger #26323: Add assert_called() and assert_called_once() methods for mock http://bugs.python.org/issue26323 closed by berker.peksag #26499: http.client.IncompleteRead from HTTPResponse read after part r http://bugs.python.org/issue26499 closed by martin.panter #26500: Document of new star unpacking is missing. 
http://bugs.python.org/issue26500 closed by terry.reedy #26513: platform.win32_ver() broken in 2.7.11 http://bugs.python.org/issue26513 closed by steve.dower #26516: Add PYTHONMALLOC env var and add support for malloc debug hook http://bugs.python.org/issue26516 closed by haypo #26519: Cython doesn't work anymore on Python 3.6 http://bugs.python.org/issue26519 closed by haypo #26523: multiprocessing ThreadPool is untested http://bugs.python.org/issue26523 closed by pitrou #26538: regrtest: setup_tests() must not replace module.__path__ (_Nam http://bugs.python.org/issue26538 closed by haypo #26540: Python Macros http://bugs.python.org/issue26540 closed by ezio.melotti #26542: Wrongly formatted doctest block in difflib documentation http://bugs.python.org/issue26542 closed by berker.peksag #26548: Probably missing word in a sentence in the doc of bitwise oper http://bugs.python.org/issue26548 closed by rhettinger #26551: Regex.finditer infinite loops with certain input http://bugs.python.org/issue26551 closed by serhiy.storchaka #26555: string.format(bytes) raise warning http://bugs.python.org/issue26555 closed by haypo #26558: Disable PyGILState_Check() when Py_NewInterpreter() is called http://bugs.python.org/issue26558 closed by haypo #26561: exec function fails to properly assign scope to dict like obje http://bugs.python.org/issue26561 closed by belopolsky #26562: Large Involuntary context switches during oom-killer http://bugs.python.org/issue26562 closed by haypo #26563: PyMem_Malloc(): check that the GIL is hold in debug hooks http://bugs.python.org/issue26563 closed by haypo #26564: Malloc debug hooks: display memory block traceback on error http://bugs.python.org/issue26564 closed by haypo #26569: pyclbr.readmodule() and pyclbr.readmodule_ex() don't support n http://bugs.python.org/issue26569 closed by eric.snow #26570: comma-separated cookies with expires header do not parse prope http://bugs.python.org/issue26570 closed by SilentGhost #26572: urlparse does not handle passwords with ? in them. http://bugs.python.org/issue26572 closed by martin.panter #26573: Method Parameters can be Accepted as Keyword Arguments? http://bugs.python.org/issue26573 closed by G Young #26575: lambda not closed on specific value in comprehension http://bugs.python.org/issue26575 closed by martin.panter #26580: Documentation for ftplib still references ftpmirror.py http://bugs.python.org/issue26580 closed by berker.peksag #26583: test_timestamp_overflow of test_importlib fails if PYTHONDONTW http://bugs.python.org/issue26583 closed by ned.deily #747320: rfc2822 formatdate functionality duplication http://bugs.python.org/issue747320 closed by berker.peksag From mal at egenix.com Fri Mar 18 16:05:27 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 18 Mar 2016 21:05:27 +0100 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56E91263.2050900@egenix.com> Message-ID: <56EC5F87.6070805@egenix.com> On 17.03.2016 15:55, Guido van Rossum wrote: > On Thu, Mar 17, 2016 at 5:04 AM, Serhiy Storchaka wrote: >>> Should we recommend that everyone use tokenize.detect_encoding()? >> >> Likely. However the interface of tokenize.detect_encoding() is not very >> simple. > > I just found that out yesterday. You have to give it a readline() > function, which is cumbersome if all you have is a (byte) string and > you don't want to split it on lines just yet. And the readline() > function raises SyntaxError when the encoding isn't right. 
I wish > there were a lower-level helper that just took a line and told you > what the encoding in it was, if any. Then the rest of the logic can be > handled by the caller (including the logic of trying up to two lines). I've uploaded the code I posted yesterday, modified to address some of the issues it had to github: https://github.com/malemburg/python-snippets/blob/master/detect_source_encoding.py I'm pretty sure the two-lines read can be optimized away and put straight into the regular expression used for matching. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Mar 18 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-03-07: Released eGenix pyOpenSSL 0.13.14 ... http://egenix.com/go89 2016-02-19: Released eGenix PyRun 2.1.2 ... http://egenix.com/go88 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From guido at python.org Fri Mar 18 20:02:42 2016 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Mar 2016 17:02:42 -0700 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: I'm happy to accept this PEP as is stands, assuming the authors are ready for this news. I recommend also implementing the option from footnote [11] (extend the number-to-string formatting language to allow ``_`` as a thousans separator). On Thu, Mar 17, 2016 at 11:19 AM, Brett Cannon wrote: > Where did this PEP leave off? Anything blocking its acceptance? > > On Sat, 13 Feb 2016 at 00:49 Georg Brandl wrote: >> >> Hi all, >> >> after talking to Guido and Serhiy we present the next revision >> of this PEP. It is a compromise that we are all happy with, >> and a relatively restricted rule that makes additions to PEP 8 >> basically unnecessary. >> >> I think the discussion has shown that supporting underscores in >> the from-string constructors is valuable, therefore this is now >> added to the specification section. >> >> The remaining open question is about the reverse direction: do >> we want a string formatting modifier that adds underscores as >> thousands separators? >> >> cheers, >> Georg >> >> ----------------------------------------------------------------- >> >> PEP: 515 >> Title: Underscores in Numeric Literals >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Georg Brandl, Serhiy Storchaka >> Status: Draft >> Type: Standards Track >> Content-Type: text/x-rst >> Created: 10-Feb-2016 >> Python-Version: 3.6 >> Post-History: 10-Feb-2016, 11-Feb-2016 >> >> Abstract and Rationale >> ====================== >> >> This PEP proposes to extend Python's syntax and number-from-string >> constructors so that underscores can be used as visual separators for >> digit grouping purposes in integral, floating-point and complex number >> literals. >> >> This is a common feature of other modern languages, and can aid >> readability of long literals, or literals whose value should clearly >> separate into parts, such as bytes or words in hexadecimal notation. 
>> >> Examples:: >> >> # grouping decimal numbers by thousands >> amount = 10_000_000.0 >> >> # grouping hexadecimal addresses by words >> addr = 0xDEAD_BEEF >> >> # grouping bits into nibbles in a binary literal >> flags = 0b_0011_1111_0100_1110 >> >> # same, for string conversions >> flags = int('0b_1111_0000', 2) >> >> >> Specification >> ============= >> >> The current proposal is to allow one underscore between digits, and >> after base specifiers in numeric literals. The underscores have no >> semantic meaning, and literals are parsed as if the underscores were >> absent. >> >> Literal Grammar >> --------------- >> >> The production list for integer literals would therefore look like >> this:: >> >> integer: decinteger | bininteger | octinteger | hexinteger >> decinteger: nonzerodigit (["_"] digit)* | "0" (["_"] "0")* >> bininteger: "0" ("b" | "B") (["_"] bindigit)+ >> octinteger: "0" ("o" | "O") (["_"] octdigit)+ >> hexinteger: "0" ("x" | "X") (["_"] hexdigit)+ >> nonzerodigit: "1"..."9" >> digit: "0"..."9" >> bindigit: "0" | "1" >> octdigit: "0"..."7" >> hexdigit: digit | "a"..."f" | "A"..."F" >> >> For floating-point and complex literals:: >> >> floatnumber: pointfloat | exponentfloat >> pointfloat: [digitpart] fraction | digitpart "." >> exponentfloat: (digitpart | pointfloat) exponent >> digitpart: digit (["_"] digit)* >> fraction: "." digitpart >> exponent: ("e" | "E") ["+" | "-"] digitpart >> imagnumber: (floatnumber | digitpart) ("j" | "J") >> >> Constructors >> ------------ >> >> Following the same rules for placement, underscores will be allowed in >> the following constructors: >> >> - ``int()`` (with any base) >> - ``float()`` >> - ``complex()`` >> - ``Decimal()`` >> >> >> Prior Art >> ========= >> >> Those languages that do allow underscore grouping implement a large >> variety of rules for allowed placement of underscores. In cases where >> the language spec contradicts the actual behavior, the actual behavior >> is listed. ("single" or "multiple" refer to allowing runs of >> consecutive underscores.) >> >> * Ada: single, only between digits [8]_ >> * C# (open proposal for 7.0): multiple, only between digits [6]_ >> * C++14: single, between digits (different separator chosen) [1]_ >> * D: multiple, anywhere, including trailing [2]_ >> * Java: multiple, only between digits [7]_ >> * Julia: single, only between digits (but not in float exponent parts) >> [9]_ >> * Perl 5: multiple, basically anywhere, although docs say it's >> restricted to one underscore between digits [3]_ >> * Ruby: single, only between digits (although docs say "anywhere") >> [10]_ >> * Rust: multiple, anywhere, except for between exponent "e" and digits >> [4]_ >> * Swift: multiple, between digits and trailing (although textual >> description says only "between digits") [5]_ >> >> >> Alternative Syntax >> ================== >> >> Underscore Placement Rules >> -------------------------- >> >> Instead of the relatively strict rule specified above, the use of >> underscores could be limited. As we seen from other languages, common >> rules include: >> >> * Only one consecutive underscore allowed, and only between digits. >> * Multiple consecutive underscores allowed, but only between digits. >> * Multiple consecutive underscores allowed, in most positions except >> for the start of the literal, or special positions like after a >> decimal point. 
>> >> The syntax in this PEP has ultimately been selected because it covers >> the common use cases, and does not allow for syntax that would have to >> be discouraged in style guides anyway. >> >> A less common rule would be to allow underscores only every N digits >> (where N could be 3 for decimal literals, or 4 for hexadecimal ones). >> This is unnecessarily restrictive, especially considering the >> separator placement is different in different cultures. >> >> Different Separators >> -------------------- >> >> A proposed alternate syntax was to use whitespace for grouping. >> Although strings are a precedent for combining adjoining literals, the >> behavior can lead to unexpected effects which are not possible with >> underscores. Also, no other language is known to use this rule, >> except for languages that generally disregard any whitespace. >> >> C++14 introduces apostrophes for grouping (because underscores >> introduce ambiguity with user-defined literals), which is not >> considered because of the use in Python's string literals. [1]_ >> >> >> Open Proposals >> ============== >> >> It has been proposed [11]_ to extend the number-to-string formatting >> language to allow ``_`` as a thousans separator, where currently only >> ``,`` is supported. This could be used to easily generate code with >> more readable literals. >> >> >> Implementation >> ============== >> >> A preliminary patch that implements the specification given above has >> been posted to the issue tracker. [12]_ >> >> >> References >> ========== >> >> .. [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3499.html >> >> .. [2] http://dlang.org/spec/lex.html#integerliteral >> >> .. [3] http://perldoc.perl.org/perldata.html#Scalar-value-constructors >> >> .. [4] http://doc.rust-lang.org/reference.html#number-literals >> >> .. [5] >> >> https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/LexicalStructure.html >> >> .. [6] https://github.com/dotnet/roslyn/issues/216 >> >> .. [7] >> >> https://docs.oracle.com/javase/7/docs/technotes/guides/language/underscores-literals.html >> >> .. [8] http://archive.adaic.com/standards/83lrm/html/lrm-02-04.html#2.4 >> >> .. [9] >> >> http://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/ >> >> .. [10] >> http://ruby-doc.org/core-2.3.0/doc/syntax/literals_rdoc.html#label-Numbers >> >> .. [11] >> https://mail.python.org/pipermail/python-dev/2016-February/143283.html >> >> .. [12] http://bugs.python.org/issue26331 >> >> >> Copyright >> ========= >> >> This document has been placed in the public domain. >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From g.brandl at gmx.net Sat Mar 19 02:44:53 2016 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 19 Mar 2016 07:44:53 +0100 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: I'll update the text so that the format() gets promoted from optional to specified. 
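To make that open proposal concrete, usage along the lines of the existing "," thousands separator might look like the sketch below. The exact format-spec spelling, and whether hex output groups by four digits, are assumptions until the PEP text is updated; none of this works in current releases.

    '{:_}'.format(10000000)    # -> '10_000_000'
    format(0xDEADBEEF, '_x')   # -> 'dead_beef', assuming hex grouping by four digits
    int('10_000_000')          # -> 10000000, the from-string direction already in the PEP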
There was one point of discussion in the tracker issue that should be resolved before acceptance: the Decimal constructor is listed as getting updated to allow underscores, but its syntax is specified in the Decimal spec: http://speleotrove.com/decimal/daconvs.html Acccepting underscores would be an extension to the spec, which may not be what we want to do as otherwise Decimal follows that spec closely. On the other hand, assuming decimal literals are introduced at some point, they would almost definitely need to support underscores. Of course, the decision whether to modify the Decimal constructor can be postponed until that time. cheers, Georg On 03/19/2016 01:02 AM, Guido van Rossum wrote: > I'm happy to accept this PEP as is stands, assuming the authors are > ready for this news. I recommend also implementing the option from > footnote [11] (extend the number-to-string formatting language to > allow ``_`` as a thousans separator). > > On Thu, Mar 17, 2016 at 11:19 AM, Brett Cannon wrote: >> Where did this PEP leave off? Anything blocking its acceptance? >> >> On Sat, 13 Feb 2016 at 00:49 Georg Brandl wrote: >>> >>> Hi all, >>> >>> after talking to Guido and Serhiy we present the next revision >>> of this PEP. It is a compromise that we are all happy with, >>> and a relatively restricted rule that makes additions to PEP 8 >>> basically unnecessary. >>> >>> I think the discussion has shown that supporting underscores in >>> the from-string constructors is valuable, therefore this is now >>> added to the specification section. >>> >>> The remaining open question is about the reverse direction: do >>> we want a string formatting modifier that adds underscores as >>> thousands separators? >>> >>> cheers, >>> Georg >>> >>> ----------------------------------------------------------------- >>> >>> PEP: 515 >>> Title: Underscores in Numeric Literals >>> Version: $Revision$ >>> Last-Modified: $Date$ >>> Author: Georg Brandl, Serhiy Storchaka >>> Status: Draft >>> Type: Standards Track >>> Content-Type: text/x-rst >>> Created: 10-Feb-2016 >>> Python-Version: 3.6 >>> Post-History: 10-Feb-2016, 11-Feb-2016 >>> >>> Abstract and Rationale >>> ====================== >>> >>> This PEP proposes to extend Python's syntax and number-from-string >>> constructors so that underscores can be used as visual separators for >>> digit grouping purposes in integral, floating-point and complex number >>> literals. >>> >>> This is a common feature of other modern languages, and can aid >>> readability of long literals, or literals whose value should clearly >>> separate into parts, such as bytes or words in hexadecimal notation. >>> >>> Examples:: >>> >>> # grouping decimal numbers by thousands >>> amount = 10_000_000.0 >>> >>> # grouping hexadecimal addresses by words >>> addr = 0xDEAD_BEEF >>> >>> # grouping bits into nibbles in a binary literal >>> flags = 0b_0011_1111_0100_1110 >>> >>> # same, for string conversions >>> flags = int('0b_1111_0000', 2) >>> >>> >>> Specification >>> ============= >>> >>> The current proposal is to allow one underscore between digits, and >>> after base specifiers in numeric literals. The underscores have no >>> semantic meaning, and literals are parsed as if the underscores were >>> absent. 
>>> >>> Literal Grammar >>> --------------- >>> >>> The production list for integer literals would therefore look like >>> this:: >>> >>> integer: decinteger | bininteger | octinteger | hexinteger >>> decinteger: nonzerodigit (["_"] digit)* | "0" (["_"] "0")* >>> bininteger: "0" ("b" | "B") (["_"] bindigit)+ >>> octinteger: "0" ("o" | "O") (["_"] octdigit)+ >>> hexinteger: "0" ("x" | "X") (["_"] hexdigit)+ >>> nonzerodigit: "1"..."9" >>> digit: "0"..."9" >>> bindigit: "0" | "1" >>> octdigit: "0"..."7" >>> hexdigit: digit | "a"..."f" | "A"..."F" >>> >>> For floating-point and complex literals:: >>> >>> floatnumber: pointfloat | exponentfloat >>> pointfloat: [digitpart] fraction | digitpart "." >>> exponentfloat: (digitpart | pointfloat) exponent >>> digitpart: digit (["_"] digit)* >>> fraction: "." digitpart >>> exponent: ("e" | "E") ["+" | "-"] digitpart >>> imagnumber: (floatnumber | digitpart) ("j" | "J") >>> >>> Constructors >>> ------------ >>> >>> Following the same rules for placement, underscores will be allowed in >>> the following constructors: >>> >>> - ``int()`` (with any base) >>> - ``float()`` >>> - ``complex()`` >>> - ``Decimal()`` >>> >>> >>> Prior Art >>> ========= >>> >>> Those languages that do allow underscore grouping implement a large >>> variety of rules for allowed placement of underscores. In cases where >>> the language spec contradicts the actual behavior, the actual behavior >>> is listed. ("single" or "multiple" refer to allowing runs of >>> consecutive underscores.) >>> >>> * Ada: single, only between digits [8]_ >>> * C# (open proposal for 7.0): multiple, only between digits [6]_ >>> * C++14: single, between digits (different separator chosen) [1]_ >>> * D: multiple, anywhere, including trailing [2]_ >>> * Java: multiple, only between digits [7]_ >>> * Julia: single, only between digits (but not in float exponent parts) >>> [9]_ >>> * Perl 5: multiple, basically anywhere, although docs say it's >>> restricted to one underscore between digits [3]_ >>> * Ruby: single, only between digits (although docs say "anywhere") >>> [10]_ >>> * Rust: multiple, anywhere, except for between exponent "e" and digits >>> [4]_ >>> * Swift: multiple, between digits and trailing (although textual >>> description says only "between digits") [5]_ >>> >>> >>> Alternative Syntax >>> ================== >>> >>> Underscore Placement Rules >>> -------------------------- >>> >>> Instead of the relatively strict rule specified above, the use of >>> underscores could be limited. As we seen from other languages, common >>> rules include: >>> >>> * Only one consecutive underscore allowed, and only between digits. >>> * Multiple consecutive underscores allowed, but only between digits. >>> * Multiple consecutive underscores allowed, in most positions except >>> for the start of the literal, or special positions like after a >>> decimal point. >>> >>> The syntax in this PEP has ultimately been selected because it covers >>> the common use cases, and does not allow for syntax that would have to >>> be discouraged in style guides anyway. >>> >>> A less common rule would be to allow underscores only every N digits >>> (where N could be 3 for decimal literals, or 4 for hexadecimal ones). >>> This is unnecessarily restrictive, especially considering the >>> separator placement is different in different cultures. >>> >>> Different Separators >>> -------------------- >>> >>> A proposed alternate syntax was to use whitespace for grouping. 
>>> Although strings are a precedent for combining adjoining literals, the >>> behavior can lead to unexpected effects which are not possible with >>> underscores. Also, no other language is known to use this rule, >>> except for languages that generally disregard any whitespace. >>> >>> C++14 introduces apostrophes for grouping (because underscores >>> introduce ambiguity with user-defined literals), which is not >>> considered because of the use in Python's string literals. [1]_ >>> >>> >>> Open Proposals >>> ============== >>> >>> It has been proposed [11]_ to extend the number-to-string formatting >>> language to allow ``_`` as a thousans separator, where currently only >>> ``,`` is supported. This could be used to easily generate code with >>> more readable literals. >>> >>> >>> Implementation >>> ============== >>> >>> A preliminary patch that implements the specification given above has >>> been posted to the issue tracker. [12]_ >>> >>> >>> References >>> ========== >>> >>> .. [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3499.html >>> >>> .. [2] http://dlang.org/spec/lex.html#integerliteral >>> >>> .. [3] http://perldoc.perl.org/perldata.html#Scalar-value-constructors >>> >>> .. [4] http://doc.rust-lang.org/reference.html#number-literals >>> >>> .. [5] >>> >>> https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/LexicalStructure.html >>> >>> .. [6] https://github.com/dotnet/roslyn/issues/216 >>> >>> .. [7] >>> >>> https://docs.oracle.com/javase/7/docs/technotes/guides/language/underscores-literals.html >>> >>> .. [8] http://archive.adaic.com/standards/83lrm/html/lrm-02-04.html#2.4 >>> >>> .. [9] >>> >>> http://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/ >>> >>> .. [10] >>> http://ruby-doc.org/core-2.3.0/doc/syntax/literals_rdoc.html#label-Numbers >>> >>> .. [11] >>> https://mail.python.org/pipermail/python-dev/2016-February/143283.html >>> >>> .. [12] http://bugs.python.org/issue26331 >>> >>> >>> Copyright >>> ========= >>> >>> This document has been placed in the public domain. >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > From ncoghlan at gmail.com Sat Mar 19 09:22:46 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 19 Mar 2016 23:22:46 +1000 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: On 19 March 2016 at 16:44, Georg Brandl wrote: > On the other hand, assuming decimal literals are introduced at some > point, they would almost definitely need to support underscores. > Of course, the decision whether to modify the Decimal constructor > can be postponed until that time. The idea of Decimal literals is complicated significantly by their current context dependent behaviour (especially when it comes to rounding), so I'd suggest leaving them alone in the context of this PEP. Cheers, Nick. 
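A quick illustration of the context dependence Nick mentions, using today's decimal module: construction is exact, but arithmetic -- including unary plus -- rounds to the current context's precision, which is what would make context-sensitive literals surprising.

    from decimal import Decimal, getcontext

    getcontext().prec = 4
    Decimal('1.234567')     # Decimal('1.234567') -- the constructor ignores the context
    +Decimal('1.234567')    # Decimal('1.235')    -- unary plus rounds to 4 significant digits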
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com Sat Mar 19 11:19:25 2016
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 19 Mar 2016 17:19:25 +0200
Subject: [Python-Dev] What does a double coding cookie mean?
In-Reply-To: References: Message-ID:

On 16.03.16 08:03, Serhiy Storchaka wrote:
> On 15.03.16 22:30, Guido van Rossum wrote:
>> I came across a file that had two different coding cookies -- one on
>> the first line and one on the second. CPython uses the first, but mypy
>> happens to use the second. I couldn't find anything in the spec or
>> docs ruling out the second interpretation. Does anyone have a
>> suggestion (apart from following CPython)?
>>
>> Reference: https://github.com/python/mypy/issues/1281
>
> There is a similar question. If a file has two different coding cookies on
> the same line, what should win? Currently the last cookie wins, in the
> CPython parser, in the tokenize module, in IDLE, and in a number of other
> places. I think this is a bug.

I just tested with Emacs, and it looks like when I specify different codings on two different lines, the first coding wins, but when I specify different codings on the same line, the last coding wins.

Therefore current CPython behavior can be correct, and the regular expression in PEP 263 should be changed to use greedy repetition.

From guido at python.org Sat Mar 19 12:53:35 2016
From: guido at python.org (Guido van Rossum)
Date: Sat, 19 Mar 2016 09:53:35 -0700
Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3)
In-Reply-To: References: Message-ID:

I don't care too much either way, but I think passing underscores to the constructor shouldn't be affected by the context -- the underscores are just removed before parsing the number. But if it's too complicated to implement I'm fine with punting.

--Guido (mobile)

On Mar 19, 2016 6:24 AM, "Nick Coghlan" wrote:
> On 19 March 2016 at 16:44, Georg Brandl wrote:
> > On the other hand, assuming decimal literals are introduced at some
> > point, they would almost definitely need to support underscores.
> > Of course, the decision whether to modify the Decimal constructor
> > can be postponed until that time.
>
> The idea of Decimal literals is complicated significantly by their
> current context dependent behaviour (especially when it comes to
> rounding), so I'd suggest leaving them alone in the context of this
> PEP.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>

From stefan at bytereef.org Sat Mar 19 13:24:30 2016
From: stefan at bytereef.org (Stefan Krah)
Date: Sat, 19 Mar 2016 17:24:30 +0000 (UTC)
Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3)
References: Message-ID:

Guido van Rossum python.org> writes:
> I don't care too much either way, but I think passing underscores to the constructor shouldn't be affected by the context -- the underscores are just removed before parsing the number. But if it's too complicated to implement I'm fine with punting.

Just removing the underscores would be fine.
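As a rough sketch of that idea (an illustrative helper, not the eventual implementation): strip surrounding whitespace, drop the underscores, and hand the rest to the existing parser.

    from decimal import Decimal

    def decimal_from_underscored(s):
        # Illustrative only: the real parsing stays in the existing
        # IBM-specification grammar.
        return Decimal(s.strip().replace('_', ''))

    decimal_from_underscored(' 1_000_000.000_001 ')   # Decimal('1000000.000001')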
The problem is that per the PEP the conversion should happen according the Python float grammar but the actual decimal grammar is the one from the IBM specification. I'd much rather express the problem like you did above: A preprocessing step followed by the IBM specification grammar. Stefan Krah From guido at python.org Sat Mar 19 13:35:41 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 10:35:41 -0700 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: So should the preprocessing step just be s.replace('_', ''), or should it reject underscores that don't follow the rules from the PEP (perhaps augmented so they follow the spirit of the PEP and the letter of the IBM spec)? Honestly I think it's also fine if specifying this exactly is left out of the PEP, and handled by whoever adds this to Decimal. Having a PEP to work from for the language spec and core builtins (int(), float() complex()) is more important. On Sat, Mar 19, 2016 at 10:24 AM, Stefan Krah wrote: > > Guido van Rossum python.org> writes: >> I don't care too much either way, but I think passing underscores to the > constructor shouldn't be affected by the context -- the underscores are just > removed before parsing the number. But if it's too complicated to implement > I'm fine with punting. > > Just removing the underscores would be fine. The problem is that per > the PEP the conversion should happen according the Python float grammar > but the actual decimal grammar is the one from the IBM specification. > > I'd much rather express the problem like you did above: A preprocessing > step followed by the IBM specification grammar. > > > > Stefan Krah > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From v+python at g.nevcal.com Sat Mar 19 13:36:45 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 19 Mar 2016 10:36:45 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: Message-ID: <56ED8E2D.3050103@g.nevcal.com> On 3/19/2016 8:19 AM, Serhiy Storchaka wrote: > On 16.03.16 08:03, Serhiy Storchaka wrote: >> On 15.03.16 22:30, Guido van Rossum wrote: >>> I came across a file that had two different coding cookies -- one on >>> the first line and one on the second. CPython uses the first, but mypy >>> happens to use the second. I couldn't find anything in the spec or >>> docs ruling out the second interpretation. Does anyone have a >>> suggestion (apart from following CPython)? >>> >>> Reference: https://github.com/python/mypy/issues/1281 >> >> There is similar question. If a file has two different coding cookies on >> the same line, what should win? Currently the last cookie wins, in >> CPython parser, in the tokenize module, in IDLE, and in number of other >> code. I think this is a bug. > > I just tested with Emacs, and it looks that when specify different > codings on two different lines, the first coding wins, but when > specify different codings on the same line, the last coding wins. > > Therefore current CPython behavior can be correct, and the regular > expression in PEP 263 should be changed to use greedy repetition. Just because emacs works that way (and even though I'm an emacs user), that doesn't mean CPython should act like emacs. 
(1) CPython should not necessarily act like emacs, unless the coding syntax in question exactly matches the emacs syntax, rather than the generic coding cookie that CPython interprets -- a form that matches emacs, vim, and other similar markers, including ones that both emacs and vim would ignore.
(1a) Maybe if a similar test were run on vim with its syntax, and it also works the same way, then one might think it is a trend worth following, but it is not clear to this non-vim user that vim syntax allows more than one coding specification per line.
(2) emacs has no requirement that the coding be placed on the first two lines. It specifically looks at the second line only if the first line has a "#!" or a "'\"" (for troff). (according to docs, not experimentation)
(3) emacs also allows for Local Variables to be specified at the end of the file. If CPython were really to act like emacs, then it would need to allow for that too.
(4) there is no benefit to specifying the coding twice on a line; it only adds confusion, whether in CPython, emacs, or vim.
(4a) Here's an untested line that emacs would interpret as utf-8, and CPython with the greedy regular expression would interpret as latin-1, because emacs looks only between the -*- pair, and CPython ignores that.

# -*- coding: utf-8 -*- this file does not use coding: latin-1

From stephen at xemacs.org Sat Mar 19 14:03:37 2016
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sun, 20 Mar 2016 03:03:37 +0900
Subject: [Python-Dev] What does a double coding cookie mean?
In-Reply-To: <56ED8E2D.3050103@g.nevcal.com>
References: <56ED8E2D.3050103@g.nevcal.com>
Message-ID: <22253.38009.767066.478806@turnbull.sk.tsukuba.ac.jp>

Glenn Linderman writes:
> On 3/19/2016 8:19 AM, Serhiy Storchaka wrote:
> > Therefore current CPython behavior can be correct, and the regular
> > expression in PEP 263 should be changed to use greedy repetition.
>
> Just because emacs works that way (and even though I'm an emacs user),
> that doesn't mean CPython should act like emacs.
>
> (1) CPython should not necessarily act like emacs,

We can't treat Emacs as a spec, because Emacs doesn't follow specs, doesn't respect standards, and above a certain level of inconvenience to developers doesn't respect backward compatibility. There's never any guarantee that Emacs will do the same thing tomorrow that it does today, although inertia has mostly the same effect.
From stefan at bytereef.org Sat Mar 19 14:28:12 2016 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 19 Mar 2016 18:28:12 +0000 (UTC) Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) References: Message-ID: Guido van Rossum python.org> writes: > So should the preprocessing step just be s.replace('_', ''), or should > it reject underscores that don't follow the rules from the PEP > (perhaps augmented so they follow the spirit of the PEP and the letter > of the IBM spec)? > > Honestly I think it's also fine if specifying this exactly is left out > of the PEP, and handled by whoever adds this to Decimal. Having a PEP > to work from for the language spec and core builtins (int(), float() > complex()) is more important. I'd keep it simple for Decimal: Remove left and right whitespace (we're already doing this), then remove underscores from the remaining string (which must not contain any further whitespace), then use the IBM grammar. We could add a clause to the PEP that only those strings that follow the spirit of the PEP are guaranteed to be accepted in the future. One reason for keeping it simple is that I would not like to slow down string conversion, but thinking about two grammars is also a problem -- part of the string conversion in libmpdec is modeled in ACL2, which would be invalidated or at least complicated with two grammars. Stefan Krah From guido at python.org Sat Mar 19 14:52:45 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 11:52:45 -0700 Subject: [Python-Dev] PEP 515: Underscores in Numeric Literals (revision 3) In-Reply-To: References: Message-ID: All that sounds fine! On Sat, Mar 19, 2016 at 11:28 AM, Stefan Krah wrote: > Guido van Rossum python.org> writes: >> So should the preprocessing step just be s.replace('_', ''), or should >> it reject underscores that don't follow the rules from the PEP >> (perhaps augmented so they follow the spirit of the PEP and the letter >> of the IBM spec)? >> >> Honestly I think it's also fine if specifying this exactly is left out >> of the PEP, and handled by whoever adds this to Decimal. Having a PEP >> to work from for the language spec and core builtins (int(), float() >> complex()) is more important. > > I'd keep it simple for Decimal: Remove left and right whitespace (we're > already doing this), then remove underscores from the remaining string > (which must not contain any further whitespace), then use the IBM grammar. > > > We could add a clause to the PEP that only those strings that follow > the spirit of the PEP are guaranteed to be accepted in the future. > > > One reason for keeping it simple is that I would not like to slow down > string conversion, but thinking about two grammars is also a problem -- > part of the string conversion in libmpdec is modeled in ACL2, which > would be invalidated or at least complicated with two grammars. > > > > Stefan Krah > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From storchaka at gmail.com Sat Mar 19 17:37:49 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 19 Mar 2016 23:37:49 +0200 Subject: [Python-Dev] What does a double coding cookie mean? 
In-Reply-To: <56ED8E2D.3050103@g.nevcal.com> References: <56ED8E2D.3050103@g.nevcal.com> Message-ID: On 19.03.16 19:36, Glenn Linderman wrote: > On 3/19/2016 8:19 AM, Serhiy Storchaka wrote: >> On 16.03.16 08:03, Serhiy Storchaka wrote: >> I just tested with Emacs, and it looks that when specify different >> codings on two different lines, the first coding wins, but when >> specify different codings on the same line, the last coding wins. >> >> Therefore current CPython behavior can be correct, and the regular >> expression in PEP 263 should be changed to use greedy repetition. > > Just because emacs works that way (and even though I'm an emacs user), > that doesn't mean CPython should act like emacs. Yes. But current CPython works that way. The behavior of Emacs is the argument that maybe this is not a bug. > (4) there is no benefit to specifying the coding twice on a line, it > only adds confusion, whether in CPython, emacs, or vim. > (4a) Here's an untested line that emacs would interpret as utf-8, and > CPython with the greedy regulare expression would interpret as latin-1, > because emacs looks only between the -*- pair, and CPython ignores that. > # -*- coding: utf-8 -*- this file does not use coding: latin-1 Since Emacs allows to specify the coding twice on a line, and this can be ambiguous, and CPython already detects some ambiguous situations (UTF-8 BOM and non-UTF-8 coding cookie), it may be worth to add a check that the coding is specified only once on a line. From v+python at g.nevcal.com Sat Mar 19 17:46:24 2016 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 19 Mar 2016 14:46:24 -0700 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: References: <56ED8E2D.3050103@g.nevcal.com> Message-ID: <56EDC8B0.1030301@g.nevcal.com> On 3/19/2016 2:37 PM, Serhiy Storchaka wrote: > On 19.03.16 19:36, Glenn Linderman wrote: >> On 3/19/2016 8:19 AM, Serhiy Storchaka wrote: >>> On 16.03.16 08:03, Serhiy Storchaka wrote: >>> I just tested with Emacs, and it looks that when specify different >>> codings on two different lines, the first coding wins, but when >>> specify different codings on the same line, the last coding wins. >>> >>> Therefore current CPython behavior can be correct, and the regular >>> expression in PEP 263 should be changed to use greedy repetition. >> >> Just because emacs works that way (and even though I'm an emacs user), >> that doesn't mean CPython should act like emacs. > > Yes. But current CPython works that way. The behavior of Emacs is the > argument that maybe this is not a bug. If CPython properly handles the following line as having only one proper coding declaration (utf-8), then I might reluctantly agree that the behavior of Emacs might be a relevant argument. Otherwise, vehemently not relevant. # -*- coding: utf-8 -*- this file does not use coding: latin-1 > >> (4) there is no benefit to specifying the coding twice on a line, it >> only adds confusion, whether in CPython, emacs, or vim. >> (4a) Here's an untested line that emacs would interpret as utf-8, and >> CPython with the greedy regulare expression would interpret as latin-1, >> because emacs looks only between the -*- pair, and CPython ignores that. 
>> # -*- coding: utf-8 -*- this file does not use coding: latin-1 > > Since Emacs allows to specify the coding twice on a line, and this can > be ambiguous, and CPython already detects some ambiguous situations > (UTF-8 BOM and non-UTF-8 coding cookie), it may be worth to add a > check that the coding is specified only once on a line. Diagnosing ambiguous conditions, even including my example above, might be useful... for a few files... is it worth the effort? What % of .py sources have coding specifications? What % of those have two? -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Mar 19 21:18:52 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 18:18:52 -0700 Subject: [Python-Dev] PEP 484: updates to Python 2.7 signature syntax Message-ID: PEP 484 was already updated to support signatures as type comments in Python 2.7. I'd like to add two more variations to this spec, both of which have already come up through users. First, https://github.com/python/typing/issues/188. This augments the format of signature type comments to allow (...) instead of an argument list. This is useful to avoid having to write (Any, Any, Any, ..., Any) for a long argument list if you don't care about the argument types but do want to specify the return type. It's already implemented by mypy (and presumably by PyCharm). Example: def gcd(a, b): # type: (...) -> int Second, https://github.com/python/typing/issues/186. This builds on the previous syntax but deals with the other annoyance of long argument lists, this time in case you *do* care about the types. The proposal is to allow writing the arguments one per line with a type comment on each line. This has been implemented in PyCharm but not yet in mypy. Example: def gcd( a, # type: int b, # type: int ): # type: (...) -> int In both cases we've considered a few alternatives and ended up agreeing on the best course forward. If you have questions or feedback on either proposal it's probably best to just add a comment to the GitHub tracker issues. A clarification of the status of PEP 484: it was provisionally accepted in May 2015. Having spent close to a year pondering it, and the last several months actively using it at Dropbox, I'm now ready to move with some improvements based on these experiences (and those of others who have started to use it). We already added the basic Python 2.7 compatible syntax (see thread starting at https://mail.python.org/pipermail/python-ideas/2016-January/037704.html), and having used that for a few months the two proposals mentioned above handle a few corner cases that were possible but a bit awkward in our experience. -- --Guido van Rossum (python.org/~guido) From abarnert at yahoo.com Sat Mar 19 21:43:32 2016 From: abarnert at yahoo.com (Andrew Barnert) Date: Sat, 19 Mar 2016 18:43:32 -0700 Subject: [Python-Dev] PEP 484: updates to Python 2.7 signature syntax In-Reply-To: References: Message-ID: On Mar 19, 2016, at 18:18, Guido van Rossum wrote: > > Second, https://github.com/python/typing/issues/186. This builds on > the previous syntax but deals with the other annoyance of long > argument lists, this time in case you *do* care about the types. The > proposal is to allow writing the arguments one per line with a type > comment on each line. This has been implemented in PyCharm but not yet > in mypy. Example: > > def gcd( > a, # type: int > b, # type: int > ): > # type: (...) 
-> int > This is a lot nicer than what you were originally discussing (at #1101? I forget...). Even more so given how trivial it will be to mechanically convert these to annotations if/when you switch an app to pure Python 3. But one thing: in the PEP and the docs, I think it would be better to pick an example with longer parameter names. This example shows that even in the worst case it isn't that bad, but a better example would show that in the typical case it's actually pretty nice. (Also, I don't see why you wouldn't just use the "old" comment form for this example, since it all fits on one line and isn't at all confusing.) From guido at python.org Sat Mar 19 21:52:51 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 18:52:51 -0700 Subject: [Python-Dev] PEP 484 update: allow @overload in regular module files Message-ID: Here's another proposal for a change to PEP 484. In https://github.com/python/typing/issues/72 there's a long discussion ending with a reasonable argument to allow @overload in (non-stub) modules after all. This proposal does *not* sneak in a syntax for multi-dispatch -- the @overload versions are only for the benefit of type checkers while a single non- at overload implementation must follow that handles all cases. In fact, I expect that if we ever end up adding multi-dispatch to the language or library, it will neither replace not compete with @overload, but the two will most likely be orthogonal to each other, with @overload aiming at a type checker and some other multi-dispatch aiming at the interpreter. (The needs of the two cases are just too different -- e.g. it's hard to imagine multi-dispatch in Python use type variables.) More details in the issue (that's also where I'd like to get feedback if possible). I want to settle this before 3.5.2 goes out, because it requires a change to typing.py in the stdlib. Fortunately the change will be backward compatible (even though this isn't strictly required for a provisional module). In the original typing module, any use of @overload outside a stub is an error (it raises as soon as it's used). In the new proposal, you can decorate a function with @overload, but any attempt to call such a decorated function raises an error. This should catch cases early where you forget to provide an implementation. (Reference for provisional modules: https://www.python.org/dev/peps/pep-0411/) -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Mar 19 21:54:05 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 18:54:05 -0700 Subject: [Python-Dev] PEP 484: updates to Python 2.7 signature syntax In-Reply-To: References: Message-ID: Heh. I could add an example with a long list of parameters with long names, but apart from showing by example what the motivation is it wouldn't really add anything, and it's more to type. :-) On Sat, Mar 19, 2016 at 6:43 PM, Andrew Barnert wrote: > On Mar 19, 2016, at 18:18, Guido van Rossum wrote: >> >> Second, https://github.com/python/typing/issues/186. This builds on >> the previous syntax but deals with the other annoyance of long >> argument lists, this time in case you *do* care about the types. The >> proposal is to allow writing the arguments one per line with a type >> comment on each line. This has been implemented in PyCharm but not yet >> in mypy. Example: >> >> def gcd( >> a, # type: int >> b, # type: int >> ): >> # type: (...) -> int >> > > This is a lot nicer than what you were originally discussing (at #1101? I forget...). 
Even more so given how trivial it will be to mechanically convert these to annotations if/when you switch an app to pure Python 3. > > But one thing: in the PEP and the docs, I think it would be better to pick an example with longer parameter names. This example shows that even in the worst case it isn't that bad, but a better example would show that in the typical case it's actually pretty nice. (Also, I don't see why you wouldn't just use the "old" comment form for this example, since it all fits on one line and isn't at all confusing.) > -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Mar 19 22:07:16 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Mar 2016 19:07:16 -0700 Subject: [Python-Dev] PEP 484 update: add Type[T] Message-ID: There's a more fundamental PEP 484 update that I'd like to add. The discussion is in https://github.com/python/typing/issues/107. Currently we don't have a way to talk about arguments and variables whose type is itself a type or class. The only annotation you can use for this is 'type' which says "this argument/variable is a type object" (or a class). But it's often useful to be able to say "this is a class and it must be a subclass of X". In fact this was proposed in the original rounds of discussion about PEP 484, but at the time it felt too far removed from practice to know quite how it should be used, so I just put it off. But it's been one of the features that's been requested most by the early adopters of PEP 484 at Dropbox. So I'd like to add it now. At runtime this shouldn't do much; Type would be just a generic class of one parameter that records its one type parameter. The real magic would happen in the type checker, which will be able to use types involving Type. It should also be possible to use this with type variables, so we could write e.g. T = TypeVar('T', bound=int) def factory(c: Type[T]) -> T: This would define factory() as a function whose argument must be a subclass of int and returning an instance of that subclass. (The bound= option to TypeVar() is already described in PEP 484, although mypy hasn't implemented it yet.) (If I screwed up this example, hopefully Jukka will correct me. :-) Again, I'd like this to go out with 3.5.2, because it requires adding something to typing.py (and again, that's allowed because PEP 484 is provisional -- see PEP 411 for an explanation). -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Sun Mar 20 05:56:29 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 Mar 2016 19:56:29 +1000 Subject: [Python-Dev] What does a double coding cookie mean? In-Reply-To: <56EDC8B0.1030301@g.nevcal.com> References: <56ED8E2D.3050103@g.nevcal.com> <56EDC8B0.1030301@g.nevcal.com> Message-ID: On 20 March 2016 at 07:46, Glenn Linderman wrote: > Diagnosing ambiguous conditions, even including my example above, might be > useful... for a few files... is it worth the effort? What % of .py sources > have coding specifications? What % of those have two? And there's a decent argument for leaving detecting such cases to linters rather than the tokeniser. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From michael at felt.demon.nl Sun Mar 20 12:07:32 2016 From: michael at felt.demon.nl (Michael Felt) Date: Sun, 20 Mar 2016 17:07:32 +0100 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <036959DA-7006-4FF0-A984-7840E543CB7E@yahoo.com> References: <56EB3038.60402@felt.demon.nl> <56EB5253.10807@felt.demon.nl> <56EB5B49.80706@mrabarnett.plus.com> <036959DA-7006-4FF0-A984-7840E543CB7E@yahoo.com> Message-ID: <56EECAC4.5050208@felt.demon.nl> On 2016-03-18 05:57, Andrew Barnert via Python-Dev wrote: > Yeah, C99 (6.7.2.1) allows "a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type", and same for C11. This means that a compiler could easily allow an implementation-defined type that's identical to and interconvertible with short, say "i16", to be used in bitfields, but not short itself. > > And yet, gcc still allows short "even in strictly conforming mode" (4.9), and it looks like Clang and Intel do the same. > > Meanwhile, MSVC specifically says it's illegal ("The type-specifier for the declarator must be unsigned int, signed int, or int") but then defines the semantics (you can't have a 17-bit short, bit fields act as the underlying type when accessed, alignment is forced to a boundary appropriate for the underlying type). They do mention that allowing char and long types is a Microsoft extension, but still nothing about short, even though it's used in most of the examples on the page. > > Anyway, is the question what ctypes should do? If a platform's compiler allows "short M: 1", especially if it has potentially different alignment than "int M: 1", ctypes on that platform had better make ("M", c_short, 1) match the former, right? > > So it sounds like you need some configure switch to test that your compiler doesn't allow short bit fields, so your ctypes build at least skips that part of _ctypes_test.c and test_bitfields.py, and maybe even doesn't allow them in Python code. > > >>> >> test_short fails om AIX when using xlC in any case. How terrible is this? a) this does not look solveable using xlC, and I expect from the comment above re: MSVC, that it will, or should also fail there. And, imho, if anything is to done, it is a decision to be made by "Python". b) aka - it sounds like a defect, at least in the test. c) what danger is there to existing Python code if "short" is expected, per legacy when compilers did (and GCC still does - verified that when I compile with gcc the test does not signal failure) So, more with regard to c) - is there something I could/should be looking at in Python itself, in order to message that the code is not supported by the compiler? From facundobatista at gmail.com Sun Mar 20 12:43:27 2016 From: facundobatista at gmail.com (Facundo Batista) Date: Sun, 20 Mar 2016 13:43:27 -0300 Subject: [Python-Dev] Counting references to Py_None Message-ID: Hello! I'm seeing that our code increases the reference counting to Py_None, and I find this a little strange: isn't Py_None eternal and will never die? What's the point of counting its references? Thanks! -- . 
Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From dw+python-dev at hmmz.org Sun Mar 20 13:00:45 2016 From: dw+python-dev at hmmz.org (David Wilson) Date: Sun, 20 Mar 2016 17:00:45 +0000 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: <20160320170045.GA13003@k3> On Sun, Mar 20, 2016 at 01:43:27PM -0300, Facundo Batista wrote: > Hello! > > I'm seeing that our code increases the reference counting to Py_None, > and I find this a little strange: isn't Py_None eternal and will never > die? > > What's the point of counting its references? Avoiding a branch on every single Py_DECREF / Py_INCREF? > > Thanks! > > -- > . Facundo > > Blog: http://www.taniquetil.com.ar/plog/ > PyAr: http://www.python.org/ar/ > Twitter: @facundobatista > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/dw%2Bpython-dev%40hmmz.org From brett at python.org Sun Mar 20 13:10:01 2016 From: brett at python.org (Brett Cannon) Date: Sun, 20 Mar 2016 17:10:01 +0000 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: On Sun, 20 Mar 2016 at 09:44 Facundo Batista wrote: > Hello! > > I'm seeing that our code increases the reference counting to Py_None, > and I find this a little strange: isn't Py_None eternal and will never > die? > Semantically yes, but we have to technically make that happen. :) > > What's the point of counting its references? > It's just another Python object, so if you return it then the code that receives it may very well decref it because it always DECREFs the object returned. And if we didn't keep its count accurately it would eventually hit zero and constantly have its dealloc function checked for. We could add a magical "never dies" value for the refcount, but that adds another `if` branch in all the code instead of simply treating Py_None like any other object and properly tracking its reference. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.peters at gmail.com Sun Mar 20 13:07:47 2016 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 20 Mar 2016 12:07:47 -0500 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: [Facundo Batista ] > I'm seeing that our code increases the reference counting to Py_None, > and I find this a little strange: isn't Py_None eternal and will never > die? Yes, but it's immortal in CPython because its reference count never falls to 0 (it's created with a reference count of 1 to begin with). That's the only thing that makes it immortal. > What's the point of counting its references? Uniformity and simplicity: code using a PyObject* increments and decrements reference counts appropriately with no concern for what _kind_ of object is being pointed at. All objects (including None) are treated exactly the same way for refcount purposes. 
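
The uniformity described above can also be seen from pure Python; a minimal sketch follows. The counts are CPython implementation details and vary between builds and versions, so the exact numbers are not guaranteed.

    import sys

    # None is reference-counted like any other object; it stays alive simply
    # because its count never reaches zero.
    before = sys.getrefcount(None)
    refs = [None] * 1000        # the list takes 1000 new references to None
    after = sys.getrefcount(None)
    print(before, after, after - before)   # difference is about 1000 here
    del refs                    # and the references are given back again
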
From abarnert at yahoo.com Sun Mar 20 16:04:20 2016 From: abarnert at yahoo.com (Andrew Barnert) Date: Sun, 20 Mar 2016 13:04:20 -0700 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <56EECAC4.5050208@felt.demon.nl> References: <56EB3038.60402@felt.demon.nl> <56EB5253.10807@felt.demon.nl> <56EB5B49.80706@mrabarnett.plus.com> <036959DA-7006-4FF0-A984-7840E543CB7E@yahoo.com> <56EECAC4.5050208@felt.demon.nl> Message-ID: <24457946-1564-454A-9C17-467967143989@yahoo.com> On Mar 20, 2016, at 09:07, Michael Felt wrote: > >> On 2016-03-18 05:57, Andrew Barnert via Python-Dev wrote: >> Yeah, C99 (6.7.2.1) allows "a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type", and same for C11. This means that a compiler could easily allow an implementation-defined type that's identical to and interconvertible with short, say "i16", to be used in bitfields, but not short itself. >> >> And yet, gcc still allows short "even in strictly conforming mode" (4.9), and it looks like Clang and Intel do the same. >> >> Meanwhile, MSVC specifically says it's illegal ("The type-specifier for the declarator must be unsigned int, signed int, or int") but then defines the semantics (you can't have a 17-bit short, bit fields act as the underlying type when accessed, alignment is forced to a boundary appropriate for the underlying type). They do mention that allowing char and long types is a Microsoft extension, but still nothing about short, even though it's used in most of the examples on the page. >> >> Anyway, is the question what ctypes should do? If a platform's compiler allows "short M: 1", especially if it has potentially different alignment than "int M: 1", ctypes on that platform had better make ("M", c_short, 1) match the former, right? >> >> So it sounds like you need some configure switch to test that your compiler doesn't allow short bit fields, so your ctypes build at least skips that part of _ctypes_test.c and test_bitfields.py, and maybe even doesn't allow them in Python code. >> >> >>>> >> test_short fails om AIX when using xlC in any case. How terrible is this? > a) this does not look solveable using xlC, and I expect from the comment above re: MSVC, that it will, or should also fail there. > And, imho, if anything is to done, it is a decision to be made by "Python". Sure, but isn't that exactly why you're posting to this list? > b) aka - it sounds like a defect, at least in the test. Agreed. But I think the test is reasonable on at least MSVC, gcc, clang, and icc. So what you need is some way to run the test on those compilers, but not on compilers that can't handle it. So it sounds like you need a flag coming from autoconf that can be tested in C (and probably in Python as well) that tells you whether the compiler can handle it. And I don't think there is any such flag. Which means someone would have to add the configure test. And if people who use MSVC, gcc, and clang are all unaffected, I'm guessing that someone would have to be someone who cares about xlC or some other compiler, like you. The alternative would be to just change the docs to make it explicit that using non-int bitfields isn't supported but may work in platform-specific ways. If you got everyone to agree to that, surely you could just remove the tests, right? 
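
For reference, a minimal ctypes sketch of the construct at issue, a bit field declared with c_short rather than c_int. This is illustrative only; it is not the code from _ctypes_test.c or test_bitfields.py, and the reported size and layout are platform- and compiler-dependent.

    import ctypes

    class Shorts(ctypes.Structure):
        # Python-side equivalent of a C struct whose fields are declared
        # roughly as "short M : 1;" etc., the case xlC rejects.
        _fields_ = [
            ("M", ctypes.c_short, 1),
            ("N", ctypes.c_short, 2),
            ("O", ctypes.c_short, 3),
        ]

    s = Shorts()
    s.N = 1
    print(ctypes.sizeof(Shorts), s.M, s.N, s.O)
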
But if people are actually writing C code that follows the examples on the MSVC bitfield docs page, and need to talk to that code from ctypes, I don't know if it would be acceptable to stop officially supporting that. From tjreedy at udel.edu Sun Mar 20 17:47:17 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 Mar 2016 17:47:17 -0400 Subject: [Python-Dev] bitfields - short - and xlc compiler In-Reply-To: <24457946-1564-454A-9C17-467967143989@yahoo.com> References: <56EB3038.60402@felt.demon.nl> <56EB5253.10807@felt.demon.nl> <56EB5B49.80706@mrabarnett.plus.com> <036959DA-7006-4FF0-A984-7840E543CB7E@yahoo.com> <56EECAC4.5050208@felt.demon.nl> <24457946-1564-454A-9C17-467967143989@yahoo.com> Message-ID: On 3/20/2016 4:04 PM, Andrew Barnert via Python-Dev wrote: > Agreed. But I think the test is reasonable on at least MSVC, gcc, clang, and icc. So what you need is some way to run the test on those compilers, but not on compilers that can't handle it. The test could be conditioned on the compiler used. >>> platform.python_compiler() 'MSC v.1900 64 bit (AMD64)' The doc could say something about the feature only being available with certain compilers. -- Terry Jan Reedy From guido at python.org Mon Mar 21 12:15:41 2016 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Mar 2016 09:15:41 -0700 Subject: [Python-Dev] PEP 484: a "NewType" constructor Message-ID: Here's one more thing we'd like to add to PEP 484. The description is best gleaned from the issue, in particular https://github.com/python/mypy/issues/1284#issuecomment-199021176 and following (we're going with option (A)). Really brief example: from typing import NewType UserId = NewType('UserId', int) Now to the type checker UserId is a new type that's compatible with int, but converting an int to a UserId requires a special cast form, UserId(x). At runtime UserId instances are just ints (not a subclass!) and UserId() is a dummy function that just returns its argument. For use cases see the issue. Also send feedback there please. -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 21 12:20:16 2016 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Mar 2016 09:20:16 -0700 Subject: [Python-Dev] PEP 484: a "NewType" constructor In-Reply-To: References: Message-ID: Sorry, for PEP feedback it's best to use this issue in the typing tracker: https://github.com/python/typing/issues/189 (the issue I linked to was in the mypy tracker). On Mon, Mar 21, 2016 at 9:15 AM, Guido van Rossum wrote: > Here's one more thing we'd like to add to PEP 484. The description is > best gleaned from the issue, in particular > https://github.com/python/mypy/issues/1284#issuecomment-199021176 and > following (we're going with option (A)). > > Really brief example: > > from typing import NewType > UserId = NewType('UserId', int) > > Now to the type checker UserId is a new type that's compatible with > int, but converting an int to a UserId requires a special cast form, > UserId(x). At runtime UserId instances are just ints (not a subclass!) > and UserId() is a dummy function that just returns its argument. > > For use cases see the issue. Also send feedback there please. 
> > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 21 17:11:12 2016 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Mar 2016 14:11:12 -0700 Subject: [Python-Dev] PEP 484: updates to Python 2.7 signature syntax In-Reply-To: References: Message-ID: This seemed pretty uncontroversial -- I've updated the PEP (including a long(ish) example :-). On Sat, Mar 19, 2016 at 6:54 PM, Guido van Rossum wrote: > Heh. I could add an example with a long list of parameters with long > names, but apart from showing by example what the motivation is it > wouldn't really add anything, and it's more to type. :-) > > On Sat, Mar 19, 2016 at 6:43 PM, Andrew Barnert wrote: >> On Mar 19, 2016, at 18:18, Guido van Rossum wrote: >>> >>> Second, https://github.com/python/typing/issues/186. This builds on >>> the previous syntax but deals with the other annoyance of long >>> argument lists, this time in case you *do* care about the types. The >>> proposal is to allow writing the arguments one per line with a type >>> comment on each line. This has been implemented in PyCharm but not yet >>> in mypy. Example: >>> >>> def gcd( >>> a, # type: int >>> b, # type: int >>> ): >>> # type: (...) -> int >>> >> >> This is a lot nicer than what you were originally discussing (at #1101? I forget...). Even more so given how trivial it will be to mechanically convert these to annotations if/when you switch an app to pure Python 3. >> >> But one thing: in the PEP and the docs, I think it would be better to pick an example with longer parameter names. This example shows that even in the worst case it isn't that bad, but a better example would show that in the typical case it's actually pretty nice. (Also, I don't see why you wouldn't just use the "old" comment form for this example, since it all fits on one line and isn't at all confusing.) >> > > > > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 21 17:11:38 2016 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Mar 2016 14:11:38 -0700 Subject: [Python-Dev] PEP 484 update: allow @overload in regular module files In-Reply-To: References: Message-ID: This seemed pretty uncontroversial. I've updated the PEP. On Sat, Mar 19, 2016 at 6:52 PM, Guido van Rossum wrote: > Here's another proposal for a change to PEP 484. > > In https://github.com/python/typing/issues/72 there's a long > discussion ending with a reasonable argument to allow @overload in > (non-stub) modules after all. > > This proposal does *not* sneak in a syntax for multi-dispatch -- the > @overload versions are only for the benefit of type checkers while a > single non- at overload implementation must follow that handles all > cases. In fact, I expect that if we ever end up adding multi-dispatch > to the language or library, it will neither replace not compete with > @overload, but the two will most likely be orthogonal to each other, > with @overload aiming at a type checker and some other multi-dispatch > aiming at the interpreter. (The needs of the two cases are just too > different -- e.g. it's hard to imagine multi-dispatch in Python use > type variables.) More details in the issue (that's also where I'd like > to get feedback if possible). > > I want to settle this before 3.5.2 goes out, because it requires a > change to typing.py in the stdlib. 
Fortunately the change will be > backward compatible (even though this isn't strictly required for a > provisional module). In the original typing module, any use of > @overload outside a stub is an error (it raises as soon as it's used). > In the new proposal, you can decorate a function with @overload, but > any attempt to call such a decorated function raises an error. This > should catch cases early where you forget to provide an > implementation. > > (Reference for provisional modules: https://www.python.org/dev/peps/pep-0411/) > > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From arigo at tunes.org Mon Mar 21 17:42:46 2016 From: arigo at tunes.org (Armin Rigo) Date: Mon, 21 Mar 2016 22:42:46 +0100 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: Hi, On 20 March 2016 at 18:10, Brett Cannon wrote: > And if we didn't keep its count accurately it would eventually hit > zero and constantly have its dealloc function checked for. I think the idea is really consistency. If we wanted to avoid all "Py_INCREF(Py_None);", it would be possible: we could let the refcount of None decrement to zero, at which point its deallocator is called; but this deallocator can simply bumps the refcount to a large value again. The deallocator would end up being called very rarely. A bient?t, Armin. From tim.peters at gmail.com Mon Mar 21 17:56:06 2016 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 21 Mar 2016 16:56:06 -0500 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: Brett Cannon ] >> And if we didn't keep its count accurately it would eventually hit >> zero and constantly have its dealloc function checked for. [Armin Rigo] [> I think the idea is really consistency. If we wanted to avoid all > "Py_INCREF(Py_None);", it would be possible: we could let the refcount > of None decrement to zero, at which point its deallocator is called; > but this deallocator can simply bumps the refcount to a large value > again. The deallocator would end up being called very rarely. It could, but it does something else now ;-) I've seen this trigger, from C code that had no idea it was playing with None, but just had general refcounting errors. So this does serve a debugging purpose, although rarely: static void none_dealloc(PyObject* ignore) { /* This should never get called, but we also don't want to SEGV if * we accidentally decref None out of existence. */ Py_FatalError("deallocating None"); } (that's in object.c) From alexander.belopolsky at gmail.com Mon Mar 21 18:35:40 2016 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 21 Mar 2016 18:35:40 -0400 Subject: [Python-Dev] Counting references to Py_None In-Reply-To: References: Message-ID: On Mon, Mar 21, 2016 at 5:56 PM, Tim Peters wrote: > I've seen this trigger, > from C code that had no idea it was playing with None, but just had > general refcounting errors. So this does serve a debugging purpose, > although rarely > You probably have a better refcounting sense that I do, but I see this quite often when I hack C code and I find this behavior quite useful when debugging. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From storchaka at gmail.com Wed Mar 23 13:37:31 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 23 Mar 2016 19:37:31 +0200 Subject: [Python-Dev] cpython: hashtable.h now supports keys of any size In-Reply-To: <20160321210149.61621.76429.F0CE1C70@psf.io> References: <20160321210149.61621.76429.F0CE1C70@psf.io> Message-ID: On 21.03.16 23:01, victor.stinner wrote: > https://hg.python.org/cpython/rev/aca4e9af1ca6 > changeset: 100640:aca4e9af1ca6 > user: Victor Stinner > date: Mon Mar 21 22:00:58 2016 +0100 > summary: > hashtable.h now supports keys of any size > > Issue #26588: hashtable.h now supports keys of any size, not only > sizeof(void*). It allows to support key larger than sizeof(void*), but also to > use less memory for key smaller than sizeof(void*). If key size is compile time constant, Py_MEMCPY() and memcpy() can be optimized in one machine instruction. If it is ht->key_size, it adds more overhead. These changes can have negative performance effect. It can be eliminated if pass a compile time constant to _Py_HASHTABLE_ENTRY_READ_KEY() etc. From storchaka at gmail.com Wed Mar 23 13:42:47 2016 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 23 Mar 2016 19:42:47 +0200 Subject: [Python-Dev] cpython: hashtable.h now supports keys of any size In-Reply-To: References: <20160321210149.61621.76429.F0CE1C70@psf.io> Message-ID: On 23.03.16 19:37, Serhiy Storchaka wrote: > On 21.03.16 23:01, victor.stinner wrote: >> https://hg.python.org/cpython/rev/aca4e9af1ca6 >> changeset: 100640:aca4e9af1ca6 >> user: Victor Stinner >> date: Mon Mar 21 22:00:58 2016 +0100 >> summary: >> hashtable.h now supports keys of any size >> >> Issue #26588: hashtable.h now supports keys of any size, not only >> sizeof(void*). It allows to support key larger than sizeof(void*), but >> also to >> use less memory for key smaller than sizeof(void*). > > If key size is compile time constant, Py_MEMCPY() and memcpy() can be > optimized in one machine instruction. If it is ht->key_size, it adds > more overhead. These changes can have negative performance effect. > > It can be eliminated if pass a compile time constant to > _Py_HASHTABLE_ENTRY_READ_KEY() etc. > Please ignore this message. It was sent to Python-Dev by mistake. From jimjjewett at gmail.com Wed Mar 23 16:39:28 2016 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Wed, 23 Mar 2016 13:39:28 -0700 (PDT) Subject: [Python-Dev] [Python-ideas] Add citation() to site.py In-Reply-To: <87io0goq04.fsf@vostro.rath.org> Message-ID: <56f2ff00.01c98c0a.73b88.ffff8d0b@mx.google.com> On Sun Mar 20 16:26:03 EDT 2016, Nikolaus Rath wrote: > Which I believe makes it completely pointless to cite Python at all. As > far as I can see, nowadays citations are given for two reasons: > 1. To give the reader a starting point to get more information on a > topic. I don't often see references to good "starting points", but I'll grant the "get more information". > 2. To formally acknowledge the work done by someone else (who ends up > with an increased number of citations for the cited publication, > which is unfortunately a crucial metric in most academic hiring and > evaluation processes). There is a third category, of reader service. When I as a reader have wanted to follow a citation, it was because I wanted to know more about the specific claim it supposedly supported. In a few cases -- and these were probably the cases most valuable to the authors -- I wanted to build on the work, or test it out under new conditions. 
Ideally, my first step was to replicate the original result, to ensure that anything new I found was really caused by the intentional changes. If I was looking at a computational model, I really didn't even have the excuse of "too expensive to run that many subjects." For papers more than a few years old, even if the code was available, it generally didn't run -- and often didn't even compile. Were there a few missing utility files, or had they been using a language variant different from what had eventually become the standard? Obviously, it would have been better to just get a copy of the original environment, PDP and all. In real life, it was very helpful to know which version of which compiler the authors had been using. Even the authors who had managed to save their code didn't generally remember that level of detail about the original environment. Python today has much better backwards compatibility, but ... if some junior grad student (maybe not in CS) today came across code raising strings instead of Exceptions, how confident would she be that she had the real code, as opposed to a mangled transcription? Would it help if the paper had a citation that specified CPython 2.1 and she could still download a version of that ... where it worked? -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From mohitkumra95 at gmail.com Thu Mar 24 09:10:17 2016 From: mohitkumra95 at gmail.com (Mohit Kumra) Date: Thu, 24 Mar 2016 18:40:17 +0530 Subject: [Python-Dev] GSOC_2016_ROUNDUP_PROPOSAL Message-ID: I am extremely interested in working for the Roundup issue tracker's github integration part.I am proficient in python and git and have worked before also on the git. It is requested to review the proposal for the same. Thank you for the consideration.A quick reply is appreciated. Thanking You Mohit Kumra mohitkumra95 at gmail.com 91-9718388335 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GSOC_2016_ROUNDUP_PROPOSAL.pdf Type: application/pdf Size: 116425 bytes Desc: not available URL: From victor.stinner at gmail.com Fri Mar 25 08:11:31 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 25 Mar 2016 13:11:31 +0100 Subject: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance In-Reply-To: References: <56B3254F.7020605@egenix.com> <56B34A1E.4010501@egenix.com> <56B35AB5.5090308@egenix.com> <56BDDEA3.2060702@egenix.com> Message-ID: So what do you think? Is it worth to change PyMem_Malloc() allocator to pymalloc for a small speedup? Should we do something else before doing that? Or do you expect that too many applications use PyMem_Malloc() without holding the GIL and will not run try to run their application with PYTHONMALLOC=debug? Victor 2016-03-15 0:19 GMT+01:00 Victor Stinner : > 2016-02-12 14:31 GMT+01:00 M.-A. Lemburg : >>>> If your program has bugs, you can use a debug build of Python 3.5 to >>>> detect misusage of the API. >> >> Yes, but people don't necessarily do this, e.g. I have >> for a very long time ignored debug builds completely >> and when I started to try them, I found that some of the >> things I had been doing with e.g. free list implementations >> did not work in debug builds. > > I just added support for debug hooks on Python memory allocators on > Python compiled in *release* mode. Set the environment variable > PYTHONMALLOC to debug to try with Python 3.6. 
> > I added a check on PyObject_Malloc() debug hook to ensure that the > function is called with the GIL held. I opened an issue to add a > similar check on PyMem_Malloc(): > https://bugs.python.org/issue26563 > > >> Yes, but those are part of the stdlib. You'd need to check >> a few C extensions which are not tested as part of the stdlib, >> e.g. numpy, scipy, lxml, pillow, etc. (esp. ones which implement custom >> types in C since these will often need the memory management >> APIs). >> >> It may also be a good idea to check wrapper generators such >> as cython, swig, cffi, etc. > > I ran the test suite of numpy, lxml, Pillow and cryptography (used cffi). > > I found a bug in numpy. numpy calls PyMem_Malloc() without holding the GIL: > https://github.com/numpy/numpy/pull/7404 > > Except of this bug, all other tests pass with PyMem_Malloc() using > pymalloc and all debug checks. > > Victor From status at bugs.python.org Fri Mar 25 13:08:36 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 25 Mar 2016 18:08:36 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160325170836.4224256919@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-03-18 - 2016-03-25) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5461 ( +2) closed 32938 (+53) total 38399 (+55) Open issues with patches: 2379 Issues opened (32) ================== #20021: "modernize" makeopcodetargets.py http://bugs.python.org/issue20021 reopened by serhiy.storchaka #26589: Add HTTP Response code 451 http://bugs.python.org/issue26589 opened by rhettinger #26591: datetime datetime.time to datetime.time comparison does nothin http://bugs.python.org/issue26591 opened by jason crockett #26597: Document how to cite Python http://bugs.python.org/issue26597 opened by steven.daprano #26600: MagickMock __str__ sometimes returns MagickMock instead of str http://bugs.python.org/issue26600 opened by Stanis??aw Skonieczny (Uosiu) #26601: Use new madvise()'s MADV_FREE on the private heap http://bugs.python.org/issue26601 opened by StyXman #26602: argparse doc introduction is inappropriately targeted http://bugs.python.org/issue26602 opened by Daniel Stone #26606: logging.baseConfig is missing the encoding parameter http://bugs.python.org/issue26606 opened by janis.slapins #26608: RLock undocumented behavior in case of multiple acquire http://bugs.python.org/issue26608 opened by smbrd #26609: Wrong request target in test_httpservers.py http://bugs.python.org/issue26609 opened by xiang.zhang #26610: test_venv.test_with_pip() fails when ctypes is missing http://bugs.python.org/issue26610 opened by haypo #26612: test_ssl: use context manager (with) to fix ResourceWarning http://bugs.python.org/issue26612 opened by haypo #26615: Missing entry in WRAPPER_ASSIGNMENTS in update_wrapper's doc http://bugs.python.org/issue26615 opened by xiang.zhang #26616: A bug in datetime.astimezone() method http://bugs.python.org/issue26616 opened by belopolsky #26617: Assertion failed in gc with __del__ and weakref http://bugs.python.org/issue26617 opened by guojiahua #26618: _overlapped extension module of asyncio uses deprecated WSAStr http://bugs.python.org/issue26618 opened by haypo #26619: 3.5.1 install fails on Windows Server 2008 R2 64-bit http://bugs.python.org/issue26619 opened by sdmorris #26623: JSON encode: more informative error http://bugs.python.org/issue26623 opened by Mahmoud Lababidi 
#26624: Windows hangs in call to CRT setlocale() http://bugs.python.org/issue26624 opened by jkloth #26626: test_dbm_gnu http://bugs.python.org/issue26626 opened by ink #26627: IDLE incorrectly labeling error as internal http://bugs.python.org/issue26627 opened by Tadhg McDonald-Jensen #26628: Undefined behavior calling C functions with ctypes.Union argum http://bugs.python.org/issue26628 opened by tilsche #26629: Need an ability to build standard DLLs with distutils http://bugs.python.org/issue26629 opened by Buraddin Ibn-Karlo #26631: Unable to install Python 3.5.1 on Windows 10 - Error 0x8007064 http://bugs.python.org/issue26631 opened by oselljr #26632: __all__ decorator http://bugs.python.org/issue26632 opened by barry #26633: multiprocessing behavior combining daemon with non-daemon chil http://bugs.python.org/issue26633 opened by josh.r #26634: recursive_repr forgets to override __qualname__ of wrapper http://bugs.python.org/issue26634 opened by xiang.zhang #26638: Avoid warnings about missing CLI options when building documen http://bugs.python.org/issue26638 opened by martin.panter #26639: Tools/i18n/pygettext.py: replace deprecated imp module with im http://bugs.python.org/issue26639 opened by haypo #26640: xmlrpc.server imports xmlrpc.client http://bugs.python.org/issue26640 opened by Valentin.Lorentz #26641: doctest doesn't support packages http://bugs.python.org/issue26641 opened by haypo #26642: Replace stdout and stderr with simple standard printers at Pyt http://bugs.python.org/issue26642 opened by haypo Most recent 15 issues with no replies (15) ========================================== #26642: Replace stdout and stderr with simple standard printers at Pyt http://bugs.python.org/issue26642 #26641: doctest doesn't support packages http://bugs.python.org/issue26641 #26639: Tools/i18n/pygettext.py: replace deprecated imp module with im http://bugs.python.org/issue26639 #26638: Avoid warnings about missing CLI options when building documen http://bugs.python.org/issue26638 #26631: Unable to install Python 3.5.1 on Windows 10 - Error 0x8007064 http://bugs.python.org/issue26631 #26629: Need an ability to build standard DLLs with distutils http://bugs.python.org/issue26629 #26626: test_dbm_gnu http://bugs.python.org/issue26626 #26618: _overlapped extension module of asyncio uses deprecated WSAStr http://bugs.python.org/issue26618 #26615: Missing entry in WRAPPER_ASSIGNMENTS in update_wrapper's doc http://bugs.python.org/issue26615 #26609: Wrong request target in test_httpservers.py http://bugs.python.org/issue26609 #26606: logging.baseConfig is missing the encoding parameter http://bugs.python.org/issue26606 #26600: MagickMock __str__ sometimes returns MagickMock instead of str http://bugs.python.org/issue26600 #26589: Add HTTP Response code 451 http://bugs.python.org/issue26589 #26584: pyclbr module needs to be more flexible on loader support http://bugs.python.org/issue26584 #26579: Support pickling slots in subclasses of common classes http://bugs.python.org/issue26579 Most recent 15 issues waiting for review (15) ============================================= #26642: Replace stdout and stderr with simple standard printers at Pyt http://bugs.python.org/issue26642 #26641: doctest doesn't support packages http://bugs.python.org/issue26641 #26639: Tools/i18n/pygettext.py: replace deprecated imp module with im http://bugs.python.org/issue26639 #26638: Avoid warnings about missing CLI options when building documen http://bugs.python.org/issue26638 #26634: recursive_repr forgets to 
override __qualname__ of wrapper http://bugs.python.org/issue26634 #26623: JSON encode: more informative error http://bugs.python.org/issue26623 #26616: A bug in datetime.astimezone() method http://bugs.python.org/issue26616 #26615: Missing entry in WRAPPER_ASSIGNMENTS in update_wrapper's doc http://bugs.python.org/issue26615 #26612: test_ssl: use context manager (with) to fix ResourceWarning http://bugs.python.org/issue26612 #26609: Wrong request target in test_httpservers.py http://bugs.python.org/issue26609 #26606: logging.baseConfig is missing the encoding parameter http://bugs.python.org/issue26606 #26602: argparse doc introduction is inappropriately targeted http://bugs.python.org/issue26602 #26589: Add HTTP Response code 451 http://bugs.python.org/issue26589 #26587: Possible duplicate entries in sys.path if .pth files are used http://bugs.python.org/issue26587 #26586: Simple enhancement to BaseHTTPRequestHandler http://bugs.python.org/issue26586 Top 10 most discussed issues (10) ================================= #23551: IDLE to provide menu link to PIP gui. http://bugs.python.org/issue23551 15 msgs #25654: test_multiprocessing_spawn ResourceWarning with -Werror http://bugs.python.org/issue25654 14 msgs #19829: _pyio.BufferedReader and _pyio.TextIOWrapper destructor don't http://bugs.python.org/issue19829 10 msgs #26506: hex() documentation: mention "%x" % int http://bugs.python.org/issue26506 10 msgs #26624: Windows hangs in call to CRT setlocale() http://bugs.python.org/issue26624 9 msgs #26585: Use html.escape to replace _quote_html in http.server http://bugs.python.org/issue26585 8 msgs #26632: __all__ decorator http://bugs.python.org/issue26632 8 msgs #26587: Possible duplicate entries in sys.path if .pth files are used http://bugs.python.org/issue26587 7 msgs #23735: Readline not adjusting width after resize with 6.3 http://bugs.python.org/issue23735 6 msgs #10740: sqlite3 module breaks transactions and potentially corrupts da http://bugs.python.org/issue10740 5 msgs Issues closed (52) ================== #10305: Cleanup up ResourceWarnings in multiprocessing http://bugs.python.org/issue10305 closed by haypo #10894: Making stdlib APIs private http://bugs.python.org/issue10894 closed by SilentGhost #10908: Improvements to trace._Ignore http://bugs.python.org/issue10908 closed by SilentGhost #12813: uuid4 is not tested if a uuid4 system routine isn't present http://bugs.python.org/issue12813 closed by berker.peksag #16151: Deferred KeyboardInterrupt in interactive mode http://bugs.python.org/issue16151 closed by serhiy.storchaka #17167: python man page contains $Date$ in page footer http://bugs.python.org/issue17167 closed by python-dev #18787: Misleading error from getspnam function of spwd module http://bugs.python.org/issue18787 closed by berker.peksag #19164: Update uuid.UUID TypeError exception: integer should not be an http://bugs.python.org/issue19164 closed by berker.peksag #19265: Increased test coverage for datetime pickling http://bugs.python.org/issue19265 closed by berker.peksag #21925: ResourceWarning sometimes doesn't display http://bugs.python.org/issue21925 closed by haypo #23723: Provide a way to disable bytecode staleness checks http://bugs.python.org/issue23723 closed by brett.cannon #23848: faulthandler: setup an exception handler on Windows http://bugs.python.org/issue23848 closed by haypo #23857: Make default HTTPS certificate verification setting configurab http://bugs.python.org/issue23857 closed by ncoghlan #24266: raw_input + readline: Ctrl+C during 
search breaks readline http://bugs.python.org/issue24266 closed by martin.panter #26076: redundant checks in tok_get in Parser\tokenizer.c http://bugs.python.org/issue26076 closed by python-dev #26095: Update porting HOWTO to special-case Python 2 code, not Python http://bugs.python.org/issue26095 closed by brett.cannon #26250: no document for sqlite3.Cursor.connection http://bugs.python.org/issue26250 closed by ezio.melotti #26252: Add an example to importlib docs on setting up an importer http://bugs.python.org/issue26252 closed by brett.cannon #26258: readline module for python 3.x on windows http://bugs.python.org/issue26258 closed by ezio.melotti #26271: freeze.py makefile uses the wrong flags variables http://bugs.python.org/issue26271 closed by brett.cannon #26281: Clear sys.path_importer_cache from importlib.invalidate_caches http://bugs.python.org/issue26281 closed by brett.cannon #26283: zipfile can not handle the path build by os.path.join() http://bugs.python.org/issue26283 closed by ezio.melotti #26396: Create json.JSONType http://bugs.python.org/issue26396 closed by brett.cannon #26525: Documentation of ord(c) easy to misread http://bugs.python.org/issue26525 closed by terry.reedy #26560: Error in assertion in wsgiref.handlers.BaseHandler.start_respo http://bugs.python.org/issue26560 closed by berker.peksag #26567: ResourceWarning: Use tracemalloc to display the traceback wher http://bugs.python.org/issue26567 closed by haypo #26574: replace_interleave can be optimized for single character byte http://bugs.python.org/issue26574 closed by haypo #26581: Double coding cookie http://bugs.python.org/issue26581 closed by serhiy.storchaka #26588: _tracemalloc: add support for multiple address spaces (domains http://bugs.python.org/issue26588 closed by haypo #26590: socket destructor: implement finalizer http://bugs.python.org/issue26590 closed by haypo #26592: _warnings.warn_explicit() should try to import the warnings mo http://bugs.python.org/issue26592 closed by haypo #26593: silly typo in logging cookbook http://bugs.python.org/issue26593 closed by berker.peksag #26594: Tutorial example IndentationError? http://bugs.python.org/issue26594 closed by SilentGhost #26595: Segfault on Pointer operation http://bugs.python.org/issue26595 closed by Emin Ghuliev #26596: numpy.all np.all .all http://bugs.python.org/issue26596 closed by eryksun #26598: Embbedable zip does not import modules in (zip)subdirectory http://bugs.python.org/issue26598 closed by zach.ware #26599: segfault at 24 error 6 in python http://bugs.python.org/issue26599 closed by SilentGhost #26603: os.scandir: implement finalizer (for ResourceWarning) http://bugs.python.org/issue26603 closed by haypo #26604: Add optional source parameter to warnings.warn() http://bugs.python.org/issue26604 closed by haypo #26605: Feature request: string method `to_file` http://bugs.python.org/issue26605 closed by SilentGhost #26607: Rename a parameter in the function PyFile_FromFile http://bugs.python.org/issue26607 closed by SilentGhost #26611: assertRaises callableObj cannot be used as a keyword with args http://bugs.python.org/issue26611 closed by SilentGhost #26613: Descriptor HowTo Guide - Typo http://bugs.python.org/issue26613 closed by rhettinger #26614: False/0 and True/1 collision when used as dict keys? 
http://bugs.python.org/issue26614 closed by ethan.furman #26620: Fix ResourceWarning warnings in test_urllib2_localnet http://bugs.python.org/issue26620 closed by python-dev #26621: test_decimal fails with libmpdecimal 2.4.2 http://bugs.python.org/issue26621 closed by skrah #26622: test_winreg now logs "Windows exception: code 0x06ba" on Pytho http://bugs.python.org/issue26622 closed by haypo #26625: SELECT-initiated transactions can cause "database is locked" i http://bugs.python.org/issue26625 closed by rhunter #26630: Windows EXE extension installers not finding 32bit Python 3.5 http://bugs.python.org/issue26630 closed by eryksun #26635: Python default date does not match Unix time http://bugs.python.org/issue26635 closed by haypo #26636: SystemError while running tests: extra exception raised http://bugs.python.org/issue26636 closed by haypo #26637: importlib: better error message when import fail during Python http://bugs.python.org/issue26637 closed by haypo From tjreedy at udel.edu Sun Mar 27 02:13:25 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 27 Mar 2016 02:13:25 -0400 Subject: [Python-Dev] Adding a Pip GUI to IDLE and idlelib (GSOC project) Message-ID: Summary: There are two prospective Google Summer of Code (GSOC) students applying to work on writing a gui interface to the basic pip functions needed by beginners. I expect Google to accept their proposals. Before I commit to mentoring a student (sometime in April), I would like to be sure, by addressing any objections now, that I will be able to commit the code when ready (August or before). Long version: In February 2015, Raymond Hettinger opened tracker issue "IDLE to provide menu options for using PIP" https://bugs.python.org/issue23551#msg236906 The menu options would presumably open dialog boxes defined in a new module such as idlelib.pipgui. Raymond gave a list of 9 features he thought would be useful to pip beginners. Donald Stufft (pip maintainer) answered that he already wanted someone to write a pip gui, to be put somewhere, and that he would give advice on interfacing (which he has). I answered that I had also had a vague idea of a pip gui, and thought it should be a stand-alone window invoked by a single IDLE menu item, just as turtledemo can be now. Instead of multiple dialogs (for multiple IDLE menu items), there could be, for instance, multiple tabs in a ttk.Notebook. Some pages might implement more than 1 of the features on Raymond's list. Last September, I did some proof-of-concept experiments and changed the title to "IDLE to provide menu link to PIP gui". In January, when Terri Oda requested Core Python GSOC project ideas, I suggested the pip gui project. I believe Raymond's list can easily be programmed in the time alloted. I also volunteered to help mentor. Since then, two students have submitted competent prototypes (on the tracker issue above) that show that they can write a basic tkinter app and revise in response to reviews. My current plan is to add idlelib/pipgui.py (or perhaps pip.py) to 3.5 and 3.6. The file will be structured so that it can either be run as a separate process ('python -m idlelib.pipgui' either at a console or in a subprocess call) or imported into a running process. IDLE would currently use a subprocess call, but if IDLE is restructured into a single-window, multi-tab application, it might switch to using an import. I would document the new IDLE menu entry in the current IDLE page. 
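
As an illustration of the subprocess-based approach just described, here is a minimal sketch. The helper name and behavior are assumptions for illustration, not the planned idlelib.pipgui design.

    import subprocess
    import sys

    def run_pip(*args):
        """Run 'python -m pip <args>' in a subprocess and return its exit
        status and combined output.  A GUI front end would call something
        like this from an event handler and show the output in a text
        widget."""
        proc = subprocess.run(
            [sys.executable, "-m", "pip"] + list(args),
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        return proc.returncode, proc.stdout.decode("utf-8", errors="replace")

    # e.g. status, text = run_pip("list", "--outdated")
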
Separately from the pip gui project, I plan, at some point, to add a new
'idlelib' section that documents public entry points to generally useful
idlelib components.  If I do that before next August, I would add an entry
for pipgui (which would say that details of the GUI are subject to change).

Possible objections:

1. One might argue that if pipgui is written so as to not depend on IDLE,
then it, like turtledemo, should be located elsewhere, possibly in
Tools/scripts.  I would answer that managing packages, unlike running
turtle demos, *is* an IDE function.

2. One might argue that adding a new module with a public entry point, in
a maintenance release, somehow abuses the license granted by PEP 434, in a
way that declaring a public interface in an existing module would not.  If
this is sustained, I could not document the new module for 3.5.

Thoughts?

-- 
Terry Jan Reedy

From ethan at stoneleaf.us  Mon Mar 28 12:50:12 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 28 Mar 2016 09:50:12 -0700
Subject: [Python-Dev] Adding a Pip GUI to IDLE and idlelib (GSOC project)
In-Reply-To: 
References: 
Message-ID: <56F960C4.2050201@stoneleaf.us>

On 03/26/2016 11:13 PM, Terry Reedy wrote:

> Summary: There are two prospective Google Summer of Code (GSOC) students
> applying to work on writing a gui interface to the basic pip functions
> needed by beginners.  I expect Google to accept their proposals.  Before
> I commit to mentoring a student (sometime in April), I would like to be
> sure, by addressing any objections now, that I will be able to commit
> the code when ready (August or before).

> Thoughts?

I think it's a great idea, and have no objections.  `pip` is now included
by default, yes?  In those cases where it isn't, IDLE could let the user
know they need to install it.

--
~Ethan~

From tjreedy at udel.edu  Mon Mar 28 14:21:38 2016
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 28 Mar 2016 14:21:38 -0400
Subject: [Python-Dev] Adding a Pip GUI to IDLE and idlelib (GSOC project)
In-Reply-To: <56F960C4.2050201@stoneleaf.us>
References: <56F960C4.2050201@stoneleaf.us>
Message-ID: 

On 3/28/2016 12:50 PM, Ethan Furman wrote:
> On 03/26/2016 11:13 PM, Terry Reedy wrote:
>
>> Summary: There are two prospective Google Summer of Code (GSOC)
>> students applying to work on writing a gui interface to the basic pip
>> functions needed by beginners.  I expect Google to accept their
>> proposals.  Before I commit to mentoring a student (sometime in
>> April), I would like to be sure, by addressing any objections now,
>> that I will be able to commit the code when ready (August or before).
>
>> Thoughts?
>
> I think it's a great idea, and have no objections.  `pip` is now
> included by default, yes?

The stdlib includes the ensurepip package.  Our Windows and, I believe,
OSX installers run ensurepip.__init__._main.

> In those cases where it isn't, IDLE could let
> the user know they need to install it.

"Ensure pip / Upgrade pip" is the first feature on Raymond's list.

-- 
Terry Jan Reedy

From benhoyt at gmail.com  Mon Mar 28 20:55:12 2016
From: benhoyt at gmail.com (Ben Hoyt)
Date: Mon, 28 Mar 2016 20:55:12 -0400
Subject: [Python-Dev] Scandir module seeking new maintainer
Message-ID: 

Hi folks,

I'm not sure if this is the right place to ask, but seeing as I was fairly
active here when developing scandir and getting it into Python 3.5, I
thought I'd start here.

I'm the author and current maintainer of the scandir module (Python 3.5's
os.scandir but for Python 2.x and 3.x before 3.5).
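For readers who have not used the backport: the usual pattern is a
compatibility import that prefers os.scandir on 3.5+ and falls back to the
PyPI package elsewhere.  The sketch below assumes the backport exposes
scandir and walk at the top level; it is an illustration, not code from
this thread:

    # Prefer the stdlib implementation, fall back to the PyPI backport.
    try:
        from os import scandir, walk          # Python 3.5+
    except ImportError:
        from scandir import scandir, walk     # PyPI backport for 2.x / older 3.x

    def total_size(path):
        """Sum regular-file sizes in a directory without extra stat() calls."""
        total = 0
        for entry in scandir(path):
            if entry.is_file(follow_symlinks=False):
                total += entry.stat(follow_symlinks=False).st_size
        return total

Code written this way keeps working unchanged once a project eventually
drops support for the older Pythons.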
But I've taken on a few new non-programming projects and found I don't
have time for the little improvements and fixes that are now in CPython's
os.scandir and should be backported into the PyPI module.

So I'm looking for someone who wants to take over the maintenance of the
scandir PyPI module.  It wouldn't be much work, as there are only a couple
of fairly small issues to fix.  Or it might be a good way to start for
someone who wants to contribute to a small but useful open source module
in the Python ecosystem.

Any takers, or suggestions for where I could look further for a
maintainer?

Thanks,
Ben

From ethan at stoneleaf.us  Mon Mar 28 22:25:49 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 28 Mar 2016 19:25:49 -0700
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: <56F9E7AD.2010102@stoneleaf.us>

On 03/28/2016 05:55 PM, Ben Hoyt wrote:

> I'm not sure if this is the right place to ask, but seeing as I was
> fairly active here when developing scandir and getting it into Python
> 3.5, I thought I'd start here.

It's certainly a good place to start.

> So I'm looking for someone who wants to take over the maintenance of the
> scandir PyPI module.  It wouldn't be much work, as there are only a
> couple of fairly small issues to fix.  Or it might be a good way to
> start for someone who wants to contribute to a small but useful open
> source module in the Python ecosystem.

I'd be happy to, although I only have access to linux and MS Windows.
This would be good prompting for me to get a few other *nixes going,
though (although I don't see MacOSX happening).

~Ethan~

From victor.stinner at gmail.com  Tue Mar 29 08:06:10 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 29 Mar 2016 14:06:10 +0200
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: 

Hi,

2016-03-29 2:55 GMT+02:00 Ben Hoyt :
> I'm the author and current maintainer of the scandir module (Python 3.5's
> os.scandir but for Python 2.x and 3.x before 3.5).  But I've taken on a
> few new non-programming projects and found I don't have time for the
> little improvements and fixes that are now in CPython's os.scandir and
> should be backported into the PyPI module.
>
> So I'm looking for someone who wants to take over the maintenance of the
> scandir PyPI module.  It wouldn't be much work, as there are only a
> couple of fairly small issues to fix.  Or it might be a good way to start
> for someone who wants to contribute to a small but useful open source
> module in the Python ecosystem.

FYI I just finished (I hope!) fixing a bug in os.walk(bytes) on Windows
with the "emulated" scandir (since os.scandir() only works on Unicode on
Python 3.5+):
http://bugs.python.org/issue25911

Since I helped you to write your PEP 471 (scandir) as the BDFL-delegate, I
think that I now understand scandir() well.  So I can help you to maintain
your backport.

Victor

From benhoyt at gmail.com  Tue Mar 29 08:09:55 2016
From: benhoyt at gmail.com (Ben Hoyt)
Date: Tue, 29 Mar 2016 08:09:55 -0400
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: <56F9E7AD.2010102@stoneleaf.us>
References: <56F9E7AD.2010102@stoneleaf.us>
Message-ID: 

>> So I'm looking for someone who wants to take over the maintenance of the
>> scandir PyPI module.  It wouldn't be much work, as there are only a
>> couple of fairly small issues to fix.
>> Or it might be a good way to start for someone who wants to contribute
>> to a small but useful open source module in the Python ecosystem.
>
> I'd be happy to, although I only have access to linux and MS Windows.
> This would be good prompting for me to get a few other *nixes going,
> though (although I don't see MacOSX happening).

Thanks, Ethan.  I don't think that would be too much of an issue -- there
is only a small amount of OS X-specific code in scandir.py.

I also see Victor Stinner's just replied.  I think he would make the ideal
maintainer if he's willing, as he helped me write the PEP and code and has
been very close to scandir for a long time.

-Ben

From benhoyt at gmail.com  Tue Mar 29 08:12:58 2016
From: benhoyt at gmail.com (Ben Hoyt)
Date: Tue, 29 Mar 2016 08:12:58 -0400
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: 

> FYI I just finished (I hope!) fixing a bug in os.walk(bytes) on Windows
> with the "emulated" scandir (since os.scandir() only works on Unicode on
> Python 3.5+):
> http://bugs.python.org/issue25911
>
> Since I helped you to write your PEP 471 (scandir) as the BDFL-delegate,
> I think that I now understand scandir() well.  So I can help you to
> maintain your backport.

Indeed -- probably better than me at this point!  I haven't quite been
able to keep up with the latest CPython patches.  In any case, I think
you'd be an ideal person for maintaining the backport.  I know there was
some question in the past about whether you had desire/time for more open
source contributions; in that light, are you willing and do you have the
time?  Though again, this isn't a big module and really just a few fixes
at this point.

-Ben

From benhoyt at gmail.com  Tue Mar 29 08:38:21 2016
From: benhoyt at gmail.com (Ben Hoyt)
Date: Tue, 29 Mar 2016 08:38:21 -0400
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: 

FYI to the group: I'll discuss details with Victor privately. -Ben

From benhoyt at gmail.com  Tue Mar 29 10:14:12 2016
From: benhoyt at gmail.com (Ben Hoyt)
Date: Tue, 29 Mar 2016 10:14:12 -0400
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: 

Victor Stinner has agreed to take over maintenance of the backport module
-- thanks Victor!
His plan is straightforward -- to backport enhancements from CPython 3.6
and release a new PyPI version.

And thanks Ethan for volunteering as well -- I appreciate it.

-Ben

From ethan at stoneleaf.us  Tue Mar 29 11:22:32 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 29 Mar 2016 08:22:32 -0700
Subject: [Python-Dev] Scandir module seeking new maintainer
In-Reply-To: 
References: 
Message-ID: <56FA9DB8.1060203@stoneleaf.us>

On 03/29/2016 07:14 AM, Ben Hoyt wrote:
> Victor Stinner has agreed to take over maintenance of the backport
> module -- thanks Victor!

Yup, thanks Victor!

> And thanks Ethan for volunteering as well -- I appreciate it.

You're welcome.  Good luck in your other endeavors!

--
~Ethan~

From vadmium+py at gmail.com  Tue Mar 29 19:30:37 2016
From: vadmium+py at gmail.com (Martin Panter)
Date: Tue, 29 Mar 2016 23:30:37 +0000
Subject: [Python-Dev] Not receiving bug tracker emails
Message-ID: 

For the last ~36 hours I have stopped receiving emails for messages posted
in the bug tracker.  Is anyone else having this problem?  Has anything
changed recently?

I have had it set to send to my gmail.com address since the beginning.  At
the moment the last bug message email is with "Date: Mon, 28 Mar 2016
12:19:49 +0000".  I have checked spam and they are not going there.

Earlier this year I had to set up a rule to avoid lots of tracker emails
suddenly going to spam.  I suspect there was something about the emails
that Google doesn't like (though I don't understand the technical
details).  Maybe this has recently gotten worse at the Google end?

From brett at python.org  Tue Mar 29 19:37:47 2016
From: brett at python.org (Brett Cannon)
Date: Tue, 29 Mar 2016 23:37:47 +0000
Subject: [Python-Dev] Not receiving bug tracker emails
In-Reply-To: 
References: 
Message-ID: 

I just got an email from b.p.o so it's working at least in general.
From victor.stinner at gmail.com  Tue Mar 29 20:23:59 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 30 Mar 2016 02:23:59 +0200
Subject: [Python-Dev] Not receiving bug tracker emails
In-Reply-To: 
References: 
Message-ID: 

Same for me; I'm using Gmail with a @gmail.com email.

Victor

From storchaka at gmail.com  Wed Mar 30 01:08:59 2016
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 30 Mar 2016 08:08:59 +0300
Subject: [Python-Dev] Not receiving bug tracker emails
In-Reply-To: 
References: 
Message-ID: 

On 30.03.16 03:23, Victor Stinner wrote:
> Same for me; I'm using Gmail with a @gmail.com email.
>
> Victor
>
> 2016-03-30 1:30 GMT+02:00 Martin Panter :
>> For the last ~36 hours I have stopped receiving emails for messages
>> posted in the bug tracker.  Is anyone else having this problem?  Has
>> anything changed recently?

Same for me.

This is very sad, because some comments can go unnoticed and some patches
can go unreviewed.

From rdmurray at bitdance.com  Wed Mar 30 09:30:39 2016
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 30 Mar 2016 09:30:39 -0400
Subject: [Python-Dev] Not receiving bug tracker emails
In-Reply-To: 
References: 
Message-ID: <20160330133041.72218B14159@webabinitio.net>

On Wed, 30 Mar 2016 08:08:59 +0300, Serhiy Storchaka wrote:
> On 30.03.16 03:23, Victor Stinner wrote:
> > Same for me; I'm using Gmail with a @gmail.com email.
> >
> > Victor
> >
> > 2016-03-30 1:30 GMT+02:00 Martin Panter :
> >> For the last ~36 hours I have stopped receiving emails for messages
> >> posted in the bug tracker.  Is anyone else having this problem?  Has
> >> anything changed recently?
>
> Same for me.
>
> This is very sad, because some comments can go unnoticed and some
> patches can go unreviewed.

Anyone know how to find out what changed from Google's POV?  As far as we
know nothing changed at the bugs end, but it is certainly possible that
something did change in the hosting infrastructure without our knowledge.
Knowing what is setting Google off would help track it down, if so... or
perhaps something changed at the Google end, in which case we *really*
need to know what.

--David

From vadmium+py at gmail.com  Thu Mar 31 04:37:18 2016
From: vadmium+py at gmail.com (Martin Panter)
Date: Thu, 31 Mar 2016 08:37:18 +0000
Subject: [Python-Dev] Not receiving bug tracker emails
In-Reply-To: <20160330133041.72218B14159@webabinitio.net>
References: <20160330133041.72218B14159@webabinitio.net>
Message-ID: 

On 30 March 2016 at 13:30, R. David Murray wrote:
> Anyone know how to find out what changed from Google's POV?  As far as
> we know nothing changed at the bugs end, but it is certainly possible
> that something did change in the hosting infrastructure without our
> knowledge.  Knowing what is setting Google off would help track it
> down, if so... or perhaps something changed at the Google end, in which
> case we *really* need to know what.

My only guess is that Google decided to get stricter regarding something
mentioned in , maybe something in its sending guidelines.  Perhaps to do
with IPv6 DNS .

FYI I am now working around the problem for myself by pointing my
bugs.python.org account at a Yahoo email address, and setting up Yahoo to
forward all emails to my Gmail address.

From victor.stinner at gmail.com  Thu Mar 31 17:40:46 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 31 Mar 2016 23:40:46 +0200
Subject: [Python-Dev] The next major Python version will be Python 8
Message-ID: 

Hi,

Python 3 becomes more and more popular and is close to a dangerous point
where it could become more popular than Python 2.  The PSF decided that
it's time to elaborate a new secret plan to ensure that Python users
suffer again with a new major release breaking all their legacy code.

The PSF is happy to announce that the new Python release will be Python 8!

Why the version 8?  It's just to be greater than Perl 6 and PHP 7, but
it's also a mnemonic for PEP 8.  By the way, each minor release will now
multiply the version by 2.  With Python 8 released in 2016 and one release
every two years, we will beat Firefox 44 in 2022 (Python 64) and
Windows 2003 in 2032 (Python 2048).

A major release requires a major change to justify a version bump: the new
killer feature is that it's no longer possible to import a module which
does not respect PEP 8.  It ensures that all your code is pure.  Example:

    $ python8 -c 'import keyword'
    Lib/keyword.py:16:1: E122 continuation line missing indentation or outdented
    Lib/keyword.py:16:1: E265 block comment should start with '# '
    Lib/keyword.py:50:1: E122 continuation line missing indentation or outdented
    (...)
    ImportError: no pep8, no glory

Good news: since *no* module of the current standard library of Python 3
respects PEP 8, the standard library will be simplified to one unique
module, which is new in Python 8: pep8.  The standard library will move to
the Python Cheeseshop (PyPI), to reply to an old and popular request.

DON'T PANIC!  You are still able to import your legacy code into Python 8,
you just have to rename all your modules to add a "_noqa" suffix to the
filename.  For example, rename utils.py to utils_noqa.py.  A side effect
is that you have to update all imports.  For example, replace "import
django" with "import django_noqa".  After a study by the PSF, it's the
best option to split the Python community again and make sure that all
users are angry.

The plan is that in 10 years, at least 50% of the 77,000 packages on the
Python cheeseshop will be updated to get the "_noqa" tag.
After 2020, the PSF will start to sponsor trolls to harass users of the
legacy Python 3 to force them to migrate to Python 8.

Python 8 is a work-in-progress (it's still an alpha version); the standard
library has not been removed yet.  Hopefully, trying to import any module
of the standard library fails.

Don't hesitate to propose more ideas to make Python 8 more incompatible
with Python 3!

Note: The change is already effective in the default branch of Python:
https://hg.python.org/cpython/rev/9aedec2dbc01

Have fun,
Victor

From brian.cain at gmail.com  Thu Mar 31 17:43:28 2016
From: brian.cain at gmail.com (Brian Cain)
Date: Thu, 31 Mar 2016 16:43:28 -0500
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: 

I bought it.  I will confess to being your first victim. :)

-- 
-Brian
From rosuav at gmail.com  Thu Mar 31 17:50:15 2016
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 1 Apr 2016 08:50:15 +1100
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: 

On Fri, Apr 1, 2016 at 8:44 AM, Victor Stinner wrote:
> You should now try Python 8 and try find if a module can still be
> imported ;-)

Okay.... I can fire up interactive Python and 'import this'.  But I can't
run 'make'.  This will be interesting!

ChrisA

From ethan at stoneleaf.us  Thu Mar 31 18:27:28 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 31 Mar 2016 15:27:28 -0700
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: <56FDA450.6080509@stoneleaf.us>

On 03/31/2016 02:40 PM, Victor Stinner wrote:
> ImportError: no pep8, no glory

Nearly fell off my chair laughing!  Nice work. :)

--
~Ethan~

From v+python at g.nevcal.com  Thu Mar 31 18:37:26 2016
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 31 Mar 2016 15:37:26 -0700
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: <56FDA6A6.9050406@g.nevcal.com>

On 3/31/2016 2:40 PM, Victor Stinner wrote:
> Hi,
>
> Python 3 becomes more and more popular and is close to a dangerous point
> where it could become more popular than Python 2.  The PSF decided that
> it's time to elaborate a new secret plan to ensure that Python users
> suffer again with a new major release breaking all their legacy code.

April 1 comes early in your timezone, I guess :)

From senthil at uthcode.com  Thu Mar 31 19:05:19 2016
From: senthil at uthcode.com (Senthil Kumaran)
Date: Thu, 31 Mar 2016 16:05:19 -0700
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: 

On Thu, Mar 31, 2016 at 2:40 PM, Victor Stinner wrote:
> For example, rename utils.py to utils_noqa.py.  A side effect is that
> you have to update all imports.  For example, replace "import django"
> with "import django_noqa".  After a study by the PSF, it's the best
> option to split the Python community again and make sure that all users
> are angry.

We have a huge production code base, lacking tests, running successfully
against python2.4.  We would like to upgrade our code base to python 8, as
we consider it the most sensible update the python developers have ever
done to date.

Is there a setuptools addition that can automatically change our imports
to _noqa?

Thank you!
Senthil

From storchaka at gmail.com  Thu Mar 31 23:24:36 2016
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 1 Apr 2016 06:24:36 +0300
Subject: [Python-Dev] The next major Python version will be Python 8
In-Reply-To: 
References: 
Message-ID: 

On 01.04.16 00:40, Victor Stinner wrote:
> The PSF is happy to announce that the new Python release will be
> Python 8!

Does it combine the base of Python 2 with the power of Python 3?