From status at bugs.python.org Fri Jul 1 12:08:45 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 1 Jul 2016 18:08:45 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160701160845.C1BCF56C0E@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-06-24 - 2016-07-01) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5545 (+14) closed 33648 (+39) total 39193 (+53) Open issues with patches: 2421 Issues opened (39) ================== #22928: HTTP header injection in urrlib2/urllib/httplib/http.client (C http://bugs.python.org/issue22928 reopened by koobs #23804: SSLSocket.recv(0) receives up to 1024 bytes http://bugs.python.org/issue23804 reopened by martin.panter #27385: itertools.groupby has misleading doc string http://bugs.python.org/issue27385 opened by gmathews #27386: Asyncio server hang when clients connect and immediately disco http://bugs.python.org/issue27386 opened by j1m #27387: Thread hangs on str.encode() when locale is not set http://bugs.python.org/issue27387 opened by joshpurvis #27388: IDLE configdialog: reduce multiple references to Var names http://bugs.python.org/issue27388 opened by terry.reedy #27389: When a TypeError is raised due to invalid arguments to a metho http://bugs.python.org/issue27389 opened by Steven.Barker #27391: server_hostname should only be required when checking host nam http://bugs.python.org/issue27391 opened by j1m #27392: Add a server_side keyword parameter to create_connection http://bugs.python.org/issue27392 opened by j1m #27395: Increase test coverage of unittest.runner.TextTestResult http://bugs.python.org/issue27395 opened by Pam.McANulty #27397: email.message.Message.get_payload(decode=True) raises Assertio http://bugs.python.org/issue27397 opened by Claudiu Saftoiu #27398: configure warning for Python 3.5.2 during compilation 
http://bugs.python.org/issue27398 opened by wizzardx #27400: Datetime NoneType after calling Py_Finalize and Py_Initialize http://bugs.python.org/issue27400 opened by Denny Weinberg #27404: Misc/NEWS: add [Security] prefix to Python 3.5.2 changelog http://bugs.python.org/issue27404 opened by haypo #27405: Ability to trace Tcl commands executed by Tkinter http://bugs.python.org/issue27405 opened by serhiy.storchaka #27407: prepare_ssl.py missing in PCBuild folder http://bugs.python.org/issue27407 opened by George Ge #27408: Document importlib.abc.ExecutionLoader implements get_data() http://bugs.python.org/issue27408 opened by brett.cannon #27409: List socket.SO_*, SCM_*, MSG_*, IPPROTO_* symbols http://bugs.python.org/issue27409 opened by martin.panter #27410: DLL hijacking vulnerability in Python 3.5.2 installer http://bugs.python.org/issue27410 opened by anandbhat #27411: Possible different behaviour of explicit and implicit __dict__ http://bugs.python.org/issue27411 opened by xiang.zhang #27413: Add an option to json.tool to bypass non-ASCII characters. 
http://bugs.python.org/issue27413 opened by Wei-Cheng.Pan #27414: http.server.BaseHTTPRequestHandler inconsistence with Content- http://bugs.python.org/issue27414 opened by m.xhonneux #27415: BaseEventLoop.create_server does not accept port=None http://bugs.python.org/issue27415 opened by mcobden #27417: Call CoInitializeEx on startup http://bugs.python.org/issue27417 opened by steve.dower #27419: Bugs in PyImport_ImportModuleLevelObject http://bugs.python.org/issue27419 opened by serhiy.storchaka #27420: Docs for os.link - say what happens if link already exists http://bugs.python.org/issue27420 opened by python-bugs-uit #27422: Deadlock when mixing threading and multiprocessing http://bugs.python.org/issue27422 opened by Martin Ritter #27423: Failed assertions when running test.test_os on Windows http://bugs.python.org/issue27423 opened by ebarry #27424: Failures in test.test_logging http://bugs.python.org/issue27424 opened by ebarry #27425: Tests fail because of git's newline preferences on Windows http://bugs.python.org/issue27425 opened by ebarry #27426: Encoding mismatch causes some tests to fail on Windows http://bugs.python.org/issue27426 opened by ebarry #27427: Math tests http://bugs.python.org/issue27427 opened by franciscouzo #27428: Document WindowsRegistryFinder inherits from MetaPathFinder http://bugs.python.org/issue27428 opened by brett.cannon #27429: xml.sax.saxutils.escape doesn't escape multiple characters saf http://bugs.python.org/issue27429 opened by tylerjohnhughes #27432: Unittest truncating of error message not works http://bugs.python.org/issue27432 opened by Camilla Ke #27434: cross-building python 3.6 with an older interpreter fails http://bugs.python.org/issue27434 opened by xdegaye #27435: ctypes and AIX - also for 2.7.X (and later) http://bugs.python.org/issue27435 opened by aixtools at gmail.com #27436: Strange code in selectors.KqueueSelector http://bugs.python.org/issue27436 opened by dabeaz #1732367: Document the constants in the 
socket module http://bugs.python.org/issue1732367 reopened by martin.panter Most recent 15 issues with no replies (15) ========================================== #27436: Strange code in selectors.KqueueSelector http://bugs.python.org/issue27436 #27435: ctypes and AIX - also for 2.7.X (and later) http://bugs.python.org/issue27435 #27429: xml.sax.saxutils.escape doesn't escape multiple characters saf http://bugs.python.org/issue27429 #27428: Document WindowsRegistryFinder inherits from MetaPathFinder http://bugs.python.org/issue27428 #27427: Math tests http://bugs.python.org/issue27427 #27426: Encoding mismatch causes some tests to fail on Windows http://bugs.python.org/issue27426 #27420: Docs for os.link - say what happens if link already exists http://bugs.python.org/issue27420 #27411: Possible different behaviour of explicit and implicit __dict__ http://bugs.python.org/issue27411 #27409: List socket.SO_*, SCM_*, MSG_*, IPPROTO_* symbols http://bugs.python.org/issue27409 #27408: Document importlib.abc.ExecutionLoader implements get_data() http://bugs.python.org/issue27408 #27404: Misc/NEWS: add [Security] prefix to Python 3.5.2 changelog http://bugs.python.org/issue27404 #27395: Increase test coverage of unittest.runner.TextTestResult http://bugs.python.org/issue27395 #27388: IDLE configdialog: reduce multiple references to Var names http://bugs.python.org/issue27388 #27379: SocketType changed in Python 3 http://bugs.python.org/issue27379 #27376: Add mock_import method to mock module http://bugs.python.org/issue27376 Most recent 15 issues waiting for review (15) ============================================= #27427: Math tests http://bugs.python.org/issue27427 #27423: Failed assertions when running test.test_os on Windows http://bugs.python.org/issue27423 #27419: Bugs in PyImport_ImportModuleLevelObject http://bugs.python.org/issue27419 #27413: Add an option to json.tool to bypass non-ASCII characters. 
http://bugs.python.org/issue27413 #27409: List socket.SO_*, SCM_*, MSG_*, IPPROTO_* symbols http://bugs.python.org/issue27409 #27405: Ability to trace Tcl commands executed by Tkinter http://bugs.python.org/issue27405 #27404: Misc/NEWS: add [Security] prefix to Python 3.5.2 changelog http://bugs.python.org/issue27404 #27398: configure warning for Python 3.5.2 during compilation http://bugs.python.org/issue27398 #27395: Increase test coverage of unittest.runner.TextTestResult http://bugs.python.org/issue27395 #27385: itertools.groupby has misleading doc string http://bugs.python.org/issue27385 #27380: IDLE: add base Query dialog with ttk widgets http://bugs.python.org/issue27380 #27377: Add smarter socket.fromfd() http://bugs.python.org/issue27377 #27376: Add mock_import method to mock module http://bugs.python.org/issue27376 #27374: Cygwin: Makefile does not install DLL import library http://bugs.python.org/issue27374 #27369: [PATCH] Tests break with --with-system-expat and Expat 2.2.0 http://bugs.python.org/issue27369 Top 10 most discussed issues (10) ================================= #27364: Deprecate invalid unicode escape sequences http://bugs.python.org/issue27364 18 msgs #27417: Call CoInitializeEx on startup http://bugs.python.org/issue27417 18 msgs #27386: Asyncio server hang when clients connect and immediately disco http://bugs.python.org/issue27386 17 msgs #27392: Add a server_side keyword parameter to create_connection http://bugs.python.org/issue27392 15 msgs #23395: _thread.interrupt_main() errors if SIGINT handler in SIG_DFL, http://bugs.python.org/issue23395 12 msgs #26137: [idea] use the Microsoft Antimalware Scan Interface http://bugs.python.org/issue26137 11 msgs #27051: Create PIP gui http://bugs.python.org/issue27051 11 msgs #27391: server_hostname should only be required when checking host nam http://bugs.python.org/issue27391 10 msgs #26226: Test failures with non-ascii character in hostname on Windows http://bugs.python.org/issue26226 9 msgs 
#22079: Ensure in PyType_Ready() that base class of static type is sta http://bugs.python.org/issue22079 8 msgs Issues closed (39) ================== #4945: json checks True/False by identity, not boolean value http://bugs.python.org/issue4945 closed by serhiy.storchaka #18726: json functions have too many positional parameters http://bugs.python.org/issue18726 closed by serhiy.storchaka #19536: MatchObject should offer __getitem__() http://bugs.python.org/issue19536 closed by berker.peksag #20350: Replace tkapp.split() to tkapp.splitlist() http://bugs.python.org/issue20350 closed by serhiy.storchaka #20770: Inform caller of smtplib STARTTLS failures http://bugs.python.org/issue20770 closed by aclover #22115: Add new methods to trace Tkinter variables http://bugs.python.org/issue22115 closed by serhiy.storchaka #22890: StringIO.StringIO pickled in 2.7 is not unpickleable on 3.x http://bugs.python.org/issue22890 closed by serhiy.storchaka #23401: Add pickle support of Mapping views http://bugs.python.org/issue23401 closed by serhiy.storchaka #24833: IDLE tabnanny check fails http://bugs.python.org/issue24833 closed by terry.reedy #25042: Create an "embedding SDK" distribution? 
http://bugs.python.org/issue25042 closed by steve.dower #26186: LazyLoader rejecting use of SourceFileLoader http://bugs.python.org/issue26186 closed by brett.cannon #26664: Misuse of $ in activate.fish of venv http://bugs.python.org/issue26664 closed by brett.cannon #26721: Avoid socketserver.StreamRequestHandler.wfile doing partial wr http://bugs.python.org/issue26721 closed by martin.panter #27007: Alternate constructors bytes.fromhex() and bytearray.fromhex() http://bugs.python.org/issue27007 closed by serhiy.storchaka #27038: Make os.DirEntry exist http://bugs.python.org/issue27038 closed by brett.cannon #27252: Make dict views copyable http://bugs.python.org/issue27252 closed by serhiy.storchaka #27253: More efficient deepcopying of Mapping http://bugs.python.org/issue27253 closed by serhiy.storchaka #27255: More opcode predictions http://bugs.python.org/issue27255 closed by serhiy.storchaka #27352: Bug in IMPORT_NAME http://bugs.python.org/issue27352 closed by serhiy.storchaka #27365: Allow non-ascii chars in IDLE NEWS.txt (for contributor names) http://bugs.python.org/issue27365 closed by larry #27372: Test_idle should stop changing locale http://bugs.python.org/issue27372 closed by terry.reedy #27383: executuable in distutils triggering microsoft anti virus http://bugs.python.org/issue27383 closed by steve.dower #27384: itertools islice consumes items when given negative range http://bugs.python.org/issue27384 closed by rhettinger #27390: state of the 3.3 branch unclear http://bugs.python.org/issue27390 closed by brett.cannon #27393: Command to activate venv in Windows has wrong path http://bugs.python.org/issue27393 closed by berker.peksag #27394: Crash with compile returning a value with an error set http://bugs.python.org/issue27394 closed by ebarry #27396: Change default filecmp.cmp shallow option http://bugs.python.org/issue27396 closed by rhettinger #27399: ChainMap.keys() is broken http://bugs.python.org/issue27399 closed by Zahari.Dim #27401: Wrong 
FTP links in 3.5.2 installer http://bugs.python.org/issue27401 closed by zach.ware #27402: Sequence example in typing module documentation does not typec http://bugs.python.org/issue27402 closed by gvanrossum #27403: os.path.dirname doesn't handle Windows' URNs correctly http://bugs.python.org/issue27403 closed by eryksun #27406: subprocess.Popen() hangs in multi-threaded code http://bugs.python.org/issue27406 closed by r.david.murray #27412: float('???') returns 8.0 http://bugs.python.org/issue27412 closed by eryksun #27416: typo / missing word in docs.python.org/2/library/copy.html http://bugs.python.org/issue27416 closed by haypo #27418: Tools/importbench/importbench.py is broken http://bugs.python.org/issue27418 closed by serhiy.storchaka #27421: PPC64LE Fedora 2.7: certificate for hg.python.org has unexpect http://bugs.python.org/issue27421 closed by haypo #27430: Spelling fixes http://bugs.python.org/issue27430 closed by berker.peksag #27431: Shelve pickle version error http://bugs.python.org/issue27431 closed by berker.peksag #27433: Missing "as err" in Lib/socket.py http://bugs.python.org/issue27433 closed by berker.peksag From lkb.teichmann at gmail.com Sat Jul 2 13:50:45 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sat, 2 Jul 2016 19:50:45 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation Message-ID: Hi list, so this is the next round for PEP 487. During the last round, most of the comments were in the direction that a two step approach for integrating into Python, first in pure Python, later in C, was not a great idea and everything should be in C directly. So I implemented it in C, put it onto the issue tracker here: http://bugs.python.org/issue27366, and also modified the PEP accordingly. 
For those who had not been in the discussion: PEP 487 proposes to add two hooks, __init_subclass__, which is a classmethod called whenever a class is subclassed, and __set_owner__, a hook in descriptors which gets called once the class the descriptor is part of is created.

While implementing PEP 487 I realized that there is an oddity in the type base class: type.__init__ forbids the use of keyword arguments, even for the usual three arguments it has (name, bases and dict), while type.__new__ allows keyword arguments. As I plan to forward any keyword arguments to the new __init_subclass__, I stumbled over that. As I write in the PEP, I think it would be a good idea to forbid using keyword arguments for type.__new__ as well. But if people think this would be too big a change, it would be possible to do it differently.

Hoping for good comments

Greetings

Martin

The PEP follows:

PEP: 487
Title: Simpler customisation of class creation
Version: $Revision$
Last-Modified: $Date$
Author: Martin Teichmann ,
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 27-Feb-2015
Python-Version: 3.6
Post-History: 27-Feb-2015, 5-Feb-2016, 24-Jun-2016, 2-Jul-2016
Replaces: 422

Abstract
========

Currently, customising class creation requires the use of a custom metaclass. This custom metaclass then persists for the entire lifecycle of the class, creating the potential for spurious metaclass conflicts.

This PEP proposes to instead support a wide range of customisation scenarios through a new ``__init_subclass__`` hook in the class body, and a hook to initialize attributes.

The new mechanism should be easier to understand and use than implementing a custom metaclass, and thus should provide a gentler introduction to the full power of Python's metaclass machinery.

Background
==========

Metaclasses are a powerful tool to customize class creation. They have, however, the problem that there is no automatic way to combine metaclasses.
If one wants to use two metaclasses for a class, a new metaclass combining those two needs to be created, typically manually. This need often comes as a surprise to a user: inheriting from two base classes coming from two different libraries suddenly raises the necessity to manually create a combined metaclass, where typically one is not interested in those details about the libraries at all. This becomes even worse if one library starts to make use of a metaclass which it has not used before. While the library itself continues to work perfectly, suddenly every piece of code combining those classes with classes from another library fails.

Proposal
========

While there are many possible ways to use a metaclass, the vast majority of use cases falls into just three categories: some initialization code running after class creation, the initialization of descriptors, and keeping the order in which class attributes were defined.

The first two categories can easily be achieved by having simple hooks into the class creation:

1. An ``__init_subclass__`` hook that initializes all subclasses of a given class.
2. Upon class creation, a ``__set_owner__`` hook is called on all the attributes (descriptors) defined in the class.

The third category is the topic of another PEP, PEP 520.

As an example, the first use case looks as follows::

    >>> class SpamBase:
    ...     # this is implicitly a @classmethod
    ...     def __init_subclass__(cls, **kwargs):
    ...         cls.class_args = kwargs
    ...         super().__init_subclass__(**kwargs)

    >>> class Spam(SpamBase, a=1, b="b"):
    ...     pass

    >>> Spam.class_args
    {'a': 1, 'b': 'b'}

The base class ``object`` contains an empty ``__init_subclass__`` method which serves as an endpoint for cooperative multiple inheritance. Note that this method takes no keyword arguments, meaning that all methods which are more specialized have to process all keyword arguments.
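The cooperative keyword handling sketched above can be exercised concretely. The following example is runnable on Python 3.6+, where this hook was eventually added; the class names and keywords (``RegisterBase``, ``LabelBase``, ``register``, ``label``) are invented for this sketch and are not part of the PEP:

```python
# Each __init_subclass__ consumes the keywords it understands and
# forwards the rest up the MRO; object.__init_subclass__ raises
# TypeError if any keywords are left unconsumed.

class RegisterBase:
    registry = []

    def __init_subclass__(cls, register=True, **kwargs):
        # consume the keyword this base understands ...
        if register:
            cls.registry.append(cls.__name__)
        # ... and pass the remaining ones along the MRO
        super().__init_subclass__(**kwargs)

class LabelBase:
    def __init_subclass__(cls, label=None, **kwargs):
        cls.label = label
        super().__init_subclass__(**kwargs)

class Widget(RegisterBase, LabelBase, label="a widget"):
    pass

print(RegisterBase.registry)  # ['Widget']
print(Widget.label)           # a widget
```

Note that the hook is called only when a *subclass* is created: defining ``RegisterBase`` and ``LabelBase`` themselves does not run their own hooks.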
This general proposal is not a new idea (it was first suggested for inclusion in the language definition `more than 10 years ago`_, and a similar mechanism has long been supported by `Zope's ExtensionClass`_), but the situation has changed sufficiently in recent years that the idea is worth reconsidering for inclusion.

The second part of the proposal adds an ``__set_owner__`` initializer for class attributes, especially if they are descriptors. Descriptors are defined in the body of a class, but they do not know anything about that class; they do not even know the name they are accessed with. They do get to know their owner once ``__get__`` is called, but they still do not know their own name. This is unfortunate: for example, they cannot put their associated value into their object's ``__dict__`` under their name, since they do not know that name. This problem has been solved many times, and is one of the most important reasons to have a metaclass in a library. While it would be easy to implement such a mechanism using the first part of the proposal, it makes sense to have one solution for this problem for everyone.

To give an example of its usage, imagine a descriptor representing weakly referenced values::

    import weakref

    class WeakAttribute:
        def __get__(self, instance, owner):
            # dereference the stored weak reference
            return instance.__dict__[self.name]()

        def __set__(self, instance, value):
            instance.__dict__[self.name] = weakref.ref(value)

        # this is the new initializer:
        def __set_owner__(self, owner, name):
            self.name = name

While this example looks very trivial, it should be noted that until now such an attribute could not be defined without the use of a metaclass. And given that such a metaclass can make life very hard, this kind of attribute does not exist yet.

Initializing descriptors could simply be done in the ``__init_subclass__`` hook. But this would mean that descriptors could only be used in classes that have the proper hook; the generic version like in the example would not work in general.
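For readers who want to try the weak-attribute idea today: the hook eventually shipped in Python 3.6 under the name ``__set_name__`` rather than ``__set_owner__``, so the following self-contained sketch uses that spelling to be runnable; the ``Target``/``Holder`` classes are invented for illustration:

```python
import weakref

class WeakAttribute:
    def __get__(self, instance, owner):
        if instance is None:          # accessed on the class itself
            return self
        return instance.__dict__[self.name]()  # dereference the weakref

    def __set__(self, instance, value):
        instance.__dict__[self.name] = weakref.ref(value)

    def __set_name__(self, owner, name):  # spelled __set_owner__ in this PEP
        self.name = name

class Target:
    pass

class Holder:
    other = WeakAttribute()

h = Holder()
t = Target()
h.other = t
print(h.other is t)  # True
del t                # once the target dies, the weakref returns None
print(h.other)       # None (in CPython, immediately after the del)
```

The descriptor learns its own name (``other``) at class-creation time, which is exactly what was previously only achievable with a metaclass.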
One could also call ``__set_owner__`` from within the base implementation of ``object.__init_subclass__``. But given that it is a common mistake to forget to call ``super()``, it would happen too often that suddenly descriptors are not initialized.

Key Benefits
============

Easier inheritance of definition time behaviour
-----------------------------------------------

Understanding Python's metaclasses requires a deep understanding of the type system and the class construction process. This is legitimately seen as challenging, due to the need to keep multiple moving parts (the code, the metaclass hint, the actual metaclass, the class object, instances of the class object) clearly distinct in your mind. Even when you know the rules, it's still easy to make a mistake if you're not being extremely careful.

Understanding the proposed implicit class initialization hook only requires ordinary method inheritance, which isn't quite as daunting a task. The new hook provides a more gradual path towards understanding all of the phases involved in the class definition process.

Reduced chance of metaclass conflicts
-------------------------------------

One of the big issues that makes library authors reluctant to use metaclasses (even when they would be appropriate) is the risk of metaclass conflicts. These occur whenever two unrelated metaclasses are used by the desired parents of a class definition. This risk also makes it very difficult to *add* a metaclass to a class that has previously been published without one.

By contrast, adding an ``__init_subclass__`` method to an existing type poses a similar level of risk to adding an ``__init__`` method: technically, there is a risk of breaking poorly implemented subclasses, but when that occurs, it is recognised as a bug in the subclass rather than the library author breaching backwards compatibility guarantees.

New Ways of Using Classes
=========================

This proposal has many use cases, like the following.
In the examples, we still inherit from the ``SubclassInit`` base class. This would become unnecessary once this PEP is included in Python directly.

Subclass registration
---------------------

Especially when writing a plugin system, one likes to register new subclasses of a plugin baseclass. This can be done as follows::

    class PluginBase(object):
        subclasses = []

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            cls.subclasses.append(cls)

In this example, ``PluginBase.subclasses`` will contain a plain list of all subclasses in the entire inheritance tree. One should note that this also works nicely as a mixin class.

Trait descriptors
-----------------

There are many designs of Python descriptors in the wild which, for example, check boundaries of values. Often those "traits" need some support from a metaclass to work. This is how it would look with this PEP::

    class Trait:
        def __get__(self, instance, owner):
            return instance.__dict__[self.key]

        def __set__(self, instance, value):
            instance.__dict__[self.key] = value

        def __set_owner__(self, owner, name):
            self.key = name

Implementation Details
======================

For those who prefer reading Python over English, the following is a Python equivalent of the C API changes proposed in this PEP, where the new ``object`` and ``type`` defined here inherit from the usual ones::

    import types

    class type(type):
        def __new__(cls, *args, **kwargs):
            if len(args) == 1:
                return super().__new__(cls, args[0])
            name, bases, ns = args
            init = ns.get('__init_subclass__')
            if isinstance(init, types.FunctionType):
                ns['__init_subclass__'] = classmethod(init)
            self = super().__new__(cls, name, bases, ns)
            for k, v in self.__dict__.items():
                func = getattr(v, '__set_owner__', None)
                if func is not None:
                    func(self, k)
            super(self, self).__init_subclass__(**kwargs)
            return self

        def __init__(self, name, bases, ns, **kwargs):
            super().__init__(name, bases, ns)

    class object:
        @classmethod
        def __init_subclass__(cls):
            pass

    class object(object, metaclass=type):
        pass

In this code, first the ``__set_owner__`` hooks are called on the descriptors, and then ``__init_subclass__``. This means that subclass initializers already see the fully initialized descriptors. This way, ``__init_subclass__`` users can fix up all descriptors again if this is needed.

Another option would have been to call ``__set_owner__`` in the base implementation of ``object.__init_subclass__``. This way it would even be possible to prevent ``__set_owner__`` from being called. Most of the time, however, such a prevention would be accidental, as it often happens that a call to ``super()`` is forgotten.

Another small change should be noted here: in the current implementation of CPython, ``type.__init__`` explicitly forbids the use of keyword arguments, while ``type.__new__`` allows for its attributes to be shipped as keyword arguments. This is weirdly incoherent, and thus the above code forbids that. While it would be possible to retain the current behavior, it would be better if this was fixed, as it is probably not used at all: the only use case would be that a metaclass calls its ``super().__new__`` with *name*, *bases* and *dict* (yes, *dict*, not *namespace* or *ns* as mostly used with modern metaclasses) as keyword arguments. This should not be done.

As a second change, the new ``type.__init__`` just ignores keyword arguments. Currently, it insists that no keyword arguments are given. This leads to a (wanted) error if one gives keyword arguments to a class declaration if the metaclass does not process them. Metaclass authors that do want to accept keyword arguments must filter them out by overriding ``__init__``.

In the new code, it is not ``__init__`` that complains about keyword arguments, but ``__init_subclass__``, whose default implementation takes no arguments.
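The ordering guarantee described here — descriptors are initialized first, so the subclass initializer already sees them fully set up — can be checked against the machinery that eventually shipped in Python 3.6, where ``__set_owner__`` became ``__set_name__``. The class names below are invented for this sketch:

```python
# Record the order in which the two hooks fire during class creation.
calls = []

class Trait:
    def __set_name__(self, owner, name):   # __set_owner__ in this PEP
        calls.append(('set_name', name))
        self.key = name

class Base:
    def __init_subclass__(cls, **kwargs):
        calls.append(('init_subclass', cls.__name__))
        super().__init_subclass__(**kwargs)

class Child(Base):
    x = Trait()
    y = Trait()

print(calls)
# [('set_name', 'x'), ('set_name', 'y'), ('init_subclass', 'Child')]
```

Both descriptors are named (in definition order) before ``Base.__init_subclass__`` runs, matching the behaviour of the pure-Python equivalent above.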
In a classical inheritance scheme using the method resolution order, each ``__init_subclass__`` may take out its keyword arguments until none are left, which is checked by the default implementation of ``__init_subclass__``.

Rejected Design Options
=======================

Calling the hook on the class itself
------------------------------------

Adding an ``__autodecorate__`` hook that would be called on the class itself was the proposed idea of PEP 422. Most examples work the same way or even better if the hook is called on the subclass. In general, it is much easier to explicitly call the hook on the class in which it is defined (to opt in to such a behavior) than to opt out, meaning that one does not want the hook to be called on the class it is defined in. This becomes most evident if the class in question is designed as a mixin: it is very unlikely that the code of the mixin is to be executed for the mixin class itself, as it is not supposed to be a complete class on its own.

The original proposal also made major changes in the class initialization process, rendering it impossible to back-port the proposal to older Python versions. More importantly, having a pure Python implementation allows us to take two preliminary steps before we actually change the interpreter, giving us the chance to iron out all possible wrinkles in the API.

Other variants of calling the hook
----------------------------------

Other names for the hook were presented, namely ``__decorate__`` or ``__autodecorate__``. This proposal opts for ``__init_subclass__`` as it is very close to the ``__init__`` method, just for the subclass, while it is not very close to decorators, as it does not return the class.

Requiring an explicit decorator on ``__init_subclass__``
--------------------------------------------------------

One could require the explicit use of ``@classmethod`` on the ``__init_subclass__`` method.
It was made implicit since there's no sensible interpretation for leaving it out, and that case would need to be detected anyway in order to give a useful error message.

This decision was reinforced after noticing that the user experience of defining ``__prepare__`` and forgetting the ``@classmethod`` method decorator is singularly incomprehensible (particularly since PEP 3115 documents it as an ordinary method, and the current documentation doesn't explicitly say anything one way or the other).

A more ``__new__``-like hook
----------------------------

In PEP 422 the hook worked more like the ``__new__`` method than the ``__init__`` method, meaning that it returned a class instead of modifying one. This allows a bit more flexibility, but at the cost of a much harder implementation and undesired side effects.

Adding a class attribute with the attribute order
-------------------------------------------------

This got its own PEP 520.

History
=======

This used to be a competing proposal to PEP 422 by Nick Coghlan and Daniel Urban. PEP 422 intended to achieve the same goals as this PEP, but with a different implementation. In the meantime, PEP 422 has been withdrawn in favour of this approach.

References
==========

.. _more than 10 years ago: http://mail.python.org/pipermail/python-dev/2001-November/018651.html
.. _Zope's ExtensionClass: http://docs.zope.org/zope_secrets/extensionclass.html

Copyright
=========

This document has been placed in the public domain.

..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:

From gmludo at gmail.com Sat Jul 2 19:17:30 2016
From: gmludo at gmail.com (Ludovic Gasc)
Date: Sun, 3 Jul 2016 01:17:30 +0200
Subject: [Python-Dev] Request for CPython 3.5.3 release
In-Reply-To: <5774CD41.9030601@hastings.org>
References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org>
Message-ID:

Hi everybody,

I fully understand that AsyncIO is a drop in the ocean of CPython: you're working to prepare the entire 3.5.3 release for December, and it isn't ready yet. However, could you create a 3.5.2.1 release with only this AsyncIO fix? PEP 440 doesn't seem to forbid that: even though I see only three-digit examples in the PEP, I did find one example with four digits: https://www.python.org/dev/peps/pep-0440/#version-specifiers

If 3.5.2.1 or 3.5.3 are impossible to release before December, what are the alternative solutions for AsyncIO users?
1. Use 3.5.1 and hope that Linux distributions won't use 3.5.2?
2. Patch the asyncio source code by hand?
3. Remove the asyncio folder from CPython, and install asyncio from the GitHub repository?
4. Anything else?

To be honest, I'm migrating an AsyncIO application of more than 10,000 lines of code from 3.4.3 to 3.5.1, and I'm really interested to know whether it's better to keep 3.4.3 for now, or whether the 3.5 branch is stable enough.

Have a nice week-end.

--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

2016-06-30 9:41 GMT+02:00 Larry Hastings :

> On 06/28/2016 02:51 PM, Larry Hastings wrote:
>
> On 06/28/2016 02:05 PM, Yury Selivanov wrote:
>
> Larry and the release team: would it be possible to make an
> "emergency" 3.5.3 release?
>
> I'd like to hear from the other asyncio reviewers: is this bug bad enough
> to merit such an "emergency" release?
>
> Thanks,
>
> */arry*
>
> There has been a distinct lack of "dear god yes Larry" emails so far.
> This absence suggests that, no, it is not a bad enough bug to merit such a > release. > > If we stay to our usual schedule, I expect 3.5.3 to ship December-ish. > > > */arry* > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/gmludo%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 3 00:09:47 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 2 Jul 2016 21:09:47 -0700 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On 2 July 2016 at 16:17, Ludovic Gasc wrote: > Hi everybody, > > I fully understand that AsyncIO is a drop in the ocean of CPython, you're > working to prepare the entire 3.5.3 release for December, not yet ready. > However, you might create a 3.5.2.1 release with only this AsyncIO fix ? That would be more work than just doing a 3.5.3 release, though - the problem isn't with the version number bump, it's with asking the release team to do additional work without clearly explaining the rationale for the request (more on that below). While some parts of the release process are automated, there's still a lot of steps to run through by a number of different people: https://www.python.org/dev/peps/pep-0101/. The first key question to answer in this kind of situation is: "Is there code that will run correctly on 3.5.1 that will now fail on 3.5.2?" (i.e. it's a regression introduced by the asyncio and coroutine changes in the point release rather than something that was already broken in 3.5.0 and 3.5.1). If the answer is "No", then it doesn't inhibit the 3.5.2 rollout in any way, and folks can wait until 3.5.3 for the fix. 
However, if the answer is "Yes, it's a new regression in 3.5.2" (as in this case), then the next question becomes "Is there an agreed resolution for the regression?"

The answer to that is currently "No" - Yury's PR against the asyncio repo is still being discussed.

Once the answer to that question is "Yes", *then* the question of releasing a high priority fix in a Python 3.5.3 release can be properly considered by answering the question "Of the folks using asyncio, what proportion of them are likely to encounter problems in upgrading to Python 3.5.2, and is there a workaround they can apply or alternate approach they can use to avoid the problem?".

At the moment, Yury's explanation of the fix in the PR is (understandably) addressed at getting the problem resolved within the context of asyncio, and hence just describes the particular APIs affected, and the details of the incorrect behaviour. While that's an important step in the process, it doesn't provide a clear assessment of the *consequences* of the bug aimed at folks that aren't themselves deeply immersed in using asyncio, so we can't tell if the problem is "Some idiomatic code frequently recommended in user facing examples and used in third party asyncio based libraries may hang client processes" (which would weigh in favour of an early 3.5.3 release before people start encountering the regression in practice) or "Some low level APIs not recommended for general use may hang if used in a particular non-idiomatic combination only likely to be encountered by event loop implementors" (which would suggest it may be OK to stick with the normal maintenance release cadence).
> If 3.5.2.1 or 3.5.3 are impossible to release before december, Early maintenance releases are definitely possible, but the consequences of particular regressions need to be put into terms that make sense to the release team, which generally means stepping up from "APIs X, Y, and Z broke in this way" to "Users doing A, B, and C will be affected in this way". As an example of a case where an early maintenance release took place: several years ago, Python 2.6.3 happened to break both "from logging import *" (due to a missing entry in test___all__ letting an error in logging.__all__ through) and building extension modules with setuptools (due to a change in a private API that setuptools was monkeypatching). Those were considered significant enough for the 2.6.4 release to happen early. > what are the > alternative solutions for AsyncIO users ? > 1. Use 3.5.1 and hope that Linux distributions won't use 3.5.2 ? Linux distributions have mechanisms to carry patches (indeed, selective application of patches is one of the main benefits of using system packages over upstream ones), so any distro that rebases on 3.5.2 can be encouraged to add the fix once it lands regardless of whether or not Larry approves an additional maintenance release outside the normal cadence. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From gmludo at gmail.com Sun Jul 3 03:46:14 2016 From: gmludo at gmail.com (Ludovic Gasc) Date: Sun, 3 Jul 2016 09:46:14 +0200 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: Hi Nick, First, thanks a lot for your detailed answer, it was very instructive to me. My answers below. 
2016-07-03 6:09 GMT+02:00 Nick Coghlan : > On 2 July 2016 at 16:17, Ludovic Gasc wrote: > > Hi everybody, > > > > I fully understand that AsyncIO is a drop in the ocean of CPython, you're > > working to prepare the entire 3.5.3 release for December, not yet ready. > > However, you might create a 3.5.2.1 release with only this AsyncIO fix ? > > That would be more work than just doing a 3.5.3 release, though - the > problem isn't with the version number bump, it's with asking the > release team to do additional work without clearly explaining the > rationale for the request (more on that below). While some parts of > the release process are automated, there's still a lot of steps to run > through by a number of different people: > https://www.python.org/dev/peps/pep-0101/. > Thanks for the link, I didn't know this PEP, it was interesting to read. > > The first key question to answer in this kind of situation is: "Is > there code that will run correctly on 3.5.1 that will now fail on > 3.5.2?" (i.e. it's a regression introduced by the asyncio and > coroutine changes in the point release rather than something that was > already broken in 3.5.0 and 3.5.1). > > If the answer is "No", then it doesn't inhibit the 3.5.2 rollout in > any way, and folks can wait until 3.5.3 for the fix. > > However, if the answer is "Yes, it's a new regression in 3.5.2" (as in > this case), then the next question becomes "Is there an agreed > resolution for the regression?" > > The answer to that is currently "No" - Yury's PR against the asyncio > repo is still being discussed. > > Once the answer to that question is "Yes", *then* the question of > releasing a high priority fix in a Python 3.5.3 release can be > properly considered by answering the question "Of the folks using > asyncio, what proportion of them are likely to encounter problems in > upgrading to Python 3.5.2, and is there a workaround they can apply or > alternate approach they can use to avoid the problem?". 
> At the moment, Yury's explanation of the fix in the PR is
> (understandably) addressed at getting the problem resolved within the
> context of asyncio, and hence just describes the particular APIs
> affected, and the details of the incorrect behaviour. While that's an
> important step in the process, it doesn't provide a clear assessment
> of the *consequences* of the bug aimed at folks that aren't themselves
> deeply immersed in using asyncio, so we can't tell if the problem is
> "Some idiomatic code frequently recommended in user facing examples
> and used in third party asyncio based libraries may hang client
> processes" (which would weigh in favour of an early 3.5.3 release
> before people start encountering the regression in practice) or "Some
> low level APIs not recommended for general use may hang if used in a
> particular non-idiomatic combination only likely to be encountered by
> event loop implementors" (which would suggest it may be OK to stick
> with the normal maintenance release cadence).

To my basic understanding, the bug introduces race conditions when opening sockets. If my understanding is correct, it's a little bit the heart of AsyncIO that is affected ;-)

If you search for loop.sock_connect on GitHub, you find a lot of results: https://github.com/search?l=python&q=loop.sock_connect&ref=searchresults&type=Code&utf8=%E2%9C%93

Moreover, if Yury, one of the contributors to AsyncIO (https://github.com/python/asyncio/graphs/contributors) and the creator of uvloop, has sent an e-mail about this, I'm tempted to believe him. That's why I'm a little bit scared by this, even if we don't have a lot of AsyncIO users, especially with the latest release.
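For readers unfamiliar with the API under discussion, loop.sock_connect can be sketched as follows. This is a hypothetical, self-contained example run against a throwaway local server (it needs a modern interpreter with asyncio.run, i.e. Python 3.7+); it is not the code path affected by the regression itself:

```python
import asyncio
import socket

async def open_connection(host, port):
    # loop.sock_connect() requires a non-blocking socket; the event loop
    # then waits for the connect to complete without blocking other tasks.
    loop = asyncio.get_running_loop()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    await loop.sock_connect(sock, (host, port))
    return sock

async def main():
    # Spin up a throwaway local server so the sketch is self-contained.
    server = await asyncio.start_server(lambda r, w: None, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    sock = await open_connection("127.0.0.1", port)
    connected = sock.getpeername()[1] == port
    sock.close()
    server.close()
    await server.wait_closed()
    return connected

result = asyncio.run(main())
```

A hang in await loop.sock_connect(...) in code like the above is the kind of symptom being discussed.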
However, Google Trends might give us a good overview of the relative number of users we have, compared to Twisted, Gevent and Tornado: https://www.google.com/trends/explore#q=asyncio%2C%20%2Fm%2F02xknvd%2C%20gevent%2C%20%2Fm%2F07s58h4&date=1%2F2016%2012m&cmpt=q&tz=Etc%2FGMT-2

> > If 3.5.2.1 or 3.5.3 are impossible to release before december,
>
> Early maintenance releases are definitely possible, but the
> consequences of particular regressions need to be put into terms that
> make sense to the release team, which generally means stepping up from
> "APIs X, Y, and Z broke in this way" to "Users doing A, B, and C will
> be affected in this way".
>
> As an example of a case where an early maintenance release took place:
> several years ago, Python 2.6.3 happened to break both "from logging
> import *" (due to a missing entry in test___all__ letting an error in
> logging.__all__ through) and building extension modules with
> setuptools (due to a change in a private API that setuptools was
> monkeypatching). Those were considered significant enough for the
> 2.6.4 release to happen early.

OK, let's first see what decision emerges about this pull request in AsyncIO.

> > what are the
> > alternative solutions for AsyncIO users ?
> > 1. Use 3.5.1 and hope that Linux distributions won't use 3.5.2 ?
>
> Linux distributions have mechanisms to carry patches (indeed,
> selective application of patches is one of the main benefits of using
> system packages over upstream ones), so any distro that rebases on
> 3.5.2 can be encouraged to add the fix once it lands regardless of
> whether or not Larry approves an additional maintenance release
> outside the normal cadence.

Good to know. It means that it's mostly Mac and Windows users who are concerned by this bug, especially newcomers, because they download directly from the python.org website.
Depending on the pull request decision, it might also be worth adding a warning message on the downloads page, advising people to use 3.5.1 instead of 3.5.2 if they want to use AsyncIO.

> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Sun Jul 3 03:57:33 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Jul 2016 00:57:33 -0700
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: Message-ID:

On 2 July 2016 at 10:50, Martin Teichmann wrote:
> Hi list,
>
> so this is the next round for PEP 487. During the last round, most of
> the comments were in the direction that a two step approach for
> integrating into Python, first in pure Python, later in C, was not a
> great idea and everything should be in C directly. So I implemented it
> in C, put it onto the issue tracker here:
> http://bugs.python.org/issue27366, and also modified the PEP
> accordingly.
>
> For those who had not been in the discussion, PEP 487 proposes to add
> two hooks: __init_subclass__, which is a classmethod called whenever a
> class is subclassed, and __set_owner__, a hook in descriptors which
> gets called once the class the descriptor is part of is created.

I'm +1 for this part of the proposal. One potential documentation issue is that __init_subclass__ adds yet a third special magic method behaviour:

- __new__ is implicitly a static method
- __prepare__ isn't implicitly anything (but in hindsight should have implicitly been a class method)
- __init_subclass__ is implicitly a class method

I think making __init_subclass__ implicitly a class method is still the right thing to do if this proposal gets accepted, we'll just want to see if we can do something to tidy up that aspect of the documentation at the same time.
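The hook behaviour described above can be sketched with a minimal, hypothetical example (it assumes an interpreter where PEP 487's __init_subclass__ exists, as it eventually did from Python 3.6 onward; the plugin-registry pattern here is illustrative, not part of the PEP text):

```python
class PluginBase:
    """Base class that keeps a registry of all of its subclasses."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        # Implicitly a class method, as discussed above: 'cls' is the
        # newly created subclass, not PluginBase itself.
        super().__init_subclass__(**kwargs)
        PluginBase.plugins.append(cls)

class JsonPlugin(PluginBase):
    pass

class XmlPlugin(PluginBase):
    pass

names = [p.__name__ for p in PluginBase.plugins]  # ['JsonPlugin', 'XmlPlugin']
```

Any extra keyword arguments in a class statement (e.g. class MyPlugin(PluginBase, flag=True)) get forwarded to __init_subclass__ via **kwargs, which is why the handling of keyword arguments by type.__new__ and type.__init__ matters to the proposal.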
> While implementing PEP 487 I realized that there is an oddity in the
> type base class: type.__init__ forbids the use of keyword arguments, even
> for the usual three arguments it has (name, bases and dict), while
> type.__new__ allows for keyword arguments. As I plan to forward any
> keyword arguments to the new __init_subclass__, I stumbled over that.
> As I write in the PEP, I think it would be a good idea to forbid using
> keyword arguments for type.__new__ as well. But if people think this
> would be too big of a change, it would be possible to do it
> differently.

I *think* I'm in favour of cleaning this up, but I also think the explanation of the problem with the status quo could stand to be clearer, as could the proposed change in behaviour. Some example code at the interactive prompt may help with that.

Positional arguments already either work properly, or give a helpful error message:

>>> type("Example", (), {})
<class '__main__.Example'>
>>> type.__new__("Example", (), {})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__new__(X): X is not a type object (str)
>>> type.__new__(type, "Example", (), {})
<class '__main__.Example'>
>>> type.__init__("Example", (), {})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__init__' requires a 'type' object but received a 'str'
>>> type.__init__(type, "Example", (), {})
>>>

By contrast, attempting to use keyword arguments is a fair collection of implementation defined "Uh, what just happened?":

>>> type(name="Example", bases=(), dict={})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__init__() takes no keyword arguments
>>> type.__new__(name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__new__(): not enough arguments
>>> type.__new__(type, name="Example", bases=(), dict={})
<class '__main__.Example'>
>>> type.__init__(name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__init__' of 'type' object needs an argument
>>> type.__init__(type, name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__init__() takes no keyword arguments

I think the PEP could be accepted without cleaning this up, though - it would just mean __init_subclass__ would see the "name", "bases" and "dict" keys when someone attempted to use keyword arguments with the dynamic type creation APIs.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Sun Jul 3 10:39:08 2016
From: guido at python.org (Guido van Rossum)
Date: Sun, 3 Jul 2016 07:39:08 -0700
Subject: [Python-Dev] Request for CPython 3.5.3 release
In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org>
Message-ID:

Another thought recently occurred to me. Do releases really have to be such big productions? A recent ACM article by Tom Limoncelli[1] reminded me that we're doing releases the old-fashioned way -- infrequently, and with lots of manual labor. Maybe we could (eventually) try to strive for a lighter-weight, more automated release process? It would be less work, and it would reduce stress for authors of stdlib modules and packages -- there's always the next release. I would think this wouldn't obviate the need for carefully planned and timed "big deal" feature releases, but it could make the bug fix releases *less* of a deal, for everyone.
[1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract (sadly requires login)

--
--Guido van Rossum (python.org/~guido)

From steve.dower at python.org Sun Jul 3 12:49:07 2016
From: steve.dower at python.org (Steve Dower)
Date: Sun, 3 Jul 2016 09:49:07 -0700
Subject: [Python-Dev] Request for CPython 3.5.3 release
In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org>
Message-ID:

Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favor of more frequent releases. It will likely require more complex branching though, presumably based on the LTS model everyone else uses.

One thing we've discussed before is separating core and stdlib releases. I'd be really interested to see a release where most of the stdlib is just preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle wheels for stable releases and provide a fast track via pip to update individual packages.

Probably no better opportunity to make such a fundamental change as we move to a new VCS...

Cheers,
Steve

Top-posted from my Windows Phone

-----Original Message-----
From: "Guido van Rossum"
Sent: 7/3/2016 7:42
To: "Python-Dev"
Cc: "Nick Coghlan"
Subject: Re: [Python-Dev] Request for CPython 3.5.3 release

Another thought recently occurred to me. Do releases really have to be such big productions? A recent ACM article by Tom Limoncelli[1] reminded me that we're doing releases the old-fashioned way -- infrequently, and with lots of manual labor. Maybe we could (eventually) try to strive for a lighter-weight, more automated release process? It would be less work, and it would reduce stress for authors of stdlib modules and packages -- there's always the next release. I would think this wouldn't obviate the need for carefully planned and timed "big deal" feature releases, but it could make the bug fix releases *less* of a deal, for everyone.
[1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract (sadly requires login)

--
--Guido van Rossum (python.org/~guido)

From brett at python.org Sun Jul 3 16:22:11 2016
From: brett at python.org (Brett Cannon)
Date: Sun, 03 Jul 2016 20:22:11 +0000
Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org>
Message-ID:

[forking the conversation since the subject has shifted]

On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote:
> Many of our users prefer stability (the sort who plan operating system
> updates years in advance), but generally I'm in favour of more frequent
> releases.

So there's our 18 month cadence for feature/minor releases, and then there's the 6 month cadence for bug-fix/micro releases. At the language summit there was the discussion kicked off by Ned about our release schedule, and a group of us had a discussion afterward where a stricter release cadence of 12 months, with the release date tied to a consistent month -- e.g. September of every year -- was proposed instead of our hand-wavy "about 18 months after the last feature release"; people in the discussion seemed to like the 12 months consistency idea. I think making releases on a regular, annual schedule requires simply a decision by us to do it since the time scale we are talking about is still so large it shouldn't impact the workload of RMs & friends *that* much (I think).
As for upping the bug-fix release cadence, if we can automate that then perhaps we can up the frequency (maybe once every quarter), but I'm not sure what kind of overhead that would add and thus how much would need to be automated to make that release cadence work. Doing this kind of shrunken cadence for bug-fix releases would require the RM & friends to decide what would need to be automated to shrink the release schedule to make it viable (e.g. "if we automated steps N & M of the release process then I would be okay releasing every 3 months instead of 6").

For me, I say we shift to an annual feature release in a specific month every year, and switch to quarterly bug-fix releases only if we can add zero extra work to RMs & friends.

> It will likely require more complex branching though, presumably based on
> the LTS model everyone else uses.

Why is that? You can almost view our feature releases as LTS releases, at which point our current branching structure is no different.

> One thing we've discussed before is separating core and stdlib releases.
> I'd be really interested to see a release where most of the stdlib is just
> preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle
> wheels for stable releases and provide a fast track via pip to update
> individual packages.
>
> Probably no better opportunity to make such a fundamental change as we
> move to a new VCS...

Topic 1
=======

If we separate out the stdlib, we first need to answer why we are doing this. The arguments supporting this idea are (1) it might simplify more frequent releases of Python (but that's a guess), (2) it would make the stdlib less CPython-dependent (if purely by the fact of perception and ease of testing using CI against other interpreters when they have matching version support), and (3) it might make it easier for us to get more contributors who are comfortable helping with just the stdlib vs CPython itself (once again, this might simply be through perception).
So if we really wanted to go this route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and then have a separate stdlib repo. The other option is to still have cpython represent the interpreter but then each stdlib module have their own repository. Since the single repo for the stdlib is not that crazy, I'll talk about the crazier N repo idea (in all scenarios we would probably have a repo that pulled in cpython and the stdlib through either git submodules or subtrees and that would represent a CPython release repo). In this scenario, having each module/package have its own repo could get us a couple of things. One is that it might help simplify module maintenance by allowing each module to have its own issue tracker, set of contributors, etc. This also means it will make it obvious what modules are being neglected which will either draw attention and get help or honestly lead to a deprecation if no one is willing to help maintain it. Separate repos would also allow for easier backport releases (e.g. what asyncio and typing have been doing since they were created). If a module is maintained as if it was its own project then it makes it easier to make releases separated from the stdlib itself (although the usefulness is minimized as long as sys.path has site-packages as its last entry). Separate releases allows for faster releases of the stand-alone module, e.g. if only asyncio has a bug then asyncio can cut their own release and the rest of the stdlib doesn't need to care. Then when a new CPython release is done we can simply bundle up the stable release at the moment and essentially make our mythical sumo release be the stdlib release itself (and this would help stop modules like asyncio and typing from simply copying modules into the stdlib from their external repo if we just pulled in their repo using submodules or subtrees in a master repo). 
And yes, I realize this might lead to a ton of repos, but maybe that's an important side effect. We have so much code in our stdlib that it's hard to maintain and fixes can get dropped on the floor. If this causes us to re-prioritize what should be in the stdlib and trim it back to things we consider critical to have in all Python releases, then IMO that's a huge win in maintainability and workload savings instead of carrying forward neglected code (or at least help people focus on modules they care about and let others know where help is truly needed).

Topic 2
=======

Independent releases of the stdlib could be done, although if we break the stdlib up into individual repos then it shifts the conversation as individual modules could simply do their own releases independent of the big stdlib release. Personally I don't see a point of doing a stdlib release separate from CPython, but I could see doing a more frequent release of CPython where the only thing that changed is the stdlib itself (but I don't know if that would even alleviate the RM workload).

For me, I'm more interested in thinking about breaking the stdlib modules into their own repos and making a CPython release more of a collection of python-dev-approved modules that are maintained under the python organization on GitHub and follow our compatibility guidelines and code quality along with the CPython interpreter. This would also make it much easier for custom distros, e.g. a cloud-targeted CPython release that ignored all GUI libraries.

-Brett

> Cheers,
> Steve
>
> Top-posted from my Windows Phone
> ------------------------------
> From: Guido van Rossum
> Sent: 7/3/2016 7:42
> To: Python-Dev
> Cc: Nick Coghlan
> Subject: Re: [Python-Dev] Request for CPython 3.5.3 release
>
> Another thought recently occurred to me. Do releases really have to be
> such big productions?
A recent ACM article by Tom Limoncelli[1] > reminded me that we're doing releases the old-fashioned way -- > infrequently, and with lots of manual labor. Maybe we could > (eventually) try to strive for a lighter-weight, more automated > release process? It would be less work, and it would reduce stress for > authors of stdlib modules and packages -- there's always the next > release. I would think this wouldn't obviate the need for carefully > planned and timed "big deal" feature releases, but it could make the > bug fix releases *less* of a deal, for everyone. > > [1] > http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract > (sadly requires login) > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 3 16:43:32 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 3 Jul 2016 21:43:32 +0100 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On 3 July 2016 at 21:22, Brett Cannon wrote: > Topic 2 > ======= > Independent releases of the stdlib could be done, although if we break the > stdlib up into individual repos then it shifts the conversation as > individual modules could simply do their own releases independent of the big > stdlib release. 
Personally I don't see a point of doing a stdlib release > separate from CPython, but I could see doing a more frequent release of > CPython where the only thing that changed is the stdlib itself (but I don't > know if that would even alleviate the RM workload). The one major downside of independent stdlib releases is that it significantly increases the number of permutations of things 3rd parties have to support. It can be hard enough to get a user to report the version of Python they are having an issue with - to get them to report both python and stdlib version would be even trickier. And testing against all the combinations, and deciding which combinations are supported, becomes a much bigger problem. Furthermore, pip/setuptools are just getting to the point of allowing for dependencies conditional on Python version. If independent stdlib releases were introduced, we'd need to implement dependencies based on stdlib version as well - consider depending on a backport of a new module if the user has an older stdlib version that doesn't include it. Changing the principle that the CPython version is a well-defined label for a specific language level and stdlib, is a major change with very wide implications, and I don't see sufficient benefits to justify it. On the other hand, simply decoupling the internal development cycles for the language and the stdlib (or independent stdlib modules), without adding extra "release" cycles, is not that big a deal - in many ways, we do that already with projects like asyncio. Paul From chris at chriskrycho.com Sun Jul 3 16:54:06 2016 From: chris at chriskrycho.com (Chris Krycho) Date: Sun, 3 Jul 2016 16:54:06 -0400 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <7AE54B90-1CD7-4FF7-8923-201A065954B2@chriskrycho.com> As an observer and user? 
It may be worth asking the Rust team what the main pain points are in coordinating and managing their releases.

Some context for those unfamiliar: Rust uses a Chrome- or Firefox-like release train approach, with stable and beta releases every six weeks. Each release cycle includes both the compiler and the standard library. They use feature flags on "nightly" (the master branch) and cut release branches for what actually gets shipped in each release. This has the advantage of letting new features and functionality ship whenever they're ready, rather than waiting for Big Bang releases. Because of strong commitments to stability and backwards compatibility as part of that, it hasn't led to any substantial breakage along the way, either. There is also some early discussion of how they might add LTS releases into that mix.

The Rust standard library is currently bundled into the same repository as the compiler. Although the stdlib is currently being modularized and somewhat decoupled from the compiler, I don't believe they intend to separate it from the compiler repository or release in that process (not least because there's no need to further speed up their release cadence!).

None of that is meant to suggest Python adopt that specific cadence (though I have found it quite nice), but simply to observe that the Rust team might have useful info on upsides, downsides, and particular gotchas as Python considers changing its own release process.

Regards,
Chris Krycho

> On Jul 3, 2016, at 16:22, Brett Cannon wrote:
>
> [forking the conversation since the subject has shifted]
>
>> On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote:
>> Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favour of more frequent releases.
>
> So there's our 18 month cadence for feature/minor releases, and then there's the 6 month cadence for bug-fix/micro releases.
At the language summit there was the discussion kicked off by Ned about our release schedule and a group of us had a discussion afterward where a more strict release cadence of 12 months with the release date tied to a consistent month -- e.g. September of every year -- instead of our hand-wavy "about 18 months after the last feature release"; people in the discussion seemed to like the 12 months consistency idea. I think making releases on a regular, annual schedule requires simply a decision by us to do it since the time scale we are talking about is still so large it shouldn't impact the workload of RMs & friends that much (I think). > > As for upping the bug-fix release cadence, if we can automate that then perhaps we can up the frequency (maybe once every quarter), but I'm not sure what kind of overhead that would add and thus how much would need to be automated to make that release cadence work. Doing this kind of shrunken cadence for bug-fix releases would require the RM & friends to decide what would need to be automated to shrink the release schedule to make it viable (e.g. "if we automated steps N & M of the release process then I would be okay releasing every 3 months instead of 6"). > > For me, I say we shift to an annual feature release in a specific month every year, and switch to a quarterly bug-fix releases only if we can add zero extra work to RMs & friends. > >> It will likely require more complex branching though, presumably based on the LTS model everyone else uses. > > Why is that? You can almost view our feature releases as LTS releases, at which point our current branching structure is no different. > >> >> One thing we've discussed before is separating core and stdlib releases. I'd be really interested to see a release where most of the stdlib is just preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle wheels for stable releases and provide a fast track via pip to update individual packages. 
>> >> Probably no better opportunity to make such a fundamental change as we move to a new VCS... > > > > Topic 1 > ======= > If we separate out the stdlib, we first need to answer why we are doing this? The arguments supporting this idea is (1) it might simplify more frequent releases of Python (but that's a guess), (2) it would make the stdlib less CPython-dependent (if purely by the fact of perception and ease of testing using CI against other interpreters when they have matching version support), and (3) it might make it easier for us to get more contributors who are comfortable helping with just the stdlib vs CPython itself (once again, this might simply be through perception). > > So if we really wanted to go this route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and then have a separate stdlib repo. The other option is to still have cpython represent the interpreter but then each stdlib module have their own repository. > > Since the single repo for the stdlib is not that crazy, I'll talk about the crazier N repo idea (in all scenarios we would probably have a repo that pulled in cpython and the stdlib through either git submodules or subtrees and that would represent a CPython release repo). In this scenario, having each module/package have its own repo could get us a couple of things. One is that it might help simplify module maintenance by allowing each module to have its own issue tracker, set of contributors, etc. This also means it will make it obvious what modules are being neglected which will either draw attention and get help or honestly lead to a deprecation if no one is willing to help maintain it. > > Separate repos would also allow for easier backport releases (e.g. what asyncio and typing have been doing since they were created). 
If a module is maintained as if it were its own project then it makes it easier to make releases separated from the stdlib itself (although the usefulness is minimized as long as sys.path has site-packages as its last entry). Separate releases allow for faster releases of the stand-alone module, e.g. if only asyncio has a bug then asyncio can cut its own release and the rest of the stdlib doesn't need to care. Then when a new CPython release is done we can simply bundle up the stable releases at that moment and essentially make our mythical sumo release be the stdlib release itself (and this would help stop modules like asyncio and typing from simply copying modules into the stdlib from their external repo if we just pulled in their repo using submodules or subtrees in a master repo). > > And yes, I realize this might lead to a ton of repos, but maybe that's an important side effect. We have so much code in our stdlib that it's hard to maintain and fixes can get dropped on the floor. If this causes us to re-prioritize what should be in the stdlib and trim it back to things we consider critical to have in all Python releases, then IMO that's a huge win in maintainability and workload savings instead of carrying forward neglected code (or at least it would help people focus on modules they care about and let others know where help is truly needed). > > Topic 2 > ======= > Independent releases of the stdlib could be done, although if we break the stdlib up into individual repos then it shifts the conversation, as individual modules could simply do their own releases independent of the big stdlib release. Personally I don't see the point of doing a stdlib release separate from CPython, but I could see doing a more frequent release of CPython where the only thing that changed is the stdlib itself (but I don't know if that would even alleviate the RM workload). 
> > For me, I'm more interested in thinking about breaking the stdlib modules into their own repos and making a CPython release more of a collection of python-dev-approved modules that are maintained under the python organization on GitHub and follow our compatibility guidelines and code quality along with the CPython interpreter. This would also make it much easier for custom distros, e.g. a cloud-targeted CPython release that ignored all GUI libraries. > > -Brett > >> >> Cheers, >> Steve >> >> Top-posted from my Windows Phone >> From: Guido van Rossum >> Sent: ?7/?3/?2016 7:42 >> To: Python-Dev >> Cc: Nick Coghlan >> Subject: Re: [Python-Dev] Request for CPython 3.5.3 release >> >> Another thought recently occurred to me. Do releases really have to be >> such big productions? A recent ACM article by Tom Limoncelli[1] >> reminded me that we're doing releases the old-fashioned way -- >> infrequently, and with lots of manual labor. Maybe we could >> (eventually) try to strive for a lighter-weight, more automated >> release process? It would be less work, and it would reduce stress for >> authors of stdlib modules and packages -- there's always the next >> release. I would think this wouldn't obviate the need for carefully >> planned and timed "big deal" feature releases, but it could make the >> bug fix releases *less* of a deal, for everyone. 
>> >> [1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract >> (sadly requires login) >> >> -- >> --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris%40chriskrycho.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Jul 3 17:04:36 2016 From: brett at python.org (Brett Cannon) Date: Sun, 03 Jul 2016 21:04:36 +0000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On Sun, Jul 3, 2016, 13:43 Paul Moore wrote: > On 3 July 2016 at 21:22, Brett Cannon wrote: > > Topic 2 > > ======= > > Independent releases of the stdlib could be done, although if we break > the > > stdlib up into individual repos then it shifts the conversation as > > individual modules could simply do their own releases independent of the > big > > stdlib release. Personally I don't see a point of doing a stdlib release > > separate from CPython, but I could see doing a more frequent release of > > CPython where the only thing that changed is the stdlib itself (but I > don't > > know if that would even alleviate the RM workload). 
> > The one major downside of independent stdlib releases is that it > significantly increases the number of permutations of things 3rd > parties have to support. It can be hard enough to get a user to report > the version of Python they are having an issue with - to get them to > report both Python and stdlib versions would be even trickier. And > testing against all the combinations, and deciding which combinations > are supported, becomes a much bigger problem. > > Furthermore, pip/setuptools are just getting to the point of allowing > for dependencies conditional on Python version. If independent stdlib > releases were introduced, we'd need to implement dependencies based on > stdlib version as well - consider depending on a backport of a new > module if the user has an older stdlib version that doesn't include > it. > > Changing the principle that the CPython version is a well-defined > label for a specific language level and stdlib is a major change with > very wide implications, and I don't see sufficient benefits to justify > it. On the other hand, simply decoupling the internal development > cycles for the language and the stdlib (or independent stdlib > modules), without adding extra "release" cycles, is not that big a > deal - in many ways, we do that already with projects like asyncio. > This last bit is what I would advocate for if we broke the stdlib out, unless an emergency patch release is warranted for a specific module (e.g. like asyncio, which started this discussion). Obviously backporting is its own thing. -Brett > Paul > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Sun Jul 3 17:22:38 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 3 Jul 2016 22:22:38 +0100 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On 3 July 2016 at 22:04, Brett Cannon wrote: > This last bit is what I would advocate if we broke the stdlib out unless an > emergency patch release is warranted for a specific module (e.g. like > asyncio that started this discussion). Obviously backporting is its own > thing. It's also worth noting that pip has no mechanism for installing an updated stdlib module, as everything goes into site-packages, and the stdlib takes precedence over site-packages unless you get into sys.path hacking abominations like setuptools uses (or at least used to use, I don't know if it still does). So as things stand, independent patch releases of stdlib modules would need to be manually copied into place. Allowing users to override the stdlib opens up a different can of worms - not necessarily one that we couldn't resolve, but IIRC, it was always a deliberate policy that overriding the stdlib wasn't possible (that's why backports have names like unittest2...) Paul From njs at pobox.com Sun Jul 3 17:50:40 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 3 Jul 2016 14:50:40 -0700 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On Jul 3, 2016 1:45 PM, "Paul Moore" wrote: > [...] > Furthermore, pip/setuptools are just getting to the point of allowing > for dependencies conditional on Python version. If independent stdlib > releases were introduced, we'd need to implement dependencies based on > stdlib version as well - consider depending on a backport of a new > module if the user has an older stdlib version that doesn't include > it. 
Regarding this particular point: right now, yeah, there's an annoying thing where you have to know that a dependency on stdlib/backported library X has to be written as "X >= 1.0 [py_version <= 3.4]" or whatever, and every package with this dependency has to encode some complicated indirect knowledge of what versions of X ship with what versions of python. (And life is even more complicated if you want to support pypy/jython/..., who are generally shipping manually maintained stdlib forks, and whose nominal "python version equivalent" is only an approximation.) In the extreme, one can imagine a module like typing still being distributed as part of the standard python download, BUT not in the stdlib, but rather as a "preinstalled package" in site-packages/ that could then be upgraded normally after install. In addition to whatever maintenance advantages this might (or might not) have, with regards to Paul's concerns this would actually be a huge improvement, since if a package needs typing 1.3 or whatever then they could just declare that, without having to know a priori which versions of python shipped which version. (Note that linux distributions already split up the stdlib into pieces, and you're not guaranteed to have all of it available.) Or if we want to be less aggressive and keep the stdlib monolithic, then it would still be great if there were some .dist-info metadata somewhere that said "this version of the stdlib provides typing 1.3, asyncio 1.4, ...". I haven't thought through all the details of how this would work and how pip could best take advantage, though. -n -------------- next part -------------- An HTML attachment was scrubbed... 
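The bracket syntax Nathaniel gestures at was standardized as PEP 508 environment markers; a dependency needed only on older interpreters can be sketched in a requirements file like this (the package names and version pins are illustrative, not recommendations):

```
typing >= 3.5.2 ; python_version < "3.5"
asyncio >= 3.4.3 ; python_version < "3.4"
```

A hypothetical `stdlib_version` marker of the kind discussed above would slot into the same grammar.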
URL: From brett at python.org Sun Jul 3 17:46:16 2016 From: brett at python.org (Brett Cannon) Date: Sun, 03 Jul 2016 21:46:16 +0000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On Sun, Jul 3, 2016, 14:22 Paul Moore wrote: > On 3 July 2016 at 22:04, Brett Cannon wrote: > > This last bit is what I would advocate if we broke the stdlib out unless > an > > emergency patch release is warranted for a specific module (e.g. like > > asyncio that started this discussion). Obviously backporting is its own > > thing. > > It's also worth noting that pip has no mechanism for installing an > updated stdlib module, as everything goes into site-packages, and the > stdlib takes precedence over site-packages unless you get into > sys.path hacking abominations like setuptools uses (or at least used > to use, I don't know if it still does). So as things stand, > independent patch releases of stdlib modules would need to be manually > copied into place. > I thought I mentioned this depends on changing sys.path; sorry if I didn't. > Allowing users to override the stdlib opens up a different can of > worms - not necessarily one that we couldn't resolve, but IIRC, it was > always a deliberate policy that overriding the stdlib wasn't possible > (that's why backports have names like unittest2...) > I think it could be considered less of an issue now thanks to being able to declare dependencies and the version requirements for pip. -brett > Paul > -------------- next part -------------- An HTML attachment was scrubbed... 
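The sys.path ordering Paul and Brett are discussing is easy to demonstrate: an entry added after the stdlib directories (as site-packages is) cannot shadow a stdlib module. A minimal sketch, using a temporary directory to stand in for site-packages:

```python
import importlib
import os
import sys
import tempfile

# A decoy "json" module in a directory that, like site-packages,
# comes after the stdlib entries on sys.path.
shadow_dir = tempfile.mkdtemp()
with open(os.path.join(shadow_dir, "json.py"), "w") as f:
    f.write("SHADOW = True\n")
sys.path.append(shadow_dir)

mod = importlib.import_module("json")
# The stdlib directory is searched first, so the decoy never loads.
print(hasattr(mod, "SHADOW"))  # False
```

This is why per-module stdlib upgrades would need either a sys.path change or an explicit override mechanism.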
URL: From brett at python.org Sun Jul 3 17:48:27 2016 From: brett at python.org (Brett Cannon) Date: Sun, 03 Jul 2016 21:48:27 +0000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: <7AE54B90-1CD7-4FF7-8923-201A065954B2@chriskrycho.com> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <7AE54B90-1CD7-4FF7-8923-201A065954B2@chriskrycho.com> Message-ID: I actually thought about Rust when thinking about 3-month releases (I know they release faster though). What I would want to know is whether the RMs for Rust are employed by Mozilla and thus have work time to do it, vs. Python RMs & friends, who vary on whether they get work time. On Sun, Jul 3, 2016, 13:54 Chris Krycho wrote: > As an observer and user? > > It may be worth asking the Rust team what the main pain points are in > coordinating and managing their releases. > > Some context for those unfamiliar: Rust uses a Chrome- or Firefox-like > release train approach, with stable and beta releases every six weeks. Each > release cycle includes both the compiler and the standard library. They use > feature flags on "nightly" (the master branch) and cut release branches for > what actually gets shipped in each release. This has the advantage of letting > new features and functionality ship whenever they're ready, rather than > waiting for Big Bang releases. Because of strong commitments to stability > and backwards compatibility as part of that, it hasn't led to any > substantial breakage along the way, either. > > There is also some early discussion of how they might add LTS releases > into that mix. > > The Rust standard library is currently bundled into the same repository as > the compiler. 
Although the stdlib is currently being modularized and > somewhat decoupled from the compiler, I don't believe they intend to > separate it from the compiler repository or release in that process (not > least because there's no need to further speed up their release cadence!). > > None of that is meant to suggest Python adopt that specific cadence > (though I have found it *quite* nice), but simply to observe that the > Rust team might have useful info on upsides, downsides, and particular > gotchas as Python considers changing its own release process. > > Regards, > Chris Krycho > > On Jul 3, 2016, at 16:22, Brett Cannon wrote: > > [forking the conversation since the subject has shifted] > > On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote: > >> Many of our users prefer stability (the sort who plan operating system >> updates years in advance), but generally I'm in favour of more frequent >> releases. >> > > So there's our 18 month cadence for feature/minor releases, and then > there's the 6 month cadence for bug-fix/micro releases. At the language > summit there was the discussion kicked off by Ned about our release > schedule and a group of us had a discussion afterward where a more strict > release cadence of 12 months with the release date tied to a consistent > month -- e.g. September of every year -- instead of our hand-wavy "about 18 > months after the last feature release"; people in the discussion seemed to > like the 12 months consistency idea. I think making releases on a regular, > annual schedule requires simply a decision by us to do it since the time > scale we are talking about is still so large it shouldn't impact the > workload of RMs & friends *that* much (I think). > > As for upping the bug-fix release cadence, if we can automate that then > perhaps we can up the frequency (maybe once every quarter), but I'm not > sure what kind of overhead that would add and thus how much would need to > be automated to make that release cadence work. 
Doing this kind of shrunken > cadence for bug-fix releases would require the RM & friends to decide what > would need to be automated to shrink the release schedule to make it viable > (e.g. "if we automated steps N & M of the release process then I would be > okay releasing every 3 months instead of 6"). > > For me, I say we shift to an annual feature release in a specific month > every year, and switch to a quarterly bug-fix releases only if we can add > zero extra work to RMs & friends. > > >> It will likely require more complex branching though, presumably based on >> the LTS model everyone else uses. >> > > Why is that? You can almost view our feature releases as LTS releases, at > which point our current branching structure is no different. > > >> >> One thing we've discussed before is separating core and stdlib releases. >> I'd be really interested to see a release where most of the stdlib is just >> preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle >> wheels for stable releases and provide a fast track via pip to update >> individual packages. >> >> Probably no better opportunity to make such a fundamental change as we >> move to a new VCS... >> > > > > Topic 1 > ======= > If we separate out the stdlib, we first need to answer why we are doing > this? The arguments supporting this idea is (1) it might simplify more > frequent releases of Python (but that's a guess), (2) it would make the > stdlib less CPython-dependent (if purely by the fact of perception and ease > of testing using CI against other interpreters when they have matching > version support), and (3) it might make it easier for us to get more > contributors who are comfortable helping with just the stdlib vs CPython > itself (once again, this might simply be through perception). > > So if we really wanted to go this route of breaking out the stdlib, I > think we have two options. 
One is to have the cpython repo represent the > CPython interpreter and then have a separate stdlib repo. The other option > is to still have cpython represent the interpreter but then each stdlib > module have their own repository. > > Since the single repo for the stdlib is not that crazy, I'll talk about > the crazier N repo idea (in all scenarios we would probably have a repo > that pulled in cpython and the stdlib through either git submodules or > subtrees and that would represent a CPython release repo). In this > scenario, having each module/package have its own repo could get us a > couple of things. One is that it might help simplify module maintenance by > allowing each module to have its own issue tracker, set of contributors, > etc. This also means it will make it obvious what modules are being > neglected which will either draw attention and get help or honestly lead to > a deprecation if no one is willing to help maintain it. > > Separate repos would also allow for easier backport releases (e.g. what > asyncio and typing have been doing since they were created). If a module is > maintained as if it was its own project then it makes it easier to make > releases separated from the stdlib itself (although the usefulness is > minimized as long as sys.path has site-packages as its last entry). > Separate releases allows for faster releases of the stand-alone module, > e.g. if only asyncio has a bug then asyncio can cut their own release and > the rest of the stdlib doesn't need to care. Then when a new CPython > release is done we can simply bundle up the stable release at the moment > and essentially make our mythical sumo release be the stdlib release itself > (and this would help stop modules like asyncio and typing from simply > copying modules into the stdlib from their external repo if we just pulled > in their repo using submodules or subtrees in a master repo). 
> > And yes, I realize this might lead to a ton of repos, but maybe that's an > important side effect. We have so much code in our stdlib that it's hard to > maintain and fixes can get dropped on the floor. If this causes us to > re-prioritize what should be in the stdlib and trim it back to things we > consider critical to have in all Python releases, then IMO that's as a huge > win in maintainability and workload savings instead of carrying forward > neglected code (or at least help people focus on modules they care about > and let others know where help is truly needed). > > Topic 2 > ======= > Independent releases of the stdlib could be done, although if we break the > stdlib up into individual repos then it shifts the conversation as > individual modules could simply do their own releases independent of the > big stdlib release. Personally I don't see a point of doing a stdlib > release separate from CPython, but I could see doing a more frequent > release of CPython where the only thing that changed is the stdlib itself > (but I don't know if that would even alleviate the RM workload). > > For me, I'm more interested in thinking about breaking the stdlib modules > into their own repos and making a CPython release more of a collection of > python-dev-approved modules that are maintained under the python > organization on GitHub and follow our compatibility guidelines and code > quality along with the CPython interpreter. This would also make it much > easier for custom distros, e.g. a cloud-targeted CPython release that > ignored all GUI libraries. > > -Brett > > >> >> Cheers, >> Steve >> >> Top-posted from my Windows Phone >> ------------------------------ >> From: Guido van Rossum >> Sent: ?7/?3/?2016 7:42 >> To: Python-Dev >> Cc: Nick Coghlan >> Subject: Re: [Python-Dev] Request for CPython 3.5.3 release >> >> Another thought recently occurred to me. Do releases really have to be >> such big productions? 
A recent ACM article by Tom Limoncelli[1] >> reminded me that we're doing releases the old-fashioned way -- >> infrequently, and with lots of manual labor. Maybe we could >> (eventually) try to strive for a lighter-weight, more automated >> release process? It would be less work, and it would reduce stress for >> authors of stdlib modules and packages -- there's always the next >> release. I would think this wouldn't obviate the need for carefully >> planned and timed "big deal" feature releases, but it could make the >> bug fix releases *less* of a deal, for everyone. >> >> [1] >> http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract >> (sadly requires login) >> >> -- >> --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris%40chriskrycho.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkb.teichmann at gmail.com Sun Jul 3 17:56:57 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sun, 3 Jul 2016 23:56:57 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Nick, thanks for the nice review! 
> I think making __init_subclass__ implicitly a class method is still > the right thing to do if this proposal gets accepted, we'll just want > to see if we can do something to tidy up that aspect of the > documentation at the same time. I could write some documentation, I just don't know where to put it. I personally have no strong feelings about whether __init_subclass__ should be implicitly a @classmethod or not - but as the general consensus here seemed to hint that making it implicit is better, this is how I wrote it. >> While implementing PEP 487 I realized that there is an oddity in the >> type base class: type.__init__ forbids the use of keyword arguments, even >> for the usual three arguments it has (name, bases and dict), while >> type.__new__ allows for keyword arguments. As I plan to forward any >> keyword arguments to the new __init_subclass__, I stumbled over that. >> As I write in the PEP, I think it would be a good idea to forbid using >> keyword arguments for type.__new__ as well. But if people think this >> would be too big of a change, it would be possible to do it >> differently. > > [some discussion cut out] > > I think the PEP could be accepted without cleaning this up, though - > it would just mean __init_subclass__ would see the "name", "bases" and > "dict" keys when someone attempted to use keyword arguments with the > dynamic type creation APIs. Yes, this would be possible, albeit a bit ugly. I'm not so sure whether backwards compatibility is so important in this case. It is very easy to change the code to the fully cleaned up version. Looking through old stuff I found http://bugs.python.org/issue23722, which describes the following problem: at the time __init_subclass__ is called, super() doesn't work yet for the new class. It does work for __init_subclass__ itself, because that is called on the base class, but not for calls to other classmethods. 
This is a pity, especially because the two-argument form of super() cannot be used either, as the new class has no name yet. The problem is solvable though. The initializations necessary for super() to work properly should simply be moved before the call to __init_subclass__. I implemented that by putting a new attribute into the class's namespace to keep the cell which will later be used by super(). This new attribute would be removed by type.__new__ again, but transiently it would be visible. This technique has already been used for __qualname__. The issue contains a patch that fixes that behavior, and back in the day you proposed I add the problem to the PEP. Should I? Greetings Martin From steve.dower at python.org Sun Jul 3 18:19:06 2016 From: steve.dower at python.org (Steve Dower) Date: Sun, 3 Jul 2016 15:19:06 -0700 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: My thinking on this issue was that some/most packages from the stdlib would move into site-packages. Certainly I'd expect asyncio to be in this category, and probably typing. Even going as far as email and urllib would potentially be beneficial (to those packages, is my thinking). Obviously not every single module can do this, but there are plenty that aren't low-level dependencies for other modules that could. Depending on particular versions of these then becomes a case of adding normal package version constraints - we could even bundle version information for non-updateable packages so that installs fail on incompatible Python versions. The "Uber repository" could be a requirements.txt that pulls down wheels for the selected stable versions of each package so that we still distribute all the same code with the same stability, but users have much more ability to patch their own stdlib after install. 
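Steve's requirements.txt idea can be sketched as a pinned manifest consumed at build time; everything below (the names, the versions, even the premise that these modules would ship as wheels) is hypothetical:

```
# stdlib-requirements.txt for a hypothetical CPython feature release
asyncio == 3.4.4
typing == 3.5.2
email == 6.0.0
```

The release build would download and cache these wheels, while users could later upgrade an individual entry via pip.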
(FWIW, we use a system similar to this at Microsoft for building Visual Studio, so I can vouch that it works on much more complicated software than Python.) Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Paul Moore" Sent: ?7/?3/?2016 14:23 To: "Brett Cannon" Cc: "Guido van Rossum" ; "Nick Coghlan" ; "Python-Dev" ; "Steve Dower" Subject: Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) On 3 July 2016 at 22:04, Brett Cannon wrote: > This last bit is what I would advocate if we broke the stdlib out unless an > emergency patch release is warranted for a specific module (e.g. like > asyncio that started this discussion). Obviously backporting is its own > thing. It's also worth noting that pip has no mechanism for installing an updated stdlib module, as everything goes into site-packages, and the stdlib takes precedence over site-packages unless you get into sys.path hacking abominations like setuptools uses (or at least used to use, I don't know if it still does). So as things stand, independent patch releases of stdlib modules would need to be manually copied into place. Allowing users to override the stdlib opens up a different can of worms - not necessarily one that we couldn't resolve, but IIRC, it was always a deliberate policy that overriding the stdlib wasn't possible (that's why backports have names like unittest2...) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkb.teichmann at gmail.com Sun Jul 3 18:27:20 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Mon, 4 Jul 2016 00:27:20 +0200 Subject: [Python-Dev] PEP 487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Guido, sorry I missed your post... >> One of the big issues that makes library authors reluctant to use >> metaclasses >> (even when they would be appropriate) is the risk of metaclass conflicts. > > Really? 
I've written and reviewed a lot of metaclasses and this has never > worried me. The problem is limited to multiple inheritance, right? I worry a > lot about MI being imposed on classes that weren't written with MI in mind, > but I've never particularly worried about the special case of metaclasses. Yes, the problem only arises with MI. Unfortunately, that's not uncommon: if you want to implement an ABC with a class from a framework which uses metaclasses, you have a metaclass conflict. So then you start making MyFrameworkABCMeta-classes. The worst is if you already have a framework with users out there. No way you add a metaclass to your class, however convenient it would be. Because you never now if some user out there had gotten the idea to implement an ABC with it. Sure, you could let your metaclass inherit from ABCMeta, but is this really how it should be done? (This has already been mentioned by others over at python-ideas: https://mail.python.org/pipermail/python-ideas/2016-February/038506.html) Greetings Martin From tjreedy at udel.edu Sun Jul 3 18:56:40 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 3 Jul 2016 18:56:40 -0400 Subject: [Python-Dev] release cadence In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On 7/3/2016 4:22 PM, Brett Cannon wrote: > So if we really wanted to go this route of breaking out the stdlib, I > think we have two options. One is to have the cpython repo represent the > CPython interpreter and then have a separate stdlib repo. The other > option is to still have cpython represent the interpreter but then each > stdlib module have their own repository. Option 3 is something in between: groups of stdlib modules in their own repository. An obvious example: a gui group with _tkinter, tkinter, idlelib, turtle, turtledemo, and their doc files. Having 100s of repositories would not work well with with TortoiseHg. 
-- Terry Jan Reedy From steve.dower at python.org Sun Jul 3 19:21:22 2016 From: steve.dower at python.org (Steve Dower) Date: Sun, 3 Jul 2016 16:21:22 -0700 Subject: [Python-Dev] release cadence In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <57799DF2.8000803@python.org> On 03Jul2016 1556, Terry Reedy wrote: > On 7/3/2016 4:22 PM, Brett Cannon wrote: > >> So if we really wanted to go this route of breaking out the stdlib, I >> think we have two options. One is to have the cpython repo represent the >> CPython interpreter and then have a separate stdlib repo. The other >> option is to still have cpython represent the interpreter but then each >> stdlib module have their own repository. > > Option 3 is something in between: groups of stdlib modules in their own > repository. An obvious example: a gui group with _tkinter, tkinter, > idlelib, turtle, turtledemo, and their doc files. Having 100s of > repositories would not work well with with TortoiseHg. > A rough count of how I'd break up the current 3.5 Lib folder (which I happened to have handy) suggests no more than 50 repos. But there'd be no need to have all of them checked out just to build - only the ones you want to modify. And in that case, you'd probably have a stable Python to work against the separate package repo and wouldn't need to clone the core one. (I'm envisioning a build process that generates wheels from online sources and caches them. So updating the stdlib wheel cache would be part of the build process, but then the local wheels are used to install.) I personally would only have about 5 repos cloned on any of my dev machines (core, ctypes, distutils, possibly tkinter, ssl), as I rarely touch any other packages. (Having those separate from core is mostly for the versioning benefits - I doubt we could ever release Python without them, but it'd be great to be able to update distutils, ctypes or ssl in place with a simple pip/package mgr command.) 
Cheers, Steve From guido at python.org Sun Jul 3 19:27:34 2016 From: guido at python.org (Guido van Rossum) Date: Sun, 3 Jul 2016 16:27:34 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On Sat, Jul 2, 2016 at 10:50 AM, Martin Teichmann wrote: > Hi list, > > so this is the next round for PEP 487. During the last round, most of > the comments were in the direction that a two step approach for > integrating into Python, first in pure Python, later in C, was not a > great idea and everything should be in C directly. So I implemented it > in C, put it onto the issue tracker here: > http://bugs.python.org/issue27366, and also modified the PEP > accordingly. Thanks! Reviewing inline below. > For those who had not been in the discussion, PEP 487 proposes to add > two hooks, __init_subclass__ which is a classmethod called whenever a > class is subclassed, and __set_owner__, a hook in descriptors which > gets called once the class the descriptor is part of is created. > > While implementing PEP 487 I realized that there is an oddity in the > type base class: type.__init__ forbids using keyword arguments, even > for the usual three arguments it has (name, bases and dict), while > type.__new__ allows for keyword arguments. As I plan to forward any > keyword arguments to the new __init_subclass__, I stumbled over that. > As I write in the PEP, I think it would be a good idea to forbid using > keyword arguments for type.__new__ as well. But if people think this > would be too big a change, it would be possible to do it > differently. This is an area of exceeding subtlety (and also not very well documented/specified, probably). I'd worry that changing anything here might break some code. When a metaclass overrides neither __init__ nor __new__, keyword args will not work because type.__init__ forbids them. 
However when a metaclass overrides them and calls them using super(), it's quite possible that someone ended up calling super().__init__() with three positional args but super().__new__() with keyword args, since the call sites are distinct (in the overrides for __init__ and __new__ respectively). What's your argument for changing this, apart from a desire for more regularity? > Hoping for good comments > > Greetings > > Martin > > The PEP follows: > > PEP: 487 > Title: Simpler customisation of class creation > Version: $Revision$ > Last-Modified: $Date$ > Author: Martin Teichmann , > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 27-Feb-2015 > Python-Version: 3.6 > Post-History: 27-Feb-2015, 5-Feb-2016, 24-Jun-2016, 2-Jul-2016 > Replaces: 422 > > > Abstract > ======== > > Currently, customising class creation requires the use of a custom metaclass. > This custom metaclass then persists for the entire lifecycle of the class, > creating the potential for spurious metaclass conflicts. > > This PEP proposes to instead support a wide range of customisation > scenarios through a new ``__init_subclass__`` hook in the class body, > and a hook to initialize attributes. > > The new mechanism should be easier to understand and use than > implementing a custom metaclass, and thus should provide a gentler > introduction to the full power of Python's metaclass machinery. > > > Background > ========== > > Metaclasses are a powerful tool to customize class creation. They have, > however, the problem that there is no automatic way to combine metaclasses. > If one wants to use two metaclasses for a class, a new metaclass combining > those two needs to be created, typically manually. 
> > This need often comes as a surprise to a user: inheriting from two base > classes coming from two different libraries suddenly raises the necessity > to manually create a combined metaclass, where typically one is not > interested in those details about the libraries at all. This becomes > even worse if one library starts to make use of a metaclass which it > has not done before. While the library itself continues to work perfectly, > suddenly all code combining those classes with classes from another library > fails. > > Proposal > ======== > > While there are many possible ways to use a metaclass, the vast majority > of use cases falls into just three categories: some initialization code > running after class creation, the initialization of descriptors and > keeping the order in which class attributes were defined. > > The first two categories can easily be achieved by having simple hooks > into class creation: > > 1. An ``__init_subclass__`` hook that initializes > all subclasses of a given class. > 2. Upon class creation, a ``__set_owner__`` hook is called on all the > attributes (descriptors) defined in the class. > > The third category is the topic of another PEP, PEP 520. > > As an example, the first use case looks as follows:: > > >>> class SpamBase: > ... # this is implicitly a @classmethod > ... def __init_subclass__(cls, **kwargs): > ... cls.class_args = kwargs > ... super().__init_subclass__(cls, **kwargs) > > >>> class Spam(SpamBase, a=1, b="b"): > ... pass > > >>> Spam.class_args > {'a': 1, 'b': 'b'} > > The base class ``object`` contains an empty ``__init_subclass__`` > method which serves as an endpoint for cooperative multiple inheritance. > Note that this method has no keyword arguments, meaning that all > methods which are more specialized have to process all keyword > arguments. I'm confused. In the above example it would seem that the keyword args {'a': 1, 'b': 'b'} are passed right on to super().__init_subclass__(). 
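For reference, the cooperative consume-and-forward pattern this hook is designed for looks like the following once PEP 487 is in place (a sketch against the Python 3.6 implementation; note that, unlike the draft example quoted above, ``cls`` is not passed explicitly to ``super().__init_subclass__()``, and consumed keywords are stripped before forwarding, since ``object.__init_subclass__`` rejects any leftovers):

```python
class PluginBase:
    registry = []

    # Implicitly treated as a classmethod: called with each new
    # subclass as `cls` whenever PluginBase is subclassed.
    def __init_subclass__(cls, prefix="plugin", **kwargs):
        # Forward only the keywords we did not consume; any keyword
        # that reaches object.__init_subclass__ raises a TypeError.
        super().__init_subclass__(**kwargs)
        cls.name = "{}:{}".format(prefix, cls.__name__)
        PluginBase.registry.append(cls)


class JSONPlugin(PluginBase, prefix="codec"):
    pass


print(JSONPlugin.name)  # codec:JSONPlugin
```

The registry here is a hypothetical use case, but it is the kind of "initialization code running after class creation" the Proposal section describes, done without any custom metaclass.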
Do you mean that it ignores all keyword args? Or that it has no positional args? (Both of which would be consistent with the example.) > This general proposal is not a new idea (it was first suggested for > inclusion in the language definition `more than 10 years ago`_, and a > similar mechanism has long been supported by `Zope's ExtensionClass`_), > but the situation has changed sufficiently in recent years that > the idea is worth reconsidering for inclusion. Can you state exactly at which point during class initialization __init_subclass__() is called? (Surely by now, having implemented it, you know exactly where. :-) [This is as far as I got reviewing when the weekend activities interrupted me. In the light of ongoing discussion I'm posting this now -- I'll continue later.] -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Sun Jul 3 19:34:43 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jul 2016 09:34:43 +1000 Subject: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release) Message-ID: On 4 July 2016 at 00:39, Guido van Rossum wrote: > Another thought recently occurred to me. Do releases really have to be > such big productions? A recent ACM article by Tom Limoncelli[1] > reminded me that we're doing releases the old-fashioned way -- > infrequently, and with lots of manual labor. Maybe we could > (eventually) try to strive for a lighter-weight, more automated > release process? It would be less work, and it would reduce stress for > authors of stdlib modules and packages -- there's always the next > release. I would think this wouldn't obviate the need for carefully > planned and timed "big deal" feature releases, but it could make the > bug fix releases *less* of a deal, for everyone. Yes, getting the maintenance releases to the point of being largely automated would be beneficial. 
However, I don't think the problem is lack of desire for that outcome, it's that maintaining the release toolchain pretty much becomes a job at that point, as you really want to be producing nightly builds (since the creation of those nightlies in effect becomes the regression test suite for the release toolchain), and you also need to more strictly guard against even temporary regressions in the maintenance branches. There are some variants we could pursue around that model (e.g. automating Python-only updates without automating updates that require rebuilding the core interpreter binaries for Windows and Mac OS X), but none of it is the kind of thing likely to make anyone say "I want to work on improving this in my free time". Even for commercial redistributors, it isn't easy for us to make the business case for assigning someone to work on it, since we're generally working from the source trees rather than the upstream binary releases. I do think it's worth putting this into our bucket of "ongoing activities we could potentially propose to the PSF for funding", though. I know Ewa (Jodlowska, the PSF's Director of Operations) is interested in better supporting the Python development community directly (hence https://donate.pypi.io/ ) in addition to the more indirect community building efforts like PyCon US and the grants program, so I've been trying to build up a mental list of CPython development pain points where funded activities could potentially improve the contributor experience for volunteers. 
So far I have: - issue triage (including better acknowledging folks that help out with triage efforts) - patch review (currently "wait and see" pending the impact of the GitHub migration) - nightly pre-release builds (for ease of contribution without first becoming a de facto C developer and to help make life easier for release managers) That last one is a new addition to my list based on this thread, and I think it's particularly interesting in that it would involve a much smaller set of target users than the first two (with the primary stakeholders being the release managers and the folks preparing the binary installers), but also a far more concrete set of deliverables (i.e. nightly binary builds being available for active development and maintenance branches for at least Windows and Mac OS X, and potentially for the manylinux1 baseline API defined in PEP 513) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Jul 3 20:31:21 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jul 2016 10:31:21 +1000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: On 4 July 2016 at 06:22, Brett Cannon wrote: > [forking the conversation since the subject has shifted] > > On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote: >> >> Many of our users prefer stability (the sort who plan operating system >> updates years in advance), but generally I'm in favour of more frequent >> releases. > > > So there's our 18 month cadence for feature/minor releases, and then there's > the 6 month cadence for bug-fix/micro releases. At the language summit there > was the discussion kicked off by Ned about our release schedule and a group > of us had a discussion afterward where a more strict release cadence of 12 > months with the release date tied to a consistent month -- e.g. 
September of > every year -- instead of our hand-wavy "about 18 months after the last > feature release"; people in the discussion seemed to like the 12 months > consistency idea. While we liked the "consistent calendar cadence that is some multiple of 6 months" idea, several of us thought 12 months was way too short as it makes for too many entries in third party support matrices. I'd also encourage folks to go back and read the two PEPs that were written the last time we had a serious discussion about changing the release cadence, since many of the concerns raised then remain relevant today: * PEP 407 (faster cycle with LTS releases): https://www.python.org/dev/peps/pep-0407/ * PEP 413 (separate stdlib versioning): https://www.python.org/dev/peps/pep-0413/ In particular, the "unsustainable community support matrix" problem I describe in PEP 413 is still a major point of concern for me - we know from PyPI's download metrics that Python 2.6 is still in heavy use, so many folks have only started to bump their "oldest supported version" up to Python 2.7 in the last year or so (5+ years after it was released). People have been a bit more aggressive in dropping compatibility with older Python 3 versions, but it's also been the case that availability and adoption of LTS versions of Python 3 has been limited to date (mainly just the 5 years for 3.2 in Ubuntu 12.04 and 3.4 in Ubuntu 14.04 - the longest support lifecycle I'm aware of after that is Red Hat's 3 years for Red Hat Software Collections). 
The reason I withdrew PEP 413 as a prospective answer to that problem is that I think there's generally only a limited number of libraries that are affected by the challenge of sometimes getting too old to be useful to cross-platform library and framework developers (mostly network protocol and file format related, but also the ones still marked as provisional), and the introduction of ensurepip gives us a new way of dealing with them: treating *those particular libraries* as independently upgradable bundled libraries where the CPython build process creates wheel files for them, and then uses ensurepip's internally bundled pip to install those wheels at install time, even if pip itself is left out of the installation. In the specific case that prompted this thread for example, I don't think the problem is really that the standard library release cadence is too slow in general: it's that "pip install --upgrade asyncio" *isn't an available option* in Python 3.5, even if you're using a virtual environment. For other standard library modules, we've tackled that by letting people do things like "pip install contextlib2" to get the newer versions, even on older Python releases - individual projects are then responsible for opting in to using either the stdlib version or the potentially-newer backported version. However, aside from the special case of ensurepip, what we've yet to do is ever make a standard library package *itself* independently upgradable (such that the Python version merely implies a *minimum* version of that library, rather than an exact version). Since it has core developers actively involved in its development, and already provides a PyPI package for the sake of Python 3.3 users, perhaps "asyncio" could make a good guinea pig for designing such a bundling process? Cheers, Nick. 
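The opt-in arrangement described above (a project chooses between the stdlib module and a potentially newer backport) typically comes down to a guarded import. A sketch using the contextlib2 backport mentioned earlier; whether the backport is installed is environment-dependent, and the stdlib fallback works on any Python 3.3+:

```python
import io

try:
    # Independently upgradable backport from PyPI, if present.
    from contextlib2 import ExitStack
except ImportError:
    # Version bundled with the interpreter (stdlib, Python 3.3+).
    from contextlib import ExitStack

# Client code is identical whichever module won the import.
with ExitStack() as stack:
    buf = stack.enter_context(io.StringIO("hello"))
    data = buf.read()

print(data)  # hello
```

An independently upgradable stdlib package would effectively make the first branch unnecessary: the Python version would only imply a minimum version of the library, as the paragraph above suggests.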
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris at chriskrycho.com Sun Jul 3 20:34:39 2016 From: chris at chriskrycho.com (Chris Krycho) Date: Sun, 3 Jul 2016 20:34:39 -0400 Subject: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release) In-Reply-To: References: Message-ID: <0B2C5604-D55B-4FBB-AC5C-7834DCE21CE9@chriskrycho.com> The bots Mozilla runs around both Rust and Servo should make a lot of this much lower overhead if they can be repurposed (as I believe several other communities have already done). Homu, the build manager tool, runs CI (including buildbots, Travis, etc.), is integrated with GitHub PRs so maintainers can trigger it with a comment there, and can also roll up a bunch of changes into one (handy to pull together e.g. a bunch of small documentation changes like typo fixes): https://github.com/barosl/homu That seems to keep the pain level of having an always-building-and-passing-tests nightly version much lower. Aside: I don't want to flood these discussions with "yay Rust!" stuff, so this will probably be my last such response unless something else really jumps out. ;-) Thanks for the work you're all doing here. Regards, Chris Krycho > On Jul 3, 2016, at 7:34 PM, Nick Coghlan wrote: > >> On 4 July 2016 at 00:39, Guido van Rossum wrote: >> Another thought recently occurred to me. Do releases really have to be >> such big productions? A recent ACM article by Tom Limoncelli[1] >> reminded me that we're doing releases the old-fashioned way -- >> infrequently, and with lots of manual labor. Maybe we could >> (eventually) try to strive for a lighter-weight, more automated >> release process? It would be less work, and it would reduce stress for >> authors of stdlib modules and packages -- there's always the next >> release. 
I would think this wouldn't obviate the need for carefully >> planned and timed "big deal" feature releases, but it could make the >> bug fix releases *less* of a deal, for everyone. > > Yes, getting the maintenance releases to the point of being largely > automated would be beneficial. However, I don't think the problem is > lack of desire for that outcome, it's that maintaining the release > toolchain pretty much becomes a job at that point, as you really want > to be producing nightly builds (since the creation of those nightlies > in effect becomes the regression test suite for the release > toolchain), and you also need to more strictly guard against even > temporary regressions in the maintenance branches. > > There are some variants we could pursue around that model (e.g. > automating Python-only updates without automating updates that require > rebuilding the core interpreter binaries for Windows and Mac OS X), > but none of it is the kind of thing likely to make anyone say "I want > to work on improving this in my free time". Even for commercial > redistributors, it isn't easy for us to make the business case for > assigning someone to work on it, since we're generally working from > the source trees rather than the upstream binary releases. > > I do think it's worth putting this into our bucket of "ongoing > activities we could potentially propose to the PSF for funding", > though. I know Ewa (Jodlowska, the PSF's Director of Operations) is > interested in better supporting the Python development community > directly (hence https://donate.pypi.io/ ) in addition to the more > indirect community building efforts like PyCon US and the grants > program, so I've been trying to build up a mental list of CPython > development pain points where funded activities could potentially > improve the contributor experience for volunteers. 
So far I have: > > - issue triage (including better acknowledging folks that help out > with triage efforts) > - patch review (currently "wait and see" pending the impact of the > GitHub migration) > - nightly pre-release builds (for ease of contribution without first > becoming a de facto C developer and to help make life easier for > release managers) > > That last one is a new addition to my list based on this thread, and I > think it's particularly interesting in that it would involve a much > smaller set of target users than the first two (with the primary > stakeholders being the release managers and the folks preparing the > binary installers), but also a far more concrete set of deliverables > (i.e. nightly binary builds being available for active development and > maintenance branches for at least Windows and Mac OS X, and > potentially for the manylinux1 baseline API defined in PEP 513) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris%40chriskrycho.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gvanrossum at gmail.com Sun Jul 3 21:02:22 2016 From: gvanrossum at gmail.com (Guido van Rossum) Date: Sun, 3 Jul 2016 18:02:22 -0700 Subject: [Python-Dev] PEP 487: Simpler customization of class creation In-Reply-To: References: Message-ID: OK, I see this point now. Still looking for time to review the rest of your PEP! --Guido (mobile) On Jul 3, 2016 3:29 PM, "Martin Teichmann" wrote: > Hi Guido, > > sorry I missed your post... > > >> One of the big issues that makes library authors reluctant to use > >> metaclasses > >> (even when they would be appropriate) is the risk of metaclass > conflicts. > > > > Really? 
I've written and reviewed a lot of metaclasses and this has never > > worried me. The problem is limited to multiple inheritance, right? I > worry a > > lot about MI being imposed on classes that weren't written with MI in > mind, > > but I've never particularly worried about the special case of > metaclasses. > > Yes, the problem only arises with MI. Unfortunately, that's not > uncommon: if you want to implement an ABC with a class from a > framework which uses metaclasses, you have a metaclass conflict. So > then you start making MyFrameworkABCMeta-classes. > > The worst is if you already have a framework with users out there. No > way you add a metaclass to your class, however convenient it would > be. Because you never now if some user out there had gotten the idea > to implement an ABC with it. Sure, you could let your metaclass > inherit from ABCMeta, but is this really how it should be done? > > (This has already been mentioned by others over at python-ideas: > https://mail.python.org/pipermail/python-ideas/2016-February/038506.html) > > Greetings > > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Jul 3 22:12:58 2016 From: brett at python.org (Brett Cannon) Date: Mon, 04 Jul 2016 02:12:58 +0000 Subject: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release) In-Reply-To: <0B2C5604-D55B-4FBB-AC5C-7834DCE21CE9@chriskrycho.com> References: <0B2C5604-D55B-4FBB-AC5C-7834DCE21CE9@chriskrycho.com> Message-ID: Once the GH migration occurs I think we will take a look at Homu (it's been brought up previously). 
On Sun, Jul 3, 2016, 17:35 Chris Krycho wrote: > The bots Mozilla runs around both Rust and Servo should make a lot of this > much lower overhead if they can be repurposed (as I believe several other > communities have already done). > > Homu, the build manager tool, runs CI (including buildbots, Travis, etc.), > is integrated with GitHub PRs so maintainers can trigger it with a comment > there, and can also roll up a bunch of changes into one (handy to pull > together e.g. a bunch of small documentation changes like typo fixes): > https://github.com/barosl/homu That seems to keep the pain level of > having an always-building-and-passing-tests nightly version much lower. > > Aside: I don't want to flood these discussions with "yay Rust!" stuff, so > this will probably be my last such response unless something else really > jumps out. ;-) Thanks for the work you're all doing here. > > Regards, > Chris Krycho > > On Jul 3, 2016, at 7:34 PM, Nick Coghlan wrote: > > On 4 July 2016 at 00:39, Guido van Rossum wrote: > > Another thought recently occurred to me. Do releases really have to be > > such big productions? A recent ACM article by Tom Limoncelli[1] > > reminded me that we're doing releases the old-fashioned way -- > > infrequently, and with lots of manual labor. Maybe we could > > (eventually) try to strive for a lighter-weight, more automated > > release process? It would be less work, and it would reduce stress for > > authors of stdlib modules and packages -- there's always the next > > release. I would think this wouldn't obviate the need for carefully > > planned and timed "big deal" feature releases, but it could make the > > bug fix releases *less* of a deal, for everyone. > > > Yes, getting the maintenance releases to the point of being largely > automated would be beneficial. 
However, I don't think the problem is > lack of desire for that outcome, it's that maintaining the release > toolchain pretty much becomes a job at that point, as you really want > to be producing nightly builds (since the creation of those nightlies > in effect becomes the regression test suite for the release > toolchain), and you also need to more strictly guard against even > temporary regressions in the maintenance branches. > > There are some variants we could pursue around that model (e.g. > automating Python-only updates without automating updates that require > rebuilding the core interpreter binaries for Windows and Mac OS X), > but none of it is the kind of thing likely to make anyone say "I want > to work on improving this in my free time". Even for commercial > redistributors, it isn't easy for us to make the business case for > assigning someone to work on it, since we're generally working from > the source trees rather than the upstream binary releases. > > I do think it's worth putting this into our bucket of "ongoing > activities we could potentially propose to the PSF for funding", > though. I know Ewa (Jodlowska, the PSF's Director of Operations) is > interested in better supporting the Python development community > directly (hence https://donate.pypi.io/ ) in addition to the more > indirect community building efforts like PyCon US and the grants > program, so I've been trying to build up a mental list of CPython > development pain points where funded activities could potentially > improve the contributor experience for volunteers. 
So far I have: > > - issue triage (including better acknowledging folks that help out > with triage efforts) > - patch review (currently "wait and see" pending the impact of the > GitHub migration) > - nightly pre-release builds (for ease of contribution without first > becoming a de facto C developer and to help make life easier for > release managers) > > That last one is a new addition to my list based on this thread, and I > think it's particularly interesting in that it would involve a much > smaller set of target users than the first two (with the primary > stakeholders being the release managers and the folks preparing the > binary installers), but also a far more concrete set of deliverables > (i.e. nightly binary builds being available for active development and > maintenance branches for at least Windows and Mac OS X, and > potentially for the manylinux1 baseline API defined in PEP 513) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris%40chriskrycho.com > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Mon Jul 4 01:04:05 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Jul 2016 15:04:05 +1000 Subject: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release) In-Reply-To: <0B2C5604-D55B-4FBB-AC5C-7834DCE21CE9@chriskrycho.com> References: <0B2C5604-D55B-4FBB-AC5C-7834DCE21CE9@chriskrycho.com> Message-ID: On 4 July 2016 at 10:34, Chris Krycho wrote: > The bots Mozilla runs around both Rust and Servo should make a lot of this > much lower overhead if they can be repurposed (as I believe several other > communities have already done). > > Homu, the build manager tool, runs CI (including buildbots, Travis, etc.), > is integrated with GitHub PRs so maintainers can trigger it with a comment > there, and can also roll up a bunch of changes into one (handy to pull > together e.g. a bunch of small documentation changes like typo fixes): > https://github.com/barosl/homu That seems to keep the pain level of having > an always-building-and-passing-tests nightly version much lower. Aye, as Brett mentioned, we're definitely interested in the work Rust/Mozilla have been doing, and it's come up in previous discussions on the core-workflow list like https://mail.python.org/pipermail/core-workflow/2016-February/000480.html However, automating the Mac OS X and Windows Installer builds and the subsequent uploads to python.org gets more challenging, as at that point you're looking at either producing unsigned binaries, or else automating the creation of signed binaries, and the latter means you start running into secrets management problems that don't exist for plain CI builds. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From srkunze at mail.de Mon Jul 4 06:32:24 2016 From: srkunze at mail.de (Sven R. 
Kunze) Date: Mon, 4 Jul 2016 12:32:24 +0200 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <577A3B38.20508@mail.de> On 03.07.2016 16:39, Guido van Rossum wrote: > Another thought recently occurred to me. Do releases really have to be > such big productions? A recent ACM article by Tom Limoncelli[1] > reminded me that we're doing releases the old-fashioned way -- > infrequently, and with lots of manual labor. Maybe we could > (eventually) try to strive for a lighter-weight, more automated > release process? I can only recommend such an approach. We use it internally for years now and the workload for releasing, quality assurance and final deployment dropped significantly. We basically automated everything. The devs are pretty happy with it now and sometimes "mis-use" it for some of its side-products; but that's okay as it's very convenient to use. For some parts we use pip to install/upgrade the dependencies but CPython might need to use a different tooling for the stdlib and its C-dependencies. If you need some assistance here, let me know. > It would be less work, and it would reduce stress for > authors of stdlib modules and packages -- there's always the next > release. I would think this wouldn't obviate the need for carefully > planned and timed "big deal" feature releases, but it could make the > bug fix releases *less* of a deal, for everyone. 
> > [1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract > (sadly requires login) > Best, Sven From kevin-lists at theolliviers.com Mon Jul 4 11:22:01 2016 From: kevin-lists at theolliviers.com (Kevin Ollivier) Date: Mon, 04 Jul 2016 08:22:01 -0700 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: <577A3B38.20508@mail.de> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <577A3B38.20508@mail.de> Message-ID: On 7/4/16, 3:32 AM, "Python-Dev on behalf of Sven R. Kunze" wrote: >On 03.07.2016 16:39, Guido van Rossum wrote: >> Another thought recently occurred to me. Do releases really have to be >> such big productions? A recent ACM article by Tom Limoncelli[1] >> reminded me that we're doing releases the old-fashioned way -- >> infrequently, and with lots of manual labor. Maybe we could >> (eventually) try to strive for a lighter-weight, more automated >> release process? > >I can only recommend such an approach. We use it internally for years >now and the workload for releasing, quality assurance and final >deployment dropped significantly. We basically automated everything. >The devs are pretty happy with it now and sometimes "mis-use" it for >some of its side-products; but that's okay as it's very convenient to use. > >For some parts we use pip to install/upgrade the dependencies but >CPython might need to use a different tooling for the stdlib and its >C-dependencies. > > >If you need some assistance here, let me know. I also offer my help with setting up CI and automated builds. :) I've actually done build automation for a number of the projects I've worked on in the past. In every case, doing so gave benefits that far outweighed the work needed to get it going. Regards, Kevin >> It would be less work, and it would reduce stress for >> authors of stdlib modules and packages -- there's always the next >> release. 
I would think this wouldn't obviate the need for carefully >> planned and timed "big deal" feature releases, but it could make the >> bug fix releases *less* of a deal, for everyone. >> >> [1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract >> (sadly requires login) >> > >Best, >Sven >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >https://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: https://mail.python.org/mailman/options/python-dev/kevin-lists%40theolliviers.com From doko at ubuntu.com Mon Jul 4 11:26:17 2016 From: doko at ubuntu.com (Matthias Klose) Date: Mon, 4 Jul 2016 17:26:17 +0200 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <577A8019.1000703@ubuntu.com> On 03.07.2016 06:09, Nick Coghlan wrote: > On 2 July 2016 at 16:17, Ludovic Gasc wrote: >> Hi everybody, >> >> I fully understand that AsyncIO is a drop in the ocean of CPython, you're >> working to prepare the entire 3.5.3 release for December, not yet ready. >> However, you might create a 3.5.2.1 release with only this AsyncIO fix ? > > That would be more work than just doing a 3.5.3 release, though - the > problem isn't with the version number bump, it's with asking the > release team to do additional work without clearly explaining the > rationale for the request (more on that below). While some parts of > the release process are automated, there's still a lot of steps to run > through by a number of different people: > https://www.python.org/dev/peps/pep-0101/. > > The first key question to answer in this kind of situation is: "Is > there code that will run correctly on 3.5.1 that will now fail on > 3.5.2?" (i.e. it's a regression introduced by the asyncio and > coroutine changes in the point release rather than something that was > already broken in 3.5.0 and 3.5.1). 
I don't know about 3.5.1 exactly, but things that worked on the branch in early April were broken by the final 3.5.2 release. I was trying to prepare for an update to 3.5.2 for Ubuntu 16.04 LTS, and found some regressions, documented at https://launchpad.net/bugs/1586673 (comment #6). It looks like at least the packages nuitka, python-websockets and urwid fail to build with the 3.5.2 release. I still need to investigate. Unless I'm missing things, there is unfortunately no issue in the Python bug tracker, and there is no patch for the 3.5 branch either. My understanding is that it's not yet decided what to do about the issue. Matthias From steve.dower at python.org Mon Jul 4 18:34:18 2016 From: steve.dower at python.org (Steve Dower) Date: Mon, 4 Jul 2016 15:34:18 -0700 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <577A3B38.20508@mail.de> Message-ID: <577AE46A.8010209@python.org> On 04Jul2016 0822, Kevin Ollivier wrote: > On 7/4/16, 3:32 AM, "Python-Dev on behalf of Sven R. Kunze" wrote: >> >> If you need some assistance here, let me know. > > I also offer my help with setting up CI and automated builds. :) I've actually done build automation for a number of the projects I've worked on in the past. In every case, doing so gave benefits that far outweighed the work needed to get it going. It's actually not that much effort - we already have a fleet of buildbots that automatically build, test and report on Python's stability on a range of platforms. Once a build machine is configured, producing a build is typically a single command. The benefit we get from the heavyweight release procedures is that someone trustworthy (the Release Manager) has controlled the process, reducing the rate of change and ensuring stability toward the end of the process. 
Also that trustworthy people (the build managers) have downloaded, built and signed the code without modifying it or injecting unauthorised code. As a result of these, people trust official releases to be correct and stable. It's very hard to put the same trust in an automated system (and it's a great way to lose signing certificates). I don't believe the release procedures are too onerous (though Benjamin, Larry and Ned are welcome to disagree :) ), and possibly there is some more scripting that could help out, but there's really nothing in the process itself that prevents us from doing more releases. More frequent releases would mean more frequent feature freezes and more time in "cherry-picking" mode (where the RM has to approve and merge each individual fix), which affects all contributors. Shorter cycles make it harder to get changes reviewed, merged and tested. This is the limiting factor. So don't worry about offering skills/effort for CI systems (unless you want to maintain a few buildbots, in which case read https://www.python.org/dev/buildbot/) - go and help review and improve some patches instead. The shorter the cycle between finding a need and committing the patch, and the more often issues are found *before* commit, the more frequently we can do releases. Cheers, Steve From brett at python.org Mon Jul 4 18:42:12 2016 From: brett at python.org (Brett Cannon) Date: Mon, 04 Jul 2016 22:42:12 +0000 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: <577AE46A.8010209@python.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <577A3B38.20508@mail.de> <577AE46A.8010209@python.org> Message-ID: I should quickly mention that future workflow-related work regarding https://www.python.org/dev/peps/pep-0512 and the move to GitHub (e.g. CI) happens on the core-workflow mailing list. 
On Mon, 4 Jul 2016 at 15:35 Steve Dower wrote: > On 04Jul2016 0822, Kevin Ollivier wrote: > > On 7/4/16, 3:32 AM, "Python-Dev on behalf of Sven R. Kunze" > srkunze at mail.de> wrote: > >> > >> If you need some assistance here, let me know. > > > > I also offer my help with setting up CI and automated builds. :) I've > actually done build automation for a number of the projects I've worked on > in the past. In every case, doing so gave benefits that far outweighed the > work needed to get it going. > > It's actually not that much effort - we already have a fleet of > buildbots that automatically build, test and report on Python's > stability on a range of platforms. Once a build machine is configured, > producing a build is typically a single command. > > The benefit we get from the heavyweight release procedures is that > someone trustworthy (the Release Manager) has controlled the process, > reducing the rate of change and ensuring stability over the end of the > process. Also that trustworthy people (the build managers) have > downloaded, built and signed the code without modifying it or injecting > unauthorised code. > > As a result of these, people trust official releases to be correct and > stable. It's very hard to put the same trust in an automated system (and > it's a great way to lose signing certificates). > > I don't believe the release procedures are too onerous (though Benjamin, > Larry and Ned are welcome to disagree :) ), and possibly there is some > more scripting that could help out, but there's really nothing in the > direct process that we need to do more releases. > > More frequent releases would mean more frequent feature freezes and more > time in "cherry-picking" mode (where the RM has to approve and merge > each individual fix), which affects all contributors. Shorter cycles > make it harder to get changes reviewed, merged and tested. This is the > limiting factor. 
> > So don't worry about offering skills/effort for CI systems (unless you > want to maintain a few buildbots, in which case read > https://www.python.org/dev/buildbot/) - go and help review and improve > some patches instead. The shorter the cycle between finding a need and > committing the patch, and the more often issues are found *before* > commit, the more frequently we can do releases. > > Cheers, > Steve > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Mon Jul 4 20:36:24 2016 From: larry at hastings.org (Larry Hastings) Date: Mon, 4 Jul 2016 19:36:24 -0500 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3, release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <577B0108.30904@hastings.org> On 07/03/2016 09:39 AM, Guido van Rossum wrote: > Do releases really have to be > such big productions? A recent ACM article by Tom Limoncelli[1] > reminded me that we're doing releases the old-fashioned way -- > infrequently, and with lots of manual labor. Maybe we could > (eventually) try to strive for a lighter-weight, more automated > release process? Glyph suggested this as part of his presentation at the 2015 Language Summit: https://lwn.net/Articles/640181/ I won't summarize his comments here, as Jake already did that for us ;-) //arry/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at holdenweb.com Tue Jul 5 01:41:21 2016 From: steve at holdenweb.com (Steve Holden) Date: Tue, 5 Jul 2016 06:41:21 +0100 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: References: Message-ID: Hi Peter, While the humble webmasters can do little about this it's possible the developers can, so I am forwarding your email to their mailing list. regards Steve Steve Holden On Tue, Jul 5, 2016 at 3:30 AM, Peter via Webmaster wrote: > Hi > I'm a heavy user of Python on Windows, am a Basic PSF member and have > contributed to core Python. > The Python 2.7.12 Windows installer download is being marked as untrusted > by Norton Internet Security. I've been on chat with Symantec, and they've > said that I can't do anything about that rating, but that the site owner > can. > I've been pointed to: > https://safeweb.norton.com/help/site_owners > Interestingly, the 3.5.2 download is flagged as safe. > Hoping to get more Python out to users! > Thanks > Peter > > _______________________________________________ > Webmaster mailing list > Webmaster at python.org > https://mail.python.org/mailman/listinfo/webmaster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From encukou at gmail.com Tue Jul 5 03:53:24 2016 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 5 Jul 2016 09:53:24 +0200 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> On 07/04/2016 12:19 AM, Steve Dower wrote: > My thinking on this issue was that some/most packages from the stdlib > would move into site-packages. Certainly I'd expect asyncio to be in > this category, and probably typing. Even going as far as email and > urllib would potentially be beneficial (to those packages, is my thinking). 
> > Obviously not every single module can do this, but there are plenty that > aren't low-level dependencies for other modules that could. Depending on > particular versions of these then becomes a case of adding normal > package version constraints - we could even bundle version information > for non-updateable packages so that installs fail on incompatible Python > versions. > > The "Uber repository" could be a requirements.txt that pulls down wheels > for the selected stable versions of each package so that we still > distribute all the same code with the same stability, but users have > much more ability to patch their own stdlib after install. > > (FWIW, we use a system similar to this at Microsoft for building Visual > Studio, so I can vouch that it works on much more complicated software > than Python.) While we're on the subject, I'd like to offer another point for consideration: not all implementations of Python can provide the full stdlib, and not everyone wants the full stdlib. For MicroPython, most of Python's batteries are too heavy. Tkinter on Android is probably not useful enough for people to port it. Weakref can't be emulated nicely in Javascript. If packages had a way to opt-out of needing the whole standard library, and instead specify the stdlib subset they need, answering questions like "will this run on my phone?" and "what piece of the stdlib do we want to port next?" would be easier. Both Debian and Fedora package some parts of the stdlib separately (tkinter, venv, tests), and have opt-in subsets of the stdlib for minimal systems (python-minimal, system-python). Tools like pyinstaller run magic heuristics to determine what parts of stdlib can be left out. It would help these projects if the "not all of stdlib is installed" case was handled more systematically at the CPython or distutils level. As I said on the Language summit, this is just talk; I don't currently have the resources to drive this effort. 
But if anyone is thinking of splitting the stdlib, please keep these points in mind as well. I think that, at least, if "pip install -U asyncio" becomes possible, "pip uninstall --yes-i-know-what-im-doing asyncio" should be possible as well. > From: Paul Moore > Sent: 7/3/2016 14:23 > To: Brett Cannon > Cc: Guido van Rossum ; Nick Coghlan > ; Python-Dev ; > Steve Dower > Subject: Re: [Python-Dev] release cadence (was: Request for CPython > 3.5.3 release) > > On 3 July 2016 at 22:04, Brett Cannon wrote: >> This last bit is what I would advocate if we broke the stdlib out > unless an >> emergency patch release is warranted for a specific module (e.g. like >> asyncio that started this discussion). Obviously backporting is its own >> thing. > > It's also worth noting that pip has no mechanism for installing an > updated stdlib module, as everything goes into site-packages, and the > stdlib takes precedence over site-packages unless you get into > sys.path hacking abominations like setuptools uses (or at least used > to use, I don't know if it still does). So as things stand, > independent patch releases of stdlib modules would need to be manually > copied into place. > > Allowing users to override the stdlib opens up a different can of > worms - not necessarily one that we couldn't resolve, but IIRC, it was > always a deliberate policy that overriding the stdlib wasn't possible > (that's why backports have names like unittest2...) 
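[The precedence Paul describes can be checked empirically: in a stock CPython layout the stdlib directory sits on sys.path ahead of any site-packages entry, so a pip-installed distribution with a stdlib name is shadowed. A minimal sketch, assuming a standard CPython install; this is an illustration, not part of any proposal:]

```python
import json  # a stdlib module that could, in principle, be shadowed
import sys
import sysconfig

# Where this interpreter keeps its standard library.
stdlib_dir = sysconfig.get_paths()["stdlib"]

# The bundled json package is resolved from the stdlib directory, not from
# site-packages, because the stdlib directory comes earlier on sys.path.
print(json.__file__.startswith(stdlib_dir))

# Any site-packages entries appear later on sys.path:
site_dirs = [p for p in sys.path if "site-packages" in p]
print(site_dirs)
```

[Running this in a virtual environment gives the same answer for json, since venvs share the base interpreter's stdlib directory.]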
> From rosuav at gmail.com Tue Jul 5 04:05:57 2016 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 5 Jul 2016 18:05:57 +1000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> Message-ID: On Tue, Jul 5, 2016 at 5:53 PM, Petr Viktorin wrote: > If packages had a way to opt-out of needing the whole standard library, > and instead specify the stdlib subset they need, answering questions > like "will this run on my phone?" and "what piece of the stdlib do we > want to port next?" would be easier. On the flip side, answering questions like "what version of Python do people need to run my program" become harder to answer, particularly if you have third-party dependencies. (The latest version of numpy might decide that it's going to 'import statistics', for instance.) One of the arguments against splitting the stdlib was that corporate approval for software is often hard to obtain, and it's much easier to say "I need approval to use Python, exactly as distributed by python.org" than "I need approval to use Python-core plus these five Python-stdlib sections". ChrisA From encukou at gmail.com Tue Jul 5 05:04:48 2016 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 5 Jul 2016 11:04:48 +0200 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> Message-ID: <17a55b6b-04f3-0427-d01b-98d48b7bf6aa@gmail.com> On 07/05/2016 10:05 AM, Chris Angelico wrote: > On Tue, Jul 5, 2016 at 5:53 PM, Petr Viktorin wrote: >> If packages had a way to opt-out of needing the whole standard library, >> and instead specify the stdlib subset they need, answering questions >> like "will this run on my phone?" 
and "what piece of the stdlib do we >> want to port next?" would be easier. > > On the flip side, answering questions like "what version of Python do > people need to run my program" become harder to answer, particularly > if you have third-party dependencies. (The latest version of numpy > might decide that it's going to 'import statistics', for instance.) That question is already hard to answer. How do you tell if a library works on MicroPython? Or Python for Android? I'm not arguing to change the default; if the next version of numpy doesn't do anything, nothing should change. However, under the status quo, "Python 3.4" means "CPython 3.4 with the full stdlib, otherwise all bets are off", and there's no good way to opt in to more granularity. > One of the arguments against splitting the stdlib was that corporate > approval for software is often hard to obtain, and it's much easier to > say "I need approval to use Python, exactly as distributed by > python.org" than "I need approval to use Python-core plus these five > Python-stdlib sections". I'm not arguing that "Python, exactly as distributed by python.org" should stop including all of the stdlib. I would like to make stripped-down variants of CPython easier to build, and to make it possible to opt in to using CPython without all of the stdlib, so that major problems with stdlib availability in other Python implementations can be caught early. Basically, instead of projects getting commits like "Add metadata for one flavor of Android packaging tool", I'd like to see "Add common metadata for Android, iPhone, PyInstaller, and minimal Linux, and make sure the CPython-based CI smoke-tests that metadata". Also, I believe corporate approval for python.org's Python is a bit of a red herring: usually you'd get approval for Python distributed by Continuum or Red Hat or Canonical or some such. As a Red Hat employee, I can say that what I'm suggesting won't make me suffer, and I see no reason it would hurt the others either. 
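[Even today, code that wants to survive on stripped-down interpreters typically probes for the stdlib pieces it needs rather than assuming a full installation. A rough sketch of the kind of granularity check Petr describes; the module list is illustrative only:]

```python
import importlib.util

# Stdlib pieces that distributors commonly split out or omit
# (tkinter and venv are separate packages on Debian/Fedora, for example).
WANTED = ["tkinter", "venv", "sqlite3", "lzma", "ssl"]

# find_spec() reports availability without actually importing the module.
available = {name: importlib.util.find_spec(name) is not None
             for name in WANTED}
missing = sorted(name for name, ok in available.items() if not ok)

if missing:
    print("stdlib subset incomplete, missing:", ", ".join(missing))
else:
    print("full requested subset present")
```

[The proposal in the thread is essentially to move this kind of probing from runtime guesswork into declared package metadata that tools and CI could check.]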
From victor.stinner at gmail.com Tue Jul 5 05:19:23 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 5 Jul 2016 11:19:23 +0200 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> Message-ID: Hi, asyncio is a good example because it wants to evolve faster than the whole "CPython package". Each minor version of CPython adds new features in asyncio. It is not always easy to document these changes. Moreover, the behaviour of some functions also changed in minor versions. asyncio doesn't respect semantic versioning (http://semver.org/): the major version should change if the behaviour of an existing function changes. 2016-07-05 10:05 GMT+02:00 Chris Angelico : > On the flip side, answering questions like "what version of Python do > people need to run my program" become harder to answer, particularly > if you have third-party dependencies. (The latest version of numpy > might decide that it's going to 'import statistics', for instance.) Recently, I wrote a "perf" module and I wanted to use the statistics module. I was surprised to see that PyPI has a package called "statistics" which just works on Python 2.7. In practice, I can use statistics on Python 2.7, 3.4 and even 3.2 (but I didn't try, this version is too old). It's a matter of describing dependencies correctly. pip supports a requirements.txt file which is a nice way to declare dependencies. 
You can: * specify the minimum library version * make some libraries specific to certain operating systems * skip dependencies on some Python versions -- very helpful for libraries that are part of the Python 3 stdlib (like statistics) => see Environment markers for conditions on dependencies For perf, I'm using this setup() option in setup.py: 'install_requires': ["statistics; python_version < '3.4'", "six"], > One of the arguments against splitting the stdlib was that corporate > approval for software is often hard to obtain, and it's much easier to > say "I need approval to use Python, exactly as distributed by > python.org" than "I need approval to use Python-core plus these five > Python-stdlib sections". *If* someone wants to split the stdlib into smaller parts and/or move it out of CPython, they should start writing a PEP now. Or they will have to reply to the same questions over and over ;-) Is there a volunteer to write such a PEP? Victor From barry at python.org Tue Jul 5 09:56:28 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 09:56:28 -0400 Subject: [Python-Dev] Request for CPython 3.5.3 release In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <20160705095628.2596268b.barry@wooz.org> On Jul 03, 2016, at 01:17 AM, Ludovic Gasc wrote: >If 3.5.2.1 or 3.5.3 are impossible to release before december, what are the >alternative solutions for AsyncIO users ? >1. Use 3.5.1 and hope that Linux distributions won't use 3.5.2 ? Matthias just uploaded a 3.5.2-2 to Debian unstable, which will also soon make its way to Ubuntu 16.10: https://launchpad.net/ubuntu/+source/python3.5/3.5.2-2 Ubuntu 16.04 LTS currently still has 3.5.1. 
Cheers, -Barry From lkb.teichmann at gmail.com Tue Jul 5 11:54:44 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Tue, 5 Jul 2016 17:54:44 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: > This is an area of exceeding subtlety (and also not very well > documented/specified, probably). I'd worry that changing anything here > might break some code. When a metaclass overrides neither __init__ nor > __new__, keyword args will not work because type.__init__ forbids > them. However when a metaclass overrides them and calls them using > super(), it's quite possible that someone ended up calling > super().__init__() with three positional args but super().__new__() > with keyword args, since the call sites are distinct (in the overrides > for __init__ and __new__ respectively). > > What's your argument for changing this, apart from a desire for more regularity? The implementation gets much simpler if __new__ doesn't take keyword arguments. It's simply that if it does, I have to filter out __new__'s three arguments. That's easily done in Python, unfortunately not so much in C. So we have two options: either type.__new__ is limited to accepting positional arguments only, possibly breaking some code, which could however be fixed easily. This leads to a pretty simple implementation: pass keyword arguments on to __init_subclass__, that's it. The other option is: filter out name, bases and dict from the keyword arguments. If people think that backwards compatibility is that important, I could do that. But that leaves quite a bit of code in places where there is already a lot of complicated code. Nick proposed a compromise: just don't filter for name, bases and dict, and pass them over to __init_subclass__. Then the default implementation of __init_subclass__ must support those three keyword arguments and do nothing with them. 
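[The mechanism under discussion, extra keywords in the class statement being routed to __init_subclass__ rather than to type.__new__, can be sketched as follows. This registry example is illustrative only, not code from the PEP, and requires an interpreter implementing PEP 487, i.e. Python 3.6+:]

```python
class PluginBase:
    registry = {}

    # __init_subclass__ is implicitly a classmethod; extra class keywords
    # (here: key=...) arrive as its arguments instead of reaching
    # type.__new__. Unconsumed keywords are passed up the MRO.
    def __init_subclass__(cls, key=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if key is not None:
            PluginBase.registry[key] = cls

class JsonPlugin(PluginBase, key="json"):
    pass

print(PluginBase.registry)  # the subclass registered itself under "json"
```

[This shows why the filtering question matters: if name, bases and dict were also forwarded, every __init_subclass__ implementation would have to accept and ignore those three keywords.]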
I'm fine with all three solutions, although I have a preference for the first. I think passing keyword arguments to type.__new__ is already really rare and if it does exist, it's super easy to fix. > I'm confused. In the above example it would seem that the keyword args > {'a': 1, 'b': 2} are passed right on to super().__init_subclass__(). > Do you mean that it ignores all keyword args? Or that it has no > positional args? (Both of which would be consistent with the example.) The example is just wrong. I'll fix it. > Can you state exactly at which point during class initialization > __init_class__() is called? (Surely by now, having implemented it, you > know exactly where. :-) Further down in the PEP I give the exact > [This is as far as I got reviewing when the weekend activities > interrupted me. In the light of ongoing discussion I'm posting this > now -- I'll continue later.] I hope you had a good weekend not thinking too much about metaclasses... Greetings Martin From steve.dower at python.org Tue Jul 5 12:39:21 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Jul 2016 09:39:21 -0700 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: References: Message-ID: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> On 04Jul2016 2241, Steve Holden wrote: > Hi Peter, > > While the humble webmasters can do little about this it's possible the > developers can, so I am forwarding your email to their mailing list. > > regards > Steve > > Steve Holden > > On Tue, Jul 5, 2016 at 3:30 AM, Peter via Webmaster > > wrote: > > Hi > I'm a heavy user of Python on Windows, am a Basic PSF member and > have contributed to core Python. > The Python 2.7.12 Windows installer download is being marked as > untrusted by Norton Internet Security. I've been on chat with > Symantec, and they've said that I can't do anything about that > rating, but that the site owner can. 
> I've been pointed to: > https://safeweb.norton.com/help/site_owners > Interestingly, the 3.5.2 download is flagged as safe. > Hoping to get more Python out to users! > Thanks > Peter Peter, can you provide the exact URL that safeweb is complaining about? I tried a few at https://safeweb.norton.com/ and they all showed up as clean. Also please clarify whether this is what you mean. It's not entirely clear whether the download is being scanned or the reputation of the URL is in question. Cheers, Steve From steve at pearwood.info Tue Jul 5 13:02:06 2016 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 6 Jul 2016 03:02:06 +1000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> Message-ID: <20160705170205.GS27919@ando.pearwood.info> On Tue, Jul 05, 2016 at 09:53:24AM +0200, Petr Viktorin wrote: > While we're on the subject, I'd like to offer another point for > consideration: not all implementations of Python can provide the full > stdlib, and not everyone wants the full stdlib. > > For MicroPython, most of Python's batteries are too heavy. Tkinter on > Android is probably not useful enough for people to port it. Weakref > can't be emulated nicely in Javascript. > If packages had a way to opt-out of needing the whole standard library, > and instead specify the stdlib subset they need, answering questions > like "will this run on my phone?" and "what piece of the stdlib do we > want to port next?" would be easier. I don't know that they will be easier. That seems pretty counter- intuitive to me. At the moment, answering these questions are really easy if you use nothing but the std lib: the answer is, if you can install Python, it will work. As soon as you start using non-stdlib modules, the question becomes: - have you installed Python? have you installed module X? and module Y? and module Z? 
do they match the version of the interpreter? where did you get them from? are you missing dependencies? I can't tell you how much trouble I've had trying to get tkinter working on some Fedora systems because they split tkinter into a separate package. Sure, if I had *known* that it was split into a separate package, then just running `yum install packagename` would (probably!) have worked, but how was I supposed to know? It's not documented anywhere that I could find. I ended up avoiding the Fedora packages and installing from source. I think there comes a time in every successful organisation when it risks losing sight of what made it successful in the first place. (And, yes, I'm aware that the *other* way that successful organisations lose their way is by failing to change with the times.) Yes, we're all probably sick and tired of hearing all the Chicken Little scare stories about how the GIL is killing Python, how everyone is abandoning Python for Ruby/Javascript/Go/Swift, how Python 3 is killing Python, etc. But sometimes the sky does fall. For many people, Python's single biggest advantage until now has been "batteries included", and I think that changing that is risky and shouldn't be done lightly. It's easy to say "just use pip", but if you've ever been stuck behind a corporate firewall where pip doesn't work, or where downloading and installing software is a firing offence, then you might think differently. If you've had to teach a room full of 14 year olds, and you spend the entire lesson just helping them to install one library, you might have a different view. The other extreme is Javascript/Node.js, where the "just use pip" (or npm in this case) philosophy has been taken to such extremes that one developer practically brought down the entire Node.js ecosystem by withdrawing an eleven line module, left-pad, in a fit of pique. 
Being open source, the damage was routed around quite quickly, but still, I think it's a good cautionary example of how a technological advance can transform a programming culture to the worse. -- Steve From p.f.moore at gmail.com Tue Jul 5 13:21:25 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 5 Jul 2016 18:21:25 +0100 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: <20160705170205.GS27919@ando.pearwood.info> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On 5 July 2016 at 18:02, Steven D'Aprano wrote: > Yes, we're all probably sick and tired of hearing all the Chicken Little > scare stories about how the GIL is killing Python, how everyone is > abandoning Python for Ruby/Javascript/Go/Swift, how Python 3 is killing > Python, etc. But sometimes the sky does fall. For many people, Python's > single biggest advantage until now has been "batteries included", and I > think that changing that is risky and shouldn't be done lightly. +1 To be fair, I don't think anyone is looking at this "lightly", but I do think it's easy to underestimate the value of "batteries included", and the people it's *most* useful for are precisely the people who aren't involved in any of the Python mailing lists. They just want to get on with things, and "it came with the language" is a *huge* selling point. Internal changes in how we manage the stdlib modules are fine. But changing what the end user sees as "python" is a much bigger deal. 
Paul From barry at python.org Tue Jul 5 13:28:29 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 13:28:29 -0400 Subject: [Python-Dev] release cadence In-Reply-To: <57799DF2.8000803@python.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <57799DF2.8000803@python.org> Message-ID: <20160705132829.23a4deda.barry@wooz.org> On Jul 03, 2016, at 04:21 PM, Steve Dower wrote: >A rough count of how I'd break up the current 3.5 Lib folder (which I >happened to have handy) suggests no more than 50 repos. A concern with a highly split stdlib is local testing. I'm not worried about pull request testing, or after-the-fact buildbot testing since I'd have to assume that we'd make sure the fully integrated sumo package was tested in both environments. But what about local testing? Let's say you change something in one module that causes a regression in a different module in a different repo. If you've only got a small subset checked out, you might never notice that before you PR'd your change. And then once the test fails, how easy will it be for you to recreate the tested environment locally so that you could debug your regression? I'm sure it's doable, but let's not lose sight of that if this path is taken. (Personally, I'm +0 on splitting out the stdlib and -1 on micro-splitting it.) 
Cheers, -Barry From steve.dower at python.org Tue Jul 5 13:32:55 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Jul 2016 10:32:55 -0700 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: <02489d8f-4fc9-4478-5b99-4b367d20db3b@python.org> On 05Jul2016 1021, Paul Moore wrote: > On 5 July 2016 at 18:02, Steven D'Aprano wrote: >> Yes, we're all probably sick and tired of hearing all the Chicken Little >> scare stories about how the GIL is killing Python, how everyone is >> abandoning Python for Ruby/Javascript/Go/Swift, how Python 3 is killing >> Python, etc. But sometimes the sky does fall. For many people, Python's >> single biggest advantage until now has been "batteries included", and I >> think that changing that is risky and shouldn't be done lightly. > > +1 > > To be fair, I don't think anyone is looking at this "lightly", but I > do think it's easy to underestimate the value of "batteries included", > and the people it's *most* useful for are precisely the people who > aren't involved in any of the Python mailing lists. They just want to > get on with things, and "it came with the language" is a *huge* > selling point. > > Internal changes in how we manage the stdlib modules are fine. But > changing what the end user sees as "python" is a much bigger deal. Also +1 on this - a default install of Python should continue to include everything it currently does. My interest in changing anything at all is to provide options for end-users/distributors to either reduce the footprint (which they already do), to more quickly update specific modules, or perhaps long-term to make users' code less tied to a particular Python version (instead being tied to, for example, a specific asyncio version that can be brought into a range of supported Python versions). Batteries included is a big deal. 
Cheers, Steve From barry at python.org Tue Jul 5 13:34:29 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 13:34:29 -0400 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> Message-ID: <20160705133429.73b14a47.barry@wooz.org> On Jul 05, 2016, at 11:19 AM, Victor Stinner wrote: >pip supports a requirements.txt file which is a nice way to declare >dependencies. You can: > >* specify the minimum library version >* make some library specific to some operating systems >* skip dependencies on some Python versions -- very helpful for >libraries that are part of the Python 3 stdlib (like statistics) Interestingly enough, I'm working on a project where we *have* to use packages from the Ubuntu archive, even if there are different (or differently fixed) versions on PyPI. I don't think there's a way to map a requirements.txt into distro package versions and do the install from the distro package manager, but that might be useful. Cheers, -Barry From steve.dower at python.org Tue Jul 5 13:38:21 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Jul 2016 10:38:21 -0700 Subject: [Python-Dev] release cadence In-Reply-To: <20160705132829.23a4deda.barry@wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <57799DF2.8000803@python.org> <20160705132829.23a4deda.barry@wooz.org> Message-ID: <9604aaa1-1cf2-f1f6-4daf-556c7514b1c9@python.org> On 05Jul2016 1028, Barry Warsaw wrote: > On Jul 03, 2016, at 04:21 PM, Steve Dower wrote: > >> A rough count of how I'd break up the current 3.5 Lib folder (which I >> happened to have handy) suggests no more than 50 repos. > > A concern with a highly split stdlib is local testing. 
I'm not worried about > pull request testing, or after-the-fact buildbot testing since I'd have to > assume that we'd make sure the fully integrated sumo package was tested in > both environments. > > But what about local testing? Let's say you change something in one module > that causes a regression in a different module in a different repo. If you've > only got a small subset checked out, you might never notice that before you > PR'd your change. And then once the test fails, how easy will it be for you > to recreate the tested environment locally so that you could debug your > regression? > > I'm sure it's doable, but let's not lose sight of that if this path is taken. My hope is that it would be essentially a "pip freeze"/"pip install -r ..." (or equivalent with whatever tool is used/created for managing the stdlib). Perhaps using VCS URIs rather than version numbers? That is, the test run would dump a list of exactly which stdlib versions it's using, so that when you review the results it is possible to recreate it. But the point is well taken. I'm very hesitant about splitting out packages that are common dependencies of other parts of the stdlib, but there are plenty of leaf nodes in there too. Creating a complex dependency graph would be a disaster. Cheers, Steve > (Personally, I'm +0 on splitting out the stdlib and -1 on micro-splitting it.) 
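[Editorial note: Victor's requirements.txt feature list quoted earlier in the thread, combined with the VCS-URI pinning idea here, could be sketched as a single hypothetical manifest. Every package name, version, and URL below is invented purely for illustration; none of these stdlib modules actually exist on PyPI.]

```
# Hypothetical stdlib manifest in requirements.txt syntax; all names,
# versions, and URLs are invented for illustration.
statistics>=1.0                          # minimum library version
winsound>=2.0; sys_platform == "win32"   # limit to some operating systems
typing>=3.5; python_version < "3.5"      # skip where the stdlib already has it
# an exact pin via a VCS URI, as a test run's "freeze" output might record:
git+https://github.com/python/email-pkg@0abc123#egg=email
```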
> > Cheers, > -Barry From barry at python.org Tue Jul 5 13:44:49 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 13:44:49 -0400 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> Message-ID: <20160705134449.302fb580.barry@wooz.org> On Jul 04, 2016, at 10:31 AM, Nick Coghlan wrote: >While we liked the "consistent calendar cadence that is some multiple >of 6 months" idea, several of us thought 12 months was way too short >as it makes for too many entries in third party support matrices. 18 months for a major release cadence still seems right to me. Downstreams and third-parties often have to go through *a lot* of work to ensure compatibility, and try as we might, every Python release breaks *something*. Major version releases trigger a huge cascade of other work for lots of other people, and I don't think shortening that would be for the overall community good. It just feels like we'd always be playing catch up. Different downstreams have different cadences. I can only speak for Debian, which has a release-when-ready policy, and Ubuntu, which has strictly timed releases. When the Python release aligns nicely with Ubuntu's LTS releases, we can usually manage the transition fairly well because we can allocate resources way ahead of time. (I'm less concerned about Ubuntu's mid-LTS 6 month releases.) For example, 3.6 final will come out in December 2016, so it'll be past our current 16.10 Ubuntu release. We've pretty much decided to carry Python 3.5 through until 17.04, and that'll give us a good year to make 18.04 LTS have a solid Python 3.6 ecosystem. Projecting ahead, it probably means 3.7 in mid-2018, which is after the Ubuntu 18.04 LTS release, so we'll only do one major transition before the next LTS. From my perspective, that feels about right. 
Cheers, -Barry From barry at python.org Tue Jul 5 13:47:13 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 13:47:13 -0400 Subject: [Python-Dev] release cadence In-Reply-To: <9604aaa1-1cf2-f1f6-4daf-556c7514b1c9@python.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <57799DF2.8000803@python.org> <20160705132829.23a4deda.barry@wooz.org> <9604aaa1-1cf2-f1f6-4daf-556c7514b1c9@python.org> Message-ID: <20160705134713.0b1b493e@subdivisions.wooz.org> On Jul 05, 2016, at 10:38 AM, Steve Dower wrote: >My hope is that it would be essentially a "pip freeze"/"pip install -r ..." >(or equivalent with whatever tool is used/created for managing the >stdlib). Perhaps using VCS URIs rather than version numbers? > >That is, the test run would dump a list of exactly which stdlib versions it's >using, so that when you review the results it is possible to recreate it. I think you'd have to have vcs checkouts though, because you will often need to fix or change something in one of those other library pieces. The other complication of course is that now you'll have two dependent PRs with reviews in two different repos. >But the point is well taken. I'm very hesitant about splitting out packages >that are common dependencies of other parts of the stdlib, but there are >plenty of leaf nodes in there too. Creating a complex dependency graph would >be a disaster. Yeah. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From encukou at gmail.com Tue Jul 5 14:01:43 2016 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 5 Jul 2016 20:01:43 +0200 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: <20160705170205.GS27919@ando.pearwood.info> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: (sorry if you get this twice, I dropped python-dev by mistake) On 07/05/2016 07:02 PM, Steven D'Aprano wrote: > On Tue, Jul 05, 2016 at 09:53:24AM +0200, Petr Viktorin wrote: > >> While we're on the subject, I'd like to offer another point for >> consideration: not all implementations of Python can provide the full >> stdlib, and not everyone wants the full stdlib. >> >> For MicroPython, most of Python's batteries are too heavy. Tkinter on >> Android is probably not useful enough for people to port it. Weakref >> can't be emulated nicely in Javascript. >> If packages had a way to opt-out of needing the whole standard library, >> and instead specify the stdlib subset they need, answering questions >> like "will this run on my phone?" and "what piece of the stdlib do we >> want to port next?" would be easier. > > I don't know that they will be easier. That seems pretty counter- > intuitive to me. At the moment, answering these questions are really > easy if you use nothing but the std lib: the answer is, if you can > install Python, it will work. As soon as you start using non-stdlib > modules, the question becomes: > > - have you installed Python? have you installed module X? and module Y? > and module Z? do they match the version of the interpreter? where > did you get them from? are you missing dependencies? > > I can't tell you how much trouble I've had trying to get tkinter working > on some Fedora systems because they split tkinter into a separate > package. 
Sure, if I had *known* that it was split into a separate > package, then just running `yum install packagename` would (probably!) > have worked, but how was I supposed to know? It's not documented > anywhere that I could find. I ended up avoiding the Fedora packages and > installing from source. Ah, but successfully installing from source doesn't always give you the full stdlib either: Python build finished successfully! The necessary bits to build these optional modules were not found: _sqlite3 To find the necessary bits, look in setup.py in detect_modules() for the module's name. I have missed that message before, and wondered pretty hard why the module wasn't there. In the tkinter case, compiling from source is easy on a developer's computer, but doing that on a headless server brings in devel files for the entire graphical environment. Are you saying Python on servers should have a way to do turtle graphics, otherwise it's not Python? > I think there comes a time in every successful organisation that they > risk losing sight of what made them successful in the first place. (And, > yes, I'm aware that the *other* way that successful organisations lose > their way is by failing to change with the times.) > > Yes, we're all probably sick and tired of hearing all the Chicken Little > scare stories about how the GIL is killing Python, how everyone is > abandoning Python for Ruby/Javascript/Go/Swift, how Python 3 is killing > Python, etc. But sometimes the sky does fall. For many people, Python's > single biggest advantage until now has been "batteries included", and I > think that changing that is risky and shouldn't be done lightly. > > It's easy to say "just use pip", but if you've ever been stuck behind a > corporate firewall where pip doesn't work, or where downloading and > installing software is a firing offence, then you might think > differently. 
If you've had to teach a room full of 14 year olds, and you > spend the entire lesson just helping them to install one library, you > might have a different view. That is why I'm *not* arguing for shipping an incomplete stdlib in official Python releases. I fully agree that including batteries is great -- I'm just not a fan of welding the battery to the product. There are people who want to cut out what they don't need -- to build self-contained applications (pyinstaller, Python for Android), or to eliminate unnecessary dependencies (python3-tkinter). And they will do it with CPython's blessing or without. I don't think Python can move to the mobile world of self-contained apps without this problem being solved, one way or another. It would be much better for the ecosystem if CPython acknowledges this and sets some rules (like "here's how you can do it, but don't call the result an unqualified Python"). > The other extreme is Javascript/Node.js, where the "just use pip" (or > npm in this case) philosophy has been taken to such extremes that one > developer practically brought down the entire Node.js ecosystem by > withdrawing an eleven line module, left-pad, in a fit of pique. > > Being open source, the damage was routed around quite quickly, but > still, I think it's a good cautionary example of how a technological > advance can transform a programming culture for the worse. I don't understand the analogy. Should the eleven-line module have been in Node's stdlib? Outside of stdlib, people are doing this. 
From ethan at stoneleaf.us Tue Jul 5 14:30:06 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 05 Jul 2016 11:30:06 -0700 Subject: [Python-Dev] release cadence In-Reply-To: <20160705134449.302fb580.barry@wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: <577BFCAE.9020604@stoneleaf.us> On 07/05/2016 10:44 AM, Barry Warsaw wrote: > On Jul 04, 2016, at 10:31 AM, Nick Coghlan wrote: >> While we liked the "consistent calendar cadence that is some multiple >> of 6 months" idea, several of us thought 12 months was way too short >> as it makes for too many entries in third party support matrices. > > 18 months for a major release cadence still seems right to me. Downstreams > and third-parties often have to go through *a lot* of work to ensure > compatibility, and try as we might, every Python release breaks *something*. > Major version releases trigger a huge cascade of other work for lots of other > people, and I don't think shortening that would be for the overall community > good. It just feels like we'd always be playing catch up. +1 from me as well. Rapid major releases are just a huge headache. The nice thing about a .6 or .7 minor release is that we get closer to no bugs with each one. 
-- ~Ethan~ From brett at python.org Tue Jul 5 15:11:46 2016 From: brett at python.org (Brett Cannon) Date: Tue, 05 Jul 2016 19:11:46 +0000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: <20160705134449.302fb580.barry@wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: On Tue, 5 Jul 2016 at 10:45 Barry Warsaw wrote: > On Jul 04, 2016, at 10:31 AM, Nick Coghlan wrote: > > >While we liked the "consistent calendar cadence that is some multiple > >of 6 months" idea, several of us thought 12 months was way too short > >as it makes for too many entries in third party support matrices. > > 18 months for a major release cadence still seems right to me. Downstreams > and third-parties often have to go through *a lot* of work to ensure > compatibility, and try as we might, every Python release breaks > *something*. > Major version releases trigger a huge cascade of other work for lots of > other > people, and I don't think shortening that would be for the overall > community > good. It just feels like we'd always be playing catch up. > Sticking w/ 18 months is also fine, but then I would like to discuss choosing what months we try to release to get into a date-based release cadence so we know that every e.g. December and June are when releases typically happen thanks to our 6 month bug-fix release cadence. This has the nice benefit of all of us being generally aware of when a bug-fix release is coming up instead of having to check the PEP or go through our mail archive to find out what month a bug-fix is going to get cut (and also something the community can basically count on). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Tue Jul 5 16:02:03 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 5 Jul 2016 21:02:03 +0100 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On 5 July 2016 at 19:01, Petr Viktorin wrote: > There are people who want to cut out what they don't need -- to build > self-contained applications (pyinstaller, Python for Android), or to > eliminate unnecessary dependencies (python3-tkinter). And they will do > it with CPython's blessing or without. [...] > It would be much better for the ecosystem if CPython acknowledges this > and sets some rules (like "here's how you can do it, but don't call the > result an unqualified Python"). That doesn't sound unreasonable in principle. As a baseline, I guess the current policy is essentially: """ You can build your own Python installation with whatever parts of the stdlib omitted that you please. However, if you do this, you accept responsibility for any consequences, in terms of 3rd-party modules not working, or even stdlib breakage (for example, we don't guarantee that parts of the stdlib may not rely on other parts). """ That's pretty simple, both to state and to adhere to. And it's basically the current reality. What I'm not clear about is what *additional* guarantees people want to make, and how we'd make them. 
Maybe the suggestion is to provide better tools for people wanting to *build* such stripped down versions? That might be a possibility, I guess. I don't know much about how people build their own copies of Python to be able to comment. So I guess my question is, what is the actual proposal here? People seem to have concerns over things that aren't actually being proposed - but without knowing what *is* being proposed, it's hard to avoid that. Paul From brett at python.org Tue Jul 5 17:04:02 2016 From: brett at python.org (Brett Cannon) Date: Tue, 05 Jul 2016 21:04:02 +0000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On Tue, 5 Jul 2016 at 13:02 Paul Moore wrote: > On 5 July 2016 at 19:01, Petr Viktorin wrote: > > There are people who want to cut out what they don't need ? to build > > self-contained applications (pyinstaller, Python for Android), or to > > eliminate unnecessary dependencies (python3-tkinter). And they will do > > it with CPython's blessing or without. > [...] > > It would be much better for the ecosystem if CPython acknowledges this > > and sets some rules (like "here's how you can do it, but don't call the > > result an unqualified Python"). > > That doesn't sound unreasonable in principle. As a baseline, I guess > the current policy is essentially: > > """ > You can build your own Python installation with whatever parts of the > stdlib omitted that you please. However, if you do this, you accept > responsibility for any consequences, in terms of 3rd-party modules not > working, or even stdlib breakage (for example, we don't guarantee that > parts of the stdlib may not rely on other parts). > """ > > That's pretty simple, both to state and to adhere to. And it's > basically the current reality. What I'm not clear about is what > *additional* guarantees people want to make, and how we'd make them. 
> First of all, Python's packaging ecosystem has no way to express "this > package won't work if pathlib isn't present in your stdlib". It seems > to me that without something like that, it's pretty hard to support > anything better than the current position with regard to 3rd party > modules. Documenting stdlib inter-dependencies may be possible, but I > wouldn't like to make "X doesn't depend on Y" a guarantee that's > subject to backward compatibility rules. > > Maybe the suggestion is to provide better tools for people wanting to > *build* such stripped down versions? That might be a possibility, I > guess. I don't know much about how people build their own copies of > Python to be able to comment. > > So I guess my question is, what is the actual proposal here? People > seem to have concerns over things that aren't actually being proposed > - but without knowing what *is* being proposed, it's hard to avoid > that. > Realizing that all of these are just proposals with no solid plan behind them, they are all predicated on moving to GitHub, and none of these are directly promoting releasing every module in the stdlib on PyPI as a stand-alone package with its own versioning, they are: 1. Break the stdlib out from CPython and have it be a stand-alone repo 2. Break the stdlib up into a bunch of independent repos that when viewed together make up the stdlib (Steve Dower did some back-of-envelope grouping and pegged the # of repos at ~50) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pacqa100 at yahoo.com.au Tue Jul 5 19:20:18 2016 From: pacqa100 at yahoo.com.au (Peter) Date: Wed, 6 Jul 2016 09:20:18 +1000 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> References: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> Message-ID: (Screen caps linked) On 6/07/2016 2:39 AM, Steve Dower wrote: > On 04Jul2016 2241, Steve Holden wrote: >> Hi Peter, >> >> While the humble webmasters can do little about this it's possible the >> developers can, so I am forwarding your email to their mailing list. >> >> regards >> Steve >> >> Steve Holden >> >> On Tue, Jul 5, 2016 at 3:30 AM, Peter via Webmaster >> > wrote: >> >> Hi >> I'm a heavy user of Python on Windows, am a Basic PSF member and >> have contributed to core Python. >> The Python 2.7.12 Windows installer download is being marked as >> untrusted by Norton Internet Security. I've been on chat with >> Symantec, and they've said that I can't do anything about that >> rating, but that the site owner can. >> I've been pointed to: >> https://safeweb.norton.com/help/site_owners >> Interestingly, the 3.5.2 download is flagged as safe. >> Hoping to get more Python out to users! >> Thanks >> Peter > > Peter, can you provide the exact URL that safeweb is complaining > about? I tried a few at https://safeweb.norton.com/ and they all > showed up as clean. > > Also please clarify whether this is what you mean. It's not entirely > clear whether the download is being scanned or the reputation of the > URL is in question. > > Cheers, > Steve > > -------------- next part -------------- A non-text attachment was scrubbed... Name: Python352FileInsight.PNG Type: image/png Size: 25939 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Python2712FileInsight.PNG Type: image/png Size: 24814 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Python352IsSafe.PNG Type: image/png Size: 117351 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Python2712NeedsAttention.PNG Type: image/png Size: 226220 bytes Desc: not available URL: From pacqa100 at yahoo.com.au Tue Jul 5 19:15:14 2016 From: pacqa100 at yahoo.com.au (Peter) Date: Wed, 6 Jul 2016 09:15:14 +1000 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> References: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> Message-ID: (Response at bottom) On 6/07/2016 2:39 AM, Steve Dower wrote: > On 04Jul2016 2241, Steve Holden wrote: >> Hi Peter, >> >> While the humble webmasters can do little about this it's possible the >> developers can, so I am forwarding your email to their mailing list. >> >> regards >> Steve >> >> Steve Holden >> >> On Tue, Jul 5, 2016 at 3:30 AM, Peter via Webmaster >> > wrote: >> >> Hi >> I'm a heavy user of Python on Windows, am a Basic PSF member and >> have contributed to core Python. >> The Python 2.7.12 Windows installer download is being marked as >> untrusted by Norton Internet Security. I've been on chat with >> Symantec, and they've said that I can't do anything about that >> rating, but that the site owner can. >> I've been pointed to: >> https://safeweb.norton.com/help/site_owners >> Interestingly, the 3.5.2 download is flagged as safe. >> Hoping to get more Python out to users! >> Thanks >> Peter > > Peter, can you provide the exact URL that safeweb is complaining > about? I tried a few at https://safeweb.norton.com/ and they all > showed up as clean. > > Also please clarify whether this is what you mean. 
It's not entirely > clear whether the download is being scanned or the reputation of the > URL is in question. > > Cheers, > Steve Hi Steve It's not the URL it is complaining of, rather On Windows, Norton Internet Security virus checks all downloads. One of the names they give to the result of their scanning is 'File Insight'. From what I can tell, it uses a few checks: - virus scanning using known signatures - observable malicious behaviour - how well known or used it is across other users of Nortons. It seems that the last of these is causing the warning. Obviously this is not a problem for me, but may be concerning for less tech-savvy Windows users that get the warning. There isn't a way within Nortons to 'report for clearance' of the file. And my chat with an (entry-level) Norton representative got nowhere. I'll email screen captures in the next email. Let me know if they don't come through and I'll paste them somewhere. Let me know if I can give any more information. Peter From steve.dower at python.org Tue Jul 5 19:48:43 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Jul 2016 16:48:43 -0700 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On 05Jul2016 1404, Brett Cannon wrote: > Realizing that all of these are just proposals with no solid plan behind > them, they are all predicated on moving to GitHub, and none of these are > directly promoting releasing every module in the stdlib on PyPI as a > stand-alone package with its own versioning, they are: > > 1. Break the stdlib out from CPython and have it be a stand-alone repo > 2. 
Break the stdlib up into a bunch of independent repos that when > viewed together make up the stdlib (Steve Dower did some > back-of-envelope grouping and pegged the # of repos at ~50) Actually, I was meaning to directly promote releasing each module on PyPI as a standalone package :) "Each module" here is at most the ~50 I counted (though likely many many fewer), which sounds intimidating until you realise that there are virtually no cross-dependencies between them and most would only depend on the core stdlib (which would *not* be a package on PyPI - you get most of Lib/*.py with the core install and it's fixed, while much of Lib/**/* is separately updateable). Take email as an example. Its external dependencies (found by grep for "import") are abc, base64, calendar, datetime, functools, imghdr, os, quopri, sndhdr, socket, time, urllib.parse, uu, warnings. IMHO, only urllib has the slightest chance of being non-fixed here (remembering that "non-fixed" means upgradeable from PyPI, not that it's missing). The circular references (email<->urllib) would probably need to be resolved, but I think many would see that as a good cleanliness step anyway. A quick glance suggests that both email and urllib are only using each other's public APIs, which means that any version of the other package is sufficient. An official release picks the two latest designated stable releases (this is where I'm imagining a requirements.txt-like file in the core repository) and bundles them both, and then users can update either package on its own. If email makes a change that requires a particular change to urllib, it adds a version constraint, which will force both users and the core requirements.txt file to update both together (this is probably why the circular references would need breaking...) Done with care and even incrementally (e.g. just the provisional modules at first), I don't think this is that scary a proposition. 
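[Editorial note: the grep-based dependency scan Steve describes can be approximated a little more precisely with the stdlib ast module. This is only an illustrative sketch of the idea -- the helper name is invented, and it deliberately skips relative imports rather than reproducing the exact grep above.]

```python
import ast

def top_level_imports(source):
    """Return the set of top-level package names a module's source imports."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # level > 0 means a relative import within the same package
            if node.module is not None and node.level == 0:
                found.add(node.module.split(".")[0])
    return found

sample = "import email.utils\nfrom urllib.parse import quote\nimport os\n"
print(sorted(top_level_imports(sample)))  # ['email', 'os', 'urllib']
```

Run over every file under Lib/email/, something like this would surface the abc/base64/.../urllib.parse list quoted above, including the email<->urllib cycle.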
It's not strictly predicated on moving to github or to many repositories, but if we did decide to make a drastic change to the repository layout (which I think this requires at a minimum, at least for our own sanity), doing it while migrating VCS at least keeps all the disruption together. Cheers, Steve From steve.dower at python.org Tue Jul 5 19:51:31 2016 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Jul 2016 16:51:31 -0700 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: References: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> Message-ID: <21b9fd32-b720-2333-1182-578691a1eba9@python.org> On 05Jul2016 1615, Peter wrote: > It's not the URL it is complaining of, rather > On Windows, Norton Internet Security virus checks all downloads. One of > the names they give to the result of their scanning is 'File Insight'. > From what I can tell, it uses a few checks: > - virus scanning using known signatures > - observable malicious behaviour > - how well known or used it is across other users of Nortons. > It seems that the last of these is a causing the warning. > Obviously this is not a problem for me, but may be concerning for less > tech savvy Windows users that get the warning. > There isn't a way within Nortons to 'report for clearance' of the file. > And my chat with a (entry level) Norton representative got nowhere. > I'll email screen captures in the next email. Let me know if they don't > come through and I'll paste them somewhere. > Let me know if I can give any more information. I don't think there's anything we can do about this, other than convince more users of Norton Internet Security to use Python 2.7 (apparently Python 3.5 is more popular with that demographic :) ) The installer is signed with the PSF's certificate, which keeps Windows Smartscreen happy and is the only way we can indicate that it's trustworthy. If Norton requires different criteria then that is on them and not something we can fix. 
Cheers, Steve From ncoghlan at gmail.com Tue Jul 5 20:02:06 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jul 2016 10:02:06 +1000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: <20160705134449.302fb580.barry@wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: On 6 July 2016 at 03:44, Barry Warsaw wrote: > For example, 3.6 final will come out in December 2016, so it'll be past our > current 16.10 Ubuntu release. We've pretty much decided to carry Python 3.5 > through until 17.04, and that'll give us a good year to make 18.04 LTS have a > solid Python 3.6 ecosystem. This aligns pretty well with Fedora's plans - the typical Fedora release dates are May & November, so we will stick with 3.5 for this year's F25 release, while the Fedora 26 Rawhide branch is expected to switch to 3.6 shortly after the first 3.6 beta is released in September. The results in Rawhide should thus help with upstream 3.6 beta testing, with the full release of F26 happening in May 2017 or so. > Projecting ahead, it probably means 3.7 in mid-2018, which is after the Ubuntu > 18.04 LTS release, so we'll only do one major transition before the next LTS. > From my perspective, that feels about right. Likewise - 24 months is a bit too slow in getting features out, 12 months expands the community version support matrix too much, while 18 months means that even folks supporting 5* year old LTS Linux releases will typically only be a couple of releases behind the latest version. Cheers, Nick. * For folks that don't closely follow the way enterprise Linux distros work, the '5' there isn't arbitrary - it's the lifecycle of Ubuntu LTS releases, and roughly the length of the "Production 1" phase of RHEL releases (where new features may still be added in point releases). 
Beyond the 5 year mark, I don't think it's particularly reasonable for people to expect free community support, as even Red Hat stops backporting anything other than bug fixes and hardware driver updates at that point. Regardless of your choice of LTS platform, newer versions will be available by the time your current one is that old, so "I don't want to upgrade" is a privilege people can reasonably be expected to pay for. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pacqa100 at yahoo.com.au Tue Jul 5 19:55:29 2016 From: pacqa100 at yahoo.com.au (Peter) Date: Wed, 6 Jul 2016 09:55:29 +1000 Subject: [Python-Dev] [Webmaster] Unsafe listing by Norton's "File Insight" In-Reply-To: <21b9fd32-b720-2333-1182-578691a1eba9@python.org> References: <681835a1-5fd1-3e5e-75d3-c1f6233a12b6@python.org> <21b9fd32-b720-2333-1182-578691a1eba9@python.org> Message-ID: <778040d1-9be8-7ae0-82c3-9fd050295479@yahoo.com.au> On 6/07/2016 9:51 AM, Steve Dower wrote: > On 05Jul2016 1615, Peter wrote: >> It's not the URL it is complaining of, rather >> On Windows, Norton Internet Security virus checks all downloads. One of >> the names they give to the result of their scanning is 'File Insight'. >> From what I can tell, it uses a few checks: >> - virus scanning using known signatures >> - observable malicious behaviour >> - how well known or used it is across other users of Nortons. >> It seems that the last of these is causing the warning. >> Obviously this is not a problem for me, but may be concerning for less >> tech savvy Windows users that get the warning. >> There isn't a way within Nortons to 'report for clearance' of the file. >> And my chat with a (entry level) Norton representative got nowhere. >> I'll email screen captures in the next email. Let me know if they don't >> come through and I'll paste them somewhere. >> Let me know if I can give any more information.
> > I don't think there's anything we can do about this, other than > convince more users of Norton Internet Security to use Python 2.7 > (apparently Python 3.5 is more popular with that demographic :) ) > > The installer is signed with the PSF's certificate, which keeps > Windows Smartscreen happy and is the only way we can indicate that > it's trustworthy. If Norton requires different criteria then that is > on them and not something we can fix. > > Cheers, > Steve I suspect you're right. It's a flawed model that they're using, and they are quite impervious to suggestions. Glad 3.5 is winning :-) Keep up the good work. Peter From ncoghlan at gmail.com Tue Jul 5 20:55:18 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jul 2016 10:55:18 +1000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: On 6 July 2016 at 05:11, Brett Cannon wrote: > Sticking w/ 18 months is also fine, but then I would like to discuss > choosing what months we try to release to get into a date-based release > cadence so we know that every e.g. December and June are when releases > typically happen thanks to our 6 month bug-fix release cadence. This has the > nice benefit of all of us being generally aware of when a bug-fix release is > coming up instead of having to check the PEP or go through our mail archive > to find out what month a bug-fix is going to get cut (and also something the > community can basically count on). I don't have a strong preference on that front, as even the worst case outcome of a schedule misalignment for Fedora is what's happening for Fedora 25 & 26: F25 in November will still have Python 3.5, while Rawhide will get the 3.6 beta in September or so and then F26 will be released with 3.6 in the first half of next year.
So really, I think the main criterion here is "Whatever works best for the folks directly involved in release management". However, if we did decide we wanted to take minimising "time to redistribution" for at least Ubuntu & Fedora into account, then the two main points to consider would be:
- starting the upstream beta phase before the first downstream alpha freeze
- publishing the upstream final release before the last downstream beta freeze
Assuming 6 month distro cadences, and taking the F25 and 16.10 release cycles as representative, we get:
- Ubuntu alpha 1 releases in late January & June
- Fedora alpha freezes in early February & August
- Ubuntu final beta freezes in late March & September
- Fedora beta freezes in late March & September
Further assuming we stuck with the current model of ~3 months from beta 1 -> final release, that would suggest a cadence alternating between:
* December beta, February release
* May beta, August release
If we did that, then 3.6 -> 3.7 would be another "short" cycle (15 months from Dec 2016 to Feb 2018) before settling into a regular cadence of:
* 2017-12: 3.7.0b1
* 2018-02: 3.7.0
* 2019-05: 3.8.0b1
* 2019-08: 3.8.0
* 2020-12: 3.9.0b1
* 2021-02: 3.9.0
* 2022-05: 3.10.0b1
* 2022-08: 3.10.0
* etc...
The precise timing of maintenance releases isn't as big a deal (since they don't require anywhere near as much downstream coordination), but offsetting them by a month from the feature releases (so March & September in the Fedbuntu driven proposal above) would allow for the X.(Y+1).1 release to go out at the same time as the final X.Y.Z release. I'll reiterate though that we should be able to adjust to *any* consistent 18 month cycle downstream - the only difference will be the typical latency between new versions being released on python.org, and them showing up in Linux distros as the system Python installation. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at python.org Tue Jul 5 21:05:21 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 21:05:21 -0400 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: <20160705210521.71e94aa3@subdivisions.wooz.org> On Jul 06, 2016, at 10:02 AM, Nick Coghlan wrote: >On 6 July 2016 at 03:44, Barry Warsaw wrote: > >> Projecting ahead, it probably means 3.7 in mid-2018, which is after the >> Ubuntu 18.04 LTS release, so we'll only do one major transition before the >> next LTS. From my perspective, that feels about right. > >Likewise - 24 months is a bit too slow in getting features out, 12 >months expands the community version support matrix too much, while 18 >months means that even folks supporting 5* year old LTS Linux releases >will typically only be a couple of releases behind the latest version. Cool. Not that there aren't other distros and OSes involved, but having at least this much alignment is a good sign. I should also note that while Debian has a release-when-ready approach, Python 3.6 alpha 2-ish is available in Debian experimental for those who like the bleeding edge. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Tue Jul 5 21:07:52 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jul 2016 11:07:52 +1000 Subject: [Python-Dev] release cadence In-Reply-To: <20160705134713.0b1b493e@subdivisions.wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <57799DF2.8000803@python.org> <20160705132829.23a4deda.barry@wooz.org> <9604aaa1-1cf2-f1f6-4daf-556c7514b1c9@python.org> <20160705134713.0b1b493e@subdivisions.wooz.org> Message-ID: On 6 July 2016 at 03:47, Barry Warsaw wrote: > On Jul 05, 2016, at 10:38 AM, Steve Dower wrote: > >>My hope is that it would be essentially a "pip freeze"/"pip install -r ..." >>(or equivalent with whatever tool is used/created for managing the >>stdlib). Perhaps using VCS URIs rather than version numbers? >> >>That is, the test run would dump a list of exactly which stdlib versions it's >>using, so that when you review the results it is possible to recreate it. > > I think you'd have to have vcs checkouts though, because you will often need > to fix or change something in one of those other library pieces. The other > complication of course is that now you'll have two dependent PRs with reviews > in two different repos. I'd actually advocate for keeping a unified clone, and make any use of pip to manage pieces of the standard library purely an install-time thing (as it is for ensurepip). The main problem I see with actually making stdlib development dependent on having a venv already set up for pip to do its thing without affecting the rest of your system is that it would pose a major bootstrapping problem. 
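The install-time management described above would lean on the same dist-info metadata that pip already writes for regular packages. As a minimal sketch of the kind of lookup that would enable, here is a query using `importlib.metadata`; note that this module only joined the stdlib in Python 3.8, well after this thread, and the distribution names are just examples:

```python
from importlib import metadata


def dist_version(name):
    """Return the installed version of a distribution, or None if it
    is not installed (or ships no dist-info metadata)."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None


# Anything installed with dist-info metadata answers; stdlib modules do
# not today, which is exactly the gap being discussed in this thread.
print(dist_version("pip"))
print(dist_version("statistics"))  # no dist-info for stdlib modules
```

If stdlib pieces were installed this way, the same query (and the same pip commands) would work for them unchanged; that is the appeal of reusing the existing tooling rather than inventing a parallel mechanism.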
It does mean we'd be introducing a greater divergence between the way devs work locally and the way the CI system worked (as in this model we'd definitely want the buildbots to be exercising both the "test in checkout" and "test an installed version" case), but working across multiple repos would be worse. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at python.org Tue Jul 5 21:09:15 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Jul 2016 21:09:15 -0400 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: <20160705210915.2366627f@subdivisions.wooz.org> On Jul 06, 2016, at 10:55 AM, Nick Coghlan wrote: >However, if we did decide we wanted to take minimising "time to >redistribution" for at least Ubuntu & Fedora into account, then the >two main points to consider would be: > >- starting the upstream beta phase before the first downstream alpha freeze >- publishing the upstream final release before the last downstream beta freeze There have been cases in the past where the schedules didn't align perfectly, and we really wanted to get ahead of the game, so we released with a late beta, and then got SRUs (stable release upgrade approval) to move to the final release *after* the Ubuntu final release. This isn't great though, especially for non-LTS releases because they have shorter lifecycles and no point releases. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Tue Jul 5 21:16:11 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jul 2016 11:16:11 +1000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On 6 July 2016 at 07:04, Brett Cannon wrote: > Realizing that all of these are just proposals with no solid plan behind > them, they are all predicated on moving to GitHub, and none of these are > directly promoting releasing every module in the stdlib on PyPI as a > stand-alone package with its own versioning, they are: > > 1. Break the stdlib out from CPython and have it be a stand-alone repo > 2. Break the stdlib up into a bunch of independent repos that when viewed > together make up the stdlib (Steve Dower did some back-of-envelope grouping > and pegged the # of repos at ~50)
3. Keep everything in the main CPython repo, but add a "Bundled" subdirectory of independently releasable multi-version compatible subprojects that we move some Lib/* components to.
I think one of our goals here should be that "./configure && make && make altinstall" continues to get you a full Python installation for the relevant version. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 5 21:25:47 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Jul 2016 11:25:47 +1000 Subject: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release) In-Reply-To: <20160705210915.2366627f@subdivisions.wooz.org> References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> <20160705210915.2366627f@subdivisions.wooz.org> Message-ID: On 6 July 2016 at 11:09, Barry Warsaw wrote: > On Jul 06, 2016, at 10:55 AM, Nick Coghlan wrote: > >>However, if we did decide we wanted to take minimising "time to >>redistribution" for at least Ubuntu & Fedora into account, then the >>two main points to consider would be: >> >>- starting the upstream beta phase before the first downstream alpha freeze >>- publishing the upstream final release before the last downstream beta freeze > > There have been cases in the past where the schedules didn't align perfectly, > and we really wanted to get ahead of the game, so we released with a late > beta, and then got SRUs (stable release upgrade approval) to move to the final > release *after* the Ubuntu final release. This isn't great though, especially > for non-LTS releases because they have shorter lifecycles and no point > releases. Aye, Petr and I actually discussed doing something like that in order to get Python 3.6 into F25, but eventually decided it would be better to just wait the extra 6 months. We may end up creating a Python 3.6 COPR for F24 & 25 though, similar to the one Matej Stuchlik created for F23 when Python 3.5 didn't quite make the release deadlines. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Tue Jul 5 22:24:10 2016 From: brett at python.org (Brett Cannon) Date: Wed, 06 Jul 2016 02:24:10 +0000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: On Tue, 5 Jul 2016 at 18:16 Nick Coghlan wrote: > On 6 July 2016 at 07:04, Brett Cannon wrote: > > Realizing that all of these are just proposals with no solid plan behind > > them, they are all predicated on moving to GitHub, and none of these are > > directly promoting releasing every module in the stdlib on PyPI as a > > stand-alone package with its own versioning, they are: > > > > 1. Break the stdlib out from CPython and have it be a stand-alone repo > > 2. Break the stdlib up into a bunch of independent repos that when viewed > > together make up the stdlib (Steve Dower did some back-of-envelope > grouping > > and pegged the # of repos at ~50) > > 3. Keep everything in the main CPython repo, but add a "Bundled" > subdirectory of independently releasable multi-version compatible > subprojects that we move some Lib/* components to. > That's basically what Steve is proposing. > > I think one of our goals here should be that "./configure && make && > make altinstall" continues to get you a full Python installation for > the relevant version. > I don't think anyone is suggesting otherwise. You just might have to do `git clone --recursive` to get a full-fledged CPython checkout w/ stdlib. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at pearwood.info Tue Jul 5 23:11:16 2016 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 6 Jul 2016 13:11:16 +1000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> Message-ID: <20160706031116.GV27919@ando.pearwood.info> On Tue, Jul 05, 2016 at 08:01:43PM +0200, Petr Viktorin wrote: > In the tkinter case, compiling from source is easy on a developer's > computer, but doing that on a headless server brings in devel files for > the entire graphical environment. > Are you saying Python on servers should have a way to do turtle > graphics, otherwise it's not Python? That's a really good question. I don't think we have an exact answer to "What counts as Python?". It's not like ECMAScript (Javascript) or C where there's a standard that defines the language and standard modules. We just have some de facto guidelines:
- CPython is definitely Python;
- Jython is surely Python, even if it lacks the byte-code of CPython and some things behave slightly differently;
- MicroPython is probably Python, because nobody expects to be able to run Tkinter GUI apps on an embedded device with 256K of RAM;
but it's hard to make that judgement except on a case-by-case basis. I think though that even if there's no documented line, most people recognise that there are "core" and "non-core" standard modules. dis and tkinter are non-core: if µPython leaves out tkinter, nobody will be surprised; if Jython leaves out dis, nobody will hold it against them; but if they leave out math or htmllib that's another story. So a headless server can probably leave out tkinter; but a desktop shouldn't. [...]
> > The other extreme is Javascript/Node.js, where the "just use pip" (or > npm in this case) philosophy has been taken to such extremes that one > developer practically brought down the entire Node.js ecosystem by > withdrawing an eleven line module, left-pad, in a fit of pique. > > Being open source, the damage was routed around quite quickly, but > still, I think it's a good cautionary example of how a technological > advance can transform a programming culture to the worse. I don't understand the analogy. Should the eleven-line module have been in Node's stdlib? Outside of stdlib, people are doing this. The point is that Javascript/Node.js is so lacking in batteries that the community culture has gravitated to an extreme version of "just use pip". I'm not suggesting that you, or anyone else, has proposed that Python do the same, only that there's a balance to be found between the extremes of "everything in the Python ecosystem should be part of the standard installation" and "next to nothing should be part of the standard installation". The hard part is deciding where that balance should be :-) -- Steve From encukou at gmail.com Wed Jul 6 05:01:15 2016 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 6 Jul 2016 11:01:15 +0200 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: <20160706031116.GV27919@ando.pearwood.info> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: On 07/06/2016 05:11 AM, Steven D'Aprano wrote: > On Tue, Jul 05, 2016 at 08:01:43PM +0200, Petr Viktorin wrote: > >> In the tkinter case, compiling from source is easy on a developer's >> computer, but doing that on a headless server brings in devel files for >> the entire graphical environment. >> Are you saying Python on servers should have a way to do turtle >> graphics, otherwise it's not Python? > > That's a really good question.
> > I don't think we have an exact answer to "What counts as Python?". It's > not like ECMAScript (Javascript) or C where there's a standard that > defines the language and standard modules. We just have some de facto > guidelines: > > - CPython is definitely Python; > - Jython is surely Python, even if it lacks the byte-code of CPython and > some things behave slightly differently; > - MicroPython is probably Python, because nobody expects to be able to > run Tkinter GUI apps on an embedded device with 256K of RAM; > > but it's hard to make that judgement except on a case-by-case basis. > > I think though that even if there's no documented line, most people > recognise that there are "core" and "non-core" standard modules. dis and > tkinter are non-core: if µPython leaves out tkinter, nobody will be > surprised; if Jython leaves out dis, nobody will hold it against them; > but if they leave out math or htmllib that's another story. For MicroPython, I would definitely expect htmllib to be an optional add-on: it's not useful for reading data off a thermometer and saving it to an SD card. But I guess that's getting too deep into specifics. > So a headless server can probably leave out tkinter; but a desktop > shouldn't. Up till recently this wasn't possible to express in terms of RPM dependencies. Now, it's on the ever-growing TODO list... Another problem here is that you don't explicitly "install Python" on Fedora: when you install the system, you get a minimal set of packages to make everything work, and most of Python is part of that, but tkinter is not. This is in contrast to python.org releases, where you explicitly ask for (all of) Python. Technically it would now be possible to have to install Python to use it, but we run into another "batteries included" problem: Python (or, "most-of-Python") is a pretty good battery for an OS.
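A distribution that leaves a module like tkinter out of the base install can soften the surprise by shipping a placeholder that fails with an actionable hint, which is what Debian already does for Tkinter. A sketch of the idea, using an invented module name (`faketk`) and an invented `dnf` package name rather than any real packaging detail:

```python
import importlib
import importlib.util
import os
import sys
import tempfile

# Source of a hypothetical placeholder module a distribution might
# install in place of the real one. Importing it fails immediately,
# but with an install hint instead of a bare ModuleNotFoundError.
STUB = (
    "raise ImportError(\n"
    "    'This build of Python does not include faketk. '\n"
    "    'Run `dnf install python3-faketk` to install it.')\n"
)

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "faketk.py"), "w") as f:
        f.write(STUB)
    sys.path.insert(0, tmp)
    try:
        try:
            importlib.import_module("faketk")
        except ImportError as exc:
            print(exc)  # the actionable hint the user sees

        # Presence can also be probed without running any module code,
        # via importlib.util.find_spec. Note the limitation: find_spec
        # sees the stub file itself and so reports the module as
        # present, which is why a standard "query which stdlib modules
        # are really here" mechanism would need richer metadata.
        print(importlib.util.find_spec("faketk") is not None)
    finally:
        sys.path.remove(tmp)
```

The stub-writing and path manipulation here are only scaffolding to make the sketch self-contained; a distribution would simply install the placeholder file where the real module would normally live.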
Maybe a good short-term solution would be to make "import tkinter" raise ImportError("Run `dnf install tkinter` to install the tkinter module") if not found. This would prevent confusion while keeping the status quo. I'll look into that. > [...] >>> The other extreme is Javascript/Node.js, where the "just use pip" (or >>> npm in this case) philosophy has been taken to such extremes that one >>> developer practically brought down the entire Node.js ecosystem by >>> withdrawing an eleven line module, left-pad, in a fit of pique. >>> >>> Being open source, the damage was routed around quite quickly, but >>> still, I think it's a good cautionary example of how a technological >>> advance can transform a programming culture to the worse. >> >> I don't understand the analogy. Should the eleven-line module have been >> in Node's stdlib? Outside of stdlib, people are doing this. > > The point is that Javascript/Node.js is so lacking in batteries that the > community culture has gravitated to an extreme version of "just use > pip". I'm not suggesting that you, or anyone else, has proposed that > Python do the same, only that there's a balance to be found between the > extremes of "everything in the Python ecosystem should be part of the > standard installation" and "next to nothing should be part of the > standard installation". > > The hard part is deciding where that balance should be :-) I think the balance is where it needs to be for CPython, and it's also where it needs to be for Fedora. The real hard part is acknowledging that it needs to be in different places for different use cases, and making sure work to support the different use cases is coordinated. So, I guess I'm starting to form a concrete proposal:
1) Document what should happen when a stdlib module is not available. This should be an ImportError with an informative error message, something along the lines of 'This build of Python does not include SQLite support.'
or 'MicroPython does not support turtle' or 'Use `sudo your-package-manager install tkinter` to install this module'.
2) Document leaf modules (or "clusters") that can be removed from the stdlib, and their dependencies. Make no guarantees about cross-version compatibility of this metadata.
3) Standardize a way to query which stdlib modules are present (without side effects, i.e. trying an import doesn't count)
4) Adjust pip to ignore installed stdlib modules that are present, so distributions can depend on "statistics" and not "statistics if python_ver<3.4". (statistics is just an example, obviously this would only work for modules added after the PEP). For missing stdlib modules, pip should fail with the message from 1). (Unless the "pip upgrade asyncio" proposal goes through, in which case install the module if it's upgradable).
5) Reserve all stdlib module names on PyPI for backports or non-installable placeholders.
6) To ease smoke-testing behavior on Pythons without all of the stdlib, allow pip to remove leaf stdlib modules from a venv.
Looks like it's time for a PEP. From rosuav at gmail.com Wed Jul 6 05:09:04 2016 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 6 Jul 2016 19:09:04 +1000 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: On Wed, Jul 6, 2016 at 7:01 PM, Petr Viktorin wrote: > Maybe a good short-term solution would be to make "import tkinter" raise > ImportError("Run `dnf install tkinter` to install the tkinter module") > if not found. This would prevent confusion while keeping the status quo. > I'll look into that. > +1. There's precedent for it; Debian does this:
rosuav at sikorsky:~$ python
Python 2.7.11+ (default, Jun 2 2016, 19:34:15) [GCC 5.3.1 20160528] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Tkinter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 42, in <module>
    raise ImportError, str(msg) + ', please install the python-tk package'
ImportError: No module named _tkinter, please install the python-tk package
ChrisA From p.f.moore at gmail.com Wed Jul 6 05:29:44 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 6 Jul 2016 10:29:44 +0100 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: On 6 July 2016 at 10:01, Petr Viktorin wrote: > 4) Adjust pip to ignore installed stdlib modules that are present, so > distributions can depend on "statistics" and not "statistics if > python_ver<3.4". (statistics is just an example, obviously this would > only work for modules added after the PEP). For missing stdlib modules, > pip should fail with the message from 1). (Unless the "pip upgrade > asyncio" proposal goes through, in which case install the module if > it's upgradable). A couple of comments here.
1. Projects may still need to depend on "statistics from Python 3.6 or later, but the one in 3.5 isn't good enough". Consider for example unittest, where projects often need the backport unittest2 to get access to features that aren't in older versions.
2. This is easy enough to do if we make stdlib modules publish version metadata. But it does raise the question of what the version of a stdlib module is - probably Python version plus a micro version for interim updates. Also, I have a recollection of pip having problems with some stdlib modules that publish version data right now (wsgiref?) - that should be checked to make sure this approach would work. > Looks like it's time for a PEP.
Probably - in principle, something like this proposal could be workable, it'll be a matter of thrashing out the details (which is something the PEP process is good at). Paul From steve.dower at python.org Wed Jul 6 09:55:45 2016 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Jul 2016 06:55:45 -0700 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: I think the wsgiref issue was that it wasn't in site-packages and so couldn't be removed or upgraded. Having the dist-info available and putting them in site-packages (or a new directory?) lets us handle querying/replacing/removing using the existing tools we use for distribution. Also, I think the version numbers do need to be independent of Python version in case nothing changes between releases. If you develop something using statistics==3.7, but statistics==3.6 is identical, how do you know you can put the lower constraint? Alternatively, if it's statistics==1.0 in both, people won't assume it has anything to do with the Python version. The tricky part here is when everything is in the one repo and everyone implicitly uses the latest version for development, you get the reproducibility issues Barry mentioned earlier. If they're separate and most people have pinned versions, that goes away. Top-posted from my Windows Phone -----Original Message----- From: "Paul Moore" Sent: 7/6/2016 2:32 To: "Petr Viktorin" Cc: "Python-Dev" Subject: Re: [Python-Dev] Breaking up the stdlib (Was: release cadence) On 6 July 2016 at 10:01, Petr Viktorin wrote: > 4) Adjust pip to ignore installed stdlib modules that are present, so > distributions can depend on "statistics" and not "statistics if > python_ver<3.4". (statistics is just an example, obviously this would > only work for modules added after the PEP).
For missing stdlib modules, > pip should fail with the message from 1). (Unless the "pip upgrade > asyncio" proposal goes through, in which case install the module if > it's upgradable). A couple of comments here. 1. Projects may still need to depend on "statistics from Python 3.6 or later, but the one in 3.5 isn't good enough". Consider for example unittest, where projects often need the backport unittest2 to get access to features that aren't in older versions. 2. This is easy enough to do if we make stdlib modules publish version metadata. But it does raise the question of what the version of a stdlib module is - probably Python version plus a micro version for interim updates. Also, I have a recollection of pip having problems with some stdlib modules that publish version data right now (wsgiref?) - that should be checked to make sure this approach would work. > Looks like it's time for a PEP. Probably - in principle, something like this proposal could be workable, it'll be a matter of thrashing out the details (which is something the PEP process is good at). Paul _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jul 6 10:09:51 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 6 Jul 2016 15:09:51 +0100 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: On 6 July 2016 at 14:55, Steve Dower wrote: > I think the wsgiref issue was that it wasn't in site-packages and so > couldn't be removed or upgraded.
Having the dist-info available and putting > them in site-packages (or a new directory?) lets us handle > querying/replacing/removing using the existing tools we use for > distribution. That's the one. Thanks for the reminder. So either we move the stdlib (or parts of it) into site-packages, or pip needs to learn to handle a versioned stdlib. Cool. > Also, I think the version numbers do need to be independent of Python > version in case nothing changes between releases. If you develop something > using statistics==3.7, but statistics==3.6 is identical, how do you know you > can put the lower constraint? Alternatively, if it's statistics==1.0 in > both, people won't assume it has anything to do with the Python version. This boils down to whether we want to present the stdlib as a unified object tied to the Python release, or a set of modules no different from those on PyPI, that happen to be shipped with Python. I prefer the former view (it matches better how I think of "batteries included") whereas it looks like you prefer the latter (but don't see that as being in conflict with "batteries included"). Debating this in the abstract is probably not productive, so let's wait for a concrete PEP to thrash out details like this. Paul From steve.dower at python.org Wed Jul 6 10:27:01 2016 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Jul 2016 07:27:01 -0700 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: I consider making stdlib modules "optional" like this to be completely separate from making them individually versioned - can't quite tell whether you guys do as well? To everyone: please don't conflate these two discussions. 
The other is about CPython workflow and this one is about community/user expectations (I have not been proposing to remove stdlib modules at any point). Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Petr Viktorin" Sent: 7/6/2016 2:04 To: "Steven D'Aprano" ; "Python-Dev" Subject: Re: [Python-Dev] Breaking up the stdlib (Was: release cadence) On 07/06/2016 05:11 AM, Steven D'Aprano wrote: > On Tue, Jul 05, 2016 at 08:01:43PM +0200, Petr Viktorin wrote: > >> In the tkinter case, compiling from source is easy on a developer's >> computer, but doing that on a headless server brings in devel files for >> the entire graphical environment. >> Are you saying Python on servers should have a way to do turtle >> graphics, otherwise it's not Python? > > That's a really good question. > > I don't think we have an exact answer to "What counts as Python?". It's > not like ECMAScript (Javascript) or C where there's a standard that > defines the language and standard modules. We just have some de facto > guidelines: > > - CPython is definitely Python; > - Jython is surely Python, even if it lacks the byte-code of CPython and > some things behave slightly differently; > - MicroPython is probably Python, because nobody expects to be able to > run Tkinter GUI apps on an embedded device with 256K of RAM; > > but it's hard to make that judgement except on a case-by-case basis. > > I think though that even if there's no documented line, most people > recognise that there are "core" and "non-core" standard modules. dis and > tkinter are non-core: if µPython leaves out tkinter, nobody will be > surprised; if Jython leaves out dis, nobody will hold it against them; > but if they leave out math or htmllib that's another story. For MicroPython, I would definitely expect htmllib to be an optional add-on - it's not useful for reading data off a thermometer and saving it to an SD card. But I guess that's getting too deep into specifics.
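The case-by-case judgement described above is what portable code already handles by hand: guard the import of a non-core battery and degrade gracefully on builds that leave it out. A minimal sketch of that common idiom (not anything proposed in this thread):

```python
# A common idiom: guard the import of a non-core battery so the same code
# runs both on builds that ship tkinter and on headless/minimal builds
# that leave it out.
try:
    import tkinter
except ImportError:
    tkinter = None  # minimal build: GUI features are simply disabled

def gui_available() -> bool:
    """Report whether this build of Python includes the tkinter battery."""
    return tkinter is not None
```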
> So a headless server can probably leave out tkinter; but a desktop > shouldn't. Up till recently this wasn't possible to express in terms of RPM dependencies. Now, it's on the ever-growing TODO list... Another problem here is that you don't explicitly "install Python" on Fedora: when you install the system, you get a minimal set of packages to make everything work, and most of Python is part of that - but tkinter is not. This is in contrast to python.org releases, where you explicitly ask for (all of) Python. Technically it would now be possible to have to install Python to use it, but we run into another "batteries included" problem: Python (or, "most-of-Python") is a pretty good battery for an OS. Maybe a good short-term solution would be to make "import tkinter" raise ImportError("Run `dnf install tkinter` to install the tkinter module") if not found. This would prevent confusion while keeping the status quo. I'll look into that. > [...] >>> The other extreme is Javascript/Node.js, where the "just use pip" (or >>> npm in this case) philosophy has been taken to such extremes that one >>> developer practically brought down the entire Node.js ecosystem by >>> withdrawing an eleven-line module, left-pad, in a fit of pique. >>> >>> Being open source, the damage was routed around quite quickly, but >>> still, I think it's a good cautionary example of how a technological >>> advance can transform a programming culture for the worse. >> >> I don't understand the analogy. Should the eleven-line module have been >> in Node's stdlib? Outside of stdlib, people are doing this. > > The point is that Javascript/Node.js is so lacking in batteries that the > community culture has gravitated to an extreme version of "just use > pip".
I'm not suggesting that you, or anyone else, has proposed that > Python do the same, only that there's a balance to be found between the > extremes of "everything in the Python ecosystem should be part of the > standard installation" and "next to nothing should be part of the > standard installation". > > The hard part is deciding where that balance should be :-) I think the balance is where it needs to be for CPython, and it's also where it needs to be for Fedora. The real hard part is acknowledging that it needs to be in different places for different use cases, and making sure work to support the different use cases is coordinated. So, I guess I'm starting to form a concrete proposal: 1) Document what should happen when a stdlib module is not available. This should be an informative ImportError message, something along the lines of 'This build of Python does not include SQLite support.' or 'MicroPython does not support turtle' or 'Use `sudo your-package-manager install tkinter` to install this module'. 2) Document leaf modules (or "clusters") that can be removed from the stdlib, and their dependencies. Make no guarantees about cross-version compatibility of this metadata. 3) Standardize a way to query which stdlib modules are present (without side effects, i.e. trying an import doesn't count) 4) Adjust pip to ignore installed stdlib modules that are present, so distributions can depend on "statistics" and not "statistics if python_ver<3.4". (statistics is just an example, obviously this would only work for modules added after the PEP). For missing stdlib modules, pip should fail with the message from 1). (Unless the "pip upgrade asyncio" proposal goes through, in which case install the module if it's upgradable). 5) Reserve all stdlib module names on PyPI for backports or non-installable placeholders. 6) To ease smoke-testing behavior on Pythons without all of the stdlib, allow pip to remove leaf stdlib modules from a venv.
Looks like it's time for a PEP. _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Wed Jul 6 10:53:08 2016 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Jul 2016 07:53:08 -0700 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: Thrashing out details should go on the workflow SIG, and I guess I'm the obvious candidate to push it along. But given my own time constraints right now, I'm not going to dive into details if the high-level concept (stdlib packages can be individually updated by end users apart from a full CPython release) is at issue. Once there seems to be general agreement that this is a worthy goal, I'll see if I can put down details for how I would implement it. (And go join the core-workflow list, I guess :) ) Top-posted from my Windows Phone -----Original Message----- From: "Paul Moore" Sent: 7/6/2016 7:10 To: "Steve Dower" Cc: "Petr Viktorin" ; "Python-Dev" Subject: Re: [Python-Dev] Breaking up the stdlib (Was: release cadence) On 6 July 2016 at 14:55, Steve Dower wrote: > I think the wsgiref issue was that it wasn't in site-packages and so > couldn't be removed or upgraded. Having the dist-info available and putting > them in site-packages (or a new directory?) lets us handle > querying/replacing/removing using the existing tools we use for > distribution. That's the one. Thanks for the reminder. So either we move the stdlib (or parts of it) into site-packages, or pip needs to learn to handle a versioned stdlib. Cool.
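The dist-info querying Steve describes is what packaging tools already key off. In current Python the same lookup is exposed in the standard library as `importlib.metadata` (which postdates this 2016 thread; at the time the equivalent went through `pkg_resources`). A sketch:

```python
from importlib import metadata

def dist_version(name):
    """Return the installed dist-info version for `name`, or None when no
    metadata is found - which is the situation for today's stdlib modules,
    since they ship without per-module dist-info."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None
```

Under the proposal being discussed, a stdlib module shipped with its own dist-info would start answering this query just like any PyPI distribution.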
> Also, I think the version numbers do need to be independent of Python > version in case nothing changes between releases. If you develop something > using statistics==3.7, but statistics==3.6 is identical, how do you know you > can put the lower constraint? Alternatively, if it's statistics==1.0 in > both, people won't assume it has anything to do with the Python version. This boils down to whether we want to present the stdlib as a unified object tied to the Python release, or a set of modules no different from those on PyPI, that happen to be shipped with Python. I prefer the former view (it matches better how I think of "batteries included") whereas it looks like you prefer the latter (but don't see that as being in conflict with "batteries included"). Debating this in the abstract is probably not productive, so let's wait for a concrete PEP to thrash out details like this. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Wed Jul 6 12:26:54 2016 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Jul 2016 09:26:54 -0700 Subject: [Python-Dev] Breaking up the stdlib (Was: release cadence) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: <4a43c818-b42e-2fee-aead-9f3f8bd476e8@python.org> On 06Jul2016 0753, Steve Dower wrote: > Thrashing out details should go on the workflow SIG, and I guess I'm the > obvious candidate to push it along. But given my own time constraints > right now, I'm not going to dive into details if the high-level concept > (stdlib packages can be individually updated by end users apart from a > full CPython release) is at issue. > > Once there seems to be general agreement that this is a worthy goal, > I'll see if I can put down details for how I would implement it.
(And go > join the core-workflow list, I guess :) ) Rather than wait for general agreement, since this thread is probably widely muted already, I'll put a PEP together to clearly set out what I'm envisioning here in a form we can directly discuss. But time is precious, so don't expect it this week :) (Also, on Brett's advice, this belongs on python-dev and not core-workflow right now.) Cheers, Steve From ncoghlan at gmail.com Wed Jul 6 22:44:31 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 7 Jul 2016 12:44:31 +1000 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: On 7 July 2016 at 00:27, Steve Dower wrote: > I consider making stdlib modules "optional" like this to be completely > separate from making them individually versioned - can't quite tell whether > you guys do as well? The point of overlap I see is that if the stdlib starts putting some selected modules into site-packages (so "pip install --upgrade " works without any changes to pip or equivalent tools), then that also solves the "How to explicitly declare dependencies on particular pieces of the standard library" problem: you use the same mechanisms we already use to declare dependencies on 3rd party packages distributed via a packaging server. I really like the idea of those independently versioned subcomponents of the standard library only being special in the way they're developed (i.e. as part of the standard library) and the way they're published (i.e. included by default with the python.org binary installers and source tarballs), rather than also being special in the way they're managed at runtime. 
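The "same mechanisms" Nick refers to are ordinary packaging requirements. With a PEP 508 environment marker, a dependency on a backport can already be made conditional on the Python version, which is how the earlier "statistics if python_ver<3.4" example would be spelled today (a sketch; that a `statistics` backport distribution exists on PyPI is an assumption, not something guaranteed by this thread):

```text
# requirements.txt (or an install_requires entry):
# install the PyPI backport only on Pythons that predate the stdlib module.
statistics; python_version < "3.4"
```

On 3.4+ the marker evaluates false and pip skips the requirement entirely, leaving the stdlib module in charge.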
Versioning selected stdlib subsets also has potential to help delineate clear expectations for redistributors: Are you leaving a particular subset out of your default install? Then you should propose that it become an independently versioned subset of the stdlib. Is a given subset already independently versioned in the stdlib? Then it may be OK to leave it out of your default install and document that you've done so. > To everyone: please don't conflate these two discussions. The other is about > CPython workflow and this one is about community/user expectations (I have > not been proposing to remove stdlib modules at any point). While I agree they're separate discussions, the workflow management one has the potential to *also* improve the user experience in cases where redistributors are already separating out pieces of the stdlib to make them optional. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From eric at trueblade.com Thu Jul 7 08:12:27 2016 From: eric at trueblade.com (Eric V. Smith) Date: Thu, 7 Jul 2016 08:12:27 -0400 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> Message-ID: <0040ac0f-9f99-f43b-0c72-90eba6ffc4b1@trueblade.com> On 7/6/2016 10:44 PM, Nick Coghlan wrote: > The point of overlap I see is that if the stdlib starts putting some > selected modules into site-packages (so "pip install --upgrade > " works without any changes to pip or equivalent tools), > then that also solves the "How to explicitly declare dependencies on > particular pieces of the standard library" problem: you use the same > mechanisms we already use to declare dependencies on 3rd party > packages distributed via a packaging server. 
One thing to keep in mind if we do this is how it interacts with the -S command line option to not include site-packages in sys.path. I currently use -S to basically mean "give my python as it was distributed, and don't include anything that was subsequently added by adding other RPM's (or package manager of your choice)". I realize that's a rough description, and possibly an abuse of -S. If using -S were to start excluding parts of the stdlib, that would be a problem for me. Eric. From barry at python.org Thu Jul 7 09:24:13 2016 From: barry at python.org (Barry Warsaw) Date: Thu, 7 Jul 2016 09:24:13 -0400 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: <0040ac0f-9f99-f43b-0c72-90eba6ffc4b1@trueblade.com> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> <0040ac0f-9f99-f43b-0c72-90eba6ffc4b1@trueblade.com> Message-ID: <20160707092413.1e1e2680@subdivisions.wooz.org> On Jul 07, 2016, at 08:12 AM, Eric V. Smith wrote: >One thing to keep in mind if we do this is how it interacts with the -S >command line option to not include site-packages in sys.path. I currently use >-S to basically mean "give my python as it was distributed, and don't include >anything that was subsequently added by adding other RPM's (or package >manager of your choice)". I realize that's a rough description, and possibly >an abuse of -S. If using -S were to start excluding parts of the stdlib, that >would be a problem for me. It's an important consideration, and leads to another discussion that's recurred over the years. Operating systems often want an "isolated" Python, similar to what's given by -I, which cannot be altered by subsequent installs. It's one of the things that lead to the Debian ecosystem using dist-packages for PyPI installed packages. 
Without isolation, it's just too easy for some random PyPI thing to break your system, and yes, that has really happened in the past. So if we go down the path of moving more of the stdlib to site-packages, we also need to get serious about a system-specific isolated Python. Cheers, -Barry From dholth at gmail.com Thu Jul 7 09:34:40 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 07 Jul 2016 13:34:40 +0000 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: <20160707092413.1e1e2680@subdivisions.wooz.org> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> <0040ac0f-9f99-f43b-0c72-90eba6ffc4b1@trueblade.com> <20160707092413.1e1e2680@subdivisions.wooz.org> Message-ID: Yes, not too long ago I installed "every" Python package on Ubuntu, and Python basically would not start. Perhaps some plugin system was trying to import everything and caused a segfault in GTK. The "short sys.path" model, where everything installed is importable, has its limits. On Thu, Jul 7, 2016 at 9:24 AM Barry Warsaw wrote: > On Jul 07, 2016, at 08:12 AM, Eric V. Smith wrote: > > >One thing to keep in mind if we do this is how it interacts with the -S > >command line option to not include site-packages in sys.path. I currently > use > >-S to basically mean "give my python as it was distributed, and don't > include > >anything that was subsequently added by adding other RPM's (or package > >manager of your choice)". I realize that's a rough description, and > possibly > >an abuse of -S. If using -S were to start excluding parts of the stdlib, > that > >would be a problem for me. > > It's an important consideration, and leads to another discussion that's > recurred over the years. Operating systems often want an "isolated" > Python, > similar to what's given by -I, which cannot be altered by subsequent > installs.
It's one of the things that lead to the Debian ecosystem using > dist-packages for PyPI installed packages. Without isolation, it's just > too easy for some random PyPI thing to break your system, and yes, that has > really happened in the past. > > So if we go down the path of moving more of the stdlib to site-packages, we > also need to get serious about a system-specific isolated Python. > > Cheers, > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/dholth%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Artyom.Skrobov at arm.com Thu Jul 7 10:43:54 2016 From: Artyom.Skrobov at arm.com (Artyom Skrobov) Date: Thu, 7 Jul 2016 14:43:54 +0000 Subject: [Python-Dev] Python parser performance optimizations In-Reply-To: References: Message-ID: Hello, This is a monthly ping to get a review on http://bugs.python.org/issue26415 -- "Excessive peak memory consumption by the Python parser". The first patch of the series (an NFC refactoring) was successfully committed earlier in June, so the next step is to get the second patch, "the payload", reviewed and committed. To address the concerns raised by the commenters back in May: the patch doesn't lead to negative memory consumption, of course. The base for calculating percentages is the smaller number of the two; this is the same style of reporting that perf.py uses. In other words, "200% less memory usage" is a threefold shrink. The absolute values, and the way they were produced, are all reported under the ticket.
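The percentage convention Artyom describes (the base is the smaller of the two numbers, perf.py-style) can be made concrete with a small helper; `shrink_report` is a hypothetical illustration, not code from the patch:

```python
def shrink_report(old_bytes: int, new_bytes: int) -> str:
    """Report a memory change perf.py-style: the percentage is computed
    against the smaller of the two numbers, so it can exceed 100%, and a
    threefold shrink reads as '200% less'."""
    smaller, larger = sorted((old_bytes, new_bytes))
    pct = (larger - smaller) / smaller * 100
    direction = "less" if new_bytes < old_bytes else "more"
    return f"{pct:.0f}% {direction} memory usage"
```

With this convention, a parser whose peak drops from 300 MB to 100 MB reports "200% less memory usage", i.e. the threefold shrink mentioned above.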
From: Artyom Skrobov Sent: 26 May 2016 11:19 To: 'python-dev at python.org' Subject: Python parser performance optimizations Hello, Back in March, I've posted a patch at http://bugs.python.org/issue26526 -- "In parsermodule.c, replace over 2KLOC of hand-crafted validation code, with a DFA". The motivation for this patch was to enable a memory footprint optimization, discussed at http://bugs.python.org/issue26415 My proposed optimization reduces the memory footprint by up to 30% on the standard benchmarks, and by 200% on a degenerate case which sparked the discussion. The run time stays unaffected by this optimization. Python Developer's Guide says: "If you don't get a response within a few days after pinging the issue, then you can try emailing python-dev at python.org asking for someone to review your patch." So, here I am. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.croitor at qt.io Wed Jul 6 12:14:34 2016 From: alexandru.croitor at qt.io (Alexandru Croitor) Date: Wed, 6 Jul 2016 16:14:34 +0000 Subject: [Python-Dev] Debugging Python scripts with GDB on OSX Message-ID: Hello, I'm interested to find out if debugging Python scripts with GDB is supported on OSX at all? I'm referring to the functionality described on https://wiki.python.org/moin/DebuggingWithGdb and on http://fedoraproject.org/wiki/Features/EasierPythonDebugging. I've tried so far various combinations of pre-compiled GDB from the homebrew package manager, locally-compiled GDB from homebrew, as well as locally compiled GDB from MacPorts, together with a pre-compiled Python 2.7, homebrew-compiled 2.7, and custom compiled Python 2.7 from the official source tarball. My results so far were not successful. The legacy GDB commands to show a python stack trace or the local variables - do not work. And the new GDB commands (referenced on the Fedora project page) are not present at all in any of the GDB versions. 
I've checked the python CI build bot tests, and it seems the new GDB commands are only successfully tested on Linux machines, and are skipped on FreeBSD, OS X, and Solaris machines. Are the new python <-> GDB commands specific to Linux? Are there any considerations to take in regards to debug symbols for Python / GDB on OSX? Has anyone attempted what I'm trying to do? I would be grateful for any advice. And I apologize if my choice of the mailing lists is not the best. Regards, Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Thu Jul 7 11:34:58 2016 From: steve.dower at python.org (Steve Dower) Date: Thu, 7 Jul 2016 08:34:58 -0700 Subject: [Python-Dev] Making stdlib modules optional for distributions (Was: Breaking up the stdlib (Was: release cadence)) In-Reply-To: <20160707092413.1e1e2680@subdivisions.wooz.org> References: <177a7bdf-6744-3b18-29d4-5e6b6e1f17ab@gmail.com> <20160705170205.GS27919@ando.pearwood.info> <20160706031116.GV27919@ando.pearwood.info> <0040ac0f-9f99-f43b-0c72-90eba6ffc4b1@trueblade.com> <20160707092413.1e1e2680@subdivisions.wooz.org> Message-ID: <0a06fedf-f739-64ca-d3b1-d606d1c5a5cd@python.org> On 07Jul2016 0624, Barry Warsaw wrote: > On Jul 07, 2016, at 08:12 AM, Eric V. Smith wrote: > >> One thing to keep in mind if we do this is how it interacts with the -S >> command line option to not include site-packages in sys.path. I currently use >> -S to basically mean "give my python as it was distributed, and don't include >> anything that was subsequently added by adding other RPM's (or package >> manager of your choice)". I realize that's a rough description, and possibly >> an abuse of -S. If using -S were to start excluding parts of the stdlib, that >> would be a problem for me. > > It's an important consideration, and leads to another discussion that's > recurred over the years. 
Operating systems often want an "isolated" Python, > similar to what's given by -I, which cannot be altered by subsequent > installs. It's one of the things that lead to the Debian ecosystem using > dist-packages for PyPI installed packages. Without isolation, it's just too > easy for some random PyPI thing to break your system, and yes, that has really > happened in the past. > > So if we go down the path of moving more of the stdlib to site-packages, we > also need to get serious about a system-specific isolated Python. I've done just enough research to basically decide that putting any of the stdlib in site-packages is infeasible (it'll break virtualenv/venv as well), so don't worry about that. A "dist-packages" equivalent is a possibility, and it may even be possible to manage these packages directly in Lib/, which would result in basically no visible impact for anyone who doesn't care to update individual parts. Cheers, Steve > Cheers, > -Barry From rdmurray at bitdance.com Thu Jul 7 13:01:17 2016 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 07 Jul 2016 13:01:17 -0400 Subject: [Python-Dev] Debugging Python scripts with GDB on OSX In-Reply-To: References: Message-ID: <20160707170118.54739B1415B@webabinitio.net> On Wed, 06 Jul 2016 16:14:34 -0000, Alexandru Croitor wrote: > I'm interested to find out if debugging Python scripts with GDB is supported on OSX at all? > > I'm referring to the functionality described on https://wiki.python.org/moin/DebuggingWithGdb and on http://fedoraproject.org/wiki/Features/EasierPythonDebugging. > > I've tried so far various combinations of pre-compiled GDB from the homebrew package manager, locally-compiled GDB from homebrew, as well as locally compiled GDB from MacPorts, together with a pre-compiled Python 2.7, homebrew-compiled 2.7, and custom compiled Python 2.7 from the official source tarball. > > My results so far were not successful. 
The legacy GDB commands to show a python stack trace or the local variables - do not work. And the new GDB commands (referenced on the Fedora project page) are not present at all in any of the GDB versions. > > I've checked the python CI build bot tests, and it seems the new GDB commands are only successfully tested on Linux machines, and are skipped on FreeBSD, OS X, and Solaris machines. > > Are the new python <-> GDB commands specific to Linux? > Are there any considerations to take in regards to debug symbols for Python / GDB on OSX? > > Has anyone attempted what I'm trying to do? > > I would be grateful for any advice. > > And I apologize if my choice of the mailing lists is not the best. I tried to do this a few weeks ago myself, with similar negative results. The only thing I tried that you don't mention (I didn't try everything you did) is a compile from raw gdb source...and that didn't support OSX format core dumps. So I gave up. --David From carlosjosepita at gmail.com Thu Jul 7 13:09:16 2016 From: carlosjosepita at gmail.com (Carlos Pita) Date: Thu, 7 Jul 2016 14:09:16 -0300 Subject: [Python-Dev] __qualname__ exposed as a local variable: standard? Message-ID: Hi all, I noticed __qualname__ is exposed by locals() while defining a class. This is handy but I'm not sure about its status: is it standard or just an artifact of the current implementation? (btw, the pycodestyle linter -former pep8- rejects its usage). I was unable to find any reference to this behavior in PEP 3155 nor in the language reference. Thank you in advance -- Carlos -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doko at ubuntu.com Fri Jul 8 08:08:35 2016 From: doko at ubuntu.com (Matthias Klose) Date: Fri, 8 Jul 2016 14:08:35 +0200 Subject: [Python-Dev] release cadence In-Reply-To: References: <5772F17A.1080902@hastings.org> <5774CD41.9030601@hastings.org> <20160705134449.302fb580.barry@wooz.org> Message-ID: <577F97C3.7000000@ubuntu.com> On 05.07.2016 21:11, Brett Cannon wrote: > On Tue, 5 Jul 2016 at 10:45 Barry Warsaw wrote: > >> On Jul 04, 2016, at 10:31 AM, Nick Coghlan wrote: >> >>> While we liked the "consistent calendar cadence that is some multiple >>> of 6 months" idea, several of us thought 12 months was way too short >>> as it makes for too many entries in third party support matrices. >> >> 18 months for a major release cadence still seems right to me. Downstreams >> and third-parties often have to go through *a lot* of work to ensure >> compatibility, and try as we might, every Python release breaks >> *something*. >> Major version releases trigger a huge cascade of other work for lots of >> other >> people, and I don't think shortening that would be for the overall >> community >> good. It just feels like we'd always be playing catch up. >> > > Sticking w/ 18 months is also fine, but then I would like to discuss > choosing what months we try to release to get into a date-based release > cadence so we know that every e.g. December and June are when releases > typically happen thanks to our 6 month bug-fix release cadence. This has > the nice benefit of all of us being generally aware of when a bug-fix > release is coming up instead of having to check the PEP or go through our > mail archive to find out what month a bug-fix is going to get cut (and also > something the community can basically count on). I like the 18-month cycle, because it's a multiple of six, which fits the Ubuntu release cadence (as far as I understand, the Fedora cadence as well).
Sometimes it might be ambitious to update reverse dependencies in the distro within two months until the distro freeze, and two more months during the freeze leading to a distro release, but such is life, and it's then up to distro maintainers of LTS releases to prepare the distro for a new version with only four months left. My hope with time-based releases is that upstreams will also start testing new versions earlier when they can anticipate the release date. Matthias From montazami.trading at yahoo.it Fri Jul 8 09:16:00 2016 From: montazami.trading at yahoo.it (Hadi Montazami safavi) Date: Fri, 8 Jul 2016 13:16:00 +0000 (UTC) Subject: [Python-Dev] digital panels processing text sowftware write with Python language References: <2080976027.6803033.1467983760829.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <2080976027.6803033.1467983760829.JavaMail.yahoo@mail.yahoo.com> Dear Sirs, With thanks, iam looking digital panels processing text sowftware write with Python language for my project. Could you please guide me if there is or guiding to write it personally. Thanking you in advance and awaiting. With best regards. M.Safavi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steve at pearwood.info Fri Jul 8 12:07:04 2016 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 9 Jul 2016 02:07:04 +1000 Subject: [Python-Dev] digital panels processing text sowftware write with Python language In-Reply-To: <2080976027.6803033.1467983760829.JavaMail.yahoo@mail.yahoo.com> References: <2080976027.6803033.1467983760829.JavaMail.yahoo.ref@mail.yahoo.com> <2080976027.6803033.1467983760829.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20160708160703.GC27919@ando.pearwood.info> On Fri, Jul 08, 2016 at 01:16:00PM +0000, Hadi Montazami safavi via Python-Dev wrote: > Dear SirsWith thanks, iam looking digital panels processing text > sowftware write with Python languagefor my project.Could you please > guide me if there is or guiding to write it personally.Thanking you in > advance and awaiting.With best regards.M.Safavi Sorry, we cannot help you on this list. It is for the development of the Python interpreter, not for projects using Python. Please try the Python-List mailing list, they may be able to help you. https://mail.python.org/mailman/listinfo/python-list -- Steve From status at bugs.python.org Fri Jul 8 12:08:45 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 8 Jul 2016 18:08:45 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160708160845.9D8C35692A@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-07-01 - 2016-07-08) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5550 ( +5) closed 33676 (+28) total 39226 (+33) Open issues with patches: 2427 Issues opened (22) ================== #23908: Check path arguments of os functions for null character http://bugs.python.org/issue23908 reopened by koobs #26765: Factor out common bytes and bytearray implementation http://bugs.python.org/issue26765 reopened by serhiy.storchaka #27441: redundant assignments to ob_size of new ints that _PyLong_New http://bugs.python.org/issue27441 opened by Oren Milman #27442: expose the Android API level in sysconfig.get_config_vars() http://bugs.python.org/issue27442 opened by xdegaye #27444: Python doesn't build due to test_float.py broken on non-IEEE m http://bugs.python.org/issue27444 opened by stark #27445: Charset instance not passed to set_payload() http://bugs.python.org/issue27445 opened by claudep #27446: struct: allow per-item byte order http://bugs.python.org/issue27446 opened by zwol #27447: python -m doctest script_file_with_no_py_extension produces co http://bugs.python.org/issue27447 opened by towb #27448: Race condition in subprocess.Popen which causes a huge memory http://bugs.python.org/issue27448 opened by aonishuk #27450: bz2: BZ2File should expose compression level as an attribute http://bugs.python.org/issue27450 opened by joshtriplett #27451: gzip.py: Please save more of the gzip header for later examina http://bugs.python.org/issue27451 opened by joshtriplett #27452: IDLE: Cleanup config code http://bugs.python.org/issue27452 opened by terry.reedy #27453: $CPP invocation in configure must use $CPPFLAGS http://bugs.python.org/issue27453 opened by Chi Hsuan Yen #27454: PyUnicode_InternInPlace can use PyDict_SetDefault http://bugs.python.org/issue27454 opened by methane #27455: Fix tkinter examples to be PEP8 compliant http://bugs.python.org/issue27455 opened by John Hagen #27456: TCP_NODELAY http://bugs.python.org/issue27456 opened by j1m #27461: Optimize PNGs http://bugs.python.org/issue27461 opened 
by scop #27464: Document that SplitResult & friends are namedtuples http://bugs.python.org/issue27464 opened by kmike #27465: IDLE:Make help source menu entries unique and sorted. http://bugs.python.org/issue27465 opened by terry.reedy #27466: [Copy from github user macartur] time2netscape missing comma http://bugs.python.org/issue27466 opened by Robby Daigle #27468: Erroneous memory behaviour for objects created in another thre http://bugs.python.org/issue27468 opened by Adria Garriga #27469: Unicode filename gets crippled on Windows when drag and drop http://bugs.python.org/issue27469 opened by Drekin Most recent 15 issues with no replies (15) ========================================== #27469: Unicode filename gets crippled on Windows when drag and drop http://bugs.python.org/issue27469 #27468: Erroneous memory behaviour for objects created in another thre http://bugs.python.org/issue27468 #27464: Document that SplitResult & friends are namedtuples http://bugs.python.org/issue27464 #27451: gzip.py: Please save more of the gzip header for later examina http://bugs.python.org/issue27451 #27446: struct: allow per-item byte order http://bugs.python.org/issue27446 #27445: Charset instance not passed to set_payload() http://bugs.python.org/issue27445 #27435: ctypes and AIX - also for 2.7.X (and later) http://bugs.python.org/issue27435 #27428: Document WindowsRegistryFinder inherits from MetaPathFinder http://bugs.python.org/issue27428 #27427: Add new math module tests http://bugs.python.org/issue27427 #27426: Encoding mismatch causes some tests to fail on Windows http://bugs.python.org/issue27426 #27420: Docs for os.link - say what happens if link already exists http://bugs.python.org/issue27420 #27411: Possible different behaviour of explicit and implicit __dict__ http://bugs.python.org/issue27411 #27409: List socket.SO_*, SCM_*, MSG_*, IPPROTO_* symbols http://bugs.python.org/issue27409 #27408: Document importlib.abc.ExecutionLoader implements get_data() 
http://bugs.python.org/issue27408 #27404: Misc/NEWS: add [Security] prefix to Python 3.5.2 changelog http://bugs.python.org/issue27404 Most recent 15 issues waiting for review (15) ============================================= #27466: [Copy from github user macartur] time2netscape missing comma http://bugs.python.org/issue27466 #27461: Optimize PNGs http://bugs.python.org/issue27461 #27455: Fix tkinter examples to be PEP8 compliant http://bugs.python.org/issue27455 #27454: PyUnicode_InternInPlace can use PyDict_SetDefault http://bugs.python.org/issue27454 #27453: $CPP invocation in configure must use $CPPFLAGS http://bugs.python.org/issue27453 #27452: IDLE: Cleanup config code http://bugs.python.org/issue27452 #27448: Race condition in subprocess.Popen which causes a huge memory http://bugs.python.org/issue27448 #27445: Charset instance not passed to set_payload() http://bugs.python.org/issue27445 #27442: expose the Android API level in sysconfig.get_config_vars() http://bugs.python.org/issue27442 #27441: redundant assignments to ob_size of new ints that _PyLong_New http://bugs.python.org/issue27441 #27429: xml.sax.saxutils.escape doesn't escape multiple characters saf http://bugs.python.org/issue27429 #27427: Add new math module tests http://bugs.python.org/issue27427 #27423: Failed assertions when running test.test_os on Windows http://bugs.python.org/issue27423 #27419: Bugs in PyImport_ImportModuleLevelObject http://bugs.python.org/issue27419 #27413: Add an option to json.tool to bypass non-ASCII characters. 
http://bugs.python.org/issue27413 Top 10 most discussed issues (10) ================================= #27442: expose the Android API level in sysconfig.get_config_vars() http://bugs.python.org/issue27442 25 msgs #27456: TCP_NODELAY http://bugs.python.org/issue27456 9 msgs #27455: Fix tkinter examples to be PEP8 compliant http://bugs.python.org/issue27455 8 msgs #23908: Check path arguments of os functions for null character http://bugs.python.org/issue23908 7 msgs #27436: Strange code in selectors.KqueueSelector http://bugs.python.org/issue27436 7 msgs #27444: Python doesn't build due to test_float.py broken on non-IEEE m http://bugs.python.org/issue27444 7 msgs #27452: IDLE: Cleanup config code http://bugs.python.org/issue27452 7 msgs #27448: Race condition in subprocess.Popen which causes a huge memory http://bugs.python.org/issue27448 6 msgs #26765: Factor out common bytes and bytearray implementation http://bugs.python.org/issue26765 5 msgs #27081: Cannot capture sys.stderr output from an uncaught exception in http://bugs.python.org/issue27081 5 msgs Issues closed (28) ================== #11027: Implement sectionxform in configparser http://bugs.python.org/issue11027 closed by berker.peksag #19527: Test failures with COUNT_ALLOCS http://bugs.python.org/issue19527 closed by serhiy.storchaka #22198: Odd floor-division corner case http://bugs.python.org/issue22198 closed by serhiy.storchaka #22624: Bogus usage of floatclock in timemodule http://bugs.python.org/issue22624 closed by haypo #23034: Dynamically control debugging output http://bugs.python.org/issue23034 closed by serhiy.storchaka #24557: Refactor LibreSSL / EGD detection http://bugs.python.org/issue24557 closed by python-dev #27019: Reduce marshal stack depth for 2.7 on Windows debug build http://bugs.python.org/issue27019 closed by python-dev #27220: Add a pure Python version of 'collections.defaultdict' http://bugs.python.org/issue27220 closed by rhettinger #27248: Possible refleaks in PyType_Ready in 
error condition http://bugs.python.org/issue27248 closed by python-dev #27254: UAF in Tkinter module http://bugs.python.org/issue27254 closed by Emin Ghuliev #27332: Clinic: first parameter for module-level functions should be P http://bugs.python.org/issue27332 closed by serhiy.storchaka #27380: IDLE: add base Query dialog with ttk widgets http://bugs.python.org/issue27380 closed by terry.reedy #27410: DLL hijacking vulnerability in Python 3.5.2 installer http://bugs.python.org/issue27410 closed by steve.dower #27422: Deadlock when mixing threading and multiprocessing http://bugs.python.org/issue27422 closed by davin #27434: cross-building python 3.6 with an older interpreter fails http://bugs.python.org/issue27434 closed by xdegaye #27437: IDLE tests must be able to set user configuration values. http://bugs.python.org/issue27437 closed by terry.reedy #27438: Refactor simple iterators implementation http://bugs.python.org/issue27438 closed by serhiy.storchaka #27439: Add a product() function to the standard library http://bugs.python.org/issue27439 closed by rhettinger #27440: Trigonometric bug http://bugs.python.org/issue27440 closed by tim.peters #27443: __length_hint__() of bytearray iterator can return negative in http://bugs.python.org/issue27443 closed by serhiy.storchaka #27449: pip install --upgrade pip (Windows) http://bugs.python.org/issue27449 closed by zach.ware #27457: shlex.quote incorrectly quotes ampersants, pipes http://bugs.python.org/issue27457 closed by r.david.murray #27458: Allow subtypes of unicode/str to hit the optimized unicode_con http://bugs.python.org/issue27458 closed by haypo #27459: unintended changes occur when dealing with list of list http://bugs.python.org/issue27459 closed by eryksun #27460: Change bytes exception when overflow http://bugs.python.org/issue27460 closed by serhiy.storchaka #27462: NULL Pointer deref in binary_iop1 function http://bugs.python.org/issue27462 closed by r.david.murray #27463: Floor division is not 
the same as the floor of division http://bugs.python.org/issue27463 closed by r.david.murray #27467: distutils.config API different between <=3.5.1 and 3.5.2 http://bugs.python.org/issue27467 closed by berker.peksag From nad at python.org Fri Jul 8 21:07:53 2016 From: nad at python.org (Ned Deily) Date: Fri, 8 Jul 2016 21:07:53 -0400 Subject: [Python-Dev] Reminder: 3.6.0a3 snapshot 2016-07-11 12:00 UTC Message-ID: The next alpha snapshot for the 3.6 release cycle is coming up in a couple of days. This is the third of four alphas we have planned. Keep that feature code coming in! As a reminder, alpha releases are intended to make it easier for the wider community to test the current state of new features and bug fixes for an upcoming Python release as a whole and for us to test the release process. During the alpha phase, features may be added, modified, or deleted up until the start of the beta phase. Alpha users beware! Looking ahead, the last alpha release, 3.6.0a4, will follow next month on 2016-08-15. This is a slight change from the previous schedule. Due to popular request, we've moved the feature code cutoff date, 3.6.0b1, from Wednesday, 09-07, to Monday, 09-12, to allow a bit more time after the end-of-summer holiday. In doing so, it seems to make sense to move the final alpha back a week to give a bit more time there for check-ins. To recap: 2016-07-11 ~12:00 UTC: code snapshot for 3.6.0 alpha 3 2016-08-15 ~12:00 UTC: code snapshot for 3.6.0 alpha 4. 
(**CHANGED**, was 08-08) now to 2016-09-12: Alpha phase (unrestricted feature development) 2016-09-12: 3.6.0 feature code freeze, 3.7.0 feature development begins (**CHANGED**, was 09-07) 2016-09-12 to 2016-12-04: 3.6.0 beta phase (bug and regression fixes, no new features) 2016-12-04 3.6.0 release candidate 1 (3.6.0 code freeze) 2016-12-16 3.6.0 release (3.6.0rc1 plus, if necessary, any dire emergency fixes) --Ned https://www.python.org/dev/peps/pep-0494/ -- Ned Deily nad at python.org -- [] From ncoghlan at gmail.com Sat Jul 9 09:50:59 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 9 Jul 2016 23:50:59 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules Message-ID: I'm in the process of trying to disentangle http://bugs.python.org/issue27137 which points out some of the behavioural differences that arise when falling back from the original C implementation of functools.partial to the pure Python emulation that uses a closure. That issue was opened due to a few things that work with the C implementation that fail with the Python implementation: - the C version can be pickled (and hence used with multiprocessing) - the C version can be subclassed - the C version can be used in "isinstance" checks - the C version behaves as a static method, the Python version as a normal instance method While I'm planning to accept the patch that converts the pure Python version to a full class that matches the semantics of the C version in these areas as well as in its core behaviour, that last case is one where the pure Python version merely exhibits different behaviour from the C version, rather than failing outright. Given that the issues that arose in this case weren't at all obvious up front, what do folks think of the idea of updating PEP 399 to explicitly prohibit class/function mismatches between accelerator modules and their pure Python counterparts? 
The rationale for making such a change is that when it comes to true drop-in API compatibility, we have reasonable evidence that "they're both callables" isn't sufficient once the complexities of real world applications enter the picture. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tseaver at palladion.com Sat Jul 9 10:56:46 2016 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 9 Jul 2016 10:56:46 -0400 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/09/2016 09:50 AM, Nick Coghlan wrote: > Given that the issues that arose in this case weren't at all obvious > up front, what do folks think of the idea of updating PEP 399 to > explicitly prohibit class/function mismatches between accelerator > modules and their pure Python counterparts? > > The rationale for making such a change is that when it comes to true > drop-in API compatibility, we have reasonable evidence that "they're > both callables" isn't sufficient once the complexities of real world > applications enter the picture. +1. Might need some clarification: - - "C functions can fall back to ________________". - - "C classes must fall back to Python classes". Outlining the constraints in the PEP (identical pickling semantics, subclassability, etc.) might be important, too. Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXgRCpAAoJEPKpaDSJE9HYNhwP+gN1xGSZlEvrxz5SGrqTneUx 5WDh2oUJzlTFHDrbSMTeGcpoYviWPLWFy0Hw7PBgRhrlA/TS7WA5/4ABde+2Zs0w PN5AaaXZrkGAHvcQZkBzcEY9ITSpeb+GSmLG4Eih30UAuPFnM3M1UYSjEGjVZV23 tYeDTcmORNaBcQDPG8HiidfOArBKTcz8Jd1IimFrYEOFGSsk6DxPWffJ3EkR6qFj FwsktPDZT113AiztrkLt1s8vyLj8JdzkGKJO+fSfOsp70NZCRy1SKi6tJjHfd4e5 qf9qgi9yh39y+VktBV0o83+gkaGlOKIPRCqOYdkOQHl59RT0YWBoRuXrVtmE00fl QoePBxJjVszlzknULLOXptv0B0sv0ZhsXPgID3hhZ0Z78LQ1RG/9fquGGbfOFfe0 qiPR4LZKCTatP4jvxV3PVKJ9NdXb8OKmfF7oEO7t8WZBJUMtpeuKOw5Qj6Am1pTA UUtDuCXeP0rPVE6Nj5p3NuhkVWuW9eX+7v4XhUC+t4c74PeDo+Fx+LjZF/D3WGVr yD2fcoL16mZ/+LWbxblVkhmsNQpyogtZfj/yvnLctMlGfvseMV9tPOe4GG5QLexW HRl3fSMRIi6MjYnxQyeF/vp+eWd6ApK9EIFqYcLWO+AjzXeZ8uS8+ezGzA7ZUvyG GKJB/ThZHTxuszh7kUgq =mRpk -----END PGP SIGNATURE----- From ethan at stoneleaf.us Sat Jul 9 12:51:15 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sat, 09 Jul 2016 09:51:15 -0700 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: <57812B83.8020908@stoneleaf.us> On 07/09/2016 06:50 AM, Nick Coghlan wrote: > Given that the issues that arose in this case weren't at all obvious > up front, what do folks think of the idea of updating PEP 399 to > explicitly prohibit class/function mismatches between accelerator > modules and their pure Python counterparts? So this is basically a doc change and a reminder to test both the C version and the Python version to ensure identical semantics? 
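The "test both versions" half of that reminder is already mechanised in CPython's own test suite via the import_fresh_module helper, which can import a module with its accelerator either blocked or forced. A minimal sketch of the pattern using heapq (the helper's location has moved between Python versions, hence the fallback import):

```python
# Import both the pure Python and the accelerated heapq, then run the
# same check against each; blocked=['_heapq'] forces the fallback path.
try:
    from test.support.import_helper import import_fresh_module  # 3.10+
except ImportError:
    from test.support import import_fresh_module  # older layout

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])  # pure Python
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])     # C-accelerated

for mod in (py_heapq, c_heapq):
    heap = [5, 1, 4, 2, 3]
    mod.heapify(heap)
    assert heap[0] == 1  # both implementations must agree
print("both implementations agree")
```

The same loop can be extended with pickling, subclassing, and isinstance checks, which is exactly the coverage this thread argues should be mandatory.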
-- ~Ethan~ From tjreedy at udel.edu Sat Jul 9 14:48:06 2016 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 9 Jul 2016 14:48:06 -0400 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: On 7/9/2016 9:50 AM, Nick Coghlan wrote: > I'm in the process of trying to disentangle > http://bugs.python.org/issue27137 which points out some of the > behavioural differences that arise when falling back from the original > C implementation of functools.partial to the pure Python emulation > that uses a closure. > > That issue was opened due to a few things that work with the C > implementation that fail with the Python implementation: > > - the C version can be pickled (and hence used with multiprocessing) > - the C version can be subclassed > - the C version can be used in "isinstance" checks > - the C version behaves as a static method, the Python version as a > normal instance method > > While I'm planning to accept the patch that converts the pure Python > version to a full class that matches the semantics of the C version in > these areas as well as in its core behaviour, that last case is one > where the pure Python version merely exhibits different behaviour from > the C version, rather than failing outright. > > Given that the issues that arose in this case weren't at all obvious > up front, what do folks think of the idea of updating PEP 399 to > explicitly prohibit class/function mismatches between accelerator > modules and their pure Python counterparts? +1 I would put it positively that both versions have to use matching classes or matching functions. I assumed that this was already true. > The rationale for making such a change is that when it comes to true > drop-in API compatibility, we have reasonable evidence that "they're > both callables" isn't sufficient once the complexities of real world > applications enter the picture. This same issue has come up in the itertools docs. 
The C code callables are iterator classes that return iterator instances. The didactic doc 'equivalents' are generator functions that return generator instances. The result is functionally the same in that the respective iterators yield the same sequence of objects. In this, the doc equivalents concisely fulfill their purpose. But non-iterator behaviors of C class instances and generators are different, and this confused some people. Python-coded itertools would have to be iterator classes to be really equivalent. In May, Raymond added 'Roughly' to 'equivalent'. -- Terry Jan Reedy From steve at pearwood.info Sat Jul 9 15:10:05 2016 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 10 Jul 2016 05:10:05 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: <20160709190957.GD27919@ando.pearwood.info> On Sat, Jul 09, 2016 at 11:50:59PM +1000, Nick Coghlan wrote: > I'm in the process of trying to disentangle > http://bugs.python.org/issue27137 which points out some of the > behavioural differences that arise when falling back from the original > C implementation of functools.partial to the pure Python emulation > that uses a closure. [...] > Given that the issues that arose in this case weren't at all obvious > up front, what do folks think of the idea of updating PEP 399 to > explicitly prohibit class/function mismatches between accelerator > modules and their pure Python counterparts? > > The rationale for making such a change is that when it comes to true > drop-in API compatibility, we have reasonable evidence that "they're > both callables" isn't sufficient once the complexities of real world > applications enter the picture. Are these problems common enough, or serious enough, to justify a blanket ban on mismatches (a "MUST NOT" rather than a "SHOULD NOT" in RFC 2119 terminology)? 
The other side of the issue is that requiring exact correspondence is considerably more difficult and may be over-kill for some uses. Hypothetically speaking, if I have an object that only supports pickling by accident, and I replace it with one that doesn't support pickling, shouldn't it be my decision whether that counts as a functional regression (a bug) or a reliance on an undocumented and accidental implementation detail? -- Steve From ncoghlan at gmail.com Sat Jul 9 20:15:33 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 10 Jul 2016 10:15:33 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: <20160709190957.GD27919@ando.pearwood.info> References: <20160709190957.GD27919@ando.pearwood.info> Message-ID: On 10 July 2016 at 05:10, Steven D'Aprano wrote: > The other side of the issue is that requiring exact correspondence is > considerably more difficult and may be over-kill for some uses. > Hypothetically speaking, if I have an object that only supports pickling > by accident, and I replace it with one that doesn't support pickling, > shouldn't it be my decision whether that counts as a functional > regression (a bug) or a reliance on an undocumented and accidental > implementation detail? That's the proposed policy change and the reason I figured it needed a python-dev discussion, as currently it's up to folks adding the Python equivalents (or the C accelerators) to decide on a case by case basis whether or not to care about compatibility for: - string representations - pickling (and hence multiprocessing support) - subclassing - isinstance checks - descriptor behaviour The main way for such discrepancies to arise is for the Python implementation to be a function (or closure), while the C implementation is a custom stateful callable. 
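The shape of that closure-versus-callable mismatch can be sketched in a few lines (names invented for illustration; this is not the actual functools source). The two flavours are interchangeable as plain callables, but diverge exactly on the points in the list above:

```python
import pickle

# Closure-based pure Python emulation: returns an anonymous inner function.
def closure_partial(func, *args):
    def inner(*more):
        return func(*args, *more)
    return inner

# Class-based version, closer to the surface a C implementation exposes:
# instances carry named state and can be pickled and isinstance-checked.
class ClassPartial:
    def __init__(self, func, *args):
        self.func = func
        self.args = args

    def __call__(self, *more):
        return self.func(*self.args, *more)

    def __reduce__(self):
        return (type(self), (self.func,) + self.args)

pow2_a = closure_partial(pow, 2)
pow2_b = ClassPartial(pow, 2)

# Both behave identically as plain callables:
print(pow2_a(5), pow2_b(5))  # 32 32

# Only the class version round-trips through pickle:
print(pickle.loads(pickle.dumps(pow2_b))(5))  # 32
try:
    pickle.dumps(pow2_a)
except (pickle.PicklingError, AttributeError, TypeError):
    print("closure version is not picklable")

# Only the class version supports isinstance checks against a public type:
print(isinstance(pow2_b, ClassPartial), isinstance(pow2_a, ClassPartial))
```

A doctest or quick interactive session exercises only the first behaviour, which is why the discrepancies surfaced so late.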
The problem with the current "those are just technical implementation details" approach is that a lot of Pythonistas learn standard library API behaviour and capabilities through a mix of experimentation and introspection rather than reading the documentation, so if CPython uses the accelerated version by default, then most folks aren't going to notice the discrepancies until they (or their users) are trying to debug a problem like "my library works fine in CPython, but breaks when used with multiprocessing on PyPy" or "my doctests fail when running under MicroPython". For non-standard library code, those kinds of latent compatibility defects are fine, but PEP 399 is specifically about setting design & development policy for the *standard library* to help improve cross-implementation compatibility not just of the standard library itself, but of code *using* the standard library in ways that work on CPython in its default configuration. One example of a practical consequence of the change in policy would be to say that if you don't want to support subclassing, then don't give the *type* a public name - hide it behind a factory function, the way contextlib.contextmanager hides contextlib._GeneratorContextManager. That example also shows that accidentally making a type public (but still undocumented) isn't necessarily a commitment to keeping it public forever - that type was originally contextlib.GeneratorContextManager, so when it was pointed out it wasn't documented, I had to choose between supporting it as a public API (which I didn't want to do) and adding the leading underscore to its name to better reflect its implementation detail status. 
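The factory-function pattern being described looks roughly like this (names invented for illustration; contextlib's real code differs): only the function is public, and the underscore-prefixed type stays an implementation detail.

```python
class _Accumulator:
    # Private type: undocumented and underscore-prefixed, so subclassing
    # it or isinstance-checking against it is off the table by convention.
    def __init__(self, start):
        self._total = start

    def add(self, value):
        self._total += value
        return self._total

def accumulator(start=0):
    # The public API is this factory alone, mirroring how
    # contextlib.contextmanager hides contextlib._GeneratorContextManager.
    return _Accumulator(start)

acc = accumulator(10)
print(acc.add(5))  # 15
print(acc.add(5))  # 20
```

Because callers never see the type's name, a C accelerator is free to implement the same factory with a completely different internal type without breaking anyone.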
In a similar fashion, the policy I'm proposing here also wouldn't require that discrepancies always be resolved in favour of enhancing the Python version to match the C version - in some cases, if the module maintainer genuinely doesn't want to support a particular behaviour, then they can make sure that behaviour isn't available for the C version either. The key change is that it would become officially *not* OK for the feature set of the C version to be a superset of the feature set of the Python version - either the C version has to be constrained to match the Python one, or the Python one enhanced to match the C one, rather than leaving the latent compatibility defect in place as a barrier to adoption for alternate runtimes. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Sat Jul 9 21:15:59 2016 From: brett at python.org (Brett Cannon) Date: Sun, 10 Jul 2016 01:15:59 +0000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: On Sat, 9 Jul 2016 at 06:52 Nick Coghlan wrote: > I'm in the process of trying to disentangle > http://bugs.python.org/issue27137 which points out some of the > behavioural differences that arise when falling back from the original > C implementation of functools.partial to the pure Python emulation > that uses a closure. 
> > That issue was opened due to a few things that work with the C > implementation that fail with the Python implementation: > > - the C version can be pickled (and hence used with multiprocessing) > - the C version can be subclassed > - the C version can be used in "isinstance" checks > - the C version behaves as a static method, the Python version as a > normal instance method > > While I'm planning to accept the patch that converts the pure Python > version to a full class that matches the semantics of the C version in > these areas as well as in its core behaviour, that last case is one > where the pure Python version merely exhibits different behaviour from > the C version, rather than failing outright. > > Given that the issues that arose in this case weren't at all obvious > up front, what do folks think of the idea of updating PEP 399 to > explicitly prohibit class/function mismatches between accelerator > modules and their pure Python counterparts? > I think flat-out prohibiting won't work in the Python -> C case as you can do things such as closures and such that I don't know if we provide the APIs to mimic through the C API. I'm fine saying we "strongly encourage mirroring the design between the pure Python and accelerated version for various reasons". -Brett > > The rationale for making such a change is that when it comes to true > drop-in API compatibility, we have reasonable evidence that "they're > both callables" isn't sufficient once the complexities of real world > applications enter the picture. > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun Jul 10 03:10:30 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 10 Jul 2016 17:10:30 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: On 10 July 2016 at 11:15, Brett Cannon wrote: > On Sat, 9 Jul 2016 at 06:52 Nick Coghlan wrote: >> That issue was opened due to a few things that work with the C >> implementation that fail with the Python implementation: >> >> - the C version can be pickled (and hence used with multiprocessing) >> - the C version can be subclassed >> - the C version can be used in "isinstance" checks >> - the C version behaves as a static method, the Python version as a >> normal instance method >> >> While I'm planning to accept the patch that converts the pure Python >> version to a full class that matches the semantics of the C version in >> these areas as well as in its core behaviour, that last case is one >> where the pure Python version merely exhibits different behaviour from >> the C version, rather than failing outright. >> >> Given that the issues that arose in this case weren't at all obvious >> up front, what do folks think of the idea of updating PEP 399 to >> explicitly prohibit class/function mismatches between accelerator >> modules and their pure Python counterparts? > > I think flat-out prohibiting won't work in the Python -> C case as you can > do things such as closures and such that I don't know if we provide the APIs > to mimic through the C API. I'm fine saying we "strongly encourage mirroring > the design between the pure Python and accelerated version for various > reasons". I think we should be more specific than that, as the main problem is that the obvious way to emulate a closure in C is with a custom callable, and there are some subtleties involved in doing that in a way that doesn't create future cross-implementation compatibility traps. 
Specifically, if the Python implementation is a closure, then from an external behaviour perspective, the key behaviours to mimic in a C implementation would be: - disable subclassing & isinstance checks against the public API (e.g. by implementing it as a factory function rather than exposing the custom type directly) - either wrap the Python version in staticmethod, or add descriptor protocol support to the C version - don't add a custom representation in C without also adding it to the Python version - don't add pickling support in C without also adding it to the Python version Similarly, if an existing C implementation uses a custom callable, then a closure may not be a sufficiently compatible alternative, even though it's clean to write and easy to read. These issues don't tend to arise with normal functions, as the obvious replacement for a module level function written in Python is a module level function written in C, and those already tend to behave similarly in all these respects. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vgr255 at live.ca Sun Jul 10 20:34:36 2016 From: vgr255 at live.ca (Emanuel Barry) Date: Mon, 11 Jul 2016 00:34:36 +0000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: Message-ID: Hello all, and thanks Nick for starting the discussion! Long wall of text ahead, whoops! TL;DR - everyone seems to agree, let's do it. I think the main issue that we're hitting is that we (whatever you want "we" to mean) prefer to make Python code in the standard library as easily understandable and readable as possible (a point Raymond raised in the issue, which could be a discussion on its own too, but I won't get into that). 
People new to Python will sometimes look into the code to try and understand how it works, while only contributors and core devs (read: people who know C) will look at the C code, so keeping the code simple isn't as baked in the design as the Python versions. As such, making closures might be TOOWTDI in Python, but it can quickly become annoying to reimplement in C - either you hide away some of the implementation details behind C level variables and pretend like you're a closure, or you change the Python version to something else. It's much less consequential to change a closure into a class than it is to change a class into a function. In this particular case, we lose the descriptorness(?) of the Python version, but I'd rather think of this as fixing a bug rather than removing a feature, especially when partialmethod literally lies right below in the source. (Side-note: I noticed the source says "Purely functional, no descriptor behaviour" but functions exhibit descriptor behaviour) I think the change is worth it (well, there'd be a problem if I didn't ;), but I'm much more concerned about ensuring that: - Someone at some point finding a bunch of bugs^Wdiscrepancies between the Python and C versions of a feature to have some concise rules on the changes they can and cannot make; - Python implementation of existing C features, of C implementations of existing Python features to know exactly the liberty they can and cannot take; - New features implemented both in C and Python, to know offhand their limits and make sure someone further down the line doesn't have to fix it when they realize e.g. PyPy behaves differently. 
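The side-note above is easy to verify: a plain Python function implements __get__, so attaching it to a class makes it bind as an instance method, while a custom callable without __get__ (the shape a C accelerator often takes) does not bind. A short demonstration:

```python
# A plain function defined outside any class:
def describe(self):
    return "instance of " + type(self).__name__

# Functions implement the descriptor protocol (__get__); that is what
# turns attribute access on an instance into a bound method.
print(hasattr(describe, "__get__"))  # True

class Widget:
    pass

Widget.describe = describe         # attach after the fact
w = Widget()
print(w.describe())                # instance of Widget -- bound via __get__

# Manual descriptor invocation produces the same bound method:
bound = describe.__get__(w, Widget)
print(bound() == w.describe())     # True

# By contrast, a callable *instance* with __call__ but no __get__ does
# not bind -- the C custom callable behaves like a static method where
# the pure Python function behaves like an instance method.
class CallableNoGet:
    def __call__(self, *args):
        return "called"

Widget.other = CallableNoGet()
print(w.other())                   # called -- no instance was passed in
```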
> On Saturday, July 09, 2016 8:16 PM, Nick Coghlan wrote: > > That's the proposed policy change and the reason I figured it needed a > python-dev discussion, as currently it's up to folks adding the Python > equivalents (or the C accelerators) to decide on a case by case basis > whether or not to care about compatibility for: > > - string representations > - pickling (and hence multiprocessing support) > - subclassing > - isinstance checks > - descriptor behaviour That's quite an exhaustive list for "let the person making the patch decide what to do with that;" quite the more reason to make this concise (also see my reply to Brett below). > The main way for such discrepancies to arise is for the Python > implementation to be a function (or closure), while the C > implementation is a custom stateful callable. Maybe closures are "too complicated" to be a proper design if something is to be written in both Python and C ;) > The problem with the current "those are just technical implementation > details" approach is that a lot of Pythonistas learn standard library > API behaviour and capabilities through a mix of experimentation and > introspection rather than reading the documentation, so if CPython > uses the accelerated version by default, then most folks aren't going > to notice the discrepancies until they (or their users) are trying to > debug a problem like "my library works fine in CPython, but breaks > when used with multiprocessing on PyPy" or "my doctests fail when > running under MicroPython". I noticed that Python's standard library takes a very "duck-typed" approach to the accelerator modules: "I have this thing which is a function, and I [expose it as a global/use it privately], but before I do so, let's see if there's something with the same name in that other module, then use it." In practice, this doesn't create much of an issue, except this thread exists. 
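The duck-typed wiring described above is the standard try/except dance (heapq does exactly this with _heapq). A sketch using a hypothetical _mymodule extension, which does not exist, so the pure Python fallback is what runs here:

```python
# mymodule-style wiring: define the pure Python version first, then let
# the C accelerator silently shadow it if the extension is available.
def popcount(n):
    """Pure Python fallback implementation."""
    return bin(n).count("1")

try:
    # _mymodule is a hypothetical C extension; on interpreters without
    # it (PyPy, MicroPython, or a CPython built without it), the
    # ImportError leaves the Python definition in place.
    from _mymodule import popcount
except ImportError:
    pass

print(popcount(0b1011_0001))  # 4
```

The entire thread is about what happens when the name imported from the extension is not behaviourally identical to the definition it shadows.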
(I'm not proposing to change how accelerator modules are handled, merely pointing out that making designs identical was never a hard requirement and depended on the developer(s))

> One example of a practical consequence of the change in policy would
> be to say that if you don't want to support subclassing, then don't
> give the *type* a public name - hide it behind a factory function [...]

It's a mixed bag though. How do you disallow subclassing but still allow isinstance() checks? Now let's try it in Python and without metaclasses, and the documented vs undocumented (and unguaranteed) API differences become much more important. But you have to be a consenting adult if you're working your way around the rules, so there's that I guess.

> On Saturday, July 09, 2016 9:16 PM, Brett Cannon wrote:
> I think flat-out prohibiting won't work in the Python -> C case as you
> can do things such as closures and such that I don't know if we provide
> the APIs to mimic through the C API. I'm fine saying we "strongly
> encourage mirroring the design between the pure Python and accelerated
> version for various reasons".

I think these reasons should probably be explained, if we're to settle for a "don't enforce it, just strongly encourage it" wording. I'd rather go for a more aggressive, "exactly mirror the design if possible, and if not suggest changing the design" way, though. To me, "we strongly encourage it" seems like it could be too lax and then years later another similar patch surfaces (which properly fixes it), and it's this discussion all over again - because "strongly encourage" is up to the reviewers' discretion, and that's pretty much the status quo in my eyes.
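[Editor's note: one hypothetical way to square "hide the type behind a factory" with isinstance() support in pure Python, without metaclasses, is to register the private concrete type against a public ABC. All names here are invented for the illustration; this is not what the stdlib does.]

```python
from abc import ABC

class _Partial:
    """Concrete type, deliberately left without a public name - roughly
    what "hide it behind a factory function" means."""
    def __init__(self, func, *args):
        self.func, self.args = func, args

    def __call__(self, *more):
        return self.func(*self.args, *more)

class Partial(ABC):
    """Public ABC exposed *only* so isinstance() checks keep working."""

Partial.register(_Partial)

def partial(func, *args):
    """Public factory; the concrete type stays an implementation detail."""
    return _Partial(func, *args)

p = partial(pow, 2)
assert p(10) == 1024
assert isinstance(p, Partial)   # isinstance() works via ABC registration
assert type(p) is not Partial   # but the concrete type stays private
```

Nothing here actually *prevents* a determined user from digging out `type(p)` and subclassing it - which is Emanuel's "consenting adults" point - but it keeps the concrete type out of the documented namespace.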
-Emanuel

From steve at pearwood.info Sun Jul 10 23:26:14 2016
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 11 Jul 2016 13:26:14 +1000
Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules
In-Reply-To:
References: <20160709190957.GD27919@ando.pearwood.info>
Message-ID: <20160711032613.GH27919@ando.pearwood.info>

On Sun, Jul 10, 2016 at 10:15:33AM +1000, Nick Coghlan wrote:
> On 10 July 2016 at 05:10, Steven D'Aprano wrote:
> > The other side of the issue is that requiring exact correspondence is
> > considerably more difficult and may be over-kill for some uses.
> > Hypothetically speaking, if I have an object that only supports pickling
> > by accident, and I replace it with one that doesn't support pickling,
> > shouldn't it be my decision whether that counts as a functional
> > regression (a bug) or a reliance on an undocumented and accidental
> > implementation detail?
>
> That's the proposed policy change and the reason I figured it needed a
> python-dev discussion, as currently it's up to folks adding the Python
> equivalents (or the C accelerators) to decide on a case by case basis
> whether or not to care about compatibility for:
>
> - string representations
> - pickling (and hence multiprocessing support)
> - subclassing
> - isinstance checks
> - descriptor behaviour

Right... and that's what I'm saying *ought* to be the decision of the maintainer. Do I understand that you agree with this, but you want to ensure that such decisions are made up front rather than when and if discrepancies are noticed?

> The main way for such discrepancies to arise is for the Python
> implementation to be a function (or closure), while the C
> implementation is a custom stateful callable.
> > The problem with the current "those are just technical implementation > details" approach is that a lot of Pythonistas learn standard library > API behaviour and capabilities through a mix of experimentation and > introspection rather than reading the documentation, Indeed. > so if CPython > uses the accelerated version by default, then most folks aren't going > to notice the discrepancies until they (or their users) are trying to > debug a problem like "my library works fine in CPython, but breaks > when used with multiprocessing on PyPy" or "my doctests fail when > running under MicroPython". Yes, and that's a problem, but is it a big enough problem to justify a policy change and pre-emptive effort to prevent it? The great majority of people are never going to run their Python code on anything other than CPython, and while I always encourage people to write in the most platform-independent fashion possible, I am also realistic enough to recognise that platform independence is an ideal that many people will fail to meet. (I'm sure that I've written code that isn't as platform independent as I hope.) My two core questions are: (1) How much extra effort are we going to *mandate* that core devs put in to hide the differences between C and Python code, for the benefit of a small minority that will notice them? (2) When should that effort be done? Upfront, or when and as problems are reported or noticed? My preference for answers will be, (1) not much, and (2) when problems are reported. In other words, close to the status quo. I can't speak for others, but I have a tendency towards over-analysing my code, trying to pre-emptively spot and avoid even really obscure failure modes before they occur. That's a trap: it makes it hard to finish (as much as any code is finished) and harder to meet deadlines. It's taken me a lot of effort, and much influence from TDD, to realise that it's okay to release code with a bug you didn't spot. 
You can always fix it in the next release. I think the same applies here.

I'm okay with something close to the status quo: if a C accelerator doesn't quite have the same undocumented interface as the pure Python one, then it's a bug in one or the other, in which case it's okay to fix it when somebody notices. But I don't think I'm okay with making it mandatory that we prevent such possible incompatibilities ahead of time.

If we do make this mandatory, how is it going to be enforced and checked? The normal way to enforce that accelerated code has the same behaviour as Python code is to see that they both pass the same tests. But this can only check for features where a test has been written. If you don't think of an incompatibility ahead of time, how do you write a test for it?

I appreciate that the standard library should be held up to a higher level of professionalism than external code, but I don't think that *all* the burden should fall on the core developers. Reliance on undocumented features is always a dubious thing to do. We all do it, and when it turns out that the feature can't be counted on (because it changes from one version to another, or isn't available on some platforms), who is to blame for our application breaking? As the programmer who relied on a promise that was never made, surely I must take at least a bit of responsibility? It's not like the docs are locked up in a filing cabinet in the basement behind a door with a sign saying "Beware of the leopard".

I'm just not comfortable with mandating that core devs must do even more work to protect programmers (including myself) from our own failures to code defensively and read the docs, with respect to this specific issue. We have to draw the line somewhere. I've seen people write code that relies on the exact wording of error messages. I've even done that myself, a long time ago. I hope that we would all agree that mirroring error messages is crossing the line.
But beyond that, I'm not sure where the line lies, and I'd rather put off dealing with it until necessary, and on a case-by-case basis. > For non-standard library code, those kinds of latent compatibility > defects are fine, but PEP 399 is specifically about setting design & > development policy for the *standard library* to help improve > cross-implementation compatibility not just of the standard library > itself, but of code *using* the standard library in ways that work on > CPython in its default configuration. PEP 399 already raises this issue. Quote: Any new accelerated code must act as a drop-in replacement as close to the pure Python implementation as reasonable. Technical details of the VM providing the accelerated code are allowed to differ as necessary Are you saying that's not good enough? If so, what's your proposed wording? -- Steve From rosuav at gmail.com Sun Jul 10 23:32:32 2016 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 11 Jul 2016 13:32:32 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: <20160711032613.GH27919@ando.pearwood.info> References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> Message-ID: On Mon, Jul 11, 2016 at 1:26 PM, Steven D'Aprano wrote: > (1) How much extra effort are we going to *mandate* that core devs put > in to hide the differences between C and Python code, for the benefit of > a small minority that will notice them? > The subject line is raising one specific difference: the use of a function in one version and a class in the other. I think it's not unreasonable to stipulate one specific incompatibility that mustn't be permitted. 
ChrisA From ethan at stoneleaf.us Mon Jul 11 02:25:24 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 10 Jul 2016 23:25:24 -0700 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> Message-ID: <57833BD4.9060208@stoneleaf.us> On 07/10/2016 08:32 PM, Chris Angelico wrote: > On Mon, Jul 11, 2016 at 1:26 PM, Steven D'Aprano wrote: >> (1) How much extra effort are we going to *mandate* that core devs put >> in to hide the differences between C and Python code, for the benefit of >> a small minority that will notice them? >> > > The subject line is raising one specific difference: the use of a > function in one version and a class in the other. I think it's not > unreasonable to stipulate one specific incompatibility that mustn't be > permitted. Is that what the subject line meant? I missed that, thanks for pointing that out! I think I can agree with having both versions being functions or both versions being classes. -- ~Ethan~ From rosuav at gmail.com Mon Jul 11 02:28:24 2016 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 11 Jul 2016 16:28:24 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: <57833BD4.9060208@stoneleaf.us> References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <57833BD4.9060208@stoneleaf.us> Message-ID: On Mon, Jul 11, 2016 at 4:25 PM, Ethan Furman wrote: > On 07/10/2016 08:32 PM, Chris Angelico wrote: >> >> On Mon, Jul 11, 2016 at 1:26 PM, Steven D'Aprano >> wrote: >>> >>> (1) How much extra effort are we going to *mandate* that core devs put >>> in to hide the differences between C and Python code, for the benefit of >>> a small minority that will notice them? 
>>> >> >> The subject line is raising one specific difference: the use of a >> function in one version and a class in the other. I think it's not >> unreasonable to stipulate one specific incompatibility that mustn't be >> permitted. > > > Is that what the subject line meant? I missed that, thanks for pointing > that out! > > I think I can agree with having both versions being functions or both > versions being classes. > What I mean is, the subject line is ONLY raising that one difference. If the C version of a class has no __dict__ but the Python version does, that's not as big a difference. ChrisA From ncoghlan at gmail.com Mon Jul 11 03:58:11 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 11 Jul 2016 17:58:11 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: <20160711032613.GH27919@ando.pearwood.info> References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> Message-ID: On 11 July 2016 at 13:26, Steven D'Aprano wrote: > My two core questions are: > > (1) How much extra effort are we going to *mandate* that core devs put > in to hide the differences between C and Python code, for the benefit of > a small minority that will notice them? > > (2) When should that effort be done? Upfront, or when and as problems > are reported or noticed? > > My preference for answers will be, (1) not much, and (2) when problems > are reported. In other words, close to the status quo. 
I think it's still OK to defer fixing discrepancies between the alternate implementations until people explicitly report a problem, which I guess means the policy update that I'd be looking for is that in cases where the C version currently implements a superset of the Python version's behaviour, the recommended policy be to accept patches to align the two even if:

- it makes the Python version more complex and hence harder to read
- it makes the Python version slower on CPython (and likely other runtimes without even a method JIT)

Replacing a closure with a custom callable (as with the functools.partial patch) is a case where both of those objections legitimately apply, hence making the right thing to do less than obvious given the current wording of PEP 399 - it isn't clear whether those simplicity focused considerations should trump the cross-runtime compatibility ones.

Cheers, Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From lele at metapensiero.it Mon Jul 11 05:51:09 2016
From: lele at metapensiero.it (Lele Gaifax)
Date: Mon, 11 Jul 2016 11:51:09 +0200
Subject: [Python-Dev] Supporting native backup facility of SQLite
Message-ID: <877fcsv6ya.fsf@metapensiero.it>

Hi all,

as I'm going to have a need to use the native `online backup API`__ provided by SQLite, I looked around for existing solutions and found `sqlitebck`__. I somewhat dislike the approach taken by that 3rd party module, and I wonder if the API should/could be exposed by the standard library sqlite module instead.

Another option would be using ctypes, but as I never used it, I dunno how easy it is to maintain compatibility between different OSes...

What do you think? Thanks&bye, lele.

__ https://www.sqlite.org/backup.html
__ https://github.com/husio/python-sqlite3-backup

--
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele at metapensiero.it | -- Fortunato Depero, 1929.
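[Editor's note: the interface Lele asks about did eventually land in the stdlib as `sqlite3.Connection.backup()` in Python 3.7, well after this thread. A sketch using that later API - illustrative of the backup API's shape, not something available at the time of writing:]

```python
import sqlite3

# Build a source database with some data.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
src.commit()

# Copy it into another connection via SQLite's online backup API.
# Connection.backup() wraps sqlite3_backup_init/step/finish and was
# added to the stdlib in Python 3.7, i.e. after this discussion.
dst = sqlite3.connect(":memory:")
src.backup(dst)

assert dst.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 3
```

Because the backup runs through SQLite's own incremental API, it can copy a live database without blocking writers for the whole duration - the property that motivated exposing it in the first place.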
From p.f.moore at gmail.com Mon Jul 11 05:58:24 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 11 Jul 2016 10:58:24 +0100 Subject: [Python-Dev] Supporting native backup facility of SQLite In-Reply-To: <877fcsv6ya.fsf@metapensiero.it> References: <877fcsv6ya.fsf@metapensiero.it> Message-ID: On 11 July 2016 at 10:51, Lele Gaifax wrote: > as I'm going to have a need to use the native `online backup API`__ provided > by SQLite, I looked around for existing solutions and found `sqlitebck`__. > > I somewhat dislike the approach taken by that 3rd party module, and I wonder > if the API should/could be exposed by the standard library sqlite module > instead. > > Another option would be using ctypes, but as I never used it, I dunno how easy > it is to maintain compatibility between different OSes... There's also apsw (https://github.com/rogerbinns/apsw), which appears to support backup - http://rogerbinns.github.io/apsw/backup.html. Paul From steve at pearwood.info Mon Jul 11 09:11:28 2016 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 11 Jul 2016 23:11:28 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> Message-ID: <20160711131127.GI27919@ando.pearwood.info> On Mon, Jul 11, 2016 at 01:32:32PM +1000, Chris Angelico wrote: > On Mon, Jul 11, 2016 at 1:26 PM, Steven D'Aprano wrote: > > (1) How much extra effort are we going to *mandate* that core devs put > > in to hide the differences between C and Python code, for the benefit of > > a small minority that will notice them? > > > > The subject line is raising one specific difference: the use of a > function in one version and a class in the other. I think it's not > unreasonable to stipulate one specific incompatibility that mustn't be > permitted. Not so fast. Have you read the bug tracker issue that started this? 
There are certain things, such as functools.partial() objects, which are most naturally implemented as a closure (i.e. a function) in Python, and as a class in C. Whether partial() happens to be a class or a function is an implementation detail. There are advantages to being able to write the simplest code that works, and prohibiting that mismatch means that you cannot do that.

If a core developer wishes to extend the API of partial objects to include such things as subclassing, isinstance tests, pickling etc, then it is reasonable to insist that the C and the Python implementations both support the same API and both use a class. But at the moment I don't think any of those things are promised or supported[1], except by accident, so removing the discrepancy between them is not a bug fix, but adding new features.

The more I think about it, the more I feel that it is *unreasonable* to mandate that devs must ensure that alternate implementations mirror each other's *unsupported and accidental features*. Mirror the supported API? Absolutely! But not accidental features.

I think it's worth reading the issue on the tracker: http://bugs.python.org/issue27137

This isn't an actual problem that occurred in real code, it's a theoretical issue that Emanuel discovered, and by his own admission feels that he was doing something dubious ("It may not be the best idea to subclass something that is meant to be final" -- ya think?). Raymond Hettinger makes some good points about the costs of feature creep needed to support these accidental implementation features, and is against it.

But on the other hand, Serhiy also makes some good points about the usefulness of pickling partial objects. So as far as this *specific* issue goes, perhaps it is justified to make sure the Python implementation supports pickling. (Aside: why can't closures be pickled?)
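[Editor's note: the aside can be answered empirically. A small sketch - names invented for the example - showing why a class-based callable pickles while a closure-based one doesn't:]

```python
import pickle

class Adder:
    """Module-level class: pickle stores the importable name plus the
    instance state, so instances round-trip."""
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return x + self.n

def make_adder(n):
    def add(x):        # local function capturing n in a closure cell
        return x + n
    return add

# The class instance round-trips fine.
restored = pickle.loads(pickle.dumps(Adder(2)))
assert restored(3) == 5

# The closure does not: pickle serialises functions by qualified name,
# and 'make_adder.<locals>.add' can't be looked up at unpickling time
# (nor would the bare name capture the cell holding n).
try:
    pickle.dumps(make_adder(2))
except (pickle.PicklingError, AttributeError):
    unpicklable = True
else:
    unpicklable = False
assert unpicklable
```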
But generalising this to all possible mismatches between a C class implementation and a Python function implementation doesn't necessarily follow. Raymond's general point about simplicity versus feature creep still stands, even if in this case adding pickling is useful.

[1] If I'm wrong about this, and these features are supported, then Emanuel has found a hole in the functools test suite and a weakness in our testing: it's too hard to ensure that *both* the Python and C code is tested.

--
Steve

From vgr255 at live.ca Mon Jul 11 09:42:00 2016
From: vgr255 at live.ca (Emanuel Barry)
Date: Mon, 11 Jul 2016 13:42:00 +0000
Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules
In-Reply-To: <20160711131127.GI27919@ando.pearwood.info>
References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info>
Message-ID:

Hello,

> From: Steven D'Aprano
> Sent: Monday, July 11, 2016 9:11 AM
>
> This isn't an actual problem that occurred in real code, it's a
> theoretical issue that Emanuel discovered, and by his own admission
> feels that he was doing something dubious ("It may not be the best idea
> to subclass something that is meant to be final" -- ya think?). Raymond
> Hettinger makes some good points about the costs of feature creep needed
> to support these accidental implementation features, and is against it.

Yes, this hasn't actually happened; hypothetical bugs make for the best discussions though ;)

> But on the other hand, Serhiy also makes some good points about the
> usefulness of pickling partial objects. So as far as this *specific*
> issue goes, perhaps it is justified to make sure the Python
> implementation supports pickling.
>
> But generalising this to all possible mismatches between a C class
> implementation and a Python function implementation doesn't necessarily
> follow.
> Raymond's general point about simplicity versus feature creep
> still stands, even if in this case adding pickling is useful.

I'm not sure about feature creep in this particular case; pickling for instance is no accident, even if it's not documented. Is there a particular stance on non-accidental, undocumented features (if that makes any sense)?

> [1] If I'm wrong about this, and these features are supported, then
> Emanuel has found a hole in the functools test suite and a weakness in
> our testing: it's too hard to ensure that *both* the Python and C code
> is tested.

As far as tests go, there's a large common set of tests for either version (which is the entirety of the Python version's tests), and the C version extends those with the tests for repr(), pickling and copying, and does all the tests again for a subclass. This probably doesn't mean much as far as guaranteed API is concerned, but it does mean that it's (at least internally) supported.

-Emanuel

From ncoghlan at gmail.com Mon Jul 11 11:51:21 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 12 Jul 2016 01:51:21 +1000
Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules
In-Reply-To: <20160711131127.GI27919@ando.pearwood.info>
References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info>
Message-ID:

On 11 July 2016 at 23:11, Steven D'Aprano wrote:
> If a core developer wishes to extend the API of partial objects to
> include such things as subclassing, isinstance tests, pickling etc, then
> it is reasonable to insist that the C and the Python implementations
> both support the same API and both use a class. But at the moment I
> don't think any of those things are promised or supported[1], except by
> accident, so removing the discrepancy between them is not a bug fix, but
> adding new features.
functools.partial was originally C only [1], then pickling support was added [2], then a custom __repr__ was added [3], and only then was a Python fallback added [4]. When [4] was implemented as a closure rather than as a custom callable, the test cases for the pure Python version needed to be customised to skip the custom __repr__ and pickling support tests added by [2] and [3].

Emanuel's patch eliminates the special casing of the Python version in the test suite by replacing the closure with a custom callable that also implements the custom __repr__ and pickling support, and also adds the test case needed to ensure subclasses of the Python version work as expected (previously that test case was simply omitted, since subclassing the Python version wasn't supported).

[1] https://www.python.org/dev/peps/pep-0309/
[2] https://hg.python.org/cpython/rev/184ca6293218
[3] https://hg.python.org/cpython/rev/2baad8bd0b4f
[4] https://hg.python.org/cpython/rev/fcfaca024160

> The more I think about it, the more I feel that it is *unreasonable* to
> mandate that devs must ensure that alternate implementations mirror each
> other's *unsupported and accidental features*. Mirror the supported API?
> Absolutely! But not accidental features.

The pickling and custom __repr__ on functools.partial weren't accidental features - they were features deliberately added to the C version before the Python version was implemented that were nevertheless originally deemed not part of the standard library definition. The fact that the incomplete implementation was simpler and easier to read was then used as an argument against providing a more complete fallback implementation of the original API.

> (Aside: why can't closures be pickled?)

Functions and classes are deserialised based on their name, but just the name isn't sufficient to deserialise a closure - you also need the information about the cell references and their contents.
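[Editor's note: the cell references Nick mentions are directly visible from Python via a function's `__closure__` attribute - a small illustration:]

```python
def make_counter(start):
    count = start
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

c = make_counter(10)
assert c() == 11

# The function's name and code alone can't recreate it: the captured,
# mutable state lives in cell objects attached to the function object.
(cell,) = c.__closure__
assert cell.cell_contents == 11
assert c.__code__.co_freevars == ("count",)
```

A pickle of `bump` would have to serialise that cell (and share it correctly with any other functions closing over the same variable), which is exactly the part that isn't standardised across implementations.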
However, the internal mechanics of the way closures work in practice aren't standardised at the language level, which means there isn't an implementation independent way to represent them as a pickle (that's not to say such a scheme couldn't be created - it just hasn't been done to date).

> [1] If I'm wrong about this, and these features are supported, then
> Emanuel has found a hole in the functools test suite and a weakness in
> our testing: it's too hard to ensure that *both* the Python and C code
> is tested.

There's no hole, as the test suite was structured to special case the Python fallback and only test the affected features for the C version.

Cheers, Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From lele at metapensiero.it Mon Jul 11 11:59:46 2016
From: lele at metapensiero.it (Lele Gaifax)
Date: Mon, 11 Jul 2016 17:59:46 +0200
Subject: [Python-Dev] Supporting native backup facility of SQLite
References: <877fcsv6ya.fsf@metapensiero.it>
Message-ID: <87y458tbbh.fsf@metapensiero.it>

Paul Moore writes:

> There's also apsw (https://github.com/rogerbinns/apsw), which appears
> to support backup - http://rogerbinns.github.io/apsw/backup.html.

Thank you, will have a look: not sure it fits my need, because the application is currently based on Python's sqlite module (thru SQLAlchemy), and it seems I'd have to open another APSW connection just to make the backup...

Original questions still hold, though.

ciao, lele.

--
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele at metapensiero.it | -- Fortunato Depero, 1929.
From p.f.moore at gmail.com Mon Jul 11 12:05:13 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 11 Jul 2016 17:05:13 +0100 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: <20160711131127.GI27919@ando.pearwood.info> References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info> Message-ID: On 11 July 2016 at 14:11, Steven D'Aprano wrote: > But generalising this to all possibly mismatches between a C class > implementation and a Python function implementation doesn't necessarily > follow. Raymond's general point about simplicity versus feature creep > still stands, even if in this case adding pickling is useful. On the specific case that started this thread, I think it's fair to say that when people see "object" in the documentation, they think in terms of "class instance", so an implementation that uses a closure instead of the standard (C implementation) approach is going to trip people up. So it would seem OK to me to say that both implementations should implement something described as an "object" in the same way (class, closure, or whatever else) - *unless* the documentation is explicit (for example by using a term like "opaque object" or "internal object") that the implementation details of the object are not public. (An alternative resolution to the original issue would hence be to update the documentation to make it explicit that the partial object's implementation is undefined, and cannot be relied on by user code). However, I don't think we can assume it'll be possible to make a high-level ruling on *every* possible situation. I don't think that's reasonable - there needs to be some level of judgement applied. 
The *documented* API is definitive, both implementations must follow the documented API - but if the documentation is silent or ambiguous, I don't think we should feel under undue pressure to replicate every behaviour of the CPython implementation - if we take that view, then we're back to implementation-defined semantics. Outside of documented behaviour, I don't think mandating compatibility without qualification is helpful.

It's worth having guidelines on what implementers should be ensuring match between the C and pure-Python implementations - and even more so having reminders about common pitfalls and places where extra work is needed to get the same behaviour - but I'd be concerned if the rules became dogma rather than guidance. We need to be prepared to trust the judgement of the core devs in this matter.

Paul

From p.f.moore at gmail.com Mon Jul 11 13:06:00 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 11 Jul 2016 18:06:00 +0100
Subject: [Python-Dev] Supporting native backup facility of SQLite
In-Reply-To: <87y458tbbh.fsf@metapensiero.it>
References: <877fcsv6ya.fsf@metapensiero.it> <87y458tbbh.fsf@metapensiero.it>
Message-ID:

On 11 July 2016 at 16:59, Lele Gaifax wrote:
> Paul Moore writes:
>
>> There's also apsw (https://github.com/rogerbinns/apsw), which appears
>> to support backup - http://rogerbinns.github.io/apsw/backup.html.
>
> Thank you, will have a look: not sure it fits my need, because the application
> is currently based on Python's sqlite module (thru SQLAlchemy), and it seems
> I'd have to open another APSW connection just to make the backup...
>
> Original questions still hold, though.

Indeed - I don't see any reason why exposing the backup API through the stdlib module would be unacceptable (there's plenty of sqlite-specific functionality in there already, it's not as if there's a need to limit the module to just the DB-API interface). If you were interested in doing that, I'd suggest opening a tracker issue with a patch.
Paul From brett at python.org Mon Jul 11 13:12:35 2016 From: brett at python.org (Brett Cannon) Date: Mon, 11 Jul 2016 17:12:35 +0000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info> Message-ID: On Mon, 11 Jul 2016 at 09:06 Paul Moore wrote: > On 11 July 2016 at 14:11, Steven D'Aprano wrote: > > But generalising this to all possibly mismatches between a C class > > implementation and a Python function implementation doesn't necessarily > > follow. Raymond's general point about simplicity versus feature creep > > still stands, even if in this case adding pickling is useful. > > On the specific case that started this thread, I think it's fair to > say that when people see "object" in the documentation, they think in > terms of "class instance", so an implementation that uses a closure > instead of the standard (C implementation) approach is going to trip > people up. So it would seem OK to me to say that both implementations > should implement something described as an "object" in the same way > (class, closure, or whatever else) - *unless* the documentation is > explicit (for example by using a term like "opaque object" or > "internal object") that the implementation details of the object are > not public. (An alternative resolution to the original issue would > hence be to update the documentation to make it explicit that the > partial object's implementation is undefined, and cannot be relied on > by user code). > > However, I don't think we can assume it'll be possible to make a > high-level ruling on *every* possible situation. I don't think that's > reasonable - there needs to be some level of judgement applied. 
The > *documented* API is definitive, both implementations must follow the > documented API - but if the documentation is silent or ambiguous, I > don't think we should feel under undue pressure to replicate every > behaviour of the CPython implementation - if we take that view, then > we're back to implementation-defined semantics. Outside of documented > behaviour, I don't think mandating compatibility without qualification > is helpful. > > It's worth having guidelines on what implementers should be ensuring > match between C and pure-Python implementation - and even more so > having reminders about common pitfalls and places where extra work is > needed to get the same behaviour - but I'd be concerned if the rules > became dogma rather than guidance. We need to be prepared to trust the > judgement of the core devs in this matter. > The other thing to consider is that CPython isn't the only interpreter that might implement accelerated versions of modules in the stdlib. Those implementations could have their own needs, etc. that won't necessarily align with CPython's needs at the C level. And if we actually start managing the stdlib as its own project then this will just become more apparently as a separate stdlib repo could empower other implementations to more easily send contributions upstream. IOW we need to be careful to not make this too CPython-specific. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lele at metapensiero.it Mon Jul 11 13:16:00 2016 From: lele at metapensiero.it (Lele Gaifax) Date: Mon, 11 Jul 2016 19:16:00 +0200 Subject: [Python-Dev] Supporting native backup facility of SQLite References: <877fcsv6ya.fsf@metapensiero.it> <87y458tbbh.fsf@metapensiero.it> Message-ID: <87twfwt7sf.fsf@metapensiero.it> Paul Moore writes: >> Original questions still hold, though. 
> > Indeed - I don't see any reason why exposing the backup API through > the stdlib module would be unacceptable (there's plenty of > sqlite-specific functionality in there already, it's not as if there's > a need to limit the module to just the DB-API interface). If you were > interested in doing that, I'd suggest opening a tracker issue with a > patch. Excellent, will do that, thank you for the encouragement! ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From c4obi at yahoo.com Mon Jul 11 16:56:53 2016 From: c4obi at yahoo.com (Obiesie ike-nwosu) Date: Mon, 11 Jul 2016 21:56:53 +0100 Subject: [Python-Dev] Python CFG Basic blocks Message-ID: <1CD91D28-7AE2-4982-972D-C59094D49536@yahoo.com> Hi, I am looking into how the python compiler generates basic blocks during the CFG generation process and my expectations from CFG theory seem to be at odds with how the python compiler actually generates its CFG. Take the following code snippet for example:

    def median(pool):
        copy = sorted(pool)
        size = len(copy)
        if size % 2 == 1:
            return copy[(size - 1) / 2]
        else:
            return (copy[size/2 - 1] + copy[size/2]) / 2

From my understanding of basic blocks in compilers, the above code snippet should have at least 3 basic blocks as follows:
1. Block 1 - all instructions up to and including those for the if test.
2. Block 2 - all instructions for the if body i.e. the first return statement.
3. Block 3 - instructions for the else block i.e. the second return statement.
My understanding of the section on Control flow Graphs in the "Design of the CPython Compiler" also alludes to this - >> As an example, consider an "if" statement with an "else" block. The guard on the "if" is a basic block which is pointed to by the basic block containing the code leading to the "if" statement. The "if"
statement block contains jumps (which are exit points) to the true body of the "if" and the "else" body (which may be NULL), each of which are their own basic blocks. Both of those blocks in turn point to the basic block representing the code following the entire "if" statement. The CPython compiler however seems to generate 2 basic blocks for the above snippet -
1. Block 1 - all instructions up to and including the if statement and the body of the if statement (the first return statement in this case)
2. Block 2 - instructions for the else block (the second return statement)
Is there any reason for this or have I somehow missed something in the CFG generation process? Regards, Obi -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Jul 11 18:56:44 2016 From: brett at python.org (Brett Cannon) Date: Mon, 11 Jul 2016 22:56:44 +0000 Subject: [Python-Dev] Python CFG Basic blocks In-Reply-To: <1CD91D28-7AE2-4982-972D-C59094D49536@yahoo.com> References: <1CD91D28-7AE2-4982-972D-C59094D49536@yahoo.com> Message-ID: On Mon, 11 Jul 2016 at 14:02 Obiesie ike-nwosu via Python-Dev < python-dev at python.org> wrote: > Hi, > > I am looking into how the python compiler generates basic blocks during > the CFG generation process and my expectations from CFG theory seem to be > at odds with how the python compiler actually generates its CFG. Take the > following code snippet for example:
>
> def median(pool):
>     copy = sorted(pool)
>     size = len(copy)
>     if size % 2 == 1:
>         return copy[(size - 1) / 2]
>     else:
>         return (copy[size/2 - 1] + copy[size/2]) / 2
>
> From my understanding of basic blocks in compilers, the above code snippet > should have at least 3 basic blocks as follows:
> 1. Block 1 - all instructions up to and including those for the if test.
> 2. Block 2 - all instructions for the if body i.e. the first return statement.
> 3. Block 3 - instructions for the else block i.e. the second return statement.
> > My understanding of the section on Control flow Graphs in the "Design > of the CPython Compiler" also alludes to this - > > > As an example, consider an "if" statement with an "else" block. The guard > on the "if" is a basic block which is pointed to by the basic block > containing the code leading to the "if" statement. The "if" statement block > contains jumps (which are exit points) to the true body of the "if" and the > "else" body (which may be NULL), each of which are their own basic blocks. > Both of those blocks in turn point to the basic block representing the code > following the entire "if" statement. > > > The CPython compiler however seems to generate 2 basic blocks for the > above snippet - > 1. Block 1 - all instructions up to and including the if statement and the > body of the if statement (the first return statement in this case) > 2. Block 2 - instructions for the else block (the second return statement) > > Is there any reason for this or have I somehow missed something in the CFG > generation process? > I have not looked at the code or the CFGs that are generated from your example code, but my guess is that it's two blocks instead of three because two blocks are all that's necessary to generate jump targets (since there's only one jump in that function). Since Python doesn't have block-level scope there's no need to generate a separate block in the `if` statement. And since the `if` statement will have a jump to the `else` block and otherwise fall through then there's only a need to have the block that starts the function and then the second block that the `if` jump goes to and that's it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadmium+py at gmail.com Tue Jul 12 00:14:35 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 12 Jul 2016 04:14:35 +0000 Subject: [Python-Dev] [Python-checkins] devguide: Star self as idlelib expert. Mark other 2 as inactive.
In-Reply-To: <20160712034713.20043.55391.F4E01A58@psf.io> References: <20160712034713.20043.55391.F4E01A58@psf.io> Message-ID: On 12 July 2016 at 03:47, terry.reedy wrote: > https://hg.python.org/devguide/rev/cc1c0dd798e7 Terry it looks like you accidentally added Christian back (undoing ) > -xml.parsers.expat > -xml.sax > +xml.parsers.expat christian.heimes > +xml.sax christian.heimes > xml.sax.handler > xml.sax.saxutils > xml.sax.xmlreader > @@ -319,7 +319,7 @@ > bytecode benjamin.peterson, pitrou, georg.brandl, yselivanov > context managers ncoghlan > coverity scan christian.heimes, brett.cannon, twouters > -cryptography gregory.p.smith, dstufft > +cryptography christian.heimes, gregory.p.smith, dstufft > data formats mark.dickinson, georg.brandl > database lemburg > devguide ncoghlan, eric.araujo, ezio.melotti, willingc > @@ -341,7 +341,7 @@ > georg.brandl > str.format eric.smith > testing michael.foord, pitrou, ezio.melotti > -test coverage giampaolo.rodola > +test coverage giampaolo.rodola, christian.heimes > threads pitrou > time and dates lemburg, belopolsky > unicode lemburg, ezio.melotti, haypo, benjamin.peterson, pitrou From vadmium+py at gmail.com Tue Jul 12 00:17:18 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 12 Jul 2016 04:17:18 +0000 Subject: [Python-Dev] [Python-checkins] devguide: Star self as idlelib expert. Mark other 2 as inactive. 
In-Reply-To: References: <20160712034713.20043.55391.F4E01A58@psf.io> Message-ID: On 12 July 2016 at 04:14, Martin Panter wrote: > On 12 July 2016 at 03:47, terry.reedy wrote: >> https://hg.python.org/devguide/rev/cc1c0dd798e7 > > Terry it looks like you accidentally added Christian back (undoing > ) Sorry, you can ignore that, I see you already reverted it :) From ncoghlan at gmail.com Tue Jul 12 01:21:54 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 12 Jul 2016 15:21:54 +1000 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info> Message-ID: On 12 July 2016 at 02:05, Paul Moore wrote: > However, I don't think we can assume it'll be possible to make a > high-level ruling on *every* possible situation. I don't think that's > reasonable - there needs to be some level of judgement applied. The > *documented* API is definitive, both implementations must follow the > documented API - but if the documentation is silent or ambiguous, I > don't think we should feel under undue pressure to replicate every > behaviour of the CPython implementation - if we take that view, then > we're back to implementation-defined semantics. Outside of documented > behaviour, I don't think mandating compatibility without qualification > is helpful. > > It's worth having guidelines on what implementers should be ensuring > match between C and pure-Python implementation - and even more so > having reminders about common pitfalls and places where extra work is > needed to get the same behaviour - but I'd be concerned if the rules > became dogma rather than guidance. We need to be prepared to trust the > judgement of the core devs in this matter. 
Based on this discussion, I've come to the conclusion that there are only two cases where I'd like PEP 399 to document pre-emptive answers to the "What counts as sufficiently compatible?" question: 1. If an existing C API implementation is a public custom type, a subsequently added Python fallback should also be a custom type, not a closure or other non-type object that differs from the C implementation in regards to subclassing and pickling support. 2. If an existing Python API implementation returns a closure, a subsequently added accelerated version should either also return a closure (e.g. if the accelerator is implemented in a language with closure support, like Cython) or else be implemented as a factory function returning an instance of a non-public type. The first case is the one that applies to functools.partial: preserving consistency in subclassing and pickling behaviour was considered optional, even though it required disabling some of the existing tests for the C version when testing the pure Python version. The second case is, as far as I know, currently hypothetical, but should make it clearer that it's important to watch out for inadvertently *expanding* the inferred public API when implementing a C accelerator - regardless of whether the C version or the Python version is written first, we want to avoid the situation where the C version implements a superset of the Python one, and folks mistakenly consider that to be the supported cross-implementation definition of the API. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From nad at python.org Tue Jul 12 01:27:47 2016 From: nad at python.org (Ned Deily) Date: Tue, 12 Jul 2016 01:27:47 -0400 Subject: [Python-Dev] [RELEASE] Python 3.6.0a3 is now available Message-ID: <0E07A2EA-2F6A-4FD8-8C27-C0C2FBA8241B@python.org> On behalf of the Python development community and the Python 3.6 release team, I'm happy to announce the availability of Python 3.6.0a3.
3.6.0a3 is the third of four planned alpha releases of Python 3.6, the next major release of Python. During the alpha phase, Python 3.6 remains under heavy development: additional features will be added and existing features may be modified or deleted. Please keep in mind that this is a preview release and its use is not recommended for production environments. You can find Python 3.6.0a3 here: https://www.python.org/downloads/release/python-360a3/ The next release of Python 3.6 will be 3.6.0a4, currently scheduled for 2016-08-15. --Ned -- Ned Deily nad at python.org -- [] From p.f.moore at gmail.com Tue Jul 12 02:55:48 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 12 Jul 2016 07:55:48 +0100 Subject: [Python-Dev] Proposal: explicitly disallow function/class mismatches in accelerator modules In-Reply-To: References: <20160709190957.GD27919@ando.pearwood.info> <20160711032613.GH27919@ando.pearwood.info> <20160711131127.GI27919@ando.pearwood.info> Message-ID: On 12 July 2016 at 06:21, Nick Coghlan wrote: > Based on this discussion, I've come to the conclusion that there are > only two cases where I'd like PEP 399 to document pre-emptive answers > to "What counts as sufficiently compatible?" question This sounds reasonable to me (particularly as it says that implementations "should" do this rather than "must"). I think it would be useful to see some explanation of *why* this is important included as well, capturing the situation that prompted this thread, plus a reminder that the *documented* API remains paramount but that we prefer to avoid unexpected surprises for people working from introspection where possible (hence these new recommendations). Paul From victor.stinner at gmail.com Tue Jul 12 05:26:19 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 12 Jul 2016 11:26:19 +0200 Subject: [Python-Dev] Status of Python 3.6 PEPs? Message-ID: Hi, I see many PEPs accepted for Python 3.6, or still in draft status, but only a few final PEPs.
What is happening? Reminder: the deadline for new features in Python 3.6 is 2016-09-12, only in 2 months and these 2 months are summer in the northern hemisphere which means holiday for many of them... Python 3.6 schedule and What's New in Python 3.6 list some PEPs: https://www.python.org/dev/peps/pep-0494/ https://docs.python.org/dev/whatsnew/3.6.html "PEP 499 -- python -m foo should bind sys.modules['foo'] in addition to sys.modules['__main__']" https://www.python.org/dev/peps/pep-0499/ => draft "PEP 498 -- Literal String Interpolation" https://www.python.org/dev/peps/pep-0498/ => accepted -- it's merged in Python 3.6, the status should be updated to Final no? "PEP 495 -- Local Time Disambiguation" https://www.python.org/dev/peps/pep-0495/ => accepted Alexander Belopolsky asked for a review of the implementation: https://mail.python.org/pipermail/python-dev/2016-June/145450.html "PEP 447 -- Add __getdescriptor__ method to metaclass" https://www.python.org/dev/peps/pep-0447/ => draft "PEP 487 -- Simpler customisation of class creation" https://www.python.org/dev/peps/pep-0487/ => draft "PEP 520 -- Preserving Class Attribute Definition Order" https://www.python.org/dev/peps/pep-0520/ => accepted -- what is the status of its implementation? "PEP 519 -- Adding a file system path protocol" https://www.python.org/dev/peps/pep-0519/ => accepted "PEP 467 -- Minor API improvements for binary sequences" https://www.python.org/dev/peps/pep-0467 => draft -- I saw recently some discussions around this PEP (on python-ideas?) It looks like os.fspath() exists, so the PEP is implemented. Its status should be Final, but the PEP should also be mentioned in What's New in Python 3.6 please. I also see some discussions for even more compact dict implementation. 
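As a side note for readers: of the PEPs Victor lists, PEP 498 (literal string interpolation) is already usable on the 3.6 alpha releases. A minimal illustration, with invented example values:

```python
# PEP 498 f-strings, merged for Python 3.6: expressions and format
# specifiers are embedded directly in the string literal.
pep = 498
status = "accepted"
summary = f"PEP {pep} is {status}; 2 * 21 = {2 * 21:>3}"
print(summary)  # the {2 * 21:>3} field right-aligns 42 in a width of 3
```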
I wrote 3 PEPs, but I didn't have time recently to work of them (to make progress on the implementation of FAT Python): "PEP 509 -- Add a private version to dict" https://www.python.org/dev/peps/pep-0509/ => draft Pyjion, Cython, and Yury Selivanov are interested to use this feature, but last time I asked Guido, he didn't seem convinced by the advantages of the PEP. "PEP 510 -- Specialize functions with guards" https://www.python.org/dev/peps/pep-0510/ "PEP 511 -- API for code transformers" https://www.python.org/dev/peps/pep-0511/ These two PEPs are directly related to my FAT Python work. I was asked to prove that FAT Python makes CPython faster. Sadly, I failed to prove that. Moreover, it took me almost 2 months (and I'm not done yet!) to get stable benchmarks results on Python. I want to make sure that my changes don't make Python slower (don't introduce Python regressions), but the CPython benchmark is unstable, some benchmarks are very unstable. To get more information, follow the speed at python.org mailing list ;-) I probably forgot some PEPs, there are so many PEPs in the draft state :-( Victor From rosuav at gmail.com Tue Jul 12 05:30:04 2016 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 12 Jul 2016 19:30:04 +1000 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On Tue, Jul 12, 2016 at 7:26 PM, Victor Stinner wrote: > "PEP 499 -- python -m foo should bind sys.modules['foo'] in addition > to sys.modules['__main__']" > https://www.python.org/dev/peps/pep-0499/ > => draft > I have a vague recollection that this ran into some trickinesses with certain forms of import (zip??). If that's not the case, is this one simply awaiting pronouncement? ChrisA From victor.stinner at gmail.com Tue Jul 12 05:36:08 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 12 Jul 2016 11:36:08 +0200 Subject: [Python-Dev] Status of Python 3.6 PEPs? 
In-Reply-To: References: Message-ID: I opened the PEP 499: it links to https://bugs.python.org/issue19702 "Update pickle to take advantage of PEP 451" which is still open (since 2013!). (It also has two dependencies which are now closed.) "PEP 451 -- A ModuleSpec Type for the Import System" https://www.python.org/dev/peps/pep-0451/ (this one was already implemented in Python 3.4) Victor 2016-07-12 11:30 GMT+02:00 Chris Angelico : > On Tue, Jul 12, 2016 at 7:26 PM, Victor Stinner > wrote: >> "PEP 499 -- python -m foo should bind sys.modules['foo'] in addition >> to sys.modules['__main__']" >> https://www.python.org/dev/peps/pep-0499/ >> => draft >> > > I have a vague recollection that this ran into some trickinesses with > certain forms of import (zip??). If that's not the case, is this one > simply awaiting pronouncement? > > ChrisA > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From songofacandy at gmail.com Tue Jul 12 06:05:05 2016 From: songofacandy at gmail.com (INADA Naoki) Date: Tue, 12 Jul 2016 19:05:05 +0900 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: > > "PEP 520 -- Preserving Class Attribute Definition Order" > https://www.python.org/dev/peps/pep-0520/ > => accepted -- what is the status of its implementation? > ... > > > I also see some discussions for even more compact dict implementation. > Here is implementation of the compact dict preserving insertion order. http://bugs.python.org/issue27350 I hope it is reviewed before merging PEP 520 implementation. From dholth at gmail.com Tue Jul 12 09:21:29 2016 From: dholth at gmail.com (Daniel Holth) Date: Tue, 12 Jul 2016 13:21:29 +0000 Subject: [Python-Dev] Should PY_SSIZE_T_CLEAN break Py_LIMITED_API? 
Message-ID: I was using Py_LIMITED_API under 3.5 and PY_SSIZE_T_CLEAN was set, this causes some functions not in the limited api to be used and the resulting extension segfaults in Linux. Is that right? Thanks, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Jul 12 11:34:16 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Jul 2016 01:34:16 +1000 Subject: [Python-Dev] Should PY_SSIZE_T_CLEAN break Py_LIMITED_API? In-Reply-To: References: Message-ID: On 12 July 2016 at 23:21, Daniel Holth wrote: > I was using Py_LIMITED_API under 3.5 and PY_SSIZE_T_CLEAN was set, this > causes some functions not in the limited api to be used and the resulting > extension segfaults in Linux. Is that right? No, it suggests there's a bug in the way some of the #ifdef's are interacting. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Tue Jul 12 12:01:53 2016 From: brett at python.org (Brett Cannon) Date: Tue, 12 Jul 2016 16:01:53 +0000 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On Tue, 12 Jul 2016 at 02:27 Victor Stinner wrote: > Hi, > > I see many PEPs accepted for Python 3.6, or still in draft status, but > only a few final PEPs. What is happening? > > [SNIP] > > "PEP 519 -- Adding a file system path protocol" > https://www.python.org/dev/peps/pep-0519/ > => accepted > > [SNIP] > > It looks like os.fspath() exists, so the PEP is implemented. Its > status should be Final, but the PEP should also be mentioned in What's > New in Python 3.6 please. > I'm going to mark the PEP as final once we finish implementing it (still need to update os.path: http://bugs.python.org/issue27182). Considering we have updated the PEP once already based on implementation lessons I don't want to rush flipping its state.
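For readers who want to see what the PEP 519 protocol looks like in practice, here is a minimal illustration; `os.fspath` is real (it ships in the 3.6 alphas), while the `MyPath` class is invented for the example:

```python
import os
import pathlib

# os.fspath() returns str/bytes arguments unchanged, and otherwise
# calls the argument's __fspath__() method (the PEP 519 protocol).
class MyPath:  # hypothetical example class, not from the PEP
    def __fspath__(self):
        return "/tmp/example"

print(os.fspath("plain"))                        # strings pass through
print(os.fspath(MyPath()))                       # __fspath__ is consulted
print(os.fspath(pathlib.PurePosixPath("/tmp")))  # pathlib implements it too
```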
As for not being in What's New, I have a tracking issue and that doesn't need to happen by b1 so I'm not spending time on it yet ( http://bugs.python.org/issue27283). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Jul 12 12:25:48 2016 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 12 Jul 2016 10:25:48 -0600 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On Tue, Jul 12, 2016 at 3:26 AM, Victor Stinner wrote: > "PEP 520 -- Preserving Class Attribute Definition Order" > https://www.python.org/dev/peps/pep-0520/ > => accepted -- what is the status of its implementation? The implementation is currently under review (http://bugs.python.org/issue24254). -eric From ncoghlan at gmail.com Tue Jul 12 21:54:21 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Jul 2016 11:54:21 +1000 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On 12 July 2016 at 19:26, Victor Stinner wrote: > "PEP 499 -- python -m foo should bind sys.modules['foo'] in addition > to sys.modules['__main__']" > https://www.python.org/dev/peps/pep-0499/ > => draft I'm a little wary of this one, as we just received a bug report regarding some subtleties of the 3.5.2 change that separated the "import the parent module" step from the "execute the requested main module" step when using the -m switch with a submodule: http://bugs.python.org/issue27487 The problem there relates to some odd behaviour that can arise when importing the parent module implicitly imports the submodule that has been requested to be run as __main__. While PEP 499 would eliminate the dual import problem when the import happens *after* __main__ starts execution, it wouldn't prevent it when the import happens *first* (as in the case of it happening as a side-effect of importing the parent module), making the consequences even more surprising and harder to debug. 
Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 12 21:57:59 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Jul 2016 11:57:59 +1000 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On 12 July 2016 at 20:05, INADA Naoki wrote: >> >> "PEP 520 -- Preserving Class Attribute Definition Order" >> https://www.python.org/dev/peps/pep-0520/ >> => accepted -- what is the status of its implementation? >> I also see some discussions for even more compact dict implementation. > > Here is implementation of the compact dict preserving insertion order. > http://bugs.python.org/issue27350 > > I hope it is reviewed before merging PEP 520 implementation. Several of my review comments on the draft 520 implementation were aimed at ensuring the assumption of the use of ODict specifically were minimised, so the test, docs and implementation tweaks needed to adjust back to an insertion-ordered-by-default plain dict will be pretty minimal, even if the current 520 implementation lands first (which seems likely). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lkb.teichmann at gmail.com Wed Jul 13 10:00:35 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Wed, 13 Jul 2016 16:00:35 +0200 Subject: [Python-Dev] __qualname__ exposed as a local variable: standard? In-Reply-To: References: Message-ID: Hi list, > I noticed __qualname__ is exposed by locals() while defining a class. This > is handy but I'm not sure about its status: is it standard or just an > artifact of the current implementation? (btw, the pycodestyle linter -former > pep8- rejects its usage). I was unable to find any reference to this > behavior in PEP 3155 nor in the language reference. I would like to underline the importance of this question, and give some background, as it happens to resonate with my work on PEP 487. 
The __qualname__ of a class is originally determined by the compiler. One might now think that it would be easiest to simply set the __qualname__ of a class once the class is created, but this is not as easy as it sounds. The class is created by its metaclass, so possibly by user code, which might create whatever it wants, including something which is not even a class. So the decision had been taken to sneak the __qualname__ through user code, and pass it to the metaclass's __new__ method as part of the namespace, where it is deleted from the namespace again. This has weird side effects, as the namespace may be user code as well, leading to the funniest possible abuses, too obscene to publish on a public mailing list. A very different approach has been taken for super(). It has similar problems: the zero argument version of super looks in the surrounding scope for __class__ for the containing class. This does not exist yet at the time of creation of the methods, so a PyCell is put into the function's scope, which will later be filled. It is actually filled with whatever the metaclass's __new__ returns, which may, as already said, be anything (some sanity checks are done to avoid crashing the interpreter). I personally prefer the first way of doing things, like for __qualname__, even at the cost of adding things to the class's namespace. It could be moved after the end of the class definition, such that it doesn't show up while the class body is executed. We might also rename it to __@qualname__; this way it cannot be accessed by users in the class body, unless they look into locals(). This has the large advantage that super() would work immediately after the class has been defined, i.e. already in the __new__ of the metaclass after it has called type.__new__. All of this changes the behavior of the interpreter, but we are talking about undocumented behavior.
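The behaviour under discussion is easy to observe; a short demonstration (CPython-specific and, as noted in the thread, undocumented):

```python
# CPython seeds the class-body namespace with __qualname__, so it is
# visible both via locals() and as an ordinary name while the body runs.
class Outer:
    class Inner:
        exposed = '__qualname__' in locals()
        captured = __qualname__

print(Outer.Inner.exposed)   # True
print(Outer.Inner.captured)  # ends with "Outer.Inner"
```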
The changes necessary to make super() work earlier are stored in http://bugs.python.org/issue23722 Greetings Martin From lkb.teichmann at gmail.com Wed Jul 13 10:15:23 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Wed, 13 Jul 2016 16:15:23 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation Message-ID: Hi list, another round for PEP 487, is there any chance it still makes it into Python 3.6? The PEP should be effectively done, I updated the examples in it, given that I implemented the PEP I could actually test the examples, so now they work. The implementation is at http://bugs.python.org/issue27366, including documentation and tests. Unfortunately nobody has reviewed the patch yet. The new version of the PEP is attached. Greetings Martin

PEP: 487
Title: Simpler customisation of class creation
Version: $Revision$
Last-Modified: $Date$
Author: Martin Teichmann ,
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 27-Feb-2015
Python-Version: 3.6
Post-History: 27-Feb-2015, 5-Feb-2016, 24-Jun-2016, 2-Jul-2016, 13-Jul-2016
Replaces: 422

Abstract
========

Currently, customising class creation requires the use of a custom metaclass. This custom metaclass then persists for the entire lifecycle of the class, creating the potential for spurious metaclass conflicts. This PEP proposes to instead support a wide range of customisation scenarios through a new ``__init_subclass__`` hook in the class body, and a hook to initialize attributes. The new mechanism should be easier to understand and use than implementing a custom metaclass, and thus should provide a gentler introduction to the full power of Python's metaclass machinery.

Background
==========

Metaclasses are a powerful tool to customize class creation. They have, however, the problem that there is no automatic way to combine metaclasses. If one wants to use two metaclasses for a class, a new metaclass combining those two needs to be created, typically manually.
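The metaclass-combination problem described here can be reproduced in a few lines; all class names below are invented for the illustration:

```python
# Two independent metaclasses, as two unrelated libraries might define them:
class MetaA(type): pass
class MetaB(type): pass

class FromLibA(metaclass=MetaA): pass
class FromLibB(metaclass=MetaB): pass

conflict_raised = False
try:
    # Python cannot pick a most-derived metaclass automatically here.
    class Combined(FromLibA, FromLibB):
        pass
except TypeError:
    conflict_raised = True  # "metaclass conflict: ..."

# The manual step the PEP wants to make unnecessary: write the
# combined metaclass by hand.
class MetaAB(MetaA, MetaB): pass

class Combined(FromLibA, FromLibB, metaclass=MetaAB):
    pass

print(conflict_raised, type(Combined) is MetaAB)
```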
This need often occurs as a surprise to a user: inheriting from two base classes coming from two different libraries suddenly raises the necessity to manually create a combined metaclass, where typically one is not interested in those details about the libraries at all. This becomes even worse if one library starts to make use of a metaclass which it has not done before. While the library itself continues to work perfectly, suddenly any code combining those classes with classes from another library fails.

Proposal
========

While there are many possible ways to use a metaclass, the vast majority of use cases falls into just three categories: some initialization code running after class creation, the initialization of descriptors, and keeping the order in which class attributes were defined. The first two categories can easily be achieved by having simple hooks into the class creation:

1. An ``__init_subclass__`` hook that initializes all subclasses of a given class.
2. Upon class creation, a ``__set_owner__`` hook that is called on all the attributes (descriptors) defined in the class.

The third category is the topic of another PEP, PEP 520. As an example, the first use case looks as follows::

    >>> class QuestBase:
    ...     # this is implicitly a @classmethod
    ...     def __init_subclass__(cls, swallow, **kwargs):
    ...         cls.swallow = swallow
    ...         super().__init_subclass__(**kwargs)

    >>> class Quest(QuestBase, swallow="african"):
    ...     pass

    >>> Quest.swallow
    'african'

The base class ``object`` contains an empty ``__init_subclass__`` method which serves as an endpoint for cooperative multiple inheritance. Note that this method has no keyword arguments, meaning that all methods which are more specialized have to process all keyword arguments.
This general proposal is not a new idea (it was first suggested for inclusion in the language definition `more than 10 years ago`_, and a similar mechanism has long been supported by `Zope's ExtensionClass`_), but the situation has changed sufficiently in recent years that the idea is worth reconsidering for inclusion. The second part of the proposal adds an ``__set_owner__`` initializer for class attributes, especially if they are descriptors. Descriptors are defined in the body of a class, but they do not know anything about that class; they do not even know the name they are accessed with. They do get to know their owner once ``__get__`` is called, but still they do not know their name. This is unfortunate: for example, they cannot put their associated value into their object's ``__dict__`` under their name, since they do not know that name. This problem has been solved many times, and is one of the most important reasons to have a metaclass in a library. While it would be easy to implement such a mechanism using the first part of the proposal, it makes sense to have one solution for this problem for everyone. To give an example of its usage, imagine a descriptor representing weakly referenced values::

    import weakref

    class WeakAttribute:
        def __get__(self, instance, owner):
            return instance.__dict__[self.name]()

        def __set__(self, instance, value):
            instance.__dict__[self.name] = weakref.ref(value)

        # this is the new initializer:
        def __set_owner__(self, owner, name):
            self.name = name

While this example looks very trivial, it should be noted that until now such an attribute cannot be defined without the use of a metaclass. And given that such a metaclass can make life very hard, this kind of attribute does not exist yet. Initializing descriptors could simply be done in the ``__init_subclass__`` hook. But this would mean that descriptors can only be used in classes that have the proper hook; the generic version in the example would not work in general.
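Since no released Python implements ``__set_owner__``, the intended effect can be sketched today with a small helper metaclass; this is an illustration of the idea only, not the PEP's reference implementation, and the `OwnerAwareMeta`, `Token`, and `Holder` names are invented:

```python
import weakref

class OwnerAwareMeta(type):
    # Sketch: emulate the proposed __set_owner__ hook by calling it on
    # every class attribute right after the class object is created.
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        for attr_name, attr in ns.items():
            hook = getattr(attr, '__set_owner__', None)
            if hook is not None:
                hook(cls, attr_name)
        return cls

class WeakAttribute:
    def __get__(self, instance, owner):
        return instance.__dict__[self.name]()
    def __set__(self, instance, value):
        instance.__dict__[self.name] = weakref.ref(value)
    def __set_owner__(self, owner, name):
        self.name = name  # the descriptor finally learns its own name

class Token:
    pass

class Holder(metaclass=OwnerAwareMeta):
    target = WeakAttribute()

token = Token()
holder = Holder()
holder.target = token           # stored as a weak reference
print(holder.target is token)   # dereferenced on access
```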
One could also call ``__set_owner__`` from within the base
implementation of ``object.__init_subclass__``. But given that it is a
common mistake to forget to call ``super()``, it would happen too often
that suddenly descriptors are not initialized.


Key Benefits
============


Easier inheritance of definition time behaviour
-----------------------------------------------

Understanding Python's metaclasses requires a deep understanding of
the type system and the class construction process. This is legitimately
seen as challenging, due to the need to keep multiple moving parts (the code,
the metaclass hint, the actual metaclass, the class object, instances of the
class object) clearly distinct in your mind. Even when you know the rules,
it's still easy to make a mistake if you're not being extremely careful.

Understanding the proposed implicit class initialization hook only requires
ordinary method inheritance, which isn't quite as daunting a task. The new
hook provides a more gradual path towards understanding all of the phases
involved in the class definition process.


Reduced chance of metaclass conflicts
-------------------------------------

One of the big issues that makes library authors reluctant to use metaclasses
(even when they would be appropriate) is the risk of metaclass conflicts.
These occur whenever two unrelated metaclasses are used by the desired
parents of a class definition. This risk also makes it very difficult to
*add* a metaclass to a class that has previously been published without one.

By contrast, adding an ``__init_subclass__`` method to an existing type poses
a similar level of risk to adding an ``__init__`` method: technically, there
is a risk of breaking poorly implemented subclasses, but when that occurs,
it is recognised as a bug in the subclass rather than the library author
breaching backwards compatibility guarantees.
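The conflict described above is easy to reproduce on any current Python; a minimal sketch with made-up metaclass names, ending with the manual workaround this PEP wants to make unnecessary:

```python
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

# Deriving from both fails: Python cannot pick a metaclass that is a
# subclass of both MetaA and MetaB.
conflict = False
try:
    class Broken(A, B):
        pass
except TypeError as exc:
    conflict = "metaclass conflict" in str(exc)

# The manual fix, which requires knowing about both libraries' internals:
class CombinedMeta(MetaA, MetaB):
    pass

class C(A, B, metaclass=CombinedMeta):
    pass
```

With ``__init_subclass__`` instead of two metaclasses, the ``Broken`` definition above would simply work.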
New Ways of Using Classes
=========================

Subclass registration
---------------------

Especially when writing a plugin system, one likes to register new
subclasses of a plugin baseclass. This can be done as follows::

   class PluginBase(object):
       subclasses = []

       def __init_subclass__(cls, **kwargs):
           super().__init_subclass__(**kwargs)
           cls.subclasses.append(cls)

In this example, ``PluginBase.subclasses`` will contain a plain list of all
subclasses in the entire inheritance tree. One should note that this also
works nicely as a mixin class.

Trait descriptors
-----------------

There are many designs of Python descriptors in the wild which, for
example, check boundaries of values. Often those "traits" need some support
of a metaclass to work. This is how this would look with this PEP::

    class Trait:
        def __init__(self, minimum, maximum):
            self.minimum = minimum
            self.maximum = maximum

        def __get__(self, instance, owner):
            return instance.__dict__[self.key]

        def __set__(self, instance, value):
            if self.minimum < value < self.maximum:
                instance.__dict__[self.key] = value
            else:
                raise ValueError("value not in range")

        def __set_owner__(self, owner, name):
            self.key = name

Implementation Details
======================

For those who prefer reading Python over English, the following is a Python
equivalent of the C API changes proposed in this PEP, where the new ``object``
and ``type`` defined here inherit from the usual ones::

    import types

    class type(type):
        def __new__(cls, *args, **kwargs):
            if len(args) == 1:
                return super().__new__(cls, args[0])
            name, bases, ns = args
            init = ns.get('__init_subclass__')
            if isinstance(init, types.FunctionType):
                ns['__init_subclass__'] = classmethod(init)
            self = super().__new__(cls, name, bases, ns)
            for k, v in self.__dict__.items():
                func = getattr(v, '__set_owner__', None)
                if func is not None:
                    func(self, k)
            super(self, self).__init_subclass__(**kwargs)
            return self

        def __init__(self, name, bases, ns, **kwargs):
            super().__init__(name, bases, ns)
    class object:
        @classmethod
        def __init_subclass__(cls):
            pass

    class object(object, metaclass=type):
        pass

In this code, first the ``__set_owner__`` hooks are called on the descriptors,
and then ``__init_subclass__``. This means that subclass initializers already
see the fully initialized descriptors. This way, ``__init_subclass__`` users
can fix all descriptors again if this is needed.

Another option would have been to call ``__set_owner__`` in the base
implementation of ``object.__init_subclass__``. This way it would even be
possible to prevent ``__set_owner__`` from being called. Most of the time,
however, such a prevention would be accidental, as it often happens that a call
to ``super()`` is forgotten.

Another small change should be noted here: in the current implementation of
CPython, ``type.__init__`` explicitly forbids the use of keyword arguments,
while ``type.__new__`` allows for its attributes to be shipped as keyword
arguments. This is weirdly incoherent, and thus the above code forbids that.
While it would be possible to retain the current behavior, it would be better
if this was fixed, as it is probably not used at all: the only use case would
be that a metaclass calls its ``super().__new__`` with *name*, *bases* and
*dict* (yes, *dict*, not *namespace* or *ns* as mostly used with modern
metaclasses) as keyword arguments. This should not be done.

As a second change, the new ``type.__init__`` just ignores keyword
arguments. Currently, it insists that no keyword arguments are given. This
leads to a (wanted) error if one gives keyword arguments to a class declaration
if the metaclass does not process them. Metaclass authors that do want to
accept keyword arguments must filter them out by overriding ``__init__``.

In the new code, it is not ``__init__`` that complains about keyword arguments,
but ``__init_subclass__``, whose default implementation takes no arguments.
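This division of labour can be observed on a Python that ships this machinery (3.6 or later); the class and keyword names below are illustrative. A keyword nobody consumes travels up the MRO until the argument-free default implementation rejects it:

```python
class Base:
    def __init_subclass__(cls, colour=None, **kwargs):
        cls.colour = colour
        # anything this hook does not consume travels further up the MRO
        super().__init_subclass__(**kwargs)

class Ok(Base, colour="red"):        # 'colour' is consumed by Base
    pass

rejected = False
try:
    class Broken(Base, flavour="sweet"):  # nobody consumes 'flavour'
        pass
except TypeError:
    # object.__init_subclass__ takes no keyword arguments, so the
    # leftover 'flavour' keyword raises here
    rejected = True
```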
In a classical inheritance scheme using the method resolution order, each
``__init_subclass__`` may take out its keyword arguments until none are left,
which is checked by the default implementation of ``__init_subclass__``.


Rejected Design Options
=======================


Calling the hook on the class itself
------------------------------------

Adding an ``__autodecorate__`` hook that would be called on the class
itself was the proposed idea of PEP 422. Most examples work the same
way or even better if the hook is called on the subclass. In general,
it is much easier to explicitly call the hook on the class in which it
is defined (to opt in to such a behavior) than to opt out, meaning
that one does not want the hook to be called on the class it is
defined in.

This becomes most evident if the class in question is designed as a
mixin: it is very unlikely that the code of the mixin is to be
executed for the mixin class itself, as it is not supposed to be a
complete class on its own.

The original proposal also made major changes in the class
initialization process, rendering it impossible to back-port the
proposal to older Python versions. More importantly, having a pure
Python implementation allows us to take two preliminary steps before
we actually change the interpreter, giving us the chance to iron out
all possible wrinkles in the API.


Other variants of calling the hook
----------------------------------

Other names for the hook were presented, namely ``__decorate__`` or
``__autodecorate__``. This proposal opts for ``__init_subclass__`` as
it is very close to the ``__init__`` method, just for the subclass,
while it is not very close to decorators, as it does not return the
class.


Requiring an explicit decorator on ``__init_subclass__``
--------------------------------------------------------

One could require the explicit use of ``@classmethod`` on the
``__init_subclass__`` decorator.
It was made implicit since there's no
sensible interpretation for leaving it out, and that case would need
to be detected anyway in order to give a useful error message.

This decision was reinforced after noticing that the user experience of
defining ``__prepare__`` and forgetting the ``@classmethod`` method
decorator is singularly incomprehensible (particularly since PEP 3115
documents it as an ordinary method, and the current documentation doesn't
explicitly say anything one way or the other).


A more ``__new__``-like hook
----------------------------

In PEP 422 the hook worked more like the ``__new__`` method than the
``__init__`` method, meaning that it returned a class instead of
modifying one. This allows a bit more flexibility, but at the cost
of a much harder implementation and undesired side effects.


Adding a class attribute with the attribute order
-------------------------------------------------

This got its own PEP 520.


History
=======

This used to be a competing proposal to PEP 422 by Nick Coghlan and Daniel
Urban. PEP 422 intended to achieve the same goals as this PEP, but with a
different way of implementation. In the meantime, PEP 422 has been withdrawn
in favour of this approach.


References
==========

.. _more than 10 years ago:
   http://mail.python.org/pipermail/python-dev/2001-November/018651.html

.. _Zope's ExtensionClass:
   http://docs.zope.org/zope_secrets/extensionclass.html


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From lkb.teichmann at gmail.com Wed Jul 13 10:20:29 2016
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Wed, 13 Jul 2016 16:20:29 +0200
Subject: [Python-Dev] Status of Python 3.6 PEPs?
In-Reply-To:
References:
Message-ID:

Hi,

> "PEP 487 -- Simpler customisation of class creation"
> https://www.python.org/dev/peps/pep-0487/
> => draft

I would like to get that into Python 3.6.
It's already implemented, including documentation and tests (http://bugs.python.org/issue27366). Greetings Martin From ncoghlan at gmail.com Wed Jul 13 10:45:03 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 14 Jul 2016 00:45:03 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On 14 July 2016 at 00:15, Martin Teichmann wrote: > Hi list, > > another round for PEP 487, is there any chance it still makes it into > Python 3.6? > > The PEP should be effectively done, I updated the examples in it, > given that I implemented the PEP I could actually test the examples, > so now they work. > > The implementation is at http://bugs.python.org/issue27366, including > documentation and tests. Unfortunately nobody has reviewed the patch > yet. > > The new version of the PEP is attached. +1 from me for this version - between them, this and PEP 520 address everything I hoped to achieve with PEP 422, and a bit more besides. There's no BDFL delegation in place for this one though, so it's really Guido's +1 that you need :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dmitry.trofimov at jetbrains.com Wed Jul 13 13:19:14 2016 From: dmitry.trofimov at jetbrains.com (Dmitry Trofimov) Date: Wed, 13 Jul 2016 19:19:14 +0200 Subject: [Python-Dev] PyPI index deprecation Message-ID: Hi, as you probably already know, today the PyPI index page ( https://pypi.python.org/pypi?%3Aaction=index) was deprecated and ceased to be. Among other things it affected PyCharm IDE that relied on that page to enable packaging related features from the IDE. As a result users of PyCharm can no longer install/update PyPI packages from PyCharm. 
Here is an issue about that in our tracker:
https://youtrack.jetbrains.com/issue/PY-20081

Given that there are several hundred thousand PyCharm users in the
world -- all 3 editions: Professional, Community, and Educational are
affected -- this can lead to a storm of negative feedback when people
start to face the denial of service.

The deprecation of the index was totally unexpected for us and we weren't
prepared for that. Maybe we missed some announcement.

We will be very happy if the functionality of the index is restored at
least for some short period of time: please, give us a couple of weeks.
That will allow us to implement a workaround and provide the fix for the
several latest major versions of PyCharm.

Does anybody know who is responsible for that decision and whom to
contact about it? Please help.

Best regards,

Dmitry Trofimov
PyCharm Team Lead
JetBrains
http://www.jetbrains.com
The Drive To Develop

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dholth at gmail.com Wed Jul 13 13:32:30 2016
From: dholth at gmail.com (Daniel Holth)
Date: Wed, 13 Jul 2016 17:32:30 +0000
Subject: [Python-Dev] PyPI index deprecation
In-Reply-To:
References:
Message-ID:

You may know that there are approximately 3 pypi maintainers, all
overworked and one paid. It is amazing that it works at all. I don't know
anything about that particular decision though.

On Wed, Jul 13, 2016 at 1:21 PM Dmitry Trofimov <
dmitry.trofimov at jetbrains.com> wrote:

> Hi,
>
> as you probably already know, today the PyPI index page (
> https://pypi.python.org/pypi?%3Aaction=index) was deprecated and ceased
> to be.
>
> Among other things it affected PyCharm IDE that relied on that page to
> enable packaging related features from the IDE. As a result users of
> PyCharm can no longer install/update PyPI packages from PyCharm.
> > Here is an issue about that in our tracker: > https://youtrack.jetbrains.com/issue/PY-20081 > > Given that there are several hundred thouthands of PyCharm users in the > world -- all 3 editions: Professional, Community, and Educational are > affected -- this can lead to a storm of a negative feedback, when people > will start to face the denial of the service. > > The deprecation of the index was totally unexpected for us and we weren't > prepared for that. Maybe we missed some announcement. > > We will be very happy if the functionality of the index is restored at > least for some short > period of time: please, give as a couple of weeks. That will allow us to > implement a workaround and provide the fix for the several latest major > versions of PyChram. > > Does anybody know who is responsible for that decision and whom to > connect about it? Please help. > > Best regards, > > Dmitry Trofimov > PyCharm Team Lead > JetBrainshttp://www.jetbrains.com > The Drive To Develop > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/dholth%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 13 13:43:27 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 13 Jul 2016 17:43:27 +0000 (UTC) Subject: [Python-Dev] PyPI index deprecation References: Message-ID: Dmitry Trofimov jetbrains.com> writes: > > We will be very happy if the functionality of the index is restored at least > for some short?period of time: please, give as a couple of weeks. That will > allow us to implement a workaround and provide the fix for the several?latest > major versions of PyChram.? > We've temporarily restored this url. 
It would be easiest if you could come into distutils-sig as I'm not subscribed to python-dev (posting from gmane, hopefully this works!) and we can help you figure out how to use the officially supported APIs to access the information you need as well as figure out a date when we can shut it down again? We want to disable this particular URL because it accounts for a large amount of the slow downs in PyPI and we're not going to be able to continue supporting it into the future. From dmitry.trofimov at jetbrains.com Wed Jul 13 14:10:37 2016 From: dmitry.trofimov at jetbrains.com (Dmitry Trofimov) Date: Wed, 13 Jul 2016 20:10:37 +0200 Subject: [Python-Dev] PyPI index deprecation In-Reply-To: References: Message-ID: Hi Donald, thanks for your immediate response! Let's move the discussion to the distutils-sig. Best regards, Dmitry On Wed, Jul 13, 2016 at 7:43 PM, Donald Stufft wrote: > Dmitry Trofimov jetbrains.com> writes: > > > > > We will be very happy if the functionality of the index is restored at > least > > for some short period of time: please, give as a couple of weeks. That > will > > allow us to implement a workaround and provide the fix for the > several latest > > major versions of PyChram. > > > > We've temporarily restored this url. It would be easiest if you could come > into > distutils-sig as I'm not subscribed to python-dev (posting from gmane, > hopefully this works!) and we can help you figure out how to use the > officially > supported APIs to access the information you need as well as figure out a > date > when we can shut it down again? We want to disable this particular URL > because > it accounts for a large amount of the slow downs in PyPI and we're not > going to > be able to continue supporting it into the future. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/dmitry.trofimov%40jetbrains.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jul 13 17:02:07 2016 From: guido at python.org (Guido van Rossum) Date: Wed, 13 Jul 2016 14:02:07 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: I'm reviewing this now. Martin, can you please submit the new version of your PEP as a Pull Request to the new peps repo on GitHub? https://github.com/python/peps --Guido On Wed, Jul 13, 2016 at 7:45 AM, Nick Coghlan wrote: > On 14 July 2016 at 00:15, Martin Teichmann wrote: >> Hi list, >> >> another round for PEP 487, is there any chance it still makes it into >> Python 3.6? >> >> The PEP should be effectively done, I updated the examples in it, >> given that I implemented the PEP I could actually test the examples, >> so now they work. >> >> The implementation is at http://bugs.python.org/issue27366, including >> documentation and tests. Unfortunately nobody has reviewed the patch >> yet. >> >> The new version of the PEP is attached. > > +1 from me for this version - between them, this and PEP 520 address > everything I hoped to achieve with PEP 422, and a bit more besides. > > There's no BDFL delegation in place for this one though, so it's > really Guido's +1 that you need :) > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From guido at python.org Wed Jul 13 18:46:10 2016 From: guido at python.org (Guido van Rossum) Date: Wed, 13 Jul 2016 15:46:10 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On Wed, Jul 13, 2016 at 7:15 AM, Martin Teichmann wrote: > Hi list, > > another round for PEP 487, is there any chance it still makes it into > Python 3.6? Sure, feature freeze isn't until September (https://www.python.org/dev/peps/pep-0494/). > The PEP should be effectively done, I updated the examples in it, > given that I implemented the PEP I could actually test the examples, > so now they work. I am +1 on the idea of the PEP; below I am just asking for some clarifications and pointing out a typo or two. Please submit the next version to the github peps project as a PR! Re-review should be much quicker then. > The implementation is at http://bugs.python.org/issue27366, including > documentation and tests. Unfortunately nobody has reviewed the patch > yet. Sorry, I don't have time for that part, but I'm sure once the PEP is approved the review will follow. > The new version of the PEP is attached. > > Greetings > > Martin > > PEP: 487 > Title: Simpler customisation of class creation > Version: $Revision$ > Last-Modified: $Date$ > Author: Martin Teichmann , > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 27-Feb-2015 > Python-Version: 3.6 > Post-History: 27-Feb-2015, 5-Feb-2016, 24-Jun-2016, 2-Jul-2016, 13-Jul-2016 > Replaces: 422 > > > Abstract > ======== > > Currently, customising class creation requires the use of a custom metaclass. 
> This custom metaclass then persists for the entire lifecycle of the class, > creating the potential for spurious metaclass conflicts. > > This PEP proposes to instead support a wide range of customisation > scenarios through a new ``__init_subclass__`` hook in the class body, > and a hook to initialize attributes. > > The new mechanism should be easier to understand and use than > implementing a custom metaclass, and thus should provide a gentler > introduction to the full power of Python's metaclass machinery. > > > Background > ========== > > Metaclasses are a powerful tool to customize class creation. They have, > however, the problem that there is no automatic way to combine metaclasses. > If one wants to use two metaclasses for a class, a new metaclass combining > those two needs to be created, typically manually. > > This need often occurs as a surprise to a user: inheriting from two base > classes coming from two different libraries suddenly raises the necessity > to manually create a combined metaclass, where typically one is not > interested in those details about the libraries at all. This becomes > even worse if one library starts to make use of a metaclass which it > has not done before. While the library itself continues to work perfectly, > suddenly every code combining those classes with classes from another library > fails. > > Proposal > ======== > > While there are many possible ways to use a metaclass, the vast majority > of use cases falls into just three categories: some initialization code > running after class creation, the initalization of descriptors and initialization > keeping the order in which class attributes were defined. > > The first two categories can easily be achieved by having simple hooks > into the class creation: > > 1. An ``__init_subclass__`` hook that initializes > all subclasses of a given class. > 2. 
upon class creation, a ``__set_owner__`` hook is called on all the > attribute (descriptors) defined in the class, and > > The third category is the topic of another PEP 520. PEP, PEP 520. > As an example, the first use case looks as follows:: > > >>> class QuestBase: > ... # this is implicitly a @classmethod maybe add "(see below for motivation)" ? > ... def __init_subclass__(cls, swallow, **kwargs): > ... cls.swallow = swallow > ... super().__init_subclass__(**kwargs) > > >>> class Quest(QuestBase, swallow="african"): > ... pass > > >>> Quest.swallow > 'african' > > The base class ``object`` contains an empty ``__init_subclass__`` > method which serves as an endpoint for cooperative multiple inheritance. > Note that this method has no keyword arguments, meaning that all > methods which are more specialized have to process all keyword > arguments. > > This general proposal is not a new idea (it was first suggested for > inclusion in the language definition `more than 10 years ago`_, and a > similar mechanism has long been supported by `Zope's ExtensionClass`_), > but the situation has changed sufficiently in recent years that > the idea is worth reconsidering for inclusion. > > The second part of the proposal adds an ``__set_owner__`` > initializer for class attributes, especially if they are descriptors. > Descriptors are defined in the body of a > class, but they do not know anything about that class, they do not > even know the name they are accessed with. They do get to know their > owner once ``__get__`` is called, but still they do not know their > name. This is unfortunate, for example they cannot put their > associated value into their object's ``__dict__`` under their name, > since they do not know that name. This problem has been solved many > times, and is one of the most important reasons to have a metaclass in > a library. 
While it would be easy to implement such a mechanism using > the first part of the proposal, it makes sense to have one solution > for this problem for everyone. > > To give an example of its usage, imagine a descriptor representing weak > referenced values:: > > import weakref > > class WeakAttribute: > def __get__(self, instance, owner): > return instance.__dict__[self.name]() > > def __set__(self, instance, value): > instance.__dict__[self.name] = weakref.ref(value) > > # this is the new initializer: > def __set_owner__(self, owner, name): > self.name = name This example is missing something -- an example of how the WeakAttribute class would be *used*. I suppose something like # We wish we could write class C: foo = WeakAttribute() x = C() x.foo = ... print(x.foo) > While this example looks very trivial, it should be noted that until > now such an attribute cannot be defined without the use of a metaclass. > And given that such a metaclass can make life very hard, this kind of > attribute does not exist yet. > > Initializing descriptors could simply be done in the > ``__init_subclass__`` hook. But this would mean that descriptors can > only be used in classes that have the proper hook, the generic version > like in the example would not work generally. One could also call > ``__set_owner__`` from whithin the base implementation of > ``object.__init_subclass__``. But given that it is a common mistake > to forget to call ``super()``, it would happen too often that suddenly > descriptors are not initialized. > > > Key Benefits > ============ > > > Easier inheritance of definition time behaviour > ----------------------------------------------- > > Understanding Python's metaclasses requires a deep understanding of > the type system and the class construction process. 
This is legitimately > seen as challenging, due to the need to keep multiple moving parts (the code, > the metaclass hint, the actual metaclass, the class object, instances of the > class object) clearly distinct in your mind. Even when you know the rules, > it's still easy to make a mistake if you're not being extremely careful. > > Understanding the proposed implicit class initialization hook only requires > ordinary method inheritance, which isn't quite as daunting a task. The new > hook provides a more gradual path towards understanding all of the phases > involved in the class definition process. > > > Reduced chance of metaclass conflicts > ------------------------------------- > > One of the big issues that makes library authors reluctant to use metaclasses > (even when they would be appropriate) is the risk of metaclass conflicts. > These occur whenever two unrelated metaclasses are used by the desired > parents of a class definition. This risk also makes it very difficult to > *add* a metaclass to a class that has previously been published without one. > > By contrast, adding an ``__init_subclass__`` method to an existing type poses > a similar level of risk to adding an ``__init__`` method: technically, there > is a risk of breaking poorly implemented subclasses, but when that occurs, > it is recognised as a bug in the subclass rather than the library author > breaching backwards compatibility guarantees. > > > New Ways of Using Classes > ========================= > > Subclass registration > --------------------- > > Especially when writing a plugin system, one likes to register new > subclasses of a plugin baseclass. This can be done as follows:: > > class PluginBase(Object): What is "Object"? I presume just a typo for "object"? 
> subclasses = [] > > def __init_subclass__(cls, **kwargs): > super().__init_subclass__(**kwargs) > cls.subclasses.append(cls) > > In this example, ``PluginBase.subclasses`` will contain a plain list of all > subclasses in the entire inheritance tree. One should note that this also > works nicely as a mixin class. > > Trait descriptors > ----------------- > > There are many designs of Python descriptors in the wild which, for > example, check boundaries of values. Often those "traits" need some support > of a metaclass to work. This is how this would look like with this > PEP:: > > class Trait: > def __init__(self, minimum, maximum): > self.minimum = minimum > self.maximum = maximum > > def __get__(self, instance, owner): > return instance.__dict__[self.key] > > def __set__(self, instance, value): > if self.minimum < value < self.maximum: > instance.__dict__[self.key] = value > else: > raise ValueError("value not in range") > > def __set_owner__(self, owner, name): I wonder if this should be renamed to __set_name__ or something else that clarifies we're passing it the name of the attribute? The method name __set_owner__ made me assume this is about the owning object (which is often a useful term in other discussions about objects), whereas it is really about telling the descriptor the name of the attribute for which it applies. > self.key = name > > Implementation Details > ====================== > > For those who prefer reading Python over english, the following is a Python > equivalent of the C API changes proposed in this PEP, where the new ``object`` > and ``type`` defined here inherit from the usual ones:: That (inheriting type from type, and object from object) is very confusing. Why not just define new classes e.g. NewType and NewObject here, since it's just pseudo code anyway? 
> import types > > class type(type): > def __new__(cls, *args, **kwargs): > if len(args) == 1: > return super().__new__(cls, args[0]) > name, bases, ns = args > init = ns.get('__init_subclass__') > if isinstance(init, types.FunctionType): > ns['__init_subclass__'] = classmethod(init) > self = super().__new__(cls, name, bases, ns) > for k, v in self.__dict__.items(): > func = getattr(v, '__set_owner__', None) > if func is not None: > func(self, k) > super(self, self).__init_subclass__(**kwargs) > return self > > def __init__(self, name, bases, ns, **kwargs): > super().__init__(name, bases, ns) What does this definition of __init__ add? > class object: > @classmethod > def __init_subclass__(cls): > pass > > class object(object, metaclass=type): > pass Eek! Too many things named object. > In this code, first the ``__set_owner__`` are called on the descriptors, and > then the ``__init_subclass__``. This means that subclass initializers already > see the fully initialized descriptors. This way, ``__init_subclass__`` users > can fix all descriptors again if this is needed. > > Another option would have been to call ``__set_owner__`` in the base > implementation of ``object.__init_subclass__``. This way it would be possible > event to prevent ``__set_owner__`` from being called. Most of the times, event to prevent??? > however, such a prevention would be accidental, as it often happens that a call > to ``super()`` is forgotten. > > Another small change should be noted here: in the current implementation of > CPython, ``type.__init__`` explicitly forbids the use of keyword arguments, > while ``type.__new__`` allows for its attributes to be shipped as keyword > arguments. This is weirdly incoherent, and thus the above code forbids that. 
> While it would be possible to retain the current behavior, it would be better > if this was fixed, as it is probably not used at all: the only use case would > be that at metaclass calls its ``super().__new__`` with *name*, *bases* and > *dict* (yes, *dict*, not *namespace* or *ns* as mostly used with modern > metaclasses) as keyword arguments. This should not be done. > > As a second change, the new ``type.__init__`` just ignores keyword > arguments. Currently, it insists that no keyword arguments are given. This > leads to a (wanted) error if one gives keyword arguments to a class declaration > if the metaclass does not process them. Metaclass authors that do want to > accept keyword arguments must filter them out by overriding ``__init___``. > > In the new code, it is not ``__init__`` that complains about keyword arguments, > but ``__init_subclass__``, whose default implementation takes no arguments. In > a classical inheritance scheme using the method resolution order, each > ``__init_subclass__`` may take out it's keyword arguments until none are left, > which is checked by the default implementation of ``__init_subclass__``. I called this out previously, and I am still a bit uncomfortable with the backwards incompatibility here. But I believe what you describe here is the compromise proposed by Nick, and if that's the case I have peace with it. > Rejected Design Options > ======================= > > > Calling the hook on the class itself > ------------------------------------ > > Adding an ``__autodecorate__`` hook that would be called on the class > itself was the proposed idea of PEP 422. Most examples work the same > way or even better if the hook is called on the subclass. In general, > it is much easier to explicitly call the hook on the class in which it > is defined (to opt-in to such a behavior) than to opt-out, meaning > that one does not want the hook to be called on the class it is > defined in. 
> > This becomes most evident if the class in question is designed as a > mixin: it is very unlikely that the code of the mixin is to be > executed for the mixin class itself, as it is not supposed to be a > complete class on its own. > > The original proposal also made major changes in the class > initialization process, rendering it impossible to back-port the > proposal to older Python versions. > > More importantly, having a pure Python implementation allows us to > take two preliminary steps before we actually change the > interpreter, giving us the chance to iron out all possible wrinkles > in the API. > > > Other variants of calling the hook > ---------------------------------- > > Other names for the hook were presented, namely ``__decorate__`` or > ``__autodecorate__``. This proposal opts for ``__init_subclass__`` as > it is very close to the ``__init__`` method, just for the subclass, > while it is not very close to decorators, as it does not return the > class. > > > Requiring an explicit decorator on ``__init_subclass__`` > -------------------------------------------------------- > > One could require the explicit use of ``@classmethod`` on the > ``__init_subclass__`` decorator. It was made implicit since there's no > sensible interpretation for leaving it out, and that case would need > to be detected anyway in order to give a useful error message. > > This decision was reinforced after noticing that the user experience of > defining ``__prepare__`` and forgetting the ``@classmethod`` method > decorator is singularly incomprehensible (particularly since PEP 3115 > documents it as an ordinary method, and the current documentation doesn't > explicitly say anything one way or the other). > > A more ``__new__``-like hook > ---------------------------- > > In PEP 422 the hook worked more like the ``__new__`` method than the > ``__init__`` method, meaning that it returned a class instead of > modifying one.
This allows a bit more flexibility, but at the cost > of a much harder implementation and undesired side effects. > > Adding a class attribute with the attribute order > ------------------------------------------------- > > This got its own PEP 520. > > > History > ======= > > This used to be a competing proposal to PEP 422 by Nick Coghlan and Daniel > Urban. PEP 422 intended to achieve the same goals as this PEP, but with a > different way of implementation. In the meantime, PEP 422 has been withdrawn > in favour of this approach. > > References > ========== > > .. _more than 10 years ago: > http://mail.python.org/pipermail/python-dev/2001-November/018651.html > > .. _Zope's ExtensionClass: > http://docs.zope.org/zope_secrets/extensionclass.html > > > Copyright > ========= > > This document has been placed in the public domain. > > > > .. > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From guido at python.org Wed Jul 13 18:49:12 2016 From: guido at python.org (Guido van Rossum) Date: Wed, 13 Jul 2016 15:49:12 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: FWIW I copied the version you posted into the peps repo already, since it provides a significant update to the last version there. On Wed, Jul 13, 2016 at 2:02 PM, Guido van Rossum wrote: > I'm reviewing this now. > > Martin, can you please submit the new version of your PEP as a Pull > Request to the new peps repo on GitHub?
https://github.com/python/peps > > --Guido > > On Wed, Jul 13, 2016 at 7:45 AM, Nick Coghlan wrote: >> On 14 July 2016 at 00:15, Martin Teichmann wrote: >>> Hi list, >>> >>> another round for PEP 487, is there any chance it still makes it into >>> Python 3.6? >>> >>> The PEP should be effectively done, I updated the examples in it, >>> given that I implemented the PEP I could actually test the examples, >>> so now they work. >>> >>> The implementation is at http://bugs.python.org/issue27366, including >>> documentation and tests. Unfortunately nobody has reviewed the patch >>> yet. >>> >>> The new version of the PEP is attached. >> >> +1 from me for this version - between them, this and PEP 520 address >> everything I hoped to achieve with PEP 422, and a bit more besides. >> >> There's no BDFL delegation in place for this one though, so it's >> really Guido's +1 that you need :) >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Thu Jul 14 03:51:55 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 14 Jul 2016 17:51:55 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On 14 July 2016 at 08:46, Guido van Rossum wrote: > On Wed, Jul 13, 2016 at 7:15 AM, Martin Teichmann wrote: >> Another small change should be noted here: in the current implementation of >> CPython, ``type.__init__`` explicitly forbids the use of keyword arguments, >> while ``type.__new__`` allows for its attributes to be shipped as keyword >> arguments. This is weirdly incoherent, and thus the above code forbids that. 
>> While it would be possible to retain the current behavior, it would be better >> if this was fixed, as it is probably not used at all: the only use case would >> be that at metaclass calls its ``super().__new__`` with *name*, *bases* and >> *dict* (yes, *dict*, not *namespace* or *ns* as mostly used with modern >> metaclasses) as keyword arguments. This should not be done. >> >> As a second change, the new ``type.__init__`` just ignores keyword >> arguments. Currently, it insists that no keyword arguments are given. This >> leads to a (wanted) error if one gives keyword arguments to a class declaration >> if the metaclass does not process them. Metaclass authors that do want to >> accept keyword arguments must filter them out by overriding ``__init___``. >> >> In the new code, it is not ``__init__`` that complains about keyword arguments, >> but ``__init_subclass__``, whose default implementation takes no arguments. In >> a classical inheritance scheme using the method resolution order, each >> ``__init_subclass__`` may take out it's keyword arguments until none are left, >> which is checked by the default implementation of ``__init_subclass__``. > > I called this out previously, and I am still a bit uncomfortable with > the backwards incompatibility here. But I believe what you describe > here is the compromise proposed by Nick, and if that's the case I have > peace with it. It would be worth spelling out the end result of the new behaviour in the PEP to make sure it's what we want. Trying to reason about how that code works is difficult, but looking at some class definition scenarios and seeing how they behave with the old semantics and the new semantics should be relatively straightforward (and they can become test cases for the revised implementation). 
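The end state being discussed — each ``__init_subclass__`` consuming the class keywords it understands and passing the rest up the MRO, with the default implementation rejecting leftovers — can be sketched under the semantics that eventually shipped in Python 3.6 (illustrative names, not part of the patch under review):

```python
class Base:
    def __init_subclass__(cls, size=None, **kwargs):
        # Consume the keyword this class understands, pass the rest on.
        cls.size = size
        super().__init_subclass__(**kwargs)

class Small(Base, size=1):
    pass

assert Small.size == 1

# A keyword nobody consumes reaches object.__init_subclass__,
# whose default implementation rejects it with a TypeError.
failed = False
try:
    class Broken(Base, color='red'):
        pass
except TypeError:
    failed = True

assert failed
```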
The basic scenario to cover would be defining a metaclass which *doesn't* accept any additional keyword arguments and seeing how it fails when passed an unsupported parameter:

    class MyMeta(type):
        pass

    class MyClass(metaclass=MyMeta, otherarg=1):
        pass

    MyMeta("MyClass", (), otherargs=1)

    import types
    types.new_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))
    types.prepare_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))

Current behaviour:

    >>> class MyMeta(type):
    ...     pass
    ...
    >>> class MyClass(metaclass=MyMeta, otherarg=1):
    ...     pass
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: type() takes 1 or 3 arguments
    >>> MyMeta("MyClass", (), otherargs=1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: Required argument 'dict' (pos 3) not found
    >>> import types
    >>> types.new_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib64/python3.5/types.py", line 57, in new_class
        return meta(name, bases, ns, **kwds)
    TypeError: type() takes 1 or 3 arguments
    >>> types.prepare_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))
    (<class '__main__.MyMeta'>, {}, {'otherarg': 1})

The error messages may change, but the cases which currently fail should continue to fail with TypeError

Further scenarios would then cover the changes needed to the definition of "MyMeta" to make the class creation invocations above actually work (since the handling of __prepare__ already tolerates unknown arguments). First, just defining __new__ (which currently fails):

    >>> class MyMeta(type):
    ...     def __new__(cls, name, bases, namespace, otherarg):
    ...         self = super().__new__(cls, name, bases, namespace)
    ...         self.otherarg = otherarg
    ...         return self
    ...
    >>> class MyClass(metaclass=MyMeta, otherarg=1):
    ...     pass
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: type.__init__() takes no keyword arguments

Making this work would be fine, and that's what I believe will happen with the PEP's revised semantics. Then, just defining __init__ (which also fails):

    >>> class MyMeta(type):
    ...     def __init__(self, name, bases, namespace, otherarg):
    ...         super().__init__(name, bases, namespace)
    ...         self.otherarg = otherarg
    ...
    >>> class MyClass(metaclass=MyMeta, otherarg=1):
    ...     pass
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: type() takes 1 or 3 arguments

The PEP shouldn't result in any changes in this case. And finally defining both of them (which succeeds):

    >>> class MyMeta(type):
    ...     def __new__(cls, name, bases, namespace, otherarg):
    ...         self = super().__new__(cls, name, bases, namespace)
    ...         self.otherarg = otherarg
    ...         return self
    ...     def __init__(self, name, bases, namespace, otherarg):
    ...         super().__init__(name, bases, namespace)
    ...
    >>> class MyClass(metaclass=MyMeta, otherarg=1):
    ...     pass
    ...
    >>> MyClass.otherarg
    1

That last scenario is the one we need to ensure keeps working (and I believe it does with Martin's current implementation)

From a documentation perspective, one subtlety we should highlight is that the invocation order during subtype creation is:

* mcl.__new__
  - descr.__set_name__
  - cls.__init_subclass__
* mcl.__init__

So if the metaclass defines both __new__ and __init__ methods, the new hooks will run before the __init__ method does. (I think that's fine, the docs just need to make it clear that type.__new__ is the operation doing the heavy lifting)

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lkb.teichmann at gmail.com Thu Jul 14 09:50:31 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Thu, 14 Jul 2016 15:50:31 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Guido, Hi list, Thanks for the nice review!
I followed up on your ideas and put it into a github pull request: https://github.com/python/peps/pull/53 Soon we'll be working there, until then, some responses to your comments: > I wonder if this should be renamed to __set_name__ or something else > that clarifies we're passing it the name of the attribute? The method > name __set_owner__ made me assume this is about the owning object > (which is often a useful term in other discussions about objects), > whereas it is really about telling the descriptor the name of the > attribute for which it applies. The name for this has been discussed a bit already, __set_owner__ was Nick's idea, and indeed, the owner is also set. Technically, __set_owner_and_name__ would be correct, but actually I like your idea of __set_name__. > That (inheriting type from type, and object from object) is very > confusing. Why not just define new classes e.g. NewType and NewObject > here, since it's just pseudo code anyway? Actually, it's real code. If you drop those lines at the beginning of the tests for the implementation (as I have done here: https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py), the test runs on older Pythons. But I see that my idea to formulate things here in Python was a bad idea, I will put the explanation first and turn the code into pseudo-code. >> def __init__(self, name, bases, ns, **kwargs): >> super().__init__(name, bases, ns) > > What does this definition of __init__ add? It removes the keyword arguments. I describe that in prose a bit down. >> class object: >> @classmethod >> def __init_subclass__(cls): >> pass >> >> class object(object, metaclass=type): >> pass > > Eek! Too many things named object. Well, I had to do that to make the tests run... I'll take that out. >> In the new code, it is not ``__init__`` that complains about keyword arguments, >> but ``__init_subclass__``, whose default implementation takes no arguments.
In >> a classical inheritance scheme using the method resolution order, each >> ``__init_subclass__`` may take out its keyword arguments until none are left, >> which is checked by the default implementation of ``__init_subclass__``. > I called this out previously, and I am still a bit uncomfortable with > the backwards incompatibility here. But I believe what you describe > here is the compromise proposed by Nick, and if that's the case I have > peace with it. No, this is not Nick's compromise, this is my original. Nick just sent another mail to this list where he goes a bit more into the details, I'll respond to that about this topic. Greetings Martin P.S.: I just realized that my changes to the PEP were accepted by someone other than Guido. I am a bit surprised about that, but I guess this is how it works? From lkb.teichmann at gmail.com Thu Jul 14 10:04:18 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Thu, 14 Jul 2016 16:04:18 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Nick, hi list, > It would be worth spelling out the end result of the new behaviour in > the PEP to make sure it's what we want. Trying to reason about how > that code works is difficult, but looking at some class definition > scenarios and seeing how they behave with the old semantics and the > new semantics should be relatively straightforward (and they can > become test cases for the revised implementation). I agree with that. All the examples that Nick gives should work the same way as they used to. I'll turn them into tests. I hope it is fine if the error messages change slightly, as long as the error type stays the same? Let's look at an example that won't work anymore if my proposal goes through:

    class MyType(type):
        def __new__(cls, name, bases, namespace):
            return super().__new__(cls, name=name,
                                   bases=bases, dict=namespace)

I guess this kind of code is pretty rare.
Note that I need to call the third parameter dict, as this is how it is named. Even if there is code out there like that, it would be pretty easy to change. Just in case someone wondered:

    def __new__(cls, **kwargs):
        return super().__new__(cls, **kwargs)

doesn't work now and won't work afterwards, as the interpreter calls the metaclass with positional arguments. That said, it would be possible, at the cost of quite some lines of code, to make it fully backwards compatible. If the consensus is that this is needed, I'll change the PEP and code accordingly. My proposal also has the advantage that name, bases and dict may be used as class keyword arguments. At least for name I see a use case:

    class MyMangledClass(BaseClass, name="Nice class name"):
        pass

Greetings Martin From guido at python.org Thu Jul 14 11:47:50 2016 From: guido at python.org (Guido van Rossum) Date: Thu, 14 Jul 2016 08:47:50 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: I just reviewed the changes you made, I like __set_name__(). I'll just wait for your next update, incorporating Nick's suggestions. Regarding who merges PRs to the PEPs repo, since you are the author the people who merge don't pass any judgment on the changes (unless it doesn't build cleanly or maybe if they see a typo). If you intend a PR as a base for discussion you can add a comment saying e.g. "Don't merge yet". If you call out @gvanrossum, GitHub will make sure I get a message about it. I think the substantial discussion about the PEP should remain here in python-dev; comments about typos, grammar and other minor editorial issues can go on GitHub. Hope this part of the process makes sense! On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann wrote: > Hi Guido, Hi list, > > Thanks for the nice review!
> I followed up on your ideas and put > it into a github pull request: https://github.com/python/peps/pull/53 > > Soon we'll be working there, until then, some responses to your comments: > >> I wonder if this should be renamed to __set_name__ or something else >> that clarifies we're passing it the name of the attribute? The method >> name __set_owner__ made me assume this is about the owning object >> (which is often a useful term in other discussions about objects), >> whereas it is really about telling the descriptor the name of the >> attribute for which it applies. > > The name for this has been discussed a bit already, __set_owner__ was > Nick's idea, and indeed, the owner is also set. Technically, > __set_owner_and_name__ would be correct, but actually I like your idea > of __set_name__. > >> That (inheriting type from type, and object from object) is very >> confusing. Why not just define new classes e.g. NewType and NewObject >> here, since it's just pseudo code anyway? > > Actually, it's real code. If you drop those lines at the beginning of > the tests for the implementation (as I have done here: > https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py), > the test runs on older Pythons. > > But I see that my idea to formulate things here in Python was a bad > idea, I will put the explanation first and turn the code into > pseudo-code. > >>> def __init__(self, name, bases, ns, **kwargs): >>> super().__init__(name, bases, ns) >> >> What does this definition of __init__ add? > > It removes the keyword arguments. I describe that in prose a bit down. > >>> class object: >>> @classmethod >>> def __init_subclass__(cls): >>> pass >>> >>> class object(object, metaclass=type): >>> pass >> >> Eek! Too many things named object. > > Well, I had to do that to make the tests run... I'll take that out.
> >>> In the new code, it is not ``__init__`` that complains about keyword arguments, >>> but ``__init_subclass__``, whose default implementation takes no arguments. In >>> a classical inheritance scheme using the method resolution order, each >>> ``__init_subclass__`` may take out its keyword arguments until none are left, >>> which is checked by the default implementation of ``__init_subclass__``. >> >> I called this out previously, and I am still a bit uncomfortable with >> the backwards incompatibility here. But I believe what you describe >> here is the compromise proposed by Nick, and if that's the case I have >> peace with it. > > No, this is not Nick's compromise, this is my original. Nick just sent > another mail to this list where he goes a bit more into the details, > I'll respond to that about this topic. > > Greetings > > Martin > > P.S.: I just realized that my changes to the PEP were accepted by > someone other than Guido. I am a bit surprised about that, but I guess > this is how it works? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From guido at python.org Thu Jul 14 12:40:29 2016 From: guido at python.org (Guido van Rossum) Date: Thu, 14 Jul 2016 09:40:29 -0700 Subject: [Python-Dev] __qualname__ exposed as a local variable: standard? In-Reply-To: References: Message-ID: I think this and similar issues suffer from a lack of people who have both the time and the understanding to review patches and answer questions of this nature. To the OP: I would recommend not depending on the presence of __qualname__ in the locals, as another compiler/interpreter may come up with a different approach. However, if you have a use case, please present it here and maybe we could consider making this a documented feature.
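The behaviour being asked about — again, a CPython implementation detail rather than a language guarantee — can be seen in a short sketch on CPython 3 (illustrative class names):

```python
class Outer:
    # CPython injects __qualname__ into the namespace while the class
    # body executes, so it can be read as if it were a local variable.
    qn = __qualname__

    class Inner:
        qn = __qualname__

assert Outer.qn == 'Outer'
assert Outer.Inner.qn == 'Outer.Inner'
```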
To Martin: it would be easier for people (even myself, who implemented this super() hack eons ago) to review your patch if you were able to explain the current and proposed behavior more precisely. You are doing your best but realistically almost nobody has enough context to understand what you wrote in the tracker (certainly to me it's not enough to remind me of how the super() machinery currently works -- but reading the patch doesn't enlighten me much either as it's mostly about changes to the low-level C code). On Wed, Jul 13, 2016 at 7:00 AM, Martin Teichmann wrote: > Hi list, > >> I noticed __qualname__ is exposed by locals() while defining a class. This >> is handy but I'm not sure about its status: is it standard or just an >> artifact of the current implementation? (btw, the pycodestyle linter -former >> pep8- rejects its usage). I was unable to find any reference to this >> behavior in PEP 3155 nor in the language reference. > > I would like to underline the importance of this question, and give > some background, as it happens to resonate with my work on PEP 487. > > The __qualname__ of a class is originally determined by the compiler. > One might now think that it would be easiest to simply set the > __qualname__ of a class once the class is created, but this is not as > easy as it sounds. The class is created by its metaclass, so possibly > by user code, which might create whatever it wants, including > something which is not even a class. So the decision had been taken to > sneak the __qualname__ through user code, and pass it to the > metaclass's __new__ method as part of the namespace, where it is > deleted from the namespace again. This has weird side effects, as the > namespace may be user code as well, leading to the funniest possible > abuses, too obscene to publish on a public mailing list. > > A very different approach has been taken for super().
It has similar > problems: the zero argument version of super looks in the surrounding > scope for __class__ for the containing class. This does not exist yet > at the time of creation of the methods, so a PyCell is put into the > function's scope, which will later be filled. It is actually filled > with whatever the metaclass's __new__ returns, which may, as already > said, be anything (some sanity checks are done to avoid crashing the > interpreter). > > I personally prefer the first way of doing things like for > __qualname__, even at the cost of adding things to the class's > namespace. It could be moved after the end of the class definition, > such that it doesn't show up while the class body is executed. We > might also rename it to __@qualname__, this way it cannot be accessed > by users in the class body, unless they look into locals(). > > This has the large advantage that super() would work immediately after > the class has been defined, i.e. already in the __new__ of the > metaclass after it has called type.__new__. > > All of this changes the behavior of the interpreter, but we are > talking about undocumented behavior. > > The changes necessary to make super() work earlier are stored in > http://bugs.python.org/issue23722 > > Greetings > > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From tseaver at palladion.com Thu Jul 14 14:03:46 2016 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 14 Jul 2016 14:03:46 -0400 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/14/2016 11:47 AM, Guido van Rossum wrote: > If you intend a PR as a base for discussion you can add a comment > saying e.g.
"Don't merge yet". If you call out @gvanrossum, GitHub > will make sure I get a message about it. FWIW, I often use a Github label, "don't merge" (colored red for urgency), to indicate that PRs are still in discussion stage: removing it is a lightweight way to signify that blocking issues have been resolved (in the opinion of the owner/maintainer, anyway). Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXh9P8AAoJEPKpaDSJE9HYsC0QAMPf/J+n35fg7sMjy07/fE8v xXXc+JbqnphnZolX1Xjla8sUD6tSpq0dp234VYeGwm4z19p3U5SYpYX4zzNuYCwE V2AVfdNpY0xwTkoFCbxXTwikZVIGt6o8IQLvqcjlQpCj3wl5A0ggcoaYDnXeKrZd wE4MF4t9YFgdABZ2i2RVbZNoSRUcMa1kKq9BKpnLnq65dPv2yAYQDDbWIatXLLbi 7kxAfK4CjSWR8BKNzo71uJDeVJVyk6N2nWLuGNOEff8BVZe83cG/2SRjRGALSb0h kV6FdPhwIhoZ+KrVvkLcbJYpUykBAPK68VSnomXNU14jpY9a3zqEIrirB4YLM3tS 9Ov2GYH+AhDPQ840B197mmkGN4nu/d52jCHPfecgccz2gooy+qoK3FRrMlshTTaD dbnTlNm/mkEBad8dz7l/u7cGvVG+k5AiFCGkOMikg4So0xXw7C9ulCQhoARWa0DS J0gTqEGHzGqYAwMXvWxobvlm3HxcxutWuYYx7vD0DRKrPRdpz/ELE7XpOh5bPjjU sEpt7gaAn/q962QorCDRopvqgd7MeRkrAdPKJzhCIeSUp9+Y/oqolZ/my4uEXSju W8WHWx41ioDvoUEHFW3pYljSN075STP21SCuxJh+GBDOVS2HsMXEb09wxM81GOAt V/mBLuZeptsVMiVSQk6J =/KS/ -----END PGP SIGNATURE----- From brett at python.org Thu Jul 14 14:10:01 2016 From: brett at python.org (Brett Cannon) Date: Thu, 14 Jul 2016 18:10:01 +0000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On Thu, 14 Jul 2016 at 11:05 Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 07/14/2016 11:47 AM, Guido van Rossum wrote: > > > If you intend a PR as a base for discussion you can add a comment > > saying e.g.
> > FWIW, I often use a Github label, "don't merge" (colored red for > urgency), to indicate that PRs are still in discussion stage: removing > it is a lightweight way to signify that blocking issues have been > resolved (in the opinion of the owner/maintainer, anyway). > Just start the title with `[WIP]` and it will be obvious that it's a work-in-progress (it's a GitHub idiom). From tseaver at palladion.com Thu Jul 14 14:42:40 2016 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 14 Jul 2016 14:42:40 -0400 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/14/2016 02:10 PM, Brett Cannon wrote: > On Thu, 14 Jul 2016 at 11:05 Tres Seaver > wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 >> >> On 07/14/2016 11:47 AM, Guido van Rossum wrote: >> >>> If you intend a PR as a base for discussion you can add a comment >>> saying e.g. "Don't merge yet". If you call out @gvanrossum, >>> GitHub will make sure I get a message about it. >> >> FWIW, I often use a Github label, "don't merge" (colored red for >> urgency), to indicate that PRs are still in discussion stage: >> removing it is a lightweight way to signify that blocking issues >> have been resolved (in the opinion of the owner/maintainer, >> anyway). >> > > Just start the title with `[WIP]` and it will be obvious that it's a > work-in-progress (it's a GitHub idiom). De gustibus, I guess: unlike the title, labels stay visible no matter how one scrolls the PR / issue, and they are more easily searchable. Tres.
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXh90aAAoJEPKpaDSJE9HYeFMP/3r+b6MP4VI+SLnPT6PeZdHU aVn9/bz4hMb4DtIB1adp6CBEdtxijg0Y2H6BgcmnoFcqhVO0yXquOtJmVfqQr44T zO8DY+v+eVBcw8KX5MduQt3jLq8fBXviFq0yu55bWYboQRKbUKrfzFwZFlZJ9gH7 AAdieX/26NK4RkFxePYn5dJeJ1EIX7RoRuIB8X5NPve6FA08eRUHvSicQN4Vpvey Xs+eiLcz+3pOHCu4hiERInu19lztoL5GmdC+cL3mq2A9qpKy9fEAWVRhU84VaDa1 86/jKXgoXfZt2wH7Wj/MC6Z8gXMutIyjcrjVyZEbPQe4zt5o5Vdv/M9nxk1iOnV3 sSqY72HQiiaWvwjWasv0F78LT0nKqt9+bq+aBHrF5PHd0epxInI7KQEScuB+BcaS aNNVZtSRRQhCEnO8MB6cedBv90sg2FVv8ITBNHac/Zn2ThljMJ8s90gHZZbC3T6/ uP0uvwS8aYzKJoTH5Mmxvt4m4vQCg+tintOwF8/nwN4y4kQFXZcCZqeb4l55XRAE INal/Khx0eHqd07D7BRZ/a1lKTDuyEuTifJNjZjr9fC704xplMTygJc/kuaTvMfN 4e30iKbMO4oJ3Oyrysr/2E81YlqBe9ZMGdkdBwvyYmGnIKXbmlsHHUQn1asRwF64 l5HJUWDAWxccJ8d83q0g =NdlU -----END PGP SIGNATURE----- From status at bugs.python.org Fri Jul 15 12:08:45 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 15 Jul 2016 18:08:45 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160715160845.E9AE056516@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-07-08 - 2016-07-15) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas:
  open    5562 (+12)
  closed 33715 (+39)
  total  39277 (+51)

Open issues with patches: 2429

Issues opened (33)
==================

#27469: Unicode filename gets crippled on Windows when drag and drop
http://bugs.python.org/issue27469  reopened by eryksun
#27470: -3 commandline option documented differently via man
http://bugs.python.org/issue27470  opened by mgilson
#27471: sre_constants.error: bad escape \d
http://bugs.python.org/issue27471  opened by Noah Petherbridge
#27472: add the 'unix_shell' attribute to test.support
http://bugs.python.org/issue27472  opened by xdegaye
#27477: IDLE: Switch dialogs to ttk widgets.
http://bugs.python.org/issue27477  opened by terry.reedy
#27482: heap-buffer-overflow on address 0x6250000078ff
http://bugs.python.org/issue27482  opened by mtowalski
#27483: Expose HEAD_LOCK/HEAD_UNLOCK in pystate.c
http://bugs.python.org/issue27483  opened by fijall
#27485: urllib.splitport -- is it official or not?
http://bugs.python.org/issue27485  opened by gvanrossum
#27486: FTPlib hangs on some pasv responses
http://bugs.python.org/issue27486  opened by Byoungwoo Song
#27487: -m switch regression in Python 3.5.2 (under rare circumstances
http://bugs.python.org/issue27487  opened by wolma
#27490: ARM cross-compile: pgen built without $(CFLAGS) as $(LIBRARY)
http://bugs.python.org/issue27490  opened by Thomas Perl
#27491: Errors when building with UNICODE character set
http://bugs.python.org/issue27491  opened by Minmin.Gong
#27492: Enhance bytearray_repr with bytes_repr's logic
http://bugs.python.org/issue27492  opened by xiang.zhang
#27493: logging module fails with unclear error when supplied a (Posix
http://bugs.python.org/issue27493  opened by rhendrikse
#27494: 2to3 parser failure caused by a comma after a generator expres
http://bugs.python.org/issue27494  opened by jstasiak
#27495: Pretty printing sorting for set and frozenset instances
http://bugs.python.org/issue27495  opened by danilo.bellini
#27496: unicodedata.name() doesn't have names for control characters
http://bugs.python.org/issue27496  opened by zwol
#27497: csv module: Add return value to DictWriter.writeheader
http://bugs.python.org/issue27497  opened by lsowen
#27499: PY_SSIZE_T_CLEAN conflicts with Py_LIMITED_API
http://bugs.python.org/issue27499  opened by dholth
#27500: ProactorEventLoop cannot open connection to ::1
http://bugs.python.org/issue27500  opened by sebastien.bourdeauducq
#27501: Add typing.py class describing a PEP 3118 buffer object
http://bugs.python.org/issue27501  opened by Daniel Moisset
#27502: Python -m Module Vulnerable to Buffer Over Flow.
http://bugs.python.org/issue27502  opened by DhirajMishra
#27505: Missing documentation for setting module __class__ attribute
http://bugs.python.org/issue27505  opened by ncoghlan
#27506: make bytes/bytearray delete a keyword argument
http://bugs.python.org/issue27506  opened by xiang.zhang
#27507: bytearray.extend lacks overflow check when increasing buffer
http://bugs.python.org/issue27507  opened by xiang.zhang
#27509: Some tests breaks PGO build on Windows
http://bugs.python.org/issue27509  opened by Charles G.
#27511: Add PathLike objects support to BZ2File
http://bugs.python.org/issue27511  opened by xiang.zhang
#27512: os.fspath is certain to crash when exception raised in __fspat
http://bugs.python.org/issue27512  opened by xiang.zhang
#27513: email.utils.getaddresses does not handle Header objects
http://bugs.python.org/issue27513  opened by frispete
#27515: Dotted name re-import does not rebind after deletion
http://bugs.python.org/issue27515  opened by terry.reedy
#27516: Wrong initialization of python path with embeddable distributi
http://bugs.python.org/issue27516  opened by palm.kevin
#27517: LZMACompressor and LZMADecompressor raise exceptions if given
http://bugs.python.org/issue27517  opened by benfogle
#27520: Issue when building PGO
http://bugs.python.org/issue27520  opened by Decorater

Most recent 15 issues with no replies (15)
==========================================

#27520: Issue when building PGO
http://bugs.python.org/issue27520
#27511: Add PathLike objects support to BZ2File
http://bugs.python.org/issue27511
#27505: Missing documentation for setting module __class__ attribute
http://bugs.python.org/issue27505
#27494: 2to3 parser failure caused by a comma after a generator expres
http://bugs.python.org/issue27494
#27491: Errors when building with UNICODE character set
http://bugs.python.org/issue27491
#27486: FTPlib hangs on some pasv responses
http://bugs.python.org/issue27486
#27482: heap-buffer-overflow on address 0x6250000078ff
http://bugs.python.org/issue27482
#27470: -3 commandline option documented differently via man
http://bugs.python.org/issue27470
#27451: gzip.py: Please save more of the gzip header for later examina
http://bugs.python.org/issue27451
#27446: struct: allow per-item byte order
http://bugs.python.org/issue27446
#27445: Charset instance not passed to set_payload()
http://bugs.python.org/issue27445
#27435: ctypes and AIX - also for 2.7.X (and later)
http://bugs.python.org/issue27435
#27428: Document WindowsRegistryFinder inherits from MetaPathFinder
http://bugs.python.org/issue27428
#27426: Encoding mismatch causes some tests to fail on Windows
http://bugs.python.org/issue27426
#27420: Docs for os.link - say what happens if link already exists
http://bugs.python.org/issue27420

Most recent 15 issues waiting for review (15)
=============================================

#27517: LZMACompressor and LZMADecompressor raise exceptions if given
http://bugs.python.org/issue27517
#27512: os.fspath is certain to crash when exception raised in __fspat
http://bugs.python.org/issue27512
#27511: Add PathLike objects support to BZ2File
http://bugs.python.org/issue27511
#27507: bytearray.extend lacks overflow check when increasing buffer
http://bugs.python.org/issue27507
#27506: make bytes/bytearray delete a keyword argument
http://bugs.python.org/issue27506
#27501: Add typing.py class describing a PEP 3118 buffer object
http://bugs.python.org/issue27501
#27495: Pretty printing sorting for set and frozenset instances
http://bugs.python.org/issue27495
#27492: Enhance bytearray_repr with bytes_repr's logic
http://bugs.python.org/issue27492
#27491: Errors when building with UNICODE character set
http://bugs.python.org/issue27491
#27477: IDLE: Switch dialogs to ttk widgets.
http://bugs.python.org/issue27477
#27472: add the 'unix_shell' attribute to test.support
http://bugs.python.org/issue27472
#27461: Optimize PNGs
http://bugs.python.org/issue27461
#27454: PyUnicode_InternInPlace can use PyDict_SetDefault
http://bugs.python.org/issue27454
#27453: $CPP invocation in configure must use $CPPFLAGS
http://bugs.python.org/issue27453
#27452: IDLE: Cleanup config code
http://bugs.python.org/issue27452

Top 10 most discussed issues (10)
=================================

#26988: Add AutoNumberedEnum to stdlib
http://bugs.python.org/issue26988  19 msgs
#27078: Make f'' strings faster than .format: BUILD_STRING opcode?
http://bugs.python.org/issue27078  15 msgs
#27487: -m switch regression in Python 3.5.2 (under rare circumstances
http://bugs.python.org/issue27487  15 msgs
#27512: os.fspath is certain to crash when exception raised in __fspat
http://bugs.python.org/issue27512  14 msgs
#14977: mailcap does not respect precedence in the presence of wildcar
http://bugs.python.org/issue14977  13 msgs
#18966: Threads within multiprocessing Process terminate early
http://bugs.python.org/issue18966  13 msgs
#27392: Add a server_side keyword parameter to create_connection
http://bugs.python.org/issue27392  12 msgs
#27515: Dotted name re-import does not rebind after deletion
http://bugs.python.org/issue27515  12 msgs
#27469: Unicode filename gets crippled on Windows when drag and drop
http://bugs.python.org/issue27469  11 msgs
#27497: csv module: Add return value to DictWriter.writeheader
http://bugs.python.org/issue27497  10 msgs

Issues closed (37)
==================

#8538: Add FlagAction to argparse
http://bugs.python.org/issue8538  closed by haypo
#10697: host and port attributes not documented well in function urlli
http://bugs.python.org/issue10697  closed by martin.panter
#20674: Update comments in dictobject.c
http://bugs.python.org/issue20674  closed by r.david.murray
#22125: Cure signedness warnings introduced by #22003
http://bugs.python.org/issue22125  closed by berker.peksag
#25548: Show the address in the repr for class objects
http://bugs.python.org/issue25548  closed by python-dev
#25572: _ssl doesn't build on OSX 10.11
http://bugs.python.org/issue25572  closed by matrixise
#26176: EmailMessage example doesn't work
http://bugs.python.org/issue26176  closed by r.david.murray
#26446: Mention in the devguide that core dev stuff falls under the PS
http://bugs.python.org/issue26446  closed by berker.peksag
#26896: mix-up with the terms 'importer', 'finder', 'loader' in the im
http://bugs.python.org/issue26896  closed by brett.cannon
#26972: mistakes in docstrings in the import machinery
http://bugs.python.org/issue26972  closed by brett.cannon
#27027: add the 'is_android' attribute to test.support
http://bugs.python.org/issue27027  closed by xdegaye
#27180: Doc/pathlib: Please describe the behaviour of Path().rename()
http://bugs.python.org/issue27180  closed by berker.peksag
#27285: Document the deprecation of pyvenv in favor of `python3 -m ven
http://bugs.python.org/issue27285  closed by brett.cannon
#27369: Tests break with --with-system-expat and Expat 2.2.0
http://bugs.python.org/issue27369  closed by benjamin.peterson
#27442: expose the Android API level in sysconfig.get_config_vars()
http://bugs.python.org/issue27442  closed by xdegaye
#27455: Fix tkinter examples to be PEP8 compliant
http://bugs.python.org/issue27455  closed by berker.peksag
#27466: [Copy from github user macartur] time2netscape missing comma
http://bugs.python.org/issue27466  closed by orsenthil
#27468: Erroneous memory behaviour for objects created in another thre
http://bugs.python.org/issue27468  closed by Adria Garriga
#27473: bytes_concat seems to check overflow using undefined behaviour
http://bugs.python.org/issue27473  closed by serhiy.storchaka
#27474: Unify exception in _Py_bytes_contains for integers
http://bugs.python.org/issue27474  closed by serhiy.storchaka
#27475: define_macros uses incorrect parameter for msvc compilers
http://bugs.python.org/issue27475  closed by eryksun
#27476: Introduce a .github folder with PULL_REQUEST_TEMPLATE
http://bugs.python.org/issue27476  closed by berker.peksag
#27478: Python Can't run
http://bugs.python.org/issue27478  closed by orsenthil
#27479: Slicing strings out of bounds does not raise IndexError
http://bugs.python.org/issue27479  closed by eryksun
#27480: Cannot link _crypt and _nis modules on a host with glibc-2.12
http://bugs.python.org/issue27480  closed by r.david.murray
#27481: Replace TypeError with ValueError in doc regarding "embedded N
http://bugs.python.org/issue27481  closed by serhiy.storchaka
#27484: Some Examples in Format String Syntax are incorrect or poorly
http://bugs.python.org/issue27484  closed by r.david.murray
#27488: Underscore not showing Mac El Capitan
http://bugs.python.org/issue27488  closed by zach.ware
#27489: Win 10, choco install python gets message: Access to the path
http://bugs.python.org/issue27489  closed by eryksun
#27498: Regression in repr() of class object
http://bugs.python.org/issue27498  closed by python-dev
#27503: support RUSAGE_THREAD as a constant in the resource module
http://bugs.python.org/issue27503  closed by r.david.murray
#27504: Missing assertion methods in unittest documentation
http://bugs.python.org/issue27504  closed by berker.peksag
#27508: process thread with implicit join is killed unexpectedly
http://bugs.python.org/issue27508  closed by tim.peters
#27510: Found some Solution build missconfigurations.
http://bugs.python.org/issue27510  closed by steve.dower
#27514: SystemError when compiling deeply nested for loops
http://bugs.python.org/issue27514  closed by python-dev
#27518: small typo error in Grammar/Grammar
http://bugs.python.org/issue27518  closed by berker.peksag
#27519: update the references to http://mercurial.selenic.com
http://bugs.python.org/issue27519  closed by berker.peksag

From steve.dower at python.org  Fri Jul 15 18:20:23 2016
From: steve.dower at python.org (Steve Dower)
Date: Fri, 15 Jul 2016 15:20:23 -0700
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
Message-ID:

Hi all

I'd like to get this PEP approved (status changed to Active, IIUC).

So far (to my knowledge), Anaconda is writing out the new metadata and Visual Studio is reading it. Any changes to the schema now will require somewhat public review anyway, so I don't see any harm in approving the PEP right now.
To reiterate, this doesn't require changing anything about CPython at all and has no backwards compatibility impact on official releases (but hopefully it will stop alternative distros from overwriting our essential metadata and causing problems).

I suppose I look to Guido first, unless he wants to delegate to one of the other Windows contributors?

Cheers,
Steve

URL: https://www.python.org/dev/peps/pep-0514/

Full text
---------

PEP: 514
Title: Python registration in the Windows registry
Version: $Revision$
Last-Modified: $Date$
Author: Steve Dower
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 02-Feb-2016
Post-History: 02-Feb-2016, 01-Mar-2016

Abstract
========

This PEP defines a schema for the Python registry key to allow third-party installers to register their installation, and to allow applications to detect and correctly display all Python environments on a user's machine. No implementation changes to Python are proposed with this PEP.

Python environments are not required to be registered unless they want to be automatically discoverable by external tools.

The schema matches the registry values that have been used by the official installer since at least Python 2.5, and the resolution behaviour matches the behaviour of the official Python releases.

Motivation
==========

When installed on Windows, the official Python installer creates a registry key for discovery and detection by other applications. This allows tools such as installers or IDEs to automatically detect and display a user's Python installations.

Third-party installers, such as those used by distributions, typically create identical keys for the same purpose. Most tools that use the registry to detect Python installations only inspect the keys used by the official installer. As a result, third-party installations that wish to be discoverable will overwrite these values, resulting in users "losing" their Python installation.
By describing a layout for registry keys that allows third-party installations to register themselves uniquely, as well as providing tool developers guidance for discovering all available Python installations, these collisions should be prevented.

Definitions
===========

A "registry key" is the equivalent of a file-system path into the registry. Each key may contain "subkeys" (keys nested within keys) and "values" (named and typed attributes attached to a key).

``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in user, and this user can generally read and write all settings under this root.

``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any user can read these settings but only administrators can modify them. It is typical for values under ``HKEY_CURRENT_USER`` to take precedence over those in ``HKEY_LOCAL_MACHINE``.

On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special key that 32-bit processes transparently read and write to rather than accessing the ``Software`` key directly.

Structure
=========

We consider there to be a single collection of Python environments on a machine, where the collection may be different for each user of the machine. There are three potential registry locations where the collection may be stored based on the installation options of each environment::

   HKEY_CURRENT_USER\Software\Python\<Company>\<Tag>
   HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag>
   HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag>

Environments are uniquely identified by their Company-Tag pair, with two options for conflict resolution: include everything, or give priority to user preferences.

Tools that include every installed environment, even where the Company-Tag pairs match, should ensure users can easily identify whether the registration was per-user or per-machine.
When tools are selecting a single installed environment from all registered environments, the intent is that user preferences from ``HKEY_CURRENT_USER`` will override matching Company-Tag pairs in ``HKEY_LOCAL_MACHINE``.

Official Python releases use ``PythonCore`` for Company, and the value of ``sys.winver`` for Tag. Other registered environments may use any values for Company and Tag. Recommendations are made in the following sections.

Python environments are not required to register themselves unless they want to be automatically discoverable by external tools.

Backwards Compatibility
-----------------------

Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds in ``sys.winver``. As a result, it is possible to have valid side-by-side installations of both 32-bit and 64-bit interpreters.

To ensure backwards compatibility, applications should treat environments listed under the following two registry keys as distinct, even when the Tag matches::

   HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag>
   HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag>

Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct from both of the above keys, potentially resulting in three environments discovered using the same Tag. Alternatively, a tool may determine whether the per-user environment is 64-bit or 32-bit and give it priority over the per-machine environment, resulting in a maximum of two discovered environments.

It is not possible to detect side-by-side installations of both 64-bit and 32-bit versions of Python prior to 3.5 when they have been installed for the current user. Python 3.5 and later always uses different Tags for 64-bit and 32-bit versions.

Environments registered under other Company names must use distinct Tags to support side-by-side installations. Tools consuming these registrations are not required to disambiguate tags other than by preferring the user's setting.
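The "give priority to user preferences" resolution described above amounts to a simple merge. The following sketch is illustrative only and not part of the PEP: the registry contents are modelled as plain dicts keyed by Company-Tag pair, and the paths are made up; a real tool on Windows would enumerate the actual keys with the ``winreg`` module.

```python
def resolve(per_user, per_machine):
    """Merge the two collections; user entries win on matching Company-Tag pairs."""
    merged = dict(per_machine)  # start from HKEY_LOCAL_MACHINE
    merged.update(per_user)     # HKEY_CURRENT_USER takes precedence
    return merged

# Hypothetical registrations (install prefixes keyed by Company-Tag pair).
hkcu = {("ExampleCorp", "3.6"): r"C:\Users\me\ExampleCorpPy36"}
hklm = {
    ("ExampleCorp", "3.6"): r"C:\ExampleCorpPy36",
    ("PythonCore", "3.5"): r"C:\Program Files\Python 3.5",
}

envs = resolve(hkcu, hklm)
print(envs[("ExampleCorp", "3.6")])  # the per-user registration wins
```

A tool taking the other documented option, including everything, would instead keep both matching entries and label each as per-user or per-machine.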
Company
-------

The Company part of the key is intended to group related environments and to ensure that Tags are namespaced appropriately. The key name should be alphanumeric without spaces and likely to be unique. For example, a trademarked name, a UUID, or a hostname would be appropriate::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp
   HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60
   HKEY_CURRENT_USER\Software\Python\www.example.com

The company name ``PyLauncher`` is reserved for the PEP 397 launcher (``py.exe``). It does not follow this convention and should be ignored by tools.

If a string value named ``DisplayName`` exists, it should be used to identify the environment category to users. Otherwise, the name of the key should be used.

If a string value named ``SupportUrl`` exists, it may be displayed or otherwise used to direct users to a web site related to the environment.

A complete example may look like::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp
      (Default) = (value not set)
      DisplayName = "Example Corp"
      SupportUrl = "http://www.example.com"

Tag
---

The Tag part of the key is intended to uniquely identify an environment within those provided by a single company. The key name should be alphanumeric without spaces and stable across installations. For example, the Python language version, a UUID or a partial/complete hash would be appropriate; an integer counter that increases for each new environment may not::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6
   HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66

If a string value named ``DisplayName`` exists, it should be used to identify the environment to users. Otherwise, the name of the key should be used.

If a string value named ``SupportUrl`` exists, it may be displayed or otherwise used to direct users to a web site related to the environment.

If a string value named ``Version`` exists, it should be used to identify the version of the environment.
This is independent from the version of Python implemented by the environment.

If a string value named ``SysVersion`` exists, it must be in ``x.y`` or ``x.y.z`` format matching the version returned by ``sys.version_info`` in the interpreter. Otherwise, if the Tag matches this format it is used. If not, the Python version is unknown.

Note that each of these values is recommended, but optional. A complete example may look like this::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66
      (Default) = (value not set)
      DisplayName = "Distro 3"
      SupportUrl = "http://www.example.com/distro-3"
      Version = "3.0.12345.0"
      SysVersion = "3.6.0"

InstallPath
-----------

Beneath the environment key, an ``InstallPath`` key must be created. This key is always named ``InstallPath``, and the default value must match ``sys.prefix``::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath
      (Default) = "C:\ExampleCorpPy36"

If a string value named ``ExecutablePath`` exists, it must be a path to the ``python.exe`` (or equivalent) executable. Otherwise, the interpreter executable is assumed to be called ``python.exe`` and exist in the directory referenced by the default value.

If a string value named ``WindowedExecutablePath`` exists, it must be a path to the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed interpreter executable is assumed to be called ``pythonw.exe`` and exist in the directory referenced by the default value.

A complete example may look like::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\InstallPath
      (Default) = "C:\ExampleDistro30"
      ExecutablePath = "C:\ExampleDistro30\ex_python.exe"
      WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe"

Help
----

Beneath the environment key, a ``Help`` key may be created. This key is always named ``Help`` if present and has no default value.

Each subkey of ``Help`` specifies a documentation file, tool, or URL associated with the environment.
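The ``ExecutablePath`` defaulting rule in the ``InstallPath`` section above can be sketched in a few lines. This is illustrative only, not part of the PEP: the key's values are modelled as a dict, with the empty string standing in for the ``(Default)`` value, and the paths are made up; a real tool on Windows would read them with ``winreg``.

```python
import ntpath  # Windows path semantics, usable on any platform

def executable_path(install_path_values):
    """Locate the interpreter for an InstallPath key, per the rule above.

    ``install_path_values`` maps value names to strings; the empty string
    key stands in for the key's (Default) value (i.e. sys.prefix).
    """
    explicit = install_path_values.get("ExecutablePath")
    if explicit:
        return explicit
    # No ExecutablePath value: assume python.exe in the prefix directory.
    return ntpath.join(install_path_values[""], "python.exe")

print(executable_path({"": r"C:\ExampleCorpPy36"}))
# C:\ExampleCorpPy36\python.exe
```

The ``WindowedExecutablePath`` rule is identical with ``pythonw.exe`` as the fallback name.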
The subkey may have any name, and the default value is a string appropriate for passing to ``os.startfile`` or equivalent.

If a string value named ``DisplayName`` exists, it should be used to identify the help file to users. Otherwise, the key name should be used.

A complete example may look like::

   HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help
      Python\
         (Default) = "C:\ExampleDistro30\python36.chm"
         DisplayName = "Python Documentation"
      Extras\
         (Default) = "http://www.example.com/tutorial"
         DisplayName = "Example Distro Online Tutorial"

Other Keys
----------

Some other registry keys are used for defining or inferring search paths under certain conditions. A third-party installation is permitted to define these keys under their Company-Tag key; however, the interpreter must be modified and rebuilt in order to read these values. Alternatively, the interpreter may be modified to not use any registry keys for determining search paths. Making such changes is a decision for the third party; this PEP makes no recommendation either way.

Copyright
=========

This document has been placed in the public domain.

From guido at python.org  Fri Jul 15 18:26:08 2016
From: guido at python.org (Guido van Rossum)
Date: Fri, 15 Jul 2016 15:26:08 -0700
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To:
References:
Message-ID:

I was going to delegate to our resident Windows expert, but that's you. :-(

Can you suggest someone else? I really don't want to swap in what I once knew about the Windows registry...

On Fri, Jul 15, 2016 at 3:20 PM, Steve Dower wrote:
> Hi all
>
> I'd like to get this PEP approved (status changed to Active, IIUC).
>
> So far (to my knowledge), Anaconda is writing out the new metadata and Visual Studio is reading it. Any changes to the schema now will require somewhat public review anyway, so I don't see any harm in approving the PEP right now.
> > Copyright > ========= > > This document has been placed in the public domain. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From steve.dower at python.org Fri Jul 15 18:39:16 2016 From: steve.dower at python.org (Steve Dower) Date: Fri, 15 Jul 2016 15:39:16 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: On 15Jul2016 1526, Guido van Rossum wrote: > I was going to delegate to our resident Windows expert, but that's you. :-( > > Can you suggest someone else? I really don't want to swap in what I > once knew about the Windows registry... He might not be pleased at the nomination, but Paul Moore would be my first choice. Otherwise Zach still has the obligation that comes with being labelled a Windows expert in the dev guide ;) > On Fri, Jul 15, 2016 at 3:20 PM, Steve Dower wrote: >> Hi all >> >> I'd like to get this PEP approved (status changed to Active, IIUC). >> >> So far (to my knowledge), Anaconda is writing out the new metadata and >> Visual Studio is reading it. Any changes to the schema now will require >> somewhat public review anyway, so I don't see any harm in approving the PEP >> right now. >> >> To reiterate, this doesn't require changing anything about CPython at all >> and has no backwards compatibility impact on official releases (but >> hopefully it will stop alternative distros from overwriting our essential >> metadata and causing problems). >> >> I suppose I look to Guido first, unless he wants to delegate to one of the >> other Windows contributors? 
>> >> Cheers, >> Steve >> >> URL: https://www.python.org/dev/peps/pep-0514/ >> >> Full text >> ------- >> >> PEP: 514 >> Title: Python registration in the Windows registry >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Steve Dower >> Status: Draft >> Type: Informational >> Content-Type: text/x-rst >> Created: 02-Feb-2016 >> Post-History: 02-Feb-2016, 01-Mar-2016 >> >> Abstract >> ======== >> >> This PEP defines a schema for the Python registry key to allow third-party >> installers to register their installation, and to allow applications to >> detect >> and correctly display all Python environments on a user's machine. No >> implementation changes to Python are proposed with this PEP. >> >> Python environments are not required to be registered unless they want to be >> automatically discoverable by external tools. >> >> The schema matches the registry values that have been used by the official >> installer since at least Python 2.5, and the resolution behaviour matches >> the >> behaviour of the official Python releases. >> >> Motivation >> ========== >> >> When installed on Windows, the official Python installer creates a registry >> key >> for discovery and detection by other applications. This allows tools such as >> installers or IDEs to automatically detect and display a user's Python >> installations. >> >> Third-party installers, such as those used by distributions, typically >> create >> identical keys for the same purpose. Most tools that use the registry to >> detect >> Python installations only inspect the keys used by the official installer. >> As a >> result, third-party installations that wish to be discoverable will >> overwrite >> these values, resulting in users "losing" their Python installation. 
>> >> By describing a layout for registry keys that allows third-party >> installations >> to register themselves uniquely, as well as providing tool developers >> guidance >> for discovering all available Python installations, these collisions should >> be >> prevented. >> >> Definitions >> =========== >> >> A "registry key" is the equivalent of a file-system path into the registry. >> Each >> key may contain "subkeys" (keys nested within keys) and "values" (named and >> typed attributes attached to a key). >> >> ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in >> user, >> and this user can generally read and write all settings under this root. >> >> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any >> user can read these settings but only administrators can modify them. It is >> typical for values under ``HKEY_CURRENT_USER`` to take precedence over those >> in >> ``HKEY_LOCAL_MACHINE``. >> >> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special >> key >> that 32-bit processes transparently read and write to rather than accessing >> the >> ``Software`` key directly. >> >> Structure >> ========= >> >> We consider there to be a single collection of Python environments on a >> machine, >> where the collection may be different for each user of the machine. There >> are >> three potential registry locations where the collection may be stored based >> on >> the installation options of each environment:: >> >> HKEY_CURRENT_USER\Software\Python\\ >> HKEY_LOCAL_MACHINE\Software\Python\\ >> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\\ >> >> Environments are uniquely identified by their Company-Tag pair, with two >> options >> for conflict resolution: include everything, or give priority to user >> preferences. 
>> >> Tools that include every installed environment, even where the Company-Tag >> pairs >> match, should ensure users can easily identify whether the registration was >> per-user or per-machine. >> >> When tools are selecting a single installed environment from all registered >> environments, the intent is that user preferences from ``HKEY_CURRENT_USER`` >> will override matching Company-Tag pairs in ``HKEY_LOCAL_MACHINE``. >> >> Official Python releases use ``PythonCore`` for Company, and the value of >> ``sys.winver`` for Tag. Other registered environments may use any values for >> Company and Tag. Recommendations are made in the following sections. >> >> Python environments are not required to register themselves unless they want >> to >> be automatically discoverable by external tools. >> >> Backwards Compatibility >> ----------------------- >> >> Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds >> in >> ``sys.winver``. As a result, it is possible to have valid side-by-side >> installations of both 32-bit and 64-bit interpreters. >> >> To ensure backwards compatibility, applications should treat environments >> listed >> under the following two registry keys as distinct, even when the Tag >> matches:: >> >> HKEY_LOCAL_MACHINE\Software\Python\PythonCore\ >> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\ >> >> Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct >> from >> both of the above keys, potentially resulting in three environments >> discovered >> using the same Tag. Alternatively, a tool may determine whether the per-user >> environment is 64-bit or 32-bit and give it priority over the per-machine >> environment, resulting in a maximum of two discovered environments. >> >> It is not possible to detect side-by-side installations of both 64-bit and >> 32-bit versions of Python prior to 3.5 when they have been installed for the >> current user. 
Python 3.5 and later always uses different Tags for 64-bit and >> 32-bit versions. >> >> Environments registered under other Company names must use distinct Tags to >> support side-by-side installations. Tools consuming these registrations are >> not required to disambiguate tags other than by preferring the user's >> setting. >> >> Company >> ------- >> >> The Company part of the key is intended to group related environments and to >> ensure that Tags are namespaced appropriately. The key name should be >> alphanumeric without spaces and likely to be unique. For example, a >> trademarked >> name, a UUID, or a hostname would be appropriate:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 >> HKEY_CURRENT_USER\Software\Python\www.example.com >> >> The company name ``PyLauncher`` is reserved for the PEP 397 launcher >> (``py.exe``). It does not follow this convention and should be ignored by >> tools. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the environment category to users. Otherwise, the name of the key should be >> used. >> >> If a string value named ``SupportUrl`` exists, it may be displayed or >> otherwise >> used to direct users to a web site related to the environment. >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> (Default) = (value not set) >> DisplayName = "Example Corp" >> SupportUrl = "http://www.example.com" >> >> Tag >> --- >> >> The Tag part of the key is intended to uniquely identify an environment >> within >> those provided by a single company. The key name should be alphanumeric >> without >> spaces and stable across installations. 
For example, the Python language >> version, a UUID or a partial/complete hash would be appropriate; an integer >> counter that increases for each new environment may not:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the environment to users. Otherwise, the name of the key should be used. >> >> If a string value named ``SupportUrl`` exists, it may be displayed or >> otherwise >> used to direct users to a web site related to the environment. >> >> If a string value named ``Version`` exists, it should be used to identify >> the >> version of the environment. This is independent from the version of Python >> implemented by the environment. >> >> If a string value named ``SysVersion`` exists, it must be in ``x.y`` or >> ``x.y.z`` format matching the version returned by ``sys.version_info`` in >> the >> interpreter. Otherwise, if the Tag matches this format it is used. If not, >> the >> Python version is unknown. >> >> Note that each of these values is recommended, but optional. A complete >> example >> may look like this:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 >> (Default) = (value not set) >> DisplayName = "Distro 3" >> SupportUrl = "http://www.example.com/distro-3" >> Version = "3.0.12345.0" >> SysVersion = "3.6.0" >> >> InstallPath >> ----------- >> >> Beneath the environment key, an ``InstallPath`` key must be created. This >> key is >> always named ``InstallPath``, and the default value must match >> ``sys.prefix``:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath >> (Default) = "C:\ExampleCorpPy36" >> >> If a string value named ``ExecutablePath`` exists, it must be a path to the >> ``python.exe`` (or equivalent) executable. 
Otherwise, the interpreter >> executable >> is assumed to be called ``python.exe`` and exist in the directory referenced >> by >> the default value. >> >> If a string value named ``WindowedExecutablePath`` exists, it must be a path >> to >> the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed >> interpreter executable is assumed to be called ``pythonw.exe`` and exist in >> the >> directory referenced by the default value. >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\InstallPath >> (Default) = "C:\ExampleDistro30" >> ExecutablePath = "C:\ExampleDistro30\ex_python.exe" >> WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" >> >> Help >> ---- >> >> Beneath the environment key, a ``Help`` key may be created. This key is >> always >> named ``Help`` if present and has no default value. >> >> Each subkey of ``Help`` specifies a documentation file, tool, or URL >> associated >> with the environment. The subkey may have any name, and the default value is >> a >> string appropriate for passing to ``os.startfile`` or equivalent. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the help file to users. Otherwise, the key name should be used. >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help >> Python\ >> (Default) = "C:\ExampleDistro30\python36.chm" >> DisplayName = "Python Documentation" >> Extras\ >> (Default) = "http://www.example.com/tutorial" >> DisplayName = "Example Distro Online Tutorial" >> >> Other Keys >> ---------- >> >> Some other registry keys are used for defining or inferring search paths >> under >> certain conditions. A third-party installation is permitted to define these >> keys >> under their Company-Tag key, however, the interpreter must be modified and >> rebuilt in order to read these values. 
Alternatively, the interpreter may be >> modified to not use any registry keys for determining search paths. Making >> such >> changes is a decision for the third party; this PEP makes no recommendation >> either way. >> >> Copyright >> ========= >> >> This document has been placed in the public domain. >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > From ncoghlan at gmail.com Sat Jul 16 02:52:58 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 16 Jul 2016 16:52:58 +1000 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: On 16 July 2016 at 08:20, Steve Dower wrote: > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds > in > ``sys.winver``. As a result, it is possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters. The second sentence here seems like it should say "... it is not possible ..." (since subsequent paragraphs explain that side-by-side installs of 32-bit and 64-bit versions don't really work properly until 3.5) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Sat Jul 16 05:44:53 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 16 Jul 2016 10:44:53 +0100 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: On 15 July 2016 at 23:39, Steve Dower wrote: > On 15Jul2016 1526, Guido van Rossum wrote: >> >> I was going to delegate to our resident Windows expert, but that's you. >> :-( >> >> Can you suggest someone else? I really don't want to swap in what I >> once knew about the Windows registry... 
> > He might not be pleased at the nomination, but Paul Moore would be my first > choice. :-) Thanks for the vote of confidence, Steve - if Guido's OK with it I'd be willing to do this. Paul From steve.dower at python.org Sat Jul 16 08:35:57 2016 From: steve.dower at python.org (Steve Dower) Date: Sat, 16 Jul 2016 05:35:57 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: Good catch, thanks. Top-posted from my Windows Phone -----Original Message----- From: "Nick Coghlan" Sent: 7/15/2016 23:53 To: "Steve Dower" Cc: "Python Dev" Subject: Re: [Python-Dev] PEP 514: Python registration in the Windows registry On 16 July 2016 at 08:20, Steve Dower wrote: > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds > in > ``sys.winver``. As a result, it is possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters. The second sentence here seems like it should say "... it is not possible ..." (since subsequent paragraphs explain that side-by-side installs of 32-bit and 64-bit versions don't really work properly until 3.5) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Jul 16 13:59:17 2016 From: guido at python.org (Guido van Rossum) Date: Sat, 16 Jul 2016 10:59:17 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: Yup! Paul is now officially the BDFL-delegate for PEP 514. On Sat, Jul 16, 2016 at 2:44 AM, Paul Moore wrote: > On 15 July 2016 at 23:39, Steve Dower wrote: >> On 15Jul2016 1526, Guido van Rossum wrote: >>> >>> I was going to delegate to our resident Windows expert, but that's you. >>> :-( >>> >>> Can you suggest someone else?
I really don't want to swap in what I >>> once knew about the Windows registry... >> >> >> He might not be pleased at the nomination, but Paul Moore would be my first >> choice. > > :-) Thanks for the vote of confidence, Steve - if Guido's OK with it > I'd be willing to do this. > > Paul -- --Guido van Rossum (python.org/~guido) From alexander.belopolsky at gmail.com Sat Jul 16 14:40:08 2016 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Sat, 16 Jul 2016 13:40:08 -0500 Subject: [Python-Dev] Status of Python 3.6 PEPs? In-Reply-To: References: Message-ID: On Tue, Jul 12, 2016 at 4:26 AM, Victor Stinner wrote: > > "PEP 495 -- Local Time Disambiguation" > https://www.python.org/dev/peps/pep-0495/ > => accepted > > Alexander Belopolsky asked for a review of the implementation: > https://mail.python.org/pipermail/python-dev/2016-June/145450.html Victor, I know your plate is full, but you are best qualified to review the C implementation. Tim reviewed the Python implementation early on and made several valuable suggestions, but he refuses to deal with C these days. :-( I tried to set up a Windows VM capable of building CPython, but gave up after a few futile attempts. It would be great if you could help me with the Windows port. I posted the latest patch at the Bug Tracker: http://bugs.python.org/issue24773 -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 16 15:54:02 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 16 Jul 2016 20:54:02 +0100 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: On 15 July 2016 at 23:20, Steve Dower wrote: > Hi all > > I'd like to get this PEP approved (status changed to Active, IIUC). Some comments below. > So far (to my knowledge), Anaconda is writing out the new metadata and > Visual Studio is reading it. 
Any changes to the schema now will require > somewhat public review anyway, so I don't see any harm in approving the PEP > right now. > > To reiterate, this doesn't require changing anything about CPython at all > and has no backwards compatibility impact on official releases (but > hopefully it will stop alternative distros from overwriting our essential > metadata and causing problems). Certainly there's nothing that impacts existing releases. I've noted an issue around sys.winver below, that as an absolute minimum needs a clarification in the 3.6 docs (the documented behaviour of sys.winver isn't explicit enough to provide the uniqueness guarantees this PEP needs) and may in fact need a code change or a PEP change if sys.winver doesn't actually distinguish between 32-bit and 64-bit builds (I've not been able to confirm that either way, unfortunately). [...] > Motivation > ========== > > When installed on Windows, the official Python installer creates a registry > key for discovery and detection by other applications. This allows tools such > as installers or IDEs to automatically detect and display a user's Python > installations. The PEP seems quite strongly focused on GUI tools, where the normal mode of operation would be to present the user with a list of "available installations" (with extra data where possible, not just a bare list of names) and ask for a selection. I'd like to see console tools considered as well. Basically, I'd like to avoid tool developers reading this section and thinking "it only applies to GUI tools or OS integration, not to me". For example, virtualenv introspects the available Python installations - see https://github.com/pypa/virtualenv/blob/master/virtualenv.py#L86 - to support the "-p " flag. To handle this well, it would be useful to allow distributions to register a "short tag", so that as well as "-p 3.5" or "-p 2", Virtualenv could support (say) "-p conda3.4" or "-p pypy2". 
(The short tag should be at the Company level, so "conda" or "pypy", and the version gets added to that). Another place where this might be useful is the py.exe launcher (it's not in scope for this PEP, but having the data needed to allow the launcher to invoke any available installation could be useful for future enhancements). Another key motivation for me would be to define clearly what information tools can rely on being able to get from the available registry entries describing what's installed. Whenever I've needed to scan the registry, the things I've needed to find out are where I find the Python interpreter, what Python version it is, and whether it's 32-bit or 64-bit. The first so that I can run Python, and the latter two so that I can tell if this is a version I support *without* needing to run the interpreter. For me, everything else in this PEP is about UI, but those 3 items plus the "short tag" idea are more about what capabilities I can provide. [...] > On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special > key that 32-bit processes transparently read and write to rather than > accessing the ``Software`` key directly. It might be worth being more explicit here that 32-bit and 64-bit processes see the registry keys slightly differently. More on this below. > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds > in ``sys.winver``. As a result, it is possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters. (As Nick pointed out, "it is not possible to have valid...". I'd also add "under the rules described above"). Also, Python 3.5 doesn't appear to include the architecture in sys.winver either. >py Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import sys >>> sys.winver '3.5' (Unless it adds -32 for 32-bit, and reserves the bare version for 64-bit. I've skimmed the CPython source but can't confirm that). The documentation of sys.winver makes no mention of whether it distinguishes 32- and 64-bit builds. In fact, it states "The value is normally the first three characters of version". If we're relying on sys.winver being unique by version/architecture, the docs need to say so (so that future changes don't accidentally violate that). > To ensure backwards compatibility, applications should treat environments > listed under the following two registry keys as distinct, even when the Tag > matches:: > > HKEY_LOCAL_MACHINE\Software\Python\PythonCore\ > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\ > > Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct > from both of the above keys, potentially resulting in three environments > discovered using the same Tag. Alternatively, a tool may determine whether > the per-user environment is 64-bit or 32-bit and give it priority over the > per-machine environment, resulting in a maximum of two discovered > environments. > > It is not possible to detect side-by-side installations of both 64-bit and > 32-bit versions of Python prior to 3.5 when they have been installed for the > current user. Python 3.5 and later always uses different Tags for 64-bit and > 32-bit versions. From what I can see, this latter isn't true. I presume that 64-bit uses no suffix, but 32-bit uses a "-32" suffix? This should probably be made explicit. At a minimum, if I were writing a tool to list all installed Python versions, with only what I have available to go on (the PEP and a 64-bit Python 3.5) I wouldn't be able to write correct code, as I don't have all the information I need.
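If the presumption in the preceding paragraph holds (a bare PythonCore Tag means 64-bit, while 32-bit builds get a "-32" suffix), a tool could tentatively split a Tag as below. This is a sketch of that presumption only - the thread itself notes the behaviour is unconfirmed - and ``split_tag`` is an invented helper name:

```python
# Hypothetical: split a PEP 514 Tag like "3.5" or "3.5-32" into
# (version, architecture), under the *presumed* convention that a bare
# PythonCore tag means 64-bit and a "-32" suffix means 32-bit.
def split_tag(tag, company="PythonCore"):
    version, sep, arch = tag.partition("-")
    if sep and arch in ("32", "64"):
        return version, int(arch)
    if company == "PythonCore":
        return tag, 64  # presumed default; unconfirmed in this thread
    return tag, None    # other companies: architecture unknown
```

For non-PythonCore companies a suffix-free Tag (e.g. a UUID) yields an unknown architecture, which is exactly the gap the review is pointing out.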
Also, if we expect to be able to distinguish 32 and 64 bit implementations in this way, that's putting a new restriction on sys.winver, that it returns a different value for 32-bit and 64-bit builds. If that's the case, I'd rather see that explicitly documented, both here and in the sys.winver documentation. I'd actually prefer a more explicit mechanism going forward, but as this is a "backward compatibility" section I'll save that for later. > Environments registered under other Company names must use distinct Tags to > support side-by-side installations. Tools consuming these registrations are > not required to disambiguate tags other than by preferring the user's > setting. Clarification needed here? "Environments registered under other Company names have no backward compatibility requirements, and thus each distinct environment must use a distinct Tag, to support side-by-side installations." > Company > ------- > > The Company part of the key is intended to group related environments and to > ensure that Tags are namespaced appropriately. The key name should be > alphanumeric without spaces and likely to be unique. For example, a > trademarked > name, a UUID, or a hostname would be appropriate:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 > HKEY_CURRENT_USER\Software\Python\www.example.com I'd suggest adding "Human-readable Company values are preferred". UUIDs seem like a horrible idea in practice. > If a string value named ``DisplayName`` exists, it should be used to identify > the environment category to users. Otherwise, the name of the key should be > used. > > If a string value named ``SupportUrl`` exists, it may be displayed or > otherwise used to direct users to a web site related to the environment. The next few sections are talking about what data gets included in the registry. Much of this is optional, which is perfectly OK, but there are some defaulting rules here as well. 
I think we should clearly note those data items that tools which read the data can rely on having available. For example, the "Display Name" can always be obtained, either directly or from the Company key. But the support URL may or may not exist. This is important IMO, as it provides a guide for tool writers over what details they are entitled to assume they know about a distribution. This becomes more important later, when the technical information starts appearing. It's also worth noting that "Display Name" isn't actually as useful as it sounds, in practice. A tool that relies on it would report the python.org installers as being provided by "PythonCore", which isn't particularly user friendly. Maybe we need something in the "Backward Compatibility" section going into a bit more detail as to how tools should deal with that, and maybe we need to add a "DisplayName" in 3.6+. > The Tag part of the key is intended to uniquely identify an environment > within those provided by a single company. The key name should be > alphanumeric without spaces and stable across installations. For example, the > Python language version, a UUID or a partial/complete hash would be > appropriate; an integer counter that increases for each new environment may > not:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 Again, I'd add a recommendation that human readable Tag values be used whenever possible. > If a string value named ``DisplayName`` exists, it should be used to > identify the environment to users. Otherwise, the name of the key should be used. To an extent there's the same comment here as for DisplayName for Company - it needs to be defined with consideration for how it will be used. This is, of course, more of a "quality of implementation" matter than a standards one. 
But the PEP might benefit from an example of use, maybe showing the output from a hypothetical command line tool that lists all installations on the machine. > If a string value named ``Version`` exists, it should be used to identify the > version of the environment. This is independent from the version of Python > implemented by the environment. > > If a string value named ``SysVersion`` exists, it must be in ``x.y`` or > ``x.y.z`` format matching the version returned by ``sys.version_info`` in the > interpreter. Otherwise, if the Tag matches this format it is used. If not, > the Python version is unknown. I'm not too happy with this. What's the benefit of allowing an installation to *not* provide the Python version? Instead, I'd prefer to say: 1. All installations must provide the Python version. They are free to use x.y or x.y.z. format (i.e., the micro version is optional - although again what's the benefit? Why not mandate x.y for consistency?). The rule given in SysVersion is fine (without the final sentence). 2. If CPython *does*, as I'm assuming, use 3.5-32, then that's an issue, because CPython doesn't follow the PEP. Maybe we should allow the Tag to be version-architecture. 3. Following on from (2) we should include a string value SysArchitecture for the architecture (32 or 64) as well. Again, this should always be available from the value or the Tag. The reason I think that the interpreter version and architecture should be mandatory is because otherwise a tool that (for example) only supports Python 3.4 or greater, or only 64-bit, has no way to exclude unsupported installations. So in summary: SysVersion = x.y SysArchitecture = 32 or 64 If SysArchitecture is missing, Tag must end in -32 or -64, and the part after the "-" is the architecture. If SysVersion is missing, Tag must be x.y or x.y-NN and the version is x.y. 
For backward compatibility, if Company is "PythonCore", SysArchitecture is missing, and Tag doesn't end in -NN, then SysArchitecture is 32 if the registry key is under Wow6432Node. Otherwise, it's 64 if we're a 64-bit process and 32 if we're a 32-bit process. This final heuristic could be wrong, though, and code that cannot cope with getting the wrong value (for example, it's planning on loading the Python DLL into its address space) MUST take other measures to check, or ignore any ambiguous entries. (I'm open to the above being corrected - I didn't check any references when writing it down). BTW, is there any reason why the python.org installers couldn't be modified to provide *all* the information suggested in this PEP, rather than just sticking with what we've traditionally provided? It would be a good example of how to register yourself "properly", as well as avoiding the sort of ambiguity we see above. > Note that each of these values is recommended, but optional. SysVersion and SysArchitecture (or a Tag that works as a fallback) should be mandatory. Otherwise I'm OK with this statement. > Beneath the environment key, an ``InstallPath`` key must be created. This key > is always named ``InstallPath``, and the default value must match > ``sys.prefix``:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath > (Default) = "C:\ExampleCorpPy36" > > If a string value named ``ExecutablePath`` exists, it must be a path to the > ``python.exe`` (or equivalent) executable. Otherwise, the interpreter > executable is assumed to be called ``python.exe`` and exist in the directory > referenced by the default value. > > If a string value named ``WindowedExecutablePath`` exists, it must be a path > to the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed > interpreter executable is assumed to be called ``pythonw.exe`` and exist in > the directory referenced by the default value. 
These two items assume implicitly that a Python installation must provide python.exe and pythonw.exe. I'm inclined to make this explicit. Specifically, I think it's crucial that tools can read the (console or windowed) executable path as described here, and run that executable with standard Python command line arguments, and expect it to work. Otherwise there's little point in the installation registering its existence. I can see an argument for a distribution providing just python.exe and omitting pythonw.exe (or even the other way around). But I can't see how I could write generic code to work with such a distribution. So let's disallow that possibility until someone comes up with a concrete use case [...] > Other Keys > ---------- > > Some other registry keys are used for defining or inferring search paths > under certain conditions. A third-party installation is permitted to define > these keys under their Company-Tag key, however, the interpreter must be > modified and rebuilt in order to read these values. Alternatively, the > interpreter may be modified to not use any registry keys for determining > search paths. Making such changes is a decision for the third party; this PEP > makes no recommendation either way. I think we need to be clearer here. First of all, it should probably clearly state that any subkey of the \ key (and any value of that key, I guess), unless explicitly documented in this PEP, is free for any use by the vendor (Although this may make later expansion of this PEP hard - do we want to worry about that?). We should also note that PythonCore has a number of such "private" keys, and tools should not assume any particular meaning for them. Secondly, I think we should be more explicit about the search path issue. 
Maybe something like the following (this is based on my memory of the issue, so apologies for any inaccuracy): """ The Python core has traditionally used certain other keys under the PythonCore\ key to set interpreter paths and similar. This usage is considered historical, and is retained mainly for backward compatibility[1]. Third party installations are permitted to use a similar approach under their own \ namespace, but the interpreter must be modified and rebuilt in order to read these values. Alternatively, the interpreter may be modified to not use any registry keys (not even the PythonCore ones) for determining search paths. Making such changes is a decision for the third party; this PEP makes no recommendation either way. It should be noted, however, that without modification, the Python interpreter's behaviour will be based on the values under the PythonCore namespace, not under the vendor's namespace. """ [1] Is this sentence true? IIRC, nothing new is using that feature, and older stuff that did, such as pywin32, is removing it. But I know of no actual plans to rip it out at any point. Paul From p.f.moore at gmail.com Sat Jul 16 15:59:54 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 16 Jul 2016 20:59:54 +0100 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: On 16 July 2016 at 18:59, Guido van Rossum wrote: > Yup! Paul is now officially the BDFL-delegate for PEP 514. OK. I've just been reviewing the PEP and have posted some comments. There's a lot of words(!), but I don't think there's a huge amount of substantive change, mostly it's just confirmation of intent. I'll let Steve ponder that, and if anyone else has any further comments to make, now's the time to speak up. 
Paul From ethan at stoneleaf.us Sat Jul 16 20:03:13 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sat, 16 Jul 2016 17:03:13 -0700 Subject: [Python-Dev] PEP 467: Minor API improvements to bytes, bytearray, and memoryview In-Reply-To: References: <57572E5D.4020101@stoneleaf.us> Message-ID: <578ACB41.7030706@stoneleaf.us> On 06/07/2016 10:42 PM, Serhiy Storchaka wrote: > On 07.06.16 23:28, Ethan Furman wrote: >> * Add ``bytes.iterbytes``, ``bytearray.iterbytes`` and >> ``memoryview.iterbytes`` alternative iterators > > "Byte" is an alias to "octet" (8-bit integer) in modern terminology. Maybe so, but not, to my knowledge, in Python terminology. > Iterating bytes and bytearray already produce bytes. No, it produces integers: >>> for b in b'abcid': ... print(b) ... 97 98 99 105 100 -- ~Ethan~ From lkb.teichmann at gmail.com Sun Jul 17 07:32:57 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sun, 17 Jul 2016 13:32:57 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Guido, Hi Nick, Hi list, so I just updated PEP 487, you can find it here: https://github.com/python/peps/pull/57 if it hasn't already been merged. There are no substantial updates there, I only updated the wording as suggested, and added some words about backwards compatibility as hinted by Nick. Greetings Martin 2016-07-14 17:47 GMT+02:00 Guido van Rossum : > I just reviewed the changes you made, I like __set_name__(). I'll just > wait for your next update, incorporating Nick's suggestions. Regarding > who merges PRs to the PEPs repo, since you are the author the people > who merge don't pass any judgment on the changes (unless it doesn't > build cleanly or maybe if they see a typo). If you intend a PR as a > base for discussion you can add a comment saying e.g. "Don't merge > yet". If you call out @gvanrossum, GitHub will make sure I get a > message about it. 
> > I think the substantial discussion about the PEP should remain here in > python-dev; comments about typos, grammar and other minor editorial > issues can go on GitHub. Hope this part of the process makes sense! > > On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann > wrote: >> Hi Guido, Hi list, >> >> Thanks for the nice review! I applied followed up your ideas and put >> it into a github pull request: https://github.com/python/peps/pull/53 >> >> Soon we'll be working there, until then, some responses to your comments: >> >>> I wonder if this should be renamed to __set_name__ or something else >>> that clarifies we're passing it the name of the attribute? The method >>> name __set_owner__ made me assume this is about the owning object >>> (which is often a useful term in other discussions about objects), >>> whereas it is really about telling the descriptor the name of the >>> attribute for which it applies. >> >> The name for this has been discussed a bit already, __set_owner__ was >> Nick's idea, and indeed, the owner is also set. Technically, >> __set_owner_and_name__ would be correct, but actually I like your idea >> of __set_name__. >> >>> That (inheriting type from type, and object from object) is very >>> confusing. Why not just define new classes e.g. NewType and NewObject >>> here, since it's just pseudo code anyway? >> >> Actually, it's real code. If you drop those lines at the beginning of >> the tests for the implementation (as I have done here: >> https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py), >> the test runs on older Pythons. >> >> But I see that my idea to formulate things here in Python was a bad >> idea, I will put the explanation first and turn the code into >> pseudo-code. >> >>>> def __init__(self, name, bases, ns, **kwargs): >>>> super().__init__(name, bases, ns) >>> >>> What does this definition of __init__ add? >> >> It removes the keyword arguments. I describe that in prose a bit down. 
>> >>>> class object: >>>> @classmethod >>>> def __init_subclass__(cls): >>>> pass >>>> >>>> class object(object, metaclass=type): >>>> pass >>> >>> Eek! Too many things named object. >> >> Well, I had to do that to make the tests run... I'll take that out. >> >>>> In the new code, it is not ``__init__`` that complains about keyword arguments, >>>> but ``__init_subclass__``, whose default implementation takes no arguments. In >>>> a classical inheritance scheme using the method resolution order, each >>>> ``__init_subclass__`` may take out it's keyword arguments until none are left, >>>> which is checked by the default implementation of ``__init_subclass__``. >>> >>> I called this out previously, and I am still a bit uncomfortable with >>> the backwards incompatibility here. But I believe what you describe >>> here is the compromise proposed by Nick, and if that's the case I have >>> peace with it. >> >> No, this is not Nick's compromise, this is my original. Nick just sent >> another mail to this list where he goes a bit more into the details, >> I'll respond to that about this topic. >> >> Greetings >> >> Martin >> >> P.S.: I just realized that my changes to the PEP were accepted by >> someone else than Guido. I am a bit surprised about that, but I guess >> this is how it works? >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) From guido at python.org Sun Jul 17 12:57:59 2016 From: guido at python.org (Guido van Rossum) Date: Sun, 17 Jul 2016 09:57:59 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: This PEP is now accepted for inclusion in Python 3.6. Martin, congratulations! 
Someone (not me) needs to review and commit your changes, before September 12, when the 3.6 feature freeze goes into effect (see https://www.python.org/dev/peps/pep-0494/#schedule). On Sun, Jul 17, 2016 at 4:32 AM, Martin Teichmann wrote: > Hi Guido, Hi Nick, Hi list, > > so I just updated PEP 487, you can find it here: > https://github.com/python/peps/pull/57 if it hasn't already been > merged. There are no substantial updates there, I only updated the > wording as suggested, and added some words about backwards > compatibility as hinted by Nick. > > Greetings > > Martin > > 2016-07-14 17:47 GMT+02:00 Guido van Rossum : >> I just reviewed the changes you made, I like __set_name__(). I'll just >> wait for your next update, incorporating Nick's suggestions. Regarding >> who merges PRs to the PEPs repo, since you are the author the people >> who merge don't pass any judgment on the changes (unless it doesn't >> build cleanly or maybe if they see a typo). If you intend a PR as a >> base for discussion you can add a comment saying e.g. "Don't merge >> yet". If you call out @gvanrossum, GitHub will make sure I get a >> message about it. >> >> I think the substantial discussion about the PEP should remain here in >> python-dev; comments about typos, grammar and other minor editorial >> issues can go on GitHub. Hope this part of the process makes sense! >> >> On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann >> wrote: >>> Hi Guido, Hi list, >>> >>> Thanks for the nice review! I applied followed up your ideas and put >>> it into a github pull request: https://github.com/python/peps/pull/53 >>> >>> Soon we'll be working there, until then, some responses to your comments: >>> >>>> I wonder if this should be renamed to __set_name__ or something else >>>> that clarifies we're passing it the name of the attribute? 
The method >>>> name __set_owner__ made me assume this is about the owning object >>>> (which is often a useful term in other discussions about objects), >>>> whereas it is really about telling the descriptor the name of the >>>> attribute for which it applies. >>> >>> The name for this has been discussed a bit already, __set_owner__ was >>> Nick's idea, and indeed, the owner is also set. Technically, >>> __set_owner_and_name__ would be correct, but actually I like your idea >>> of __set_name__. >>> >>>> That (inheriting type from type, and object from object) is very >>>> confusing. Why not just define new classes e.g. NewType and NewObject >>>> here, since it's just pseudo code anyway? >>> >>> Actually, it's real code. If you drop those lines at the beginning of >>> the tests for the implementation (as I have done here: >>> https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py), >>> the test runs on older Pythons. >>> >>> But I see that my idea to formulate things here in Python was a bad >>> idea, I will put the explanation first and turn the code into >>> pseudo-code. >>> >>>>> def __init__(self, name, bases, ns, **kwargs): >>>>> super().__init__(name, bases, ns) >>>> >>>> What does this definition of __init__ add? >>> >>> It removes the keyword arguments. I describe that in prose a bit down. >>> >>>>> class object: >>>>> @classmethod >>>>> def __init_subclass__(cls): >>>>> pass >>>>> >>>>> class object(object, metaclass=type): >>>>> pass >>>> >>>> Eek! Too many things named object. >>> >>> Well, I had to do that to make the tests run... I'll take that out. >>> >>>>> In the new code, it is not ``__init__`` that complains about keyword arguments, >>>>> but ``__init_subclass__``, whose default implementation takes no arguments. 
In >>>>> a classical inheritance scheme using the method resolution order, each >>>>> ``__init_subclass__`` may take out it's keyword arguments until none are left, >>>>> which is checked by the default implementation of ``__init_subclass__``. >>>> >>>> I called this out previously, and I am still a bit uncomfortable with >>>> the backwards incompatibility here. But I believe what you describe >>>> here is the compromise proposed by Nick, and if that's the case I have >>>> peace with it. >>> >>> No, this is not Nick's compromise, this is my original. Nick just sent >>> another mail to this list where he goes a bit more into the details, >>> I'll respond to that about this topic. >>> >>> Greetings >>> >>> Martin >>> >>> P.S.: I just realized that my changes to the PEP were accepted by >>> someone else than Guido. I am a bit surprised about that, but I guess >>> this is how it works? >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From lkb.teichmann at gmail.com Sun Jul 17 13:01:04 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sun, 17 Jul 2016 19:01:04 +0200 Subject: [Python-Dev] __qualname__ exposed as a local variable: standard? In-Reply-To: References: Message-ID: Hi, so I did quite some research on this topic. And what I found out is that __qualname__ needs to exist in the namespace. Not necessarily because it should be used, but because it may be modified. 
The story goes as follows: the compiler sets the __qualname__ at the beginning of the class body. Within the class body, it may be modified as needed. Then type.__new__ takes it and uses it. Now one could think that instead of setting the __qualname__ at the beginning of the class body, we could do so at the end so as not to clutter the namespace, and only if the __qualname__ has been set in the class body we would use the user-supplied version. But this is forgetting __prepare__: unfortunately, we have no good way to find out whether something has been set in a class body, because we have no guarantee that the object returned by __prepare__ doesn't do something weird, such as autogenerating values for all requested keys. > To Martin: it would be easier for people (even myself, who implemented > this super() hack eons ago) to review your patch if you were able to > explain the current and proposed behavior more precisely. I tried to give some context on my issue (http://bugs.python.org/issue23722). Hope that helps. Greetings Martin From guido at python.org Sun Jul 17 13:18:01 2016 From: guido at python.org (Guido van Rossum) Date: Sun, 17 Jul 2016 10:18:01 -0700 Subject: [Python-Dev] __qualname__ exposed as a local variable: standard? In-Reply-To: References: Message-ID: So, for __qualname__, should we just update the docs to make this the law? If that's your recommendation, I'm fine with it, and you can submit a doc patch. On Sun, Jul 17, 2016 at 10:01 AM, Martin Teichmann wrote: > Hi, > > so I did quite some research on this topic. And what I found out is > that __qualname__ needs to exist in the namespace. Not necessarily > because it should be used, but because it may be modified. > > The story goes as follows: the compiler sets the __qualname__ at the > beginning of the class body. Within the class body, it may be modified > as needed. Then type.__new__ takes it and uses it. 
> > Now one could think that instead of setting the __qualname__ at the > beginning of the class body, we could do so at the end as to not > clutter the namespace, and only if the __qualname__ has been set in > the class body we would use the user-supplied version. But this is > forgetting __prepare__: unfortunately, we have no good way to find out > whether something has been set in a class body, because we have no > guarantee that the object returned by __prepare__ doesn't do something > weird, as autogenerating values for all requested keys. > >> To Martin: it would be easier for people (even myself, who implemented >> this super() hack eons ago) to review your patch if you were able to >> explain the current and proposed behavior more precisely. > > I tried to give some context on my issue > (http://bugs.python.org/issue23722). Hope that helps. > > Greetings > > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From ethan at stoneleaf.us Sun Jul 17 14:04:06 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 17 Jul 2016 11:04:06 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: <578BC896.8090907@stoneleaf.us> On 07/17/2016 09:57 AM, Guido van Rossum wrote: > This PEP is now accepted for inclusion in Python 3.6. Martin, > congratulations! Congratulations, Martin! I'm looking forward to this feature. :) -- ~Ethan~ From lkb.teichmann at gmail.com Sun Jul 17 15:58:35 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Sun, 17 Jul 2016 21:58:35 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: > This PEP is now accepted for inclusion in Python 3.6. Martin, > congratulations! 
Thank you very much! What a great news! Greetings Martin From steve.dower at python.org Mon Jul 18 12:33:22 2016 From: steve.dower at python.org (Steve Dower) Date: Mon, 18 Jul 2016 09:33:22 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: Message-ID: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org> On 16Jul2016 1254, Paul Moore wrote: > On 15 July 2016 at 23:20, Steve Dower wrote: >> Hi all >> >> I'd like to get this PEP approved (status changed to Active, IIUC). > > Some comments below. Awesome, thanks! Posted a pull request at https://github.com/python/peps/pull/59 for ease of diff reading, and some commentary below (with aggressive snipping). >> Motivation >> ========== >> >> When installed on Windows, the official Python installer creates a registry >> key for discovery and detection by other applications. This allows tools such >> as installers or IDEs to automatically detect and display a user's Python >> installations. > > The PEP seems quite strongly focused on GUI tools ... I'd like to avoid tool > developers reading this section and thinking "it only applies to GUI tools or > OS integration, not to me". Agreed. I tried to avoid any console/GUI-specific terms, but I can probably be more explicit about it being useful to both. > For example, virtualenv introspects the available Python installations > - see https://github.com/pypa/virtualenv/blob/master/virtualenv.py#L86 > - to support the "-p " flag. To handle this well, it > would be useful to allow distributions to register a "short tag", so > that as well as "-p 3.5" or "-p 2", Virtualenv could support (say) "-p > conda3.4" or "-p pypy2". (The short tag should be at the Company > level, so "conda" or "pypy", and the version gets added to that). 
> > Another place where this might be useful is the py.exe launcher (it's > not in scope for this PEP, but having the data needed to allow the > launcher to invoke any available installation could be useful for > future enhancements). virtualenv would be a great example to use. My thinking was that the Tag should be appropriate here (perhaps with the Company to disambiguate when necessary), and that is now explicit. Anaconda currently has "Anaconda_4.1.1_64-bit" as their tag, which would not be convenient, so an explicit suggestion here would help ensure this is useful. > Another key motivation for me would be to define clearly what > information tools can rely on being able to get from the available > registry entries describing what's installed. Whenever I've needed to > scan the registry, the things I've needed to find out are where I find > the Python interpreter, what Python version it is, and whether it's > 32-bit or 64-bit. The first so that I can run Python, and the latter > two so that I can tell if this is a version I support *without* > needing to run the interpreter. For me, everything else in this PEP is > about UI, but those 3 items plus the "short tag" idea are more about > what capabilities I can provide. Good points. I discussed architecture with a colleague at one point and I'm not entirely sure it's universally useful (what architecture is IronPython when built for Any CPU? what architecture is Jython?), but maybe something like the contents of importlib.machinery.IMPORT_SUFFIXES would be? >> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special >> key that 32-bit processes transparently read and write to rather than >> accessing the ``Software`` key directly. > > It might be worth being more explicit here that 32-bit and 64-bit > processes see the registry keys slightly differently. More on this > below. I considered this and thought I had a link to the official docs about it. 
I don't want this PEP to be mistaken for documentation on how registry redirection works :) >> Backwards Compatibility >> ----------------------- >> > Also, Python 3.5 doesn't appear to include the architecture in > sys.winver either. > > ... > (Unless it adds -32 for 32-bit, and reserves the bare version for > 64-bit. I've skimmed the CPython source but can't confirm that). The > documentation of sys.winver makes no mention of whether it > distinguishes 32- and 64-bit builds. In fact, it states "The value is > normally the first three characters of version". If we're relying on > sys.winver being unique by version/architecture, the docs need to say > so (so that future changes don't accidentally violate that). I'll update the docs, but your guess is correct. I changed sys.winver on 32-bit to be "3.5-32" since that matches what py.exe was already using to refer to it. I didn't want to invent yet-another-way-to-tag-architectures. (I also updated py.exe to match tags directly under PythonCore, so 3.5-32 is matched without scanning the binary type.) Also, sys.winver is defined in PCBuild/python.props, which is how we accidentally backported the suffix to 2.7.11 :( >> It is not possible to detect side-by-side installations of both 64-bit and >> 32-bit versions of Python prior to 3.5 when they have been installed for the >> current user. Python 3.5 and later always uses different Tags for 64-bit and >> 32-bit versions. > > From what I can see, this latter isn't true. I presume that 64-bit > uses no suffix, but 32-bit uses a "-32" suffix? This should probably > be made explicit. At a minimum, if I were writing a tool to list all > installed Python versions, with only what I have available to go on > (the PEP and a 64-bit Python 3.5) I wouldn't be able to write correct > code, as I don't have all the information I need. 
> > Also, if we expect to be able to distinguish 32 and 64 bit > implementations in this way, that's putting a new restriction on > sys.winver, that it returns a different value for 32-bit and 64-bit > builds. If that's the case, I'd rather see that explicitly documented, > both here and in the sys.winver documentation. > > I'd actually prefer a more explicit mechanism going forward, but as > this is a "backward compatibility" section I'll save that for later. I don't want to lock in the actual scheme of the tags used by CPython. Granted, without SysArchitecture in the key you can't identify the architecture from the information you have, but Tags are supposed to be treated as opaque with the exception of those that were released prior to 3.5 (also not explicit, so I'll fix that). >> The Company part of the key is intended to group related environments and to >> ensure that Tags are namespaced appropriately. The key name should be >> alphanumeric without spaces and likely to be unique. For example, a >> trademarked >> name, a UUID, or a hostname would be appropriate:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 >> HKEY_CURRENT_USER\Software\Python\www.example.com > > I'd suggest adding "Human-readable Company values are preferred". > UUIDs seem like a horrible idea in practice. Maybe, but name-squatting seems equally bad (and I've seen enough collaborative registration systems like this to see the value in guaranteed uniqueness). I'll recommend trademarked names though. > It's also worth noting that "Display Name" isn't actually as useful as > it sounds, in practice. A tool that relies on it would report the > python.org installers as being provided by "PythonCore", which isn't > particularly user friendly. 
Maybe we need something in the "Backward > Compatibility" section going into a bit more detail as to how tools > should deal with that, and maybe we need to add a "DisplayName" in > 3.6+. I'll specify defaults for PythonCore on each of them and add them to 3.6 (once the PEP is approved and we agree on the values). It certainly doesn't harm the usefulness, but we do want to make sure that tools are handling old PythonCore entries in a consistent way. >> If a string value named ``DisplayName`` exists, it should be used to >> identify the environment to users. Otherwise, the name of the key should be used. > > To an extent there's the same comment here as for DisplayName for > Company - it needs to be defined with consideration for how it will be > used. This is, of course, more of a "quality of implementation" matter > than a standards one. But the PEP might benefit from an example of > use, maybe showing the output from a hypothetical command line tool > that lists all installations on the machine. It's defined as being used to "identify ... to users". Equally, the SupportUrl "may be displayed or otherwise used to direct users to [support]". I feel that these are strong enough definitions, and that showing an hypothetical command line tool output might be seen as too prescriptive (or alternatively, swinging the pendulum too far away from GUI tools). >> If a string value named ``Version`` exists, it should be used to identify the >> version of the environment. This is independent from the version of Python >> implemented by the environment. >> >> If a string value named ``SysVersion`` exists, it must be in ``x.y`` or >> ``x.y.z`` format matching the version returned by ``sys.version_info`` in the >> interpreter. Otherwise, if the Tag matches this format it is used. If not, >> the Python version is unknown. > > I'm not too happy with this. [...] > >> Note that each of these values is recommended, but optional. 
> > SysVersion and SysArchitecture (or a Tag that works as a fallback) > should be mandatory. Otherwise I'm OK with this statement. Snipped most of the details because I agree it's unsatisfying right now, but I disagree with enough of the counterproposal that it was getting to be messy commenting on each bit. Basically, I added SysArchitecture (to match platform.architecture()[0], typically '32bit' or '64bit' but extensible without having to define all potential values in the PEP) with a note that for PythonCore it should be inferred from the registry path. SysVersion no longer allows inferring it from the Tag, except for PythonCore and only when SysVersion is missing. I'm very keen to not force any of this information to be required as it is very difficult to know how to deal with interpreters that don't include it. Does virtualenv refuse to list/use it, even if the install path is valid, just because SysArchitecture was omitted? What if the registry becomes corrupt - should Visual Studio refuse to show what information it can obtain? I already stated that all information is recommended. The change I've made now is that tools shouldn't work too hard to guess - if SysVersion is missing, they can simply say they don't know what version the interpreter is. If knowing the version is critically important, they can refuse to use it, but for many applications this is not going to be the case. And Python 3.6 will specify all of the keys. Adding it to Python 3.5 is only likely to cause issues now with people who test against 3.5.3 and not 3.5.2, which didn't have the keys. >> Beneath the environment key, an ``InstallPath`` key must be created. This key >> is always named ``InstallPath``, and the default value must match >> ``sys.prefix``:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath >> (Default) = "C:\ExampleCorpPy36" >> >> If a string value named ``ExecutablePath`` exists, it must be a path to the >> ``python.exe`` (or equivalent) executable. 
>> Otherwise, the interpreter
>> executable is assumed to be called ``python.exe`` and exist in the directory
>> referenced by the default value.
>>
>> If a string value named ``WindowedExecutablePath`` exists, it must be a path
>> to the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed
>> interpreter executable is assumed to be called ``pythonw.exe`` and exist in
>> the directory referenced by the default value.
>
> These two items assume implicitly that a Python installation must
> provide python.exe and pythonw.exe. I'm inclined to make this
> explicit. Specifically, I think it's crucial that tools can read the
> (console or windowed) executable path as described here, and run that
> executable with standard Python command line arguments, and expect it
> to work. Otherwise there's little point in the installation
> registering its existence.

Again, there's a backwards compatibility argument here, in that Python 3.4
and earlier did not create full paths to the executables. But that can be
called out separately.

When you say "assume implicitly ... python.exe and pythonw.exe", do you
mean executables by those names (counterexample - IronPython includes
ipy.exe and ipyw.exe, which you either know specially or would discover
from these keys)? Or BOTH executables (e.g. python.exe AND pythonw.exe)?
Or executables with equivalent behaviour?

I'd argue that the whole PEP only applies to Python interpreters, and if
you don't support standard command line arguments you aren't really a
Python interpreter and shouldn't be registering as one. But I hesitate to
try and define a hard rule that captures all the possible nuances here -
I'd rather deal with it by having users file bugs against offending
interpreters for not working correctly.

> I can see an argument for a distribution providing just python.exe and
> omitting pythonw.exe (or even the other way around). But I can't see
> how I could write generic code to work with such a distribution. So
> let's disallow that possibility until someone comes up with a concrete
> use case

I think in this case, you'd either specify both keys with the same path
(so tools that want the windowed executable are going to get a console
window) or omit the key and make sure you don't have a "pythonw.exe" in
your install directory (which seems unlikely :) ).

These keys are mainly about the possibility of renaming the executables,
as shown in the example. But I've added a note reminding tools developers
that the executable may not exist, and attempting to launch it may fail
(i.e. business as usual).

>> Other Keys
>> ----------
>>
>> Some other registry keys are used for defining or inferring search paths
>> under certain conditions. A third-party installation is permitted to
>> define these keys under their Company-Tag key, however, the interpreter
>> must be modified and rebuilt in order to read these values.
>> Alternatively, the interpreter may be modified to not use any registry
>> keys for determining search paths. Making such changes is a decision for
>> the third party; this PEP makes no recommendation either way.
>
> I think we need to be clearer here. ...

Great suggestion. I've revised this section.

(I have vague plans to make the PythonPath subkey redundant in more cases
for 3.6, and I *think* we can probably drop the Modules key completely,
but I'm not entirely sure it's a good idea. Still thinking about it :) )

Cheers,
Steve

From p.f.moore at gmail.com  Mon Jul 18 13:01:19 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 18 Jul 2016 18:01:19 +0100
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org>
References: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org>
Message-ID:

On 18 July 2016 at 17:33, Steve Dower wrote:
>> Some comments below.
>
> Awesome, thanks!
> Posted a pull request at
> https://github.com/python/peps/pull/59 for ease of diff reading, and some
> commentary below (with aggressive snipping).

Thanks - I'll do a proper review of that, but just wanted to make a few
comments here.

> virtualenv would be a great example to use. My thinking was that the Tag
> should be appropriate here (perhaps with the Company to disambiguate when
> necessary), and that is now explicit.
>
> Anaconda currently has "Anaconda_4.1.1_64-bit" as their tag, which would not
> be convenient, so an explicit suggestion here would help ensure this is
> useful.

Yeah, that's not a useful value for this use case. What I'm thinking of is
that currently a number of projects (for example, virtualenv, tox, and a
personal PowerShell wrapper I have round virtualenv) do this registry
introspection exercise, purely to provide a "more convenient" way of
specifying a Python version than giving the full path to the interpreter.
Unix users have versioned executables, so -p python3.5 works fine, but
Windows users don't have that. So my idea is "something as easy to
remember as python3.5".

But having said this, we're talking about a theoretical extension to
existing functionality that probably has marginal utility at best, so I
don't want to get hung up on details here.

> Snipped most of the details because I agree it's unsatisfying right now, but
> I disagree with enough of the counterproposal that it was getting to be
> messy commenting on each bit.

I take your points here. What I was trying to avoid (because I've
encountered it myself) is having to actually *run* the Python interpreter
to extract this information. Unix code does this freely, because running
subprocesses is so cheap there, but starting up a load of processes on
Windows is a non-trivial cost.

But again, this is in the area of "potential use cases" rather than "we
need it now", so I'm OK with deferring the question if you're uncertain.

OK, that's enough off-the-cuff responses.
I'll find some time to review your PR (probably tomorrow) and comment there. Paul From ericsnowcurrently at gmail.com Mon Jul 18 14:15:05 2016 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 18 Jul 2016 12:15:05 -0600 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Great job, Martin! Thanks for seeing this through. :) -eric On Sun, Jul 17, 2016 at 10:57 AM, Guido van Rossum wrote: > This PEP is now accepted for inclusion in Python 3.6. Martin, > congratulations! Someone (not me) needs to review and commit your > changes, before September 12, when the 3.6 feature freeze goes into > effect (see https://www.python.org/dev/peps/pep-0494/#schedule). > > On Sun, Jul 17, 2016 at 4:32 AM, Martin Teichmann > wrote: >> Hi Guido, Hi Nick, Hi list, >> >> so I just updated PEP 487, you can find it here: >> https://github.com/python/peps/pull/57 if it hasn't already been >> merged. There are no substantial updates there, I only updated the >> wording as suggested, and added some words about backwards >> compatibility as hinted by Nick. >> >> Greetings >> >> Martin >> >> 2016-07-14 17:47 GMT+02:00 Guido van Rossum : >>> I just reviewed the changes you made, I like __set_name__(). I'll just >>> wait for your next update, incorporating Nick's suggestions. Regarding >>> who merges PRs to the PEPs repo, since you are the author the people >>> who merge don't pass any judgment on the changes (unless it doesn't >>> build cleanly or maybe if they see a typo). If you intend a PR as a >>> base for discussion you can add a comment saying e.g. "Don't merge >>> yet". If you call out @gvanrossum, GitHub will make sure I get a >>> message about it. >>> >>> I think the substantial discussion about the PEP should remain here in >>> python-dev; comments about typos, grammar and other minor editorial >>> issues can go on GitHub. Hope this part of the process makes sense! 
>>> >>> On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann >>> wrote: >>>> Hi Guido, Hi list, >>>> >>>> Thanks for the nice review! I applied followed up your ideas and put >>>> it into a github pull request: https://github.com/python/peps/pull/53 >>>> >>>> Soon we'll be working there, until then, some responses to your comments: >>>> >>>>> I wonder if this should be renamed to __set_name__ or something else >>>>> that clarifies we're passing it the name of the attribute? The method >>>>> name __set_owner__ made me assume this is about the owning object >>>>> (which is often a useful term in other discussions about objects), >>>>> whereas it is really about telling the descriptor the name of the >>>>> attribute for which it applies. >>>> >>>> The name for this has been discussed a bit already, __set_owner__ was >>>> Nick's idea, and indeed, the owner is also set. Technically, >>>> __set_owner_and_name__ would be correct, but actually I like your idea >>>> of __set_name__. >>>> >>>>> That (inheriting type from type, and object from object) is very >>>>> confusing. Why not just define new classes e.g. NewType and NewObject >>>>> here, since it's just pseudo code anyway? >>>> >>>> Actually, it's real code. If you drop those lines at the beginning of >>>> the tests for the implementation (as I have done here: >>>> https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py), >>>> the test runs on older Pythons. >>>> >>>> But I see that my idea to formulate things here in Python was a bad >>>> idea, I will put the explanation first and turn the code into >>>> pseudo-code. >>>> >>>>>> def __init__(self, name, bases, ns, **kwargs): >>>>>> super().__init__(name, bases, ns) >>>>> >>>>> What does this definition of __init__ add? >>>> >>>> It removes the keyword arguments. I describe that in prose a bit down. 
>>>> >>>>>> class object: >>>>>> @classmethod >>>>>> def __init_subclass__(cls): >>>>>> pass >>>>>> >>>>>> class object(object, metaclass=type): >>>>>> pass >>>>> >>>>> Eek! Too many things named object. >>>> >>>> Well, I had to do that to make the tests run... I'll take that out. >>>> >>>>>> In the new code, it is not ``__init__`` that complains about keyword arguments, >>>>>> but ``__init_subclass__``, whose default implementation takes no arguments. In >>>>>> a classical inheritance scheme using the method resolution order, each >>>>>> ``__init_subclass__`` may take out it's keyword arguments until none are left, >>>>>> which is checked by the default implementation of ``__init_subclass__``. >>>>> >>>>> I called this out previously, and I am still a bit uncomfortable with >>>>> the backwards incompatibility here. But I believe what you describe >>>>> here is the compromise proposed by Nick, and if that's the case I have >>>>> peace with it. >>>> >>>> No, this is not Nick's compromise, this is my original. Nick just sent >>>> another mail to this list where he goes a bit more into the details, >>>> I'll respond to that about this topic. >>>> >>>> Greetings >>>> >>>> Martin >>>> >>>> P.S.: I just realized that my changes to the PEP were accepted by >>>> someone else than Guido. I am a bit surprised about that, but I guess >>>> this is how it works? 
>>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com From ethan at stoneleaf.us Mon Jul 18 15:58:17 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 18 Jul 2016 12:58:17 -0700 Subject: [Python-Dev] PEP 467: Minor API improvements to bytes, bytearray, and memoryview In-Reply-To: References: <57572E5D.4020101@stoneleaf.us> Message-ID: <578D34D9.4080601@stoneleaf.us> On 06/07/2016 02:34 PM, Koos Zevenhoven wrote: > Why not bytes.viewbytes (or whatever name) so that one could also > subscript it? And if it were a property, one could perhaps > conveniently get the n'th byte: > > b'abcde'.viewbytes[n] # compared to b'abcde'[n:n+1] AFAICT, 'viewbytes' doesn't add much over bytes itself if we add a 'getbyte' method. > Also, would it not be more clear to call the int -> bytes method > something like bytes.fromint or bytes.fromord and introduce the same > thing on str? And perhaps allow multiple arguments to create a > str/bytes of length > 1. I guess this may violate TOOWTDI, but anyway, > just a thought. Yes, it would. Changing to 'bytes.fromint'. 
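For context, the difference between plain indexing and the ``[n:n+1]``
spelling being compared here can be shown with current Python behaviour
(nothing below is proposed API, just today's semantics):

```python
data = b'abcde'
n = 2

# Indexing a bytes object in Python 3 returns an int, which is what the
# proposed viewbytes/getbyte spellings aim to improve on:
assert data[n] == 99          # ord('c')

# The current workaround is a one-element slice, which returns bytes:
assert data[n:n+1] == b'c'
```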
--
~Ethan~

From ethan at stoneleaf.us  Mon Jul 18 16:17:50 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 18 Jul 2016 13:17:50 -0700
Subject: [Python-Dev] PEP 467: next round
Message-ID: <578D396E.4050304@stoneleaf.us>

Taking into consideration the comments from the last round:

- 'bytes.zeros' renamed to 'bytes.size', with optional byte filler
  (defaults to b'\x00')
- 'bytes.byte' renamed to 'fromint', add 'bchr' function
- deprecation and removal softened to deprecation/discouragement

-----------

PEP: 467
Title: Minor API improvements for binary sequences
Version: $Revision$
Last-Modified: $Date$
Author: Nick Coghlan , Ethan Furman
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 2014-03-30
Python-Version: 3.6
Post-History: 2014-03-30 2014-08-15 2014-08-16 2016-06-07

Abstract
========

During the initial development of the Python 3 language specification, the
core ``bytes`` type for arbitrary binary data started as the mutable type
that is now referred to as ``bytearray``. Other aspects of operating in the
binary domain in Python have also evolved over the course of the Python 3
series.
This PEP proposes five small adjustments to the APIs of the ``bytes``,
``bytearray`` and ``memoryview`` types to make it easier to operate
entirely in the binary domain:

* Deprecate passing single integer values to ``bytes`` and ``bytearray``
* Add ``bytes.size`` and ``bytearray.size`` alternative constructors
* Add ``bytes.fromint`` and ``bytearray.fromint`` alternative constructors
* Add ``bytes.getbyte`` and ``bytearray.getbyte`` byte retrieval methods
* Add ``bytes.iterbytes``, ``bytearray.iterbytes`` and
  ``memoryview.iterbytes`` alternative iterators

Proposals
=========

Deprecation of current "zero-initialised sequence" behaviour without removal
----------------------------------------------------------------------------

Currently, the ``bytes`` and ``bytearray`` constructors accept an integer
argument and interpret it as meaning to create a zero-initialised sequence
of the given size::

    >>> bytes(3)
    b'\x00\x00\x00'
    >>> bytearray(3)
    bytearray(b'\x00\x00\x00')

This PEP proposes to deprecate that behaviour in Python 3.6, but to leave
it in place for at least as long as Python 2.7 is supported, possibly
indefinitely.

No other changes are proposed to the existing constructors.

Addition of explicit "count and byte initialised sequence" constructors
-----------------------------------------------------------------------

To replace the deprecated behaviour, this PEP proposes the addition of an
explicit ``size`` alternative constructor as a class method on both
``bytes`` and ``bytearray`` whose first argument is the count, and whose
second argument is the fill byte to use (defaults to ``\x00``)::

    >>> bytes.size(3)
    b'\x00\x00\x00'
    >>> bytearray.size(3)
    bytearray(b'\x00\x00\x00')
    >>> bytes.size(5, b'\x0a')
    b'\x0a\x0a\x0a\x0a\x0a'
    >>> bytearray.size(5, b'\x0a')
    bytearray(b'\x0a\x0a\x0a\x0a\x0a')

It will behave just as the current constructors behave when passed a
single integer.
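For comparison, the proposed behaviour can be emulated in current Python
with sequence repetition. The ``fromsize`` helper below is purely
illustrative (the PEP itself proposes a class method, not a module-level
function), but shows the intended semantics:

```python
def fromsize(count, fill=b'\x00'):
    """Illustrative stand-in for the proposed constructor: build a
    fill-initialised sequence of `count` bytes using today's spelling."""
    if len(fill) != 1:
        raise ValueError("fill must be a single byte")
    return fill * count

assert fromsize(3) == b'\x00\x00\x00'
assert fromsize(5, b'\x0a') == b'\x0a\x0a\x0a\x0a\x0a'
```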
Addition of "bchr" function and explicit "single byte" constructors
-------------------------------------------------------------------

As binary counterparts to the text ``chr`` function, this PEP proposes
the addition of a ``bchr`` function and an explicit ``fromint`` alternative
constructor as a class method on both ``bytes`` and ``bytearray``::

    >>> bchr(ord("A"))
    b'A'
    >>> bchr(ord(b"A"))
    b'A'
    >>> bytes.fromint(65)
    b'A'
    >>> bytearray.fromint(65)
    bytearray(b'A')

These methods will only accept integers in the range 0 to 255
(inclusive)::

    >>> bytes.fromint(512)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: integer must be in range(0, 256)

    >>> bytes.fromint(1.0)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'float' object cannot be interpreted as an integer

The documentation of the ``ord`` builtin will be updated to explicitly
note that ``bchr`` is the primary inverse operation for binary data, while
``chr`` is the inverse operation for text data, and that
``bytes.fromint`` and ``bytearray.fromint`` also exist.

Behaviourally, ``bytes.fromint(x)`` will be equivalent to the current
``bytes([x])`` (and similarly for ``bytearray``). The new spelling is
expected to be easier to discover and easier to read (especially when used
in conjunction with indexing operations on binary sequence types). As a
separate method, the new spelling will also work better with higher order
functions like ``map``.
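The semantics described above can be sketched in current Python; the
``bchr`` function below is only an approximation of the proposal, written
to match the error messages shown in the examples:

```python
def bchr(i):
    """Sketch of the proposed bchr(): the binary counterpart to chr(),
    equivalent to bytes([i]) with explicit type and range checks."""
    if not isinstance(i, int):
        raise TypeError("%r object cannot be interpreted as an integer"
                        % type(i).__name__)
    if not 0 <= i <= 255:
        raise ValueError("integer must be in range(0, 256)")
    return bytes([i])

assert bchr(65) == b'A'
assert bchr(ord("A")) == b'A'
```

Note how, unlike ``bytes(65)`` today, there is no ambiguity between "a
sequence of 65 zero bytes" and "the single byte with value 65".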
Addition of "getbyte" method to retrieve a single byte
------------------------------------------------------

This PEP proposes that ``bytes`` and ``bytearray`` gain the method
``getbyte`` which will always return ``bytes``::

    >>> b'abc'.getbyte(0)
    b'a'

If an index is asked for that doesn't exist, ``IndexError`` is raised::

    >>> b'abc'.getbyte(9)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    IndexError: index out of range

Addition of optimised iterator methods that produce ``bytes`` objects
---------------------------------------------------------------------

This PEP proposes that ``bytes``, ``bytearray`` and ``memoryview`` gain an
optimised ``iterbytes`` method that produces length 1 ``bytes`` objects
rather than integers::

    for x in data.iterbytes():
        # x is a length 1 ``bytes`` object, rather than an integer

For example::

    >>> tuple(b"ABC".iterbytes())
    (b'A', b'B', b'C')

The method can be used with arbitrary buffer exporting objects by wrapping
them in a ``memoryview`` instance first::

    for x in memoryview(data).iterbytes():
        # x is a length 1 ``bytes`` object, rather than an integer

For ``memoryview``, the semantics of ``iterbytes()`` are defined such
that::

    memview.tobytes() == b''.join(memview.iterbytes())

This allows the raw bytes of the memory view to be iterated over without
needing to make a copy, regardless of the defined shape and format.

The main advantage this method offers over the ``map(bytes.byte, data)``
approach is that it is guaranteed *not* to fail midstream with a
``ValueError`` or ``TypeError``. By contrast, when using the ``map`` based
approach, the type and value of the individual items in the iterable are
only checked as they are retrieved and passed through the ``bytes.byte``
constructor.

Design discussion
=================

Why not rely on sequence repetition to create zero-initialised sequences?
-------------------------------------------------------------------------

Zero-initialised sequences can be created via sequence repetition::

    >>> b'\x00' * 3
    b'\x00\x00\x00'
    >>> bytearray(b'\x00') * 3
    bytearray(b'\x00\x00\x00')

However, this was also the case when the ``bytearray`` type was originally
designed, and the decision was made to add explicit support for it in the
type constructor. The immutable ``bytes`` type then inherited that feature
when it was introduced in PEP 3137.

This PEP isn't revisiting that original design decision, just changing the
spelling as users sometimes find the current behaviour of the binary
sequence constructors surprising. In particular, there's a reasonable case
to be made that ``bytes(x)`` (where ``x`` is an integer) should behave
like the ``bytes.byte(x)`` proposal in this PEP. Providing both behaviours
as separate class methods avoids that ambiguity.

References
==========

.. [1] Initial March 2014 discussion thread on python-ideas
   (https://mail.python.org/pipermail/python-ideas/2014-March/027295.html)
.. [2] Guido's initial feedback in that thread
   (https://mail.python.org/pipermail/python-ideas/2014-March/027376.html)
.. [3] Issue proposing moving zero-initialised sequences to a dedicated API
   (http://bugs.python.org/issue20895)
.. [4] Issue proposing to use calloc() for zero-initialised binary sequences
   (http://bugs.python.org/issue21644)
.. [5] August 2014 discussion thread on python-dev
   (https://mail.python.org/pipermail/python-ideas/2014-March/027295.html)
.. [6] June 2016 discussion thread on python-dev
   (https://mail.python.org/pipermail/python-dev/2016-June/144875.html)

Copyright
=========

This document has been placed in the public domain.
From alexander.belopolsky at gmail.com Mon Jul 18 16:45:49 2016 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 18 Jul 2016 16:45:49 -0400 Subject: [Python-Dev] PEP 467: next round In-Reply-To: <578D396E.4050304@stoneleaf.us> References: <578D396E.4050304@stoneleaf.us> Message-ID: On Mon, Jul 18, 2016 at 4:17 PM, Ethan Furman wrote: > - 'bytes.zeros' renamed to 'bytes.size', with option byte filler > (defaults to b'\x00') > Seriously? You went from a numpy-friendly feature to something rather numpy-hostile. In numpy, ndarray.size is an attribute that returns the number of elements in the array. The constructor that creates an arbitrary repeated value also exists and is called numpy.full(). Even ignoring numpy, bytes.size(count, value=b'\x00') is completely unintuitive. If I see bytes.size(42) in someone's code, I will think: "something like int.bit_length(), but in bytes." -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Mon Jul 18 17:01:04 2016 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Mon, 18 Jul 2016 17:01:04 -0400 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: *de-lurks* On Mon, Jul 18, 2016 at 4:45 PM, Alexander Belopolsky wrote: > On Mon, Jul 18, 2016 at 4:17 PM, Ethan Furman wrote: >> >> - 'bytes.zeros' renamed to 'bytes.size', with option byte filler >> (defaults to b'\x00') > > > Seriously? You went from a numpy-friendly feature to something rather > numpy-hostile. > In numpy, ndarray.size is an attribute that returns the number of elements > in the array. > > The constructor that creates an arbitrary repeated value also exists and is > called numpy.full(). > > Even ignoring numpy, bytes.size(count, value=b'\x00') is completely > unintuitive. If I see bytes.size(42) in someone's code, I will think: > "something like int.bit_length(), but in bytes." 
full(), despite its use in numpy, is also unintuitive to me (my first thought is that it would indicate whether an object has room for more entries). Perhaps bytes.fillsize? That would seem the most intuitive to me: "fill an object of this size with this byte". I'm unfamiliar with numpy, but a quick Google search suggests that this would not conflict with anything there, if that is a concern. > This PEP isn't revisiting that original design decision, just changing the > spelling as users sometimes find the current behaviour of the binary > sequence > constructors surprising. In particular, there's a reasonable case to be made > that ``bytes(x)`` (where ``x`` is an integer) should behave like the > ``bytes.byte(x)`` proposal in this PEP. Providing both behaviours as > separate > class methods avoids that ambiguity. You have a leftover bytes.byte here. From alexander.belopolsky at gmail.com Mon Jul 18 17:34:05 2016 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 18 Jul 2016 17:34:05 -0400 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble wrote: > full(), despite its use in numpy, is also unintuitive to me (my first > thought is that it would indicate whether an object has room for more > entries). > > Perhaps bytes.fillsize? > I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Mon Jul 18 17:45:27 2016 From: brett at python.org (Brett Cannon) Date: Mon, 18 Jul 2016 21:45:27 +0000 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: On Mon, 18 Jul 2016 at 14:35 Alexander Belopolsky < alexander.belopolsky at gmail.com> wrote: > > On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble > wrote: > >> full(), despite its use in numpy, is also unintuitive to me (my first >> thought is that it would indicate whether an object has room for more >> entries). >> >> Perhaps bytes.fillsize? >> > > I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? > Or bytes.fromsize() to stay with the trend of naming constructor methods as from*() ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon Jul 18 17:58:37 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 18 Jul 2016 14:58:37 -0700 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: <578D510D.7030208@stoneleaf.us> On 07/18/2016 02:01 PM, Jonathan Goble wrote: >> This PEP isn't revisiting that original design decision, just changing the >> spelling as users sometimes find the current behaviour of the binary >> sequence >> constructors surprising. In particular, there's a reasonable case to be made >> that ``bytes(x)`` (where ``x`` is an integer) should behave like the >> ``bytes.byte(x)`` proposal in this PEP. Providing both behaviours as >> separate >> class methods avoids that ambiguity. > > You have a leftover bytes.byte here. 
Thanks, fixed (plus the other couple locations ;) -- ~Ethan~ From ethan at stoneleaf.us Mon Jul 18 18:00:41 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 18 Jul 2016 15:00:41 -0700 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: <578D5189.60608@stoneleaf.us> On 07/18/2016 02:45 PM, Brett Cannon wrote: > On Mon, 18 Jul 2016 at 14:35 Alexander Belopolsky wrote: >> On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble wrote: >>> full(), despite its use in numpy, is also unintuitive to me (my first >>> thought is that it would indicate whether an object has room for more >>> entries). >>> >>> Perhaps bytes.fillsize? >> >> I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? > > Or bytes.fromsize() to stay with the trend of naming constructor methods > as from*() ? bytes.fromsize() sounds good to me, thanks for brainstorming that one for me. I wasn't really happy with 'size()' either. -- ~Ethan~ From alexander.belopolsky at gmail.com Mon Jul 18 18:30:12 2016 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 18 Jul 2016 18:30:12 -0400 Subject: [Python-Dev] PEP 467: next round In-Reply-To: <578D5189.60608@stoneleaf.us> References: <578D396E.4050304@stoneleaf.us> <578D5189.60608@stoneleaf.us> Message-ID: On Mon, Jul 18, 2016 at 6:00 PM, Ethan Furman wrote: > bytes.fromsize() sounds good to me, thanks for brainstorming that one for > me. > +1 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From random832 at fastmail.com Mon Jul 18 18:48:07 2016 From: random832 at fastmail.com (Random832) Date: Mon, 18 Jul 2016 18:48:07 -0400 Subject: [Python-Dev] PEP 467: next round In-Reply-To: References: <578D396E.4050304@stoneleaf.us> Message-ID: <1468882087.1788439.670005897.30F0E3BF@webmail.messagingengine.com> On Mon, Jul 18, 2016, at 17:34, Alexander Belopolsky wrote: > On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble > wrote: > > > full(), despite its use in numpy, is also unintuitive to me (my first > > thought is that it would indicate whether an object has room for more > > entries). > > > > Perhaps bytes.fillsize? > > I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? What's wrong with b'\0'*42? From brett at python.org Mon Jul 18 18:52:46 2016 From: brett at python.org (Brett Cannon) Date: Mon, 18 Jul 2016 22:52:46 +0000 Subject: [Python-Dev] PEP 467: next round In-Reply-To: <1468882087.1788439.670005897.30F0E3BF@webmail.messagingengine.com> References: <578D396E.4050304@stoneleaf.us> <1468882087.1788439.670005897.30F0E3BF@webmail.messagingengine.com> Message-ID: On Mon, 18 Jul 2016 at 15:49 Random832 wrote: > On Mon, Jul 18, 2016, at 17:34, Alexander Belopolsky wrote: > > On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble > > wrote: > > > > > full(), despite its use in numpy, is also unintuitive to me (my first > > > thought is that it would indicate whether an object has room for more > > > entries). > > > > > > Perhaps bytes.fillsize? > > > > I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? > > What's wrong with b'\0'*42? > It's mentioned in the PEP as to why. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mistersheik at gmail.com Mon Jul 18 19:26:29 2016 From: mistersheik at gmail.com (Neil Girdhar) Date: Mon, 18 Jul 2016 23:26:29 +0000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Yes, I'm very excited about this! 
Will this mean no more metaclass conflicts if using @abstractmethod? On Sun, Jul 17, 2016 at 12:59 PM Guido van Rossum wrote: > This PEP is now accepted for inclusion in Python 3.6. Martin, > congratulations! Someone (not me) needs to review and commit your > changes, before September 12, when the 3.6 feature freeze goes into > effect (see https://www.python.org/dev/peps/pep-0494/#schedule). > > On Sun, Jul 17, 2016 at 4:32 AM, Martin Teichmann > wrote: > > Hi Guido, Hi Nick, Hi list, > > > > so I just updated PEP 487, you can find it here: > > https://github.com/python/peps/pull/57 if it hasn't already been > > merged. There are no substantial updates there, I only updated the > > wording as suggested, and added some words about backwards > > compatibility as hinted by Nick. > > > > Greetings > > > > Martin > > > > 2016-07-14 17:47 GMT+02:00 Guido van Rossum : > >> I just reviewed the changes you made, I like __set_name__(). I'll just > >> wait for your next update, incorporating Nick's suggestions. Regarding > >> who merges PRs to the PEPs repo, since you are the author the people > >> who merge don't pass any judgment on the changes (unless it doesn't > >> build cleanly or maybe if they see a typo). If you intend a PR as a > >> base for discussion you can add a comment saying e.g. "Don't merge > >> yet". If you call out @gvanrossum, GitHub will make sure I get a > >> message about it. > >> > >> I think the substantial discussion about the PEP should remain here in > >> python-dev; comments about typos, grammar and other minor editorial > >> issues can go on GitHub. Hope this part of the process makes sense! > >> > >> On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann > >> wrote: > >>> Hi Guido, Hi list, > >>> > >>> Thanks for the nice review! 
I applied followed up your ideas and put > >>> it into a github pull request: https://github.com/python/peps/pull/53 > >>> > >>> Soon we'll be working there, until then, some responses to your > comments: > >>> > >>>> I wonder if this should be renamed to __set_name__ or something else > >>>> that clarifies we're passing it the name of the attribute? The method > >>>> name __set_owner__ made me assume this is about the owning object > >>>> (which is often a useful term in other discussions about objects), > >>>> whereas it is really about telling the descriptor the name of the > >>>> attribute for which it applies. > >>> > >>> The name for this has been discussed a bit already, __set_owner__ was > >>> Nick's idea, and indeed, the owner is also set. Technically, > >>> __set_owner_and_name__ would be correct, but actually I like your idea > >>> of __set_name__. > >>> > >>>> That (inheriting type from type, and object from object) is very > >>>> confusing. Why not just define new classes e.g. NewType and NewObject > >>>> here, since it's just pseudo code anyway? > >>> > >>> Actually, it's real code. If you drop those lines at the beginning of > >>> the tests for the implementation (as I have done here: > >>> > https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py > ), > >>> the test runs on older Pythons. > >>> > >>> But I see that my idea to formulate things here in Python was a bad > >>> idea, I will put the explanation first and turn the code into > >>> pseudo-code. > >>> > >>>>> def __init__(self, name, bases, ns, **kwargs): > >>>>> super().__init__(name, bases, ns) > >>>> > >>>> What does this definition of __init__ add? > >>> > >>> It removes the keyword arguments. I describe that in prose a bit down. > >>> > >>>>> class object: > >>>>> @classmethod > >>>>> def __init_subclass__(cls): > >>>>> pass > >>>>> > >>>>> class object(object, metaclass=type): > >>>>> pass > >>>> > >>>> Eek! Too many things named object. 
> >>> > >>> Well, I had to do that to make the tests run... I'll take that out. > >>> > >>>>> In the new code, it is not ``__init__`` that complains about keyword > arguments, > >>>>> but ``__init_subclass__``, whose default implementation takes no > arguments. In > >>>>> a classical inheritance scheme using the method resolution order, > each > >>>>> ``__init_subclass__`` may take out it's keyword arguments until none > are left, > >>>>> which is checked by the default implementation of > ``__init_subclass__``. > >>>> > >>>> I called this out previously, and I am still a bit uncomfortable with > >>>> the backwards incompatibility here. But I believe what you describe > >>>> here is the compromise proposed by Nick, and if that's the case I have > >>>> peace with it. > >>> > >>> No, this is not Nick's compromise, this is my original. Nick just sent > >>> another mail to this list where he goes a bit more into the details, > >>> I'll respond to that about this topic. > >>> > >>> Greetings > >>> > >>> Martin > >>> > >>> P.S.: I just realized that my changes to the PEP were accepted by > >>> someone else than Guido. I am a bit surprised about that, but I guess > >>> this is how it works? 
> >>> _______________________________________________ > >>> Python-Dev mailing list > >>> Python-Dev at python.org > >>> https://mail.python.org/mailman/listinfo/python-dev > >>> Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > >> > >> > >> > >> -- > >> --Guido van Rossum (python.org/~guido) > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Jul 18 19:40:05 2016 From: barry at python.org (Barry Warsaw) Date: Mon, 18 Jul 2016 19:40:05 -0400 Subject: [Python-Dev] Fun with ExitStack Message-ID: <20160718194005.0f5885b6@anarchist.wooz.org> I was trying to debug a problem in some work code and I ran into some interesting oddities with contextlib.ExitStack and other context managers in Python 3.5. This program creates a temporary directory, and I wanted to give it a --keep flag to not automatically delete the tempdir at program exit. I was using an ExitStack to manage a bunch of resources, and the temporary directory is the first thing pushed into the ExitStack. At that point in the program, I check the value of --keep and if it's set, I use ExitStack.pop_all() to clear the stack, and thus, presumably, prevent the temporary directory from being deleted. 
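A minimal sketch of the pattern Barry describes, with a simple callback standing in alongside the temporary directory (``keep`` here is a stand-in for the --keep flag; this is not Barry's actual code):

```python
from contextlib import ExitStack
from tempfile import TemporaryDirectory

keep = True   # stand-in for the --keep command line flag
events = []

with ExitStack() as resources:
    tmpdir = resources.enter_context(TemporaryDirectory())
    resources.callback(events.append, "cleanup ran")
    if keep:
        # Transfer all registered callbacks to a new stack; closing the
        # original (now emptied) stack at the end of the with block is a
        # no-op.
        saved = resources.pop_all()

# No explicit callbacks have run at this point; calling saved.close()
# later would run them in reverse order of registration.
```

Whether the directory itself survives, though, is a separate question, because of the TemporaryDirectory finalizer behaviour described next.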
There's this relevant quote in the contextlib documentation:

"""
Each instance [of an ExitStack] maintains a stack of registered callbacks
that are called in reverse order when the instance is closed (either
explicitly or implicitly at the end of a with statement). Note that
callbacks are not invoked implicitly when the context stack instance is
garbage collected.
"""

However if I didn't save the reference to the pop_all'd ExitStack, the
tempdir would be immediately deleted. If I did save a reference to the
pop_all'd ExitStack, the tempdir would live until the saved reference went
out of scope and got refcounted away.

As best I can tell this happens because TemporaryDirectory.__init__()
creates a weakref finalizer which ends up calling the _cleanup() function.
Although it's rather difficult to trace, it does appear that when the
ExitStack is gc'd, this finalizer gets triggered (via weakref.detach()),
thus causing the _cleanup() method to be called and the tmpdir to get
deleted. I "fix" this by doing:

def __init__(self):
    tmpdir = TemporaryDirectory()
    self._tmpdir = (tmpdir.name if keep
                    else self.resources.enter_context(tmpdir))

There must be more to the story because when __init__() exits in the --keep
case, tmpdir should have gotten refcounted away and the directory deleted,
but it doesn't. I haven't dug down deep enough to figure that out.

Now, while I was debugging that behavior, I ran across more interesting
bits. I put this in a file to drive some tests:

------snip snip-----
with ExitStack() as resources:
    print('enter context')
    tmpdir = resources.enter_context(X())
    resources.pop_all()
print('exit context')
------snip snip-----

Let's say X is:

class X:
    def __enter__(self):
        print('enter Foo')
        return self

    def __exit__(self, *args, **kws):
        print('exit Foo')
        return False

the output is:

enter context
enter Foo
exit context

So far so good. A fairly standard context manager class doesn't get its
__exit__() called even when the program exits.
Let's try this:

@contextmanager
def X():
    print('enter bar')
    yield
    print('exit bar')

still good:

enter context
enter bar
exit context

Let's modify X a little bit to be a more common idiom:

@contextmanager
def X():
    print('enter foo')
    try:
        yield
    finally:
        print('exit foo')

enter context
enter foo
exit foo
exit context

Ah, the try-finally changes the behavior! There's probably some
documentation somewhere that defines how a generator gets finalized, and
that triggers the finally clause, whereas in the previous example, nothing
after the yield gets run. I just can't find anything that describes the
observed behavior.

It's all very twisty, and I'm not sure Python is doing anything wrong, but
I'm also not sure it's *not* doing anything wrong. ;)

In any case, the contextlib documentation quoted above should probably be
more liberally sprinkled with salty caveats. Just calling .pop_all() isn't
necessarily enough to ensure that resources managed by an ExitStack will
survive its garbage collection.

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From vadmium+py at gmail.com Mon Jul 18 21:57:50 2016
From: vadmium+py at gmail.com (Martin Panter)
Date: Tue, 19 Jul 2016 01:57:50 +0000
Subject: [Python-Dev] Fun with ExitStack
In-Reply-To: <20160718194005.0f5885b6@anarchist.wooz.org>
References: <20160718194005.0f5885b6@anarchist.wooz.org>
Message-ID: 

On 18 July 2016 at 23:40, Barry Warsaw wrote:
> I was trying to debug a problem in some work code and I ran into some
> interesting oddities with contextlib.ExitStack and other context managers in
> Python 3.5.
>
> This program creates a temporary directory, and I wanted to give it a --keep
> flag to not automatically delete the tempdir at program exit.
> I was using an
> ExitStack to manage a bunch of resources, and the temporary directory is the
> first thing pushed into the ExitStack. At that point in the program, I check
> the value of --keep and if it's set, I use ExitStack.pop_all() to clear the
> stack, and thus, presumably, prevent the temporary directory from being
> deleted.
>
> There's this relevant quote in the contextlib documentation:
>
> """
> Each instance [of an ExitStack] maintains a stack of registered callbacks that
> are called in reverse order when the instance is closed (either explicitly or
> implicitly at the end of a with statement). Note that callbacks are not
> invoked implicitly when the context stack instance is garbage collected.
> """
>
> However if I didn't save the reference to the pop_all'd ExitStack, the tempdir
> would be immediately deleted. If I did save a reference to the pop_all'd
> ExitStack, the tempdir would live until the saved reference went out of scope
> and got refcounted away.
>
> As best I can tell this happens because TemporaryDirectory.__init__() creates
> a weakref finalizer which ends up calling the _cleanup() function. Although
> it's rather difficult to trace, it does appear that when the ExitStack is
> gc'd, this finalizer gets triggered (via weakref.detach()), thus causing the
> _cleanup() method to be called and the tmpdir to get deleted.

This seems to be missing from the documentation, but when you delete a
TemporaryDirectory instance without using a context manager or cleanup(),
it complains, and then cleans up anyway:

>>> d = TemporaryDirectory()
>>> del d
/usr/lib/python3.5/tempfile.py:797: ResourceWarning: Implicitly cleaning up
  _warnings.warn(warn_message, ResourceWarning)

Perhaps you will have to use the lower-level mkdtemp() function instead if
you want the option of making the "temporary" directory long-lived.
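A sketch of that suggestion: mkdtemp() just creates a directory and registers no finalizer, so nothing is cleaned up unless the program does it explicitly (``keep`` is again a hypothetical flag, not part of either library):

```python
import os
import shutil
import tempfile

keep = False  # stand-in for a hypothetical --keep option

tmpdir = tempfile.mkdtemp()      # no implicit cleanup, ever
created = os.path.isdir(tmpdir)  # the directory exists from here on
# ... use tmpdir ...
if not keep:
    shutil.rmtree(tmpdir)        # explicit cleanup only when not keeping it
```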
> I "fix" this by
> doing:
>
> def __init__(self):
>     tmpdir = TemporaryDirectory()
>     self._tmpdir = (tmpdir.name if keep
>                     else self.resources.enter_context(tmpdir))
>
> There must be more to the story because when __init__() exits in the --keep
> case, tmpdir should have gotten refcounted away and the directory deleted, but
> it doesn't. I haven't dug down deep enough to figure that out.
>
> Now, while I was debugging that behavior, I ran across more interesting bits.
> I put this in a file to drive some tests:
>
> ------snip snip-----
> with ExitStack() as resources:
>     print('enter context')
>     tmpdir = resources.enter_context(X())
>     resources.pop_all()
> print('exit context')
> ------snip snip-----
>
> Let's say X is:
>
> class X:
>     def __enter__(self):
>         print('enter Foo')
>         return self
>
>     def __exit__(self, *args, **kws):
>         print('exit Foo')
>         return False
>
> the output is:
>
> enter context
> enter Foo
> exit context
>
> So far so good. A fairly standard context manager class doesn't get its
> __exit__() called even when the program exits.
>
> Let's try this:
>
> @contextmanager
> def X():
>     print('enter bar')
>     yield
>     print('exit bar')
>
> still good:
>
> enter context
> enter bar
> exit context
>
> Let's modify X a little bit to be a more common idiom:
>
> @contextmanager
> def X():
>     print('enter foo')
>     try:
>         yield
>     finally:
>         print('exit foo')
>
> enter context
> enter foo
> exit foo
> exit context
>
> Ah, the try-finally changes the behavior! There's probably some documentation
> somewhere that defines how a generator gets finalized, and that triggers the
> finally clause, whereas in the previous example, nothing after the yield gets
> run. I just can't find anything that describes the observed behavior.

I suspect the documentation doesn't spell everything out, but my
understanding is that garbage collection of a generator instance
effectively calls its close() method, triggering any "finally" and
__exit__() handlers.
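That is easy to demonstrate directly: generator.close() raises GeneratorExit at the suspended yield, so try/finally blocks run, while plain code after a bare yield does not:

```python
events = []

def with_finally():
    events.append("enter")
    try:
        yield
    finally:
        events.append("finally ran")

def without_finally():
    events.append("enter")
    yield
    events.append("after yield")  # skipped when the generator is closed

g = with_finally()
next(g)     # advance to the yield
g.close()   # raises GeneratorExit inside the generator; finally runs

h = without_finally()
next(h)
h.close()   # nothing after the bare yield runs
```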
IMO, in some cases, if a generator that would execute these handlers gets
garbage collected, it is a programming error, because the programmer should
have explicitly called generator.close(). In these cases, it would be nice
to emit a ResourceWarning, just like forgetting to close a file, or to
delete your temporary directory above. But maybe there are other cases
where there is no valid reason to emit a warning.

I have hesitated in suggesting this change in the past, but I don't
remember why. One reason is it might be an annoyance with code that also
wants to handle non-generator iterators that don't have a close() method.
In a world before "yield from", imagine having to change:

# Rough equivalent of "yield from g(...)"
for item in g(...):
    yield item

to:

with contextlib.closing(g(...)) as subgen:
    for item in subgen:
        yield item

It would be even worse if you had to support g() returning some other
iterator not supporting close().

> It's all very twisty, and I'm not sure Python is doing anything wrong, but I'm
> also not sure it's *not* doing anything wrong. ;)
>
> In any case, the contextlib documentation quoted above should probably be more
> liberally sprinkled with salty caveats. Just calling .pop_all() isn't
> necessarily enough to ensure that resources managed by an ExitStack will
> survive its garbage collection.

From ncoghlan at gmail.com Tue Jul 19 00:12:08 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 19 Jul 2016 14:12:08 +1000
Subject: [Python-Dev] PEP 467: next round
In-Reply-To: <578D396E.4050304@stoneleaf.us>
References: <578D396E.4050304@stoneleaf.us>
Message-ID: 

(Thanks for moving this forward, Ethan!)
On 19 July 2016 at 06:17, Ethan Furman wrote:
> * Add ``bytes.getbyte`` and ``bytearray.getbyte`` byte retrieval methods
> * Add ``bytes.iterbytes``, ``bytearray.iterbytes`` and
>   ``memoryview.iterbytes`` alternative iterators

As a possible alternative to this aspect, what if we adjusted
memoryview.cast() to also support the "s" format code from the struct
module? At the moment, trying to use "s" gives a ValueError:

>>> bview = memoryview(data).cast("s")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: memoryview: destination format must be a native single
character format prefixed with an optional '@'

However, it could be supported by always interpreting it as equivalent to
"1s", such that the view produced length 1 bytes objects on indexing and
iteration, rather than integers (which is what it does given the default
"b" format).

Given "memoryview(data).cast('s')" as a basic building block, most of the
other aspects of working with bytes objects as if they were Python 2
strings should become relatively straightforward, so the question would be
whether we wanted to make it easy for people to avoid constructing the
mediating memoryview object.

> Proposals
> =========
>
> Deprecation of current "zero-initialised sequence" behaviour without removal
> ----------------------------------------------------------------------------
>
> Currently, the ``bytes`` and ``bytearray`` constructors accept an integer
> argument and interpret it as meaning to create a zero-initialised sequence
> of the given size::
>
>     >>> bytes(3)
>     b'\x00\x00\x00'
>     >>> bytearray(3)
>     bytearray(b'\x00\x00\x00')
>
> This PEP proposes to deprecate that behaviour in Python 3.6, but to leave
> it in place for at least as long as Python 2.7 is supported, possibly
> indefinitely.

I'd suggest being more explicit that this would just be a documented
deprecation, rather than a programmatic deprecation warning.
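Absent the proposed iterbytes methods (or the cast("s") idea above), iterating a binary sequence as length-1 bytes objects can already be approximated with slicing; a sketch:

```python
data = b"abc"

# Indexing a bytes object yields integers in Python 3:
ints = list(data)

# Length-1 slices yield bytes objects instead:
chunks = [data[i:i + 1] for i in range(len(data))]

# The same works through a memoryview, without copying the underlying buffer:
view = memoryview(data)
mv_chunks = [bytes(view[i:i + 1]) for i in range(len(view))]
```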
> Addition of explicit "count and byte initialised sequence" constructors
> -----------------------------------------------------------------------
>
> To replace the deprecated behaviour, this PEP proposes the addition of an
> explicit ``size`` alternative constructor as a class method on both
> ``bytes`` and ``bytearray`` whose first argument is the count, and whose
> second argument is the fill byte to use (defaults to ``\x00``)::
>
>     >>> bytes.size(3)
>     b'\x00\x00\x00'
>     >>> bytearray.size(3)
>     bytearray(b'\x00\x00\x00')
>     >>> bytes.size(5, b'\x0a')
>     b'\x0a\x0a\x0a\x0a\x0a'
>     >>> bytearray.size(5, b'\x0a')
>     bytearray(b'\x0a\x0a\x0a\x0a\x0a')

While I like the notion of having "size" in the name, the
"noun-as-constructor" phrasing doesn't read right to me. Perhaps
"fromsize" for consistency with "fromhex"?

> It will behave just as the current constructors behave when passed a single
> integer.

This last paragraph feels incomplete now, given the expansion to allow the
fill value to be specified.

> Addition of "bchr" function and explicit "single byte" constructors
> -------------------------------------------------------------------
>
> As binary counterparts to the text ``chr`` function, this PEP proposes
> the addition of a ``bchr`` function and an explicit ``fromint`` alternative
> constructor as a class method on both ``bytes`` and ``bytearray``::
>
>     >>> bchr(ord("A"))
>     b'A'
>     >>> bchr(ord(b"A"))
>     b'A'
>     >>> bytes.fromint(65)
>     b'A'
>     >>> bytearray.fromint(65)
>     bytearray(b'A')

Since "fromsize" would also accept an int value, "fromint" feels ambiguous
here. Perhaps "fromord" to emphasise the integer is being interpreted as
an ordinal bytes value, rather than as a size?
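For readers trying these at home: neither bytes.size/fromsize nor bytes.fromint exists yet (the thread is discussing the proposal), so the current spellings of the same behaviours are:

```python
# Zero-initialised sequence of a given size (the constructor behaviour
# the PEP proposes to move to an explicit class method):
zeros = bytes(3)

# A count with an explicit fill byte, via sequence repetition:
filled = b"\x0a" * 5

# A single byte from an ordinal value (what bchr()/fromint() would spell):
single = bytes([65])
```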
The apparent "two ways to do it" here also deserves some additional
explanation:

- the bchr builtin is to recreate the ord/chr/unichr trio from Python 2
  under a different naming scheme
- the class method is mainly for the "bytearray.fromord" case, with
  bytes.fromord added for consistency

[snip sections on accessing elements as bytes object]

> Design discussion
> =================
>
> Why not rely on sequence repetition to create zero-initialised sequences?
> -------------------------------------------------------------------------
>
> Zero-initialised sequences can be created via sequence repetition::
>
>     >>> b'\x00' * 3
>     b'\x00\x00\x00'
>     >>> bytearray(b'\x00') * 3
>     bytearray(b'\x00\x00\x00')
>
> However, this was also the case when the ``bytearray`` type was originally
> designed, and the decision was made to add explicit support for it in the
> type constructor. The immutable ``bytes`` type then inherited that feature
> when it was introduced in PEP 3137.
>
> This PEP isn't revisiting that original design decision, just changing the
> spelling as users sometimes find the current behaviour of the binary
> sequence constructors surprising. In particular, there's a reasonable case
> to be made that ``bytes(x)`` (where ``x`` is an integer) should behave like
> the ``bytes.byte(x)`` proposal in this PEP. Providing both behaviours as
> separate class methods avoids that ambiguity.

This note will need some tweaks to match the updated method names in the
proposal.

Cheers,
Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 19 00:13:52 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Jul 2016 14:13:52 +1000 Subject: [Python-Dev] PEP 467: next round In-Reply-To: <578D5189.60608@stoneleaf.us> References: <578D396E.4050304@stoneleaf.us> <578D5189.60608@stoneleaf.us> Message-ID: On 19 July 2016 at 08:00, Ethan Furman wrote: > On 07/18/2016 02:45 PM, Brett Cannon wrote: >> >> On Mon, 18 Jul 2016 at 14:35 Alexander Belopolsky wrote: >>> >>> On Mon, Jul 18, 2016 at 5:01 PM, Jonathan Goble wrote: > > >>>> full(), despite its use in numpy, is also unintuitive to me (my first >>>> thought is that it would indicate whether an object has room for more >>>> entries). >>>> >>>> Perhaps bytes.fillsize? >>> >>> >>> I wouldn't want to see bytes.full() either. Maybe bytes.of_size()? >> >> >> Or bytes.fromsize() to stay with the trend of naming constructor methods >> as from*() ? > > > bytes.fromsize() sounds good to me, thanks for brainstorming that one for > me. I wasn't really happy with 'size()' either. Heh, I should have finished reading the thread before replying - this and one of my other comments were already picked up :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 19 00:21:37 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Jul 2016 14:21:37 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On 19 July 2016 at 09:26, Neil Girdhar wrote: > Yes, I'm very excited about this! > > Will this mean no more metaclass conflicts if using @abstractmethod? ABCMeta and EnumMeta both create persistent behavioural differences rather than only influencing subtype definition, so they'll need to remain as custom metaclasses. 
What this PEP (especially in combination with PEP 520) is aimed at enabling is subclassing APIs designed more around the notion of "implicit class decoration" where a common base class or mixin can be adjusted to perform certain actions whenever a new subclass is defined, without changing the runtime behaviour of those subclasses. (For example: a mixin or base class may require that certain parameters be set as class attributes - this PEP will allow the base class to check for those and throw an error at definition time, rather than getting a potentially cryptic error when it attempts to use the missing attribute) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 19 00:32:14 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Jul 2016 14:32:14 +1000 Subject: [Python-Dev] Fun with ExitStack In-Reply-To: <20160718194005.0f5885b6@anarchist.wooz.org> References: <20160718194005.0f5885b6@anarchist.wooz.org> Message-ID: On 19 July 2016 at 09:40, Barry Warsaw wrote: > Ah, the try-finally changes the behavior! There's probably some documentation > somewhere that defines how a generator gets finalized, and that triggers the > finally clause, whereas in the previous example, nothing after the yield gets > run. I just can't find that anything that would describe the observed > behavior. For the generator case, their __del__ calls self.close(), and that throws a GeneratorExit exception into the current yield point. Since it's an exception, that will run try/finally clauses and context manager __exit__ methods, but otherwise bypass code after the yield statement. > It's all very twisty, and I'm not sure Python is doing anything wrong, but I'm > also not sure it's *not* doing anything wrong. ;) I think we're a bit inconsistent in how we treat the lazy cleanup for managed resources - sometimes __del__ handles it, sometimes we register a weakref finalizer, sometimes we don't do it at all. 
That makes it hard to predict precise behaviour without knowing the semantic details of the specific context managers involved in the stack. > In any case, the contextlib documentation quoted above should probably be more > liberally sprinkled with salty caveats. Just calling .pop_all() isn't > necessarily enough to ensure that resources managed by an ExitStack will > survive its garbage collection. Aye, I'd be open to changes that clarified that even though the ExitStack instance won't invoke any cleanup callbacks explicitly following a pop_all(), *implicit* cleanup from its references to those objects going away may still be triggered if you don't save the result of the pop_all() call somewhere. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mistersheik at gmail.com Tue Jul 19 02:41:32 2016 From: mistersheik at gmail.com (Neil Girdhar) Date: Tue, 19 Jul 2016 06:41:32 +0000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Yes, I see what you're saying. However, I don't understand why __init_subclass__ (defined on some class C) cannot be used to implement the checks required by @abstractmethod instead of doing it in ABCMeta. This would prevent metaclass conflicts since you could use @abstractmethod with any metaclass or no metaclass at all provided you inherit from C. On Tue, Jul 19, 2016 at 12:21 AM Nick Coghlan wrote: > On 19 July 2016 at 09:26, Neil Girdhar wrote: > > Yes, I'm very excited about this! > > > > Will this mean no more metaclass conflicts if using @abstractmethod? > > ABCMeta and EnumMeta both create persistent behavioural differences > rather than only influencing subtype definition, so they'll need to > remain as custom metaclasses. 
> What this PEP (especially in combination with PEP 520) is aimed at
> enabling is subclassing APIs designed more around the notion of
> "implicit class decoration" where a common base class or mixin can be
> adjusted to perform certain actions whenever a new subclass is
> defined, without changing the runtime behaviour of those subclasses.
> (For example: a mixin or base class may require that certain
> parameters be set as class attributes - this PEP will allow the base
> class to check for those and throw an error at definition time, rather
> than getting a potentially cryptic error when it attempts to use the
> missing attribute)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com Tue Jul 19 05:49:36 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 19 Jul 2016 10:49:36 +0100
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org>
References: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org>
Message-ID: 

On 18 July 2016 at 18:01, Paul Moore wrote:
> On 18 July 2016 at 17:33, Steve Dower wrote:
>>> Some comments below.
>>
>> Awesome, thanks! Posted a pull request at
>> https://github.com/python/peps/pull/59 for ease of diff reading, and some
>> commentary below (with aggressive snipping).
>
> Thanks - I'll do a proper review of that, but just wanted to make a
> few comments here.

Added some comments to the PR. Basically:

1. We could do with a better description of use cases.
2. We either require registry keys to be user-friendly (no UUIDs!) or
   we find an alternative approach for the virtualenv-style use case.
3. Registering "how to start the interpreter" (python.exe or whatever)
   should be mandatory.

I don't think we're far off, though.
Paul From p.f.moore at gmail.com Tue Jul 19 08:40:57 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 19 Jul 2016 13:40:57 +0100 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: <53d8549b-f2da-7549-183b-1fb0ae5121e6@python.org> Message-ID: On 19 July 2016 at 10:49, Paul Moore wrote: > On 18 July 2016 at 18:01, Paul Moore wrote: >> On 18 July 2016 at 17:33, Steve Dower wrote: >>>> Some comments below. >>> >>> Awesome, thanks! Posted a pull request at >>> https://github.com/python/peps/pull/59 for ease of diff reading, and some >>> commentary below (with aggressive snipping). >> >> Thanks - I'll do a proper review of that, but just wanted to make a >> few comments here. > > Added some comments to the PR. Basically: > > 1. We could do with a better descrition of use cases. > 2. We either require registry keys to be user-friendly (no UUIDs!) or > we find an alternative approach for the virtualenv-style use case. > 3. Registering "how to start the interpreter" (python.exe or whatever) > should be mandatory. > > I don't think we're far off, though. For what it's worth, the following code seems to do a reasonable job of scanning the registry according to the rules in the PEP. If anyone has access to any 3rd party installations of Python that follow the proposed PEP's rules, it would be interesting to try this script against them. 
import winreg

def list_subkeys(key):
    i = 0
    while True:
        try:
            yield winreg.EnumKey(key, i)
        except WindowsError:
            return
        i += 1

locations = [
    ("64-bit", winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_64KEY),
    ("32-bit", winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_32KEY),
    ("user", winreg.HKEY_CURRENT_USER, 0),
]

if __name__ == "__main__":
    seen = {}
    for loc, hive, bits in locations:
        k = winreg.CreateKeyEx(hive, "Software\\Python", 0,
                               winreg.KEY_READ | bits)
        for company in list_subkeys(k):
            sk = winreg.CreateKey(k, company)
            for tag in list_subkeys(sk):
                try:
                    prefix = winreg.QueryValue(sk, "{}\\InstallPath".format(tag))
                except WindowsError:
                    # If there's no InstallPath, it's not a valid registration
                    continue
                clash = ""
                if seen.get((company, tag)):
                    clash = ("Overrides" if hive == winreg.HKEY_CURRENT_USER
                             else "Ambiguous")
                print("{}\\{} ({}) {} {}".format(company, tag, loc, prefix,
                                                 clash))
                seen[(company, tag)] = True

This code works (and gives the same results) on at least 2.7 and 3.4+ (32
and 64 bit). I didn't test on older versions. It was tricky enough to get
right that I'll probably package it up and publish it at some point.
Steve - if you want to add it to the PEP as a sample implementation, that's
fine with me, too.

Some notes:

1. Python 2.7 allows "Ambiguous" system installs. In other words, the
   registry key is the same for 32- and 64-bit, but the installer doesn't
   stop you installing both. Python 3.4 has the same layout, but the
   installer prevents installing both 32- and 64-bit versions.
2. Prior to 3.5, it's impossible without running the interpreter or
   inspecting the exe to determine if a user install is 32- or 64-bit. You
   can tell for a system install based on which registry key you found the
   data on.

As these are both legacy / "backward compatibility" issues, they aren't a
problem with the PEP as such.
Paul

From ncoghlan at gmail.com Tue Jul 19 10:34:07 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Jul 2016 00:34:07 +1000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: 
References: 
Message-ID: 

On 19 July 2016 at 16:41, Neil Girdhar wrote:
> Yes, I see what you're saying. However, I don't understand why
> __init_subclass__ (defined on some class C) cannot be used to implement the
> checks required by @abstractmethod instead of doing it in ABCMeta. This
> would prevent metaclass conflicts since you could use @abstractmethod with
> any metaclass or no metaclass at all provided you inherit from C.

ABCMeta also changes how isinstance() and issubclass() checks work and
adds additional methods (like register()), so enabling the use of
@abstractmethod without otherwise making the type an ABC would be very
confusing behaviour that we wouldn't enable by default.

But yes, this change does make it possible to write a mixin class that
implements the "@abstractmethod instances must all be overridden to
allow instances to be created" logic from ABCMeta without otherwise
turning the class into an ABC instance.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From mistersheik at gmail.com Tue Jul 19 10:52:54 2016
From: mistersheik at gmail.com (Neil Girdhar)
Date: Tue, 19 Jul 2016 14:52:54 +0000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: 
References: 
Message-ID: 

Thanks for clarifying.

On Tue, Jul 19, 2016 at 10:34 AM Nick Coghlan wrote:

> On 19 July 2016 at 16:41, Neil Girdhar wrote:
> > Yes, I see what you're saying. However, I don't understand why
> > __init_subclass__ (defined on some class C) cannot be used to implement
> > the checks required by @abstractmethod instead of doing it in ABCMeta.
> > This would prevent metaclass conflicts since you could use
> > @abstractmethod with any metaclass or no metaclass at all provided you
> > inherit from C.
> > ABCMeta also changes how __isinstance__ and __issubclass__ work and > adds additional methods (like register()), so enabling the use of > @abstractmethod without otherwise making the type an ABC would be very > confusing behaviour that we wouldn't enable by default. > > But yes, this change does make it possible to write a mixin class that > implements the "@abstractmethod instances must all be overridden to > allow instances to to be created" logic from ABCMeta without otherwise > turning the class into an ABC instance. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.corlay at gmail.com Wed Jul 20 01:26:30 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 20 Jul 2016 07:26:30 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation Message-ID: Hello, This is my first post on python-dev and I hope that I am not breaking any rule. I wanted to react on the discussion regarding PEP487. This year, we have been working on a refactoring of the `traitlets` library, an implementation of the descriptor pattern that is used in Project Jupyter / IPython. The motivations for the refactoring was similar to those of this PEP: having a more generic metaclass allowing more flexibility in terms of types of descriptors, in order to avoid conflicts between meta classes. We ended up with: - A metaclass called MetaHasDescriptor - A base class of meta MetaHasDescriptor named HasDescriptors Usage: class MyClass(HasDescriptors): attr = DesType() DesType inherits from a base Descriptor type. 
The key is that their initialization is done in three stages:

- the main DesType.__init__
- the part of the initialization of DesType that depends on the definition
  of MyClass: DesType.class_init(self, cls, name), which is called from
  MetaHasDescriptors.__new__
- a method of DesType that depends on the definition of instances of
  MyClass: DesType.instance_init(self, obj), which is called from
  HasDescriptors.__new__. instance_init may make modifications to the
  HasDescriptors instance.

My understanding is that the proposed __set_name__ in PEP487 exactly
corresponds to our class_init, although interestingly we often do much more
in class_init than setting the name of the descriptor, such as setting a
this_class attribute or calling class_init on contained descriptors.
Therefore I do not think that the names __set_name__ or __set_owner__ are
appropriate for this use case.

In a way, the long-form explicit names for our class_init and instance_init
methods would be something like __init_from_owner_class__ and
__touch_instance__.

Thanks,

Sylvain

PS: thanks to Neil Girdhar for the heads up on the traitlets repo.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Wed Jul 20 12:22:37 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 21 Jul 2016 02:22:37 +1000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: 
References: 
Message-ID: 

Hi Sylvain,

Thanks for getting in touch! The traitlets library sounds interesting,
and provides good additional evidence that this is a capability that
folks are interested in having available.

On 20 July 2016 at 15:26, Sylvain Corlay wrote:
> My understanding is that the proposed __set_name__ in PEP487 exactly
> corresponds to our class_init, although interestingly we often do much more
> in class_init than setting the name of the descriptor, such as setting a
> this_class attribute or calling class_init on contained descriptors.
> Therefore I do not think that the names __set_name__ or __set_owner__ are > appropriate for this use case. > > In a way, the long-form explicit names for our class_init and instance_init > methods would be something like __init_fom_owner_class__, and > __touch_instance__. It's certainly a reasonable question/concern, but we've learned from experience that we're better off using protocol method names that are very specific to a particular intended use case, even if they can be adapted for other purposes. The trick is that we want educators teaching Python to be able to very easily answer the question of "What is this special method for?" (even if they later go on to say "And it's also used for these other things...") One previous example of that is the __index__ protocol, where the actual semantics are "instances of this type can be losslessly converted to integers", but the protocol is named for the particular use case "instances of this type can be used as sequence indices". For PEP 487, the two operations guiding the naming of the methods are "notify a base class when a new subclass is defined" and "notify a descriptor of its attribute name when assigned to a class". The precise verbs then mirror those already used in other parts of the related protocols (with __init__ leading to __init_subclass__, and __set__ leading to __set_name__). The main capability that __set_name__ provides that was previously difficult is letting a descriptor know its own name in the class namespace. The fact a descriptor implementor can do anything else they want as a side-effect of that new method being called isn't substantially different from the ability to add side-effects to the existing __get__, __set__ and __delete__ protocol methods. Cheers, Nick. 
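On Python 3.6+, where PEP 487 landed, the capability Nick describes looks like this in practice; the Typed and Point names here are made up for illustration:

```python
class Typed:
    """Descriptor that learns its own attribute name at class-creation time."""
    def __set_name__(self, owner, name):
        # Called once for each assignment in the class body, with the
        # owning class and the attribute name.
        self.owner = owner
        self.name = name

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]


class Point:
    x = Typed()
    y = Typed()


# The descriptors were told their names without any metaclass involved.
assert Point.x.name == "x" and Point.y.name == "y"
assert Point.x.owner is Point

p = Point()
p.x = 3
assert p.x == 3
```

Before PEP 487 this required either a custom metaclass or repeating the name, e.g. `x = Typed("x")`.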
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From sylvain.corlay at gmail.com Wed Jul 20 13:40:19 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 20 Jul 2016 19:40:19 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Hi Nick, Thank you for your reply. I understand your argument about using protocol method names that are very specific to a particular intended use case. Interestingly, the one example that is provided in the PEP is that of a "trait" which is pretty much the same as traitlets. (traitlets started as a pure python implementation of Enthought's traits library). My point is that in any real-world implementation of traits, __set_name__ will do a lot more than setting the name, which makes the name misleading. Cheers, Sylvain On Wed, Jul 20, 2016 at 6:22 PM, Nick Coghlan wrote: > Hi Sylvain, > > Thanks for getting in touch! The traitlets library sounds interesting, > and provides good additional evidence that this is a capability that > folks are interested in having available. > > On 20 July 2016 at 15:26, Sylvain Corlay wrote: > > My understanding is that the proposed __set_name__ in PEP487 exactly > > corresponds to our class_init, although interestingly we often do much > more > > in class_init than setting the name of the descriptor, such as setting a > > this_class attribute or calling class_init on contained descriptors. > > Therefore I do not think that the names __set_name__ or __set_owner__ are > > appropriate for this use case. > > > > In a way, the long-form explicit names for our class_init and > instance_init > > methods would be something like __init_fom_owner_class__, and > > __touch_instance__. > > It's certainly a reasonable question/concern, but we've learned from > experience that we're better off using protocol method names that are > very specific to a particular intended use case, even if they can be > adapted for other purposes. 
The trick is that we want educators > teaching Python to be able to very easily answer the question of "What > is this special method for?" (even if they later go on to say "And > it's also used for these other things...") > > One previous example of that is the __index__ protocol, where the > actual semantics are "instances of this type can be losslessly > converted to integers", but the protocol is named for the particular > use case "instances of this type can be used as sequence indices". > > For PEP 487, the two operations guiding the naming of the methods are > "notify a base class when a new subclass is defined" and "notify a > descriptor of its attribute name when assigned to a class". The > precise verbs then mirror those already used in other parts of the > related protocols (with __init__ leading to __init_subclass__, and > __set__ leading to __set_name__). > > The main capability that __set_name__ provides that was previously > difficult is letting a descriptor know its own name in the class > namespace. The fact a descriptor implementor can do anything else they > want as a side-effect of that new method being called isn't > substantially different from the ability to add side-effects to the > existing __get__, __set__ and __delete__ protocol methods. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Jul 20 22:53:18 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 21 Jul 2016 12:53:18 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: On 21 July 2016 at 03:40, Sylvain Corlay wrote: > My point is that in any real-world implementation of traits, __set_name__ > will do a lot more than setting the name, which makes the name misleading. 
I suspect the point of disagreement on that front may be in how we view the names of the existing __get__, __set__ and __delete__ methods in the descriptor protocols - all 3 of those are in the form of event notifications to the descriptor to say "someone is getting the attribute", "someone is setting the attribute" and "someone is deleting the attribute". What the descriptor does in response to those notifications is up to the descriptor, with it being *conventional* that they be at least plausibly associated with the "obj.attr", "obj.attr = value" and "del obj.attr" operations (with folks voting by usage as to whether or not they consider a particular API's side effects in response to those notifications reasonable). The new notification is merely "someone is setting the name of the attribute", with that taking place when the contents of a class namespace are converted into class attributes. However, phrasing it that way suggests that it's possible we *did* miss something in the PEP: we haven't specified whether or not __set_name__ should be called when someone does "cls.attr = descr". Given the name, I think we *should* call it in that case, and then the semantics during class creation are approximately what would happen if we actually built up the class attributes as:

for attr, value in cls_ns.items():
    setattr(cls, attr, value)

Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From gvanrossum at gmail.com Wed Jul 20 22:58:43 2016 From: gvanrossum at gmail.com (Guido van Rossum) Date: Wed, 20 Jul 2016 19:58:43 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: Whoa. That's not how I read it. --Guido (mobile) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sylvain.corlay at gmail.com Thu Jul 21 02:43:56 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Thu, 21 Jul 2016 08:43:56 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: In any case I find this PEP great. If we can implement a library like traitlets only using these new hooks, without a custom metaclass, it will be a big improvement. My only concern was that calling the hook __set_name__ is misleading, since it may not set the name at all and can do pretty much anything else. Regards, Sylvain On Thu, Jul 21, 2016 at 4:53 AM, Nick Coghlan wrote: > On 21 July 2016 at 03:40, Sylvain Corlay wrote: > > My point is that in any real-world implementation of traits, __set_name__ > > will do a lot more than setting the name, which makes the name > misleading. > > I suspect the point of disagreement on that front may be in how we > view the names of the existing __get__, __set__ and __delete__ methods > in the descriptor protocols - all 3 of those are in the form of event > notifications to the descriptor to say "someone is getting the > attribute", "someone is setting the attribute" and "someone is > deleting the attribute". What the descriptor does in response to those > notifications is up to the descriptor, with it being *conventional* > that they be at least plausibly associated with the "obj.attr", > "obj.attr = value" and "del obj.attr" operations (with folks voting by > usage as to whether or not they consider a particular API's side > effects in response to those notifications reasonable). > > The new notification is merely "someone is setting the name of the > attribute", with that taking place when the contents of a class > namespace are converted into class attributes.
> > However, phrasing it that way suggests that it's possible we *did* miss > something in the PEP: we haven't specified whether or not __set_name__ > should be called when someone does "cls.attr = descr". > Given the name, I think we *should* call it in that case, and then the > semantics during class creation are approximately what would happen if > we actually built up the class attributes as: > > for attr, value in cls_ns.items(): > setattr(cls, attr, value) > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From himurakenshin54 at gmail.com Fri Jul 22 02:10:22 2016 From: himurakenshin54 at gmail.com (Tian JiaLin) Date: Fri, 22 Jul 2016 14:10:22 +0800 Subject: [Python-Dev] Convert from unsigned long long to PyLong Message-ID: Hi there, Maybe I should not post this in the dev group, but I think it has some relationship to the Python core. I'm using MySQLdb as the MySQL client. Recently I got a weird problem with this library. After looking into it, I suspect the problem may be related to the conversion from unsigned long long to PyLongObject. Here is the detail. If you are familiar with MySQLdb, the following snippet is a way to query the data from MySQL:

connection = MySQLdb.connect(...)
connection.autocommit(True)
try:
    cursor = connection.cursor()
    if not cursor.execute(sql, values) > 0:
        return None
    row = cursor.fetchone()
finally:
    connection.close()
return row[0]

Sometimes the return value of the execute method would be 18446744073709552000 even when there is no matched data available.
I checked the source code of the library; the underlying implementation is https://github.com/farcepest/MySQLdb1/blob/master/_mysql.c#L835,

static PyObject *
_mysql_ConnectionObject_affected_rows(
    _mysql_ConnectionObject *self,
    PyObject *args)
{
    if (!PyArg_ParseTuple(args, ""))
        return NULL;
    check_connection(self);
    return PyLong_FromUnsignedLongLong(
        mysql_affected_rows(&(self->connection)));
}

And here is the official doc for mysql_affected_rows http://dev.mysql.com/doc/refman/5.7/en/mysql-affected-rows.html. Let me give a superficial understanding; please correct me if I am wrong. On a 64-bit system, mysql_affected_rows is supposed to return an unsigned long long, which means the range should be 0 ~ 2^64 - 1 (18446744073709551615). How could it be possible that PyLong_FromUnsignedLongLong returns a converted value larger than 2^64 - 1? That's what I don't understand. Does anyone have some ideas of it? The versions of the components I used: Python: 2.7.6 MySQL 5.7.11 MySQLdb 1.2.5 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanrin at gmail.com Fri Jul 22 05:02:23 2016 From: stefanrin at gmail.com (Stefan Ring) Date: Fri, 22 Jul 2016 11:02:23 +0200 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: So to sum this up, you claim that PyLong_FromUnsignedLongLong can somehow produce a number larger than the value range of a 64 bit number (0x10000000000000180). I have a hard time believing this. Most likely you are looking in the wrong place, mysql_affected_rows returns 2^64-1, and some Python code later adds 0x181 to that number. From guido at python.org Fri Jul 22 10:36:28 2016 From: guido at python.org (Guido van Rossum) Date: Fri, 22 Jul 2016 07:36:28 -0700 Subject: [Python-Dev] Should we fix these errors? Message-ID: Somebody did some research and found some bugs in CPython (IIUC). They published some questionable fragments.
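Returning to the affected-rows thread for a moment: Stefan's arithmetic checks out exactly. This is a quick verification of the numbers quoted in the thread, not anything from CPython itself:

```python
reported = 18446744073709552000      # the value Tian observed
U64_MAX = 2**64 - 1                  # largest unsigned 64-bit value

# PyLong_FromUnsignedLongLong alone cannot produce anything above
# U64_MAX, so the excess must come from somewhere else.
assert reported > U64_MAX
assert reported == 0x10000000000000180   # Stefan's hex rendering

# Stefan's reading: mysql_affected_rows() returned (my_ulonglong)-1
# (== U64_MAX, its documented error indicator) and some later code
# added 0x181 to it.
assert U64_MAX + 0x181 == reported
```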
If there's a volunteer we could probably easily fix these. (I know we already have occasional Coverity scans, and there are other tools too; anybody try lgtm yet?) But this seems honest research (also Python leaves Ruby in the dust :-): http://www.viva64.com/en/b/0414/ -- --Guido van Rossum (python.org/~guido) From himurakenshin54 at gmail.com Fri Jul 22 05:18:49 2016 From: himurakenshin54 at gmail.com (Tian JiaLin) Date: Fri, 22 Jul 2016 17:18:49 +0800 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: I know it's hard to believe this, and I wish I were wrong. But after looking into the code for one week, I didn't find any other code that changes the number. I will go through them again to make sure I didn't miss anything. Thanks for the reply. On Fri, Jul 22, 2016 at 5:02 PM, Stefan Ring wrote: > So to sum this up, you claim that PyLong_FromUnsignedLongLong can > somehow produce a number larger than the value range of a 64 bit > number (0x10000000000000180). I have a hard time believing this. > > Most likely you are looking in the wrong place, mysql_affected_rows > returns 2^64-1, and some Python code later adds 0x181 to that number. > -- kenshin http://kenbeit.com Just Follow Your Heart -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Fri Jul 22 11:31:15 2016 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 23 Jul 2016 01:31:15 +1000 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: On Sat, Jul 23, 2016 at 12:36 AM, Guido van Rossum wrote: > Somebody did some research and found some bugs in CPython (IIUC). They > published some questionable fragments. If there's a volunteer we could > probably easily fix these. (I know we already have occasional Coverity > scans, and there are other tools too; anybody try lgtm yet?)
But this > seems honest research (also Python leaves Ruby in the dust :-): > > http://www.viva64.com/en/b/0414/ First and foremost: All of these purported bugs appear to have been found by compiling on Windows. Does Coverity test a Windows build? If not, can we get it to? These look like the exact types of errors that Coverity *would* discover. Fragment N1 is accurate in current Python. (Although the wording of the report leaves something to be desired. "The SOCKET type in Windows is unsigned, so comparing it against null is meaningless." - only "x < 0" (not null) is meaningless.) It's lines 1702 and 2026 in current Python. What's the best solution? Create a macro VALID_SOCKET with two different definitions, one using "x < 0" and the other using "x != INVALID_SOCKET"? Fragment N2 doesn't appear to be in CPython 3.6 though. I can't find a file called a_print.c, nor anything with ASN1_PRINTABLE_type in it. Third party code? 2.7 only? I've no idea. (It'd be so much more helpful if file paths had been given instead of just fragment codes. The error messages include file names without paths in them.) Fragment N3: Looks like a legit issue. http://bugs.python.org/issue27591 created with patch. Fragment N4, N5, N6a: Can't find bn_lib.c, dh_ameth.c, or cms_env.c in the cpython tree anywhere. Google suggests that they could be part of OpenSSL (which could be true of a_print.c from N2). Does Python bundle any OpenSSL source anywhere? Fragment N6b (there's a completely unrelated issue paired up in N6): I don't understand all of what's being said here. The error message quoted refers to _elementtree.c:917, which is an understandable false positive for the static checker; the problem can't happen, though, because line 913 checks for NULL and will construct a new empty list, and line 916 iterates up to the new list's length, so line 917 can never be reached if self->extra is NULL. But their analyzer can't know that. 
On the other hand, the paragraph and code snippet are referring to _PyState_AddModule in Modules/pystate.c, which is never called with def=NULL anywhere else in CPython; unless it's intended to be public, the check on line 292 could simply be removed. Conclusion: CPython may need some better static checking in Windows mode, but probably not desperately enough to buy their product (which is presumably the point of that blog). ChrisA From victor.stinner at gmail.com Fri Jul 22 11:35:19 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 22 Jul 2016 17:35:19 +0200 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: Oh, the first one is a regression that I introduced in the implementation of the PEP 475 (retry syscall on EINTR). I don't think that it can be triggered in practice, because socket handles on Windows are small numbers, so unlikely to be seen as negative. I just fixed it: https://hg.python.org/cpython/rev/6c11f52ab9db Victor 2016-07-22 16:36 GMT+02:00 Guido van Rossum : > Somebody did some research and found some bugs in CPython (IIUC). The > published some questionable fragments. If there's a volunteer we could > probably easily fix these. (I know we already have occasional Coverity > scans and there are other tools too (anybody try lgtm yet?) 
But this > seems honest research (also Python leaves Ruby in the dust :-): > > http://www.viva64.com/en/b/0414/ > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From ericsnowcurrently at gmail.com Fri Jul 22 11:50:33 2016 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 22 Jul 2016 09:50:33 -0600 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: On Fri, Jul 22, 2016 at 3:02 AM, Stefan Ring wrote: > So to sum this up, you claim that PyLong_FromUnsignedLongLong can > somehow produce a number larger than the value range of a 64 bit > number (0x10000000000000180). I have a hard time believing this. Perhaps I misunderstood your meaning, but Python's integers (AKA "PyLong") can be bigger than a machine-native integer (e.g. 64 bits): "All integers are implemented as 'long' integer objects of *arbitrary size*." (emphasis mine) (https://docs.python.org/3.5//c-api/long.html) -eric From status at bugs.python.org Fri Jul 22 12:08:45 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 22 Jul 2016 18:08:45 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160722160845.74FEB561DA@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-07-15 - 2016-07-22) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.
Issues counts and deltas: open 5586 (+24) closed 33762 (+47) total 39348 (+71) Open issues with patches: 2441 Issues opened (42) ================== #25507: IDLE: user code 'import tkinter; tkinter.font' should fail http://bugs.python.org/issue25507 reopened by terry.reedy #27521: Misleading compress level header on files created with gzip http://bugs.python.org/issue27521 opened by ddorda #27524: Update os.path for PEP 519/__fspath__() http://bugs.python.org/issue27524 opened by brett.cannon #27526: test_venv.TestEnsurePip fails mysteriously when /tmp is too sm http://bugs.python.org/issue27526 opened by r.david.murray #27530: Non-Critical Compiler WARNING: Python Embedding C++11 does not http://bugs.python.org/issue27530 opened by Daniel Lord #27534: IDLE: Reduce number and time for user process imports http://bugs.python.org/issue27534 opened by terry.reedy #27535: Memory leaks when opening tons of files http://bugs.python.org/issue27535 opened by ?????????????????? ???????????????????? #27536: Convert readme to reStructuredText http://bugs.python.org/issue27536 opened by louis.taylor #27539: negative Fraction ** negative int not normalized http://bugs.python.org/issue27539 opened by Vedran.Čačić
#27540: msvcrt.ungetwch() calls _ungetch() http://bugs.python.org/issue27540 opened by arigo #27541: Repr of collection's subclasses http://bugs.python.org/issue27541 opened by serhiy.storchaka #27544: Document the ABCs for instance/subclass checks of dict view ty http://bugs.python.org/issue27544 opened by story645 #27546: Integrate tkinter and asyncio (and async) http://bugs.python.org/issue27546 opened by terry.reedy #27547: Integer Overflow Crash On float(array.array()) http://bugs.python.org/issue27547 opened by pabstersac #27558: SystemError with bare `raise` in threading or multiprocessing http://bugs.python.org/issue27558 opened by Romuald #27561: Warn against subclassing builtins, and overriding their method http://bugs.python.org/issue27561 opened by Kirk Hansen #27562: Import error encodings (Windows xp compatibility) http://bugs.python.org/issue27562 opened by Iovan Irinel #27564: 2.7.12 Windows Installer package broken. http://bugs.python.org/issue27564 opened by busfault #27565: Offer error context manager for code.interact http://bugs.python.org/issue27565 opened by Claudiu Saftoiu #27566: Tools/freeze/winmakemakefile.py clean target should use 'del' http://bugs.python.org/issue27566 opened by David D #27568: "HTTPoxy", use of HTTP_PROXY flag supplied by attacker in CGI http://bugs.python.org/issue27568 opened by remram #27569: Windows install problems http://bugs.python.org/issue27569 opened by ricardoe #27570: Avoid memcpy(. . 
., NULL, 0) etc calls http://bugs.python.org/issue27570 opened by martin.panter #27572: Support bytes-like objects when base is given to int() http://bugs.python.org/issue27572 opened by xiang.zhang #27573: code.interact() should print an exit message http://bugs.python.org/issue27573 opened by steven.daprano #27574: Faster parsing keyword arguments http://bugs.python.org/issue27574 opened by serhiy.storchaka #27575: dict viewkeys intersection slow for large dicts http://bugs.python.org/issue27575 opened by David Su2 #27576: An unexpected difference between dict and OrderedDict http://bugs.python.org/issue27576 opened by belopolsky #27577: Make implementation and doc of tuple and list more compliant http://bugs.python.org/issue27577 opened by xiang.zhang #27578: inspect.findsource raises exception with empty __init__.py http://bugs.python.org/issue27578 opened by Alexander Todorov #27579: Add a tutorial for AsyncIO in the documentation http://bugs.python.org/issue27579 opened by matrixise #27580: CSV Null Byte Error http://bugs.python.org/issue27580 opened by bobbyocean #27581: Fix overflow check in PySequence_Tuple http://bugs.python.org/issue27581 opened by xiang.zhang #27582: Mispositioned SyntaxError caret for unknown code points http://bugs.python.org/issue27582 opened by ncoghlan #27583: configparser: modifying default_section at runtime http://bugs.python.org/issue27583 opened by rk #27584: New addition of vSockets to the python socket module http://bugs.python.org/issue27584 opened by Cathy Avery #27585: asyncio.Lock deadlock after cancellation http://bugs.python.org/issue27585 opened by sss #27587: Issues, reported by PVS-Studio static analyzer http://bugs.python.org/issue27587 opened by pavel-belikov #27588: Type (typing) objects are hashable and comparable for equality http://bugs.python.org/issue27588 opened by Gareth.Rees #27589: asyncio doc: issue in as_completed() doc http://bugs.python.org/issue27589 opened by haypo #27590: tarfile module next() 
method hides exceptions http://bugs.python.org/issue27590 opened by JieGhost #27591: multiprocessing: Possible uninitialized pointer use in Windows http://bugs.python.org/issue27591 opened by Rosuav Most recent 15 issues with no replies (15) ========================================== #27590: tarfile module next() method hides exceptions http://bugs.python.org/issue27590 #27589: asyncio doc: issue in as_completed() doc http://bugs.python.org/issue27589 #27588: Type (typing) objects are hashable and comparable for equality http://bugs.python.org/issue27588 #27584: New addition of vSockets to the python socket module http://bugs.python.org/issue27584 #27581: Fix overflow check in PySequence_Tuple http://bugs.python.org/issue27581 #27577: Make implementation and doc of tuple and list more compliant http://bugs.python.org/issue27577 #27570: Avoid memcpy(. . ., NULL, 0) etc calls http://bugs.python.org/issue27570 #27566: Tools/freeze/winmakemakefile.py clean target should use 'del' http://bugs.python.org/issue27566 #27565: Offer error context manager for code.interact http://bugs.python.org/issue27565 #27534: IDLE: Reduce number and time for user process imports http://bugs.python.org/issue27534 #27530: Non-Critical Compiler WARNING: Python Embedding C++11 does not http://bugs.python.org/issue27530 #27526: test_venv.TestEnsurePip fails mysteriously when /tmp is too sm http://bugs.python.org/issue27526 #27520: Issue when building PGO http://bugs.python.org/issue27520 #27511: Add PathLike objects support to BZ2File http://bugs.python.org/issue27511 #27505: Missing documentation for setting module __class__ attribute http://bugs.python.org/issue27505 Most recent 15 issues waiting for review (15) ============================================= #27591: multiprocessing: Possible uninitialized pointer use in Windows http://bugs.python.org/issue27591 #27584: New addition of vSockets to the python socket module http://bugs.python.org/issue27584 #27582: Mispositioned SyntaxError 
caret for unknown code points http://bugs.python.org/issue27582 #27581: Fix overflow check in PySequence_Tuple http://bugs.python.org/issue27581 #27574: Faster parsing keyword arguments http://bugs.python.org/issue27574 #27573: code.interact() should print an exit message http://bugs.python.org/issue27573 #27572: Support bytes-like objects when base is given to int() http://bugs.python.org/issue27572 #27570: Avoid memcpy(. . ., NULL, 0) etc calls http://bugs.python.org/issue27570 #27568: "HTTPoxy", use of HTTP_PROXY flag supplied by attacker in CGI http://bugs.python.org/issue27568 #27558: SystemError with bare `raise` in threading or multiprocessing http://bugs.python.org/issue27558 #27546: Integrate tkinter and asyncio (and async) http://bugs.python.org/issue27546 #27544: Document the ABCs for instance/subclass checks of dict view ty http://bugs.python.org/issue27544 #27539: negative Fraction ** negative int not normalized http://bugs.python.org/issue27539 #27536: Convert readme to reStructuredText http://bugs.python.org/issue27536 #27534: IDLE: Reduce number and time for user process imports http://bugs.python.org/issue27534 Top 10 most discussed issues (10) ================================= #23951: Update devguide style to use a similar theme as Docs http://bugs.python.org/issue23951 12 msgs #27558: SystemError with bare `raise` in threading or multiprocessing http://bugs.python.org/issue27558 12 msgs #1621: Do not assume signed integer overflow behavior http://bugs.python.org/issue1621 9 msgs #26662: configure/Makefile doesn't check if "python" command works, ne http://bugs.python.org/issue26662 9 msgs #27582: Mispositioned SyntaxError caret for unknown code points http://bugs.python.org/issue27582 9 msgs #23262: webbrowser module broken with Firefox 36+ http://bugs.python.org/issue23262 8 msgs #27469: Unicode filename gets crippled on Windows when drag and drop http://bugs.python.org/issue27469 8 msgs #27580: CSV Null Byte Error 
http://bugs.python.org/issue27580 8 msgs #24954: No way to generate or parse timezone as produced by datetime.i http://bugs.python.org/issue24954 7 msgs #27561: Warn against subclassing builtins, and overriding their method http://bugs.python.org/issue27561 7 msgs Issues closed (45) ================== #19142: Cross-compile fails trying to execute foreign pgen on build ho http://bugs.python.org/issue19142 closed by martin.panter #21708: Deprecate nonstandard behavior of a dumbdbm database http://bugs.python.org/issue21708 closed by serhiy.storchaka #24034: Make fails Objects/typeslots.inc http://bugs.python.org/issue24034 closed by martin.panter #25393: 'resource' module documentation error http://bugs.python.org/issue25393 closed by python-dev #26207: distutils msvccompiler fails due to mspdb140.dll error on debu http://bugs.python.org/issue26207 closed by steve.dower #26380: Add an http method enum http://bugs.python.org/issue26380 closed by ethan.furman #26559: logging.handlers.MemoryHandler flushes on shutdown but not rem http://bugs.python.org/issue26559 closed by python-dev #26696: Document collections.abc.ByteString http://bugs.python.org/issue26696 closed by brett.cannon #26844: Wrong error message during import http://bugs.python.org/issue26844 closed by brett.cannon #27083: PYTHONCASEOK is ignored on Windows http://bugs.python.org/issue27083 closed by brett.cannon #27309: Visual Styles support to tk/tkinter file and message dialogs http://bugs.python.org/issue27309 closed by steve.dower #27417: Call CoInitializeEx on startup http://bugs.python.org/issue27417 closed by steve.dower #27472: add the 'unix_shell' attribute to test.support http://bugs.python.org/issue27472 closed by xdegaye #27512: os.fspath is certain to crash when exception raised in __fspat http://bugs.python.org/issue27512 closed by brett.cannon #27515: Dotted name re-import does not rebind after deletion http://bugs.python.org/issue27515 closed by terry.reedy #27522: Reference cycle in 
email.feedparser http://bugs.python.org/issue27522 closed by r.david.murray #27523: Silence Socket Depreciation Warnings. http://bugs.python.org/issue27523 closed by Decorater #27525: Wrong OS header on file created by gzip module http://bugs.python.org/issue27525 closed by ddorda #27527: Make not yielding from or awaiting a coroutine a SyntaxError http://bugs.python.org/issue27527 closed by r.david.murray #27528: Document that filterwarnings(message=...) matches the start of http://bugs.python.org/issue27528 closed by martin.panter #27529: Tkinter memory leak on OS X http://bugs.python.org/issue27529 closed by ned.deily #27531: Documentation for assert_not_called() has wrong signature http://bugs.python.org/issue27531 closed by berker.peksag #27532: Dictionary iterator has no len() http://bugs.python.org/issue27532 closed by r.david.murray #27533: release GIL in nt._isdir http://bugs.python.org/issue27533 closed by steve.dower #27537: Segfault Via Resource Exhaustion http://bugs.python.org/issue27537 closed by ned.deily #27538: Segfault on error in code object checking http://bugs.python.org/issue27538 closed by ned.deily #27542: Segfault in gcmodule.c:360 visit_decref http://bugs.python.org/issue27542 closed by ned.deily #27543: from module import function creates package reference to the m http://bugs.python.org/issue27543 closed by brett.cannon #27545: missing pyshellext.vcxproj prevents puilding 3.6 http://bugs.python.org/issue27545 closed by python-dev #27548: Integer Overflow On bin() http://bugs.python.org/issue27548 closed by serhiy.storchaka #27549: Integer Overflow Crash On bytearray() http://bugs.python.org/issue27549 closed by serhiy.storchaka #27550: Integer Overflow Crash On Arithmetic Operations http://bugs.python.org/issue27550 closed by serhiy.storchaka #27551: Integer Overflow On print() http://bugs.python.org/issue27551 closed by serhiy.storchaka #27552: Integer Overflow On min() http://bugs.python.org/issue27552 closed by serhiy.storchaka 
#27553: Integer Overflow On unicode() http://bugs.python.org/issue27553 closed by serhiy.storchaka #27554: Integer Overflow On dir() http://bugs.python.org/issue27554 closed by serhiy.storchaka #27555: Integer Overflow on oct() http://bugs.python.org/issue27555 closed by serhiy.storchaka #27556: Integer overflow on hex() http://bugs.python.org/issue27556 closed by serhiy.storchaka #27557: Integer Overflow on int() http://bugs.python.org/issue27557 closed by serhiy.storchaka #27559: Crash On bytearray() http://bugs.python.org/issue27559 closed by martin.panter #27560: Memory overallocation crash and keyboard interrupt stops worki http://bugs.python.org/issue27560 closed by martin.panter #27563: docs for `gc.get_referrers` should explain the result format ( http://bugs.python.org/issue27563 closed by cool-RR #27567: Add constants EPOLLRDHUP and POLLRDHUP to module select. http://bugs.python.org/issue27567 closed by python-dev #27571: 3.6 Seems to be ignoring the _sodium pyd file made with pip. http://bugs.python.org/issue27571 closed by brett.cannon #27586: Is this a regular expression library bug? http://bugs.python.org/issue27586 closed by tim.peters From himurakenshin54 at gmail.com Fri Jul 22 12:06:04 2016 From: himurakenshin54 at gmail.com (Tian JiaLin) Date: Sat, 23 Jul 2016 00:06:04 +0800 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: Yes, you are right. Definitely "long" in Python can represent a number much bigger than the native type. But the value returned from mysql_affected_rows is within the range 0 ~ 2^64-1. No matter how it is converted, the converted value in Python should also be in the range 0 ~ 2^64 - 1. On Fri, Jul 22, 2016 at 11:50 PM, Eric Snow wrote: > On Fri, Jul 22, 2016 at 3:02 AM, Stefan Ring wrote: > > So to sum this up, you claim that PyLong_FromUnsignedLongLong can > > somehow produce a number larger than the value range of a 64 bit > > number (0x10000000000000180).
I have a hard time believing this. > > Perhaps I misunderstood your meaning, but Python's integers (AKA > "PyLong") can be bigger than a machine-native integer (e.g. 64 bits): > > "All integers are implemented as 'long' integer objects of *arbitrary > size*." (emphasis mine) > > (https://docs.python.org/3.5//c-api/long.html) > > -eric > -- kenshin http://kenbeit.com Just Follow Your Heart -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Fri Jul 22 12:21:29 2016 From: random832 at fastmail.com (Random832) Date: Fri, 22 Jul 2016 12:21:29 -0400 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: <1469204489.1526432.673978057.274F0615@webmail.messagingengine.com> On Fri, Jul 22, 2016, at 11:35, Victor Stinner wrote: > Oh, the first one is a regression that I introduced in the > implementation of the PEP 475 (retry syscall on EINTR). I don't think > that it can be triggered in practice, because socket handles on > Windows are small numbers, so unlikely to be seen as negative. The problem as I understand it isn't that handles will be seen as negative, the problem is that the error return will be seen as *non*-negative. > I just fixed it: > https://hg.python.org/cpython/rev/6c11f52ab9db Does INVALID_SOCKET exist on non-windows systems?
(It's probably safe to compare against -1, the relevant functions are defined in POSIX as returning -1 rather than an unspecified negative value) From random832 at fastmail.com Fri Jul 22 12:23:37 2016 From: random832 at fastmail.com (Random832) Date: Fri, 22 Jul 2016 12:23:37 -0400 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: <1469204617.1526634.673981841.2E4D98C2@webmail.messagingengine.com> On Fri, Jul 22, 2016, at 11:50, Eric Snow wrote: > On Fri, Jul 22, 2016 at 3:02 AM, Stefan Ring wrote: > > So to sum this up, you claim that PyLong_FromUnsignedLongLong can > > somehow produce a number larger than the value range of a 64 bit > > number (0x10000000000000180). I have a hard time believing this. > > Perhaps I misunderstood your meaning, but Python's integers (AKA > "PyLong") can be bigger that a machine-native integer (e.g. 64 bits): Yes, but you can't (or shouldn't be able to) get those values from a conversion from a machine-native type such as PyLong_FromUnsignedLongLong. From victor.stinner at gmail.com Fri Jul 22 13:06:16 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 22 Jul 2016 19:06:16 +0200 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: <1469204489.1526432.673978057.274F0615@webmail.messagingengine.com> References: <1469204489.1526432.673978057.274F0615@webmail.messagingengine.com> Message-ID: 2016-07-22 18:21 GMT+02:00 Random832 : >> I just fixed it: >> https://hg.python.org/cpython/rev/6c11f52ab9db > > Does INVALID_SOCKET exist on non-windows systems? Yes, it was already used in almost all places. When I read again the code, in fact I found other places with "fd < 0" or "fd = -1". 
I fixed more code in a second change: https://hg.python.org/cpython/rev/025281485318 Victor From brett at python.org Fri Jul 22 16:04:10 2016 From: brett at python.org (Brett Cannon) Date: Fri, 22 Jul 2016 20:04:10 +0000 Subject: [Python-Dev] The devguide is now hosted on GitHub Message-ID: https://github.com/python/devguide I have also moved all issues over as well and hooked up Read The Docs so that there's a mirror which is always up-to-date (vs. docs.python.org/devguide which is on a cronjob). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Fri Jul 22 16:23:31 2016 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 22 Jul 2016 14:23:31 -0600 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: Thanks for doing all this, Brett. :) -eric On Fri, Jul 22, 2016 at 2:04 PM, Brett Cannon wrote: > https://github.com/python/devguide > > I have also moved all issues over as well and hooked up Read The Docs so > that there's a mirror which is always up-to-date (vs. > docs.python.org/devguide which is on a cronjob). > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com > From stephane at wirtel.be Fri Jul 22 16:53:31 2016 From: stephane at wirtel.be (Stephane Wirtel) Date: Fri, 22 Jul 2016 22:53:31 +0200 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: Congratulations Brett, Thank you so much for this job. > On 22 juil. 2016, at 10:04 PM, Brett Cannon wrote: > > https://github.com/python/devguide > > I have also moved all issues over as well and hooked up Read The Docs so that there's a mirror which is always up-to-date (vs. docs.python.org/devguide which is on a cronjob). 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/stephane%40wirtel.be -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Fri Jul 22 17:08:21 2016 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 22 Jul 2016 17:08:21 -0400 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/22/2016 04:04 PM, Brett Cannon wrote: > https://github.com/python/devguide > > I have also moved all issues over as well and hooked up Read The Docs > so that there's a mirror which is always up-to-date (vs. > docs.python.org/devguide which is on a cronjob). What is the RTD project name? Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXkos+AAoJEPKpaDSJE9HY6PAP/AzvI1diFpjjhgQARkmCSvrT MugaShX60PGQDUtTTVCmlZg0Ca6mWgbiaX8JS/kYQuc3SI2JIs+lD/mdCydWKYfx Gzks/rzaS60NkXZjb/yW7Vs+2wQo2EWHC/uzKRDGT7m0yijQW0WQaACgWEtSo0v3 6FzIyxQyYi1UVD10Iw7TWCvYxk2F33QXha5hOsq2N3Zs9Vopkj9p2KeViCxs4UuX VT2hZam/X6ZPkEkHlRkuZM4UpYM3Zt5+dmrODI5ieXjsUngvfcVhVvay33tStlH9 DJYGPgAWCzNkiScDCWk8+iXkLqJAQusVms6HbgQcToRj2ySbWdtn+EMFp9Y+baGl GBFQoiHhj1nw9yFf4pGgO4xRyvwc4vfTs7PJnZnOxLI7STaRL6L5TpXSuFGVN0Sw 6AumK4mzXidK4efpROUGmLcc3SjuB266jmYDPmNmrqKtHXTIycEgwIjSeWFrMXOE zxQ/TeKiAIr05np22LyXFmm64ryaZjoXqkPdo1fHh6rp456t3o2rkxk4ghuMH4xs IA4V/LBW1BlWr+4P+JIDP+vhyZ45J5SHKvX3OY1OaRDyWHHTEA7qSic6x6CKfUWv o7cr0Kx6ehdwmDUGMfzcGUCoWoNrydKlh0PM2UEAyX+e6RY/5sq1NiKRfrHrO17L Mznm6AZXZXi3D8MkEEDa =+fVX -----END PGP SIGNATURE----- From brett at python.org Fri Jul 22 17:26:47 2016 From: brett at python.org (Brett 
Cannon) Date: Fri, 22 Jul 2016 21:26:47 +0000 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: It's in the README of the repo, but it's http://cpython-devguide.readthedocs.io/ On Fri, 22 Jul 2016 at 14:09 Tres Seaver wrote: > On 07/22/2016 04:04 PM, Brett Cannon wrote: > > https://github.com/python/devguide > > > > I have also moved all issues over as well and hooked up Read The Docs > > so that there's a mirror which is always up-to-date (vs. > > docs.python.org/devguide which is on a cronjob). > > What is the RTD project name? > > > Tres. > -- > =================================================================== > Tres Seaver +1 540-429-0999 tseaver at palladion.com > Palladion Software "Excellence by Design" http://palladion.com > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML
attachment was scrubbed... URL: From wes.turner at gmail.com Fri Jul 22 18:09:27 2016 From: wes.turner at gmail.com (Wes Turner) Date: Fri, 22 Jul 2016 17:09:27 -0500 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: https://cpython-devguide.readthedocs.io/ Thanks! On Friday, July 22, 2016, Brett Cannon wrote: > It's in the README of the repo, but it's > http://cpython-devguide.readthedocs.io/ > > On Fri, 22 Jul 2016 at 14:09 Tres Seaver > wrote: > >> On 07/22/2016 04:04 PM, Brett Cannon wrote: >> > https://github.com/python/devguide >> > >> > I have also moved all issues over as well and hooked up Read The Docs >> > so that there's a mirror which is always up-to-date (vs. >> > docs.python.org/devguide which is on a cronjob). >> >> What is the RTD project name? >> >> >> Tres. >> -- >> =================================================================== >> Tres Seaver +1 540-429-0999 tseaver at palladion.com >> >> Palladion Software "Excellence by Design" http://palladion.com >> >> 
_______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Fri Jul 22 18:39:37 2016 From: christian at python.org (Christian Heimes) Date: Sat, 23 Jul 2016 00:39:37 +0200 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: On 2016-07-22 17:31, Chris Angelico wrote: > On Sat, Jul 23, 2016 at 12:36 AM, Guido van Rossum wrote: >> Somebody did some research and found some bugs in CPython (IIUC). The >> published some questionable fragments. If there's a volunteer we could >> probably easily fix these. (I know we already have occasional Coverity >> scans and there are other tools too (anybody try lgtm yet?) But this >> seems honest research (also Python leaves Ruby in the dust :-): >> >> http://www.viva64.com/en/b/0414/ > > First and foremost: All of these purported bugs appear to have been > found by compiling on Windows. Does Coverity test a Windows build? If > not, can we get it to? These look like the exact types of errors that > Coverity *would* discover. No, it doesn't. The Coverity Scan builds only run on X86_64 Linux platforms. When I took over Coverity Scan for CPython many years ago it was not possible to support multiple platforms and target with the free edition. I never tried to upload builds from different platforms because I feared that it might play havoc with the scan history. Should I check with Coverity again? Some of these issues have been found by Coverity and I even have patches for them, e.g. N6 is CID#1299595. I have 13 patches that I haven't published and merged yet. None of the issues is critical, though. Since I forgot how to use hg I have been waiting for the github migration. 
Christian -------------- next part -------------- A non-text attachment was scrubbed... Name: 0004-Fix-dereferencing-before-NULL-check-in-_PyState_AddM.patch Type: text/x-patch Size: 1366 bytes Desc: not available URL: From himurakenshin54 at gmail.com Sat Jul 23 00:23:31 2016 From: himurakenshin54 at gmail.com (Tian JiaLin) Date: Sat, 23 Jul 2016 12:23:31 +0800 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: Hey Guys, I found the mistake I made. Basically, I'm using a tool called Sentry to capture the exceptions. The value returned from Python is 2^64-1, which is -1 from mysql_affected_rows. Sentry is using JSON as a kind of storage, and apparently the MAX SAFE INTEGER is 2^53 - 1. Sorry for the careless report of this issue, and thanks for all of your help. On Sat, Jul 23, 2016 at 12:06 AM, Tian JiaLin wrote: > Yes, you are right. Definitely "long" in Python can represent a number > much bigger than the native type. > > But the value returned from mysql_affected_rows is within the range 0 ~ 2^64-1. > No matter how it is converted, the converted value in Python should also be in > the range 0 ~ 2^64 - 1. > > On Fri, Jul 22, 2016 at 11:50 PM, Eric Snow > wrote: > >> On Fri, Jul 22, 2016 at 3:02 AM, Stefan Ring wrote: >> > So to sum this up, you claim that PyLong_FromUnsignedLongLong can >> > somehow produce a number larger than the value range of a 64 bit >> > number (0x10000000000000180). I have a hard time believing this. >> >> Perhaps I misunderstood your meaning, but Python's integers (AKA >> "PyLong") can be bigger than a machine-native integer (e.g. 64 bits): >> >> "All integers are implemented as 'long' integer objects of *arbitrary >> size*."
(emphasis mine) >> >> (https://docs.python.org/3.5//c-api/long.html) >> >> -eric >> > > > > -- > kenshin > > http://kenbeit.com > Just Follow Your Heart > -- kenshin http://kenbeit.com Just Follow Your Heart -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Sat Jul 23 01:11:36 2016 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 22 Jul 2016 22:11:36 -0700 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: References: Message-ID: <1469250696.3860443.674426209.43B7C58F@webmail.messagingengine.com> Should we just make the RTD one canonical and serve redirects on docs.python.org? On Fri, Jul 22, 2016, at 13:04, Brett Cannon wrote: > https://github.com/python/devguide > > I have also moved all issues over as well and hooked up Read The Docs so > that there's a mirror which is always up-to-date (vs. > docs.python.org/devguide which is on a cronjob). > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/benjamin%40python.org From ronaldoussoren at mac.com Sat Jul 23 08:26:31 2016 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sat, 23 Jul 2016 14:26:31 +0200 Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses Message-ID: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com> Hi, It's getting a tradition for me to work on PEP 447 during the EuroPython sprints and disappear afterwards. Hopefully I can manage to avoid the latter step this year. Last year the conclusion appeared to be that this is an acceptable PEP, but Mark Shannon had a concern about a default implementation for __getdescriptor__ on type in (follow the link for more context): > "__getdescriptor__" is fundamentally different from "__getattribute__" in that > it is defined in terms of itself.
> > object.__getattribute__ is defined in terms of type.__getattribute__, but > type.__getattribute__ just does > dictionary lookups. However defining type.__getattribute__ in terms of > __descriptor__ causes a circularity as > __descriptor__ has to be looked up on a type. > > So, not only must the cycle be broken by special casing "type", but that > "__getdescriptor__" can be defined > not only by a subclass, but also a metaclass that uses "__getdescriptor__" to > define "__getdescriptor__" on the class. > (and so on for meta-meta classes, etc.) My reaction that year is in . As I wrote there I did not fully understand the concerns Mark has, probably because I'm focussed too much on the implementation in CPython. If removing type.__getdescriptor__ and leaving this special method as an optional hook for subclasses of type fixes the conceptual concerns then that's fine by me. I used type.__getdescriptor__ as the default implementation both because it appears to be cleaner to me and because this gives subclasses an easy way to access the default implementation. The implementation of the PEP in issue 18181 does special-case type.__getdescriptor__ but only as an optimisation, the code would work just as well without that special casing because the normal attribute lookup machinery is not used when accessing special methods written in C. That is, the implementation of object.__getattribute__ directly accesses fields of the type struct at the C level. Some magic behavior appears to be necessary even without the addition of __getdescriptor__ (type is a subclass of itself, object.__getattribute__ has direct access to dict.__getitem__, ...). I'm currently working on getting the patch in 18181 up-to-date w.r.t. the current trunk, the patch in the issue no longer applies cleanly.
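The proposed lookup can be modelled in pure Python, which may make the discussion more concrete. The sketch below is illustrative only: the class names and the "dyn_" prefix are invented for this example, and the real implementation lives in C inside the attribute lookup machinery (which would also invoke __get__ on the result, a step omitted here).

```python
class Meta(type):
    def __getdescriptor__(cls, name):
        # Default behaviour sketched by the PEP: a plain dictionary
        # lookup, which is what the lookup machinery effectively
        # does today.
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None


class LazyMeta(Meta):
    def __getdescriptor__(cls, name):
        # A metaclass can synthesize attributes on demand (the PyObjC
        # use case). The "dyn_" prefix is purely illustrative.
        try:
            return super().__getdescriptor__(name)
        except AttributeError:
            if name.startswith("dyn_"):
                return lambda self: name
            raise


def find_descriptor(obj, name):
    # Model of the modified lookup: walk the MRO and ask each class's
    # metaclass for the attribute instead of reading cls.__dict__
    # directly.
    for cls in type(obj).__mro__:
        lookup = getattr(type(cls), "__getdescriptor__", None)
        if lookup is None:
            # No hook on this metaclass: today's direct __dict__ access.
            if name in cls.__dict__:
                return cls.__dict__[name]
            continue
        try:
            return lookup(cls, name)
        except AttributeError:
            continue
    raise AttributeError(name)


class C(metaclass=LazyMeta):
    x = 1
```

With this model, find_descriptor(C(), "x") finds the regular class attribute, while find_descriptor(C(), "dyn_color") is answered by the metaclass hook even though nothing named "dyn_color" exists in any __dict__.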
After that I'll try to think up some tests that seriously try to break the new behaviour, and I want to update a patch I have for PyObjC to make use of the new functionality to make sure that the PEP actually fixes the issues I had w.r.t. builtin.super's behavior. What is the best way forward after that? As before this is a change in behavior that, unsurprisingly, few core devs appear to be comfortable with evaluating, combined with new functionality that will likely see little use beyond PyObjC (although my opinions of that shouldn't carry much weight, I thought that decorators would have limited appeal when those were introduced and couldn't have been more wrong about that). Ronald P.S. The PEP itself: -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Sat Jul 23 09:22:47 2016 From: christian at python.org (Christian Heimes) Date: Sat, 23 Jul 2016 15:22:47 +0200 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: On 2016-07-22 16:36, Guido van Rossum wrote: > Somebody did some research and found some bugs in CPython (IIUC). They > published some questionable fragments. If there's a volunteer we could > probably easily fix these. (I know we already have occasional Coverity > scans and there are other tools too (anybody try lgtm yet?) But this > seems honest research (also Python leaves Ruby in the dust :-): > > http://www.viva64.com/en/b/0414/ I had a closer look at the report. About half of the bugs, maybe more, are not in the C code of CPython but in OpenSSL code. I really mean OpenSSL code, not _ssl.c and _hashopenssl.c. It's safe to assume that they forgot to exclude external dependencies. The issues in ASN1_PRINTABLE_type() [N2], BN_mask_bits() [N4 bn_lib.c, digest.c, evp_enc.c], dh_cms_set_peerkey() [N5, dh_ameth.c] and cms_env_set_version() [N6, cms_env.c] are all OpenSSL issues and should be reported to OpenSSL.
Guido, did the company contact you or do you have Pavel Belikov's email address? Christian From vadmium+py at gmail.com Sat Jul 23 09:43:53 2016 From: vadmium+py at gmail.com (Martin Panter) Date: Sat, 23 Jul 2016 13:43:53 +0000 Subject: [Python-Dev] Should we fix these errors? In-Reply-To: References: Message-ID: FYI there is also a bug tracker report about this: https://bugs.python.org/issue27587 On 23 July 2016 at 13:22, Christian Heimes wrote: > On 2016-07-22 16:36, Guido van Rossum wrote: >> Somebody did some research and found some bugs in CPython (IIUC). The >> published some questionable fragments. If there's a volunteer we could >> probably easily fix these. (I know we already have occasional Coverity >> scans and there are other tools too (anybody try lgtm yet?) But this >> seems honest research (also Python leaves Ruby in the dust :-): >> >> http://www.viva64.com/en/b/0414/ > > I had a closer look at the report. About half of the bugs, maybe more > are not in the C code of CPython but in OpenSSL code. I really mean > OpenSSL code, not _ssl.c and _hashopenssl.c. It's safe to assume that > they forgot to exclude external dependencies. > > The issues in ASN1_PRINTABLE_type() [N2], BN_mask_bits() [N4 bn_lib.c, > digest.c, evp_enc.c], dh_cms_set_peerkey() [N5, dh_ameth.c] and > cms_env_set_version() [N6, cms_env.c] are all OpenSSL issues and should > be reported to OpenSSL. > > Guido, did the company contact you or do you have Pavel Belikov's email > address? Perhaps you can contact him via the email address at . 
From brett at python.org Sat Jul 23 11:40:41 2016 From: brett at python.org (Brett Cannon) Date: Sat, 23 Jul 2016 15:40:41 +0000 Subject: [Python-Dev] The devguide is now hosted on GitHub In-Reply-To: <1469250696.3860443.674426209.43B7C58F@webmail.messagingengine.com> References: <1469250696.3860443.674426209.43B7C58F@webmail.messagingengine.com> Message-ID: On Fri, 22 Jul 2016 at 22:11 Benjamin Peterson wrote: > Should we just make the RTD one canonical and serve redirects on > docs.python.org? > My hope was to eventually have dev.python.org point at the RTFD instance, but I'm not up for spearheading that ATM and I would want the PSF to pay for a gold membership for the account (currently it's just my personal free account). -Brett > > On Fri, Jul 22, 2016, at 13:04, Brett Cannon wrote: > > https://github.com/python/devguide > > > > I have also moved all issues over as well and hooked up Read The Docs so > > that there's a mirror which is always up-to-date (vs. > > docs.python.org/devguide which is on a cronjob). > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > https://mail.python.org/mailman/options/python-dev/benjamin%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jul 23 11:42:55 2016 From: brett at python.org (Brett Cannon) Date: Sat, 23 Jul 2016 15:42:55 +0000 Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses In-Reply-To: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com> References: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com> Message-ID: On Sat, 23 Jul 2016 at 05:27 Ronald Oussoren wrote: > [SNIP] > > What is the best way forward after that? 
As before this is a change in > behavior that, unsurprisingly, few core devs appear to be comfortable with > evaluating, combined with new functionality that will likely see little use > beyond PyObjC (although my opinions of that shouldn?t carry much weight, I > thought that decorators would have limited appeal when those where > introduced and couldn?t have been more wrong about that). > > Ronald > > P.S. The PEP itself: If the PEP is ready to be reviewed after that then either getting Guido to pronounce or for him to assign a BDFL delegate. -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Sat Jul 23 11:58:35 2016 From: python at mrabarnett.plus.com (MRAB) Date: Sat, 23 Jul 2016 16:58:35 +0100 Subject: [Python-Dev] Convert from unsigned long long to PyLong In-Reply-To: References: Message-ID: <85506d9c-f494-2297-da0d-f11a7411834e@mrabarnett.plus.com> On 2016-07-23 05:23, Tian JiaLin wrote: > Hey Guys, > > I found the mistake I made, basically I'm using a tool called Sentry to > capture the exceptions. > The value returned from the Python is 2^64-1, which is -1 > from mysql_affected_rows. > Sentry is using JSON format as the a kind of storage, apparently the MAX > SAFE INTEGER > is > 2^53 -1. > [snip] JSON itself doesn't put a limit on the size of integers. That link is about how JavaScript handles JSON. From steve.dower at python.org Sat Jul 23 15:16:15 2016 From: steve.dower at python.org (Steve Dower) Date: Sat, 23 Jul 2016 12:16:15 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry Message-ID: <5793C27F.6060300@python.org> PEP 514 is now ready for pronouncement, so this is the last chance for any feedback (BDFL-delegate Paul has been active on the github PR, so I don't expect he has a lot of feedback left). The most major change from the previous post is the addition of some code examples at the end. 
Honestly, I don't expect many tools written in Python to be scanning the registry (since once you're in Python you probably don't need to find it), but hopefully they'll help clarify the PEP for people who prefer code. Full text below. Cheers, Steve ---------- PEP: 514 Title: Python registration in the Windows registry Version: $Revision$ Last-Modified: $Date$ Author: Steve Dower Status: Draft Type: Informational Content-Type: text/x-rst Created: 02-Feb-2016 Post-History: 02-Feb-2016, 01-Mar-2016, 18-Jul-2016 Abstract ======== This PEP defines a schema for the Python registry key to allow third-party installers to register their installation, and to allow tools and applications to detect and correctly display all Python environments on a user's machine. No implementation changes to Python are proposed with this PEP. Python environments are not required to be registered unless they want to be automatically discoverable by external tools. As this relates to Windows only, these tools are expected to be predominantly GUI applications. However, console applications may also make use of the registered information. This PEP covers the information that may be made available, but the actual presentation and use of this information is left to the tool designers. The schema matches the registry values that have been used by the official installer since at least Python 2.5, and the resolution behaviour matches the behaviour of the official Python releases. Some backwards compatibility rules are provided to ensure tools can correctly detect versions of CPython that do not register full information. Motivation ========== When installed on Windows, the official Python installer creates a registry key for discovery and detection by other applications. This allows tools such as installers or IDEs to automatically detect and display a user's Python installations. 
For example, the PEP 397 ``py.exe`` launcher and editors such as PyCharm and Visual Studio already make use of this information. Third-party installers, such as those used by distributions, typically create identical keys for the same purpose. Most tools that use the registry to detect Python installations only inspect the keys used by the official installer. As a result, third-party installations that wish to be discoverable will overwrite these values, often causing users to "lose" their original Python installation. By describing a layout for registry keys that allows third-party installations to register themselves uniquely, as well as providing tool developers guidance for discovering all available Python installations, these collisions should be prevented. We also take the opportunity to add some well-known metadata so that more information can be presented to users. Definitions =========== A "registry key" is the equivalent of a file-system path into the registry. Each key may contain "subkeys" (keys nested within keys) and "values" (named and typed attributes attached to a key). These are used on Windows to store settings in much the same way that directories containing configuration files would work. ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in user, and this user can generally read and write all settings under this root. ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any user can read these settings but only administrators can modify them. It is typical for values under ``HKEY_CURRENT_USER`` to take precedence over those in ``HKEY_LOCAL_MACHINE``. On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special key that 32-bit processes transparently read and write to rather than accessing the ``Software`` key directly. Further documentation regarding registry redirection on Windows is available from the MSDN Library [1]_. 
Structure ========= We consider there to be a single collection of Python environments on a machine, where the collection may be different for each user of the machine. There are three potential registry locations where the collection may be stored based on the installation options of each environment:: HKEY_CURRENT_USER\Software\Python\<Company>\<Tag> HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag> Official Python releases use ``PythonCore`` for Company, and the value of ``sys.winver`` for Tag. The Company ``PyLauncher`` is reserved. Other registered environments may use any values for Company and Tag. Recommendations are made later in this document. Company-Tag pairs are case-insensitive, and uniquely identify each environment. Depending on the purpose and intended use of a tool, there are two suggested approaches for resolving conflicts between Company-Tag pairs. Tools that list every installed environment may choose to include those even where the Company-Tag pairs match. They should ensure users can easily identify whether the registration was per-user or per-machine, and which registration has the higher priority. Tools that aim to select a single installed environment from all registered environments based on the Company-Tag pair, such as the ``py.exe`` launcher, should always select the environment registered in ``HKEY_CURRENT_USER`` rather than the matching one in ``HKEY_LOCAL_MACHINE``. Conflicts between ``HKEY_LOCAL_MACHINE\Software\Python`` and ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` should only occur when both 64-bit and 32-bit versions of an interpreter have the same Tag. In this case, the tool should select whichever is more appropriate for its use. If a tool is able to determine from the provided information (or lack thereof) that it cannot use a registered environment, there is no obligation to present it to users.
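The resolution rule above can be sketched in Python. The registration tuples and helper names below are hypothetical (a real tool would enumerate the keys with the ``winreg`` module), but the precedence logic follows the text: per-user registrations win over per-machine ones, and Company-Tag comparison is case-insensitive.

```python
# Hypothetical pre-enumerated registrations as (hive, company, tag)
# tuples; a real tool would collect these with the winreg module.
REGISTRATIONS = [
    ("HKEY_LOCAL_MACHINE", "PythonCore", "3.5"),
    ("HKEY_CURRENT_USER", "PythonCore", "3.5"),
    ("HKEY_LOCAL_MACHINE", "ExampleCorp", "example35"),
]

# Lower number wins: per-user registrations take priority.
HIVE_PRIORITY = {"HKEY_CURRENT_USER": 0, "HKEY_LOCAL_MACHINE": 1}


def select_environment(registrations, company, tag):
    """Pick a single environment for a Company-Tag pair, preferring
    HKEY_CURRENT_USER, as the py.exe launcher is expected to."""
    wanted = (company.lower(), tag.lower())  # pairs are case-insensitive
    matches = [
        reg for reg in registrations
        if (reg[1].lower(), reg[2].lower()) == wanted
    ]
    if not matches:
        raise LookupError("%s\\%s is not registered" % (company, tag))
    return min(matches, key=lambda reg: HIVE_PRIORITY[reg[0]])
```

For example, select_environment(REGISTRATIONS, "pythoncore", "3.5") returns the per-user entry even though a per-machine one with the same pair exists.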
Except as discussed in the section on backwards compatibility, Company and Tag
values are considered opaque to tools, and no information about the interpreter
should be inferred from the text. However, some tools may display the Company
and Tag values to users, so ideally the Tag will be able to help users identify
the associated environment.

Python environments are not required to register themselves unless they want to
be automatically discoverable by external tools.

Backwards Compatibility
-----------------------

Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds in
``sys.winver``. As a result, it is not possible to have valid side-by-side
installations of both 32-bit and 64-bit interpreters under this scheme since it
would result in duplicate Tags.

To ensure backwards compatibility, applications should treat environments listed
under the following two registry keys as distinct, even when the Tag matches::

    HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag>
    HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag>

Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct from
both of the above keys, potentially resulting in three environments discovered
using the same Tag. Alternatively, a tool may determine whether the per-user
environment is 64-bit or 32-bit and give it priority over the per-machine
environment, resulting in a maximum of two discovered environments.

It is not possible to detect side-by-side installations of both 64-bit and
32-bit versions of Python prior to 3.5 when they have been installed for the
current user. Python 3.5 and later always uses different Tags for 64-bit and
32-bit versions.

The following sections describe user-visible information that may be registered.
For Python 3.5 and earlier, none of this information is available, but
alternative defaults are specified for the ``PythonCore`` key.
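The distinct-environment rule above can be expressed as a small identity
function. This is a sketch with hypothetical names, not part of the PEP: for
legacy ``PythonCore`` keys under ``HKEY_LOCAL_MACHINE``, the registry view is
part of the environment's identity, while for other Companies it is not:

```python
# Sketch (not part of the PEP) of the backwards-compatibility rule:
# legacy PythonCore registrations under the two per-machine views are
# distinct environments even when their Tags match.

def environment_id(hive, view, company, tag):
    """Return a hashable identity for a registration.

    `view` is '64bit' or '32bit', i.e. whether the registration was found
    under Software or Software\\Wow6432Node. It only matters for
    per-machine PythonCore keys, where the same Tag may legitimately
    appear in both views for Python 3.4 and earlier.
    """
    company, tag = company.lower(), tag.lower()
    if company == 'pythoncore' and hive == 'HKEY_LOCAL_MACHINE':
        return (hive, view, company, tag)
    # Other Companies must use distinct Tags, so the view is ignored.
    return (hive, company, tag)

a = environment_id('HKEY_LOCAL_MACHINE', '64bit', 'PythonCore', '2.7')
b = environment_id('HKEY_LOCAL_MACHINE', '32bit', 'PythonCore', '2.7')
print(a != b)  # -> True: two distinct environments
```

Such an identity could serve as the membership key for the ``seen`` set in the
sample code later in this PEP, in place of the bare (company, tag) pair.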
Environments registered under other Company names have no backward compatibility
requirements and must use distinct Tags to support side-by-side installations.
Tools consuming these registrations are not required to disambiguate tags other
than by preferring the user's setting.

Company
-------

The Company part of the key is intended to group related environments and to
ensure that Tags are namespaced appropriately. The key name should be
alphanumeric without spaces and likely to be unique. For example, a trademarked
name (preferred), a hostname, or as a last resort, a UUID would be
appropriate::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp
    HKEY_CURRENT_USER\Software\Python\www.example.com
    HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60

The company name ``PyLauncher`` is reserved for the PEP 397 launcher
(``py.exe``). It does not follow this convention and should be ignored by tools.

If a string value named ``DisplayName`` exists, it should be used to identify
the environment manufacturer/developer/distributor to users. Otherwise, the name
of the key should be used. (For ``PythonCore``, the default display name is
"Python Software Foundation".)

If a string value named ``SupportUrl`` exists, it may be displayed or otherwise
used to direct users to a web site related to the environment. (For
``PythonCore``, the default support URL is "http://www.python.org/".)

A complete example may look like::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp
        (Default) = (value not set)
        DisplayName = "Example Corp"
        SupportUrl = "http://www.example.com"

Tag
---

The Tag part of the key is intended to uniquely identify an environment within
those provided by a single company. The key name should be alphanumeric without
spaces and stable across installations. For example, the Python language
version, a UUID or a partial/complete hash would be appropriate, while a Tag
based on the install directory or some aspect of the current machine may not.
For example::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy
    HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6
    HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66

It is expected that some tools will require users to type the Tag into a command
line, and that the Company may be optional provided the Tag is unique across all
Python installations. Short, human-readable and easy to type Tags are
recommended, and if possible, select a value likely to be unique across all
other Companies.

If a string value named ``DisplayName`` exists, it should be used to identify
the environment to users. Otherwise, the name of the key should be used. (For
``PythonCore``, the default is "Python " followed by the Tag.)

If a string value named ``SupportUrl`` exists, it may be displayed or otherwise
used to direct users to a web site related to the environment. (For
``PythonCore``, the default is "http://www.python.org/".)

If a string value named ``Version`` exists, it should be used to identify the
version of the environment. This is independent of the version of Python
implemented by the environment. (For ``PythonCore``, the default is the first
three characters of the Tag.)

If a string value named ``SysVersion`` exists, it must be in ``x.y`` or
``x.y.z`` format matching the version returned by ``sys.version_info`` in the
interpreter. If omitted, the Python version is unknown. (For ``PythonCore``,
the default is the first three characters of the Tag.)

If a string value named ``SysArchitecture`` exists, it must match the first
element of the tuple returned by ``platform.architecture()``. Typically, this
will be "32bit" or "64bit". If omitted, the architecture is unknown.
(For ``PythonCore``, the architecture is "32bit" when registered under
``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` *or* anywhere on a 32-bit
operating system, "64bit" when registered under
``HKEY_LOCAL_MACHINE\Software\Python`` on a 64-bit machine, and unknown when
registered under ``HKEY_CURRENT_USER``.)

Note that each of these values is recommended, but optional. Omitting
``SysVersion`` or ``SysArchitecture`` may prevent some tools from correctly
supporting the environment. A complete example may look like this::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy
        (Default) = (value not set)
        DisplayName = "Example Py Distro 3"
        SupportUrl = "http://www.example.com/distro-3"
        Version = "3.0.12345.0"
        SysVersion = "3.6.0"
        SysArchitecture = "64bit"

InstallPath
-----------

Beneath the environment key, an ``InstallPath`` key must be created. This key is
always named ``InstallPath``, and the default value must match ``sys.prefix``::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath
        (Default) = "C:\ExampleCorpPy36"

If a string value named ``ExecutablePath`` exists, it must be the full path to
the ``python.exe`` (or equivalent) executable. If omitted, the environment is
not executable. (For ``PythonCore``, the default is the ``python.exe`` file in
the directory referenced by the ``(Default)`` value.)

If a string value named ``ExecutableArguments`` exists, tools should use the
value as the first arguments when executing ``ExecutablePath``. Tools may add
other arguments following these, and will reasonably expect standard Python
command line options to be available.

If a string value named ``WindowedExecutablePath`` exists, it must be a path to
the ``pythonw.exe`` (or equivalent) executable. If omitted, the default is the
value of ``ExecutablePath``, and if that is omitted the environment is not
executable. (For ``PythonCore``, the default is the ``pythonw.exe`` file in the
directory referenced by the ``(Default)`` value.)
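The executable-resolution rules above, including the ``PythonCore`` fallbacks,
can be sketched as follows. The function name is illustrative and not part of
the PEP; ``ntpath`` is used so the Windows-style path joins behave the same on
any platform the tool happens to run on:

```python
import ntpath  # Windows path semantics, available on every platform

def resolve_executables(values, company):
    """Apply the ExecutablePath/WindowedExecutablePath rules to the
    values found under an InstallPath key.

    `values` maps value names to strings; the default (unnamed) value of
    the InstallPath key is stored under the key None.
    """
    prefix = values.get(None)
    exe = values.get('ExecutablePath')
    exew = values.get('WindowedExecutablePath')
    if company.lower() == 'pythoncore' and prefix:
        # PythonCore registrations fall back to python.exe/pythonw.exe
        # inside the directory named by the (Default) value.
        exe = exe or ntpath.join(prefix, 'python.exe')
        exew = exew or ntpath.join(prefix, 'pythonw.exe')
    else:
        # Other registrations have no such defaults; a windowed launch
        # falls back to the console executable.
        exew = exew or exe
    return exe, exew  # (None, None) means the environment is not executable

exe, exew = resolve_executables({None: r'C:\Py36'}, 'PythonCore')
print(exe)   # -> C:\Py36\python.exe
print(exew)  # -> C:\Py36\pythonw.exe
```

A real tool would populate ``values`` by reading the registry; this sketch only
captures the fallback logic so it can be reasoned about in isolation.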
If a string value named ``WindowedExecutableArguments`` exists, tools should use
the value as the first arguments when executing ``WindowedExecutablePath``.
Tools may add other arguments following these, and will reasonably expect
standard Python command line options to be available.

A complete example may look like::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy\InstallPath
        (Default) = "C:\ExampleDistro30"
        ExecutablePath = "C:\ExampleDistro30\ex_python.exe"
        ExecutableArguments = "--arg1"
        WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe"
        WindowedExecutableArguments = "--arg1"

Help
----

Beneath the environment key, a ``Help`` key may be created. This key is always
named ``Help`` if present and has no default value.

Each subkey of ``Help`` specifies a documentation file, tool, or URL associated
with the environment. The subkey may have any name, and the default value is a
string appropriate for passing to ``os.startfile`` or equivalent.

If a string value named ``DisplayName`` exists, it should be used to identify
the help file to users. Otherwise, the key name should be used.

A complete example may look like::

    HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help
        Python\
            (Default) = "C:\ExampleDistro30\python36.chm"
            DisplayName = "Python Documentation"
        Extras\
            (Default) = "http://www.example.com/tutorial"
            DisplayName = "Example Distro Online Tutorial"

Other Keys
----------

All other subkeys under a Company-Tag pair are available for private use.

Official CPython releases have traditionally used certain keys in this space to
determine the location of the Python standard library and other installed
modules. This behaviour is retained primarily for backward compatibility.
However, as the code that reads these values is embedded into the interpreter,
third-party distributions may be affected by values written into ``PythonCore``
if using an unmodified interpreter.
Sample Code
===========

This sample code enumerates the registry and displays the available Company-Tag
pairs that could be used to launch an environment and the target executable. It
only shows the most-preferred target for the tag. Backwards-compatible handling
of ``PythonCore`` is omitted but shown in a later example::

    # Display most-preferred environments.
    # Assumes a 64-bit operating system
    # Does not correctly handle PythonCore compatibility

    import winreg

    def enum_keys(key):
        i = 0
        while True:
            try:
                yield winreg.EnumKey(key, i)
            except OSError:
                break
            i += 1

    def get_value(key, value_name):
        try:
            # QueryValueEx reads a named value (QueryValue would instead
            # look for a subkey of that name) and returns (value, type).
            return winreg.QueryValueEx(key, value_name)[0]
        except FileNotFoundError:
            return None

    seen = set()
    for hive, key, flags in [
        (winreg.HKEY_CURRENT_USER, r'Software\Python', 0),
        (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', winreg.KEY_WOW64_64KEY),
        (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', winreg.KEY_WOW64_32KEY),
    ]:
        with winreg.OpenKeyEx(hive, key, access=winreg.KEY_READ | flags) as root_key:
            for company in enum_keys(root_key):
                if company == 'PyLauncher':
                    continue

                with winreg.OpenKey(root_key, company) as company_key:
                    for tag in enum_keys(company_key):
                        if (company, tag) in seen:
                            if company == 'PythonCore':
                                # TODO: Backwards compatibility handling
                                pass
                            continue
                        seen.add((company, tag))

                        try:
                            with winreg.OpenKey(company_key, tag + r'\InstallPath') as ip_key:
                                exec_path = get_value(ip_key, 'ExecutablePath')
                                exec_args = get_value(ip_key, 'ExecutableArguments')
                                if company == 'PythonCore' and not exec_path:
                                    # TODO: Backwards compatibility handling
                                    pass
                        except OSError:
                            exec_path, exec_args = None, None

                        if exec_path:
                            print('{}\\{} - {} {}'.format(company, tag, exec_path, exec_args or ''))
                        else:
                            print('{}\\{} - (not executable)'.format(company, tag))

This example only scans ``PythonCore`` entries for the current user. Where data
is missing, the defaults as described earlier in the PEP are substituted.
Note that these defaults are only for use under ``PythonCore``; other
registrations do not have any default values::

    # Only lists per-user PythonCore registrations
    # Uses fallback values as described in PEP 514

    import os
    import winreg

    def enum_keys(key):
        i = 0
        while True:
            try:
                yield winreg.EnumKey(key, i)
            except OSError:
                break
            i += 1

    def get_value(key, value_name):
        try:
            # QueryValueEx reads a named value, or the default value when
            # value_name is None, and returns (value, type).
            return winreg.QueryValueEx(key, value_name)[0]
        except FileNotFoundError:
            return None

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                        r"Software\Python\PythonCore") as company_key:
        print('Company:', get_value(company_key, 'DisplayName') or
              'Python Software Foundation')
        print('Support:', get_value(company_key, 'SupportUrl') or
              'http://www.python.org/')
        print()

        for tag in enum_keys(company_key):
            with winreg.OpenKey(company_key, tag) as tag_key:
                print('PythonCore\\' + tag)
                print('Name:', get_value(tag_key, 'DisplayName') or
                      ('Python ' + tag))
                print('Support:', get_value(tag_key, 'SupportUrl') or
                      'http://www.python.org/')
                print('Version:', get_value(tag_key, 'Version') or tag[:3])
                print('SysVersion:', get_value(tag_key, 'SysVersion') or tag[:3])
                # Architecture is unknown because we are in HKCU
                # Tools may use alternate approaches to determine architecture
                # when the registration does not specify it.
                print('SysArchitecture:', get_value(tag_key, 'SysArchitecture')
                      or '(unknown)')

            try:
                ip_key = winreg.OpenKey(company_key, tag + '\\InstallPath')
            except FileNotFoundError:
                pass
            else:
                with ip_key:
                    ip = get_value(ip_key, None)
                    exe = get_value(ip_key, 'ExecutablePath') or \
                        os.path.join(ip, 'python.exe')
                    # The windowed executable falls back to pythonw.exe.
                    exew = get_value(ip_key, 'WindowedExecutablePath') or \
                        os.path.join(ip, 'pythonw.exe')
                    print('InstallPath:', ip)
                    print('ExecutablePath:', exe)
                    print('WindowedExecutablePath:', exew)
            print()

This example shows a subset of the registration that will be created by a
just-for-me install of 64-bit Python 3.6.0.
Other keys may also be created::

    HKEY_CURRENT_USER\Software\Python\PythonCore
        (Default) = (value not set)
        DisplayName = "Python Software Foundation"
        SupportUrl = "http://www.python.org/"

    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6
        (Default) = (value not set)
        DisplayName = "Python 3.6 (64-bit)"
        SupportUrl = "http://www.python.org/"
        Version = "3.6.0"
        SysVersion = "3.6"
        SysArchitecture = "64bit"

    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\Help\Main Python Documentation
        (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\Doc\python360.chm"
        DisplayName = "Python 3.6.0 Documentation"

    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\InstallPath
        (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\"
        ExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\python.exe"
        WindowedExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\pythonw.exe"

References
==========

.. [1] Registry Redirector (Windows)
   (https://msdn.microsoft.com/en-us/library/windows/desktop/aa384232.aspx)

Copyright
=========

This document has been placed in the public domain.

From guido at python.org  Sat Jul 23 16:20:50 2016
From: guido at python.org (Guido van Rossum)
Date: Sat, 23 Jul 2016 13:20:50 -0700
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To: <5793C27F.6060300@python.org>
References: <5793C27F.6060300@python.org>
Message-ID:

I'll let Paul pronounce. But you should probably have a BDFL-Delegate: ...
header.

On Sat, Jul 23, 2016 at 12:16 PM, Steve Dower wrote:
> PEP 514 is now ready for pronouncement, so this is the last chance for any
> feedback (BDFL-delegate Paul has been active on the github PR, so I don't
> expect he has a lot of feedback left).
>
> The most major change from the previous post is the addition of some code
> examples at the end.
> Honestly, I don't expect many tools written in Python to be scanning the
> registry (since once you're in Python you probably don't need to find it),
> but hopefully they'll help clarify the PEP for people who prefer code.
>
> Full text below.
>
> Cheers,
> Steve
>
> ----------
>
> PEP: 514
> Title: Python registration in the Windows registry
> Version: $Revision$
> Last-Modified: $Date$
> Author: Steve Dower
> Status: Draft
> Type: Informational
> Content-Type: text/x-rst
> Created: 02-Feb-2016
> Post-History: 02-Feb-2016, 01-Mar-2016, 18-Jul-2016
>
> Abstract
> ========
>
> This PEP defines a schema for the Python registry key to allow third-party
> installers to register their installation, and to allow tools and
> applications to detect and correctly display all Python environments on a
> user's machine. No implementation changes to Python are proposed with this
> PEP.
>
> Python environments are not required to be registered unless they want to be
> automatically discoverable by external tools. As this relates to Windows
> only, these tools are expected to be predominantly GUI applications. However,
> console applications may also make use of the registered information. This
> PEP covers the information that may be made available, but the actual
> presentation and use of this information is left to the tool designers.
>
> The schema matches the registry values that have been used by the official
> installer since at least Python 2.5, and the resolution behaviour matches the
> behaviour of the official Python releases. Some backwards compatibility rules
> are provided to ensure tools can correctly detect versions of CPython that do
> not register full information.
>
> Motivation
> ==========
>
> When installed on Windows, the official Python installer creates a registry
> key for discovery and detection by other applications. This allows tools such
> as installers or IDEs to automatically detect and display a user's Python
> installations.
For example, the PEP 397 ``py.exe`` launcher and editors such > as > PyCharm and Visual Studio already make use of this information. > > Third-party installers, such as those used by distributions, typically > create > identical keys for the same purpose. Most tools that use the registry to > detect > Python installations only inspect the keys used by the official installer. > As a > result, third-party installations that wish to be discoverable will > overwrite > these values, often causing users to "lose" their original Python > installation. > > By describing a layout for registry keys that allows third-party > installations > to register themselves uniquely, as well as providing tool developers > guidance > for discovering all available Python installations, these collisions should > be > prevented. We also take the opportunity to add some well-known metadata so > that > more information can be presented to users. > > Definitions > =========== > > A "registry key" is the equivalent of a file-system path into the registry. > Each > key may contain "subkeys" (keys nested within keys) and "values" (named and > typed attributes attached to a key). These are used on Windows to store > settings > in much the same way that directories containing configuration files would > work. > > ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in > user, > and this user can generally read and write all settings under this root. > > ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any > user can read these settings but only administrators can modify them. It is > typical for values under ``HKEY_CURRENT_USER`` to take precedence over those > in > ``HKEY_LOCAL_MACHINE``. > > On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special > key > that 32-bit processes transparently read and write to rather than accessing > the > ``Software`` key directly. 
> > Further documentation regarding registry redirection on Windows is available > from the MSDN Library [1]_. > > Structure > ========= > > We consider there to be a single collection of Python environments on a > machine, > where the collection may be different for each user of the machine. There > are > three potential registry locations where the collection may be stored based > on > the installation options of each environment:: > > HKEY_CURRENT_USER\Software\Python\\ > HKEY_LOCAL_MACHINE\Software\Python\\ > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\\ > > Official Python releases use ``PythonCore`` for Company, and the value of > ``sys.winver`` for Tag. The Company ``PyLauncher`` is reserved. Other > registered > environments may use any values for Company and Tag. Recommendations are > made > later in this document. > > Company-Tag pairs are case-insensitive, and uniquely identify each > environment. > Depending on the purpose and intended use of a tool, there are two suggested > approaches for resolving conflicts between Company-Tag pairs. > > Tools that list every installed environment may choose to include those > even where the Company-Tag pairs match. They should ensure users can easily > identify whether the registration was per-user or per-machine, and which > registration has the higher priority. > > Tools that aim to select a single installed environment from all registered > environments based on the Company-Tag pair, such as the ``py.exe`` launcher, > should always select the environment registered in ``HKEY_CURRENT_USER`` > when > than the matching one in ``HKEY_LOCAL_MACHINE``. > > Conflicts between ``HKEY_LOCAL_MACHINE\Software\Python`` and > ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` should only occur when > both > 64-bit and 32-bit versions of an interpreter have the same Tag. In this > case, > the tool should select whichever is more appropriate for its use. 
> > If a tool is able to determine from the provided information (or lack > thereof) > that it cannot use a registered environment, there is no obligation to > present > it to users. > > Except as discussed in the section on backwards compatibility, Company and > Tag > values are considered opaque to tools, and no information about the > interpreter > should be inferred from the text. However, some tools may display the > Company > and Tag values to users, so ideally the Tag will be able to help users > identify > the associated environment. > > Python environments are not required to register themselves unless they want > to > be automatically discoverable by external tools. > > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds > in > ``sys.winver``. As a result, it is not possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters under this scheme since > it > would result in duplicate Tags. > > To ensure backwards compatibility, applications should treat environments > listed > under the following two registry keys as distinct, even when the Tag > matches:: > > HKEY_LOCAL_MACHINE\Software\Python\PythonCore\ > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\ > > Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct > from > both of the above keys, potentially resulting in three environments > discovered > using the same Tag. Alternatively, a tool may determine whether the per-user > environment is 64-bit or 32-bit and give it priority over the per-machine > environment, resulting in a maximum of two discovered environments. > > It is not possible to detect side-by-side installations of both 64-bit and > 32-bit versions of Python prior to 3.5 when they have been installed for the > current user. Python 3.5 and later always uses different Tags for 64-bit and > 32-bit versions. 
> > The following section describe user-visible information that may be > registered. > For Python 3.5 and earlier, none of this information is available, but > alternative defaults are specified for the ``PythonCore`` key. > > Environments registered under other Company names have no backward > compatibility > requirements and must use distinct Tags to support side-by-side > installations. > Tools consuming these registrations are not required to disambiguate tags > other > than by preferring the user's setting. > > Company > ------- > > The Company part of the key is intended to group related environments and to > ensure that Tags are namespaced appropriately. The key name should be > alphanumeric without spaces and likely to be unique. For example, a > trademarked > name (preferred), a hostname, or as a last resort, a UUID would be > appropriate:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > HKEY_CURRENT_USER\Software\Python\www.example.com > HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 > > The company name ``PyLauncher`` is reserved for the PEP 397 launcher > (``py.exe``). It does not follow this convention and should be ignored by > tools. > > If a string value named ``DisplayName`` exists, it should be used to > identify > the environment manufacturer/developer/destributor to users. Otherwise, the > name > of the key should be used. (For ``PythonCore``, the default display name is > "Python Software Foundation".) > > If a string value named ``SupportUrl`` exists, it may be displayed or > otherwise > used to direct users to a web site related to the environment. (For > ``PythonCore``, the default support URL is "http://www.python.org/".) 
> > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > (Default) = (value not set) > DisplayName = "Example Corp" > SupportUrl = "http://www.example.com" > > Tag > --- > > The Tag part of the key is intended to uniquely identify an environment > within > those provided by a single company. The key name should be alphanumeric > without > spaces and stable across installations. For example, the Python language > version, a UUID or a partial/complete hash would be appropriate, while a Tag > based on the install directory or some aspect of the current machine may > not. > For example:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 > > It is expected that some tools will require users to type the Tag into a > command > line, and that the Company may be optional provided the Tag is unique across > all > Python installations. Short, human-readable and easy to type Tags are > recommended, and if possible, select a value likely to be unique across all > other Companies. > > If a string value named ``DisplayName`` exists, it should be used to > identify > the environment to users. Otherwise, the name of the key should be used. > (For > ``PythonCore``, the default is "Python " followed by the Tag.) > > If a string value named ``SupportUrl`` exists, it may be displayed or > otherwise > used to direct users to a web site related to the environment. (For > ``PythonCore``, the default is "http://www.python.org/".) > > If a string value named ``Version`` exists, it should be used to identify > the > version of the environment. This is independent from the version of Python > implemented by the environment. (For ``PythonCore``, the default is the > first > three characters of the Tag.) 
> > If a string value named ``SysVersion`` exists, it must be in ``x.y`` or > ``x.y.z`` format matching the version returned by ``sys.version_info`` in > the > interpreter. If omitted, the Python version is unknown. (For ``PythonCore``, > the default is the first three characters of the Tag.) > > If a string value named ``SysArchitecture`` exists, it must match the first > element of the tuple returned by ``platform.architecture()``. Typically, > this > will be "32bit" or "64bit". If omitted, the architecture is unknown. (For > ``PythonCore``, the architecture is "32bit" when registered under > ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` *or* anywhere on a 32-bit > operating system, "64bit" when registered under > ``HKEY_LOCAL_MACHINE\Software\Python`` on a 64-bit machine, and unknown when > registered under ``HKEY_CURRENT_USER``.) > > Note that each of these values is recommended, but optional. Omitting > ``SysVersion`` or ``SysArchitecture`` may prevent some tools from correctly > supporting the environment. A complete example may look like this:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy > (Default) = (value not set) > DisplayName = "Example Py Distro 3" > SupportUrl = "http://www.example.com/distro-3" > Version = "3.0.12345.0" > SysVersion = "3.6.0" > SysArchitecture = "64bit" > > InstallPath > ----------- > > Beneath the environment key, an ``InstallPath`` key must be created. This > key is > always named ``InstallPath``, and the default value must match > ``sys.prefix``:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath > (Default) = "C:\ExampleCorpPy36" > > If a string value named ``ExecutablePath`` exists, it must be the full path > to > the ``python.exe`` (or equivalent) executable. If omitted, the environment > is > not executable. (For ``PythonCore``, the default is the ``python.exe`` file > in > the directory referenced by the ``(Default)`` value.) 
> > If a string value named ``ExecutableArguments`` exists, tools should use the > value as the first arguments when executing ``ExecutablePath``. Tools may > add > other arguments following these, and will reasonably expect standard Python > command line options to be available. > > If a string value named ``WindowedExecutablePath`` exists, it must be a path > to > the ``pythonw.exe`` (or equivalent) executable. If omitted, the default is > the > value of ``ExecutablePath``, and if that is omitted the environment is not > executable. (For ``PythonCore``, the default is the ``pythonw.exe`` file in > the > directory referenced by the ``(Default)`` value.) > > If a string value named ``WindowedExecutableArguments`` exists, tools should > use > the value as the first arguments when executing ``WindowedExecutablePath``. > Tools may add other arguments following these, and will reasonably expect > standard Python command line options to be available. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy\InstallPath > (Default) = "C:\ExampleDistro30" > ExecutablePath = "C:\ExampleDistro30\ex_python.exe" > ExecutableArguments = "--arg1" > WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" > WindowedExecutableArguments = "--arg1" > > Help > ---- > > Beneath the environment key, a ``Help`` key may be created. This key is > always > named ``Help`` if present and has no default value. > > Each subkey of ``Help`` specifies a documentation file, tool, or URL > associated > with the environment. The subkey may have any name, and the default value is > a > string appropriate for passing to ``os.startfile`` or equivalent. > > If a string value named ``DisplayName`` exists, it should be used to > identify > the help file to users. Otherwise, the key name should be used. 
> > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help > Python\ > (Default) = "C:\ExampleDistro30\python36.chm" > DisplayName = "Python Documentation" > Extras\ > (Default) = "http://www.example.com/tutorial" > DisplayName = "Example Distro Online Tutorial" > > Other Keys > ---------- > > All other subkeys under a Company-Tag pair are available for private use. > > Official CPython releases have traditionally used certain keys in this space > to > determine the location of the Python standard library and other installed > modules. This behaviour is retained primarily for backward compatibility. > However, as the code that reads these values is embedded into the > interpreter, > third-party distributions may be affected by values written into > ``PythonCore`` > if using an unmodified interpreter. > > Sample Code > =========== > > This sample code enumerates the registry and displays the available > Company-Tag > pairs that could be used to launch an environment and the target executable. > It > only shows the most-preferred target for the tag. Backwards-compatible > handling > of ``PythonCore`` is omitted but shown in a later example:: > > # Display most-preferred environments. 
> # Assumes a 64-bit operating system > # Does not correctly handle PythonCore compatibility > > import winreg > > def enum_keys(key): > i = 0 > while True: > try: > yield winreg.EnumKey(key, i) > except OSError: > break > i += 1 > > def get_value(key, value_name): > try: > return winreg.QueryValue(key, value_name) > except FileNotFoundError: > return None > > seen = set() > for hive, key, flags in [ > (winreg.HKEY_CURRENT_USER, r'Software\Python', 0), > (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', > winreg.KEY_WOW64_64KEY), > (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', > winreg.KEY_WOW64_32KEY), > ]: > with winreg.OpenKeyEx(hive, key, access=winreg.KEY_READ | flags) as > root_key: > for comany in enum_keys(root_key): > if company == 'PyLauncher': > continue > > with winreg.OpenKey(root_key, company) as company_key: > for tag in enum_keys(company_key): > if (company, tag) in seen: > if company == 'PythonCore': > # TODO: Backwards compatibility handling > pass > continue > seen.add((company, tag)) > > try: > with winreg.OpenKey(company_key, tag + > r'\InstallPath') as ip_key: > exec_path = get_value(ip_key, > 'ExecutablePath') > exec_args = get_value(ip_key, > 'ExecutableArguments') > if company == 'PythonCore' and not > exec_path: > # TODO: Backwards compatibility handling > pass > except OSError: > exec_path, exec_args = None, None > > if exec_path: > print('{}\\{} - {} {}'.format(company, tag, > exec_path, exec_args or '')) > else: > print('{}\\{} - (not > executable)'.format(company, tag)) > > This example only scans ``PythonCore`` entries for the current user. Where > data > is missing, the defaults as described earlier in the PEP are substituted. 
> Note that these defaults are only for use under ``PythonCore``; other
> registrations do not have any default values::
>
>    # Only lists per-user PythonCore registrations
>    # Uses fallback values as described in PEP 514
>
>    import os
>    import winreg
>
>    def enum_keys(key):
>        i = 0
>        while True:
>            try:
>                yield winreg.EnumKey(key, i)
>            except OSError:
>                break
>            i += 1
>
>    def get_value(key, value_name):
>        try:
>            # Named values (and the default value, when value_name is
>            # None) are read with QueryValueEx.
>            return winreg.QueryValueEx(key, value_name)[0]
>        except FileNotFoundError:
>            return None
>
>    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
>                        r"Software\Python\PythonCore") as company_key:
>        print('Company:', get_value(company_key, 'DisplayName') or 'Python Software Foundation')
>        print('Support:', get_value(company_key, 'SupportUrl') or 'http://www.python.org/')
>        print()
>
>        for tag in enum_keys(company_key):
>            with winreg.OpenKey(company_key, tag) as tag_key:
>                print('PythonCore\\' + tag)
>                print('Name:', get_value(tag_key, 'DisplayName') or ('Python ' + tag))
>                print('Support:', get_value(tag_key, 'SupportUrl') or 'http://www.python.org/')
>                print('Version:', get_value(tag_key, 'Version') or tag[:3])
>                print('SysVersion:', get_value(tag_key, 'SysVersion') or tag[:3])
>                # Architecture is unknown because we are in HKCU.
>                # Tools may use alternate approaches to determine
>                # architecture when the registration does not specify it.
>                print('SysArchitecture:', get_value(tag_key, 'SysArchitecture') or '(unknown)')
>
>            try:
>                ip_key = winreg.OpenKey(company_key, tag + '\\InstallPath')
>            except FileNotFoundError:
>                pass
>            else:
>                with ip_key:
>                    ip = get_value(ip_key, None)
>                    exe = get_value(ip_key, 'ExecutablePath') or os.path.join(ip, 'python.exe')
>                    exew = get_value(ip_key, 'WindowedExecutablePath') or os.path.join(ip, 'pythonw.exe')
>                    print('InstallPath:', ip)
>                    print('ExecutablePath:', exe)
>                    print('WindowedExecutablePath:', exew)
>            print()
>
> This example shows a subset of the registration that will be created by a
> just-for-me install of 64-bit Python 3.6.0. Other keys may also be
> created::
>
>    HKEY_CURRENT_USER\Software\Python\PythonCore
>       (Default) = (value not set)
>       DisplayName = "Python Software Foundation"
>       SupportUrl = "http://www.python.org/"
>
>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6
>       (Default) = (value not set)
>       DisplayName = "Python 3.6 (64-bit)"
>       SupportUrl = "http://www.python.org/"
>       Version = "3.6.0"
>       SysVersion = "3.6"
>       SysArchitecture = "64bit"
>
>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\Help\Main Python Documentation
>       (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\Doc\python360.chm"
>       DisplayName = "Python 3.6.0 Documentation"
>
>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\InstallPath
>       (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\"
>       ExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\python.exe"
>       WindowedExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\pythonw.exe"
>
> References
> ==========
>
> .. [1] Registry Redirector (Windows)
>    (https://msdn.microsoft.com/en-us/library/windows/desktop/aa384232.aspx)
>
> Copyright
> =========
>
> This document has been placed in the public domain.
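As an aside to the quoted sample code: the de-duplication it performs follows from the PEP's precedence rules (``HKEY_CURRENT_USER`` first, then the 64-bit and then the 32-bit view of ``HKEY_LOCAL_MACHINE``). The registry-free sketch below is not part of the PEP; the ``most_preferred`` helper and its tuple format are hypothetical, and are shown only to illustrate the first-seen-wins resolution of case-insensitive Company-Tag pairs.

```python
# Hypothetical sketch (not from PEP 514 itself): resolve Company-Tag
# conflicts by keeping the first registration seen, assuming entries
# arrive in the PEP's recommended scan order: HKCU first, then the
# 64-bit HKLM registry view, then the 32-bit HKLM view.
def most_preferred(entries):
    seen = set()
    result = []
    for hive, company, tag in entries:
        ident = (company.lower(), tag.lower())  # pairs are case-insensitive
        if ident in seen:
            continue  # an earlier, higher-priority registration wins
        seen.add(ident)
        result.append((hive, company, tag))
    return result

scan_order = [
    ('HKCU', 'ExampleCorp', '3.6'),
    ('HKLM-64', 'PythonCore', '3.6'),
    ('HKLM-64', 'examplecorp', '3.6'),   # shadowed by the HKCU entry
    ('HKLM-32', 'PythonCore', '3.6-32'),
]
for hive, company, tag in most_preferred(scan_order):
    print('{}\\{} ({})'.format(company, tag, hive))
```

A real tool would build ``entries`` from the three registry hives exactly as the PEP's first sample does, then apply this kind of resolution before presenting environments to the user.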
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)

From steve.dower at python.org  Sat Jul 23 23:51:09 2016
From: steve.dower at python.org (Steve Dower)
Date: Sat, 23 Jul 2016 20:51:09 -0700
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To:
References: <5793C27F.6060300@python.org>
Message-ID: <57943B2D.4050205@python.org>

On 23Jul2016 1320, Guido van Rossum wrote:
> I'll let Paul pronounce. But you should probably have a BDFL-Delegate:
> ... header.

Yeah, my headers are a bit outdated... I'm not even sure the $Revision$
and $Date$ variables are going to be substituted anymore (unless it's a
pep2html thing rather than a VCS thing?). I'll definitely update the
BDFL-Delegate, Post-History, Status and Resolution tags for the final PR.

Thanks,
Steve

From lkb.teichmann at gmail.com  Sun Jul 24 03:31:38 2016
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sun, 24 Jul 2016 09:31:38 +0200
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To:
References:
Message-ID:

Hi list, Hi Nick,

Sorry for my delayed response, it is summer here...

> However, phrasing it that way suggests that it's possible we *did* miss
> something in the PEP: we haven't specified whether or not __set_name__
> should be called when someone does "cls.attr = descr".
> Given the name, I think we *should* call it in that case, and then the
> semantics during class creation are approximately what would happen if
> we actually built up the class attributes as:
>
>     for attr, value in cls_ns.items():
>         setattr(cls, attr, value)

That's a very good point and actually easy to solve: we would just need
to override type.__setattr__ to do so.
Actually, it is already overridden, so we just need to add code to
type.__setattr__ to also call __set_name__.

One could take the opposite standpoint: that in your example above it is
the duty of the caller of setattr to also call __set_name__. It would be
pretty easy to add a line in the loop that also calls __set_name__.

Greetings

Martin

From p.f.moore at gmail.com  Sun Jul 24 03:45:00 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 24 Jul 2016 08:45:00 +0100
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To:
References: <5793C27F.6060300@python.org>
Message-ID:

This PEP is now accepted. Congratulations, Steve! And thanks for putting
up with all of my last-minute questions :-)

Paul

On 23 July 2016 at 21:20, Guido van Rossum wrote:
> I'll let Paul pronounce. But you should probably have a BDFL-Delegate:
> ... header.
>
> On Sat, Jul 23, 2016 at 12:16 PM, Steve Dower wrote:
>> PEP 514 is now ready for pronouncement, so this is the last chance for
>> any feedback (BDFL-delegate Paul has been active on the github PR, so I
>> don't expect he has a lot of feedback left).
>>
>> The biggest change from the previous post is the addition of some code
>> examples at the end. Honestly, I don't expect many tools written in
>> Python to be scanning the registry (since once you're in Python you
>> probably don't need to find it), but hopefully they'll help clarify the
>> PEP for people who prefer code.
>>
>> Full text below.
>> >> Cheers, >> Steve >> >> ---------- >> >> PEP: 514 >> Title: Python registration in the Windows registry >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Steve Dower >> Status: Draft >> Type: Informational >> Content-Type: text/x-rst >> Created: 02-Feb-2016 >> Post-History: 02-Feb-2016, 01-Mar-2016, 18-Jul-2016 >> >> Abstract >> ======== >> >> This PEP defines a schema for the Python registry key to allow third-party >> installers to register their installation, and to allow tools and >> applications >> to detect and correctly display all Python environments on a user's machine. >> No >> implementation changes to Python are proposed with this PEP. >> >> Python environments are not required to be registered unless they want to be >> automatically discoverable by external tools. As this relates to Windows >> only, >> these tools are expected to be predominantly GUI applications. However, >> console >> applications may also make use of the registered information. This PEP >> covers >> the information that may be made available, but the actual presentation and >> use >> of this information is left to the tool designers. >> >> The schema matches the registry values that have been used by the official >> installer since at least Python 2.5, and the resolution behaviour matches >> the >> behaviour of the official Python releases. Some backwards compatibility >> rules >> are provided to ensure tools can correctly detect versions of CPython that >> do >> not register full information. >> >> Motivation >> ========== >> >> When installed on Windows, the official Python installer creates a registry >> key >> for discovery and detection by other applications. This allows tools such as >> installers or IDEs to automatically detect and display a user's Python >> installations. For example, the PEP 397 ``py.exe`` launcher and editors such >> as >> PyCharm and Visual Studio already make use of this information. 
>> >> Third-party installers, such as those used by distributions, typically >> create >> identical keys for the same purpose. Most tools that use the registry to >> detect >> Python installations only inspect the keys used by the official installer. >> As a >> result, third-party installations that wish to be discoverable will >> overwrite >> these values, often causing users to "lose" their original Python >> installation. >> >> By describing a layout for registry keys that allows third-party >> installations >> to register themselves uniquely, as well as providing tool developers >> guidance >> for discovering all available Python installations, these collisions should >> be >> prevented. We also take the opportunity to add some well-known metadata so >> that >> more information can be presented to users. >> >> Definitions >> =========== >> >> A "registry key" is the equivalent of a file-system path into the registry. >> Each >> key may contain "subkeys" (keys nested within keys) and "values" (named and >> typed attributes attached to a key). These are used on Windows to store >> settings >> in much the same way that directories containing configuration files would >> work. >> >> ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in >> user, >> and this user can generally read and write all settings under this root. >> >> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any >> user can read these settings but only administrators can modify them. It is >> typical for values under ``HKEY_CURRENT_USER`` to take precedence over those >> in >> ``HKEY_LOCAL_MACHINE``. >> >> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special >> key >> that 32-bit processes transparently read and write to rather than accessing >> the >> ``Software`` key directly. >> >> Further documentation regarding registry redirection on Windows is available >> from the MSDN Library [1]_. 
>>
>> Structure
>> =========
>>
>> We consider there to be a single collection of Python environments on a
>> machine, where the collection may be different for each user of the
>> machine. There are three potential registry locations where the
>> collection may be stored based on the installation options of each
>> environment::
>>
>>    HKEY_CURRENT_USER\Software\Python\<Company>\<Tag>
>>    HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag>
>>    HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag>
>>
>> Official Python releases use ``PythonCore`` for Company, and the value
>> of ``sys.winver`` for Tag. The Company ``PyLauncher`` is reserved. Other
>> registered environments may use any values for Company and Tag.
>> Recommendations are made later in this document.
>>
>> Company-Tag pairs are case-insensitive, and uniquely identify each
>> environment. Depending on the purpose and intended use of a tool, there
>> are two suggested approaches for resolving conflicts between Company-Tag
>> pairs.
>>
>> Tools that list every installed environment may choose to include those
>> even where the Company-Tag pairs match. They should ensure users can
>> easily identify whether the registration was per-user or per-machine,
>> and which registration has the higher priority.
>>
>> Tools that aim to select a single installed environment from all
>> registered environments based on the Company-Tag pair, such as the
>> ``py.exe`` launcher, should always select the environment registered in
>> ``HKEY_CURRENT_USER`` in preference to the matching one in
>> ``HKEY_LOCAL_MACHINE``.
>>
>> Conflicts between ``HKEY_LOCAL_MACHINE\Software\Python`` and
>> ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` should only occur
>> when both 64-bit and 32-bit versions of an interpreter have the same
>> Tag. In this case, the tool should select whichever is more appropriate
>> for its use.
>>
>> If a tool is able to determine from the provided information (or lack
>> thereof) that it cannot use a registered environment, there is no
>> obligation to present it to users.
>>
>> Except as discussed in the section on backwards compatibility, Company
>> and Tag values are considered opaque to tools, and no information about
>> the interpreter should be inferred from the text. However, some tools
>> may display the Company and Tag values to users, so ideally the Tag will
>> be able to help users identify the associated environment.
>>
>> Python environments are not required to register themselves unless they
>> want to be automatically discoverable by external tools.
>>
>> Backwards Compatibility
>> -----------------------
>>
>> Python 3.4 and earlier did not distinguish between 32-bit and 64-bit
>> builds in ``sys.winver``. As a result, it is not possible to have valid
>> side-by-side installations of both 32-bit and 64-bit interpreters under
>> this scheme since it would result in duplicate Tags.
>>
>> To ensure backwards compatibility, applications should treat
>> environments listed under the following two registry keys as distinct,
>> even when the Tag matches::
>>
>>    HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag>
>>    HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag>
>>
>> Environments listed under ``HKEY_CURRENT_USER`` may be treated as
>> distinct from both of the above keys, potentially resulting in three
>> environments discovered using the same Tag. Alternatively, a tool may
>> determine whether the per-user environment is 64-bit or 32-bit and give
>> it priority over the per-machine environment, resulting in a maximum of
>> two discovered environments.
>>
>> It is not possible to detect side-by-side installations of both 64-bit
>> and 32-bit versions of Python prior to 3.5 when they have been installed
>> for the current user.
>> Python 3.5 and later always uses different Tags for 64-bit and 32-bit
>> versions.
>>
>> The following sections describe user-visible information that may be
>> registered. For Python 3.5 and earlier, none of this information is
>> available, but alternative defaults are specified for the
>> ``PythonCore`` key.
>>
>> Environments registered under other Company names have no backward
>> compatibility requirements and must use distinct Tags to support
>> side-by-side installations. Tools consuming these registrations are not
>> required to disambiguate tags other than by preferring the user's
>> setting.
>>
>> Company
>> -------
>>
>> The Company part of the key is intended to group related environments
>> and to ensure that Tags are namespaced appropriately. The key name
>> should be alphanumeric without spaces and likely to be unique. For
>> example, a trademarked name (preferred), a hostname, or as a last
>> resort, a UUID would be appropriate::
>>
>>    HKEY_CURRENT_USER\Software\Python\ExampleCorp
>>    HKEY_CURRENT_USER\Software\Python\www.example.com
>>    HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60
>>
>> The company name ``PyLauncher`` is reserved for the PEP 397 launcher
>> (``py.exe``). It does not follow this convention and should be ignored
>> by tools.
>>
>> If a string value named ``DisplayName`` exists, it should be used to
>> identify the environment manufacturer/developer/distributor to users.
>> Otherwise, the name of the key should be used. (For ``PythonCore``, the
>> default display name is "Python Software Foundation".)
>>
>> If a string value named ``SupportUrl`` exists, it may be displayed or
>> otherwise used to direct users to a web site related to the
>> environment. (For ``PythonCore``, the default support URL is
>> "http://www.python.org/".)
>> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> (Default) = (value not set) >> DisplayName = "Example Corp" >> SupportUrl = "http://www.example.com" >> >> Tag >> --- >> >> The Tag part of the key is intended to uniquely identify an environment >> within >> those provided by a single company. The key name should be alphanumeric >> without >> spaces and stable across installations. For example, the Python language >> version, a UUID or a partial/complete hash would be appropriate, while a Tag >> based on the install directory or some aspect of the current machine may >> not. >> For example:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 >> >> It is expected that some tools will require users to type the Tag into a >> command >> line, and that the Company may be optional provided the Tag is unique across >> all >> Python installations. Short, human-readable and easy to type Tags are >> recommended, and if possible, select a value likely to be unique across all >> other Companies. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the environment to users. Otherwise, the name of the key should be used. >> (For >> ``PythonCore``, the default is "Python " followed by the Tag.) >> >> If a string value named ``SupportUrl`` exists, it may be displayed or >> otherwise >> used to direct users to a web site related to the environment. (For >> ``PythonCore``, the default is "http://www.python.org/".) >> >> If a string value named ``Version`` exists, it should be used to identify >> the >> version of the environment. This is independent from the version of Python >> implemented by the environment. (For ``PythonCore``, the default is the >> first >> three characters of the Tag.) 
>> >> If a string value named ``SysVersion`` exists, it must be in ``x.y`` or >> ``x.y.z`` format matching the version returned by ``sys.version_info`` in >> the >> interpreter. If omitted, the Python version is unknown. (For ``PythonCore``, >> the default is the first three characters of the Tag.) >> >> If a string value named ``SysArchitecture`` exists, it must match the first >> element of the tuple returned by ``platform.architecture()``. Typically, >> this >> will be "32bit" or "64bit". If omitted, the architecture is unknown. (For >> ``PythonCore``, the architecture is "32bit" when registered under >> ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` *or* anywhere on a 32-bit >> operating system, "64bit" when registered under >> ``HKEY_LOCAL_MACHINE\Software\Python`` on a 64-bit machine, and unknown when >> registered under ``HKEY_CURRENT_USER``.) >> >> Note that each of these values is recommended, but optional. Omitting >> ``SysVersion`` or ``SysArchitecture`` may prevent some tools from correctly >> supporting the environment. A complete example may look like this:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy >> (Default) = (value not set) >> DisplayName = "Example Py Distro 3" >> SupportUrl = "http://www.example.com/distro-3" >> Version = "3.0.12345.0" >> SysVersion = "3.6.0" >> SysArchitecture = "64bit" >> >> InstallPath >> ----------- >> >> Beneath the environment key, an ``InstallPath`` key must be created. This >> key is >> always named ``InstallPath``, and the default value must match >> ``sys.prefix``:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath >> (Default) = "C:\ExampleCorpPy36" >> >> If a string value named ``ExecutablePath`` exists, it must be the full path >> to >> the ``python.exe`` (or equivalent) executable. If omitted, the environment >> is >> not executable. (For ``PythonCore``, the default is the ``python.exe`` file >> in >> the directory referenced by the ``(Default)`` value.) 
>> >> If a string value named ``ExecutableArguments`` exists, tools should use the >> value as the first arguments when executing ``ExecutablePath``. Tools may >> add >> other arguments following these, and will reasonably expect standard Python >> command line options to be available. >> >> If a string value named ``WindowedExecutablePath`` exists, it must be a path >> to >> the ``pythonw.exe`` (or equivalent) executable. If omitted, the default is >> the >> value of ``ExecutablePath``, and if that is omitted the environment is not >> executable. (For ``PythonCore``, the default is the ``pythonw.exe`` file in >> the >> directory referenced by the ``(Default)`` value.) >> >> If a string value named ``WindowedExecutableArguments`` exists, tools should >> use >> the value as the first arguments when executing ``WindowedExecutablePath``. >> Tools may add other arguments following these, and will reasonably expect >> standard Python command line options to be available. >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy\InstallPath >> (Default) = "C:\ExampleDistro30" >> ExecutablePath = "C:\ExampleDistro30\ex_python.exe" >> ExecutableArguments = "--arg1" >> WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" >> WindowedExecutableArguments = "--arg1" >> >> Help >> ---- >> >> Beneath the environment key, a ``Help`` key may be created. This key is >> always >> named ``Help`` if present and has no default value. >> >> Each subkey of ``Help`` specifies a documentation file, tool, or URL >> associated >> with the environment. The subkey may have any name, and the default value is >> a >> string appropriate for passing to ``os.startfile`` or equivalent. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the help file to users. Otherwise, the key name should be used. 
>>
>> A complete example may look like::
>>
>>    HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help
>>       Python\
>>          (Default) = "C:\ExampleDistro30\python36.chm"
>>          DisplayName = "Python Documentation"
>>       Extras\
>>          (Default) = "http://www.example.com/tutorial"
>>          DisplayName = "Example Distro Online Tutorial"
>>
>> Other Keys
>> ----------
>>
>> All other subkeys under a Company-Tag pair are available for private use.
>>
>> Official CPython releases have traditionally used certain keys in this
>> space to determine the location of the Python standard library and other
>> installed modules. This behaviour is retained primarily for backward
>> compatibility. However, as the code that reads these values is embedded
>> into the interpreter, third-party distributions may be affected by
>> values written into ``PythonCore`` if using an unmodified interpreter.
>>
>> Sample Code
>> ===========
>>
>> This sample code enumerates the registry and displays the available
>> Company-Tag pairs that could be used to launch an environment and the
>> target executable. It only shows the most-preferred target for the tag.
>> Backwards-compatible handling of ``PythonCore`` is omitted but shown in
>> a later example::
>>
>>    # Display most-preferred environments.
>>    # Assumes a 64-bit operating system
>>    # Does not correctly handle PythonCore compatibility
>>
>>    import winreg
>>
>>    def enum_keys(key):
>>        i = 0
>>        while True:
>>            try:
>>                yield winreg.EnumKey(key, i)
>>            except OSError:
>>                break
>>            i += 1
>>
>>    def get_value(key, value_name):
>>        try:
>>            # Named values must be read with QueryValueEx; QueryValue
>>            # would read the default value of a *subkey* instead.
>>            return winreg.QueryValueEx(key, value_name)[0]
>>        except FileNotFoundError:
>>            return None
>>
>>    seen = set()
>>    for hive, key, flags in [
>>        (winreg.HKEY_CURRENT_USER, r'Software\Python', 0),
>>        (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', winreg.KEY_WOW64_64KEY),
>>        (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', winreg.KEY_WOW64_32KEY),
>>    ]:
>>        with winreg.OpenKeyEx(hive, key, access=winreg.KEY_READ | flags) as root_key:
>>            for company in enum_keys(root_key):
>>                if company == 'PyLauncher':
>>                    continue
>>
>>                with winreg.OpenKey(root_key, company) as company_key:
>>                    for tag in enum_keys(company_key):
>>                        if (company, tag) in seen:
>>                            if company == 'PythonCore':
>>                                # TODO: Backwards compatibility handling
>>                                pass
>>                            continue
>>                        seen.add((company, tag))
>>
>>                        try:
>>                            with winreg.OpenKey(company_key, tag + r'\InstallPath') as ip_key:
>>                                exec_path = get_value(ip_key, 'ExecutablePath')
>>                                exec_args = get_value(ip_key, 'ExecutableArguments')
>>                                if company == 'PythonCore' and not exec_path:
>>                                    # TODO: Backwards compatibility handling
>>                                    pass
>>                        except OSError:
>>                            exec_path, exec_args = None, None
>>
>>                        if exec_path:
>>                            print('{}\\{} - {} {}'.format(company, tag, exec_path, exec_args or ''))
>>                        else:
>>                            print('{}\\{} - (not executable)'.format(company, tag))
>>
>> This example only scans ``PythonCore`` entries for the current user.
>> Where data is missing, the defaults as described earlier in the PEP are
>> substituted.
>> Note that these defaults are only for use under ``PythonCore``; other
>> registrations do not have any default values::
>>
>>    # Only lists per-user PythonCore registrations
>>    # Uses fallback values as described in PEP 514
>>
>>    import os
>>    import winreg
>>
>>    def enum_keys(key):
>>        i = 0
>>        while True:
>>            try:
>>                yield winreg.EnumKey(key, i)
>>            except OSError:
>>                break
>>            i += 1
>>
>>    def get_value(key, value_name):
>>        try:
>>            # Named values (and the default value, when value_name is
>>            # None) are read with QueryValueEx.
>>            return winreg.QueryValueEx(key, value_name)[0]
>>        except FileNotFoundError:
>>            return None
>>
>>    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
>>                        r"Software\Python\PythonCore") as company_key:
>>        print('Company:', get_value(company_key, 'DisplayName') or 'Python Software Foundation')
>>        print('Support:', get_value(company_key, 'SupportUrl') or 'http://www.python.org/')
>>        print()
>>
>>        for tag in enum_keys(company_key):
>>            with winreg.OpenKey(company_key, tag) as tag_key:
>>                print('PythonCore\\' + tag)
>>                print('Name:', get_value(tag_key, 'DisplayName') or ('Python ' + tag))
>>                print('Support:', get_value(tag_key, 'SupportUrl') or 'http://www.python.org/')
>>                print('Version:', get_value(tag_key, 'Version') or tag[:3])
>>                print('SysVersion:', get_value(tag_key, 'SysVersion') or tag[:3])
>>                # Architecture is unknown because we are in HKCU.
>>                # Tools may use alternate approaches to determine
>>                # architecture when the registration does not specify it.
>>                print('SysArchitecture:', get_value(tag_key, 'SysArchitecture') or '(unknown)')
>>
>>            try:
>>                ip_key = winreg.OpenKey(company_key, tag + '\\InstallPath')
>>            except FileNotFoundError:
>>                pass
>>            else:
>>                with ip_key:
>>                    ip = get_value(ip_key, None)
>>                    exe = get_value(ip_key, 'ExecutablePath') or os.path.join(ip, 'python.exe')
>>                    exew = get_value(ip_key, 'WindowedExecutablePath') or os.path.join(ip, 'pythonw.exe')
>>                    print('InstallPath:', ip)
>>                    print('ExecutablePath:', exe)
>>                    print('WindowedExecutablePath:', exew)
>>            print()
>>
>> This example shows a subset of the registration that will be created by
>> a just-for-me install of 64-bit Python 3.6.0. Other keys may also be
>> created::
>>
>>    HKEY_CURRENT_USER\Software\Python\PythonCore
>>       (Default) = (value not set)
>>       DisplayName = "Python Software Foundation"
>>       SupportUrl = "http://www.python.org/"
>>
>>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6
>>       (Default) = (value not set)
>>       DisplayName = "Python 3.6 (64-bit)"
>>       SupportUrl = "http://www.python.org/"
>>       Version = "3.6.0"
>>       SysVersion = "3.6"
>>       SysArchitecture = "64bit"
>>
>>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\Help\Main Python Documentation
>>       (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\Doc\python360.chm"
>>       DisplayName = "Python 3.6.0 Documentation"
>>
>>    HKEY_CURRENT_USER\Software\Python\PythonCore\3.6\InstallPath
>>       (Default) = "C:\Users\Me\AppData\Local\Programs\Python\Python36\"
>>       ExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\python.exe"
>>       WindowedExecutablePath = "C:\Users\Me\AppData\Local\Programs\Python\Python36\pythonw.exe"
>>
>> References
>> ==========
>>
>> .. [1] Registry Redirector (Windows)
>>    (https://msdn.microsoft.com/en-us/library/windows/desktop/aa384232.aspx)
>>
>> Copyright
>> =========
>>
>> This document has been placed in the public domain.
>
> --
> --Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Sun Jul 24 06:37:17 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 24 Jul 2016 20:37:17 +1000
Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses
In-Reply-To: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com>
References: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com>
Message-ID:

On 23 July 2016 at 22:26, Ronald Oussoren wrote:
> I'm currently working on getting the patch in 18181 up-to-date w.r.t. the
> current trunk, the patch in the issue no longer applies cleanly. After that
> I'll try to think up some tests that seriously try to break the new
> behaviour, and I want to update a patch I have for PyObjC to make use of the
> new functionality to make sure that the PEP actually fixes the issues I had
> w.r.t. builtin.super's behavior.

You may also want to check compatibility with Martin's patch for PEP
487 (__init_subclass__ and __set_name__) at
http://bugs.python.org/issue27366

I don't *think* it will conflict, but "try it and see what happens" is
generally a better idea for the descriptor machinery than assuming
changes are going to be non-conflicting :)

> What is the best way forward after that? As before this is a change in
> behavior that, unsurprisingly, few core devs appear to be comfortable with
> evaluating, combined with new functionality that will likely see little use
> beyond PyObjC.

You may want to explicitly ping the
https://github.com/ipython/traitlets developers to see if this change
would let them do anything they currently find impractical or
impossible.
As far as Mark's concern about a non-terminating method definition goes,
I do think you need to double check how the semantics of
object.__getattribute__ are formally defined.

>>> class Meta(type):
...     def __getattribute__(self, attr):
...         print("Via metaclass!")
...         return super().__getattribute__(attr)
...
>>> class Example(metaclass=Meta): pass
...
>>> Example.mro()
Via metaclass!
[<class '__main__.Example'>, <class 'object'>]

Where the current PEP risks falling into unbounded recursion is that it
appears to propose that the default type.__getdescriptor__
implementation be defined in terms of accessing cls.__dict__, but a
normal Python level access to "cls.__dict__" would go through the
descriptor machinery, triggering an infinite regress.

The PEP needs to be explicit that where "cls.__dict__" is written in
the definitions of both the old and new lookup semantics, it is *not*
referring to a normal class attribute lookup, but rather to the
interpreter's privileged access to the class namespace (e.g. direct
'tp_dict' access in CPython).

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ronaldoussoren at mac.com  Sun Jul 24 07:06:45 2016
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sun, 24 Jul 2016 13:06:45 +0200
Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses
In-Reply-To:
References: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com>
Message-ID: <6F326944-9BF0-4E38-B487-79BC0ADF17B3@mac.com>

> On 24 Jul 2016, at 12:37, Nick Coghlan wrote:
>
> On 23 July 2016 at 22:26, Ronald Oussoren wrote:
>> I'm currently working on getting the patch in 18181 up-to-date w.r.t. the
>> current trunk, the patch in the issue no longer applies cleanly. After that
>> I'll try to think up some tests that seriously try to break the new
>> behaviour, and I want to update a patch I have for PyObjC to make use of the
>> new functionality to make sure that the PEP actually fixes the issues I had
>> w.r.t. builtin.super's behavior.
> > You may also want to check compatibility with Martin's patch for PEP > 487 (__init_subclass__ and __set_name__) at > http://bugs.python.org/issue27366 > > I don't *think* it will conflict, but "try it and see what happens" is > generally a better idea for the descriptor machinery than assuming > changes are going to be non-conflicting :) I also don't think the two will conflict, but that's based on a superficial read of that PEP the last time it was posted on python-dev. PEP 487 and 447 affect different parts of the object model, in particular PEP 487 doesn't affect attribute lookup. > >> What is the best way forward after that? As before this is a change in >> behavior that, unsurprisingly, few core devs appear to be comfortable with >> evaluating, combined with new functionality that will likely see little use >> beyond PyObjC. > > You may want to explicitly ping the > https://github.com/ipython/traitlets developers to see if this change > would let them do anything they currently find impractical or > impossible. I'll ask them. > > As far as Mark's concern about a non-terminating method definition > goes, I do think you need to double check how the semantics of > object.__getattribute__ are formally defined. > >>>> class Meta(type): > ... def __getattribute__(self, attr): > ... print("Via metaclass!") > ... return super().__getattribute__(attr) > ... >>>> class Example(metaclass=Meta): pass > ... >>>> Example.mro() > Via metaclass! > [<class '__main__.Example'>, <class 'object'>] > > Where the current PEP risks falling into unbounded recursion is that > it appears to propose that the default type.__getdescriptor__ > implementation be defined in terms of accessing cls.__dict__, but a > normal Python level access to "cls.__dict__" would go through the > descriptor machinery, triggering an infinite regress.
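The interception being discussed can be seen in a small self-contained sketch (mine, not code from the patch): attribute access on a class routes through the metaclass's __getattribute__, so even reading cls.__dict__ from Python level goes through the descriptor machinery — which is why a default __getdescriptor__ defined naively in terms of cls.__dict__ would recurse, and must instead use the interpreter's direct tp_dict access.

```python
# Sketch of why cls.__dict__ access is itself routed through the
# metaclass: every attribute read on the class, including __dict__,
# invokes Meta.__getattribute__.
import types

traced = []

class Meta(type):
    def __getattribute__(cls, name):
        traced.append(name)
        return super().__getattribute__(name)

class Example(metaclass=Meta):
    attr = 42

assert Example.attr == 42
assert "attr" in traced

# Even __dict__ is intercepted -- a default lookup implemented as
# "cls.__dict__[name]" at Python level would re-enter itself:
ns = Example.__dict__
assert "__dict__" in traced
assert isinstance(ns, types.MappingProxyType)
```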
> > The PEP needs to be explicit that where "cls.__dict__" is written in > the definitions of both the old and new lookup semantics, it is *not* > referring to a normal class attribute lookup, but rather to the > interpreter's privileged access to the class namespace (e.g. direct > 'tp_dict' access in CPython). On first glance the same is true for all access to dunder attributes in sample code for the PEP, a similar example could be written for __get__ or __set__. I have to think a bit more about how to clearly describe this. I'm currently coaxing PyObjC into using PEP 447 when that's available, and that involves several layers of metaclasses in C, which is annoyingly hard to debug when the code doesn't do what I want, like it does now. But on the other hand, that's why I wanted to use PyObjC to validate the PEP in the first place. Back to wrangling C code, Ronald > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Sun Jul 24 11:20:40 2016 From: guido at python.org (Guido van Rossum) Date: Sun, 24 Jul 2016 08:20:40 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: I am very much against this. The two are not at all like each other. Also, what's the use case? On Sunday, July 24, 2016, Martin Teichmann wrote: > Hi list, Hi Nick, > > Sorry for my delayed response, it is summer here... > > > However, phrasing it that way suggests that it's possible we *did* miss > > something in the PEP: we haven't specified whether or not __set_name__ > > should be called when someone does "cls.attr = descr".
> > Given the name, I think we *should* call it in that case, and then the > > semantics during class creation are approximately what would happen if > > we actually built up the class attributes as: > > > > for attr, value in cls_ns.items(): > > setattr(cls, attr, value) > > That's a very good point and actually easy to solve: we would just > need to override type.__setattr__ to do so. Actually, it is already > overridden, so we just need to add code to type.__setattr__ to also > call __set_name__. > > One could be of the other standpoint that in your above example it's > the duty of the caller of setattr to also call __set_name__. It would > be pretty easy to add a line in the loop that also calls > __set_owner__. > > Greetings > > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido (mobile) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Sun Jul 24 12:25:31 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 24 Jul 2016 09:25:31 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: <5793C27F.6060300@python.org> Message-ID: <5794EBFB.4050307@stoneleaf.us> On 07/24/2016 12:45 AM, Paul Moore wrote: > This PEP is now accepted. Congratulations, Steve! And more congratulations! :) -- ~Ethan~ From ethan at stoneleaf.us Sun Jul 24 12:31:32 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 24 Jul 2016 09:31:32 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: Message-ID: <5794ED64.20408@stoneleaf.us> On 07/24/2016 08:20 AM, Guido van Rossum wrote: > I am very much against this. The two are not at all like each other. Also, what's the use case? 
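To make the question concrete, here is a sketch (mine, not Martin's patch) of the suggestion being debated: a type.__setattr__ override that also invokes __set_name__, so that "cls.attr = descr" after class creation behaves like a class-body assignment. The semantics Python 3.6 actually shipped do *not* do this.

```python
# Hypothetical metaclass illustrating Martin's suggestion: setattr on a
# class also triggers the PEP 487 __set_name__ hook. Not stdlib behaviour.
class NotifyingMeta(type):
    def __setattr__(cls, name, value):
        super().__setattr__(name, value)
        # Look the hook up on the value's type, as the protocol requires.
        hook = getattr(type(value), "__set_name__", None)
        if hook is not None:
            hook(value, cls, name)

class Descr:
    owner = name = None
    def __set_name__(self, owner, name):
        self.owner, self.name = owner, name

class C(metaclass=NotifyingMeta):
    pass

C.attr = Descr()          # plain setattr, *after* class creation...
assert C.attr.owner is C  # ...yet __set_name__ was still invoked
assert C.attr.name == "attr"
```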
To be clear: you are against the automatic calling of __set_name__ and/or __set_owner__ when using setattr outside of class creation? Said another way: class creation mechanics should only happen during class creation? -- ~Ethan~ From gvanrossum at gmail.com Sun Jul 24 13:00:58 2016 From: gvanrossum at gmail.com (Guido van Rossum) Date: Sun, 24 Jul 2016 10:00:58 -0700 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: <5794ED64.20408@stoneleaf.us> References: <5794ED64.20408@stoneleaf.us> Message-ID: Yes. --Guido (mobile) On Jul 24, 2016 9:34 AM, "Ethan Furman" wrote: > On 07/24/2016 08:20 AM, Guido van Rossum wrote: > > I am very much against this. The two are not at all like each other. Also, >> what's the use case? >> > > To be clear: you are against the automatic calling of __set_name__ and/or > __set_owner__ when using > setattr outside of class creation? Said another way: class creation > mechanics should only happen > during class creation? > > -- > ~Ethan~ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Sun Jul 24 14:29:14 2016 From: steve.dower at python.org (Steve Dower) Date: Sun, 24 Jul 2016 11:29:14 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: References: <5793C27F.6060300@python.org> Message-ID: Thanks Paul. I'll update the headers on the PEP later today. Top-posted from my Windows Phone -----Original Message----- From: "Paul Moore" Sent: 7/24/2016 0:45 To: "Guido van Rossum" Cc: "Steve Dower" ; "Python Dev" Subject: Re: [Python-Dev] PEP 514: Python registration in the Windows registry This PEP is now accepted. Congratulations, Steve!
And thanks for putting up with all of my last-minute questions :-) Paul On 23 July 2016 at 21:20, Guido van Rossum wrote: > I'll let Paul pronounce. But you should probably have a BDFL-Delegate: > ... header. > > On Sat, Jul 23, 2016 at 12:16 PM, Steve Dower wrote: >> PEP 514 is now ready for pronouncement, so this is the last chance for any >> feedback (BDFL-delegate Paul has been active on the github PR, so I don't >> expect he has a lot of feedback left). >> >> The most major change from the previous post is the addition of some code >> examples at the end. Honestly, I don't expect many tools written in Python >> to be scanning the registry (since once you're in Python you probably don't >> need to find it), but hopefully they'll help clarify the PEP for people who >> prefer code. >> >> Full text below. >> >> Cheers, >> Steve >> >> ---------- >> >> PEP: 514 >> Title: Python registration in the Windows registry >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Steve Dower >> Status: Draft >> Type: Informational >> Content-Type: text/x-rst >> Created: 02-Feb-2016 >> Post-History: 02-Feb-2016, 01-Mar-2016, 18-Jul-2016 >> >> Abstract >> ======== >> >> This PEP defines a schema for the Python registry key to allow third-party >> installers to register their installation, and to allow tools and >> applications >> to detect and correctly display all Python environments on a user's machine. >> No >> implementation changes to Python are proposed with this PEP. >> >> Python environments are not required to be registered unless they want to be >> automatically discoverable by external tools. As this relates to Windows >> only, >> these tools are expected to be predominantly GUI applications. However, >> console >> applications may also make use of the registered information. This PEP >> covers >> the information that may be made available, but the actual presentation and >> use >> of this information is left to the tool designers. 
>> >> The schema matches the registry values that have been used by the official >> installer since at least Python 2.5, and the resolution behaviour matches >> the >> behaviour of the official Python releases. Some backwards compatibility >> rules >> are provided to ensure tools can correctly detect versions of CPython that >> do >> not register full information. >> >> Motivation >> ========== >> >> When installed on Windows, the official Python installer creates a registry >> key >> for discovery and detection by other applications. This allows tools such as >> installers or IDEs to automatically detect and display a user's Python >> installations. For example, the PEP 397 ``py.exe`` launcher and editors such >> as >> PyCharm and Visual Studio already make use of this information. >> >> Third-party installers, such as those used by distributions, typically >> create >> identical keys for the same purpose. Most tools that use the registry to >> detect >> Python installations only inspect the keys used by the official installer. >> As a >> result, third-party installations that wish to be discoverable will >> overwrite >> these values, often causing users to "lose" their original Python >> installation. >> >> By describing a layout for registry keys that allows third-party >> installations >> to register themselves uniquely, as well as providing tool developers >> guidance >> for discovering all available Python installations, these collisions should >> be >> prevented. We also take the opportunity to add some well-known metadata so >> that >> more information can be presented to users. >> >> Definitions >> =========== >> >> A "registry key" is the equivalent of a file-system path into the registry. >> Each >> key may contain "subkeys" (keys nested within keys) and "values" (named and >> typed attributes attached to a key). These are used on Windows to store >> settings >> in much the same way that directories containing configuration files would >> work. 
>> >> ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in >> user, >> and this user can generally read and write all settings under this root. >> >> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any >> user can read these settings but only administrators can modify them. It is >> typical for values under ``HKEY_CURRENT_USER`` to take precedence over those >> in >> ``HKEY_LOCAL_MACHINE``. >> >> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special >> key >> that 32-bit processes transparently read and write to rather than accessing >> the >> ``Software`` key directly. >> >> Further documentation regarding registry redirection on Windows is available >> from the MSDN Library [1]_. >> >> Structure >> ========= >> >> We consider there to be a single collection of Python environments on a >> machine, >> where the collection may be different for each user of the machine. There >> are >> three potential registry locations where the collection may be stored based >> on >> the installation options of each environment:: >> >> HKEY_CURRENT_USER\Software\Python\<Company>\<Tag> >> HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag> >> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag> >> >> Official Python releases use ``PythonCore`` for Company, and the value of >> ``sys.winver`` for Tag. The Company ``PyLauncher`` is reserved. Other >> registered >> environments may use any values for Company and Tag. Recommendations are >> made >> later in this document. >> >> Company-Tag pairs are case-insensitive, and uniquely identify each >> environment. >> Depending on the purpose and intended use of a tool, there are two suggested >> approaches for resolving conflicts between Company-Tag pairs. >> >> Tools that list every installed environment may choose to include those >> even where the Company-Tag pairs match.
They should ensure users can easily >> identify whether the registration was per-user or per-machine, and which >> registration has the higher priority. >> >> Tools that aim to select a single installed environment from all registered >> environments based on the Company-Tag pair, such as the ``py.exe`` launcher, >> should always select the environment registered in ``HKEY_CURRENT_USER`` >> rather >> than the matching one in ``HKEY_LOCAL_MACHINE``. >> >> Conflicts between ``HKEY_LOCAL_MACHINE\Software\Python`` and >> ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` should only occur when >> both >> 64-bit and 32-bit versions of an interpreter have the same Tag. In this >> case, >> the tool should select whichever is more appropriate for its use. >> >> If a tool is able to determine from the provided information (or lack >> thereof) >> that it cannot use a registered environment, there is no obligation to >> present >> it to users. >> >> Except as discussed in the section on backwards compatibility, Company and >> Tag >> values are considered opaque to tools, and no information about the >> interpreter >> should be inferred from the text. However, some tools may display the >> Company >> and Tag values to users, so ideally the Tag will be able to help users >> identify >> the associated environment. >> >> Python environments are not required to register themselves unless they want >> to >> be automatically discoverable by external tools. >> >> Backwards Compatibility >> ----------------------- >> >> Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds >> in >> ``sys.winver``. As a result, it is not possible to have valid side-by-side >> installations of both 32-bit and 64-bit interpreters under this scheme since >> it >> would result in duplicate Tags.
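The precedence rules quoted above can be sketched as a small, hypothetical pure-Python helper (mine, not the PEP's sample code): Company-Tag pairs compare case-insensitively, a per-user registration wins over the per-machine ones, and the 64-bit view is checked before the Wow6432Node view, matching the enumeration order of the PEP's own example.

```python
# Hypothetical sketch of the PEP's conflict-resolution order.
# Registrations are modelled as (location, company, tag) tuples.
SEARCH_ORDER = [
    "HKEY_CURRENT_USER",
    "HKEY_LOCAL_MACHINE",             # 64-bit view
    "HKEY_LOCAL_MACHINE-Wow6432Node", # 32-bit view
]

def resolve(registrations, company, tag):
    """Return the registry location whose (company, tag) entry wins."""
    wanted = (company.lower(), tag.lower())
    found = {loc for loc, c, t in registrations
             if (c.lower(), t.lower()) == wanted}
    for loc in SEARCH_ORDER:
        if loc in found:
            return loc
    return None

regs = [
    ("HKEY_LOCAL_MACHINE", "PythonCore", "3.6"),
    ("HKEY_CURRENT_USER", "PythonCore", "3.6"),
    ("HKEY_LOCAL_MACHINE-Wow6432Node", "ExampleCorp", "examplepy"),
]
assert resolve(regs, "pythoncore", "3.6") == "HKEY_CURRENT_USER"
assert resolve(regs, "ExampleCorp", "examplepy").endswith("Wow6432Node")
```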
>> >> To ensure backwards compatibility, applications should treat environments >> listed >> under the following two registry keys as distinct, even when the Tag >> matches:: >> >> HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag> >> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag> >> >> Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct >> from >> both of the above keys, potentially resulting in three environments >> discovered >> using the same Tag. Alternatively, a tool may determine whether the per-user >> environment is 64-bit or 32-bit and give it priority over the per-machine >> environment, resulting in a maximum of two discovered environments. >> >> It is not possible to detect side-by-side installations of both 64-bit and >> 32-bit versions of Python prior to 3.5 when they have been installed for the >> current user. Python 3.5 and later always uses different Tags for 64-bit and >> 32-bit versions. >> >> The following sections describe user-visible information that may be >> registered. >> For Python 3.5 and earlier, none of this information is available, but >> alternative defaults are specified for the ``PythonCore`` key. >> >> Environments registered under other Company names have no backward >> compatibility >> requirements and must use distinct Tags to support side-by-side >> installations. >> Tools consuming these registrations are not required to disambiguate tags >> other >> than by preferring the user's setting. >> >> Company >> ------- >> >> The Company part of the key is intended to group related environments and to >> ensure that Tags are namespaced appropriately. The key name should be >> alphanumeric without spaces and likely to be unique.
For example, a >> trademarked >> name (preferred), a hostname, or as a last resort, a UUID would be >> appropriate:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> HKEY_CURRENT_USER\Software\Python\www.example.com >> HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 >> >> The company name ``PyLauncher`` is reserved for the PEP 397 launcher >> (``py.exe``). It does not follow this convention and should be ignored by >> tools. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the environment manufacturer/developer/distributor to users. Otherwise, the >> name >> of the key should be used. (For ``PythonCore``, the default display name is >> "Python Software Foundation".) >> >> If a string value named ``SupportUrl`` exists, it may be displayed or >> otherwise >> used to direct users to a web site related to the environment. (For >> ``PythonCore``, the default support URL is "http://www.python.org/".) >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp >> (Default) = (value not set) >> DisplayName = "Example Corp" >> SupportUrl = "http://www.example.com" >> >> Tag >> --- >> >> The Tag part of the key is intended to uniquely identify an environment >> within >> those provided by a single company. The key name should be alphanumeric >> without >> spaces and stable across installations. For example, the Python language >> version, a UUID or a partial/complete hash would be appropriate, while a Tag >> based on the install directory or some aspect of the current machine may >> not. >> For example:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 >> >> It is expected that some tools will require users to type the Tag into a >> command >> line, and that the Company may be optional provided the Tag is unique across >> all >> Python installations.
Short, human-readable and easy to type Tags are >> recommended, and if possible, select a value likely to be unique across all >> other Companies. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the environment to users. Otherwise, the name of the key should be used. >> (For >> ``PythonCore``, the default is "Python " followed by the Tag.) >> >> If a string value named ``SupportUrl`` exists, it may be displayed or >> otherwise >> used to direct users to a web site related to the environment. (For >> ``PythonCore``, the default is "http://www.python.org/".) >> >> If a string value named ``Version`` exists, it should be used to identify >> the >> version of the environment. This is independent from the version of Python >> implemented by the environment. (For ``PythonCore``, the default is the >> first >> three characters of the Tag.) >> >> If a string value named ``SysVersion`` exists, it must be in ``x.y`` or >> ``x.y.z`` format matching the version returned by ``sys.version_info`` in >> the >> interpreter. If omitted, the Python version is unknown. (For ``PythonCore``, >> the default is the first three characters of the Tag.) >> >> If a string value named ``SysArchitecture`` exists, it must match the first >> element of the tuple returned by ``platform.architecture()``. Typically, >> this >> will be "32bit" or "64bit". If omitted, the architecture is unknown. (For >> ``PythonCore``, the architecture is "32bit" when registered under >> ``HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python`` *or* anywhere on a 32-bit >> operating system, "64bit" when registered under >> ``HKEY_LOCAL_MACHINE\Software\Python`` on a 64-bit machine, and unknown when >> registered under ``HKEY_CURRENT_USER``.) >> >> Note that each of these values is recommended, but optional. Omitting >> ``SysVersion`` or ``SysArchitecture`` may prevent some tools from correctly >> supporting the environment. 
A complete example may look like this:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy >> (Default) = (value not set) >> DisplayName = "Example Py Distro 3" >> SupportUrl = "http://www.example.com/distro-3" >> Version = "3.0.12345.0" >> SysVersion = "3.6.0" >> SysArchitecture = "64bit" >> >> InstallPath >> ----------- >> >> Beneath the environment key, an ``InstallPath`` key must be created. This >> key is >> always named ``InstallPath``, and the default value must match >> ``sys.prefix``:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath >> (Default) = "C:\ExampleCorpPy36" >> >> If a string value named ``ExecutablePath`` exists, it must be the full path >> to >> the ``python.exe`` (or equivalent) executable. If omitted, the environment >> is >> not executable. (For ``PythonCore``, the default is the ``python.exe`` file >> in >> the directory referenced by the ``(Default)`` value.) >> >> If a string value named ``ExecutableArguments`` exists, tools should use the >> value as the first arguments when executing ``ExecutablePath``. Tools may >> add >> other arguments following these, and will reasonably expect standard Python >> command line options to be available. >> >> If a string value named ``WindowedExecutablePath`` exists, it must be a path >> to >> the ``pythonw.exe`` (or equivalent) executable. If omitted, the default is >> the >> value of ``ExecutablePath``, and if that is omitted the environment is not >> executable. (For ``PythonCore``, the default is the ``pythonw.exe`` file in >> the >> directory referenced by the ``(Default)`` value.) >> >> If a string value named ``WindowedExecutableArguments`` exists, tools should >> use >> the value as the first arguments when executing ``WindowedExecutablePath``. >> Tools may add other arguments following these, and will reasonably expect >> standard Python command line options to be available. 
>> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\examplepy\InstallPath >> (Default) = "C:\ExampleDistro30" >> ExecutablePath = "C:\ExampleDistro30\ex_python.exe" >> ExecutableArguments = "--arg1" >> WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" >> WindowedExecutableArguments = "--arg1" >> >> Help >> ---- >> >> Beneath the environment key, a ``Help`` key may be created. This key is >> always >> named ``Help`` if present and has no default value. >> >> Each subkey of ``Help`` specifies a documentation file, tool, or URL >> associated >> with the environment. The subkey may have any name, and the default value is >> a >> string appropriate for passing to ``os.startfile`` or equivalent. >> >> If a string value named ``DisplayName`` exists, it should be used to >> identify >> the help file to users. Otherwise, the key name should be used. >> >> A complete example may look like:: >> >> HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help >> Python\ >> (Default) = "C:\ExampleDistro30\python36.chm" >> DisplayName = "Python Documentation" >> Extras\ >> (Default) = "http://www.example.com/tutorial" >> DisplayName = "Example Distro Online Tutorial" >> >> Other Keys >> ---------- >> >> All other subkeys under a Company-Tag pair are available for private use. >> >> Official CPython releases have traditionally used certain keys in this space >> to >> determine the location of the Python standard library and other installed >> modules. This behaviour is retained primarily for backward compatibility. >> However, as the code that reads these values is embedded into the >> interpreter, >> third-party distributions may be affected by values written into >> ``PythonCore`` >> if using an unmodified interpreter. >> >> Sample Code >> =========== >> >> This sample code enumerates the registry and displays the available >> Company-Tag >> pairs that could be used to launch an environment and the target executable. 
It >> only shows the most-preferred target for the tag. Backwards-compatible >> handling >> of ``PythonCore`` is omitted but shown in a later example:: >> >> # Display most-preferred environments. >> # Assumes a 64-bit operating system >> # Does not correctly handle PythonCore compatibility >> >> import winreg >> >> def enum_keys(key): >> i = 0 >> while True: >> try: >> yield winreg.EnumKey(key, i) >> except OSError: >> break >> i += 1 >> >> def get_value(key, value_name): >> try: >> return winreg.QueryValue(key, value_name) >> except FileNotFoundError: >> return None >> >> seen = set() >> for hive, key, flags in [ >> (winreg.HKEY_CURRENT_USER, r'Software\Python', 0), >> (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', >> winreg.KEY_WOW64_64KEY), >> (winreg.HKEY_LOCAL_MACHINE, r'Software\Python', >> winreg.KEY_WOW64_32KEY), >> ]: >> with winreg.OpenKeyEx(hive, key, access=winreg.KEY_READ | flags) as >> root_key: >> for company in enum_keys(root_key): >> if company == 'PyLauncher': >> continue >> >> with winreg.OpenKey(root_key, company) as company_key: >> for tag in enum_keys(company_key): >> if (company, tag) in seen: >> if company == 'PythonCore': >> # TODO: Backwards compatibility handling >> pass >> continue >> seen.add((company, tag)) >> >> try: >> with winreg.OpenKey(company_key, tag + >> r'\InstallPath') as ip_key: >> exec_path = get_value(ip_key, >> 'ExecutablePath') >> exec_args = get_value(ip_key, >> 'ExecutableArguments') >> if company == 'PythonCore' and not >> exec_path: >> # TODO: Backwards compatibility handling >> pass >> except OSError: >> exec_path, exec_args = None, None >> >> if ex [The entire original message is not included.] -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Sun Jul 24 23:49:39 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 25 Jul 2016 13:49:39 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID: On 25 July 2016 at 03:00, Guido van Rossum wrote: > Yes. OK, we can cover that in the documentation - if folks want to emulate what happens during class construction after the fact, they'll need to do: cls.name = attr attr.__set_name__(cls, "name") Semantically, I agree that approach makes sense - by default, descriptors created outside a class body won't have a defined owning class or attribute name, and if you want to give them one, you'll have to do it explicitly. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ronaldoussoren at mac.com Mon Jul 25 00:57:04 2016 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 25 Jul 2016 06:57:04 +0200 Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses In-Reply-To: <6F326944-9BF0-4E38-B487-79BC0ADF17B3@mac.com> References: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com> <6F326944-9BF0-4E38-B487-79BC0ADF17B3@mac.com> Message-ID: <77DB8EBC-B258-492E-8492-BD1CFD865948@mac.com> An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Mon Jul 25 00:58:51 2016 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 25 Jul 2016 06:58:51 +0200 Subject: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses In-Reply-To: <6F326944-9BF0-4E38-B487-79BC0ADF17B3@mac.com> References: <0B7F0208-DEC6-4039-89B4-1FCD0071B092@mac.com> <6F326944-9BF0-4E38-B487-79BC0ADF17B3@mac.com> Message-ID: > On 24 Jul 2016, at 13:06, Ronald Oussoren > wrote: > … > But on the other hand, that's why I wanted to use PyObjC to validate > the PEP in the first place. I've hit a fairly significant issue with this, PyObjC's super contains more magic than just this magic that would be fixed by PEP 447.
I don't think I'll be able to finish work on PEP 447 this week because of that, and in the worst case will have to retire the PEP. The problem is as follows: to be able to map all of Cocoa's methods to Python PyObjC creates two proxy classes for every Cocoa class: the regular class and its metaclass. The latter is used to store class methods. This is needed because Objective-C classes can have instance and class methods with the same name, as an example: @interface NSObject -(NSString*)description; +(NSString*)description; @end The first declaration for 'description' is an instance method, the second is a class method. The Python metaclass is mostly a hidden detail, users don't explicitly interact with these classes and use the normal Python convention for defining class methods. This works fine, problems start when you want to subclass in Python and override the class method: class MyClass (NSObject): @classmethod def description(cls): return 'hello there from %r' % (super(MyClass, cls).description()) If you're used to normal Python code there's nothing wrong here, but getting this to work required some magic in objc.super to ensure that its __getattribute__ looks in the metaclass in this case and not the regular class. The current PEP447-ised version of PyObjC has a number of test failures because builtin.super obviously doesn't contain this hack (and shouldn't). I think I can fix this for modern code that uses an argumentless call to super by replacing the cell containing the __class__ reference when moving the method from the regular class to the instance class. That would obviously not work for the code I showed earlier, but that at least won't fail silently and the error message is specific enough that I can include it in PyObjC's documentation. Ronald > > Back to wrangling C code, > > Ronald > > >> >> Cheers, >> Nick.
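The shadowing problem Ronald describes can be reproduced in pure Python (my sketch, not PyObjC's actual machinery): a method defined in the class body hides a same-named method on the metaclass, so one name cannot serve both the class-method and instance-method roles without extra lookup magic.

```python
# Pure-Python sketch of the Objective-C same-name conflict: a metaclass
# method plays the role of +description, a class-body method the role of
# -description. The class-body definition shadows the metaclass one.
class Meta(type):
    def describe(cls):              # the "+class method" flavour
        return "from the metaclass"

class A(metaclass=Meta):
    pass

class B(metaclass=Meta):
    def describe(self):             # the "-instance method" flavour
        return "from the class dict"

# Without shadowing, lookup on the class finds the metaclass method,
# bound with cls=A:
assert A.describe() == "from the metaclass"

# Defining the name in the class body hides the metaclass version, so
# B.describe is the plain instance function, not the metaclass method:
assert B.describe(B()) == "from the class dict"
assert B().describe() == "from the class dict"
```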
>> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at moreati.org.uk Mon Jul 25 07:59:28 2016 From: alex at moreati.org.uk (Alex Willmer) Date: Mon, 25 Jul 2016 13:59:28 +0200 Subject: [Python-Dev] Introducing Python for CloudABI Message-ID: Morning all, I'm writing to introduce myself and a port of CPython 3.6 to CloudABI. The port is reaching the point where it might be of interest to others. Namely, it ran its first .py script yesterday during the EuroPython sprints. Having said that it's still very early days, the patches do horrible things - particularly to the import machinery. I'm writing this to raise awareness, and open discussions. I'd love to answer any questions/comments you might have. Background: # What is CloudABI? CloudABI is a POSIX-like platform, with Capability-based Security applied. At the syscall/libc layer functions which perform IO or acquire resources without a pre-existing file descriptor (e.g. open(), stat(), bind() etc) are removed. All IO operations must be performed through functions that accept a file descriptor, or a path relative to an fd. In this way descriptors serve as capability tokens. All such tokens are provided to a process when it is spawned. If none are provided then the program in question is limited to just pure computation & memory allocation. Even stdin, stdout & stderr are not provided by default. # Why bother with CloudABI? It makes it possible to isolate programs from the OS, without resorting to e.g. containers. Possibly even to run untrusted binaries. A compromised CloudABI process can only damage the things it has access to e.g.
a transcoding job can only read the provided input and write to the provided output. It couldn't read /etc/passwd, or try to brute-force SSH. This kind of isolation is still possible with UNIX, but it's not the default - which makes it rare.

Personally, I find it interesting. I like the fact that CloudABI processes can be run by unprivileged users - unlike containers. The no-default-global-resources nature makes it easier to write code that can be tested. The fd provided to a webapp doesn't have to be a TCP socket; it could be a domain socket, or just a file stream.

# What is the state of Python for CloudABI?
Python for CloudABI is a proof of concept. The port takes the form of a number of patches to CPython 3.6.0a3. These mostly add autoconf & #ifdef entries for POSIX API functions that CloudABI deliberately does not support.

A few differences make their way through to Python code, for instance:

- sys.path is a list of file descriptors, rather than a list of strings
- sys.executable is None
- sys.argv is not present
- The uid and gid entries of stat tuples are set to None (like on Windows)

I got print('Hello World', file=...) working about a month ago, and executed my first .py file yesterday (commit pending).

The current TODO list is:

- Finish script support
- Module execution (python -m) support
- zipimport support for file descriptors
- ssl support
- patch cleanup
- try to run test suite

There is no Python 2.x support, and I don't plan to add any.

# What's the state of CloudABI?
CloudABI runs on FreeBSD, NetBSD, macOS and Linux. For now it requires a patched kernel on Linux; FreeBSD 11 will include it out of the box. Various libraries/packages have been ported (e.g. curl, libpng, x265, lua, libressl).

# What's the history of CloudABI?
The project started about 2 years ago. Ed Schouten is the project leader & creator. I became involved this year, having seen a talk by Ed at CCC around new year.

# Where can I get more info?
- https://nuxi.nl - CloudABI homepage, including Ed Schouten's CCC talk - http://slides.com/alexwillmer/cloudabi-capability-security - My EP2016 talk - https://www.youtube.com/watch?v=wlUtkBa8tK8&feature=youtu.be&t=49m - https://github.com/NuxiNL/cloudlibc - https://github.com/NuxiNL/cloudabi-ports - https://github.com/NuxiNL/cloudabi-ports/tree/master/packages/python - #cloudabi on Efnet IRC Regards, Alex -- Alex Willmer From guido at python.org Mon Jul 25 11:33:14 2016 From: guido at python.org (Guido van Rossum) Date: Mon, 25 Jul 2016 08:33:14 -0700 Subject: [Python-Dev] Introducing Python for CloudABI In-Reply-To: References: Message-ID: Hi Alex, CloudABI sounds interesting. I recall working on something vaguely similar at Google, for App Engine. But we didn't go so far as what you are doing here: changing the type of sys.path for example will instantly break so much Python code that it's not even possible to think about it. If you are serious about getting patches reviewed they should probably go in the bug tracker (presumably there are some things that are less controversial than the sys.path change). If you want your API changes discussed I recommend trying python-ideas first, people there are more open to new and different things. In the mean time, good luck with CloudABI. --Guido On Mon, Jul 25, 2016 at 4:59 AM, Alex Willmer wrote: > Morning all, I'm writing to introduce myself and a port of CPython 3.6 > to a CloudABI. > > The port is reaching the point where it might be of interest to > others. Namely it ran it's first .py script yesterday during the > EuroPython scripts. Having said that it's still very early days, the > patches do horrible things - particularly to the import machinery. > > I writing this to raise awareness, and open discussions. I'd love to > answer any questions/comments you might have. > > Background: > > # What is CloudABI? > CloudABI is a POSIX-like platform, with Capability based Security > applied. 
At the syscall/libc layer functions which perform IO or > acquire resources without a pre-existing file descriptor (e.g. open(), > stat(), bind() etc) are removed. All IO operations must be performed > through functions that accept a file descriptor, or a path relative to > an fd. > > In this way descriptors server as capability tokens. All such tokens > are provided to a process when it is spawned. If none are provided > then the program in question is limited to just pure computation & > memory allocation. Even stdin, stdout & stderr are not provided by > default. > > # Why bother with CloudABI? > It makes it possible to isolate programs from the OS, without > resorting to e.g. containers. Possibly even to run untrusted binaries. > A compromised CloudABI process can only damaged the things it has > access to e.g. a transcoding job can only read the provided input and > write to the provided output. It couldn't read /etc/passwd, or try to > brute force SSH. This kind of isolation is still possible with UNIX, > but it's not the default - which makes it rare. > > Personally, I find it interesting. I like the fact that CloudABI > processes can be run by unprivileged users - unlike containers. The > no-default-global-resources nature makes it easier to write code that > can be tested. The fd provided to a webapp doesn't have to be a TCP > socket, it could be a domain socket, or just a file stream. > > # What is the state of Python for CloudABI? > Python for CloudABI is a proof of concept. The port takes the form of > a number of patches to CPython 3.6.0a3. These mostly add autoconf & > #ifdef entries for POSIX API functions that CloudABI deliberately does > not support. 
> > A few differences make their way through Python code, for instance > - sys.path is a list of file descriptors, rather than a list of strings > - sys.executable is None > - sys.argv is not present > - The uid and gid entries of stat tuples are set to None (like on Windows) > > I got print('Hello World', file=...) working about a month ago, and > executed my first .py file yesterday (commit pending). > > The current TODO list is > - Finish script support > - Module execution (python -m) support > - zipimport support for file descriptors > - ssl support > - patch cleanup > - try to run test suite > > There is no Python 2.x support, and I don't plan to add any. > > # What's the state of CloudABI? > CloudABI runs on FreeBSD, NetBSD, macOS and Linux. For now it requires > a patched kernel on Linux; FreeBSD 11 will include it out the box. > Various libraries/packages have been ported (e.g. curl, libpng, x265, > lua, libressl). > > # What's the history of CloudABI? > The project started about 2 years ago. Ed Schouten is the project > leader & creator. I became involved this year, having seen a talk by > Ed at CCC around new year. > > # Where can I get more info? 
> - https://nuxi.nl - CloudABI homepage, including Ed Schouten's CCC talk
> - http://slides.com/alexwillmer/cloudabi-capability-security - My EP2016 talk
> - https://www.youtube.com/watch?v=wlUtkBa8tK8&feature=youtu.be&t=49m
> - https://github.com/NuxiNL/cloudlibc
> - https://github.com/NuxiNL/cloudabi-ports
> - https://github.com/NuxiNL/cloudabi-ports/tree/master/packages/python
> - #cloudabi on Efnet IRC
>
> Regards, Alex
> --
> Alex Willmer
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)

From jsbueno at python.org.br Wed Jul 27 13:56:53 2016
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Wed, 27 Jul 2016 14:56:53 -0300
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us>
Message-ID:

Hi - sorry for stepping in late - I've just reread the PEP and tried out the reference implementation, and I have two suggestions/issues with it as is:

1) Why is `__init_subclass__` not run in the very class in which it is defined? That makes no sense to me, and seems very unpythonic.

I applied the patch at issue 27366, went to the terminal, and created a "hello world" __init_subclass__ with a simple print statement, and was greatly surprised that it did not print out. Only upon reviewing the PEP text did I infer that it was supposed to work (as it did) in further subclasses of my initial class. After all, issubclass(A, A) is usually True.

I plead for this behavior to be changed in the PEP. If one does not want it to run on the base class, a simple default argument and an `if param is None: return` in the method body can do the job, with fewer exceptions and surprises.
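For reference, the behaviour being discussed is easy to demonstrate against the reference implementation (or any Python 3.6+); the class names here are purely illustrative:

```python
class Base:
    seen = []

    def __init_subclass__(cls, **kwargs):
        # Implicitly a classmethod; runs whenever a subclass is created
        super().__init_subclass__(**kwargs)
        Base.seen.append(cls.__name__)

class Child(Base):          # triggers Base.__init_subclass__
    pass

class GrandChild(Child):    # triggers it again, with cls=GrandChild
    pass

print(Base.seen)  # ['Child', 'GrandChild'] - Base itself never appears
```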
Otherwise, I'd suggest at least some PEP rewording to make explicit the fact that it is not run on the class in which it is defined at all. (I can help with that, but I'd rather see it implemented as above.)

I can see that it would have little effect, as any eventual parameter on the base class could be hardcoded into the class body itself - but just imagine a class hierarchy with cooperative "__init_subclass__" methods: each class would have to count on its parents' __init_subclass__ being run, without the one defined in its own body being called - which is rather surprising.

-------

2) I have another, higher-level concern with the PEP as well: it will pass all class keyword parameters but "metaclass" to "__init_subclass__" - thus making it all but impossible for any other keyword to the class creation machinery to ever be defined again. We can't think of any such other keyword now, but what might come in a couple of years?

What about just denoting in the PEP that "double under" keywords are reserved, and may come to be used by "type" itself in the future? (Or any other way of marking reserved class keywords.) Actually, it would even make sense to make "__metaclass__" an alias for "metaclass" in the class creation machinery.

Anyway, the PEP itself should mention that currently the keyword arg "metaclass" is swallowed by the class creation machinery and will never reach `__init_subclass__` - this behavior is less surprising to me, but it should be documented.

Or, an even less exceptional behavior for the future: make it so that "metaclass" specifies a custom metaclass (due to compatibility issues) AND is passed to __init_subclass__, while "__metaclass__" specifies a metaclass and is not passed (along with other double-unders as they are defined).
(as an extra bonus, people migrating from Python 2 to Python 3.6 will not even be surprised by the keyword argument being __metaclass__) Best regards, js -><- On 25 July 2016 at 00:49, Nick Coghlan wrote: > On 25 July 2016 at 03:00, Guido van Rossum wrote: >> Yes. > > OK, we can cover that in the documentation - if folks want to emulate > what happens during class construction after the fact, they'll need to > do: > > cls.name = attr > attr.__set_name__(cls, "name") > > Semantically, I agree that approach makes sense - by default, > descriptors created outside a class body won't have a defined owning > class or attribute name, and if you want to give them one, you'll have > to do it explicitly. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br From ncoghlan at gmail.com Wed Jul 27 21:30:02 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 28 Jul 2016 11:30:02 +1000 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID: On 28 July 2016 at 03:56, Joao S. O. Bueno wrote: > Otherwise, I'd suggest at least some PEP rewording to make explicit > the fact it is not run on the class it is defined at all. (I can help > with that, but I'd rather see it implemented as above). This is covered in the PEP: https://www.python.org/dev/peps/pep-0487/#calling-the-hook-on-the-class-itself > 2) > I have another higher level concern with the PEP as well: > It will pass all class keyword parameters, but for "metaclass" to > "__init_subclass__" - thus making it all but impossible to any other > keyword to the class creation machinery to ever be defined again. 
We > can't think of any such other keyword now, but what might come in a > couple of years? This isn't a new problem, as it already exists today for custom metaclasses. It just means introducing new class construction keywords at the language level is something that needs to be handled judiciously, and with a view to the fact that it might have knock-on effects for other APIs which need to find a new parameter name. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jsbueno at python.org.br Wed Jul 27 23:55:37 2016 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 28 Jul 2016 00:55:37 -0300 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID: On 27 July 2016 at 22:30, Nick Coghlan wrote: > On 28 July 2016 at 03:56, Joao S. O. Bueno wrote: >> Otherwise, I'd suggest at least some PEP rewording to make explicit >> the fact it is not run on the class it is defined at all. (I can help >> with that, but I'd rather see it implemented as above). > > This is covered in the PEP: > https://www.python.org/dev/peps/pep-0487/#calling-the-hook-on-the-class-itself > >> 2) >> I have another higher level concern with the PEP as well: >> It will pass all class keyword parameters, but for "metaclass" to >> "__init_subclass__" - thus making it all but impossible to any other >> keyword to the class creation machinery to ever be defined again. We >> can't think of any such other keyword now, but what might come in a >> couple of years? > > This isn't a new problem, as it already exists today for custom > metaclasses. It just means introducing new class construction keywords > at the language level is something that needs to be handled > judiciously, and with a view to the fact that it might have knock-on > effects for other APIs which need to find a new parameter name. 
Actually, as documented in the PEP (and I just confirmed at a Python 3.5 prompt), you can't actually use custom keywords for class definitions. This PEP fixes that, but at the same time makes any other reserved class keyword impossible in the future - that is, unless a one-line warning against reserved name patterns is added. I think that is a low cost, and not paying it now blocks too many - yet to be imagined - future possibilities.

(As for the example, in Python 3.5):

In [17]: def M(type):
    ...:     def __new__(metacls, name, bases, dict, **kw):
    ...:         print(kw)
    ...:         return super().__new__(name, bases, dict)
    ...:     def __init__(cls, name, bases, dict, **kw):
    ...:         print("init...", kw)
    ...:         return super().__init__(name, bases, dict)
    ...:

In [18]: class A(metaclass=M, test=23, color="blue"):
    ...:     pass
    ...:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in <module>()
----> 1 class A(metaclass=M, test=23, color="blue"):
      2     pass

TypeError: M() got an unexpected keyword argument 'color'

> Regards,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Jul 28 03:26:06 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Jul 2016 17:26:06 +1000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us>
Message-ID:

On 28 July 2016 at 13:55, Joao S. O. Bueno wrote:
> Actually, as documented on the PEP (and I just confirmed at a Python
> 3.5 prompt),
> you actually can't use custom keywords for class defintions. This PEP
> fixes that, but at the same time makes any other class reserved
> keyword impossible in the future - that is, unless a single line
> warning against reserved name patterns is added.

We don't warn against people defining new dunder-protocols as methods, why would we warn against a similar breach of convention in this case?
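(As an aside, the interactive session quoted above fails because ``M`` is created with ``def`` and is therefore a plain function, not a metaclass. A class-based metaclass that swallows its extra keywords does already accept them on Python 3.5 - a minimal sketch, with illustrative names:)

```python
class Meta(type):
    def __new__(mcls, name, bases, ns, **kwargs):
        # Swallow the custom keywords so they never reach type.__new__
        cls = super().__new__(mcls, name, bases, ns)
        cls._class_options = kwargs
        return cls

    def __init__(cls, name, bases, ns, **kwargs):
        # Likewise, keep them away from type.__init__
        super().__init__(name, bases, ns)

class A(metaclass=Meta, test=23, color="blue"):
    pass

print(A._class_options)  # {'test': 23, 'color': 'blue'}
```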
I'm also wondering how you would want such a warning to work if we ever claimed a parameter name for a base class in the standard library, but didn't claim it as a name supported by type/object. Note that I'm not denying that it *may* be annoying *if* we define a new universal class parameter at some point in the future *and* it collides with a custom parameter in a pre-existing API *and* the authors of that API miss the related PEP. However, given that we've come up with exactly one named class parameter to date (metaclass), and explicitly decided against adding another (namespace, replaced with PEP 520's simpler option of just making the standard namespace provide attribute ordering data), the odds of actually encountering the posited problematic scenario seem pretty remote. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jsbueno at python.org.br Thu Jul 28 09:12:49 2016 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 28 Jul 2016 10:12:49 -0300 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID: On 28 July 2016 at 04:26, Nick Coghlan wrote: > On 28 July 2016 at 13:55, Joao S. O. Bueno wrote: >> Actually, as documented on the PEP (and I just confirmed at a Python >> 3.5 prompt), >> you actually can't use custom keywords for class defintions. This PEP >> fixes that, but at the same time makes any other class reserved >> keyword impossible in the future - that is, unless a single line >> warning against reserved name patterns is added. > > We don't warn against people defining new dunder-protocols as methods, > why would we warn against a similar breach of convention in this case? > I'm also wondering how you would want such a warning to work if we > ever claimed a parameter name for a base class in the standard > library, but didn't claim it as a name supported by type/object. 
>
> Note that I'm not denying that it *may* be annoying *if* we define a
> new universal class parameter at some point in the future *and* it
> collides with a custom parameter in a pre-existing API *and* the
> authors of that API miss the related PEP.
>
> However, given that we've come up with exactly one named class
> parameter to date (metaclass), and explicitly decided against adding
> another (namespace, replaced with PEP 520's simpler option of just
> making the standard namespace provide attribute ordering data), the
> odds of actually encountering the posited problematic scenario seem
> pretty remote.

That is alright. (Even though I think there are remarks somewhere around against one putting forth one's own dunder methods.) But that leaves another issue open: the "metaclass" parameter gets into a very odd position, in which it has nothing distinctive about it, yet it is the only parameter that will be swallowed and won't reach "__init_subclass__".

Although I know it is not straightforward to implement (as the "metaclass" parameter is not passed to the metaclass __new__ or __init__), wouldn't it make sense to have it passed to __init_subclass__ just like all other keywords? (The default __init_subclass__ could then swallow it, if it reaches there.)

I am putting the question now, because it is a matter of "now" or "never" - and I can see that it does make sense if it is not passed down.

Anyway, do you have any remarks on the first issue I raised? About __init_subclass__ being called in the class it is defined in, not just on its descendant subclasses?

>
> Regards,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Jul 28 09:53:45 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Jul 2016 23:53:45 +1000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us>
Message-ID:

On 28 July 2016 at 23:12, Joao S. O.
Bueno wrote:
> Although I know it is not straightforward to implement (as the
> "metaclass" parameter is not passed to the metaclass __new__ or
> __init__), wouldn't it make sense to make it be passed to
> __init_subclass__ just like all other keywords? (the default
> __init_subclass__ then could swallow it, if it reaches there).
>
> I am putting the question now, because it is a matter of "now" or
> "never" - I can see it can does make sense if it is not passed down.

It would complicate the implementation, and has the potential to be confusing (since the explicit metaclass hint and the actual metaclass aren't guaranteed to be the same), so I don't think it makes sense to pass it down. I'll make sure we note it in the documentation for the new protocol method, though.

> Anyway, do you have any remarks on the first issue I raised? About
> __init_subclass__ being called in the class it is defined in, not just
> on it's descendant subclasses?

That's already covered in the PEP:
https://www.python.org/dev/peps/pep-0487/#calling-the-hook-on-the-class-itself

We want to make it easy for mixin classes to use __init_subclass__ to define "required attributes" on subclasses, and that's straightforward with the current definition:

    class MyMixin:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            if not hasattr(cls, "mixin_required_attribute"):
                raise TypeError(f"Subclasses of {__class__} must define a 'mixin_required_attribute' attribute")

If you actually do want the __init_subclass__ method to also run on the "base" class, then you'll need to tweak the class hierarchy a bit to look like:

    class _PrivateInitBase:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            ...

    class MyOriginalClass(_PrivateInitBase):
        ...
(You don't want to call __init_subclass__ from a class decorator, as that would call any parent __init_subclass__ implementations a second time.)

By contrast, when we had the default the other way around, opting *out* of self-application required boilerplate inside of __init_subclass__ to special-case the situation where "cls is __class__":

    class MyMixin:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            if cls is __class__:
                return  # Don't init the base class
            if getattr(cls, "mixin_required_attribute", None) is None:
                raise TypeError(f"Subclasses of {__class__} must define a non-None 'mixin_required_attribute' attribute")

This raises exciting new opportunities for subtle bugs, like bailing out *before* calling the parent __init_subclass__ method, and then figuring out that a later error from an apparently unrelated method is because your __init_subclass__ implementation is buggy.

There's still an opportunity for bugs with the current design decision (folks expecting __init_subclass__ to be called on the class defining it when that isn't the case), but they should be relatively shallow ones, and once people learn the rule that __init_subclass__ is only called on *strict* subclasses, it's a pretty easy behaviour to remember.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From jsbueno at python.org.br Thu Jul 28 10:06:28 2016
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Thu, 28 Jul 2016 11:06:28 -0300
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us>
Message-ID:

On 28 July 2016 at 10:53, Nick Coghlan wrote:
> On 28 July 2016 at 23:12, Joao S. O. Bueno wrote:
>> Although I know it is not straightforward to implement (as the
>> "metaclass" parameter is not passed to the metaclass __new__ or
>> __init__), wouldn't it make sense to make it be passed to
>> __init_subclass__ just like all other keywords?
(the default >> __init_subclass__ then could swallow it, if it reaches there). >> >> I am putting the question now, because it is a matter of "now" or >> "never" - I can see it can does make sense if it is not passed down. > > It would complicate the implementation, and has the potential to be > confusing (since the explicit metaclass hint and the actual metaclass > aren't guaranteed to be the same), so I don't think it makes sense to > pass it down. I'll make sure we note it in the documentation for the > new protocol method, though. > >> Anyway, do you have any remarks on the first issue I raised? About >> __init_subclass__ being called in the class it is defined in, not just >> on it's descendant subclasses? > > That's already covered in the PEP: > https://www.python.org/dev/peps/pep-0487/#calling-the-hook-on-the-class-itself > > We want to make it easy for mixin classes to use __init_subclass__ to > define "required attributes" on subclasses, and that's straightforward > with the current definition: > > class MyMixin: > def __init_subclass__(cls, **kwargs): > super().__init_subclass__(**kwargs) > if not hasattr(cls, "mixin_required_attribute"): > raise TypeError(f"Subclasses of {__class__} must > define a 'mixin_required_attribute' attribute") > > If you actually do want the init_subclass__ method to also run on the > "base" class, then you'll need to tweak the class hierarchy a bit to > look like: > > class _PrivateInitBase: > def __init_subclass__(cls, **kwargs): > super().__init_subclass__(**kwargs) > ... > > def MyOriginalClass(_PrivateInitBase): > ... 
>
> (You don't want to call __init_subclass__ from a class decorator, as
> that would call any parent __init_subclass__ implementations a second
> time)
>
> By contrast, when we had the default the other way around, opting
> *out* of self-application required boilerplate inside of
> __init_subclass__ to special case the situation where "cls is
> __class__":
>
>     class MyMixin:
>         def __init_subclass__(cls, **kwargs):
>             super().__init_subclass__(**kwargs)
>             if cls is __class__:
>                 return  # Don't init the base class
>             if getattr(cls, "mixin_required_attribute", None) is None:
>                 raise TypeError(f"Subclasses of {__class__} must define a non-None 'mixin_required_attribute' attribute")
>
> This raises exciting new opportunities for subtle bugs, like bailing
> out *before* calling the parent __init_subclass__ method, and then
> figuring out that a later error from an apparently unrelated method is
> because your __init_subclass__ implementation is buggy.
>
> There's still an opportunity for bugs with the current design decision
> (folks expecting __init_subclass__ to be called on the class defining
> it when that isn't the case), but they should be relatively shallow
> ones, and once people learn the rule that __init_subclass__ is only
> called on *strict* subclasses, it's a pretty easy behaviour to
> remember.

Thanks for the extensive reasoning.

Maybe then add a `run_init_subclass` class decorator in the stdlib to go along with the PEP? It should be a two-liner that would avoid boilerplate done wrong - but more important than that is that documenting it along with the __init_subclass__ method will make it more obvious that the hook is not run where it is defined. (I had missed it in the PEP text and only understood that part when re-reading the PEP after being surprised.)

js
-><-

>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Jul 28 10:22:49 2016
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Jul 2016 00:22:49 +1000
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us>
Message-ID:

On 29 July 2016 at 00:06, Joao S. O. Bueno wrote:
> Maybe then adding a `run_init_subclass` class decorator on the stdlib
> to go along with the pep?
> It should be a two liner that would avoid boiler plate done wrong -
> but more important than thatm is that it being documented alog with
> the __init_sublass__ method will make it more obvious it is not run
> where it is defined. (I had missed it in the PEP text and just
> understood that part when re-reading the PEP after being surprised)

The class decorator approach looks like this:

    def run_init_subclass(cls, **kwds):
        cls.__init_subclass__(**kwds)

However, it's not the right way to do it, as it means super(cls, cls).__init_subclass__(**kwds) will get called a second time (since the class machinery will have already done it implicitly before invoking the class decorators). If the parent class methods are idempotent then calling them again won't matter, but if you're supplying extra keyword arguments, you'd need to specify them in both the decorator call *and* the class header.

(I didn't actually realise this problem until writing the earlier email, so I'll probably tweak that part of the PEP to be more explicit about this aspect.)

So the simplest approach if you want "this class *and* all its descendants" behaviour is to adhere to the "strict subclasses only" requirement, and put the __init_subclass__ implementation in a base class, even if that means creating an additional mixin just to hold it.

If that recommended approach isn't acceptable for some reason, then the decorator-centric alternative would be to instead write it this way:

    def decorate_class(cls, **kwds):
        ...
    @decorate_class
    class MyClass:
        def __init_subclass__(cls, **kwds):
            super().__init_subclass__(**kwds)
            decorate_class(cls)

So the decorator gets defined outside the class, applied explicitly to the base class, and then the __init_subclass__ hook applies it implicitly to all subclasses.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From barry at barrys-emacs.org Thu Jul 28 12:51:37 2016
From: barry at barrys-emacs.org (Barry Scott)
Date: Thu, 28 Jul 2016 17:51:37 +0100
Subject: [Python-Dev] PEP 514: Python registration in the Windows registry
In-Reply-To: References:
Message-ID: <5072FE8B-1528-496E-AC94-1CB9C8DEAF54@barrys-emacs.org>

Why do you need SysArchitecture? Surely the 32-bit Pythons are registered in the 32-bit registry and the 64-bit Pythons in the 64-bit registry.

You can install Python 3.4 side by side, but only if you install the 64-bit build first, then the 32-bit build second.

Barry

> On 15 Jul 2016, at 23:20, Steve Dower wrote:
>
> Hi all
>
> I'd like to get this PEP approved (status changed to Active, IIUC).
>
> So far (to my knowledge), Anaconda is writing out the new metadata and Visual Studio is reading it. Any changes to the schema now will require somewhat public review anyway, so I don't see any harm in approving the PEP right now.
>
> To reiterate, this doesn't require changing anything about CPython at all and has no backwards compatibility impact on official releases (but hopefully it will stop alternative distros from overwriting our essential metadata and causing problems).
>
> I suppose I look to Guido first, unless he wants to delegate to one of the other Windows contributors?
> > Cheers, > Steve > > URL: https://www.python.org/dev/peps/pep-0514/ > > Full text > ------- > > PEP: 514 > Title: Python registration in the Windows registry > Version: $Revision$ > Last-Modified: $Date$ > Author: Steve Dower > Status: Draft > Type: Informational > Content-Type: text/x-rst > Created: 02-Feb-2016 > Post-History: 02-Feb-2016, 01-Mar-2016 > > Abstract > ======== > > This PEP defines a schema for the Python registry key to allow third-party > installers to register their installation, and to allow applications to detect > and correctly display all Python environments on a user's machine. No > implementation changes to Python are proposed with this PEP. > > Python environments are not required to be registered unless they want to be > automatically discoverable by external tools. > > The schema matches the registry values that have been used by the official > installer since at least Python 2.5, and the resolution behaviour matches the > behaviour of the official Python releases. > > Motivation > ========== > > When installed on Windows, the official Python installer creates a registry key > for discovery and detection by other applications. This allows tools such as > installers or IDEs to automatically detect and display a user's Python > installations. > > Third-party installers, such as those used by distributions, typically create > identical keys for the same purpose. Most tools that use the registry to detect > Python installations only inspect the keys used by the official installer. As a > result, third-party installations that wish to be discoverable will overwrite > these values, resulting in users "losing" their Python installation. > > By describing a layout for registry keys that allows third-party installations > to register themselves uniquely, as well as providing tool developers guidance > for discovering all available Python installations, these collisions should be > prevented. 
> > Definitions > =========== > > A "registry key" is the equivalent of a file-system path into the registry. Each > key may contain "subkeys" (keys nested within keys) and "values" (named and > typed attributes attached to a key). > > ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in user, > and this user can generally read and write all settings under this root. > > ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally, any > user can read these settings but only administrators can modify them. It is > typical for values under ``HKEY_CURRENT_USER`` to take precedence over those in > ``HKEY_LOCAL_MACHINE``. > > On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a special key > that 32-bit processes transparently read and write to rather than accessing the > ``Software`` key directly. > > Structure > ========= > > We consider there to be a single collection of Python environments on a machine, > where the collection may be different for each user of the machine. There are > three potential registry locations where the collection may be stored based on > the installation options of each environment:: > > HKEY_CURRENT_USER\Software\Python\<Company>\<Tag> > HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag> > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag> > > Environments are uniquely identified by their Company-Tag pair, with two options > for conflict resolution: include everything, or give priority to user > preferences. > > Tools that include every installed environment, even where the Company-Tag pairs > match, should ensure users can easily identify whether the registration was > per-user or per-machine. > > When tools are selecting a single installed environment from all registered > environments, the intent is that user preferences from ``HKEY_CURRENT_USER`` > will override matching Company-Tag pairs in ``HKEY_LOCAL_MACHINE``. > > Official Python releases use ``PythonCore`` for Company, and the value of > ``sys.winver`` for Tag.
Other registered environments may use any values for > Company and Tag. Recommendations are made in the following sections. > > Python environments are not required to register themselves unless they want to > be automatically discoverable by external tools. > > Backwards Compatibility > ----------------------- > > Python 3.4 and earlier did not distinguish between 32-bit and 64-bit builds in > ``sys.winver``. As a result, it is possible to have valid side-by-side > installations of both 32-bit and 64-bit interpreters. > > To ensure backwards compatibility, applications should treat environments listed > under the following two registry keys as distinct, even when the Tag matches:: > > HKEY_LOCAL_MACHINE\Software\Python\PythonCore\<Tag> > HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\PythonCore\<Tag> > > Environments listed under ``HKEY_CURRENT_USER`` may be treated as distinct from > both of the above keys, potentially resulting in three environments discovered > using the same Tag. Alternatively, a tool may determine whether the per-user > environment is 64-bit or 32-bit and give it priority over the per-machine > environment, resulting in a maximum of two discovered environments. > > It is not possible to detect side-by-side installations of both 64-bit and > 32-bit versions of Python prior to 3.5 when they have been installed for the > current user. Python 3.5 and later always uses different Tags for 64-bit and > 32-bit versions. > > Environments registered under other Company names must use distinct Tags to > support side-by-side installations. Tools consuming these registrations are > not required to disambiguate tags other than by preferring the user's setting. > > Company > ------- > > The Company part of the key is intended to group related environments and to > ensure that Tags are namespaced appropriately. The key name should be > alphanumeric without spaces and likely to be unique.
For example, a trademarked > name, a UUID, or a hostname would be appropriate:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > HKEY_CURRENT_USER\Software\Python\6C465E66-5A8C-4942-9E6A-D29159480C60 > HKEY_CURRENT_USER\Software\Python\www.example.com > > The company name ``PyLauncher`` is reserved for the PEP 397 launcher > (``py.exe``). It does not follow this convention and should be ignored by tools. > > If a string value named ``DisplayName`` exists, it should be used to identify > the environment category to users. Otherwise, the name of the key should be > used. > > If a string value named ``SupportUrl`` exists, it may be displayed or otherwise > used to direct users to a web site related to the environment. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp > (Default) = (value not set) > DisplayName = "Example Corp" > SupportUrl = "http://www.example.com" > > Tag > --- > > The Tag part of the key is intended to uniquely identify an environment within > those provided by a single company. The key name should be alphanumeric without > spaces and stable across installations. For example, the Python language > version, a UUID or a partial/complete hash would be appropriate; an integer > counter that increases for each new environment may not:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6 > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 > > If a string value named ``DisplayName`` exists, it should be used to identify > the environment to users. Otherwise, the name of the key should be used. > > If a string value named ``SupportUrl`` exists, it may be displayed or otherwise > used to direct users to a web site related to the environment. > > If a string value named ``Version`` exists, it should be used to identify the > version of the environment. This is independent from the version of Python > implemented by the environment. 
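[Editorial aside: the selection rule quoted above under "Structure" (a per-user registration overrides a per-machine registration with the same Company-Tag pair) can be sketched in pure Python. The registration tuples below are hypothetical rather than read from a real registry, and the Wow6432Node/32-bit distinction is ignored for brevity.]

```python
# Sketch of PEP 514's conflict-resolution rule: when a tool picks one
# environment per Company-Tag pair, an HKEY_CURRENT_USER registration
# wins over an HKEY_LOCAL_MACHINE registration with the same pair.

def select_environments(registrations):
    """Map each (Company, Tag) pair to the winning registration."""
    chosen = {}
    for hive, company, tag in registrations:
        key = (company, tag)
        # Take the first registration seen; let a per-user one override it.
        if key not in chosen or hive == "HKEY_CURRENT_USER":
            chosen[key] = (hive, company, tag)
    return chosen

# Hypothetical registrations, for illustration only:
regs = [
    ("HKEY_LOCAL_MACHINE", "PythonCore", "3.6"),
    ("HKEY_CURRENT_USER", "PythonCore", "3.6"),
    ("HKEY_LOCAL_MACHINE", "ExampleCorp", "6C465E66"),
]
chosen = select_environments(regs)
# The per-user PythonCore/3.6 entry shadows the per-machine one,
# while the ExampleCorp entry is kept as-is.
```

A tool that instead wants to "include everything" would simply skip the override step and list all three registrations, flagging each as per-user or per-machine.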
> > If a string value named ``SysVersion`` exists, it must be in ``x.y`` or > ``x.y.z`` format matching the version returned by ``sys.version_info`` in the > interpreter. Otherwise, if the Tag matches this format it is used. If not, the > Python version is unknown. > > Note that each of these values is recommended, but optional. A complete example > may look like this:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66 > (Default) = (value not set) > DisplayName = "Distro 3" > SupportUrl = "http://www.example.com/distro-3" > Version = "3.0.12345.0" > SysVersion = "3.6.0" > > InstallPath > ----------- > > Beneath the environment key, an ``InstallPath`` key must be created. This key is > always named ``InstallPath``, and the default value must match ``sys.prefix``:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\3.6\InstallPath > (Default) = "C:\ExampleCorpPy36" > > If a string value named ``ExecutablePath`` exists, it must be a path to the > ``python.exe`` (or equivalent) executable. Otherwise, the interpreter executable > is assumed to be called ``python.exe`` and exist in the directory referenced by > the default value. > > If a string value named ``WindowedExecutablePath`` exists, it must be a path to > the ``pythonw.exe`` (or equivalent) executable. Otherwise, the windowed > interpreter executable is assumed to be called ``pythonw.exe`` and exist in the > directory referenced by the default value. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\InstallPath > (Default) = "C:\ExampleDistro30" > ExecutablePath = "C:\ExampleDistro30\ex_python.exe" > WindowedExecutablePath = "C:\ExampleDistro30\ex_pythonw.exe" > > Help > ---- > > Beneath the environment key, a ``Help`` key may be created. This key is always > named ``Help`` if present and has no default value. > > Each subkey of ``Help`` specifies a documentation file, tool, or URL associated > with the environment. 
The subkey may have any name, and the default value is a > string appropriate for passing to ``os.startfile`` or equivalent. > > If a string value named ``DisplayName`` exists, it should be used to identify > the help file to users. Otherwise, the key name should be used. > > A complete example may look like:: > > HKEY_CURRENT_USER\Software\Python\ExampleCorp\6C465E66\Help > Python\ > (Default) = "C:\ExampleDistro30\python36.chm" > DisplayName = "Python Documentation" > Extras\ > (Default) = "http://www.example.com/tutorial" > DisplayName = "Example Distro Online Tutorial" > > Other Keys > ---------- > > Some other registry keys are used for defining or inferring search paths under > certain conditions. A third-party installation is permitted to define these keys > under their Company-Tag key, however, the interpreter must be modified and > rebuilt in order to read these values. Alternatively, the interpreter may be > modified to not use any registry keys for determining search paths. Making such > changes is a decision for the third party; this PEP makes no recommendation > either way. > > Copyright > ========= > > This document has been placed in the public domain. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/barry%40barrys-emacs.org > From lele at metapensiero.it Thu Jul 28 13:55:12 2016 From: lele at metapensiero.it (Lele Gaifax) Date: Thu, 28 Jul 2016 19:55:12 +0200 Subject: [Python-Dev] Supporting native backup facility of SQLite References: <877fcsv6ya.fsf@metapensiero.it> <87y458tbbh.fsf@metapensiero.it> <87twfwt7sf.fsf@metapensiero.it> Message-ID: <87lh0llkan.fsf@metapensiero.it> Lele Gaifax writes: > Paul Moore writes: > >> If you were interested in doing that, I'd suggest opening a tracker issue >> with a patch. > > Excellent, will do that, thank you for the encouragement! 
See http://bugs.python.org/issue27645 Thank you in advance for any feedback! ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From p.f.moore at gmail.com Thu Jul 28 15:11:49 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Jul 2016 20:11:49 +0100 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: <5072FE8B-1528-496E-AC94-1CB9C8DEAF54@barrys-emacs.org> References: <5072FE8B-1528-496E-AC94-1CB9C8DEAF54@barrys-emacs.org> Message-ID: On 28 July 2016 at 17:51, Barry Scott wrote: > Why do you need SysArchitecture? Surely the 32bit pythons are registered in the 32bit registry and the 64 bit pythons in the 64 bit registry. Per-user installs go in HKEY_CURRENT_USER, which is not architecture-specific. So you either need SysArchitecture, or you have to leave it as "unknown". Paul From steve.dower at python.org Thu Jul 28 16:20:54 2016 From: steve.dower at python.org (Steve Dower) Date: Thu, 28 Jul 2016 13:20:54 -0700 Subject: [Python-Dev] PEP 514: Python registration in the Windows registry In-Reply-To: <5072FE8B-1528-496E-AC94-1CB9C8DEAF54@barrys-emacs.org> References: <5072FE8B-1528-496E-AC94-1CB9C8DEAF54@barrys-emacs.org> Message-ID: The 3.4 issue where ordering matters is a bug in the MSI, incidentally (there's an extra upgrade code in one of them). Python 3.5 does not have the issue and side by side works correctly for both per-user and all-user installs. Top-posted from my Windows Phone -----Original Message----- From: "Barry Scott" Sent: 7/28/2016 10:19 To: "Steve Dower" Cc: "Python Dev" Subject: Re: [Python-Dev] PEP 514: Python registration in the Windows registry Why do you need SysArchitecture? Surely the 32bit pythons are registered in the 32bit registry and the 64 bit pythons in the 64 bit registry.
you can side by side install python 3.4 but only if you install 64 bit first then 32 bit second. Barry > On 15 Jul 2016, at 23:20, Steve Dower wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkb.teichmann at gmail.com Fri Jul 29 11:01:01 2016 From: lkb.teichmann at gmail.com (Martin Teichmann) Date: Fri, 29 Jul 2016 17:01:01 +0200 Subject: [Python-Dev] PEP487: Simpler customization of class creation In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID: Hello, there has been quite some discussion on why PEP 487's __init_subclass__ initializes subclasses, and not the class itself. I think the most important details have been already thoroughly discussed here. One thing I was missing in the discussion is practical examples. I have been using PEP 487-like metaclasses since several years now, and I have never come across an application where it would have even been nice to initialize the class itself. Also, while researching other people's code when I wrote PEP 487, I couldn't find any such code elsewhere, yet I found a lot of code where people took the wildest measure to prevent a metaclass in doing its job on the first class it is used for. One example is enum.EnumMeta, which contains code not to make enum.Enum an enum (I do not try to propose that the enum module should use PEP 487, it's way to complicated for that). So once we have a practical example, we could start discussing how to mitigate the problem. Btw, everyone is still invited to review the patch for PEP 487 at http://bugs.python.org/issue27366. Many thanks to Nick who already reviewed, and also to Guido who left helpful comments!
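[Editorial aside: Nick's earlier message showed the PEP 487 pattern in outline; here is a minimal runnable sketch of it. The `decorate_class` helper is hypothetical, invented for illustration, and the example requires Python 3.6+, where `__init_subclass__` is supported.]

```python
def decorate_class(cls):
    # Hypothetical class decorator: tag each class it processes.
    cls.decorated_as = cls.__name__
    return cls

@decorate_class                 # applied explicitly to the base class
class MyClass:
    def __init_subclass__(cls, **kwds):
        super().__init_subclass__(**kwds)
        decorate_class(cls)     # applied implicitly to every subclass

class Child(MyClass):
    pass

# MyClass was decorated by the explicit decorator; Child was decorated
# implicitly when the __init_subclass__ hook ran at class creation.
```

Note that `__init_subclass__` is implicitly a classmethod and, as discussed in this thread, is not called for the class that defines it, which is exactly why the base class still needs the explicit decorator.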
Greetings

Martin

From sylvain.corlay at gmail.com Fri Jul 29 11:49:39 2016
From: sylvain.corlay at gmail.com (Sylvain Corlay)
Date: Fri, 29 Jul 2016 17:49:39 +0200
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To: References: <5794ED64.20408@stoneleaf.us> Message-ID:

In the traitlets library I mentioned earlier, we do have a need for this. The corresponding function is called `setup_class`. What it does is set some class attributes that are required for certain types of descriptors to be able to initialize themselves.

class MetaHasTraits(MetaHasDescriptors):
    """A metaclass for HasTraits."""

    def setup_class(cls, classdict):
        cls._trait_default_generators = {}
        super(MetaHasTraits, cls).setup_class(classdict)

Sylvain

On Fri, Jul 29, 2016 at 5:01 PM, Martin Teichmann wrote: > Hello, > > there has been quite some discussion on why PEP 487's > __init_subclass__ initializes subclasses, and not the class itself. I > think the most important details have been already thoroughly > discussed here. > > One thing I was missing in the discussion is practical examples. I > have been using PEP 487-like metaclasses since several years now, and > I have never come across an application where it would have even been > nice to initialize the class itself. Also, while researching other > people's code when I wrote PEP 487, I couldn't find any such code > elsewhere, yet I found a lot of code where people took the wildest > measure to prevent a metaclass in doing its job on the first class it > is used for. One example is enum.EnumMeta, which contains code not to > make enum.Enum an enum (I do not try to propose that the enum module > should use PEP 487, it's way to complicated for that). > > So once we have a practical example, we could start discussing how to > mitigate the problem. > > Btw, everyone is still invited to review the patch for PEP 487 at > http://bugs.python.org/issue27366.
Many thanks to Nick who already > reviewed, and also to Guido who left helpful comments! > > Greetings > > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/sylvain.corlay%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Jul 29 12:08:48 2016 From: status at bugs.python.org (Python tracker) Date: Fri, 29 Jul 2016 18:08:48 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20160729160848.1747A5666B@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2016-07-22 - 2016-07-29) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5588 ( +2) closed 33818 (+56) total 39406 (+58) Open issues with patches: 2442 Issues opened (41) ================== #24773: Implement PEP 495 (Local Time Disambiguation) http://bugs.python.org/issue24773 reopened by martin.panter #27593: Deprecate sys._mercurial and create sys._git http://bugs.python.org/issue27593 opened by brett.cannon #27594: Assertion failure when running "test_ast" tests with coverage. 
http://bugs.python.org/issue27594 opened by ap #27595: Document PEP 495 (Local Time Disambiguation) features http://bugs.python.org/issue27595 opened by belopolsky #27596: Build failure with Xcode 8 beta on OSX 10.11 http://bugs.python.org/issue27596 opened by ronaldoussoren #27597: Add usage examples for TracebackException, StackSummary and Fr http://bugs.python.org/issue27597 opened by cool-RR #27598: Add SizedIterable to collections.abc and typing http://bugs.python.org/issue27598 opened by brett.cannon #27599: Buffer overrun in binascii http://bugs.python.org/issue27599 opened by serhiy.storchaka #27602: Enable py launcher to launch repository Python. http://bugs.python.org/issue27602 opened by terry.reedy #27603: Migrate IDLE context menu items to shell extension http://bugs.python.org/issue27603 opened by steve.dower #27604: More details about `-O` flag http://bugs.python.org/issue27604 opened by cool-RR #27605: Inconsistent calls to __eq__ from built-in __contains__ http://bugs.python.org/issue27605 opened by vfaronov #27606: Android cross-built for armv5te with clang and '-mthumb' crash http://bugs.python.org/issue27606 opened by xdegaye #27607: Importing the main module twice leads to two incompatible inst http://bugs.python.org/issue27607 opened by SylvieLorxu #27609: IDLE completions: format, factor, and fix http://bugs.python.org/issue27609 opened by terry.reedy #27611: test_tix cannot import _default_root after test_idle http://bugs.python.org/issue27611 opened by martin.panter #27612: socket.gethostbyname resolving octal IP addresses incorrectly http://bugs.python.org/issue27612 opened by mattrobenolt #27613: Empty iterator is rendered as a single bracket ] when using js http://bugs.python.org/issue27613 opened by altvod #27614: Race in test_docxmlrpc.py http://bugs.python.org/issue27614 opened by earl.chew #27618: docs for threading.Lock claim it's a class (since 3.3), but it http://bugs.python.org/issue27618 opened by gvanrossum #27619: getopt 
should strip whitespace from long arguments http://bugs.python.org/issue27619 opened by steven.daprano #27620: IDLE: Add keyboard equivalents for mouse actions. http://bugs.python.org/issue27620 opened by terry.reedy #27621: Finish IDLE Query dialog appearance and behavior. http://bugs.python.org/issue27621 opened by serhiy.storchaka #27623: int.to_bytes() and int.from_bytes(): raise ValueError when byt http://bugs.python.org/issue27623 opened by mmarkk #27627: clang fails to build ctypes on Android armv7 http://bugs.python.org/issue27627 opened by xdegaye #27628: ipaddress incompatibility with ipaddr: __contains__ between ne http://bugs.python.org/issue27628 opened by lukasz.langa #27629: Cannot create ssl.SSLSocket without existing socket http://bugs.python.org/issue27629 opened by nemunaire #27630: Generator._encoded_EMTPY misspelling in email package http://bugs.python.org/issue27630 opened by martin.panter #27632: build on AIX fails when builddir != srcdir, more than bad path http://bugs.python.org/issue27632 opened by Michael.Felt #27635: pickle documentation says that unpickling may not call __new__ http://bugs.python.org/issue27635 opened by july #27636: Refactor IDLE htest http://bugs.python.org/issue27636 opened by terry.reedy #27637: int.to_bytes(-1, ...) 
should automatically choose required cou http://bugs.python.org/issue27637 opened by mmarkk #27638: int.to_bytes() and int.from_bytes() should support 'net' and ' http://bugs.python.org/issue27638 opened by mmarkk #27639: UserList.__getitem__ doesn't account for slices http://bugs.python.org/issue27639 opened by staticshock #27640: add the '--disable-test-suite' option to configure http://bugs.python.org/issue27640 opened by xdegaye #27641: Do not build Programs/_freeze_importlib when cross-compiling http://bugs.python.org/issue27641 opened by thomas.perl #27643: test_ctypes fails on AIX with xlc http://bugs.python.org/issue27643 opened by Michael.Felt #27644: Expand documentation about type aliases and NewType in the typ http://bugs.python.org/issue27644 opened by michael0x2a #27645: Supporting native backup facility of SQLite http://bugs.python.org/issue27645 opened by lelit #27646: yield from expression can be any iterable http://bugs.python.org/issue27646 opened by terry.reedy #27647: Update Windows build to Tcl/Tk 8.6.6 http://bugs.python.org/issue27647 opened by serhiy.storchaka Most recent 15 issues with no replies (15) ========================================== #27646: yield from expression can be any iterable http://bugs.python.org/issue27646 #27636: Refactor IDLE htest http://bugs.python.org/issue27636 #27635: pickle documentation says that unpickling may not call __new__ http://bugs.python.org/issue27635 #27632: build on AIX fails when builddir != srcdir, more than bad path http://bugs.python.org/issue27632 #27599: Buffer overrun in binascii http://bugs.python.org/issue27599 #27597: Add usage examples for TracebackException, StackSummary and Fr http://bugs.python.org/issue27597 #27595: Document PEP 495 (Local Time Disambiguation) features http://bugs.python.org/issue27595 #27594: Assertion failure when running "test_ast" tests with coverage. 
http://bugs.python.org/issue27594
#27593: Deprecate sys._mercurial and create sys._git http://bugs.python.org/issue27593
#27589: asyncio doc: issue in as_completed() doc http://bugs.python.org/issue27589
#27584: New addition of vSockets to the python socket module http://bugs.python.org/issue27584
#27566: Tools/freeze/winmakemakefile.py clean target should use 'del' http://bugs.python.org/issue27566
#27565: Offer error context manager for code.interact http://bugs.python.org/issue27565
#27534: IDLE: Reduce number and time for user process imports http://bugs.python.org/issue27534
#27530: Non-Critical Compiler WARNING: Python Embedding C++11 does not http://bugs.python.org/issue27530

Most recent 15 issues waiting for review (15)
=============================================

#27646: yield from expression can be any iterable http://bugs.python.org/issue27646
#27645: Supporting native backup facility of SQLite http://bugs.python.org/issue27645
#27644: Expand documentation about type aliases and NewType in the typ http://bugs.python.org/issue27644
#27641: Do not build Programs/_freeze_importlib when cross-compiling http://bugs.python.org/issue27641
#27640: add the '--disable-test-suite' option to configure http://bugs.python.org/issue27640
#27639: UserList.__getitem__ doesn't account for slices http://bugs.python.org/issue27639
#27629: Cannot create ssl.SSLSocket without existing socket http://bugs.python.org/issue27629
#27623: int.to_bytes() and int.from_bytes(): raise ValueError when byt http://bugs.python.org/issue27623
#27621: Finish IDLE Query dialog appearance and behavior.
http://bugs.python.org/issue27621
#27619: getopt should strip whitespace from long arguments http://bugs.python.org/issue27619
#27614: Race in test_docxmlrpc.py http://bugs.python.org/issue27614
#27611: test_tix cannot import _default_root after test_idle http://bugs.python.org/issue27611
#27604: More details about `-O` flag http://bugs.python.org/issue27604
#27587: Issues, reported by PVS-Studio static analyzer http://bugs.python.org/issue27587
#27584: New addition of vSockets to the python socket module http://bugs.python.org/issue27584

Top 10 most discussed issues (10)
=================================

#24773: Implement PEP 495 (Local Time Disambiguation) http://bugs.python.org/issue24773 29 msgs
#26462: Patch to enhance literal block language declaration http://bugs.python.org/issue26462 23 msgs
#27612: socket.gethostbyname resolving octal IP addresses incorrectly http://bugs.python.org/issue27612 18 msgs
#26852: add the '--enable-sourceless-distribution' option to configure http://bugs.python.org/issue26852 14 msgs
#27546: Integrate tkinter and asyncio (and async) http://bugs.python.org/issue27546 13 msgs
#27607: Importing the main module twice leads to two incompatible inst http://bugs.python.org/issue27607 13 msgs
#27604: More details about `-O` flag http://bugs.python.org/issue27604 12 msgs
#26851: android compilation and link flags http://bugs.python.org/issue26851 11 msgs
#27619: getopt should strip whitespace from long arguments http://bugs.python.org/issue27619 11 msgs
#27623: int.to_bytes() and int.from_bytes(): raise ValueError when byt http://bugs.python.org/issue27623 11 msgs

Issues closed (54)
==================

#7063: Memory errors in array.array http://bugs.python.org/issue7063 closed by martin.panter
#8623: Aliasing warnings in socketmodule.c http://bugs.python.org/issue8623 closed by martin.panter
#10036: compiler warnings for various modules on Linux buildslaves http://bugs.python.org/issue10036 closed by martin.panter
#10965: dev task of
documenting undocumented APIs http://bugs.python.org/issue10965 closed by brett.cannon
#11048: "import ctypes" causes segfault on read-only filesystem http://bugs.python.org/issue11048 closed by haypo
#13849: Add tests for NUL checking in certain strs http://bugs.python.org/issue13849 closed by berker.peksag
#14218: include rendered output in addition to markup http://bugs.python.org/issue14218 closed by brett.cannon
#15661: OS X installer packages should be signed for OS X 10.8 Gatekee http://bugs.python.org/issue15661 closed by ned.deily
#16930: mention limitations and/or alternatives to hg graft http://bugs.python.org/issue16930 closed by brett.cannon
#16931: mention work-around to create diffs in default/non-git mode http://bugs.python.org/issue16931 closed by brett.cannon
#17227: devguide: buggy heading numbers http://bugs.python.org/issue17227 closed by brett.cannon
#17596: mingw: add wincrypt.h in Python/random.c http://bugs.python.org/issue17596 closed by martin.panter
#18041: mention issues with code churn in the devguide http://bugs.python.org/issue18041 closed by brett.cannon
#20851: Update devguide to cover testing from a tarball http://bugs.python.org/issue20851 closed by brett.cannon
#22645: Unable to install Python 3.4.2 amd64 on Windows 8.1 Update 1 http://bugs.python.org/issue22645 closed by berker.peksag
#23320: devguide should mention rules about "paragraph reflow" in the http://bugs.python.org/issue23320 closed by brett.cannon
#23951: Update devguide style to use a similar theme as Docs http://bugs.python.org/issue23951 closed by brett.cannon
#24016: Add a Sprints organization/preparation section to devguide http://bugs.python.org/issue24016 closed by brett.cannon
#24682: Add Quick Start: Communications section to devguide http://bugs.python.org/issue24682 closed by brett.cannon
#24689: Add tips for effective online communication to devguide http://bugs.python.org/issue24689 closed by brett.cannon
#25431: implement address in network in
ipaddress module http://bugs.python.org/issue25431 closed by berker.peksag
#25966: Bug in asyncio.corotuines._format_coroutine http://bugs.python.org/issue25966 closed by berker.peksag
#26152: A non-breaking space in a source http://bugs.python.org/issue26152 closed by ncoghlan
#26662: configure/Makefile doesn't check if "python" command works, ne http://bugs.python.org/issue26662 closed by xdegaye
#26974: Crash in Decimal.from_float http://bugs.python.org/issue26974 closed by skrah
#27130: zlib: OverflowError while trying to compress 2^32 bytes or mor http://bugs.python.org/issue27130 closed by martin.panter
#27131: Unit test random shuffle http://bugs.python.org/issue27131 closed by rhettinger
#27250: Add os.urandom_block() http://bugs.python.org/issue27250 closed by haypo
#27266: Always use getrandom() in os.random() on Linux and add block=F http://bugs.python.org/issue27266 closed by haypo
#27404: Misc/NEWS: add [Security] prefix to Python 3.5.2 changelog http://bugs.python.org/issue27404 closed by haypo
#27454: PyUnicode_InternInPlace can use PyDict_SetDefault http://bugs.python.org/issue27454 closed by berker.peksag
#27490: Do not run pgen when it is not going to be used (cross-compili http://bugs.python.org/issue27490 closed by martin.panter
#27493: logging module fails with unclear error when supplied a (Posix http://bugs.python.org/issue27493 closed by python-dev
#27579: Add a tutorial for AsyncIO in the documentation http://bugs.python.org/issue27579 closed by haypo
#27581: Fix overflow check in PySequence_Tuple http://bugs.python.org/issue27581 closed by martin.panter
#27591: multiprocessing: Possible uninitialized pointer use in Windows http://bugs.python.org/issue27591 closed by berker.peksag
#27592: FIPS_mode() and FIPS_mode_set() functions in Python (ssl) http://bugs.python.org/issue27592 closed by r.david.murray
#27600: Spam http://bugs.python.org/issue27600 closed by ebarry
#27601: Minor inaccuracy in hash documentation
http://bugs.python.org/issue27601 closed by magniff
#27608: Something wrong with str.upper().lower() chain? http://bugs.python.org/issue27608 closed by magniff
#27610: Add PEP 514 metadata to Windows installer http://bugs.python.org/issue27610 closed by steve.dower
#27615: IDLE's debugger steps into PyShell.py for calls to print() et http://bugs.python.org/issue27615 closed by terry.reedy
#27616: filedialog.askdirectory inconsistent on Windows between return http://bugs.python.org/issue27616 closed by serhiy.storchaka
#27617: Compiled bdist_wininst missing from embedded distribution http://bugs.python.org/issue27617 closed by steve.dower
#27622: int.to_bytes(): docstring is not precise http://bugs.python.org/issue27622 closed by mmarkk
#27624: unclear documentation on Queue.qsize() http://bugs.python.org/issue27624 closed by rhettinger
#27625: "make install" fails when no zlib support available http://bugs.python.org/issue27625 closed by SilentGhost
#27626: Spelling fixes http://bugs.python.org/issue27626 closed by martin.panter
#27631: .exe is appended to python executable based on filesystem case http://bugs.python.org/issue27631 closed by ammar2
#27633: Doc: Add missing version info to email.parser http://bugs.python.org/issue27633 closed by berker.peksag
#27634: selectors.SelectSelectors fails if select.select was patched http://bugs.python.org/issue27634 closed by brett.cannon
#27642: import and __import__() fails silently without a ImportError a http://bugs.python.org/issue27642 closed by ebarry
#27648: Message of webbrowser.py something wrong.
http://bugs.python.org/issue27648 closed by r.david.murray
#27649: multiprocessing on Windows does not properly manage class attr http://bugs.python.org/issue27649 closed by r.david.murray

From lkb.teichmann at gmail.com Fri Jul 29 12:35:30 2016
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 29 Jul 2016 18:35:30 +0200
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To:
References: <5794ED64.20408@stoneleaf.us>
Message-ID:

Hi Sylvain,

thanks for the example; it is a great one to illustrate PEP 487 and
its design decisions.

> What it does is setting some class attributes that are required for certain
> types of descriptors to be able to initialize themselves.
>
> class MetaHasTraits(MetaHasDescriptors):
>
>     """A metaclass for HasTraits."""
>     def setup_class(cls, classdict):
>         cls._trait_default_generators = {}
>         super(MetaHasTraits, cls).setup_class(classdict)

While a metaclass solution requires the metaclass to execute code on
the first class it is used for, a PEP 487 solution does not. A PEP 487
class HasTraits (no Meta here) would not have trait descriptors
itself; the classes inheriting from it would have the traits to be
initialized. The PEP 487 HasTraits takes over the role of the
metaclass.

A completely different problem shows up here: in your example,
HasTraits needs to initialize things on the class before the
descriptors are run. This is not possible with PEP 487, where the
descriptors are initialized before __init_subclass__ is even called.
There are two ways to mitigate that problem:

- the first initialized descriptor could do the necessary
  initialization, or
- the descriptors could be initialized from within __init_subclass__.

At first sight, the first option seems hackish and the second option
looks much saner. Nonetheless PEP 487 proposes the first solution. The
rationale is that people tend to forget to call super(), and then
suddenly descriptors don't work anymore.
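To make the trade-off concrete, here is a minimal sketch of the descriptor-driven option, in which each descriptor's __set_name__ hook (which Python 3.6+ runs before __init_subclass__) lazily creates the per-class storage itself; Trait, HasTraits and the attribute names below are illustrative inventions, not code from traitlets or from the PEP 487 reference implementation:

```python
class Trait:
    """A descriptor that sets up its own per-class storage."""

    def __init__(self, default=None):
        self.default = default

    def __set_name__(self, owner, name):
        # Called once per descriptor during class creation, before
        # __init_subclass__ runs on the parent class.
        self.name = name
        # The first descriptor on each class creates the storage, so
        # no metaclass (and no base-class setup step) is required.
        if '_trait_defaults' not in vars(owner):
            owner._trait_defaults = {}
        owner._trait_defaults[name] = self.default

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name, self.default)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value


class HasTraits:
    def __init_subclass__(cls, **kwargs):
        # Runs after every descriptor's __set_name__, so it is the
        # right place for code that must see all descriptors at once.
        super().__init_subclass__(**kwargs)
        cls._traits_finalized = True


class Point(HasTraits):
    x = Trait(0)
    y = Trait(0)
```

With this arrangement Trait works on any class; inheriting from HasTraits is only needed when some finalizing step has to run after the last descriptor.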
I realized later there is another benefit to this: if the first
initialized descriptor does the class initialization, often
__init_subclass__ isn't needed at all anymore, which means that this
kind of descriptor can be used on any class, without the need to tell
users that they have to inherit from a certain base class for the
descriptors to work. Only if some finalizing code needs to run after
the last descriptor is initialized does one need to write an
__init_subclass__. This is unavoidable, as the last descriptor doesn't
know that it is the last.

Greetings

Martin

From alexander.belopolsky at gmail.com Fri Jul 29 12:55:41 2016
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 29 Jul 2016 12:55:41 -0400
Subject: [Python-Dev] Method signatures in the datetime module documentation
Message-ID:

I have started [1] writing documentation for the new PEP 495 (Local
Time Disambiguation) features and ran into the following problem. The
current documentation is rather inconsistent in presenting method
signatures. For example: date.replace(year, month, day) [2], but
datetime.replace([year[, month[, day[, hour[, minute[, second[,
microsecond[, tzinfo]]]]]]]]) [3].

The new signature for datetime.replace in the Python implementation is

    def replace(self, hour=None, minute=None, second=None,
                microsecond=None, tzinfo=True, *, fold=None):

but the C implementation does not accept True for tzinfo or None for
the other arguments. The defaults in the Python implementation are
really just sentinels to detect which arguments are not provided.

How should I present the signature of the new replace method in the
documentation?
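The sentinel idiom described above can be sketched as follows; Moment and _MISSING are made-up illustrative names, not the actual datetime implementation (which covers more fields and is also written in C):

```python
_MISSING = object()  # private marker meaning "argument was not provided"


class Moment:
    """Toy stand-in for a datetime-like class."""

    def __init__(self, hour, minute, tzinfo=None):
        self.hour, self.minute, self.tzinfo = hour, minute, tzinfo

    def replace(self, hour=_MISSING, minute=_MISSING, tzinfo=_MISSING):
        # Sentinel defaults distinguish "not passed" from meaningful
        # values, so even tzinfo=None can be passed explicitly.
        if hour is _MISSING:
            hour = self.hour
        if minute is _MISSING:
            minute = self.minute
        if tzinfo is _MISSING:
            tzinfo = self.tzinfo
        return Moment(hour, minute, tzinfo)
```

The documentation problem is that the sentinel is an implementation detail: a documented signature should say each field "defaults to unchanged" rather than expose _MISSING (or None/True) as the default value.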
[1]: http://bugs.python.org/issue27595
[2]: https://docs.python.org/3/library/datetime.html#datetime.date.replace
[3]: https://docs.python.org/3/library/datetime.html#datetime.datetime.replace

From ethan at stoneleaf.us Fri Jul 29 13:23:50 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 29 Jul 2016 10:23:50 -0700
Subject: [Python-Dev] PEP487: Simpler customization of class creation
In-Reply-To:
References: <5794ED64.20408@stoneleaf.us>
Message-ID: <579B9126.1080309@stoneleaf.us>

On 07/29/2016 08:01 AM, Martin Teichmann wrote:

> ... Also, while researching other people's code when I wrote PEP 487,
> I couldn't find any such code elsewhere, yet I found a lot of code
> where people took the wildest measure to prevent a metaclass in doing
> its job on the first class it is used for. One example is
> enum.EnumMeta, which contains code not to make enum.Enum an enum ...

Actually, enum.Enum is an enum. The guards are there because part of
creating a new Enum class is searching for the previous Enum classes,
and of course the very first time through there is no previous Enum
class.

My apologies if I have misunderstood what you were trying to say.

--
~Ethan~
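The first-class guard Ethan describes can be sketched with a toy metaclass; RegistryMeta, Base and _registry are made-up names for illustration, and the real enum.EnumMeta is considerably more involved:

```python
class RegistryMeta(type):
    """Metaclass that must special-case the first class it creates,
    analogous to how enum.EnumMeta special-cases enum.Enum."""

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # Search the bases for a previous class built with this
        # metaclass; the very first class has none, hence the guard.
        parent = next((b for b in bases if isinstance(b, RegistryMeta)), None)
        if parent is None:
            cls._registry = []            # first class: create storage
        else:
            parent._registry.append(cls)  # later classes: register
        return cls


class Base(metaclass=RegistryMeta):
    pass


class Child(Base):
    pass
```

Here Base plays the role of Enum: it cannot register itself against an earlier class, so the metaclass must detect that case and do the one-time setup instead.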