From christian at python.org Wed Nov 1 08:46:27 2017
From: christian at python.org (Christian Heimes)
Date: Wed, 1 Nov 2017 13:46:27 +0100
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: 
References: 
Message-ID: 

On 2017-11-01 10:07, Thiebaud Weksteen wrote:
> Hi,
>
> UEFI has become the standard for firmware (BIOS) interface. Intel has provided an open source implementation under the name EDK2 (part of the TianoCore initiative) [1] for some time. This implementation has evolved significantly and now provides the functionalities of a small OS with a standard library similar to POSIX.
>
> In 2011, a port of Python 2.7.1 was added to the EDK2 repository [2]. This port then evolved to 2.7.2 which is still defined as the reference port [3]. In 2015, another port was added of Python 2.7.10 in parallel with 2.7.2 [4]. Since then, both implementations have diverged from upstream and known vulnerabilities have not been fixed.
>
> I would like to bring support for edk2 in the official Python repository to remediate this situation, that is, officially support edk2 as a platform. Technically, there would be three main aspects for the on-boarding work:
>
> 1) Fix headers and source to resolve definition conflicts, similarly to the ABS definition in [5];

https://gist.github.com/tweksteen/ed516ca7ab7dfa8d18428f59d9c22a3e is a low-hanging fruit, e.g. ABS() should be replaced with Py_ABS() from pymacro.h. Why did you have to replace non-ASCII chars in comments? Does your build chain not support comments with UTF-8 chars?

> 2) Add the edk2module.c [6] to handle platform-specific functionalities, similarly to the posixmodule.c;

edk2module.c duplicates a lot of functionality that is also exposed by posixmodule.c. We try to reduce duplicated code and functionality as much as possible. IMO the edk2 module should be folded into posixmodule.c.

> 3) Add the build configuration file [7] and necessary modifications within Python to handle the edk2 toolchain;

Once you are able to build master for EDK2, we need to figure out how to integrate the new build flavor into our CI and buildbot system. Is the EDK2 build chain available for free on Linux?

> This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.

In general your proposal sounds like a good idea. A new platform may require a PEP, though. You can start now by submitting pull requests for the header fixes. Even in the case we decide not to support EDK2, we make your life easier by reducing the amount of extra patches.

Christian

From solipsis at pitrou.net Wed Nov 1 08:54:59 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 1 Nov 2017 13:54:59 +0100
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
References: 
Message-ID: <20171101135459.25233abb@fsol>

On Wed, 1 Nov 2017 13:46:27 +0100 Christian Heimes wrote:
> > > This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.
> >
> > In general your proposal sounds like a good idea. A new platform may require a PEP, though.

It would also require a maintainer and a maintenance promise for several years (5 or 10? I don't know). I doubt any other core developers are interested in/equipped for dealing with EDK2 issues, regressions and subtleties.

> You can start now by submitting pull requests for the header fixes.
> Even in the case we decide not to support EDK2, we make your life easier by reducing the amount of extra patches.

Agreed.

Regards

Antoine.

From tweek at google.com Wed Nov 1 05:07:26 2017
From: tweek at google.com (Thiebaud Weksteen)
Date: Wed, 1 Nov 2017 10:07:26 +0100
Subject: [Python-Dev] Official port of Python on EDK2
Message-ID: 

Hi,

UEFI has become the standard for firmware (BIOS) interface. Intel has provided an open source implementation under the name EDK2 (part of the TianoCore initiative) [1] for some time. This implementation has evolved significantly and now provides the functionalities of a small OS with a standard library similar to POSIX.

In 2011, a port of Python 2.7.1 was added to the EDK2 repository [2]. This port then evolved to 2.7.2 which is still defined as the reference port [3]. In 2015, another port was added of Python 2.7.10 in parallel with 2.7.2 [4]. Since then, both implementations have diverged from upstream and known vulnerabilities have not been fixed.

I would like to bring support for edk2 in the official Python repository to remediate this situation, that is, officially support edk2 as a platform. Technically, there would be three main aspects for the on-boarding work:

1) Fix headers and source to resolve definition conflicts, similarly to the ABS definition in [5];
2) Add the edk2module.c [6] to handle platform-specific functionalities, similarly to the posixmodule.c;
3) Add the build configuration file [7] and necessary modifications within Python to handle the edk2 toolchain;

This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.

Thanks,
Thiebaud

[1] https://github.com/tianocore/edk2
[2] https://github.com/tianocore/edk2/commit/006fecd5a177b4b7b6b36fab6690bf2b2fa11829
[3] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/PythonReadMe.txt
[4] https://github.com/tianocore/edk2/commit/c8042e10763bca064df257547d04ae3dfcdfaf91
[5] https://gist.github.com/tweksteen/ed516ca7ab7dfa8d18428f59d9c22a3e
[6] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/Efi/edk2module.c
[7] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/PythonCore.inf

From shoyer at gmail.com Wed Nov 1 11:31:54 2017
From: shoyer at gmail.com (Stephan Hoyer)
Date: Wed, 01 Nov 2017 15:31:54 +0000
Subject: [Python-Dev] Review for bpo-30140: fix binop dispatch for subclasses
Message-ID: 

Hi python-dev,

It's been over a month without any activity and over a week since my ping, and I'm still waiting for a review on my pull request:

https://bugs.python.org/issue30140
https://github.com/python/cpython/pull/1325

I would greatly appreciate it if someone has time to take a look. This is my first pull request to CPython.

Thanks,
Stephan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brett at python.org Wed Nov 1 14:24:07 2017
From: brett at python.org (Brett Cannon)
Date: Wed, 01 Nov 2017 18:24:07 +0000
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: <20171101135459.25233abb@fsol>
References: <20171101135459.25233abb@fsol>
Message-ID: 

The official guidelines on what it takes to add official support for a platform are at https://www.python.org/dev/peps/pep-0011/#supporting-platforms. Basically it's a core dev willing to sponsor and maintain the work, a buildbot, and implicitly at least a 5-year commitment.
On Wed, 1 Nov 2017 at 05:55 Antoine Pitrou wrote:

> On Wed, 1 Nov 2017 13:46:27 +0100
> Christian Heimes wrote:
> > > This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.
> >
> > In general your proposal sounds like a good idea. A new platform may require a PEP, though.
>
> It would also require a maintainer and a maintenance promise for several years (5 or 10? I don't know). I doubt any other core developers are interested in/equipped for dealing with EDK2 issues, regressions and subtleties.
>
> > You can start now by submitting pull requests for the header fixes. Even in the case we decide not to support EDK2, we make your life easier by reducing the amount of extra patches.
>
> Agreed.
>
> Regards
>
> Antoine.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nad at python.org Wed Nov 1 17:47:45 2017
From: nad at python.org (Ned Deily)
Date: Wed, 1 Nov 2017 17:47:45 -0400
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
Message-ID: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>

Happy belated Halloween to those who celebrate it; I hope it wasn't too scary! Also possibly scary: we have just a little over 12 weeks remaining until Python 3.7's feature code cutoff, 2018-01-29. Those 12 weeks include a number of traditional holidays around the world, so if you are planning on writing another PEP for 3.7 or working on getting an existing one approved or getting feature code reviewed, please plan accordingly. If you have something in the pipeline, please either let me know or, when implemented, add the feature to PEP 537, the 3.7 Release Schedule PEP.

As you may recall, the release schedule calls for 4 alpha preview releases prior to the feature code cutoff with the first beta release. We have already produced the first two alphas. Reviewing the schedule recently, I realized that I had "front-loaded" the alphas, leaving a bigger gap between the final alphas and the first beta. So I have adjusted the schedule a bit, pushing alpha 3 and 4 out. The new dates are:

- 3.7.0 alpha 3: 2017-11-27 (was 2017-11-13)
- 3.7.0 alpha 4: 2018-01-08 (was 2017-12-18)
- 3.7.0 beta 1: 2018-01-29 (feature freeze - unchanged)

I hope the new dates give you a little bit more time to get your bits finished and get a little bit of exposure prior to the feature freeze. Considering how quickly and positively it has been adopted, 3.6 is going to be a tough act to follow. But we can do it again. Thank you all for your ongoing efforts!

--Ned

--
  Ned Deily
  nad at python.org -- []

From lukasz at langa.pl Wed Nov 1 18:48:00 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Wed, 1 Nov 2017 15:48:00 -0700
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
Message-ID: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>

Based on positive feedback on python-ideas back in September, I'm publishing the second draft for consideration on python-dev. I hope you like it!
A nicely formatted rendering is available here:
https://www.python.org/dev/peps/pep-0563/

(Just make sure you're looking at the version that has "Post-History: 1-Nov-2017" as you might have an older version cached by your browser, or you might be seeing a newer version if you get to this e-mail in the future.)

- Ł

PEP: 563
Title: Postponed Evaluation of Annotations
Version: $Revision$
Last-Modified: $Date$
Author: Łukasz Langa
Discussions-To: Python-Dev
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 8-Sep-2017
Python-Version: 3.7
Post-History: 1-Nov-2017
Resolution:


Abstract
========

PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. PEP 484 introduced a standard meaning to annotations: type hints. PEP 526 defined variable annotations, explicitly tying them with the type hinting use case.

This PEP proposes changing function annotations and variable annotations so that they are no longer evaluated at function definition time. Instead, they are preserved in ``__annotations__`` in string form.

This change is going to be introduced gradually, starting with a new ``__future__`` import in Python 3.7.


Rationale and Goals
===================

PEP 3107 added support for arbitrary annotations on parts of a function definition. Just like default values, annotations are evaluated at function definition time. This creates a number of issues for the type hinting use case:

* forward references: when a type hint contains names that have not been defined yet, that definition needs to be expressed as a string literal;

* type hints are executed at module import time, which is not computationally free.

Postponing the evaluation of annotations solves both problems.

Non-goals
---------

Just like in PEP 484 and PEP 526, it should be emphasized that **Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.**

This PEP is meant to solve the problem of forward references in type annotations. There are still cases outside of annotations where forward references will require usage of string literals. Those are listed in a later section of this document.

Annotations without forced evaluation enable opportunities to improve the syntax of type hints. This idea will require its own separate PEP and is not discussed further in this document.

Non-typing usage of annotations
-------------------------------

While annotations are still available for arbitrary use besides type checking, it is worth mentioning that the design of this PEP, as well as its precursors (PEP 484 and PEP 526), is predominantly motivated by the type hinting use case.

With Python 3.7, PEP 484 graduates from provisional status. Other enhancements to the Python programming language like PEP 544, PEP 557, or PEP 560 are already being built on this basis as they depend on type annotations and the ``typing`` module as defined by PEP 484. With this in mind, uses for annotations incompatible with the aforementioned PEPs should be considered deprecated.


Implementation
==============

In Python 4.0, function and variable annotations will no longer be evaluated at definition time. Instead, a string form will be preserved in the respective ``__annotations__`` dictionary. Static type checkers will see no difference in behavior, whereas tools using annotations at runtime will have to perform postponed evaluation.

If an annotation was already a string, this string is preserved verbatim.
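As a minimal sketch of the proposed semantics (``typing.get_type_hints()``, used below, is described in a later section)::

    from __future__ import annotations
    from typing import get_type_hints

    def add(a: int, b: int) -> int:
        return a + b

    # The annotations are preserved in string form...
    assert add.__annotations__ == {'a': 'int', 'b': 'int', 'return': 'int'}
    # ...and can still be resolved to the actual objects on demand.
    assert get_type_hints(add) == {'a': int, 'b': int, 'return': int}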
Where the annotation is not already a string literal in the source, the string form is obtained from the AST during the compilation step, which means that it might not preserve the exact formatting of the source.

Annotations need to be syntactically valid Python expressions, even when passed as literal strings (i.e. ``compile(literal, '', 'eval')``). Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by ``typing.get_type_hints()``).

Note that as per PEP 526, local variable annotations are not evaluated at all since they are not accessible outside of the function's closure.

Enabling the future behavior in Python 3.7
------------------------------------------

The functionality described above can be enabled starting from Python 3.7 using the following special import::

    from __future__ import annotations


Resolving Type Hints at Runtime
===============================

To resolve an annotation at runtime from its string form to the result of the enclosed expression, user code needs to evaluate the string.

For code that uses type hints, the ``typing.get_type_hints(obj, globalns=None, localns=None)`` function correctly evaluates expressions back from its string form. Note that all valid code currently using ``__annotations__`` should already be doing that since a type annotation can be expressed as a string literal.

For code which uses annotations for other purposes, a regular ``eval(ann, globals, locals)`` call is enough to resolve the annotation. In both cases it's important to consider how globals and locals affect the postponed evaluation. An annotation is no longer evaluated at the time of definition and, more importantly, is no longer evaluated *in the same scope* where it was defined. Consequently, using local state in annotations is no longer possible in general.

As for globals, the module where the annotation was defined is the correct context for postponed evaluation. The ``get_type_hints()`` function automatically resolves the correct value of ``globalns`` for functions and classes. It also automatically provides the correct ``localns`` for classes.

When running ``eval()``, the value of globals can be gathered in the following way:

* function objects hold a reference to their respective globals in an attribute called ``__globals__``;

* classes hold the name of the module they were defined in; this can be used to retrieve the respective globals::

    cls_globals = vars(sys.modules[SomeClass.__module__])

  Note that this needs to be repeated for base classes to evaluate all ``__annotations__``.

* modules should use their own ``__dict__``.

The value of ``localns`` cannot be reliably retrieved for functions because in all likelihood the stack frame at the time of the call no longer exists.

For classes, ``localns`` can be composed by chaining vars of the given class and its base classes (in the method resolution order). Since slots can only be filled after the class was defined, we don't need to consult them for this purpose.

Runtime annotation resolution and class decorators
--------------------------------------------------

Metaclasses and class decorators that need to resolve annotations for the current class will fail for annotations that use the name of the current class. Example::

    def class_decorator(cls):
        annotations = get_type_hints(cls)  # raises NameError on 'C'
        print(f'Annotations for {cls}: {annotations}')
        return cls

    @class_decorator
    class C:
        singleton: 'C' = None

This was already true before this PEP: the class decorator acts on the class before it is assigned a name in the current definition scope.
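One possible workaround (a sketch only, relying on the ``localns`` parameter of ``typing.get_type_hints()``) is for the decorator to provide the class under its own name explicitly::

    def class_decorator(cls):
        # Provide the not-yet-bound class name to the evaluation scope.
        annotations = get_type_hints(cls, localns={cls.__name__: cls})
        print(f'Annotations for {cls}: {annotations}')
        return cls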
Runtime annotation resolution and ``TYPE_CHECKING``
---------------------------------------------------

Sometimes there's code that must be seen by a type checker but should not be executed. For such situations the ``typing`` module defines a constant, ``TYPE_CHECKING``, that is considered ``True`` during type checking but ``False`` at runtime. Example::

    import typing

    if typing.TYPE_CHECKING:
        import expensive_mod

    def a_func(arg: expensive_mod.SomeClass) -> None:
        a_var: expensive_mod.SomeClass = arg
        ...

This approach is also useful when handling import cycles.

Trying to resolve annotations of ``a_func`` at runtime using ``typing.get_type_hints()`` will fail since the name ``expensive_mod`` is not defined (the ``TYPE_CHECKING`` variable being ``False`` at runtime). This was already true before this PEP.


Backwards Compatibility
=======================

This is a backwards incompatible change. Applications that depend on arbitrary objects being directly present in annotations will break if they are not using ``typing.get_type_hints()`` or ``eval()``.

Annotations that depend on locals at the time of the function definition will not be resolvable later. Example::

    def generate():
        A = Optional[int]
        class C:
            field: A = 1
            def method(self, arg: A) -> None:
                ...
        return C

    X = generate()

Trying to resolve annotations of ``X`` later by using ``get_type_hints(X)`` will fail because ``A`` and its enclosing scope no longer exist. Python will make no attempt to disallow such annotations since they can often still be successfully statically analyzed, which is the predominant use case for annotations.

Annotations using nested classes and their respective state are still valid. They can use local names or the fully qualified name. Example::

    class C:
        field = 'c_field'
        def method(self, arg: C.field) -> None:  # this is OK
            ...

        class D:
            field2 = 'd_field'
            def method(self, arg: C.field) -> C.D.field2:  # this is OK
                ...

            def method(self, arg: field) -> D.field2:  # this is OK
                ...

In the presence of an annotation that isn't a syntactically valid expression, SyntaxError is raised at compile time. However, since names aren't resolved at that time, no attempt is made to validate whether used names are correct or not.

Deprecation policy
------------------

Starting with Python 3.7, a ``__future__`` import is required to use the described functionality. No warnings are raised.

In Python 3.8 a ``PendingDeprecationWarning`` is raised by the compiler in the presence of type annotations in modules without the ``__future__`` import. Starting with Python 3.9 the warning becomes a ``DeprecationWarning``. In Python 4.0 this will become the default behavior. Use of annotations incompatible with this PEP is no longer supported.


Forward References
==================

Deliberately using a name before it was defined in the module is called a forward reference. For the purpose of this section, we'll call any name imported or defined within an ``if TYPE_CHECKING:`` block a forward reference, too.

This PEP addresses the issue of forward references in *type annotations*. The use of string literals will no longer be required in this case. However, there are APIs in the ``typing`` module that use other syntactic constructs of the language, and those will still require working around forward references with string literals.
The list includes:

* type definitions::

    T = TypeVar('T', bound='UserId')
    UserId = NewType('UserId', 'SomeType')
    Employee = NamedTuple('Employee', [('name', str), ('id', 'UserId')])

* aliases::

    Alias = Optional['SomeType']
    AnotherAlias = Union['SomeType', 'OtherType']

* casting::

    cast('SomeType', value)

* base classes::

    class C(Tuple['SomeType', 'OtherType']): ...

Depending on the specific case, some of the uses listed above might be worked around by placing the usage in an ``if TYPE_CHECKING:`` block. This will not work for any code that needs to be available at runtime, notably for base classes and casting. For named tuples, using the new class definition syntax introduced in Python 3.6 solves the issue.

In general, fixing the issue for *all* forward references requires changing how module instantiation is performed in Python, from the current single-pass top-down model. This would be a major change in the language and is out of scope for this PEP.


Rejected Ideas
==============

Keep the ability to use function local state when defining annotations
-----------------------------------------------------------------------

With postponed evaluation, this would require keeping a reference to the frame in which an annotation got created. This could be achieved for example by storing all annotations as lambdas instead of strings.

This would be prohibitively expensive for highly annotated code as the frames would keep all their objects alive. That includes predominantly objects that won't ever be accessed again.

Note that in the case of nested classes, the functionality to get the effective "globals" and "locals" at definition time is provided by ``typing.get_type_hints()``.

If a function generates a class or a function with annotations that have to use local variables, it can populate the given generated object's ``__annotations__`` dictionary directly, without relying on the compiler.

Disallow local state usage for classes, too
-------------------------------------------

This PEP originally proposed limiting names within annotations to only allow names from the module-level scope, including for classes. The author argued this makes name resolution unambiguous, including in cases of conflicts between local names and module-level names.

This idea was ultimately rejected in the case of nested classes. Instead, ``typing.get_type_hints()`` got modified to populate the local namespace correctly if class-level annotations are needed.

The reasons for rejecting the idea were that it goes against the intuition of how scoping works in Python, and would break enough existing type annotations to make the transition cumbersome. Finally, local scope access is required for class decorators to be able to evaluate type annotations. This is because class decorators are applied before the class receives its name in the outer scope.

Introduce a new dictionary for the string literal form instead
---------------------------------------------------------------

Yury Selivanov shared the following idea:

1. Add a new special attribute to functions: ``__annotations_text__``.

2. Make ``__annotations__`` a lazy dynamic mapping, evaluating expressions from the corresponding key in ``__annotations_text__`` just-in-time.

This idea is supposed to solve the backwards compatibility issue, removing the need for a new ``__future__`` import. Sadly, this is not enough. Postponed evaluation changes which state the annotation has access to.
While postponed evaluation fixes the forward reference problem, it also makes it impossible to access function-level locals. This alone is a source of backwards incompatibility which justifies a deprecation period.

A ``__future__`` import is an obvious and explicit indicator of opting in to the new functionality. It also makes it trivial for external tools to recognize the difference between Python files using the old or the new approach. In the former case, that tool would recognize that local state access is allowed, whereas in the latter case it would recognize that forward references are allowed.

Finally, just-in-time evaluation in ``__annotations__`` is an unnecessary step if ``get_type_hints()`` is used later.

Drop annotations with -O
------------------------

There are two reasons this is not satisfying for the purpose of this PEP.

First, this only addresses runtime cost, not forward references; those still cannot be safely used in source code. A library maintainer would never be able to use forward references since that would force the library users to use this new hypothetical -O switch.

Second, this throws the baby out with the bath water. Now *no* runtime annotation use can be performed. PEP 557 is one example of a recent development where evaluating type annotations at runtime is useful.

All that being said, a granular -O option to drop annotations is a possibility in the future, as it's conceptually compatible with existing -O behavior (dropping docstrings and assert statements). This PEP does not invalidate the idea.


Prior discussion
================

In PEP 484
----------

The forward reference problem was discussed when PEP 484 was originally drafted, leading to the following statement in the document:

    A compromise is possible where a ``__future__`` import could enable turning *all* annotations in a given module into string literals, as follows::

        from __future__ import annotations

        class ImSet:
            def add(self, a: ImSet) -> List[ImSet]: ...

        assert ImSet.add.__annotations__ == {
            'a': 'ImSet', 'return': 'List[ImSet]'
        }

    Such a ``__future__`` import statement may be proposed in a separate PEP.

python/typing#400
-----------------

The problem was discussed at length on the typing module's GitHub project, under `Issue 400 `_. The problem statement there includes critique of generic types requiring imports from ``typing``. This tends to be confusing to beginners:

Why this::

    from typing import List, Set
    def dir(o: object = ...) -> List[str]: ...
    def add_friends(friends: Set[Friend]) -> None: ...

But not this::

    def dir(o: object = ...) -> list[str]: ...
    def add_friends(friends: set[Friend]) -> None: ...

Why this::

    up_to_ten = list(range(10))
    friends = set()

But not this::

    from typing import List, Set
    up_to_ten = List[int](range(10))
    friends = Set[Friend]()

While typing usability is an interesting problem, it is out of scope of this PEP. Specifically, any extensions of the typing syntax standardized in PEP 484 will require their own respective PEPs and approval.

Issue 400 ultimately suggests postponing evaluation of annotations and keeping them as strings in ``__annotations__``, just like this PEP specifies. This idea was received well. Ivan Levkivskyi supported using the ``__future__`` import and suggested unparsing the AST in ``compile.c``. Jukka Lehtosalo pointed out that there are some cases of forward references where types are used outside of annotations and postponed evaluation will not help those. For those cases using the string literal notation would still be required.
Those cases are discussed briefly in the "Forward References" section of this PEP.

The biggest controversy on the issue was Guido van Rossum's concern that untokenizing annotation expressions back to their string form has no precedent in the Python programming language and feels like a hacky workaround. He said:

    One thing that comes to mind is that it's a very random change to the language. It might be useful to have a more compact way to indicate deferred execution of expressions (using less syntax than ``lambda:``). But why would the use case of type annotations be so all-important to change the language to do it there first (rather than proposing a more general solution), given that there's already a solution for this particular use case that requires very minimal syntax?

Eventually, Ethan Smith and schollii voiced that feedback gathered during PyCon US suggests that the state of forward references needs fixing. Guido van Rossum suggested coming back to the ``__future__`` idea, pointing out that to prevent abuse, it's important for the annotations to remain both syntactically valid and correctly evaluable at runtime.

First draft discussion on python-ideas
--------------------------------------

Discussion happened largely in two threads, `the original announcement `_ and a follow-up called `PEP 563 and expensive backwards compatibility `_.

The PEP received rather warm feedback (4 strongly in favor, 2 in favor with concerns, 2 against). The biggest voice of concern on the former thread was Steven D'Aprano's review, stating that the problem definition of the PEP doesn't justify breaking backwards compatibility. In this response, Steven seemed mostly concerned about Python no longer supporting evaluation of annotations that depended on local function/class state.

A few people voiced concerns that there are libraries using annotations for non-typing purposes. However, none of the named libraries would be invalidated by this PEP. They do require adapting to the new requirement to call ``eval()`` on the annotation with the correct ``globals`` and ``locals`` set.

This detail about ``globals`` and ``locals`` having to be correct was picked up by a number of commenters. Nick Coghlan benchmarked turning annotations into lambdas instead of strings; sadly, this proved to be much slower at runtime than the current situation.

The latter thread was started by Jim J. Jewett, who stressed that the ability to properly evaluate annotations is an important requirement and backwards compatibility in that regard is valuable. After some discussion he admitted that side effects in annotations are a code smell and modal support to either perform or not perform evaluation is a messy solution. His biggest concern remained loss of functionality stemming from the evaluation restrictions on global and local scope.

Nick Coghlan pointed out that some of those evaluation restrictions from the PEP could be lifted by a clever implementation of an evaluation helper, which could solve self-referencing classes even in the form of a class decorator. He suggested the PEP should provide this helper function in the standard library.


Acknowledgements
================

This document could not be completed without valuable input, encouragement and advice from Guido van Rossum, Jukka Lehtosalo, and Ivan Levkivskyi.


Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Wed Nov 1 19:16:31 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Wed, 1 Nov 2017 16:16:31 -0700
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To: 
References: 
Message-ID: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>

I find this sad. In the JavaScript community the existence of Babel is very important for the long-term evolution of the language independently from the runtime. With Babel, JavaScript programmers can utilize new language syntax while being able to deploy on dated browsers. While there's always some experimentation, I doubt our community would abuse the new syntactic freedom that the PEP provided.

Then again, maybe we should do what Babel did, e.g. release a tool like it totally separately from the runtime.

- Ł

> On Oct 17, 2017, at 1:23 PM, Victor Stinner wrote:
>
> Hi,
>
> I rejected my own PEP 511 "API for code transformers" that I wrote in January 2016:
>
> https://github.com/python/peps/commit/9d8fd950014a80324791d7dae3c130b1b64fdace
>
> Rejection Notice:
>
> """
> This PEP was rejected by its author.
>
> This PEP was seen as blessing new Python-like programming languages which are close but incompatible with the regular Python language. It was decided to not promote syntaxes incompatible with Python.
>
> This PEP was also seen as a nice tool to experiment with new Python features, but it is already possible to experiment with them without the PEP, only with importlib hooks. If a feature becomes useful, it should be directly part of Python, instead of depending on a third-party Python module.
>
> Finally, this PEP was driven by the FAT Python optimization project, which was abandoned in 2016, since it was not possible to show any significant speedup, but also because of the lack of time to implement the most advanced and complex optimizations.
> """
>
> Victor
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Wed Nov 1 19:24:25 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Wed, 1 Nov 2017 16:24:25 -0700
Subject: [Python-Dev] PEP 530
In-Reply-To: 
References: 
Message-ID: <302F40D4-8D79-43C6-A9D2-51D8BF67B683@langa.pl>

The original poster is an elementary school student. To keep the list clean, I responded 1:1 in a more inviting manner. Hopefully the person will succeed in installing and learning Python :-)

- Ł

> On Oct 28, 2017, at 2:33 PM, Terry Reedy wrote:
>
> On 10/27/2017 4:43 PM, London wrote:
>> can you help me get idol for my computer
>
> Post questions about using python on python-list and include information about what OS you are running and what version of Python you want.
>
> --
> Terry Jan Reedy
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Wed Nov 1 19:34:18 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Wed, 1 Nov 2017 16:34:18 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol>
Message-ID: <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>

> On Oct 30, 2017, at 4:00 AM, Victor Stinner wrote:
>
> Except for Antoine Pitrou, does everybody else like the new UI? :-)

I also much prefer MM3 and HyperKitty. The old pipermail tree looks more inviting (I like the concise tree) but it's deceiving. When you actually start going through an entire discussion, it's easy to lose track unless posters use quotations neatly.

The search functionality of HyperKitty alone is worth it.

- Ł
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From ncoghlan at gmail.com Wed Nov 1 20:25:25 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 2 Nov 2017 10:25:25 +1000
Subject: [Python-Dev] [python-committers] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID: 

On 2 November 2017 at 07:47, Ned Deily wrote:
> Happy belated Halloween to those who celebrate it; I hope it wasn't too scary! Also possibly scary: we have just a little over 12 weeks remaining until Python 3.7's feature code cutoff, 2018-01-29. Those 12 weeks include a number of traditional holidays around the world, so if you are planning on writing another PEP for 3.7 or working on getting an existing one approved or getting feature code reviewed, please plan accordingly.
> If you have something in the pipeline, please either let me know or, when implemented, add the feature to PEP 537, the 3.7 Release Schedule PEP.

My two main open items for 3.7 are:

- PEP 558 (formally defining expected semantics for local namespaces)
- making the points where we check for asynchronous signals more deterministic (https://bugs.python.org/issue29988)

For 558, I'm finally happy with the proposed design, so I just need to sit down and actually make a working write-through proxy implementation.

For the signal handling checks, the problem is that https://bugs.python.org/issue29988#msg301869 makes me nervous about adjusting the typical locations where interrupts get raised without also making it easier to temporarily disable such checks (and having with statements do so while calling cleanup functions).

Cheers,
Nick.

P.S. Work on the latter issue provided the primary motivation for the new per-opcode tracing feature: it lets us reliably inject exceptions at "bad" times, so otherwise rare race conditions can be deliberately provoked.
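A quick sketch of what that looks like from Python, assuming the new f_trace_opcodes frame attribute in the 3.7 development branch (details of the API may still change):

    import sys

    def tracer(frame, event, arg):
        frame.f_trace_opcodes = True  # request per-opcode trace events
        if event == 'opcode':
            # A test harness could raise an injected exception right here
            # to provoke a race at this exact bytecode offset.
            print(frame.f_lineno, frame.f_lasti)
        return tracer

    def f():
        x = 1
        return x + 1

    sys.settrace(tracer)
    f()
    sys.settrace(None)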
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mariatta.wijaya at gmail.com Wed Nov 1 21:49:22 2017
From: mariatta.wijaya at gmail.com (Mariatta Wijaya)
Date: Wed, 1 Nov 2017 18:49:22 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>
Message-ID: 

Anything I can do to help make the migration to MM3 + HyperKitty happen? :)

Mariatta Wijaya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Wed Nov 1 21:53:11 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 2 Nov 2017 11:53:11 +1000
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
References: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
Message-ID: 

On 2 November 2017 at 09:16, Lukasz Langa wrote:
> I find this sad. In the JavaScript community the existence of Babel is very important for the long-term evolution of the language independently from the runtime. With Babel, JavaScript programmers can utilize new language syntax while being able to deploy on dated browsers. While there's always some experimentation, I doubt our community would abuse the new syntactic freedom that the PEP provided.
>
> Then again, maybe we should do what Babel did, e.g. release a tool like it totally separately from the runtime.

Right, I think python-modernize and python-future provide a better model for Babel equivalents in Python than anything built directly into CPython would. In many ways, python-future's pasteurize already *is* that kind of polyfill, where you get to write nice modern Python yourself, and then ask pasteurize to mess it up so it also works on Python 2.7: http://python-future.org/pasteurize.html

The piece that we're currently missing to make such workflows easier to manage is an equivalent of JavaScript's source maps (http://blog.teamtreehouse.com/introduction-source-maps), together with debugging tools that are able to use source map information to generate nice tracebacks, even when the original sources are unavailable.

Source maps could also potentially help with getting meaningful tracebacks in other contexts, like bytecode-only deployments and Cython extension modules (for example, the traceback problem is the main reason Red Hat's Python container images still have the source code in them - when that was last measured, you could get an image size reduction of around 15% by including only the pyc files and omitting the original sources, but it wasn't worth it when it came at the cost of making tracebacks unreadable).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brian.richardson at intel.com Wed Nov 1 21:36:45 2017
From: brian.richardson at intel.com (Richardson, Brian)
Date: Thu, 2 Nov 2017 01:36:45 +0000
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: 
References: 
Message-ID: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>

Thiebaud:

Thank you. I have started discussions within Intel for updating the UEFI CPython implementation to Python 3.x. The TianoCore community would appreciate contributions by people with Python experience to bring this code up to current standards.

Please review the contribution guidelines for TianoCore and let me know if you have any questions.
http://www.tianocore.org/contrib/

Thanks ... br
---
Brian Richardson, Senior Technical Marketing Engineer, Intel Software
brian.richardson at intel.com -- @intel_brian (Twitter & WeChat)
https://software.intel.com/en-us/meet-the-developers/evangelists/team/brian-richardson

-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces at lists.01.org] On Behalf Of Thiebaud Weksteen
Sent: Wednesday, November 1, 2017 5:07 AM
To: python-dev at python.org
Cc: edk2-devel at lists.01.org
Subject: [edk2] Official port of Python on EDK2

Hi,

UEFI has become the standard for firmware (BIOS) interface. Intel has provided an open source implementation under the name EDK2 (part of the TianoCore initiative) [1] for some time. This implementation has evolved significantly and now provides the functionalities of a small OS with a standard library similar to POSIX.

In 2011, a port of Python 2.7.1 was added to the EDK2 repository [2]. This port then evolved to 2.7.2 which is still defined as the reference port [3]. In 2015, another port was added of Python 2.7.10 in parallel with 2.7.2 [4]. Since then, both implementations have diverged from upstream and known vulnerabilities have not been fixed.

I would like to bring support for edk2 in the official Python repository to remediate this situation, that is, officially support edk2 as a platform. Technically, there would be three main aspects for the on-boarding work:

1) Fix headers and source to resolve definition conflicts, similarly to the ABS definition in [5];
2) Add the edk2module.c [6] to handle platform-specific functionalities, similarly to the posixmodule.c;
3) Add the build configuration file [7] and necessary modifications within Python to handle the edk2 toolchain;

This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.

Thanks,
Thiebaud

[1] https://github.com/tianocore/edk2
[2] https://github.com/tianocore/edk2/commit/006fecd5a177b4b7b6b36fab6690bf2b2fa11829
[3] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/PythonReadMe.txt
[4] https://github.com/tianocore/edk2/commit/c8042e10763bca064df257547d04ae3dfcdfaf91
[5] https://gist.github.com/tweksteen/ed516ca7ab7dfa8d18428f59d9c22a3e
[6] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/Efi/edk2module.c
[7] https://github.com/tianocore/edk2/blob/master/AppPkg/Applications/Python/PythonCore.inf
_______________________________________________
edk2-devel mailing list
edk2-devel at lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel

From barry at python.org Wed Nov 1 22:31:43 2017
From: barry at python.org (Barry Warsaw)
Date: Wed, 1 Nov 2017 19:31:43 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>
Message-ID: 

On Nov 1, 2017, at 18:49, Mariatta Wijaya wrote:
>
> Anything I can do to help make the migration to MM3 + HyperKitty happen? :)

Thanks for the offer, Mariatta! Assuming this is something we want to go through with, probably the best way to get there is to work with the postmaster, especially Mark Sapiro, on the migration.

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From guido at python.org Wed Nov 1 22:42:11 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 1 Nov 2017 19:42:11 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>
Message-ID: 

Maybe we should try it on some other list too? I know it works "in principle" and I'd love for all Python mailing lists to migrate, but I'd like to have some more experience with community mailing lists before tackling python-dev.

On Wed, Nov 1, 2017 at 7:31 PM, Barry Warsaw wrote:
> On Nov 1, 2017, at 18:49, Mariatta Wijaya wrote:
> >
> > Anything I can do to help make the migration to MM3 + HyperKitty happen? :)
>
> Thanks for the offer, Mariatta! Assuming this is something we want to go through with, probably the best way to get there is to work with the postmaster, especially Mark Sapiro, on the migration.
>
> Cheers,
> -Barry
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org Wed Nov 1 22:54:35 2017
From: barry at python.org (Barry Warsaw)
Date: Wed, 1 Nov 2017 19:54:35 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl>
Message-ID: <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>

On Nov 1, 2017, at 19:42, Guido van Rossum wrote:
>
> Maybe we should try it on some other list too? I know it works "in principle" and I'd love for all Python mailing lists to migrate, but I'd like to have some more experience with community mailing lists before tackling python-dev.

What about core-workflow or committers?
I think some of the criteria are:

* Gets a fair bit of traffic, but not too much
* Is okay with a little bit of downtime for the migration
* Willing to put up with any transient migration snafus
* Amenable to trying the new UI
* Has the BDFL as a member :)

-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Wed Nov 1 23:05:31 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Wed, 1 Nov 2017 20:05:31 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl> <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
Message-ID: <5A6618BF-8F6F-4A4B-A7EF-77A1AD695012@langa.pl>

+1 committers

> On Nov 1, 2017, at 7:54 PM, Barry Warsaw wrote:
>
> On Nov 1, 2017, at 19:42, Guido van Rossum wrote:
>>
>> Maybe we should try it on some other list too? I know it works "in principle" and I'd love for all Python mailing lists to migrate, but I'd like to have some more experience with community mailing lists before tackling python-dev.
>
> What about core-workflow or committers? I think some of the criteria are:
>
> * Gets a fair bit of traffic, but not too much
> * Is okay with a little bit of downtime for the migration
> * Willing to put up with any transient migration snafus
> * Amenable to trying the new UI
> * Has the BDFL as a member :)
>
> -Barry
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From guido at python.org Wed Nov 1 23:06:14 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 1 Nov 2017 20:06:14 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl> <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
Message-ID: 

Another one is core-mentorship, which satisfies the same criteria; and in my view this has the added and useful property that its beneficiaries are non-core members. After that I'd do core-workflow. Honestly I'd leave python-committers alone for a while, we're a curmudgeonly group. :-)

On Wed, Nov 1, 2017 at 7:54 PM, Barry Warsaw wrote:
> On Nov 1, 2017, at 19:42, Guido van Rossum wrote:
> >
> > Maybe we should try it on some other list too? I know it works "in principle" and I'd love for all Python mailing lists to migrate, but I'd like to have some more experience with community mailing lists before tackling python-dev.
>
> What about core-workflow or committers? I think some of the criteria are:
>
> * Gets a fair bit of traffic, but not too much
> * Is okay with a little bit of downtime for the migration
> * Willing to put up with any transient migration snafus
> * Amenable to trying the new UI
> * Has the BDFL as a member :)
>
> -Barry
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mariatta.wijaya at gmail.com Thu Nov 2 00:01:27 2017
From: mariatta.wijaya at gmail.com (Mariatta Wijaya)
Date: Wed, 1 Nov 2017 21:01:27 -0700
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl> <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
Message-ID: 

Starting with core-mentorship and then core-workflow sounds good. Let me first find out what it's going to take to do the migration. (I actually have no idea!) I've sent an email to postmaster and asked for more details :) Hope it's not too complicated...

Mariatta Wijaya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tjreedy at udel.edu Thu Nov 2 04:10:11 2017
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 2 Nov 2017 04:10:11 -0400
Subject: [Python-Dev] Migrate python-dev to Mailman 3?
In-Reply-To: 
References: <20171026120137.1de34389@fsol> <10EC07AF-88CC-4D98-9249-8DF0A929876D@langa.pl> <3B3A2C70-58DF-4EBE-A58D-8A4D91C5D605@python.org>
Message-ID: 

On 11/1/2017 11:06 PM, Guido van Rossum wrote:
> Another one is core-mentorship, which satisfies the same criteria; and in my view this has the added and useful property that its beneficiaries are non-core members. After that I'd do core-workflow. Honestly I'd leave python-committers alone for a while, we're a curmudgeonly group. :-)

As an idledev admin, I also volunteer it. It has an archive but is currently dormant, so it could be shut down for even a couple of weeks to practice archive conversion and test messages.

--
Terry Jan Reedy

From lele at metapensiero.it Thu Nov 2 04:54:36 2017
From: lele at metapensiero.it (Lele Gaifax)
Date: Thu, 02 Nov 2017 09:54:36 +0100
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID: <877ev9kqlv.fsf@metapensiero.it>

Hi,

I'd like to know what I should do wrt issue #27645 [1] and the related PR #377 [2]: I think I fulfilled every requested item, and a month ago I rebased the work [3] to solve the NEWS conflicts.

I'm not sure if the rebase should have been done on the original branch instead of creating a new one, or instead if I should open a new PR (and close the original one?).

Thanks for any hint,
ciao, lele.

[1] https://bugs.python.org/issue27645
[2] https://github.com/python/cpython/pull/377
[3] https://github.com/lelit/cpython/compare/sqlite-backup-api-v3

--
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele at metapensiero.it | -- Fortunato Depero, 1929.

From victor.stinner at gmail.com Thu Nov 2 09:42:31 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 2 Nov 2017 14:42:31 +0100
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To: 
References: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
Message-ID: 

(Email resent, I first sent it to Nick privately by mistake.)

2017-11-02 2:53 GMT+01:00 Nick Coghlan :
> The piece that we're currently missing to make such workflows easier to manage is an equivalent of JavaScript's source maps (...)

Code objects already have an instruction pointer => line number mapping table: the code.co_lnotab field. It's documented at:
https://github.com/python/cpython/blob/master/Objects/lnotab_notes.txt

This table is built from the line number information of the AST tree. The mapping table is optimized to be small.

Before Python 3.5, line numbers had to be monotonic. Since Python 3.6, you can "move" instructions at the AST level and so have "non-monotonic" line numbers (ex: line 1, line 3, line 2, line 4).
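For example, a quick sketch that decodes this table with dis.findlinestarts() (the exact bytecode offsets will vary by Python version):

    import dis

    def f(x):
        y = x + 1
        return y * 2

    # Yields (bytecode offset, source line) pairs decoded from co_lnotab.
    print(list(dis.findlinestarts(f.__code__)))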
Victor

From tweek at google.com Thu Nov 2 06:52:35 2017
From: tweek at google.com (Thiebaud Weksteen)
Date: Thu, 2 Nov 2017 11:52:35 +0100
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
References: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
Message-ID: 

Christian, Antoine, Brett: Thanks for the clarification on what official support would require. As Christian mentioned, sending simple header patches is an obvious starting point, whether or not the support becomes official.

Brian: Thanks for your email. As I suggested, by having the support directly within the Python community, you would avoid having to maintain a separate port. I don't think that having a new Python 3 port as part of EDK2 is a good idea. What I am suggesting is that Intel should contribute directly to the Python repository by sending your modifications upstream and not expect someone to re-import Python into EDK2. That is, bringing your UEFI experience to Python and not the opposite. This would be a much better use of anyone's time.

Thanks,
Thiebaud

On Thu, Nov 2, 2017 at 2:36 AM, Richardson, Brian <brian.richardson at intel.com> wrote:
> Thiebaud:
>
> Thank you. I have started discussions within Intel for updating the UEFI CPython implementation to Python 3.x. The TianoCore community would appreciate contributions by people with Python experience to bring this code up to current standards.
>
> Please review the contribution guidelines for TianoCore and let me know if you have any questions.
> http://www.tianocore.org/contrib/
>
> Thanks ... br
> ---
> Brian Richardson, Senior Technical Marketing Engineer, Intel Software
> brian.richardson at intel.com -- @intel_brian (Twitter & WeChat)
> https://software.intel.com/en-us/meet-the-developers/evangelists/team/brian-richardson
>
> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces at lists.01.org] On Behalf Of Thiebaud Weksteen
> Sent: Wednesday, November 1, 2017 5:07 AM
> To: python-dev at python.org
> Cc: edk2-devel at lists.01.org
> Subject: [edk2] Official port of Python on EDK2
>
> Hi,
>
> UEFI has become the standard for firmware (BIOS) interface. Intel has provided an open source implementation under the name EDK2 (part of the TianoCore initiative) [1] for some time. This implementation has evolved significantly and now provides the functionalities of a small OS with a standard library similar to POSIX.
>
> In 2011, a port of Python 2.7.1 was added to the EDK2 repository [2]. This port then evolved to 2.7.2 which is still defined as the reference port [3]. In 2015, another port was added of Python 2.7.10 in parallel with 2.7.2 [4]. Since then, both implementations have diverged from upstream and known vulnerabilities have not been fixed.
>
> I would like to bring support for edk2 in the official Python repository to remediate this situation, that is, officially support edk2 as a platform. Technically, there would be three main aspects for the on-boarding work:
>
> 1) Fix headers and source to resolve definition conflicts, similarly to the ABS definition in [5];
> 2) Add the edk2module.c [6] to handle platform-specific functionalities, similarly to the posixmodule.c;
> 3) Add the build configuration file [7] and necessary modifications within Python to handle the edk2 toolchain;
>
> This work would target the master branch (that is Python 3). I would be interested in hearing your thoughts on this idea.
Victor

2017-10-30 18:18 GMT+01:00 Guido van Rossum <guido at python.org>:
> I have read PEP 564 and (mostly) followed the discussion in this thread, and
> I am happy with the PEP. I am hereby approving PEP 564. Congratulations
> Victor!
> --
> --Guido van Rossum (python.org/~guido)

From steve at pearwood.info Thu Nov 2 11:45:16 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 3 Nov 2017 02:45:16 +1100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
Message-ID: <20171102154516.GG9068@ando.pearwood.info>

On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:

> PEP: 563
> Title: Postponed Evaluation of Annotations

> This PEP proposes changing function annotations and variable annotations
> so that they are no longer evaluated at function definition time.
> Instead, they are preserved in ``__annotations__`` in string form.

This means that now *all* annotations, not just forward references, are no longer validated at runtime and will allow arbitrary typos and errors:

    def spam(n:itn):  # now valid
        ...

Up to now, it has been only forward references that were vulnerable to that sort of thing. Of course running a type checker should pick those errors up, but the evaluation of annotations ensures that they are actually valid (not necessarily correct, but at least a valid name), even if you happen to not be running a type checker. That's useful.

Are we happy to live with that change?

> Rationale and Goals
> ===================
>
> PEP 3107 added support for arbitrary annotations on parts of a function
> definition. Just like default values, annotations are evaluated at
> function definition time.
> This creates a number of issues for the type
> hinting use case:
>
> * forward references: when a type hint contains names that have not been
>   defined yet, that definition needs to be expressed as a string
>   literal;

After all the discussion, I still don't see why this is an issue. Strings make perfectly fine forward references. What is the problem that needs solving? Is this about people not wanting to type the leading and trailing ' around forward references?

> * type hints are executed at module import time, which is not
>   computationally free.

True; but is that really a performance bottleneck? If it is, that should be stated in the PEP, and state what typical performance improvement this change should give.

After all, if we're going to break people's code in order to improve performance, we should at least be sure that it improves performance :-)

> Postponing the evaluation of annotations solves both problems.

Actually it doesn't. As your PEP says later:

> This PEP is meant to solve the problem of forward references in type
> annotations. There are still cases outside of annotations where
> forward references will require usage of string literals. Those are
> listed in a later section of this document.

So the primary problem this PEP is designed to solve isn't actually solved by this PEP.

(See Guido's comments, quoted later.)

> Implementation
> ==============
>
> In Python 4.0, function and variable annotations will no longer be
> evaluated at definition time. Instead, a string form will be preserved
> in the respective ``__annotations__`` dictionary. Static type checkers
> will see no difference in behavior,

Static checkers don't see __annotations__ at all, since that's not available at edit/compile time. Static checkers see only the source code. The checker (and the human reader!) will no longer have the useful clue that something is a forward reference:

    # before
    class C:
        def method(self, other:'C'):
            ...

since the quotes around C will be redundant and almost certainly left out. And if they aren't left out, then what are we to make of the annotation? Will the quotes be stripped out, or left in?

In other words, will method's __annotations__ contain 'C' or "'C'"? That will make a difference when the type hint is eval'ed.

> If an annotation was already a string, this string is preserved
> verbatim.

That's ambiguous. See above.

> Annotations can only use names present in the module scope as postponed
> evaluation using local names is not reliable (with the sole exception of
> class-level names resolved by ``typing.get_type_hints()``).

Even if you call get_type_hints from inside the function defining the local names?

    def function():
        A = something()
        def inner(x:A)->int:
            ...
        d = typing.get_type_hints(inner)
        return (d, inner)

I would expect that should work. Will it?

> For code which uses annotations for other purposes, a regular
> ``eval(ann, globals, locals)`` call is enough to resolve the
> annotation.

Let's just hope nobody doing that has allowed any tainted strings to be stuffed into __annotations__.

> * modules should use their own ``__dict__``.

Which is better written as ``vars()`` with no argument, I believe. Or possibly ``globals()``.

> If a function generates a class or a function with annotations that
> have to use local variables, it can populate the given generated
> object's ``__annotations__`` dictionary directly, without relying on
> the compiler.

I don't understand this paragraph.
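If it means that code generating classes or functions at runtime is expected to write the __annotations__ mapping in by hand, perhaps something like this sketch (the names are my invention):

    def make_model(field_type):
        class Model:
            pass
        # The compiler never saw an annotation here, so the factory
        # fills the dictionary in directly, in the PEP's string form:
        Model.__annotations__ = {'field': field_type.__name__}
        return Model

but if so, the PEP should spell that out.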
> The biggest controversy on the issue was Guido van Rossum's concern
> that untokenizing annotation expressions back to their string form has
> no precedent in the Python programming language and feels like a hacky
> workaround. He said:
>
>     One thing that comes to mind is that it's a very random change to
>     the language. It might be useful to have a more compact way to
>     indicate deferred execution of expressions (using less syntax than
>     ``lambda:``). But why would the use case of type annotations be so
>     all-important to change the language to do it there first (rather
>     than proposing a more general solution), given that there's already
>     a solution for this particular use case that requires very minimal
>     syntax?

I agree with Guido's concern here. A more general solution would (hopefully!) be like a thunk, and might allow some interesting techniques unrelated to type checking. Just off the top of my head, say, late binding of default values (without the "if arg is None: arg = []" trick).

> A few people voiced concerns that there are libraries using annotations
> for non-typing purposes. However, none of the named libraries would be
> invalidated by this PEP. They do require adapting to the new
> requirement to call ``eval()`` on the annotation with the correct
> ``globals`` and ``locals`` set.

Since this is likely to be a common task for such libraries, can we have an evaluate_annotations() function to do this, rather than have everyone reinvent the wheel?

    def func(arg:int):
        ...

    evaluate_annotations(func)
    assert func.__annotations__['arg'] is int

It could be a decorator, as well as modifying __annotations__ in place.

I imagine something with a signature like this:

    def evaluate_annotations(
            obj:Union[Function, Class],
            globals:Dict=None,
            locals:Dict=None
            )->Union[Function, Class]:
        """Evaluate the __annotations__ of a function, or recursively
        a class and all its methods. Replace the __annotations__ in
        place. Returns the modified argument, making this suitable as
        a decorator.

        If globals is not given, it is taken from the function.__globals__
        or class.__module__ if available. If locals is not given, it
        defaults to the current locals.
        """
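Used as a decorator, the simple case would then look like this (a sketch of the same hypothetical helper):

    @evaluate_annotations
    class Thing:
        count: int

    assert Thing.__annotations__['count'] is int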
--
Steve

From n.jayaprakash at intel.com Thu Nov 2 12:41:48 2017
From: n.jayaprakash at intel.com (Jayaprakash, N)
Date: Thu, 2 Nov 2017 16:41:48 +0000
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
References: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
Message-ID:

Would you consider adding thread support in this port of Python for EDK2 shell?

Regards,
JP

-----Original Message-----
[...snip: Brian Richardson's reply and the original announcement, quoted in full earlier in the thread...]

From brett at python.org Thu Nov 2 13:00:19 2017
From: brett at python.org (Brett Cannon)
Date: Thu, 02 Nov 2017 17:00:19 +0000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <20171102154516.GG9068@ando.pearwood.info>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On Thu, 2 Nov 2017 at 08:46 Steven D'Aprano <steve at pearwood.info> wrote:
> On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:
>
> > PEP: 563
> > Title: Postponed Evaluation of Annotations
>
> > This PEP proposes changing function annotations and variable annotations
> > so that they are no longer evaluated at function definition time.
> > Instead, they are preserved in ``__annotations__`` in string form.
>
> This means that now *all* annotations, not just forward references, are
> no longer validated at runtime and will allow arbitrary typos and
> errors:
>
>     def spam(n:itn):  # now valid
>         ...
>
> Up to now, it has been only forward references that were vulnerable to
> that sort of thing. Of course running a type checker should pick those
> errors up, but the evaluation of annotations ensures that they are
> actually valid (not necessarily correct, but at least a valid name),
> even if you happen to not be running a type checker. That's useful.
>
> Are we happy to live with that change?

I would say "yes" for two reasons. One, if you're bothering to provide type hints then you should be testing those type hints. So as you pointed out, Steve, that will be caught at that point.

Two, code editors with auto-completion will help prevent this kind of typo. Now I would never suggest that we design Python with expectations of what sort of tooling people have available, but in this instance it will help. It also feeds into a question you ask below...

> > Rationale and Goals
> > ===================
> >
> > PEP 3107 added support for arbitrary annotations on parts of a function
> > definition. Just like default values, annotations are evaluated at
> > function definition time. This creates a number of issues for the type
> > hinting use case:
> >
> > * forward references: when a type hint contains names that have not been
> >   defined yet, that definition needs to be expressed as a string
> >   literal;
>
> After all the discussion, I still don't see why this is an issue.
> Strings make perfectly fine forward references. What is the problem
> that needs solving? Is this about people not wanting to type the leading
> and trailing ' around forward references?

I think it's mainly about the next point you ask about...

> > * type hints are executed at module import time, which is not
> >   computationally free.
>
> True; but is that really a performance bottleneck? If it is, that should
> be stated in the PEP, and state what typical performance improvement
> this change should give.
>
> After all, if we're going to break people's code in order to improve
> performance, we should at least be sure that it improves performance :-)

The cost of constructing some of the objects used as type hints can be very expensive and make importing really expensive (this has been pointed out by Lukasz previously as well as Inada-san). By making Python itself not have to construct objects from e.g. the 'typing' module at runtime, you then don't pay a runtime penalty for something you're almost never going to use at runtime anyway.

> > Postponing the evaluation of annotations solves both problems.
>
> Actually it doesn't. As your PEP says later:
>
> > This PEP is meant to solve the problem of forward references in type
> > annotations. There are still cases outside of annotations where
> > forward references will require usage of string literals. Those are
> > listed in a later section of this document.
>
> So the primary problem this PEP is designed to solve isn't actually
> solved by this PEP.

I think the performance bit is really the big deal here.

And as I mentioned earlier, if you turn all of your type hints into strings, you lose auto-completion/intellisense which is a shame.

I think there's also a benefit here of promoting the fact that type hints are not a runtime thing, they are a static analysis thing.
By requiring the extra step to convert from a string to an actual object, it helps get the point across that type hints are just bits of metadata for tooling and not something you're really expected to interact with at runtime unless you have a really good reason to.

So I'm +1 on the idea, but the __future__ statement is a bit too generic for me. I would prefer something like `from __future__ import annotation_strings` or `annotations_as_strings`.

-Brett

> [...snip: the remainder of Steven's message, quoted in full above...]

From bcounted11 at gmail.com Thu Nov 2 13:52:08 2017
From: bcounted11 at gmail.com (Bob Woolsey)
Date: Thu, 2 Nov 2017 13:52:08 -0400
Subject: [Python-Dev] Interesting what version of python should I start with
Message-ID:

Sent from my iPhone

From ronaldoussoren at mac.com Thu Nov 2 13:17:55 2017
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Thu, 02 Nov 2017 18:17:55 +0100
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID: <21A3BAE4-2EDA-4299-A1FB-F643ECB2C984@mac.com>

> On 1 Nov 2017, at 22:47, Ned Deily wrote:
>
> Happy belated Halloween to those who celebrate it; I hope it wasn't too scary! Also possibly scary: we have just a little over 12 weeks remaining until Python 3.7's feature code cutoff, 2018-01-29. Those 12 weeks include a number of traditional holidays around the world so, if you are planning on writing another PEP for 3.7 or working on getting an existing one approved or getting feature code reviewed, please plan accordingly. If you have something in the pipeline, please either let me know or, when implemented, add the feature to PEP 537, the 3.7 Release Schedule PEP.

I'd still like to finish PEP 447, but don't know if I can manage to find enough free time to do so.
Ronald

From brett at python.org Thu Nov 2 13:19:04 2017
From: brett at python.org (Brett Cannon)
Date: Thu, 02 Nov 2017 17:19:04 +0000
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
References: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
Message-ID:

On Wed, 1 Nov 2017 at 16:17 Lukasz Langa wrote:
> I find this sad. In the JavaScript community the existence of Babel is
> very important for the long-term evolution of the language independently
> from the runtime. With Babel, JavaScript programmers can utilize new
> language syntax while being able to deploy on dated browsers. While there's
> always some experimentation, I doubt our community would abuse the new
> syntactic freedom that the PEP provided.
>
> Then again, maybe we should do what Babel did, e.g. release a tool like it
> totally separately from the runtime.

I think the trick here would be getting people more comfortable with ahead-of-time compilation and then adding the appropriate support to bytecode files to load other "optimization" levels/tags. Then you load the .pyc files and rely on co_lnotab as Victor pointed out to get your source mapping by compiling your source code explicitly instead of as a side-effect of import. And since this approach would then just be about generalizing how to specify different tags to match against in .pyc file names it's easier to get accepted.

-Brett

>
> - Ł
>
> > On Oct 17, 2017, at 1:23 PM, Victor Stinner wrote:
> >
> > Hi,
> >
> > I rejected my own PEP 511 "API for code transformers" that I wrote in
> > January 2016:
> >
> > https://github.com/python/peps/commit/9d8fd950014a80324791d7dae3c130b1b64fdace
> >
> > Rejection Notice:
> >
> > """
> > This PEP was rejected by its author.
> >
> > This PEP was seen as blessing new Python-like programming languages
> > which are close but incompatible with the regular Python language. It
> > was decided to not promote syntaxes incompatible with Python.
> >
> > This PEP was also seen as a nice tool to experiment new Python features,
> > but it is already possible to experiment them without the PEP, only with
> > importlib hooks. If a feature becomes useful, it should be directly part
> > of Python, instead of depending on an third party Python module.
> >
> > Finally, this PEP was driven by the FAT Python optimization project
> > which was abandoned in 2016, since it was not possible to show any
> > significant speedup, but also because of the lack of time to implement
> > the most advanced and complex optimizations.
> > """
> >
> > Victor
From jlehtosalo at gmail.com Thu Nov 2 14:39:59 2017
From: jlehtosalo at gmail.com (Jukka Lehtosalo)
Date: Thu, 2 Nov 2017 18:39:59 +0000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <20171102154516.GG9068@ando.pearwood.info>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On Thu, Nov 2, 2017 at 3:45 PM, Steven D'Aprano wrote:
> On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:
>
> > This PEP proposes changing function annotations and variable annotations
> > so that they are no longer evaluated at function definition time.
> > Instead, they are preserved in ``__annotations__`` in string form.
>
> This means that now *all* annotations, not just forward references, are
> no longer validated at runtime and will allow arbitrary typos and
> errors:
>
>     def spam(n:itn):  # now valid
>         ...
>
> Up to now, it has been only forward references that were vulnerable to
> that sort of thing. Of course running a type checker should pick those
> errors up, but the evaluation of annotations ensures that they are
> actually valid (not necessarily correct, but at least a valid name),
> even if you happen to not be running a type checker. That's useful.
>
> Are we happy to live with that change?

Within functions misspellings won't be caught until you invoke a function:

    def spam(s):
        return itn(s)  # no error unless spam() is called

We've lived with this for a long time and generally people seem to be happy with it. The vast majority of code in non-trivial programs (where type annotations are useful) tends to be within functions, so this will only slightly increase the number of things that won't be caught without running tests (or running the type checker).

As type checking has become the main use case for annotations, using annotations without a type checker is fast becoming a marginal use case. Type checkers can easily and reliably validate that names in annotations aren't misspelled.

> > * forward references: when a type hint contains names that have not been
> >   defined yet, that definition needs to be expressed as a string
> >   literal;
>
> After all the discussion, I still don't see why this is an issue.
> Strings make perfectly fine forward references. What is the problem
> that needs solving? Is this about people not wanting to type the leading
> and trailing ' around forward references?

Let's make a thought experiment. What if every forward reference would require special quoting? Would Python programmers be happy with this? Say, let's use ! as a suffix to mark a forward reference. They make perfectly fine forward references. They are visually pretty unobtrusive (I'm not suggesting $ or other ugly perlisms):

    def main():
        args = parse_args!()  # A forward reference
        do_stuff!(args)       # Explicit is better than implicit

    def parse_args():
        ...

    def do_stuff(args):
        ...

Of course, I'm not seriously proposing this, but this highlights the fact that in normal code forward references "just work" (at least usually), and if we'd require a special quoting mechanism to use them anywhere, Python would look uglier and more inconsistent. Nobody would be happy with this change, even though you'd only have to type a single ! character extra -- that's not a lot of work, right?

I think that the analogy is reasonable. In type-checked code, annotations are one of the most widely used language features -- it's quite possible to have annotations for almost every function in a code base.
This is not a marginal feature, and people expect commonly used features to feel polished and usable, not inconsistent and hacky. It's quite possible that the first type-annotated experiment a user writes requires the use of forward references, and this gives a pretty bad first impression -- not unlike how the ! forward reference would make the first impression of using non-type-checked Python pretty awkward.

Here are more arguments why literal escapes are a usability problem:

1) It's hard to predict when string quotes are needed. Real-world large code bases tend to have a lot of import cycles, and string literal escapes are often needed within import cycles. However, they aren't always needed. To annotate code correctly, you frequently need to understand how the file you are editing is related to other modules in terms of import cycle structure. In large code bases this can be very difficult to keep in your head, so basically adding forward references becomes a matter of tweak-until-it-works. So either each time you write an annotation, you can look at how imports are structured -- to see whether a particular type needs to be quoted -- or you can guess and hope for the best. This is a terrible user experience and increases cognitive load significantly. Our goal should not be to just have something that technically 'works', as this is a very low standard. I want Python to be easy to use, intuitive and elegant. I don't expect that anybody who has annotated large code bases could consider string literal forward references to be any of those.

2) It's one of the top complaints from users. Even a non-user with a basic understanding of mypy told me what amounts to "Python doesn't have real static typing; forward references make it obvious that types are just an afterthought".

3) It's not intuitive for many programmers, as string literals aren't used frequently in Python for quoting other kinds of forward references. It may be intuitive for programmers with a deep Python understanding, but they are a minority. Other mainstream languages with type annotations don't use quoting for forward references. C requires forward references to be declared (but only once per type, not on every use), but I don't think that C is a good model for Python anyway.

4) If you move chunks of code around, suddenly you may have to update your forward references -- some quotes won't be needed any more, and new ones may be required.

5) Some editors and other tools highlight string literals with a color different from other type annotations, making them look ugly and out of place. Some might fix this eventually, but you can argue that the current behavior is reasonable, since forward references are just ordinary string literals at runtime.

> > This PEP is meant to solve the problem of forward references in type
> > annotations. There are still cases outside of annotations where
> > forward references will require usage of string literals. Those are
> > listed in a later section of this document.
>
> So the primary problem this PEP is designed to solve isn't actually
> solved by this PEP.

Forward references in type annotations account for the vast majority of forward references. Everything else is pretty marginal and typically involves more advanced type system features. We can probably come up with some data on this. Also, the other contexts are more clearly just regular Python expressions, so the requirement to use forward references is arguably less surprising.
Consider this example:

    FooList = List[Foo]  # Foo only gets defined below

    class Foo: ...

I think that even a pretty rudimentary understanding of how Python works should be enough to understand that the above won't work at runtime without quoting. However, this is less obvious:

    class A:
        ...
        def copy(self) -> 'A':  # But wasn't A already declared above?
            return A(...)       # And here we can refer to A directly

Also, forward references already only work sometimes in non-type-checked Python, so the proposed change would still be kind of consistent with how things work now. Example:

    def f():
        return A()  # Probably ok?

    a = A()  # Not ok
    f()      # Not ok

    class A: ...

    f()  # Ok

> Static checkers don't see __annotations__ at all, since that's not
> available at edit/compile time. Static checkers see only the source
> code. The checker (and the human reader!) will no longer have the useful
> clue that something is a forward reference:

A static checker really doesn't benefit from knowing that something is a forward reference. Mypy, for example, doesn't need to know if something is a forward reference to decide what it refers to. As I discussed above, knowing whether something is a forward reference can actually be less than helpful for a human reader as well.

Jukka

From rymg19 at gmail.com Thu Nov 2 14:58:13 2017
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Thu, 2 Nov 2017 13:58:13 -0500
Subject: [Python-Dev] Interesting what version of python should I start with
In-Reply-To:
References:
Message-ID:

This list is for development of Python itself, not development with Python. You want python-list:

https://mail.python.org/mailman/listinfo/python-list

That being said: use the latest version (3.6 as of right now).

On Thu, Nov 2, 2017 at 12:52 PM, Bob Woolsey wrote:
> Sent from my iPhone

--
Ryan (????)
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else
https://refi64.com/

From levkivskyi at gmail.com Thu Nov 2 15:21:00 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 2 Nov 2017 20:21:00 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On 2 November 2017 at 18:00, Brett Cannon wrote:
> On Thu, 2 Nov 2017 at 08:46 Steven D'Aprano wrote:
>> On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:
>> [...snip...]
>
> I think the performance bit is really the big deal here.

I don't think so. Although subscripting generics is indeed very expensive, this is heavily optimised by various caches. I think PEP 560 might actually have comparable (or even bigger) performance effects. IIUC performance is listed second in the PEP for a reason.

I am OK with using quotes for forward references, but I have seen many people complaining about this (especially novices). I think Jukka is right here. We can't make all forward references unquoted, but those that will still need quoting appear in more advanced situations, like bounded type variables and derived generics. But in these cases it is clear that a forward reference appears in a runtime context.
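For example, a bound is evaluated when the TypeVar is created, so something like this sketch would still need the quoted form:

    from typing import TypeVar

    T = TypeVar('T', bound='Base')  # 'Base' is only defined below, so the
                                    # bound must stay a string literal

    class Base: ...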
-- Ivan

From levkivskyi at gmail.com Thu Nov 2 15:32:06 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 2 Nov 2017 20:32:06 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
Message-ID:

On 1 November 2017 at 23:48, Lukasz Langa wrote:

> Runtime annotation resolution and class decorators
> --------------------------------------------------
>
> Metaclasses and class decorators that need to resolve annotations for
> the current class will fail for annotations that use the name of the
> current class. Example::
>
>     def class_decorator(cls):
>         annotations = get_type_hints(cls)  # raises NameError on 'C'
>         print(f'Annotations for {cls}: {annotations}')
>         return cls
>
>     @class_decorator
>     class C:
>         singleton: 'C' = None
>
> This was already true before this PEP. The class decorator acts on
> the class before it's assigned a name in the current definition scope.

Just a random idea: maybe this can be resolved by just updating the localns before calling get_type_hints() like this:

    localns = locals().copy()
    localns.update({cls.__name__: cls})

In general I like how the PEP is written now. I maybe would add examples for this:

> the cases listed above might be worked around by placing the usage
> in a if TYPE_CHECKING: block.
> ...
> For named tuples, using the new class definition syntax introduced in
> Python 3.6 solves the issue.

actually showing something like:

    if TYPE_CHECKING:
        Alias = List[Tuple[int, SomeClass]]

    class NT(NamedTuple):
        one: SomeClass
        other: Alias

    class SomeClass: ...

-- Ivan

From tjreedy at udel.edu Thu Nov 2 16:15:50 2017
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 2 Nov 2017 16:15:50 -0400
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: <877ev9kqlv.fsf@metapensiero.it>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <877ev9kqlv.fsf@metapensiero.it>
Message-ID:

On 11/2/2017 4:54 AM, Lele Gaifax wrote:
> I'm not sure if the rebase should have been done on the original branch
> instead of creating a new one, or instead if I should open a new PR (and close
> the original one?).

It is normal to 'git merge upstream/master' after updating the master branch and checking out the pr branch, and then push the updated branch. I sometimes have to do that when reviewing a pr. Often, for me, the problem is not merge conflicts, but re version conflicts. However, if there are merge conflicts that make it easier to reproduce the patch from the current master, closing and opening a new PR is ok too.

--
Terry Jan Reedy

From blibbet at gmail.com Thu Nov 2 15:37:05 2017
From: blibbet at gmail.com (Blibbet)
Date: Thu, 2 Nov 2017 12:37:05 -0700
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To:
References: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
Message-ID:

On 11/02/2017 09:41 AM, Jayaprakash, N wrote:
> Would you consider adding thread support in this port of Python for EDK2 shell?

FYI, this library adds thread support to UEFI:

https://github.com/Openwide-Ingenierie/GreenThreads-UEFI

Note that the library is GPLv2, ...but the author (a 1-person project) could be asked to relicense to BSD to fit into Tianocore.
Note that the library is currently Intel x64-centric, and contains a bit of assembly. It will need some ARM/RISC-V/x86 contributions.

HTH,
Lee Fisher

From sf at fermigier.com Thu Nov 2 16:13:09 2017
From: sf at fermigier.com (Stéfane Fermigier)
Date: Thu, 2 Nov 2017 21:13:09 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On Thu, Nov 2, 2017 at 7:39 PM, Jukka Lehtosalo wrote:
> As type checking has become the main use case for annotations, using
> annotations without a type checker is fast becoming a marginal use case.
> Type checkers can easily and reliably validate that names in annotations
> aren't misspelled.

Another common use case is dependency injection / IoC. Examples include:

- Injector (https://github.com/alecthomas/injector):

    >>> class Outer:
    ...     @inject
    ...     def __init__(self, inner: Inner):
    ...         self.inner = inner

- Flask-Injector (ok it's the same underlying injector):

    # Route with injection
    @app.route("/foo")
    def foo(db: sqlite3.Connection):
        users = db.execute('SELECT * FROM users').all()
        return render("foo.html")

- Apistar components (https://github.com/encode/apistar#components):

    def say_hello(user: User):
        return {'hello': user.username}

=> In each of the examples, the type annotations are used at runtime by the IoC container to inject an object of the appropriate type, based on some specifications. They may or may not be used by a typechecker too, but that's secondary.

S.

--
Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier
Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/
Chairman, Free&OSS Group / Systematic Cluster - http://www.gt-logiciel-libre.org/
Co-Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/
Founder & Organiser, PyData Paris - http://pydata.fr/
---
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." -- R. Buckminster Fuller

From lukasz at langa.pl Thu Nov 2 17:25:25 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Thu, 2 Nov 2017 14:25:25 -0700
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID: <01164F69-7445-44C3-8859-D13FC44E31AD@langa.pl>

> On Nov 2, 2017, at 1:13 PM, Stéfane Fermigier wrote:
>
> Another common use case is dependency injection / IoC:
>
> - Injector (https://github.com/alecthomas/injector):
> - Flask-Injector (ok it's the same underlying injector):

Pretty cool! This is already using `get_type_hints()` so it's perfectly compatible with PEP 563:
https://github.com/alecthomas/injector/blob/master/injector.py#L915

> - Apistar components (https://github.com/encode/apistar#components):

This is using `inspect` directly so will have to migrate to `get_type_hints()` later. But, as the PEP mentions, it should already be using `get_type_hints()` since people are free to use forward references even today.

Thanks for flagging those use cases!

- Ł
From eric at trueblade.com Thu Nov 2 17:28:10 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Thu, 2 Nov 2017 17:28:10 -0400
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

On 11/1/2017 5:47 PM, Ned Deily wrote:
> [...snip: the 12-weeks-to-cutoff reminder, quoted in full above...]

I hope to be able to free up some time to complete PEP 557, Data Classes.

Eric.

From levkivskyi at gmail.com Thu Nov 2 17:57:49 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 2 Nov 2017 22:57:49 +0100
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To:
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

I will be happy to see PEP 544 and PEP 560 in Python 3.7, and maybe also PEP 562 (if we decide on it and I have time).

-- Ivan

From songofacandy at gmail.com Thu Nov 2 20:55:39 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Fri, 3 Nov 2017 09:55:39 +0900
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

I agree 100% with Łukasz and Brett. +1 and thanks for writing this PEP.

INADA Naoki

On Fri, Nov 3, 2017 at 2:00 AM, Brett Cannon <brett at python.org> wrote:
> [...snip: Brett's reply, quoted in full above...]
From guido at python.org Thu Nov 2 22:19:28 2017
From: guido at python.org (Guido van Rossum)
Date: Thu, 2 Nov 2017 19:19:28 -0700
Subject: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution
In-Reply-To:
References: <20171016170631.375edd56@fsol> <4f15b978-d786-3b80-77f7-c6cb7d313573@email.de> <90a1475b-4fa1-3323-fae8-29d65d54fcc5@email.de>
Message-ID:

Yay! Record time from acceptance to implementation. :-)

On Thu, Nov 2, 2017 at 8:16 AM, Victor Stinner wrote:
> [...snip: Victor's announcement, quoted in full above...]

--
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com Fri Nov 3 00:26:33 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 3 Nov 2017 14:26:33 +1000
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To:
References: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
Message-ID:

On 2 November 2017 at 23:42, Victor Stinner wrote:
> (Email resent, I first sent it to Nick privately by mistake.)

Oops, I didn't even notice that. Reposting my reply below.
>
> Code objects already have an instruction pointer => line number mapping
> table: the code.co_lnotab field. It's documented at:
> https://github.com/python/cpython/blob/master/Objects/lnotab_notes.txt
>
> This table is built from the line number information of the AST tree.
>
> The mapping table is optimized to be small. Before Python 3.5, line
> numbers had to be monotonic. Since Python 3.6, you can "move" instructions
> at the AST level, and so have "non-monotonic" line numbers (ex: line
> 1, line 3, line 2, line 4).

Right, and linecache knows how to read that. However, it can only do so
if the source files are on the running system with the bytecode files,
*and* the source code we're interested in is the source code that was
actually compiled by the interpreter.

Source code transformers fail that second precondition (since the
interpreter only sees the post-transformation code), and this is one of
the big reasons folks ended up writing actual single source 2/3
compatible code bases rather than running 2to3 as a source code
transformer when building packages: with transformed source,
conventional tracebacks quote the line from the transformed source
code, *not* the line in the original pre-transformation source code.

However, if the code transformer were to emit a JavaScript-style source
map in addition to emitting the transformed code, then automated
tooling could take a traceback that referenced lines in the transformed
code, and work out the equivalent traceback for the pre-transformation
code.

(I believe Cython has something like that in order to provide its HTML
annotation mode, and PyPy's JIT can trace from machine code back to the
related Python source lines, but we don't have anything that's
independent of a particular toolchain the way source maps are)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Fri Nov  3 01:36:14 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 3 Nov 2017 15:36:14 +1000
Subject: [Python-Dev] PEP 511 (code transformers) rejected
In-Reply-To: 
References: <010E2EB8-7771-415C-80A5-109929789527@langa.pl>
Message-ID: 

On 3 November 2017 at 03:19, Brett Cannon wrote:
> On Wed, 1 Nov 2017 at 16:17 Lukasz Langa wrote:
>>
>> I find this sad. In the JavaScript community the existence of Babel is
>> very important for the long-term evolution of the language independently
>> from the runtime. With Babel, JavaScript programmers can utilize new
>> language syntax while being able to deploy on dated browsers. While there's
>> always some experimentation, I doubt our community would abuse the new
>> syntactic freedom that the PEP provided.
>>
>> Then again, maybe we should do what Babel did, e.g. release a tool like it
>> totally separately from the runtime.
>
> I think the trick here would be getting people more comfortable with
> ahead-of-time compilation and then adding the appropriate support to
> bytecode files to load other "optimization" levels/tags. Then you load the
> .pyc files and rely on co_lnotab as Victor pointed out to get your source
> mapping by compiling your source code explicitly instead of as a side-effect
> of import. And since this approach would then just be about generalizing how
> to specify different tags to match against in .pyc file names it's easier to
> get accepted.

I'm not sure it's quite that simple, as you still need to define:

- how does the import system know that a given input file is a
"cache-only" import?
- how do linecache and similar tools know what source file the pyc
maps back to?

Right now, the source-file/cache-file relationship is hardcoded in two
functions:

* https://docs.python.org/3/library/importlib.html#importlib.util.cache_from_source; and
* https://docs.python.org/3/library/importlib.html#importlib.util.source_from_cache

If we look at the code from hylang's custom importer for ".hy" files
[1] we can see that the "cache_from_source" implementation has a
convenient property: it ignores the source extension entirely, which
means it works for input paths with arbitrary file extensions, not
just Python source files.

This means that hy's import system integration can use that helper,
but if you have a "foo.hy" source file and a "__pycache__/foo-.pyc"
output file, the regular import machinery will *ignore* the latter
file, and you have to register Hy's custom importer in order for
Python to acknowledge that the cached file exists.

The reverse lookup, by contrast, always assumes that the source suffix
is a ".py" file (which is already broken for "pyw" source files on
Windows). Correcting for that at the standard library level would
require changing the cache filename format to include an optional
additional element: the source file extension (source_from_cache
doesn't assume it has access to the pyc file itself - only the
filename).

So if we went down that path, then the import system level additions
we'd want would probably be along the lines of:

- an enhancement to the cache file naming scheme to allow source file
extensions to be saved in PYC filenames
- an update to the SourceFileLoader to use that new naming scheme when
implicitly compiling source files with the pyw extension
- a new "CacheOnlyLoader" together with a new CACHE_ONLY_SUFFIXES list
- a new ".pyb" suffix (for "Python backport") as the sole default
entry in CACHE_ONLY_SUFFIXES (awful pun alert: you could also argue
that this suffix makes sense because "pyb files come before pyc
files")

To make this syntactic polyfill approach usable with older Python
versions (including 2.7), importlib2 could be resynced to the first
importlib version that supported this (importlib2 is currently up to
date with Python 3.5's multi-phase initialisation support, since that
was the last major functional change in importlib).

Cheers,
Nick.

[1] https://github.com/hylang/hy/blob/master/hy/importer.py

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Fri Nov  3 02:22:36 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 3 Nov 2017 16:22:36 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: 
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
 <20171102154516.GG9068@ando.pearwood.info>
Message-ID: 

On 3 November 2017 at 03:00, Brett Cannon wrote:
> The cost of constructing some of the objects used as type hints can be very
> expensive and make importing really expensive (this has been pointed out by
> Lukasz previously as well as Inada-san). By making Python itself not have to
> construct objects from e.g. the 'typing' module at runtime, you then don't
> pay a runtime penalty for something you're almost never going to use at
> runtime anyway.

Another point worth noting is that merely importing the typing module
is expensive:

$ python -m perf timeit -s "from importlib import reload; import
typing" "reload(typing)"
.....................
Mean +- std dev: 10.6 ms +- 0.6 ms

10 ms is a *big* chunk out of a CLI application's startup time budget.
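(To make that concrete, here is a minimal sketch of the guarded-import
pattern this enables. The TYPE_CHECKING builtin proposed just below is
hypothetical, so the sketch includes a fallback that keeps it runnable
on current Python:)

    # Sketch only: string annotations are never evaluated at runtime, so
    # the typing import below is skipped entirely on the normal startup path.
    try:
        TYPE_CHECKING          # the proposed builtin (hypothetical today)
    except NameError:
        TYPE_CHECKING = False  # cheap fallback on current Python versions

    if TYPE_CHECKING:
        # only for type checkers, assuming they honor the (hypothetical) builtin
        from typing import Optional

    def read_config(path: str) -> "Optional[dict]":
        ...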
So I think to be truly effective in achieving its goals, the PEP will
also need to promote TYPE_CHECKING to a builtin, such that folks can
write:

    from __future__ import lazy_annotations # More self-explanatory name
    if TYPE_CHECKING:
        import typing

and be able to express their type annotations in a way that a static
type checker will understand, while incurring near-zero runtime
overhead.

[snip]

Regarding the thunk idea: the compiler can pretty easily rewrite all
annotations from being "<expr>" to "lambda: <expr>". This has a couple
of very nice properties:

* rendering them later is purely a matter of calling them, since the
compiler will take care of capturing all the right namespaces and name
references (including references from method declarations to the type
defining them)
* quoted and unquoted annotations will reliably render differently
(since one will return a string when called, and the other won't)

The big downside to this approach is that it makes simple annotations
*more* expensive rather than less expensive.

Baseline (mentioning a builtin):

$ python -m perf timeit "str"
.....................
Mean +- std dev: 27.1 ns +- 1.4 ns

String constant (~4x speedup):

$ python -m perf timeit "'str'"
.....................
Mean +- std dev: 7.84 ns +- 0.22 ns

Lambda expression (~2x slowdown):

$ python -m perf timeit "lambda: str"
.....................
Mean +- std dev: 62.3 ns +- 1.4 ns

That said, I'll also point out the following:

* for application startup purposes, if you save 10 ms by not importing
the typing module, then that buys you time for around 285 *thousand*
implicit lambda declarations before your startup actually gets slower
(assuming the relative timings on my machine are typical)
* for nested functions, the overhead of the function call is enough
that the dramatic 4x speedup vs 2x slowdown ratio disappears (see P.S.
for numbers)
* I don't believe we've really invested much time in optimising the
creation of zero-argument lambdas yet, so there may be options for
bringing the numbers down for the lambda-based approach

The other key downside to the lambda-based approach is that it hits
the same backwards compatibility problem we hit when list
comprehensions were given their own nested scope: if we push
annotations down into a new scope, they won't be able to see
class-level attributes any more. For comprehensions, we could
partially mitigate that by evaluating the outermost iterable
expression in the class scope, but there's no equivalent to that
available for annotations (since the annotation's lambda expression
may never be called at all).

Cheers,
Nick.

P.S. Relative performance of the annotation styles in a nested
function definition

Baseline:

$ python -m perf timeit -s "def f(): str" "f()"
.....................
Mean +- std dev: 103 ns +- 3 ns

String constant (~1.25x speedup):

$ python -m perf timeit -s "def f(): 'str'" "f()"
.....................
Mean +- std dev: 77.0 ns +- 1.7 ns

Lambda expression (~1.5x slowdown):

$ python -m perf timeit -s "def f(): (lambda: str)" "f()"
.....................
Mean +- std dev: 149 ns +- 6 ns -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Nov 3 02:27:47 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Nov 2017 16:27:47 +1000 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: On 3 November 2017 at 04:39, Jukka Lehtosalo wrote: >> > * forward references: when a type hint contains names that have not been >> > defined yet, that definition needs to be expressed as a string >> > literal; >> >> After all the discussion, I still don't see why this is an issue. >> Strings makes perfectly fine forward references. What is the problem >> that needs solving? Is this about people not wanting to type the leading >> and trailing ' around forward references? > > > Let's make a thought experiment. What if every forward reference would > require special quoting? Would Python programmers be happy with this? Say, > let's use ! as a suffix to mark a forward reference. They make perfectly > fine forward references. They are visually pretty unobtrusive (I'm not > suggesting $ or other ugly perlisms): > > def main(): > args = parse_args!() # A forward reference > do_stuff!(args) # Explicit is better than implicit > > def parse_args(): > ... > > def do_stuff(args): > ... > > Of course, I'm not seriously proposing this, but this highlights the fact > that in normal code forward references "just work" (at least usually), and > if we'd require a special quoting mechanism to use them anywhere, Python > would look uglier and more inconsistent. Nobody would be happy with this > change, even though you'd only have to type a single ! character extra -- > that's not a lot work, right? > > I think that the analogy is reasonable. I think it also makes a pretty decent argument that pushing function annotations into implicit lambda expressions will be easier to explain to people than converting them into strings, and then having to explain an entirely new complex set of name resolution rules. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lele at metapensiero.it Fri Nov 3 05:21:21 2017 From: lele at metapensiero.it (Lele Gaifax) Date: Fri, 03 Nov 2017 10:21:21 +0100 Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <877ev9kqlv.fsf@metapensiero.it> Message-ID: <87o9ojafam.fsf@metapensiero.it> Terry Reedy writes: > On 11/2/2017 4:54 AM, Lele Gaifax wrote: > >> I'm not sure if the rebase should have been done on the original branch >> instead of creating a new one, or instead if I should open a new PR (and >> close the original one?). > > It is normal to 'git merge upstream/master' after updating the master branch > and checking out the pr branch, and then push the updated branch. I > sometimes have to do that when reviewing a pr. Often, for me, the problem > is not merge conflicts, but re version conflicts. However, if there are > merge conflicts that make it easier to reproduce the patch from the current > master, closing and opening a new PR is ok too. There was one major conflict, due to the NEWS->blurb transition, and to fix that I rebased the patchset onto the (at the time) current master removing one commit and recording a new one with a proper NEWS.d entry. 
It seemed correct to me to push the result into a different branch of
my own GH fork, and to stick a note about the fact in the PR, asking
about the preferred workflow: since a few people had already reviewed
the code, I didn't feel right about overwriting that history (I still
don't know how GH behaves when one force-pushes a different history to
a branch already tracked by a PR).

In the end, I closed the original PR#377 and opened a new
https://github.com/python/cpython/pull/4238.

I need some hints on how to solve a compilation issue on Windows, as I
didn't find a portable wrapper interface for the unlink(2) system
call: when the backup fails, the code wants to remove the possibly
corrupted/incomplete external file, but apparently the <unistd.h>
header is not present on that system, so I explicitly declared the
unlink() function as done for example by Modules/posixmodule.c, but
that does not seem to be the right solution.

Thanks in advance, ciao, lele.
-- 
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
lele at metapensiero.it | -- Fortunato Depero, 1929.

From storchaka at gmail.com  Fri Nov  3 06:01:28 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 3 Nov 2017 12:01:28 +0200
Subject: [Python-Dev] Reorganizing re and curses related code
Message-ID: 

Currently the implementation of the re and curses related modules is
spread over several files:

re:
Lib/re.py
Lib/sre_compile.py
Lib/sre_constants.py
Lib/sre_parse.py

_sre:

Modules/_sre.c
Modules/sre_constants.h
Modules/sre.h
Modules/sre_lib.h

_curses:
Include/py_curses.h
Modules/_cursesmodule.c
Modules/_curses_panel.c

I want to make the re module a package, and move sre_*.py files into it.
Maybe later I'll add the sre_optimize.py file for separating
optimization from parsing and compiling to internal code. The
original sre_*.py files will be left for compatibility for a long time,
but they will just import their content from the re package.

The _sre implementation will be moved into the Modules/_sre/ directory.
This will just put the files in one place and will decrease the number
of files in the Modules/ directory.

The implementations of the _curses and _curses_panel modules together
with the common header file will be moved into the Modules/_curses/
directory. Excluding py_curses.h from the set of global headers will
increase the speed of rebuilding when modifying just the _curses
implementation (I did this too many times recently). In the future the
implementation of menu and forms extensions will be added (the patch
for menu was provided years ago). Since _cursesmodule.c is one of the
largest files (it defines hundreds of functions), it may be worth
extracting the implementation of the _curses.window class into a
separate file. And I want to implement the support of "soft
function-key labels". All this will increase the number of _curses
related files to 7.

curses already is a package.

Since virtually all changes in these files in recent years have been
made by me, I don't think this will harm other core developers. Are
there any objections?

From solipsis at pitrou.net  Fri Nov  3 06:12:56 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 3 Nov 2017 11:12:56 +0100
Subject: [Python-Dev] Reorganizing re and curses related code
References: 
Message-ID: <20171103111256.7256064c@fsol>

Sounds good to me :-)

Regards

Antoine.
On Fri, 3 Nov 2017 12:01:28 +0200
Serhiy Storchaka wrote:
> Currently the implementation of the re and curses related modules is
> spread over several files:
>
> re:
> Lib/re.py
> Lib/sre_compile.py
> Lib/sre_constants.py
> Lib/sre_parse.py
>
> _sre:
>
> Modules/_sre.c
> Modules/sre_constants.h
> Modules/sre.h
> Modules/sre_lib.h
>
> _curses:
> Include/py_curses.h
> Modules/_cursesmodule.c
> Modules/_curses_panel.c
>
> I want to make the re module a package, and move sre_*.py files into it.
> Maybe later I'll add the sre_optimize.py file for separating
> optimization from parsing and compiling to internal code. The
> original sre_*.py files will be left for compatibility for a long time,
> but they will just import their content from the re package.
>
> The _sre implementation will be moved into the Modules/_sre/ directory.
> This will just put the files in one place and will decrease the number
> of files in the Modules/ directory.
>
> The implementations of the _curses and _curses_panel modules together
> with the common header file will be moved into the Modules/_curses/
> directory. Excluding py_curses.h from the set of global headers will
> increase the speed of rebuilding when modifying just the _curses
> implementation (I did this too many times recently). In the future the
> implementation of menu and forms extensions will be added (the patch
> for menu was provided years ago). Since _cursesmodule.c is one of the
> largest files (it defines hundreds of functions), it may be worth
> extracting the implementation of the _curses.window class into a
> separate file. And I want to implement the support of "soft
> function-key labels". All this will increase the number of _curses
> related files to 7.
>
> curses already is a package.
>
> Since virtually all changes in these files in recent years have been
> made by me, I don't think this will harm other core developers. Are
> there any objections?
>

From ncoghlan at gmail.com  Fri Nov  3 06:29:33 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 3 Nov 2017 20:29:33 +1000
Subject: [Python-Dev] Reorganizing re and curses related code
In-Reply-To: 
References: 
Message-ID: 

On 3 November 2017 at 20:01, Serhiy Storchaka wrote:
>
> Since virtually all changes in these files in recent years have been made by
> me, I don't think this will harm other core developers. Are there any
> objections?

Sounds fine to me (and you may want to add an underscore prefix to the
sre_*.py files in their new home).

The one caveat I'll note is that this may limit automatic backporting
of fixes to these files (I'm not sure how good 'git cherry-pick' is at
handling file renames).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From storchaka at gmail.com  Fri Nov  3 09:02:55 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 3 Nov 2017 15:02:55 +0200
Subject: [Python-Dev] Reorganizing re and curses related code
In-Reply-To: 
References: 
Message-ID: 

03.11.17 12:29, Nick Coghlan wrote:
> On 3 November 2017 at 20:01, Serhiy Storchaka wrote:
>>
>> Since virtually all changes in these files in recent years have been made by
>> me, I don't think this will harm other core developers. Are there any
>> objections?
>
> Sounds fine to me (and you may want to add an underscore prefix to the
> sre_*.py files in their new home).
>
> The one caveat I'll note is that this may limit automatic backporting
> of fixes to these files (I'm not sure how good 'git cherry-pick' is at
> handling file renames).
I'm aware of this and tried to fix all known bugs (which can't be classified as a lack of a feature) in these modules before doing this change. There are two old bugs left in _sre, but they don't have fixes yet. From guido at python.org Fri Nov 3 10:36:08 2017 From: guido at python.org (Guido van Rossum) Date: Fri, 3 Nov 2017 07:36:08 -0700 Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff In-Reply-To: <87o9ojafam.fsf@metapensiero.it> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <877ev9kqlv.fsf@metapensiero.it> <87o9ojafam.fsf@metapensiero.it> Message-ID: Maybe we should remove typing from the stdlib? https://github.com/python/typing/issues/495 -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Nov 3 10:40:48 2017 From: guido at python.org (Guido van Rossum) Date: Fri, 3 Nov 2017 07:40:48 -0700 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: IMO the inability of referencing class-level definitions from annotations on methods pretty much kills this idea. On Thu, Nov 2, 2017 at 11:27 PM, Nick Coghlan wrote: > On 3 November 2017 at 04:39, Jukka Lehtosalo wrote: > >> > * forward references: when a type hint contains names that have not > been > >> > defined yet, that definition needs to be expressed as a string > >> > literal; > >> > >> After all the discussion, I still don't see why this is an issue. > >> Strings makes perfectly fine forward references. What is the problem > >> that needs solving? Is this about people not wanting to type the leading > >> and trailing ' around forward references? > > > > > > Let's make a thought experiment. What if every forward reference would > > require special quoting? Would Python programmers be happy with this? > Say, > > let's use ! as a suffix to mark a forward reference. They make perfectly > > fine forward references. They are visually pretty unobtrusive (I'm not > > suggesting $ or other ugly perlisms): > > > > def main(): > > args = parse_args!() # A forward reference > > do_stuff!(args) # Explicit is better than implicit > > > > def parse_args(): > > ... > > > > def do_stuff(args): > > ... > > > > Of course, I'm not seriously proposing this, but this highlights the fact > > that in normal code forward references "just work" (at least usually), > and > > if we'd require a special quoting mechanism to use them anywhere, Python > > would look uglier and more inconsistent. Nobody would be happy with this > > change, even though you'd only have to type a single ! character extra -- > > that's not a lot work, right? > > > > I think that the analogy is reasonable. > > I think it also makes a pretty decent argument that pushing function > annotations into implicit lambda expressions will be easier to explain > to people than converting them into strings, and then having to > explain an entirely new complex set of name resolution rules. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From storchaka at gmail.com  Fri Nov  3 10:50:54 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 3 Nov 2017 16:50:54 +0200
Subject: [Python-Dev] Remove typing from the stdlib (was: Reminder: 12
 weeks to 3.7 feature code cutoff)
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <877ev9kqlv.fsf@metapensiero.it> <87o9ojafam.fsf@metapensiero.it>
Message-ID: <59328520-d816-2d20-ce22-8e12fc7764f4@gmail.com>

03.11.17 16:36, Guido van Rossum wrote:
> Maybe we should remove typing from the stdlib?
> https://github.com/python/typing/issues/495

I didn't use typing, but AFAIK the most used feature from typing is
NamedTuple. If we move NamedTuple and maybe other convenient classes
not directly related to typing into collections or types modules, I
think removing typing from the stdlib will stress people less.

From sigmaepsilon92 at gmail.com  Fri Nov  3 03:01:51 2017
From: sigmaepsilon92 at gmail.com (Michael Zimmermann)
Date: Fri, 3 Nov 2017 08:01:51 +0100
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: 
References: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
Message-ID: 

> FYI, this library adds thread support to UEFI:
>
> https://github.com/Openwide-Ingenierie/GreenThreads-UEFI

IMO this library has some crucial problems like changing the TPL
during context switching.
For my project "EFIDroid" I've invested many months analyzing, testing
and implementing my own threading implementation based on
LK(LittleKernel, an MIT-licensed project) threads and get/set-context.

The result is a pretty stable implementation which can even be used in
UEFI drivers:
https://github.com/efidroid/uefi_edk2packages_EFIDroidLKLPkg/tree/master/UEFIThreads
I'm currently using this lib for my LKL(LinuxKernelLibrary) port to be
able to use Linux touchscreen drivers in UEFI - so you could say it
has been well tested.

The only "problem" is that it only supports ARM right now and that the
get/set context implementation was copied (and simplified) from glibc
which means that this part is GPL code.

Thanks
Michael Zimmermann

On Thu, Nov 2, 2017 at 8:37 PM, Blibbet wrote:
> On 11/02/2017 09:41 AM, Jayaprakash, N wrote:
>> Would you consider adding thread support in this port of Python for
> EDK2 shell?
>
> FYI, this library adds thread support to UEFI:
>
> https://github.com/Openwide-Ingenierie/GreenThreads-UEFI
>
> Note that the library is GPLv2, ...but the author (a 1-person project)
> could be asked to relicense to BSD to fit into Tianocore.
>
> Note that the library is currently Intel x64-centric, and contains a bit of
> assembly. Will need some ARM/RISC-V/x86 contributions.
>
> HTH,
> Lee Fisher
> _______________________________________________
> edk2-devel mailing list
> edk2-devel at lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel

From p.f.moore at gmail.com  Fri Nov  3 11:09:13 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 3 Nov 2017 15:09:13 +0000
Subject: [Python-Dev] Remove typing from the stdlib (was: Reminder: 12
 weeks to 3.7 feature code cutoff)
In-Reply-To: <59328520-d816-2d20-ce22-8e12fc7764f4@gmail.com>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <877ev9kqlv.fsf@metapensiero.it> <87o9ojafam.fsf@metapensiero.it>
 <59328520-d816-2d20-ce22-8e12fc7764f4@gmail.com>
Message-ID: 

On 3 November 2017 at 14:50, Serhiy Storchaka wrote:
> 03.11.17 16:36, Guido van Rossum wrote:
>> Maybe we should remove typing from the stdlib?
>> https://github.com/python/typing/issues/495
>
> I didn't use typing, but AFAIK the most used feature from typing is
> NamedTuple. If we move NamedTuple and maybe other convenient classes
> not directly related to typing into collections or types modules, I
> think removing typing from the stdlib will stress people less.

(Checks docs) Hmm, I'd missed that this was even in there.

Regardless of what happens with the typing module, I think this should
be moved. I expect there are many people who never looked at the docs
of the typing module because they don't use type annotations. I know
the class uses type annotations to define its attributes, but I don't
see that as an issue. Data classes do the same, and they won't be in
the typing module...

Paul

From victor.stinner at gmail.com  Fri Nov  3 12:15:23 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 3 Nov 2017 17:15:23 +0100
Subject: [Python-Dev] Remove typing from the stdlib
Message-ID: 

Hi,

2017-11-03 15:36 GMT+01:00 Guido van Rossum :
> Maybe we should remove typing from the stdlib?
> https://github.com/python/typing/issues/495

I'm strongly in favor of such a move.

My experience with asyncio in the stdlib is that users expect changes
faster than the very slow release process of the stdlib (a release
every 18 months on average).

I saw many PEPs and discussions on the typing design (meta-classes vs
regular classes), as if typing is not stable enough to be part of the
stdlib.

The typing module is not used yet in the stdlib, so there is no
technical reason to keep typing part of the stdlib. IMHO it's
perfectly fine to keep typing and annotations out of the stdlib, since
the venv & pip tooling is now rock solid ;-)

Victor

From eric at trueblade.com  Fri Nov  3 12:46:33 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Fri, 3 Nov 2017 12:46:33 -0400
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: 
References: 
Message-ID: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>

On 11/3/2017 12:15 PM, Victor Stinner wrote:
> Hi,
>
> 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
>> Maybe we should remove typing from the stdlib?
>> https://github.com/python/typing/issues/495

> The typing module is not used yet in the stdlib, so there is no
> technical reason to keep typing part of the stdlib. IMHO it's
> perfectly fine to keep typing and annotations out of the stdlib, since
> the venv & pip tooling is now rock solid ;-)

I'm planning on using it for PEP 557:
https://www.python.org/dev/peps/pep-0557/#class-variables

The way the code currently checks for this should still work if typing
is not in the stdlib, although of course it's assuming that the name
"typing" really is the "official" typing library.
From status at bugs.python.org Fri Nov 3 13:09:50 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 3 Nov 2017 18:09:50 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20171103170950.E869456CC4@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-10-27 - 2017-11-03) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 6255 ( -5) closed 37431 (+54) total 43686 (+49) Open issues with patches: 2416 Issues opened (36) ================== #27456: asyncio: set TCP_NODELAY flag by default https://bugs.python.org/issue27456 reopened by yselivanov #31887: docs for email.generator are missing a comment on special mult https://bugs.python.org/issue31887 opened by dhke #31889: difflib SequenceMatcher ratio() still have unpredictable behav https://bugs.python.org/issue31889 opened by Siltaar #31892: ssl.get_server_certificate should allow specifying certificate https://bugs.python.org/issue31892 opened by hanno #31894: test_timestamp_naive failed on NetBSD https://bugs.python.org/issue31894 opened by serhiy.storchaka #31895: Native hijri calendar support https://bugs.python.org/issue31895 opened by haneef95 #31896: In function define class inherit ctypes.structure, and using c https://bugs.python.org/issue31896 opened by Yang Big #31897: Unexpected exceptions in plistlib.loads https://bugs.python.org/issue31897 opened by Ned Williamson #31898: Add a `recommended-packages.txt` file https://bugs.python.org/issue31898 opened by ncoghlan #31899: Ensure backwards compatibility with recommended packages https://bugs.python.org/issue31899 opened by ncoghlan #31900: localeconv() should decode numeric fields from LC_NUMERIC enco https://bugs.python.org/issue31900 opened by cstratak #31901: atexit callbacks only called for current subinterpreter https://bugs.python.org/issue31901 opened by Dormouse759 #31902: Fix col_offset for ast nodes: AsyncFor, AsyncFunctionDef, Asyn https://bugs.python.org/issue31902 opened by guoci #31903: `_scproxy` calls SystemConfiguration functions in a way that c https://bugs.python.org/issue31903 opened by Maxime Belanger #31904: Python should support VxWorks RTOS https://bugs.python.org/issue31904 opened by Brian Kuhl #31907: Clarify error message when attempting to call function via str https://bugs.python.org/issue31907 opened by mickey695 #31908: trace module cli does not write cover files https://bugs.python.org/issue31908 opened by Michael Selik #31910: test_socket.test_create_connection() failed with EADDRNOTAVAIL https://bugs.python.org/issue31910 opened by haypo #31911: Use malloc_usable_size() in pymalloc for realloc https://bugs.python.org/issue31911 opened by haypo #31912: PyMem_Malloc() should guarantee alignof(max_align_t) https://bugs.python.org/issue31912 opened by skrah #31913: forkserver could warn if several threads are running https://bugs.python.org/issue31913 opened by pitrou #31914: Document Pool.(star)map return type https://bugs.python.org/issue31914 opened by dilyan.palauzov #31916: ensurepip not honoring value of $(DESTDIR) - pip not installed https://bugs.python.org/issue31916 opened by multimiler #31920: pygettext ignores directories as inputfile argument https://bugs.python.org/issue31920 opened by Oleg Krasnikov #31921: Bring together logic for entering/leaving a frame in frameobje https://bugs.python.org/issue31921 opened by pdox #31922: Can't receive replies from multicast UDP with 
asyncio https://bugs.python.org/issue31922 opened by vxgmichel #31923: Misspelled "loading" in Doc/includes/sqlite3/load_extension.py https://bugs.python.org/issue31923 opened by davywtf #31924: Fix test_curses on NetBSD 8 https://bugs.python.org/issue31924 opened by serhiy.storchaka #31925: test_socket creates too many locks https://bugs.python.org/issue31925 opened by serhiy.storchaka #31927: Fix compiling the socket module on NetBSD 8 and other issues https://bugs.python.org/issue31927 opened by serhiy.storchaka #31930: IDLE: Pressing "Home" on Windows places cursor before ">>>" https://bugs.python.org/issue31930 opened by terry.reedy #31931: test_concurrent_futures: ProcessPoolSpawnExecutorTest.test_shu https://bugs.python.org/issue31931 opened by haypo #31932: setup.py cannot find vcversall.bat on MSWin 8.1 if installed i https://bugs.python.org/issue31932 opened by laranzu #31933: some Blake2 parameters are encoded backwards on big-endian pla https://bugs.python.org/issue31933 opened by oconnor663 #31934: Failure to build out of source from a not clean source https://bugs.python.org/issue31934 opened by xdegaye #31935: subprocess.run() timeout not working with grandchildren and st https://bugs.python.org/issue31935 opened by Martin Ritter Most recent 15 issues with no replies (15) ========================================== #31935: subprocess.run() timeout not working with grandchildren and st https://bugs.python.org/issue31935 #31934: Failure to build out of source from a not clean source https://bugs.python.org/issue31934 #31932: setup.py cannot find vcversall.bat on MSWin 8.1 if installed i https://bugs.python.org/issue31932 #31927: Fix compiling the socket module on NetBSD 8 and other issues https://bugs.python.org/issue31927 #31925: test_socket creates too many locks https://bugs.python.org/issue31925 #31924: Fix test_curses on NetBSD 8 https://bugs.python.org/issue31924 #31923: Misspelled "loading" in Doc/includes/sqlite3/load_extension.py https://bugs.python.org/issue31923 #31922: Can't receive replies from multicast UDP with asyncio https://bugs.python.org/issue31922 #31911: Use malloc_usable_size() in pymalloc for realloc https://bugs.python.org/issue31911 #31904: Python should support VxWorks RTOS https://bugs.python.org/issue31904 #31903: `_scproxy` calls SystemConfiguration functions in a way that c https://bugs.python.org/issue31903 #31902: Fix col_offset for ast nodes: AsyncFor, AsyncFunctionDef, Asyn https://bugs.python.org/issue31902 #31899: Ensure backwards compatibility with recommended packages https://bugs.python.org/issue31899 #31896: In function define class inherit ctypes.structure, and using c https://bugs.python.org/issue31896 #31889: difflib SequenceMatcher ratio() still have unpredictable behav https://bugs.python.org/issue31889 Most recent 15 issues waiting for review (15) ============================================= #31934: Failure to build out of source from a not clean source https://bugs.python.org/issue31934 #31933: some Blake2 parameters are encoded backwards on big-endian pla https://bugs.python.org/issue31933 #31927: Fix compiling the socket module on NetBSD 8 and other issues https://bugs.python.org/issue31927 #31924: Fix test_curses on NetBSD 8 https://bugs.python.org/issue31924 #31923: Misspelled "loading" in Doc/includes/sqlite3/load_extension.py https://bugs.python.org/issue31923 #31921: Bring together logic for entering/leaving a frame in frameobje https://bugs.python.org/issue31921 #31920: pygettext ignores directories as inputfile argument 
https://bugs.python.org/issue31920 #31910: test_socket.test_create_connection() failed with EADDRNOTAVAIL https://bugs.python.org/issue31910 #31908: trace module cli does not write cover files https://bugs.python.org/issue31908 #31904: Python should support VxWorks RTOS https://bugs.python.org/issue31904 #31903: `_scproxy` calls SystemConfiguration functions in a way that c https://bugs.python.org/issue31903 #31902: Fix col_offset for ast nodes: AsyncFor, AsyncFunctionDef, Asyn https://bugs.python.org/issue31902 #31900: localeconv() should decode numeric fields from LC_NUMERIC enco https://bugs.python.org/issue31900 #31897: Unexpected exceptions in plistlib.loads https://bugs.python.org/issue31897 #31887: docs for email.generator are missing a comment on special mult https://bugs.python.org/issue31887 Top 10 most discussed issues (10) ================================= #18835: Add PyMem_AlignedAlloc() https://bugs.python.org/issue18835 19 msgs #31630: math.tan has poor accuracy near pi/2 on OpenBSD and NetBSD https://bugs.python.org/issue31630 13 msgs #31897: Unexpected exceptions in plistlib.loads https://bugs.python.org/issue31897 9 msgs #31894: test_timestamp_naive failed on NetBSD https://bugs.python.org/issue31894 8 msgs #31626: Writing in freed memory in _PyMem_DebugRawRealloc() after shri https://bugs.python.org/issue31626 7 msgs #31895: Native hijri calendar support https://bugs.python.org/issue31895 7 msgs #31900: localeconv() should decode numeric fields from LC_NUMERIC enco https://bugs.python.org/issue31900 6 msgs #20182: Derby #13: Convert 50 sites to Argument Clinic across 5 files https://bugs.python.org/issue20182 5 msgs #31901: atexit callbacks only called for current subinterpreter https://bugs.python.org/issue31901 5 msgs #31908: trace module cli does not write cover files https://bugs.python.org/issue31908 5 msgs Issues closed (54) ================== #8070: Infinite loop in PyRun_InteractiveLoopFlags() if PyRun_Interac https://bugs.python.org/issue8070 closed by xdegaye #8548: Building on CygWin 1.7: PATH_MAX redefined https://bugs.python.org/issue8548 closed by berker.peksag #9667: NetBSD curses KEY_* constants https://bugs.python.org/issue9667 closed by serhiy.storchaka #9674: make install DESTDIR=/home/blah fails when the prefix specifie https://bugs.python.org/issue9674 closed by xdegaye #11383: compilation seg faults on insanely large expressions https://bugs.python.org/issue11383 closed by serhiy.storchaka #15037: curses.unget_wch and test_curses fail when linked with ncurses https://bugs.python.org/issue15037 closed by serhiy.storchaka #16994: collections.Counter.least_common https://bugs.python.org/issue16994 closed by serhiy.storchaka #20047: bytearray partition bug https://bugs.python.org/issue20047 closed by serhiy.storchaka #20064: PyObject_Malloc is not documented https://bugs.python.org/issue20064 closed by haypo #23699: Add a macro to ease writing rich comparisons https://bugs.python.org/issue23699 closed by ncoghlan #24291: Many servers (wsgiref, http.server, etc) can truncate large ou https://bugs.python.org/issue24291 closed by martin.panter #25293: Hooking Thread/Process instantiation in concurrent.futures. 
https://bugs.python.org/issue25293 closed by pitrou #25720: Fix curses module compilation with ncurses6 https://bugs.python.org/issue25720 closed by serhiy.storchaka #26618: _overlapped extension module of asyncio uses deprecated WSAStr https://bugs.python.org/issue26618 closed by berker.peksag #27666: "stack smashing detected" in PyCursesWindow_Box https://bugs.python.org/issue27666 closed by serhiy.storchaka #30333: test_multiprocessing_forkserver: poll() failed on AMD64 FreeBS https://bugs.python.org/issue30333 closed by haypo #30442: Skip test_xml_etree under coverage https://bugs.python.org/issue30442 closed by berker.peksag #30806: netrc.__repr__() is broken for writing to file https://bugs.python.org/issue30806 closed by inada.naoki #30824: Add mimetype for extension .json https://bugs.python.org/issue30824 closed by berker.peksag #31065: Documentation for Popen.poll is unclear https://bugs.python.org/issue31065 closed by berker.peksag #31095: Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC https://bugs.python.org/issue31095 closed by berker.peksag #31245: Asyncio UNIX socket and SOCK_DGRAM https://bugs.python.org/issue31245 closed by yselivanov #31273: [2.7] unittest: Unicode support in TestCase.skip https://bugs.python.org/issue31273 closed by serhiy.storchaka #31298: Error when calling numpy.astype https://bugs.python.org/issue31298 closed by berker.peksag #31304: Update doc for starmap_async error_back kwarg https://bugs.python.org/issue31304 closed by Mariatta #31307: ConfigParser.read silently fails if filenames argument is a by https://bugs.python.org/issue31307 closed by berker.peksag #31308: forkserver process isn't re-launched if it died https://bugs.python.org/issue31308 closed by pitrou #31310: semaphore tracker isn't protected against crashes https://bugs.python.org/issue31310 closed by pitrou #31390: pydoc.Helper.keywords missing async and await https://bugs.python.org/issue31390 closed by berker.peksag #31629: Running test_curses on FreeBSD changes signal handlers https://bugs.python.org/issue31629 closed by haypo #31700: one-argument version for Generator.typing https://bugs.python.org/issue31700 closed by levkivskyi #31784: Implementation of the PEP 564: Add time.time_ns() https://bugs.python.org/issue31784 closed by haypo #31836: test_code_module fails after test_idle https://bugs.python.org/issue31836 closed by terry.reedy #31852: Crashes with lines of the form "async \" https://bugs.python.org/issue31852 closed by haypo #31858: IDLE: cleanup use of sys.ps1 and never set it. 
https://bugs.python.org/issue31858 closed by terry.reedy

#31860: IDLE: Make font sample editable
https://bugs.python.org/issue31860 closed by terry.reedy

#31872: SSL BIO is broken for internationalized domains
https://bugs.python.org/issue31872 closed by asvetlov

#31881: subprocess.returncode not set depending on arguments to subpro
https://bugs.python.org/issue31881 closed by nthompson

#31883: Cygwin: heap corruption bug in wcsxfrm
https://bugs.python.org/issue31883 closed by serhiy.storchaka

#31888: Creating a UUID with a list throws bad exception
https://bugs.python.org/issue31888 closed by serhiy.storchaka

#31890: Please define the flag METH_STACKLESS for Stackless Python
https://bugs.python.org/issue31890 closed by haypo

#31891: Make curses compiling on NetBSD 7.1 and tests passing
https://bugs.python.org/issue31891 closed by serhiy.storchaka

#31893: Issues with kqueue
https://bugs.python.org/issue31893 closed by serhiy.storchaka

#31905: IPv4Network.__contains__ raises exception on None
https://bugs.python.org/issue31905 closed by serhiy.storchaka

#31906: String literals next to each other does not cause error
https://bugs.python.org/issue31906 closed by Mariatta

#31909: Missing definition of HAVE_SYSCALL_GETRANDOM
https://bugs.python.org/issue31909 closed by Ilya.Kulakov

#31915: (list).insert() not working
https://bugs.python.org/issue31915 closed by steven.daprano

#31917: Add time.CLOCK_PROF constant
https://bugs.python.org/issue31917 closed by haypo

#31918: Don't let all python code modify modules
https://bugs.python.org/issue31918 closed by serhiy.storchaka

#31919: Make curses compiling on OpenIndiana and tests passing
https://bugs.python.org/issue31919 closed by serhiy.storchaka

#31926: compile error when converting selectmodule to AC due to missin
https://bugs.python.org/issue31926 closed by haypo

#31928: DOC: Show sys.version_info as a named tuple
https://bugs.python.org/issue31928 closed by csabella

#31929: Raw strings create syntax error when last character is a singl
https://bugs.python.org/issue31929 closed by barry

#1447222: tkinter Dialog fails when more than four buttons are used
https://bugs.python.org/issue1447222 closed by berker.peksag

From brett at python.org  Fri Nov  3 13:46:21 2017
From: brett at python.org (Brett Cannon)
Date: Fri, 03 Nov 2017 17:46:21 +0000
Subject: [Python-Dev] Remove typing from the stdlib (was: Reminder: 12
 weeks to 3.7 feature code cutoff)
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <877ev9kqlv.fsf@metapensiero.it> <87o9ojafam.fsf@metapensiero.it>
 <59328520-d816-2d20-ce22-8e12fc7764f4@gmail.com>
Message-ID: 

On Fri, 3 Nov 2017 at 08:09 Paul Moore wrote:

> On 3 November 2017 at 14:50, Serhiy Storchaka wrote:
> > 03.11.17 16:36, Guido van Rossum wrote:
> >> Maybe we should remove typing from the stdlib?
> >> https://github.com/python/typing/issues/495
> >
> > I didn't use typing, but AFAIK the most used feature from typing is
> > NamedTuple. If we move NamedTuple and maybe other convenient classes
> > not directly related to typing into collections or types modules, I
> > think removing typing from the stdlib will stress people less.
>
> (Checks docs) Hmm, I'd missed that this was even in there.
>
> Regardless of what happens with the typing module, I think this should
> be moved. I expect there are many people who never looked at the docs
> of the typing module because they don't use type annotations. I know
> the class uses type annotations to define its attributes, but I don't
> see that as an issue.
Data classes do the same, and they won't be in
> the typing module...
>

There is another option and that's splitting up the typing module into
core, abstract things, and then the stuff that is about concrete types.
For instance, ClassVar, Union, and cast() are all abstract concepts
that are basic to type hints. But things like Mapping are more concrete
and not a fundamental concept to type hints. You could argue that the
fundamentals that won't change could stay in the stdlib while the
concrete type classes could get pulled out so they can be managed more
easily/quickly.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net  Fri Nov  3 13:47:38 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 3 Nov 2017 18:47:38 +0100
Subject: [Python-Dev] Remove typing from the stdlib
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
Message-ID: <20171103184738.5ce7b68c@fsol>

On Fri, 3 Nov 2017 12:46:33 -0400
"Eric V. Smith" wrote:
> On 11/3/2017 12:15 PM, Victor Stinner wrote:
> > Hi,
> >
> > 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
> >> Maybe we should remove typing from the stdlib?
> >> https://github.com/python/typing/issues/495
>
> > The typing module is not used yet in the stdlib, so there is no
> > technical reason to keep typing part of the stdlib. IMHO it's
> > perfectly fine to keep typing and annotations out of the stdlib, since
> > the venv & pip tooling is now rock solid ;-)
>
> I'm planning on using it for PEP 557:
> https://www.python.org/dev/peps/pep-0557/#class-variables
>
> The way the code currently checks for this should still work if typing
> is not in the stdlib, although of course it's assuming that the name
> "typing" really is the "official" typing library.

I don't think other modules should start relying on the typing module at
runtime.
The dataclasses module can define its own "ClassVar" thing and then I
suspect it's easy to map it to typing._ClassVar. It seems we should be
careful not to blur the distinction between declarations that have an
effect on actual code, and typing declarations which only affect
type-checking tools.

Regards

Antoine.

From brett at python.org  Fri Nov  3 13:48:23 2017
From: brett at python.org (Brett Cannon)
Date: Fri, 03 Nov 2017 17:48:23 +0000
Subject: [Python-Dev] [edk2] Official port of Python on EDK2
In-Reply-To: 
References: <80AC2BAA3152784F98F581129E5CF5AFA4635665@ORSMSX114.amr.corp.intel.com>
Message-ID: 

Since the initial email got cross-posted, would it be possible to drop
python-dev from any discussion that doesn't directly involve Python
itself (e.g. we don't need to be involved in a discussion about whether
you all have threading on UEFI)?

On Fri, 3 Nov 2017 at 07:57 Michael Zimmermann wrote:

> > FYI, this library adds thread support to UEFI:
> >
> > https://github.com/Openwide-Ingenierie/GreenThreads-UEFI
>
> IMO this library has some crucial problems like changing the TPL
> during context switching.
> For my project "EFIDroid" I've invested many months analyzing, testing
> and implementing my own threading implementation based on
> LK(LittleKernel, an MIT-licensed project) threads and get/set-context.
>
> The result is a pretty stable implementation which can even be used in
> UEFI drivers:
>
> https://github.com/efidroid/uefi_edk2packages_EFIDroidLKLPkg/tree/master/UEFIThreads
> I'm currently using this lib for my LKL(LinuxKernelLibrary) port to be
> able to use Linux touchscreen drivers in UEFI - so you could say it
> has been well tested.
>
> The only "problem" is that it only supports ARM right now and that the
> get/set context implementation was copied (and simplified) from glibc
> which means that this part is GPL code.
>
> Thanks
> Michael Zimmermann
>
> On Thu, Nov 2, 2017 at 8:37 PM, Blibbet wrote:
> > On 11/02/2017 09:41 AM, Jayaprakash, N wrote:
> >> Would you consider adding thread support in this port of Python for
> > EDK2 shell?
> >
> > FYI, this library adds thread support to UEFI:
> >
> > https://github.com/Openwide-Ingenierie/GreenThreads-UEFI
> >
> > Note that the library is GPLv2, ...but the author (a 1-person project)
> > could be asked to relicense to BSD to fit into Tianocore.
> >
> > Note that the library is currently Intel x64-centric, and contains a bit of
> > assembly. Will need some ARM/RISC-V/x86 contributions.
> >
> > HTH,
> > Lee Fisher
> > _______________________________________________
> > edk2-devel mailing list
> > edk2-devel at lists.01.org
> > https://lists.01.org/mailman/listinfo/edk2-devel
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Fri Nov  3 14:04:34 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 3 Nov 2017 18:04:34 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <20171103184738.5ce7b68c@fsol>
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
 <20171103184738.5ce7b68c@fsol>
Message-ID: 

On 3 November 2017 at 17:47, Antoine Pitrou wrote:
> On Fri, 3 Nov 2017 12:46:33 -0400
> "Eric V. Smith" wrote:
>> On 11/3/2017 12:15 PM, Victor Stinner wrote:
>> > Hi,
>> >
>> > 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
>> >> Maybe we should remove typing from the stdlib?
>> >> https://github.com/python/typing/issues/495
>>
>> > The typing module is not used yet in the stdlib, so there is no
>> > technical reason to keep typing part of the stdlib. IMHO it's
>> > perfectly fine to keep typing and annotations out of the stdlib, since
>> > the venv & pip tooling is now rock solid ;-)
>>
>> I'm planning on using it for PEP 557:
>> https://www.python.org/dev/peps/pep-0557/#class-variables
>>
>> The way the code currently checks for this should still work if typing
>> is not in the stdlib, although of course it's assuming that the name
>> "typing" really is the "official" typing library.
>
> I don't think other modules should start relying on the typing module at
> runtime.
> The dataclasses module can define its own "ClassVar" thing and then I
> suspect it's easy to map it to typing._ClassVar. It seems we should be
> careful not to blur the distinction between declarations that have an
> effect on actual code, and typing declarations which only affect
> type-checking tools.

I'm looking forward to the dataclasses module, and I'm perfectly OK
with the way that it uses type annotations to declare attributes. I
also don't have a problem with it relying on the typing module - but
*only* if the typing module is in the stdlib. I don't think it's good
if a standard feature needs an external library for some of its
functionality.

So I guess the point is, if we're considering moving typing out of the
stdlib, then what's the impact on PEP 557?
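(For concreteness, a rough sketch of the decoupling Antoine suggests
above - the names here are hypothetical, and this is not the actual
PEP 557 implementation:)

    # Hypothetical sketch: dataclasses could ship its own ClassVar marker
    # and only *recognize* typing's version when typing is already loaded,
    # so typing never becomes a runtime dependency.
    import sys

    class _ClassVarMarker:
        def __getitem__(self, item):   # accept ClassVar[int]-style subscripts
            return self

    ClassVar = _ClassVarMarker()

    def _is_classvar(annotation):
        if isinstance(annotation, _ClassVarMarker):
            return True
        typing = sys.modules.get('typing')   # look it up, but never import it
        # Same typing-internal check as in Eric's snippet earlier in the thread.
        return typing is not None and type(annotation) is typing._ClassVar

A field whose annotation passes _is_classvar() would then be treated as
class-level and skipped, whichever spelling the user chose.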
Personally, I don't use type annotations myself yet, but I've used
code that does and I'm considering looking into them - for a variety
of reasons, documentation, IDE support, and the ability to type check
my code via mypy. If typing moves out of the stdlib, I'd be much less
inclined to do so - adding a runtime dependency is a non-trivial cost
in terms of admin for deployment, handling within my (peculiar, if you
want to debate workflow) development workflow, etc. Working out how to
add type annotations *without* them being a runtime dependency (just
at test-time) is too much work. So I am concerned that if we move
typing out of the stdlib, it'll reduce adoption rates.

Paul

From jelle.zijlstra at gmail.com  Fri Nov  3 14:04:44 2017
From: jelle.zijlstra at gmail.com (Jelle Zijlstra)
Date: Fri, 3 Nov 2017 11:04:44 -0700
Subject: [Python-Dev] Remove typing from the stdlib (was: Reminder: 12
 weeks to 3.7 feature code cutoff)
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <877ev9kqlv.fsf@metapensiero.it> <87o9ojafam.fsf@metapensiero.it>
 <59328520-d816-2d20-ce22-8e12fc7764f4@gmail.com>
Message-ID: 

2017-11-03 10:46 GMT-07:00 Brett Cannon :

>
>
> On Fri, 3 Nov 2017 at 08:09 Paul Moore wrote:
>
>> On 3 November 2017 at 14:50, Serhiy Storchaka
>> wrote:
>> > 03.11.17 16:36, Guido van Rossum wrote:
>> >> Maybe we should remove typing from the stdlib?
>> >> https://github.com/python/typing/issues/495
>> >
>> > I didn't use typing, but AFAIK the most used feature from typing is
>> > NamedTuple. If we move NamedTuple and maybe other convenient classes
>> > not directly related to typing into collections or types modules, I
>> > think removing typing from the stdlib will stress people less.
>>
>> (Checks docs) Hmm, I'd missed that this was even in there.
>>
>> Regardless of what happens with the typing module, I think this should
>> be moved. I expect there are many people who never looked at the docs
>> of the typing module because they don't use type annotations. I know
>> the class uses type annotations to define its attributes, but I don't
>> see that as an issue. Data classes do the same, and they won't be in
>> the typing module...
>>
>
> There is another option and that's splitting up the typing module into
> core, abstract things, and then the stuff that is about concrete types. For
> instance, ClassVar, Union, and cast() are all abstract concepts that are
> basic to type hints. But things like Mapping are more concrete and not a
> fundamental concept to type hints. You could argue that the fundamentals
> that won't change could stay in the stdlib while the concrete type classes
> could get pulled out so they can be managed more easily/quickly.
>

I don't think the fundamentals are less likely to change - if anything, it
may be the opposite. Many of the potential innovations we'd want to add to
typing (Protocols, literal types, intersection types) are fundamental to
the type system, and things like Mapping actually don't change very often.

>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> jelle.zijlstra%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steve.dower at python.org Fri Nov 3 13:00:36 2017
From: steve.dower at python.org (Steve Dower)
Date: Fri, 3 Nov 2017 10:00:36 -0700
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: References:
Message-ID: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>

On 03Nov2017 0915, Victor Stinner wrote:
> Hi,
>
> 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
>> Maybe we should remove typing from the stdlib?
>> https://github.com/python/typing/issues/495
>
> I'm strongly in favor on such move.

I'm torn.

If not having typing installed means you can't use "Any" or "Optional" in an annotation, that basically kills the whole thing. Some primitives need to be there.

If annotations become glorified strings (which IMHO they should) and typing gains a function to parse those into type hints (which IMHO it should), then I'm in favour of splitting typing out. (Personally, if making every type hint a literal 'string' meant that I could avoid importing typing then I'd do it.)

However, if typing is split out then its API (specifically, the contents of typing.__all__ and their semantic meaning (e.g. "Iterable" means something with an "__iter__" method)) needs to stay in PEPs.

Static analysers using type hints encode much more information about these types than can be inferred statically from typing.py, which means the definitions should not change faster than Python x.*y*. Ideally, they would not change at all once released.

For example, my static analyser has an existing object representing iterables, since we've been inferring iterables for years. When I parse a type annotation and see "typing.Iterable", I'm going to just substitute my own implementation - the definition in typing.py isn't going to be used (or be useful). And because it has to map across languages, it has to be a hard-coded mapping that can't rely on typing.py at all.

Since the contents of typing.py will likely be completely ignored by my analyser, I can't treat "whatever version of typing is installed" as ground truth. It needs to move slower or be purely additive. Being in the standard library is a nice easy way to ensure this - moving it out is a risk.

That said, because I don't care about the contents of the file, all the heavy execution stuff can totally be moved out. If typing in the stdlib became very trivial definitions or just a set of names to support searching/editors/forward-refs, and typing out of the stdlib had the ability to convert annotations into an object model that provides rich runtime introspection, I'd also be fine. At least then the interface is highly stable, even if the implementation (for those who use it) changes.

Cheers,
Steve

From antoine at python.org Fri Nov 3 14:08:16 2017
From: antoine at python.org (Antoine Pitrou)
Date: Fri, 3 Nov 2017 19:08:16 +0100
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> <20171103184738.5ce7b68c@fsol>
Message-ID: <705e19d1-10af-12da-c2eb-14f67359337d@python.org>

Also, as for dataclasses specifically, since it may become as pervasive as namedtuples, we probably want it to be as light-weight as possible. Which will be harder to achieve if its *API* depends on the typing machinery.

Regards

Antoine.

Le 03/11/2017 à 19:04, Paul Moore a écrit :
> On 3 November 2017 at 17:47, Antoine Pitrou wrote:
>> On Fri, 3 Nov 2017 12:46:33 -0400
>> "Eric V.
Smith" wrote: >>> On 11/3/2017 12:15 PM, Victor Stinner wrote: >>>> Hi, >>>> >>>> 2017-11-03 15:36 GMT+01:00 Guido van Rossum : >>>>> Maybe we should remove typing from the stdlib? >>>>> https://github.com/python/typing/issues/495 >>> >>>> The typing module is not used yet in the stdlib, so there is no >>>> technically reason to keep typing part of the stdlib. IMHO it's >>>> perfectly fine to keep typing and annotations out of the stdlib, since >>>> the venv & pip tooling is now rock solid ;-) >>> >>> I'm planning on using it for PEP 557: >>> https://www.python.org/dev/peps/pep-0557/#class-variables >>> >>> The way the code currently checks for this should still work if typing >>> is not in the stdlib, although of course it's assuming that the name >>> "typing" really is the "official" typing library. >> >> I don't think other modules should start relying on the typing module at >> runtime. >> The dataclasses module can define its own "ClassVar" thing and then I >> suspect it's easy to map it to typing._ClassVar. It seems we should be >> careful not to blur the distinction between declarations that have an >> effect on actual code, and typing declarations which only affect >> type-checking tools. > > I'm looking forward to the dataclasses module, and I'm perfectly OK > with the way that it uses type annotations to declare attributes. I > also don't have a problem with it relying on the typing module - but > *only* if the typing module is in the stdlib. I don't think it's good > if a standard feature needs an external library for some of its > functionality. > > So I guess the point is, if we're considering moving typing out of the > stdlib, then what's the impact on PEP 557? > > Personally, I don't use type annotations myself yet, but I've used > code that does and I'm considering looking into them - for a variety > of reasons, documentation, IDE support, and the ability to type check > my code via mypy. If typing moves out of the stdlib, I'd be much less > inclined to do so - adding a runtime dependency is a non-trivial cost > in terms of admin for deployment, handling within my (peculiar, if you > want to debate workflow) development workflow, etc. Working out how to > add type annotations *without* them being a runtime dependency (just > at test-time) is too much work. So I am concerned that if we move > typing out of the stdlib, it'll reduce adoption rates. > > Paul > From guido at python.org Fri Nov 3 14:24:28 2017 From: guido at python.org (Guido van Rossum) Date: Fri, 3 Nov 2017 11:24:28 -0700 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: A side note (I'm reading all responses but staying out of the discussion): No static checker should depend on the *contents* of typing.py, since it's just a bunch of runtime gymnastics to allow types to be evaluated at runtime without errors, with a secondary goal of making them introspectable (some folks don't even agree with the latter, e.g. Mark Shannon). Static analyzers should be able to make strong *assumptions* about what things defined there mean -- in mypy such assumptions are all over the place, based on the full name of things -- it never reads typing.py. (It reads typing.pyi from typeshed, but what's there is ignored in many cases too.) 
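To make that concrete, the dispatch inside a checker looks more like this toy table (invented, and nothing like mypy's actual data structures) than like anything that consults the runtime module:

    # Meaning is keyed off the *full name* of the construct, not off
    # whatever typing.py happens to define at runtime.
    SPECIAL_FORMS = {
        "typing.Optional": "the type or None",
        "typing.Union": "any one of the given types",
        "typing.Iterable": "anything with an __iter__ method",
    }

    def describe(fullname):
        # A real checker builds an internal type object here; returning a
        # description is enough to show the name-based dispatch.
        return SPECIAL_FORMS.get(fullname, "an ordinary nominal type")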
On Fri, Nov 3, 2017 at 10:00 AM, Steve Dower wrote: > On 03Nov2017 0915, Victor Stinner wrote: > >> Hi, >> >> 2017-11-03 15:36 GMT+01:00 Guido van Rossum : >> >>> Maybe we should remove typing from the stdlib? >>> https://github.com/python/typing/issues/495 >>> >> >> I'm strongly in favor on such move. >> > > I'm torn. > > If not having typing installed means you can't use "Any" or "Optional" in > an annotation, that basically kills the whole thing. Some primitives need > to be there. > > If annotations become glorified strings (which IMHO they should) and > typing gains a function to parse those into type hints (which IMHO it > should), then I'm in favour of splitting typing out. (Personally, if making > every type hint a literal 'string' meant that I could avoid importing > typing then I'd do it.) > > However, if typing is split out then its API (specifically, the contents > of typing.__all__ and their semantic meaning (e.g. "Iterable" means > something with an "__iter__" method)) needs to stay in PEPs. > > Static analysers using type hints encode much more information about these > types than can be inferred statically from typing.py, which means the > definitions should not change faster than Python x.*y*. Ideally, they would > not change at all once released. > > For example, my static analyser has an existing object representing > iterables, since we've been inferring iterables for years. When I parse a > type annotation and see "typing.Iterable", I'm going to just substitute my > own implementation - the definition in typing.py isn't going to be used (or > be useful). And because it has to map across languages, it has to be a > hard-coded mapping that can't rely on typing.py at all. > > Since the contents of typing.py will likely be completely ignored by my > analyser, which means that I can't treat "whatever version of typing is > installed" as ground truth. It needs to move slower or be purely additive. > Being in the standard library is a nice easy way to ensure this - moving it > out is a risk. > > That said, because I don't care about the contents of the file, all the > heavy execution stuff can totally be moved out. If typing in the stdlib > became very trivial definitions or just a set of names to support > searching/editors/forward-refs, and typing out of the stdlib had the > ability to convert annotations into an object model that provides rich > runtime introspection, I'd also be fine. At least then the interface is > highly stable, even if the implementation (for those who use it) changes. > > Cheers, > Steve > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From antoine at python.org Fri Nov 3 14:36:05 2017 From: antoine at python.org (Antoine Pitrou) Date: Fri, 3 Nov 2017 19:36:05 +0100 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> <20171103184738.5ce7b68c@fsol> Message-ID: <83232149-2d3f-603d-a183-0dea2bf44709@python.org> Le 03/11/2017 ? 19:34, St?fane Fermigier a ?crit?: > > On Fri, Nov 3, 2017 at 6:47 PM, Antoine Pitrou > wrote: > > I don't think other modules should start relying on the typing module at > runtime. 
> > They already do, as I've noted in my message about dependency injection,
> and I find this quite elegant.

Third-party libraries do what they want, but we are talking about the stdlib here.

Regards

Antoine.

From steve.dower at python.org Fri Nov 3 14:37:40 2017
From: steve.dower at python.org (Steve Dower)
Date: Fri, 3 Nov 2017 11:37:40 -0700
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID: <7fcf22d0-93ea-98a1-7c10-d90429d94abf@python.org>

On 03Nov2017 1124, Guido van Rossum wrote:
> A side note (I'm reading all responses but staying out of the discussion):
>
> No static checker should depend on the *contents* of typing.py, since
> it's just a bunch of runtime gymnastics to allow types to be evaluated
> at runtime without errors, with a secondary goal of making them
> introspectable (some folks don't even agree with the latter, e.g. Mark
> Shannon).
>
> Static analyzers should be able to make strong *assumptions* about what
> things defined there mean -- in mypy such assumptions are all over the
> place, based on the full name of things -- it never reads typing.py. (It
> reads typing.pyi from typeshed, but what's there is ignored in many
> cases too.)

Thank you. Very glad to hear I understood it correctly.

Cheers,
Steve

From sf at fermigier.com Fri Nov 3 14:34:03 2017
From: sf at fermigier.com (Stéfane Fermigier)
Date: Fri, 3 Nov 2017 19:34:03 +0100
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <20171103184738.5ce7b68c@fsol> References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> <20171103184738.5ce7b68c@fsol>
Message-ID:

On Fri, Nov 3, 2017 at 6:47 PM, Antoine Pitrou wrote:

> On Fri, 3 Nov 2017 12:46:33 -0400
> "Eric V. Smith" wrote:
> > On 11/3/2017 12:15 PM, Victor Stinner wrote:
> > > Hi,
> > >
> > > 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
> > >> Maybe we should remove typing from the stdlib?
> > >> https://github.com/python/typing/issues/495
> > >
> > > The typing module is not used yet in the stdlib, so there is no
> > > technically reason to keep typing part of the stdlib. IMHO it's
> > > perfectly fine to keep typing and annotations out of the stdlib, since
> > > the venv & pip tooling is now rock solid ;-)
> >
> > I'm planning on using it for PEP 557:
> > https://www.python.org/dev/peps/pep-0557/#class-variables
> >
> > The way the code currently checks for this should still work if typing
> > is not in the stdlib, although of course it's assuming that the name
> > "typing" really is the "official" typing library.
>
> I don't think other modules should start relying on the typing module at
> runtime.
>

They already do, as I've noted in my message about dependency injection, and I find this quite elegant.

Previously, similar projects used decorators, or less elegant constructs.

S.

-- 
Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier
Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/
Chairman, Free&OSS Group / Systematic Cluster - http://www.gt-logiciel-libre.org/
Co-Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/
Founder & Organiser, PyData Paris - http://pydata.fr/
---
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." - R. Buckminster Fuller
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org Fri Nov 3 16:27:08 2017
From: barry at python.org (Barry Warsaw)
Date: Fri, 3 Nov 2017 13:27:08 -0700
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID:

On Nov 3, 2017, at 10:00, Steve Dower wrote:
>
> On 03Nov2017 0915, Victor Stinner wrote:
>> Hi,
>> 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
>>> Maybe we should remove typing from the stdlib?
>>> https://github.com/python/typing/issues/495
>> I'm strongly in favor on such move.
>
> I'm torn.

Me too. We're seeing much greater adoption of type annotations, and it's becoming one of the killer features for adopting Python 3. I'd be hesitant to accept anything that slows that adoption down. While it's been technically provisional, a lot of code is beginning to depend on it and it would be a shame to break that code as we also start to adopt Python 3.7. But I can appreciate the need to iterate on its API faster.

I don't know if a middle ground is feasible. What core functionality and stable-enough APIs can be kept in stdlib typing, and can we provide an extension or override mechanism if you want the latest and greatest?

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From p.f.moore at gmail.com Fri Nov 3 16:34:14 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 3 Nov 2017 20:34:14 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID:

On 3 November 2017 at 17:00, Steve Dower wrote:
>
> I'm torn.
>
> If not having typing installed means you can't use "Any" or "Optional" in an
> annotation, that basically kills the whole thing. Some primitives need to be
> there.

Thinking some more about this, I think that it's now established that the annotation syntax is for types - any debate over other uses for them is now past. As a result, though, I think it's important that the language (and/or the standard library) should include support for *expressing* those types. The typing module is what allows users to express types like "list of integers", or "optional string", or "iterable", and if we move typing out of the stdlib, we make it impossible for people who want to use the language feature to do so from within the core language.

Consider someone who's downloaded Python and PyCharm (or Visual Studio). They want to get the benefit of the IDE code completion facilities, so they declare their argument as List[int], following the information they've found on how to declare lists of integers. And now their code won't run, until they install typing from PyPI. And there's no workaround, because you can't express List[int] in the core language/stdlib. That's not a very good start for a newcomer to Python.

I'm fine with the "advanced" bits of typing being removed from the stdlib, but I think we need to include in the stdlib at least enough to express the basic types of the language (including common combinations such as Optional and Union).

Paul

PS Apologies if I've misunderstood any of the technical aspects of typing - I'm happy to be corrected.
As I said in another email, I've not actually used type annotations in my own code yet, although I'm thinking of starting to - precisely because I've been using PyCharm recently and the IDE support when you declare types is quite nice. From barry at python.org Fri Nov 3 16:44:59 2017 From: barry at python.org (Barry Warsaw) Date: Fri, 3 Nov 2017 13:44:59 -0700 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: <0C9535B0-3DF1-4A6C-B5CB-39BCD7CAD3D1@python.org> On Nov 2, 2017, at 23:22, Nick Coghlan wrote: > Another point worth noting is that merely importing the typing module > is expensive: > > $ python -m perf timeit -s "from importlib import reload; import > typing" "reload(typing)" > ..................... > Mean +- std dev: 10.6 ms +- 0.6 ms > > 10 ms is a *big* chunk out of a CLI application's startup time budget. Far and away so, except for the re module. % ./python.exe -X importtime -c "import typing" import time: self [us] | cumulative | imported package import time: 72 | 72 | _codecs import time: 625 | 696 | codecs import time: 354 | 354 | encodings.aliases import time: 713 | 1762 | encodings import time: 198 | 198 | encodings.utf_8 import time: 98 | 98 | _signal import time: 233 | 233 | encodings.latin_1 import time: 353 | 353 | _weakrefset import time: 264 | 617 | abc import time: 402 | 1018 | io import time: 136 | 136 | _stat import time: 197 | 333 | stat import time: 227 | 227 | genericpath import time: 377 | 604 | posixpath import time: 2812 | 2812 | _collections_abc import time: 787 | 4534 | os import time: 315 | 315 | _sitebuiltins import time: 336 | 336 | sitecustomize import time: 114 | 114 | usercustomize import time: 1064 | 6361 | site import time: 160 | 160 | _operator import time: 1412 | 1571 | operator import time: 371 | 371 | keyword import time: 817 | 817 | _heapq import time: 762 | 1579 | heapq import time: 272 | 272 | itertools import time: 635 | 635 | reprlib import time: 99 | 99 | _collections import time: 3580 | 8104 | collections import time: 112 | 112 | _functools import time: 781 | 892 | functools import time: 1774 | 2666 | contextlib import time: 272 | 272 | types import time: 861 | 1132 | enum import time: 76 | 76 | _sre import time: 426 | 426 | sre_constants import time: 446 | 872 | sre_parse import time: 414 | 1361 | sre_compile import time: 79 | 79 | _locale import time: 190 | 190 | copyreg import time: 17200 | 19961 | re import time: 374 | 374 | collections.abc import time: 15124 | 46226 | typing -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From srkunze at mail.de Fri Nov 3 18:32:44 2017 From: srkunze at mail.de (Sven R. Kunze) Date: Fri, 3 Nov 2017 23:32:44 +0100 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: On 03.11.2017 21:34, Paul Moore wrote: > Consider someone who's downloaded Python and PyCharm (or Visual > Studio). They want to get the benefit of the IDE code completion > facilities, so they declare their argument as List[int], following the > information they've found on how to declare lists of integers. The PyCharm I know is capable of detecting such simple types on its own, without type hints. I for one like the idea of a faster evolution of typing.py. 
Cheers,
Sven

PS: pip is pretty standard these days, so I don't think it's much of an issue for guys who really need it installed.

From victor.stinner at gmail.com Fri Nov 3 19:01:31 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 4 Nov 2017 00:01:31 +0100
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID:

2017-11-03 18:00 GMT+01:00 Steve Dower :
> If not having typing installed means you can't use "Any" or "Optional" in an
> annotation, that basically kills the whole thing. Some primitives need to be
> there.

I'm not sure that I understand you. The question is if you would only need <import typing> or <pip install typing>.

If typing is removed from the stdlib, you can still use it in your application. It's "just" another dependency no? Which major (non-trivial) application or Python module has zero external dependencies nowadays?

The only drawback is that we cannot use typing "anymore" in the stdlib itself, since we don't allow external dependencies in the stdlib for practical reasons. But as I wrote, we don't use typing currently in stdlib. I'm perfectly fine with the current status of having annotations for the stdlib in a third party project.

Victor

From jsbueno at python.org.br Fri Nov 3 19:44:55 2017
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Fri, 3 Nov 2017 21:44:55 -0200
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

This just popped up in Brython's issue tracker discussion:

"""
Pierre Quentel

04:57 (16 hours ago)
to brython-dev/br., Subscribed

I think it's better to rename all occurences of async now, although it's strange that :

there is currently no deprecation warning in CPython with code that uses it as a variable name, PEP492 said that "async and await names will be softly deprecated in CPython 3.5 and 3.6"
there is no mention of async and await becoming keywords in What's new in Python 3.7

Maybe the idea was finally given up, but I can't find a reference.

"""

So, what is the status of promoting async and await to full keyword for 3.7?

On 1 November 2017 at 19:47, Ned Deily wrote:
> Happy belated Halloween to those who celebrate it; I hope it wasn't too scary! Also possibly scary: we have just a little over 12 weeks remaining until Python 3.7's feature code cutoff, 2018-01-29. Those 12 weeks include a number of traditional holidays around the world so, if you are planning on writing another PEP for 3.7 or working on getting an existing one approved or getting feature code reviewed, please plan accordingly. If you have something in the pipeline, please either let me know or, when implemented, add the feature to PEP 537, the 3.7 Release Schedule PEP. As you may recall, the release schedule calls for 4 alpha preview releases prior to the feature code cutoff with the first beta release. We have already produced the first two alphas. Reviewing the schedule recently, I realized that I had "front-loaded" the alphas, leaving a bigger gap between the final alphas and the first beta. So I have adjusted the schedule a bit, pushing alpha 3 and 4 out.
The new dates are:
>
> - 3.7.0 alpha 3: 2017-11-27 (was 2017-11-13)
> - 3.7.0 alpha 4: 2018-01-08 (was 2017-12-18)
> - 3.7.0 beta 1: 2018-01-29 (feature freeze - unchanged)
>
> I hope the new dates give you a little bit more time to get your bits finished and get a little bit of exposure prior to the feature freeze.
>
> Considering how quickly and positively it has been adopted, 3.6 is going to be a tough act to follow. But we can do it again. Thank you all for your ongoing efforts!
>
> --Ned
>
> --
> Ned Deily
> nad at python.org -- []
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br

From jelle.zijlstra at gmail.com Fri Nov 3 19:52:47 2017
From: jelle.zijlstra at gmail.com (Jelle Zijlstra)
Date: Fri, 3 Nov 2017 16:52:47 -0700
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

2017-11-03 16:44 GMT-07:00 Joao S. O. Bueno :

> This just popped up in Brython's issue tracker discussion:
>
> """
> Pierre Quentel
>
> 04:57 (16 hours ago)
> to brython-dev/br., Subscribed
>
> I think it's better to rename all occurences of async now, although
> it's strange that :
>
> there is currently no deprecation warning in CPython with code that
> uses it as a variable name, PEP492 said that "async and await names
> will be softly deprecated in CPython 3.5 and 3.6"
> there is no mention of async and await becoming keywords in What's new
> in Python 3.7
>
> Maybe the idea was finally given up, but I can't find a reference.
>
> """
>
> So, what is the status of promoting async and await to full keyword for
> 3.7?
>
This was implemented, and it's in NEWS: https://github.com/python/cpython/pull/1669.

> On 1 November 2017 at 19:47, Ned Deily wrote:
> > Happy belated Halloween to those who celebrate it; I hope it wasn't too scary! Also possibly scary: we have just a little over 12 weeks remaining until Python 3.7's feature code cutoff, 2018-01-29. Those 12 weeks include a number of traditional holidays around the world so, if you are planning on writing another PEP for 3.7 or working on getting an existing one approved or getting feature code reviewed, please plan accordingly. If you have something in the pipeline, please either let me know or, when implemented, add the feature to PEP 537, the 3.7 Release Schedule PEP. As you may recall, the release schedule calls for 4 alpha preview releases prior to the feature code cutoff with the first beta release. We have already produced the first two alphas. Reviewing the schedule recently, I realized that I had "front-loaded" the alphas, leaving a bigger gap between the final alphas and the first beta. So I have adjusted the schedule a bit, pushing alpha 3 and 4 out. The new dates are:
> >
> > - 3.7.0 alpha 3: 2017-11-27 (was 2017-11-13)
> > - 3.7.0 alpha 4: 2018-01-08 (was 2017-12-18)
> > - 3.7.0 beta 1: 2018-01-29 (feature freeze - unchanged)
> >
> > I hope the new dates give you a little bit more time to get your bits finished and get a little bit of exposure prior to the feature freeze.
> >
> > Considering how quickly and positively it has been adopted, 3.6 is going to be a tough act to follow. But we can do it again. Thank you all for your ongoing efforts!
> >
> > --Ned
> >
> > --
> > Ned Deily
> > nad at python.org -- []
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe: https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle.zijlstra%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com Fri Nov 3 19:53:08 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 4 Nov 2017 00:53:08 +0100
Subject: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

2017-11-04 0:44 GMT+01:00 Joao S. O. Bueno :
> This just popped up in Brython's issue tracker discussion:
>
> """
> Pierre Quentel
>
> 04:57 (16 hours ago)
> to brython-dev/br., Subscribed
>
> I think it's better to rename all occurences of async now, although
> it's strange that :
>
> there is currently no deprecation warning in CPython with code that
> uses it as a variable name, PEP492 said that "async and await names
> will be softly deprecated in CPython 3.5 and 3.6"
> there is no mention of async and await becoming keywords in What's new
> in Python 3.7
>
> Maybe the idea was finally given up, but I can't find a reference.

async & await already became concrete keywords in Python 3.7:

$ ./python
Python 3.7.0a2+ (heads/master-dirty:cbe1756e3e, Nov 4 2017, 00:24:07)
>>> async=1
  File "<stdin>", line 1
    async=1
          ^
SyntaxError: invalid syntax

Please request an entry in the "What's New in Python 3.7" at https://bugs.python.org/issue30406

Victor

From luca.sbardella at gmail.com Fri Nov 3 20:09:24 2017
From: luca.sbardella at gmail.com (Luca Sbardella)
Date: Sat, 04 Nov 2017 00:09:24 +0000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <0C9535B0-3DF1-4A6C-B5CB-39BCD7CAD3D1@python.org> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <0C9535B0-3DF1-4A6C-B5CB-39BCD7CAD3D1@python.org>
Message-ID:

Impressive stats! I didn't know this command, thanks!

On Fri, 3 Nov 2017 at 20:47, Barry Warsaw wrote:

> On Nov 2, 2017, at 23:22, Nick Coghlan wrote:
> > Another point worth noting is that merely importing the typing module
> > is expensive:
> >
> > $ python -m perf timeit -s "from importlib import reload; import
> > typing" "reload(typing)"
> > .....................
> > Mean +- std dev: 10.6 ms +- 0.6 ms
> >
> > 10 ms is a *big* chunk out of a CLI application's startup time budget.
>
> Far and away so, except for the re module.
> > % ./python.exe -X importtime -c "import typing" > import time: self [us] | cumulative | imported package > import time: 72 | 72 | _codecs > import time: 625 | 696 | codecs > import time: 354 | 354 | encodings.aliases > import time: 713 | 1762 | encodings > import time: 198 | 198 | encodings.utf_8 > import time: 98 | 98 | _signal > import time: 233 | 233 | encodings.latin_1 > import time: 353 | 353 | _weakrefset > import time: 264 | 617 | abc > import time: 402 | 1018 | io > import time: 136 | 136 | _stat > import time: 197 | 333 | stat > import time: 227 | 227 | genericpath > import time: 377 | 604 | posixpath > import time: 2812 | 2812 | _collections_abc > import time: 787 | 4534 | os > import time: 315 | 315 | _sitebuiltins > import time: 336 | 336 | sitecustomize > import time: 114 | 114 | usercustomize > import time: 1064 | 6361 | site > import time: 160 | 160 | _operator > import time: 1412 | 1571 | operator > import time: 371 | 371 | keyword > import time: 817 | 817 | _heapq > import time: 762 | 1579 | heapq > import time: 272 | 272 | itertools > import time: 635 | 635 | reprlib > import time: 99 | 99 | _collections > import time: 3580 | 8104 | collections > import time: 112 | 112 | _functools > import time: 781 | 892 | functools > import time: 1774 | 2666 | contextlib > import time: 272 | 272 | types > import time: 861 | 1132 | enum > import time: 76 | 76 | _sre > import time: 426 | 426 | sre_constants > import time: 446 | 872 | sre_parse > import time: 414 | 1361 | sre_compile > import time: 79 | 79 | _locale > import time: 190 | 190 | copyreg > import time: 17200 | 19961 | re > import time: 374 | 374 | collections.abc > import time: 15124 | 46226 | typing > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/luca.sbardella%40gmail.com > -- http://lucasbardella.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Fri Nov 3 20:15:47 2017 From: barry at python.org (Barry Warsaw) Date: Fri, 3 Nov 2017 17:15:47 -0700 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <0C9535B0-3DF1-4A6C-B5CB-39BCD7CAD3D1@python.org> Message-ID: <45A67DDE-D020-4CD6-928C-EA08873D66AD@python.org> On Nov 3, 2017, at 17:09, Luca Sbardella wrote: > > Impressive stats! I didn?t know this command, thanks! Neither did I until a day or so ago. I already only want to use Python 3.7. :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From lukasz at langa.pl Fri Nov 3 20:59:02 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Fri, 3 Nov 2017 17:59:02 -0700 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: <8EA65403-C40D-489E-B5AB-C2E0FC32482D@langa.pl> > On 3 Nov, 2017, at 4:01 PM, Victor Stinner wrote: > > The question is if you would only need or pip install typing>. > > If typing is removed from the stdlib, you can still use it in your > application. It's "just" another dependency no? The ideal situation is that something is built-in and just works, examples: dicts, lists, sorted(). 
So, if you have to import it to use it, it's still great but less seamless, current example: regular expressions. Let's say Guido suggests we should import sorted, dict, and list before use. Not a big deal, right? I mean, how many applications do you know that don't use any other imports?

Finally, if you have to find a third-party package, add it to requirements.txt and manage the dependency forward, that's even less seamless. The standard library has a pretty conservative approach to backwards compatibility. On the other hand, third-party libraries often don't. Sure, there are noble exceptions, but the general feel is that you need to be more careful with dependencies from PyPI. If somebody suggested that regular expressions or dictionaries should be moved to PyPI, in my book that would suggest strange things will start happening in the future.

So, the difference is in perceived usability. It's psychological.

- Ł

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From ncoghlan at gmail.com Fri Nov 3 23:53:36 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 4 Nov 2017 13:53:36 +1000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID:

On 4 November 2017 at 04:24, Guido van Rossum wrote:
> A side note (I'm reading all responses but staying out of the discussion):
>
> No static checker should depend on the *contents* of typing.py, since it's
> just a bunch of runtime gymnastics to allow types to be evaluated at runtime
> without errors, with a secondary goal of making them introspectable (some
> folks don't even agree with the latter, e.g. Mark Shannon).
>
> Static analyzers should be able to make strong *assumptions* about what
> things defined there mean -- in mypy such assumptions are all over the
> place, based on the full name of things -- it never reads typing.py. (It
> reads typing.pyi from typeshed, but what's there is ignored in many cases
> too.)

If I understand correctly, a lot of the complexity in the current typing.py implementation is there to make isinstance and issubclass do something "useful" at runtime, and to allow generics to be used as base classes.

If it wasn't for those design goals, then "typing.List[int]" could just return a lightweight instance of a regular class rather than a usable Python class definition.

If I'm right about that, then PEP 560's proposal to allow types to implement a hook that says "Replace me with this other object for runtime subclassing purposes" may be enough to let you delete most of the current code in the typing module - you'd just need to have isinstance and issubclass respect that new hook as well, and define the hooks as returning the relevant ABCs.
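As a concrete sketch of what that hook could look like (illustrative only - the exact spelling is whatever PEP 560 ends up specifying, and this isn't the typing module's actual code):

    class ListAlias:
        # Lightweight object that "typing.List[int]" could return
        def __init__(self, origin, args):
            self.origin = origin    # the real runtime class, e.g. list
            self.args = args        # the type arguments, e.g. (int,)

        def __mro_entries__(self, bases):
            # At class creation time, a class statement that lists this
            # alias as a base gets the real class substituted instead.
            return (self.origin,)

Cheers,
Nick.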
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Nov 3 23:53:36 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 4 Nov 2017 13:53:36 +1000 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: On 4 November 2017 at 04:24, Guido van Rossum wrote: > A side note (I'm reading all responses but staying out of the discussion): > > No static checker should depend on the *contents* of typing.py, since it's > just a bunch of runtime gymnastics to allow types to be evaluated at runtime > without errors, with a secondary goal of making them introspectable (some > folks don't even agree with the latter, e.g. Mark Shannon). > > Static analyzers should be able to make strong *assumptions* about what > things defined there mean -- in mypy such assumptions are all over the > place, based on the full name of things -- it never reads typing.py. (It > reads typing.pyi from typeshed, but what's there is ignored in many cases > too.) If I understand correctly, a lot of the complexity in the current typing.py implementation is there to make isinstance and issubclass do something "useful" at runtime, and to allow generics to be used as base classes. If it wasn't for those design goals, then "typing.List[int]" could just return a lightweight instance of a regular class rather than a usable Python class definition. If I'm right about that, then PEP 560's proposal to allow types to implement a hook that says "Replace me with this other object for runtime subclassing purposes" may be enough to let you delete most of the current code in the typing module - you'd just need to have isinstance and issubclass respect that new hook as well, and define the hooks as returning the relevant ABCs. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Sat Nov 4 06:39:01 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 4 Nov 2017 10:39:01 +0000 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: On 4 November 2017 at 03:53, Nick Coghlan wrote: > If I understand correctly, a lot of the complexity in the current > typing.py implementation is there to make isinstance and issubclass do > something "useful" at runtime, and to allow generics to be used as > base classes. > > If it wasn't for those design goals, then "typing.List[int]" could > just return a lightweight instance of a regular class rather than a > usable Python class definition. +1 to this. > If I'm right about that, then PEP 560's proposal to allow types to > implement a hook that says "Replace me with this other object for > runtime subclassing purposes" may be enough to let you delete most of > the current code in the typing module - you'd just need to have > isinstance and issubclass respect that new hook as well, and define > the hooks as returning the relevant ABCs. That would seem ideal to me. Lukasz Langa said: > So, the difference is in perceived usability. It's psychological. Please, let's not start the "not in the stdlib isn't an issue" debate again. If I concede it's a psychological issue, will you concede that the fact that it's psychological doesn't mean that it's not a real, difficult to solve, problem for some people? I'm also willing to concede that it's a *minority* problem, if that helps. But can we stop dismissing it as a non-existent problem? 
Paul

From ncoghlan at gmail.com Sat Nov 4 09:51:41 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 4 Nov 2017 23:51:41 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On 4 November 2017 at 00:40, Guido van Rossum wrote:
> IMO the inability of referencing class-level definitions from annotations on
> methods pretty much kills this idea.

If we decided we wanted to make it work, I think the key runtime building block we would need is a new kind of cell reference: an IndirectAttributeCell.

Those would present the same interface as a regular nonlocal cell (so it could be stored in __closure__ just as regular cells are, and accessed the same way when the function body is executed), but internally it would hold two references:

- one to another cell object (__class__ for this use case)
- an attribute name on the target object that get/set/del operations
  on the indirect cell's value should affect

As Python code:

    class IndirectAttributeCell:
        def __init__(self, cell, attr):
            self._cell = cell    # the cell holding the target object (__class__ here)
            self._attr = attr    # the attribute to proxy on that object

        @property
        def cell_contents(self):
            return getattr(self._cell.cell_contents, self._attr)

        @cell_contents.setter
        def cell_contents(self, value):
            setattr(self._cell.cell_contents, self._attr, value)

        @cell_contents.deleter
        def cell_contents(self):
            delattr(self._cell.cell_contents, self._attr)

The class body wouldn't be able to evaluate the thunks (since `__class__` wouldn't be set yet), but `__init_subclass__` implementations could, as could class decorators.

It would require some adjustment in the compiler as well (in order to pass the class level attribute definitions down to these implicitly defined scopes as a new kind of accessible external namespace during the symbol analysis pass, as well as to register the use of "__class__" if one of the affected names was referenced), but I think it would work at least at a technical level (by contrast, every other idea I came up with back when I was working on the list comprehension change was sufficiently flawed that it fell apart within a few hours of starting to tinker with the idea).

As an added bonus, we could potentially also extend the same permissive name resolution semantics to the implicit scopes used in comprehensions, such that it was only the explicitly defined scopes (i.e. lambda expressions, function definitions, and nested classes) that lost implicit access to the class level variables.

Cheers,
Nick.

P.S. If we subsequently decided to elevate expression thunks to a first class language primitive, they shouldn't need any further semantic enhancements beyond that one, since the existing scoping rules already give the desired behaviour at module and function scope.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From stefan at bytereef.org Sat Nov 4 13:30:13 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Sat, 4 Nov 2017 18:30:13 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
Message-ID: <20171104173013.GA4005@bytereef.org>

Hello,

would it be possible to guarantee that dict literals are ordered in v3.7?

The issue is well-known and the workarounds are tedious, example:

https://mail.python.org/pipermail/python-ideas/2015-December/037423.html

If the feature is guaranteed now, people can rely on it around v3.9.
Stefan Krah From guido at python.org Sat Nov 4 12:42:09 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 4 Nov 2017 09:42:09 -0700 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: I'm very worried about trying to come up with a robust implementation of this in under 12 weeks. By contrast, the stringification that ?ukasz is proposing feels eminently doable. On Sat, Nov 4, 2017 at 6:51 AM, Nick Coghlan wrote: > On 4 November 2017 at 00:40, Guido van Rossum wrote: > > IMO the inability of referencing class-level definitions from > annotations on > > methods pretty much kills this idea. > > If we decided we wanted to make it work, I think the key runtime > building block we would need is a new kind of cell reference: an > IndirectAttributeCell. > > Those would present the same interface as a regular nonlocal cell (so > it could be stored in __closure__ just as regular cells are, and > accessed the same way when the function body is executed), but > internally it would hold two references: > > - one to another cell object (__class__ for this use case) > - an attribute name on the target object that get/set/del operations > on the indirect cell's value should affect > > As Python code: > > class IndirectAttributeCell: > def __new__(cls, cell, attr): > self._cell = cell > self._attr = attr > > @property > def cell_contents(self): > return getattr(self._cell.cell_contents, self._attr) > > @cell_contents.setter > def cell_contents(self, value): > setattr(self._cell.cell_contents, self._attr, value) > > @cell_contents.deleter > def cell_contents(self): > delattr(self._cell.cell_contents, self._attr) > > The class body wouldn't be able to evaluate the thunks (since > `__class__` wouldn't be set yet), but `__init_subclass__` > implementations could, as could class decorators. > > It would require some adjustment in the compiler as well (in order to > pass the class level attribute definitions down to these implicitly > defined scopes as a new kind of accessible external namespace during > the symbol analysis pass, as well as to register the use of > "__class__" if one of the affected names was referenced), but I think > it would work at least at a technical level (by contrast, every other > idea I came up with back when I was working on the list comprehension > change was sufficiently flawed that it fell apart within a few hours > of starting to tinker with the idea). > > As an added bonus, we could potentially also extend the same > permissive name resolution semantics to the implicit scopes used in > comprehensions, such that it was only the explicitly defined scopes > (i.e. lambda expressions, function definitions, and nested classes) > that lost implicit access to the class level variables. > > Cheers, > Nick. > > P.S. If we subsequently decided to elevate expression thunks to a > first class language primitive, they shouldn't need any further > semantic enhancements beyond that one, since the existing scoping > rules already give the desired behaviour at module and function scope. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guido at python.org Sat Nov 4 14:35:11 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 4 Nov 2017 11:35:11 -0700 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171104173013.GA4005@bytereef.org> References: <20171104173013.GA4005@bytereef.org> Message-ID: This sounds reasonable -- I think when we introduced this in 3.6 we were worried that other implementations (e.g. Jython) would have a problem with this, but AFAIK they've reported back that they can do this just fine. So let's just document this as a language guarantee. On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah wrote: > > Hello, > > would it be possible to guarantee that dict literals are ordered in v3.7? > > > The issue is well-known and the workarounds are tedious, example: > > https://mail.python.org/pipermail/python-ideas/2015- > December/037423.html > > > If the feature is guaranteed now, people can rely on it around v3.9. > > > > Stefan Krah > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pludemann at google.com Sat Nov 4 14:43:06 2017 From: pludemann at google.com (Peter Ludemann) Date: Sat, 4 Nov 2017 11:43:06 -0700 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: If type annotations are treated like implicit lambdas, then that's a first step to something similar to Lisp's "special forms". A full generalization of that would allow, for example, logging.debug to not evaluate its args unless debugging is turned on (I use a logging.debug wrapper that allows lambdas as args, and evaluates them if debugging is turned on). Maybe a better question is whether we want "special forms" in Python. It complicates some things but simplifies others. But things that satisfy Lisp programmers might not make Python programmers happy. ;) On 4 November 2017 at 09:42, Guido van Rossum wrote: > I'm very worried about trying to come up with a robust implementation of > this in under 12 weeks. By contrast, the stringification that ?ukasz is > proposing feels eminently doable. > > On Sat, Nov 4, 2017 at 6:51 AM, Nick Coghlan wrote: > >> On 4 November 2017 at 00:40, Guido van Rossum wrote: >> > IMO the inability of referencing class-level definitions from >> annotations on >> > methods pretty much kills this idea. >> >> If we decided we wanted to make it work, I think the key runtime >> building block we would need is a new kind of cell reference: an >> IndirectAttributeCell. 
>> >> Those would present the same interface as a regular nonlocal cell (so >> it could be stored in __closure__ just as regular cells are, and >> accessed the same way when the function body is executed), but >> internally it would hold two references: >> >> - one to another cell object (__class__ for this use case) >> - an attribute name on the target object that get/set/del operations >> on the indirect cell's value should affect >> >> As Python code: >> >> class IndirectAttributeCell: >> def __new__(cls, cell, attr): >> self._cell = cell >> self._attr = attr >> >> @property >> def cell_contents(self): >> return getattr(self._cell.cell_contents, self._attr) >> >> @cell_contents.setter >> def cell_contents(self, value): >> setattr(self._cell.cell_contents, self._attr, value) >> >> @cell_contents.deleter >> def cell_contents(self): >> delattr(self._cell.cell_contents, self._attr) >> >> The class body wouldn't be able to evaluate the thunks (since >> `__class__` wouldn't be set yet), but `__init_subclass__` >> implementations could, as could class decorators. >> >> It would require some adjustment in the compiler as well (in order to >> pass the class level attribute definitions down to these implicitly >> defined scopes as a new kind of accessible external namespace during >> the symbol analysis pass, as well as to register the use of >> "__class__" if one of the affected names was referenced), but I think >> it would work at least at a technical level (by contrast, every other >> idea I came up with back when I was working on the list comprehension >> change was sufficiently flawed that it fell apart within a few hours >> of starting to tinker with the idea). >> >> As an added bonus, we could potentially also extend the same >> permissive name resolution semantics to the implicit scopes used in >> comprehensions, such that it was only the explicitly defined scopes >> (i.e. lambda expressions, function definitions, and nested classes) >> that lost implicit access to the class level variables. >> >> Cheers, >> Nick. >> >> P.S. If we subsequently decided to elevate expression thunks to a >> first class language primitive, they shouldn't need any further >> semantic enhancements beyond that one, since the existing scoping >> rules already give the desired behaviour at module and function scope. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> > > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > pludemann%40google.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.baker at python.org Sat Nov 4 16:55:58 2017 From: jim.baker at python.org (Jim Baker) Date: Sat, 4 Nov 2017 14:55:58 -0600 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> Message-ID: +1, as Guido correctly recalls, this language guarantee will work well with Jython when we get to the point of implementing 3.7+. On Sat, Nov 4, 2017 at 12:35 PM, Guido van Rossum wrote: > This sounds reasonable -- I think when we introduced this in 3.6 we were > worried that other implementations (e.g. Jython) would have a problem with > this, but AFAIK they've reported back that they can do this just fine. So > let's just document this as a language guarantee. 
> > On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah wrote:
>>
>> Hello,
>>
>> would it be possible to guarantee that dict literals are ordered in v3.7?
>>
>> The issue is well-known and the workarounds are tedious, example:
>>
>> https://mail.python.org/pipermail/python-ideas/2015-December/037423.html
>>
>> If the feature is guaranteed now, people can rely on it around v3.9.
>>
>> Stefan Krah
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>
> --
> --Guido van Rossum (python.org/~guido)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jbaker%40zyasoft.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Sat Nov 4 21:32:13 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 5 Nov 2017 11:32:13 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On 5 November 2017 at 02:42, Guido van Rossum wrote:
> I'm very worried about trying to come up with a robust implementation of
> this in under 12 weeks. By contrast, the stringification that Łukasz is
> proposing feels eminently doable.

I'm far from confident about that, as the string proposal inherently breaks runtime type annotation evaluation for nested function and class definitions, since those lose access to nonlocal variable references (since the compiler isn't involved in their name resolution any more).

https://www.python.org/dev/peps/pep-0563/#resolving-type-hints-at-runtime is essentially defining a completely new type annotation specific scheme for name resolution, and takes us back to a Python 1.x era "locals and globals only" approach with no support for closure variables.

Consider this example from the PEP:

    def generate():
        A = Optional[int]
        class C:
            field: A = 1
            def method(self, arg: A) -> None: ...
        return C

    X = generate()

The PEP's current attitude towards this is "Yes, it will break, but that's OK, because it doesn't matter for the type annotation use case, since static analysers will still understand it". Adopting such a cavalier approach towards backwards compatibility with behaviour that has been supported since Python 3.0 *isn't OK*, since it would mean we were taking the step from "type annotations are the primary use case" to "Other use cases for function annotations are no longer supported".

The only workaround I can see for that breakage is that instead of using strings, we could instead define a new "thunk" type that consists of two things:

1. A code object to be run with eval()
2. A dictionary mapping from variable names to closure cells (or None
   for not yet resolved references to globals and builtins)

Correctly evaluating the code object in its original context would then be possible by reading the "cell_contents" attributes of the cells stored in the mapping and injecting them into the globals namespace used to run the code.
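As an illustrative sketch of the kind of object I mean (invented names, completely untested):

    class AnnotationThunk:
        def __init__(self, code, cells):
            self.code = code     # from compile(expr_text, filename, "eval")
            self.cells = cells   # maps names to closure cells, or None

        def evaluate(self, globalns):
            # Inject the current cell contents as globals, then evaluate
            # the stored expression in that namespace.
            ns = dict(globalns)
            for name, cell in self.cells.items():
                if cell is not None:
                    ns[name] = cell.cell_contents
            return eval(self.code, ns)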
This would actually be a pretty cool new primitive to have available
(since it also leaves the consuming code free to *ignore* the closure
cells, which is what you'd want for use cases like callback functions
with implicitly named parameters), and retains the current eager
compilation behaviour (so we'd be storing compiled code objects as
constants instead of strings).

If PEP 563 were updated to handle closure references properly using a
scheme like the one above, I'd be far more supportive of the proposal.

Alternatively, in a lambda based proposal that compiled code like the
above as equivalent to the following code today:

    def generate():
        A = Optional[int]
        class C:
            field: A = 1
            def method(self, arg: (lambda: A)) -> None: ...
        return C

    X = generate()

Then everything's automatically fine, since the compiler would
correctly resolve the nonlocal reference to A and inject the
appropriate closure references.

In such a lambda based implementation, the *only* tricky case is this
one, where the typevar is declared at class scope:

    class C:
        A = Optional[int]
        field: A = 1
        def method(self, arg: A) -> None: ...

Now, even without the introduction of the IndirectAttributeCell
concept, this is amenable to a pretty simple workaround:

    A = Optional[int]
    class C:
        field: A = 1
        def method(self, arg: A) -> None: ...
    C.A = A
    del A

But I genuinely can't see how breaking annotation evaluation at class
scope can be seen as a deal-breaker for the implicit lambda based
approach without breaking annotation evaluation for nested functions
also being seen as a deal-breaker for the string based approach.

Either way, there are going to be changes needed to the compiler in
order for it to still generate suitable references at compile time -
the only question would then be whether they're existing cells stored
in a new construct (a thunk to be executed with eval rather than via a
regular function call), or a new kind of cell stored on a regular
function object (implicit access to class attributes from implicitly
defined scopes in the class body).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Sat Nov 4 22:04:41 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 5 Nov 2017 12:04:41 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID:

On 5 November 2017 at 04:35, Guido van Rossum wrote:
> This sounds reasonable -- I think when we introduced this in 3.6 we were
> worried that other implementations (e.g. Jython) would have a problem with
> this, but AFAIK they've reported back that they can do this just fine. So
> let's just document this as a language guarantee.

When I asked Damien George about this for MicroPython, he indicated
that they'd have to choose between guaranteed order and O(1) lookups
given their current dict implementation. That surprised me a bit
(since PyPy and CPython both *saved* memory by switching to their
guaranteed order implementations, hence the name "compact dict
representation"), but my (admittedly vague) understanding is that the
presence of a space/speed trade-off in their case has something to do
with MicroPython deliberately running with a much higher chance of
hash collisions in general (since the data sets it deals with are
naturally smaller).
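For readers who haven't seen the "compact dict representation" spelled
out: its essence can be sketched in a few lines of Python. This is a
deliberately simplified illustration (fixed-size index table, no
resizing or deletion handling), not CPython's actual C implementation:

    class CompactDict:
        def __init__(self):
            self.indices = [None] * 64   # sparse table: slot -> entry index
            self.entries = []            # dense, insertion-ordered entries

        def _find_slot(self, key):
            mask = len(self.indices) - 1
            i = hash(key) & mask
            while True:
                ix = self.indices[i]
                if ix is None or self.entries[ix][0] == key:
                    return i, ix
                i = (i + 1) & mask       # linear probing on collision

        def __setitem__(self, key, value):
            i, ix = self._find_slot(key)
            if ix is None:               # new key appends to the dense
                self.indices[i] = len(self.entries)   # array, so insertion
                self.entries.append((key, value))     # order is preserved
            else:
                self.entries[ix] = (key, value)

        def __getitem__(self, key):
            ix = self._find_slot(key)[1]
            if ix is None:
                raise KeyError(key)
            return self.entries[ix][1]

        def __iter__(self):              # iteration walks the dense array
            return (key for key, _ in self.entries)

The sparse table holds only small integers, the dense array is what gets
iterated, and lookups stay O(1) - which is why the layout can save memory
while also remembering insertion order.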
So if we make the change, MicroPython will likely go along with it,
but it may mean that dict lookups there become O(N), and folks will be
relying on "N" being consistently small due to memory constraints (but
some typically O(N) algorithms will still become O(N^2) when run on
MicroPython).

I don't think that situation should change the decision, but I do
think it would be helpful if folks that understand CPython's dict
implementation could take a look at MicroPython's dict implementation
and see if it might be possible for them to avoid having to make that
trade-off and instead be able to use a naturally insertion ordered
hashmap implementation.

Cheers,
Nick.

P.S. If anyone does want to explore MicroPython's dict implementation,
and see if there might be an alternate implementation strategy that
offers both O(1) lookup and guaranteed ordering without using
additional memory, the relevant files seem to be:

* https://github.com/micropython/micropython/blob/77a48e8cd493c0b0e0ca2d2ad58a110a23c6a232/py/obj.h#L339
(C level hashmap/ordered array structs)
* https://github.com/micropython/micropython/blob/master/py/map.c (C
level hashmap/ordered array implementation)
* https://github.com/micropython/micropython/blob/master/py/objdict.c
(Python dict wrapper around the mapping impl)

The current behaviour is that the builtin dict uses the hashmap
algorithms, while collections.OrderedDict uses the ordered array
algorithms.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov.ml at gmail.com Sun Nov 5 11:02:47 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sun, 5 Nov 2017 11:02:47 -0500
Subject: [Python-Dev] [python-committers] Reminder: 12 weeks to 3.7
 feature code cutoff
In-Reply-To:
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID:

On Fri, Nov 3, 2017 at 11:30 PM, Nick Coghlan wrote:
> On 4 November 2017 at 09:52, Jelle Zijlstra wrote:
>>
>> 2017-11-03 16:44 GMT-07:00 Joao S. O. Bueno :
>>>
>>> This just popped up in Brython's issue tracker discussion:
>>>
>>> """
>>> Pierre Quentel
>>>
>>> 04:57 (16 hours ago)
>>> to brython-dev/br., Subscribed
>>>
>>> I think it's better to rename all occurrences of async now, although
>>> it's strange that :
>>>
>>> there is currently no deprecation warning in CPython with code that
>>> uses it as a variable name, PEP492 said that "async and await names
>>> will be softly deprecated in CPython 3.5 and 3.6"
>>> there is no mention of async and await becoming keywords in What's new
>>> in Python 3.7
>>>
>>> Maybe the idea was finally given up, but I can't find a reference.
>>>
>>> """
>>>
>>> So, what is the status of promoting async and await to full keyword for
>>> 3.7?
>>>
>> This was implemented, and it's in NEWS:
>> https://github.com/python/cpython/pull/1669.
>
> That's a big enough change that it should be in What's New as well (at
> least in the porting section, and probably more prominent than that).
>
> The current lack of DeprecationWarnings in 3.6 is a fairly major
> oversight/bug, though:

There's no oversight. We had PendingDeprecationWarning for
async/await names in 3.5, and DeprecationWarning in 3.6. You just
need to enable warnings to see them:

~ $ python3 -Wall
Python 3.6.2 (default, Aug 2 2017, 22:29:27)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> async = 1
:1: DeprecationWarning: 'async' and 'await' will become reserved
keywords in Python 3.7

> So if we're going to go ahead with making them real keywords in 3.7
> (as specified in PEP 492), then the missing DeprecationWarning problem
> in 3.6 needs to be fixed.

They are already keywords in 3.7, I've committed that change a month ago.

Yury

From barry at python.org Sun Nov 5 12:37:33 2017
From: barry at python.org (Barry Warsaw)
Date: Sun, 5 Nov 2017 09:37:33 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID: <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>

On Nov 4, 2017, at 11:35, Guido van Rossum wrote:
>
> This sounds reasonable -- I think when we introduced this in 3.6 we were worried that other implementations (e.g. Jython) would have a problem with this, but AFAIK they've reported back that they can do this just fine. So let's just document this as a language guarantee.

The other concern was backward compatibility issues. For example, if 3.7
makes this guarantee official, and folks write code with this assumption
that has to work with older versions of Python, then that code could be
buggy in previous versions and work in 3.7. This will probably be most
evident in test suites, and such failures are often mysterious to debug (as
we've seen in the past).

That doesn't mean we shouldn't do it, but it does mean we have to be
careful and explicit to educate users about how to write safe
multi-Python-version code.

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL:

From brett at python.org Sun Nov 5 12:45:54 2017
From: brett at python.org (Brett Cannon)
Date: Sun, 05 Nov 2017 17:45:54 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <8EA65403-C40D-489E-B5AB-C2E0FC32482D@langa.pl>
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
 <8EA65403-C40D-489E-B5AB-C2E0FC32482D@langa.pl>
Message-ID:

On Fri, 3 Nov 2017 at 17:59 Lukasz Langa wrote:

>
> > On 3 Nov, 2017, at 4:01 PM, Victor Stinner
> wrote:
> >
> > The question is if you would only need <import typing> or
> > <pip install typing>.
> >
> > If typing is removed from the stdlib, you can still use it in your
> > application. It's "just" another dependency no?
>
>
> The ideal situation is that something is built-in and just works,
> examples: dicts, lists, sorted().
>
> So, if you have to import it to use it, it's still great but less
> seamless, current example: regular expressions. Let's say Guido suggests we
> should import sorted, dict, and list before use. Not a big deal, right? I
> mean, how many applications do you know that don't use any other imports?
>
> Finally, if you have to find a third-party package, add it to
> requirements.txt and manage the dependency forward, that's even less
> seamless. The standard library has a pretty conservative approach to
> backwards compatibility. On the other hand, third-party libraries often
> don't. Sure, there are noble exceptions, but the general feel is that you need
> to be more careful with dependencies from PyPI. If somebody suggested that
> regular expressions or dictionaries should be moved to PyPI, in my book
> that would suggest strange things will start happening in the future.
>
> So, the difference is in perceived usability. It's psychological.
>

I think another psychological side-effect of removing 'typing' is how
strongly people will view the guidance that annotations are generally
expected to be type hints. With 'typing' in the stdlib, it made that
assumption a bit strong, especially if the stdlib ever started to use type
hints itself. But by saying "we expect annotations will be used for type
hints, but we actually don't do that in the stdlib nor provide a mechanism
for actually specifying type hints", that weakens the message. It starts
to feel like a "do as I say, not as I do" bit of guidance (which to me is
different than the worry that the usage of type hints will decrease).

But that might be fine; I personally don't know. But my suspicion is we
will see an uptick of alternative annotation uses that aren't type hints
if we take 'typing' out, and that's something we should be aware of and
okay with.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Sun Nov 5 12:50:41 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 5 Nov 2017 09:50:41 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
Message-ID:

Yup. At least such code will not break in 3.5.

In general if you write code using a newer version you should expect
arbitrary breakage when trying to run it under older versions. There is no
compatibility guarantee in that direction for anything anyways.

I don't see this as a reason to not put this into the language spec at 3.7.

On Sun, Nov 5, 2017 at 9:37 AM, Barry Warsaw wrote:

> On Nov 4, 2017, at 11:35, Guido van Rossum wrote:
> >
> > This sounds reasonable -- I think when we introduced this in 3.6 we were
> worried that other implementations (e.g. Jython) would have a problem with
> this, but AFAIK they've reported back that they can do this just fine. So
> let's just document this as a language guarantee.
>
> The other concern was backward compatibility issues. For example, if 3.7
> makes this guarantee official, and folks write code with this assumption
> that has to work with older versions of Python, then that code could be
> buggy in previous versions and work in 3.7. This will probably be most
> evident in test suites, and such failures are often mysterious to debug (as
> we've seen in the past).
>
> That doesn't mean we shouldn't do it, but it does mean we have to be
> careful and explicit to educate users about how to write safe
> multi-Python-version code.
>
> Cheers,
> -Barry
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> guido%40python.org
>
>

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paul at ganssle.io Sun Nov 5 13:14:54 2017
From: paul at ganssle.io (Paul G)
Date: Sun, 5 Nov 2017 13:14:54 -0500
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
Message-ID:

I'm not entirely sure I understand the full set of reasoning for this - I
couldn't really tell what the problem with OrderedDict is from the link
Stefan provided.
It seems to me like a kind of huge change for the language to move from
arbitrary-ordered to guaranteed-ordered dict. The problem I see is that this
introduces a huge backwards compatibility burden on all implementations of
Python. It's possible that no implementations of Python (either future CPython
versions or current/future alternative interpreters) will find any reason to
use anything but an insertion-order sorted dictionary, but given that we've
done just fine with arbitrary-order semantics for the entire lifetime of the
language /and/ there is a container (OrderedDict) which has guaranteed order
semantics, it doesn't seem worth it to me.

I think I would mostly be concerned with (in terms of likeliness to occur):

1. An edge case we haven't thought of where arbitrary order dictionaries would
allow some critical optimization for a given platform (perhaps in someone
writing a transpiler to another language where the convenient equivalent
container has arbitrary order, e.g. if Brython wants to implement dicts in
terms of objects -
https://stackoverflow.com/questions/5525795/does-javascript-guarantee-object-property-order )

2. Someone invents a new arbitrary-ordered container that would improve on the
memory and/or CPU performance of the current dict implementation

3. Some sort of bug or vulnerability is discovered that makes
insertion-ordered dictionaries an unwise choice (similar to the hash collision
vulnerability that necessitated hash randomization -
https://stackoverflow.com/questions/14956313#14959001 ).

Perhaps these concerns are overblown, but if indeed guaranteed-order Mapping
literals are critical in some or many cases, maybe it would be preferable to
introduce new syntax for OrderedDict literals.

Best,
Paul

On 11/05/2017 12:50 PM, Guido van Rossum wrote:
> Yup. At least such code will not break in 3.5.
>
> In general if you write code using a newer version you should expect
> arbitrary breakage when trying to run it under older versions. There is no
> compatibility guarantee in that direction for anything anyways.
>
> I don't see this as a reason to not put this into the language spec at 3.7.
>
> On Sun, Nov 5, 2017 at 9:37 AM, Barry Warsaw wrote:
>
>> On Nov 4, 2017, at 11:35, Guido van Rossum wrote:
>>>
>>> This sounds reasonable -- I think when we introduced this in 3.6 we were
>> worried that other implementations (e.g. Jython) would have a problem with
>> this, but AFAIK they've reported back that they can do this just fine. So
>> let's just document this as a language guarantee.
>>
>> The other concern was backward compatibility issues. For example, if 3.7
>> makes this guarantee official, and folks write code with this assumption
>> that has to work with older versions of Python, then that code could be
>> buggy in previous versions and work in 3.7. This will probably be most
>> evident in test suites, and such failures are often mysterious to debug (as
>> we've seen in the past).
>>
>> That doesn't mean we shouldn't do it, but it does mean we have to be
>> careful and explicit to educate users about how to write safe
>> multi-Python-version code.
>>
>> Cheers,
>> -Barry
>>
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
>> guido%40python.org
>>
>
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From stefan at bytereef.org Sun Nov 5 13:39:31 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 5 Nov 2017 19:39:31 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
Message-ID: <20171105183931.GA18442@bytereef.org>

On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
> I'm not entirely sure I understand the full set of reasoning for this - I couldn't really tell what the problem with OrderedDict is from the link Stefan provided. It seems to me like a kind of huge change for the language to move from arbitrary-ordered to guaranteed-ordered dict. The problem I see is that this introduces a huge backwards compatibility burden on all implementations of Python.

Scientific applications want something like

{'a': 10, 'b': "foo", 'c': {'this': b'123'}}

as an ordered initializer for unboxed or typed (or both) data. In general,
if dicts are ordered, they can be used for example as initializers for
(nested) C structs.

> 2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation

I would think this is very unlikely, given that the previous dict implementation
has always been very fast. The new one is very fast, too.

Stefan Krah

From paul at ganssle.io Sun Nov 5 13:57:19 2017
From: paul at ganssle.io (Paul G)
Date: Sun, 5 Nov 2017 13:57:19 -0500
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171105183931.GA18442@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
Message-ID:

> Scientific applications want something like
>
> {'a': 10, 'b': "foo", 'c': {'this': b'123'}}
>
> as an ordered initializer for unboxed or typed (or both) data. In general,
> if dicts are ordered, they can be used for example as initializers for
> (nested) C structs.

I can understand why you'd want an ordered container, I just don't see why it
must be a dict. Why can't it be:

OrderedDict(a=10, b='foo', c=OrderedDict(this=b'123'))

Is it just that you don't want to type OrderedDict that many times?

If it's so important to provide ordered dictionary literals, I would think
it's a no-brainer to give them their own literal syntax (rather than
re-defining dicts to have guaranteed order). e.g.:

o{'a': 10, 'b': 'foo', 'c': o{'this': b'123'}}

Then there is no backwards incompatibility problem and users can express
whether order does or does not matter to them when initializing a container.
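(For comparison: something quite close to an ordered "literal" can already
be written today by leaning on PEP 468's ordered **kwargs. The helper name
"od" below is made up for illustration, and it only works for keys that are
valid identifiers:)

    from collections import OrderedDict

    def od(**kwargs):
        # PEP 468 (Python 3.6+): **kwargs preserves the order in which
        # the keywords were passed, so this copy keeps that order.
        return OrderedDict(kwargs)

    cfg = od(a=10, b='foo', c=od(this=b'123'))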
On 11/05/2017 01:39 PM, Stefan Krah wrote:
> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>> I'm not entirely sure I understand the full set of reasoning for this - I couldn't really tell what the problem with OrderedDict is from the link Stefan provided. It seems to me like a kind of huge change for the language to move from arbitrary-ordered to guaranteed-ordered dict. The problem I see is that this introduces a huge backwards compatibility burden on all implementations of Python.
>
>
>
>
>> 2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation
>
> I would think this is very unlikely, given that the previous dict implementation
> has always been very fast. The new one is very fast, too.
>
>
>
> Stefan Krah
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From storchaka at gmail.com Sun Nov 5 14:01:40 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 5 Nov 2017 21:01:40 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171104173013.GA4005@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
Message-ID:

04.11.17 19:30, Stefan Krah wrote:
> would it be possible to guarantee that dict literals are ordered in v3.7?
>
>
> The issue is well-known and the workarounds are tedious, example:
>
> https://mail.python.org/pipermail/python-ideas/2015-December/037423.html
>
>
> If the feature is guaranteed now, people can rely on it around v3.9.

Do you suggest making dictionary displays produce OrderedDict instead of
dict?

From storchaka at gmail.com Sun Nov 5 14:09:37 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 5 Nov 2017 21:09:37 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171105183931.GA18442@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
Message-ID:

05.11.17 20:39, Stefan Krah wrote:
> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>> 2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation
>
> I would think this is very unlikely, given that the previous dict implementation
> has always been very fast. The new one is very fast, too.

The modification of the current implementation that doesn't preserve the
initial order after deletion would be more compact and faster.

From stefan at bytereef.org Sun Nov 5 14:20:06 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 5 Nov 2017 20:20:06 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID: <20171105192006.GA19157@bytereef.org>

On Sun, Nov 05, 2017 at 09:01:40PM +0200, Serhiy Storchaka wrote:
> Do you suggest making dictionary displays produce OrderedDict
> instead of dict?

No, this is essentially a language spec doc issue that would guarantee
the ordering properties of the current dict implementation.
Stefan Krah

From stefan at bytereef.org Sun Nov 5 14:30:29 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 5 Nov 2017 20:30:29 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
Message-ID: <20171105193029.GA19471@bytereef.org>

On Sun, Nov 05, 2017 at 09:09:37PM +0200, Serhiy Storchaka wrote:
> 05.11.17 20:39, Stefan Krah wrote:
> >On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
> >>2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation
> >
> >I would think this is very unlikely, given that the previous dict implementation
> >has always been very fast. The new one is very fast, too.
>
> The modification of the current implementation that doesn't preserve
> the initial order after deletion would be more compact and faster.

How much faster?

Stefan Krah

From storchaka at gmail.com Sun Nov 5 14:35:38 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 5 Nov 2017 21:35:38 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171105192006.GA19157@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
 <20171105192006.GA19157@bytereef.org>
Message-ID:

05.11.17 21:20, Stefan Krah wrote:
> On Sun, Nov 05, 2017 at 09:01:40PM +0200, Serhiy Storchaka wrote:
>> Do you suggest making dictionary displays produce OrderedDict
>> instead of dict?
>
> No, this is essentially a language spec doc issue that would guarantee
> the ordering properties of the current dict implementation.

Wouldn't it be enough to guarantee just the ordering of dicts before first
deletion? Or before first resizing (the maximal size of dictionary displays
is known at compile time, so they can be presized)?

From storchaka at gmail.com Sun Nov 5 14:54:22 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 5 Nov 2017 21:54:22 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171105193029.GA19471@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
 <20171105193029.GA19471@bytereef.org>
Message-ID:

05.11.17 21:30, Stefan Krah wrote:
> On Sun, Nov 05, 2017 at 09:09:37PM +0200, Serhiy Storchaka wrote:
>> 05.11.17 20:39, Stefan Krah wrote:
>>> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>>>> 2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation
>>>
>>> I would think this is very unlikely, given that the previous dict implementation
>>> has always been very fast. The new one is very fast, too.
>>
>> The modification of the current implementation that doesn't preserve
>> the initial order after deletion would be more compact and faster.
>
> How much faster?

I didn't try to implement this. But the current implementation requires
periodic reallocation as items are added and removed. The following loop
reallocates the dict every len(d) iterations, while the size of the dict is
not changed, and half of its storage is empty.

while True:
    v = d.pop(k)
    ...
    d[k] = v

From stefan at bytereef.org Sun Nov 5 15:06:12 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 5 Nov 2017 21:06:12 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <20171105192006.GA19157@bytereef.org>
Message-ID: <20171105200611.GA19697@bytereef.org>

On Sun, Nov 05, 2017 at 09:35:38PM +0200, Serhiy Storchaka wrote:
> 05.11.17 21:20, Stefan Krah wrote:
> >On Sun, Nov 05, 2017 at 09:01:40PM +0200, Serhiy Storchaka wrote:
> >>Do you suggest making dictionary displays produce OrderedDict
> >>instead of dict?
> >
> >No, this is essentially a language spec doc issue that would guarantee
> >the ordering properties of the current dict implementation.
>
> Wouldn't it be enough to guarantee just the ordering of dicts before
> first deletion? Or before first resizing (the maximal size of
> dictionary displays is known at compile time, so they can be
> presized)?

Yes, for my use case that would be sufficient and that's what
I had in mind initially.

A luxury syntax addition like {a = 10, b = {c = "foo"}} that is read
as an OrderedDict (where the keys a, b, c are implicitly strings) would
of course also be sufficient for my use case.

But I suspect many users who have other use cases find it tantalizing
not to be able to use the properties of the current regular dict.

Stefan Krah

From solipsis at pitrou.net Sun Nov 5 15:18:24 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 5 Nov 2017 21:18:24 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
References: <20171104173013.GA4005@bytereef.org>
 <20171105192006.GA19157@bytereef.org>
 <20171105200611.GA19697@bytereef.org>
Message-ID: <20171105211824.54e64b31@fsol>

On Sun, 5 Nov 2017 21:06:12 +0100
Stefan Krah wrote:
> On Sun, Nov 05, 2017 at 09:35:38PM +0200, Serhiy Storchaka wrote:
> > 05.11.17 21:20, Stefan Krah wrote:
> > >On Sun, Nov 05, 2017 at 09:01:40PM +0200, Serhiy Storchaka wrote:
> > >>Do you suggest making dictionary displays produce OrderedDict
> > >>instead of dict?
> > >
> > >No, this is essentially a language spec doc issue that would guarantee
> > >the ordering properties of the current dict implementation.
> >
> > Wouldn't it be enough to guarantee just the ordering of dicts before
> > first deletion? Or before first resizing (the maximal size of
> > dictionary displays is known at compile time, so they can be
> > presized)?
>
> Yes, for my use case that would be sufficient and that's what
> I had in mind initially.
>
> A luxury syntax addition like {a = 10, b = {c = "foo"}} that is read
> as an OrderedDict (where the keys a, b, c are implicitly strings) would
> of course also be sufficient for my use case.

Or you are ok with:

OD = OrderedDict
# ...
OD(a=10, b=OD(c="foo"))

Regards

Antoine.

From pganssle at gmail.com Sun Nov 5 15:10:29 2017
From: pganssle at gmail.com (Paul Ganssle)
Date: Sun, 5 Nov 2017 15:10:29 -0500
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
 <20171105193029.GA19471@bytereef.org>
Message-ID: <0f2c552f-8008-78b7-e276-03d04b8c1772@gmail.com>

I think the question of whether any specific implementation of dict could
be made faster for a given architecture or even that the trade-offs made by
CPython are generally the right ones is kinda beside the point. It's
certainly feasible that an implementation that does not preserve ordering
could be better for some implementation of Python, and the question is
really how much is gained by changing the language semantics in such a way
as to cut off that possibility.
On 11/05/2017 02:54 PM, Serhiy Storchaka wrote:
> 05.11.17 21:30, Stefan Krah wrote:
>> On Sun, Nov 05, 2017 at 09:09:37PM +0200, Serhiy Storchaka wrote:
>>> 05.11.17 20:39, Stefan Krah wrote:
>>>> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>>>>> 2. Someone invents a new arbitrary-ordered container that would improve on the memory and/or CPU performance of the current dict implementation
>>>>
>>>> I would think this is very unlikely, given that the previous dict implementation
>>>> has always been very fast. The new one is very fast, too.
>>>
>>> The modification of the current implementation that doesn't preserve
>>> the initial order after deletion would be more compact and faster.
>>
>> How much faster?
>
> I didn't try to implement this. But the current implementation requires periodic reallocation as items are added and removed. The following loop reallocates the dict every len(d) iterations, while the size of the dict is not changed, and half of its storage is empty.
>
> while True:
>     v = d.pop(k)
>     ...
>     d[k] = v
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From srkunze at mail.de Sun Nov 5 15:44:01 2017
From: srkunze at mail.de (Sven R. Kunze)
Date: Sun, 5 Nov 2017 21:44:01 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID: <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>

+1 from me too.

On 04.11.2017 21:55, Jim Baker wrote:
> +1, as Guido correctly recalls, this language guarantee will work well
> with Jython when we get to the point of implementing 3.7+.
>
> On Sat, Nov 4, 2017 at 12:35 PM, Guido van Rossum
> wrote:
>
> This sounds reasonable -- I think when we introduced this in 3.6
> we were worried that other implementations (e.g. Jython) would
> have a problem with this, but AFAIK they've reported back that
> they can do this just fine. So let's just document this as a
> language guarantee.
>
> On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah
> wrote:
>
>
> Hello,
>
> would it be possible to guarantee that dict literals are
> ordered in v3.7?
>
>
> The issue is well-known and the workarounds are tedious, example:
>
> https://mail.python.org/pipermail/python-ideas/2015-December/037423.html
>
>
> If the feature is guaranteed now, people can rely on it around
> v3.9.
> > > > Stefan Krah > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > > -- > --Guido van Rossum (python.org/~guido ) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/jbaker%40zyasoft.com > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From pludemann at google.com Sun Nov 5 15:50:12 2017 From: pludemann at google.com (Peter Ludemann) Date: Sun, 5 Nov 2017 12:50:12 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de> References: <20171104173013.GA4005@bytereef.org> <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de> Message-ID: Isn't ordered dict also useful for **kwargs? If it turns out that there's a dict implementation that's faster by not preserving order, collections.UnorderedDict could be added. There could also be specialized implementations that pre-size the dict (cf: C++ unordered_map::reserve), etc., etc. But these are all future things, which might not be necessary. On 5 November 2017 at 12:44, Sven R. Kunze wrote: > +1 from me too. > > On 04.11.2017 21:55, Jim Baker wrote: > > +1, as Guido correctly recalls, this language guarantee will work well > with Jython when we get to the point of implementing 3.7+. > > On Sat, Nov 4, 2017 at 12:35 PM, Guido van Rossum > wrote: > >> This sounds reasonable -- I think when we introduced this in 3.6 we were >> worried that other implementations (e.g. Jython) would have a problem with >> this, but AFAIK they've reported back that they can do this just fine. So >> let's just document this as a language guarantee. >> >> On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah wrote: >> >>> >>> Hello, >>> >>> would it be possible to guarantee that dict literals are ordered in v3.7? >>> >>> >>> The issue is well-known and the workarounds are tedious, example: >>> >>> https://mail.python.org/pipermail/python-ideas/2015-Decembe >>> r/037423.html >>> >>> >>> If the feature is guaranteed now, people can rely on it around v3.9. 
>>>
>>> Stefan Krah
>>>
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> https://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe: https://mail.python.org/mailma
>>> n/options/python-dev/guido%40python.org
>>>
>>
>>
>>
>> --
>> --Guido van Rossum (python.org/~guido )
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jbaker%
>> 40zyasoft.com
>>
>>
>
>
> _______________________________________________
> Python-Dev mailing listPython-Dev at python.orghttps://mail.python.org/mailman/listinfo/python-dev
>
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> pludemann%40google.com
>
>

From pganssle at gmail.com Sun Nov 5 16:49:26 2017
From: pganssle at gmail.com (Paul Ganssle)
Date: Sun, 5 Nov 2017 16:49:26 -0500
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>
Message-ID: <67642978-b5c6-a65f-d74d-6a0bcae32d46@gmail.com>

> If it turns out that there's a dict implementation that's faster by not
> preserving order, collections.UnorderedDict could be added.
> There could also be specialized implementations that pre-size the dict (cf:
> C++ unordered_map::reserve), etc., etc.
> But these are all future things, which might not be necessary.

I think that the problem with this is that for the most part, people will
use `dict`, and for most uses of `dict`, order doesn't matter (and it never
has before). Given that "arbitrary order" includes *any* fixed ordering
(insertion ordered, reverse insertion ordered, etc), the common case should
keep the existing "no order guarantee" specification. This gives
interpreter authors maximum freedom with a fundamental, widely used data
type.

> Isn't ordered dict also useful for **kwargs?

If this is useful (and it seems like it would be), I think again a syntax
modification that allows users to indicate that they want a particular
implementation of **kwargs would be better than modifying dict semantics.
It could possibly be handled with a type-hinting-like syntax:

def f(*args, **kwargs : OrderedKwargs):

Or a riff on the existing syntax:

def f(*args, ***kwargs):
def f(*args, ^^kwargs):
def f(*args, .**kwargs):

In this case, the only guarantee you'd need (which is relatively minor
compared to a change in the dict semantics) would be that the keyword
argument order passed to a function is preserved as the order in which it
is passed into the `kwargs` constructor. The old **kwargs syntax would give
you a `dict` as normal, and the new ^^kwargs would give you an OrderedDict
or some other dict subclass with guaranteed order.

On 11/05/2017 03:50 PM, Peter Ludemann via Python-Dev wrote:
>
>
> On 5 November 2017 at 12:44, Sven R. Kunze wrote:
>
>> >> On Sat, Nov 4, 2017 at 12:35 PM, Guido van Rossum >> wrote: >> >>> This sounds reasonable -- I think when we introduced this in 3.6 we were >>> worried that other implementations (e.g. Jython) would have a problem with >>> this, but AFAIK they've reported back that they can do this just fine. So >>> let's just document this as a language guarantee. >>> >>> On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah wrote: >>> >>>> >>>> Hello, >>>> >>>> would it be possible to guarantee that dict literals are ordered in v3.7? >>>> >>>> >>>> The issue is well-known and the workarounds are tedious, example: >>>> >>>> https://mail.python.org/pipermail/python-ideas/2015-Decembe >>>> r/037423.html >>>> >>>> >>>> If the feature is guaranteed now, people can rely on it around v3.9. >>>> >>>> >>>> >>>> Stefan Krah >>>> >>>> >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: https://mail.python.org/mailma >>>> n/options/python-dev/guido%40python.org >>>> >>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido ) >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jbaker% >>> 40zyasoft.com >>> >>> >> >> >> _______________________________________________ >> Python-Dev mailing listPython-Dev at python.orghttps://mail.python.org/mailman/listinfo/python-dev >> >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> pludemann%40google.com >> >> > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From timothy.c.delaney at gmail.com Sun Nov 5 17:26:11 2017 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Mon, 6 Nov 2017 09:26:11 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de> Message-ID: On 6 November 2017 at 07:50, Peter Ludemann via Python-Dev < python-dev at python.org> wrote: > Isn't ordered dict also useful for **kwargs? > **kwargs is already specified as insertion ordered as of Python 3.6. https://www.python.org/dev/peps/pep-0468/ Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed... URL: From timothy.c.delaney at gmail.com Sun Nov 5 17:31:28 2017 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Mon, 6 Nov 2017 09:31:28 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? 
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
Message-ID:

On 6 November 2017 at 06:09, Serhiy Storchaka wrote:

> 05.11.17 20:39, Stefan Krah wrote:
>
>> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>>
>>> 2. Someone invents a new arbitrary-ordered container that would improve
>>> on the memory and/or CPU performance of the current dict implementation
>>>
>>
>> I would think this is very unlikely, given that the previous dict
>> implementation
>> has always been very fast. The new one is very fast, too.
>>
>
> The modification of the current implementation that doesn't preserve the
> initial order after deletion would be more compact and faster.

I would personally be happy with this as the guarantee (it covers dict
literals and handles PEP 468), but it might be more confusing. "dicts are
in arbitrary order" and "dicts maintain insertion order" are fairly simple
to explain, "dicts maintain insertion order up to the point that a key is
deleted" is less so.

Tim Delaney
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From raymond.hettinger at gmail.com Sun Nov 5 18:12:49 2017
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sun, 5 Nov 2017 15:12:49 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To:
References:
Message-ID:

> On Nov 3, 2017, at 9:15 AM, Victor Stinner wrote:
>
> 2017-11-03 15:36 GMT+01:00 Guido van Rossum :
>> Maybe we should remove typing from the stdlib?
>> https://github.com/python/typing/issues/495
>
> I'm strongly in favor of such a move.
>
> My experience with asyncio in the stdlib is that users expect changes
> faster than the very slow release process of the stdlib (a release
> every 18 months on average).
>
> I saw many PEPs and discussion on the typing design (meta-classes vs
> regular classes), as if typing is not stable enough to be part of
> the stdlib.
>
> The typing module is not used yet in the stdlib, so there is no
> technical reason to keep typing part of the stdlib. IMHO it's
> perfectly fine to keep typing and annotations out of the stdlib, since
> the venv & pip tooling is now rock solid ;-)

I concur with Victor on every point. In particular, many of the good
reasons that typeshed is external to the standard library will also apply
to typing.py. It would also be nice to not have typing.py vary with each
version of CPython's release cycle. Not only would typing benefit from
more frequent updates, it would be nice to have updates that aren't tied
to a specific version of CPython -- that would help folks who have to
maintain code that works across multiple CPython versions (i.e. the same
benefit that we get by always installing the most up-to-date versions of
requests, typeshed, jinja2, etc).

Already, we've had updates to typing.py in the point releases of Python
because those updates were considered so useful and important.

Raymond

From steve.dower at python.org Sun Nov 5 18:32:58 2017
From: steve.dower at python.org (Steve Dower)
Date: Sun, 5 Nov 2017 15:32:58 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>
Message-ID:

Since there is voting going on, -1 on changing the guarantees of `dict`.
If ordering is important, OrderedDict is more explicit and more obvious to
the person reading the code, even if the underlying implementation is
identical.
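(An aside, to make the "order up to deletion" subtlety Tim describes above
concrete -- this is the behaviour observable on CPython 3.6, where a
deleted-and-re-added key moves to the end rather than reclaiming its old
position:)

    d = {'a': 1, 'b': 2, 'c': 3}
    del d['a']
    d['a'] = 1
    print(list(d))   # ['b', 'c', 'a'] on CPython 3.6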
Good luck teaching people about why Python went from OrderedDict to
UnorderedDict within a version. I remember learning about this same thing
in C++ and just thinking "wut?". And they just added a new type and said to
stop using the old one - not changing something that people already
understand.

Better to underspecify the default dict and offer explicitly named
extensions (DefaultDict, OrderedDict, etc.) for those who want more
guarantees.

Cheers,
Steve

Top-posted from my Windows phone

From: Peter Ludemann via Python-Dev
Sent: Sunday, November 5, 2017 12:53
To: Sven R. Kunze
Cc: Python-Dev
Subject: Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

Isn't ordered dict also useful for **kwargs?

If it turns out that there's a dict implementation that's faster by not
preserving order, collections.UnorderedDict could be added.
There could also be specialized implementations that pre-size the dict
(cf: C++ unordered_map::reserve), etc., etc.
But these are all future things, which might not be necessary.

On 5 November 2017 at 12:44, Sven R. Kunze wrote:
+1 from me too.

On 04.11.2017 21:55, Jim Baker wrote:
+1, as Guido correctly recalls, this language guarantee will work well
with Jython when we get to the point of implementing 3.7+.

On Sat, Nov 4, 2017 at 12:35 PM, Guido van Rossum wrote:
This sounds reasonable -- I think when we introduced this in 3.6 we were
worried that other implementations (e.g. Jython) would have a problem with
this, but AFAIK they've reported back that they can do this just fine. So
let's just document this as a language guarantee.

On Sat, Nov 4, 2017 at 10:30 AM, Stefan Krah wrote:

Hello,

would it be possible to guarantee that dict literals are ordered in v3.7?

The issue is well-known and the workarounds are tedious, example:

https://mail.python.org/pipermail/python-ideas/2015-December/037423.html

If the feature is guaranteed now, people can rely on it around v3.9.

Stefan Krah

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/jbaker%40zyasoft.com

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/pludemann%40google.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From raymond.hettinger at gmail.com Sun Nov 5 18:43:32 2017
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sun, 5 Nov 2017 15:43:32 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID:

> On Nov 4, 2017, at 7:04 PM, Nick Coghlan wrote:
>
> When I asked Damien George about this for MicroPython, he indicated
> that they'd have to choose between guaranteed order and O(1) lookups
> given their current dict implementation.
That surprised me a bit
> (since PyPy and CPython both *saved* memory by switching to their
> guaranteed order implementations, hence the name "compact dict
> representation"), but my (admittedly vague) understanding is that the
> presence of a space/speed trade-off in their case has something to do
> with MicroPython deliberately running with a much higher chance of
> hash collisions in general (since the data sets it deals with are
> naturally smaller).
>
> So if we make the change, MicroPython will likely go along with it,
> but it may mean that dict lookups there become O(N), and folks will be
> relying on "N" being consistently small due to memory constraints (but
> some typically O(N) algorithms will still become O(N^2) when run on
> MicroPython).
>
> I don't think that situation should change the decision, but I do
> think it would be helpful if folks that understand CPython's dict
> implementation could take a look at MicroPython's dict implementation
> and see if it might be possible for them to avoid having to make that
> trade-off and instead be able to use a naturally insertion ordered
> hashmap implementation.

I've just looked at the MicroPython dictionary implementation and think
they won't have a problem implementing O(1) compact dicts with ordering.

The likely reason for the confusion is that they already have an option
for an "ordered array" dict variant that does a brute-force linear search.
However, their normal hashed lookup is very similar to ours and is easily
amenable to being compact and ordered.

See: https://github.com/micropython/micropython/blob/77a48e8cd493c0b0e0ca2d2ad58a110a23c6a232/py/map.c#L139

Pretty much any hashed-lookup implementation of keys and values is
amenable to being compact and ordered. Whatever existing logic looks up an
entry becomes a lookup into a table of indices which in turn references a
sequential array of keys and values. This logic is independent of hashing
scheme or density, and it has no effect on the number of probes or
collision rate. The cost is an extra level of indirection and an extra
array of indices (typically very small). The benefit is faster iteration
over the smaller dense key/value array, overall memory savings resulting
in improved cache utilization, and the side-effect of remembering
insertion order.

Summary: I think MicroPython will be just fine and if needed I will help
create the patch that implements compact-and-ordered behavior.

Raymond

From njs at pobox.com Sun Nov 5 19:31:05 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 5 Nov 2017 16:31:05 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
 <20171105183931.GA18442@bytereef.org>
 <20171105193029.GA19471@bytereef.org>
 <0f2c552f-8008-78b7-e276-03d04b8c1772@gmail.com>
Message-ID:

On Nov 5, 2017 2:41 PM, "Paul Ganssle" wrote:

I think the question of whether any specific implementation of dict could
be made faster for a given architecture or even that the trade-offs made
by CPython are generally the right ones is kinda beside the point. It's
certainly feasible that an implementation that does not preserve ordering
could be better for some implementation of Python, and the question is
really how much is gained by changing the language semantics in such a way
as to cut off that possibility.

The language definition is not nothing, but I think it's easy to
overestimate its importance.
CPython does in practice provide ordering guarantees for dicts, and this solves a whole bunch of pain points: it makes json roundtripping work better, it gives ordered kwargs, it makes it possible for metaclasses to see the order class items were defined, etc. And we got all these goodies for better-than-free: the new dict is faster and uses less memory. So it seems very unlikely that CPython is going to revert this change in the foreseeable future, and that means people will write code that depends on this, and that means in practice reverting it will become impossible due to backcompat and it will be important for other interpreters to implement, regardless of what the language definition says. That said, there are real benefits to putting this in the spec. Given that we're not going to get rid of it, we might as well reward the minority of programmers who are conscientious about following the spec by letting them use it too. And there were multiple PEPs that went away when this was merged; no one wants to resurrect them just for hypothetical future implementations that may never exist. And putting it in the spec will mean that we can stop having this argument over and over with the same points rehashed for those who missed the last one. (This isn't aimed at you or anything; it's not your fault you don't know all these arguments off the top of your head, because how would you? But it is a reality of mailing list dynamics that rehashing this kind of thing sucks up energy without producing much.) MicroPython deviates from the language spec in lots of ways. Hopefully this won't need to be another one, but it won't be the end of the world if it is. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Nov 5 20:46:48 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 11:46:48 +1000 Subject: [Python-Dev] [python-committers] Reminder: 12 weeks to 3.7 feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> Message-ID: On 6 November 2017 at 02:02, Yury Selivanov wrote: > On Fri, Nov 3, 2017 at 11:30 PM, Nick Coghlan wrote: >> The current lack of DeprecationWarnings in 3.6 is a fairly major >> oversight/bug, though: > > There's no oversight. We had PendingDeprecationWarning for > async/await names in 3.5, and DeprecationWarning in 3.6. You just > need to enable warnings to see them: Gah, seven years on from Python 2.7's release, I still get caught by that. I'm tempted to propose we reverse that decision and go back to enabling them by default :P If app devs don't want their users seeing deprecation warnings, they can silence them globally during app startup, and end users can do the same in PYTHONSTARTUP for their interactive sessions. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Nov 5 21:05:07 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 12:05:07 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default Message-ID: On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both mistakenly though the async/await deprecation warnings were missing from 3.6. They weren't missing, we'd just both forgotten those warnings were off by default (7 years after the change to the default settings in 2.7 & 3.2). 
So my proposal is simple (and not really new): let's revert back to the way things were in 2.6 and earlier, with DeprecationWarning being visible by default, and app devs having to silence it explicitly during application startup (before they start importing third party modules) if they don't want their users seeing it when running on the latest Python version (e.g. this would be suitable for open source apps that get integrated into Linux distros and use the system Python there). This will also restore the previously clear semantic and behavioural different between PendingDeprecationWarning (hidden by default) and DeprecationWarning (visible by default). As part of this though, I'd suggest amending the documentation for DeprecationWarning [1] to specifically cover how to turn it off programmatically (`warnings.simplefilter("ignore", DeprecationWarning)`), at the command line (`python -W ignore::DeprecationWarning ...`), and via the environment (`PYTHONWARNINGS=ignore::DeprecationWarning`). (Structurally, I'd probably put that at the end of the warnings listing as a short introduction to warnings management, with links out to the relevant sections of the documentation, and just use DeprecationWarning as the specific example) Cheers, Nick. [1] https://docs.python.org/3/library/exceptions.html#DeprecationWarning -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Nov 5 21:18:24 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 12:18:24 +1000 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> Message-ID: On 4 November 2017 at 02:46, Eric V. Smith wrote: > On 11/3/2017 12:15 PM, Victor Stinner wrote: >> >> Hi, >> >> 2017-11-03 15:36 GMT+01:00 Guido van Rossum : >>> >>> Maybe we should remove typing from the stdlib? >>> https://github.com/python/typing/issues/495 > > >> The typing module is not used yet in the stdlib, so there is no >> technically reason to keep typing part of the stdlib. IMHO it's >> perfectly fine to keep typing and annotations out of the stdlib, since >> the venv & pip tooling is now rock solid ;-) > > > I'm planning on using it for PEP 557: > https://www.python.org/dev/peps/pep-0557/#class-variables > > The way the code currently checks for this should still work if typing is > not in the stdlib, although of course it's assuming that the name "typing" > really is the "official" typing library. > > # If typing has not been imported, then it's impossible for > # any annotation to be a ClassVar. So, only look for ClassVar > # if typing has been imported. > typing = sys.modules.get('typing') > if typing is not None: > # This test uses a typing internal class, but it's the best > # way to test if this is a ClassVar. > if type(a_type) is typing._ClassVar: > # This field is a ClassVar. Ignore it. > continue > > See also https://github.com/ericvsmith/dataclasses/issues/14 That particular dependency could also be avoided by defining an "is_class_var(annotation)" generic function and a "ClassVar" helper object in the dataclasses module. 
For example:

    class _ClassVar:
        def __init__(self, annotation):
            self.annotation = annotation

    class _MakeClassVar:
        def __getitem__(self, key):
            return _ClassVar(key)

    ClassVar = _MakeClassVar()

    @functools.singledispatch
    def is_class_var(annotation):
        return isinstance(annotation, _ClassVar)

It would put the burden on static analysers and the typing module to understand that `dataclasses.ClassVar` meant the same thing conceptually as `typing.ClassVar`, but I think that's OK.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov.ml at gmail.com  Sun Nov 5 21:20:12 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sun, 5 Nov 2017 21:20:12 -0500
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: References:
Message-ID:

Big +1 from me. The whole point of DeprecationWarnings is to be visible so that they are resolved faster. The current behaviour allows them to go unnoticed for the majority of Python users.

Yury

On Sun, Nov 5, 2017 at 9:05 PM, Nick Coghlan wrote:
> On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both
> mistakenly thought the async/await deprecation warnings were missing
> from 3.6.
>
> They weren't missing, we'd just both forgotten those warnings were off
> by default (7 years after the change to the default settings in 2.7 &
> 3.2).
>
> So my proposal is simple (and not really new): let's revert back to
> the way things were in 2.6 and earlier, with DeprecationWarning being
> visible by default, and app devs having to silence it explicitly
> during application startup (before they start importing third party
> modules) if they don't want their users seeing it when running on the
> latest Python version (e.g. this would be suitable for open source
> apps that get integrated into Linux distros and use the system Python
> there).
>
> This will also restore the previously clear semantic and behavioural
> difference between PendingDeprecationWarning (hidden by default) and
> DeprecationWarning (visible by default).
>
> As part of this though, I'd suggest amending the documentation for
> DeprecationWarning [1] to specifically cover how to turn it off
> programmatically (`warnings.simplefilter("ignore",
> DeprecationWarning)`), at the command line (`python -W
> ignore::DeprecationWarning ...`), and via the environment
> (`PYTHONWARNINGS=ignore::DeprecationWarning`).
>
> (Structurally, I'd probably put that at the end of the warnings
> listing as a short introduction to warnings management, with links out
> to the relevant sections of the documentation, and just use
> DeprecationWarning as the specific example)
>
> Cheers,
> Nick.
>
> [1] https://docs.python.org/3/library/exceptions.html#DeprecationWarning
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com

From phd at phdru.name  Sun Nov 5 21:29:23 2017
From: phd at phdru.name (Oleg Broytman)
Date: Mon, 6 Nov 2017 03:29:23 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: References:
Message-ID: <20171106022923.GA26858@phdru.name>

On Sun, Nov 05, 2017 at 09:20:12PM -0500, Yury Selivanov wrote:
> Big +1 from me.
The whole point of DeprecationWarnings is to be > visible The whole point of DeprecationWarnings is to be visible to developers while in reality they will be visible to users -- and what the users would do with the warnings? > Yury Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From raymond.hettinger at gmail.com Sun Nov 5 21:44:20 2017 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 5 Nov 2017 18:44:20 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org> <20171105183931.GA18442@bytereef.org> <20171105193029.GA19471@bytereef.org> <0f2c552f-8008-78b7-e276-03d04b8c1772@gmail.com> Message-ID: > On Nov 5, 2017, at 4:31 PM, Nathaniel Smith wrote: > > CPython does in practice provide ordering guarantees for dicts, and this solves a whole bunch of pain points: it makes json roundtripping work better, it gives ordered kwargs, it makes it possible for metaclasses to see the order class items were defined, etc. And we got all these goodies for better-than-free: the new dict is faster and uses less memory. So it seems very unlikely that CPython is going to revert this change in the foreseeable future, and that means people will write code that depends on this, and that means in practice reverting it will become impossible due to backcompat and it will be important for other interpreters to implement, regardless of what the language definition says. > > That said, there are real benefits to putting this in the spec. Given that we're not going to get rid of it, we might as well reward the minority of programmers who are conscientious about following the spec by letting them use it too. Thanks. Your note resonated with me -- the crux of your argument seems to be that the proposal results in a net reduction in complexity for both users and implementers. That makes sense. Even having read all the PEPs, read all the patches, and having participated in the discussions, I tend to forget where ordering is guaranteed and where it isn't. This discussion reminds me of when Timsort was introduced many years ago. Sort stability wasn't guaranteed at first, but it was so darned convenient (and a pain to work around when not present) that it became guaranteed in the following release. The current proposal is different in many ways, but does share the virtue of being a nice-to-have for users. > MicroPython deviates from the language spec in lots of ways. Hopefully this won't need to be another one, but it won't be the end of the world if it is. I've looked at the MicroPython source and think this won't be a problem. It will be even easier for them than it was for us (the code is simpler because it doesn't have special cases for key-sharing, unicode optimizations, and whatnot). Raymond From songofacandy at gmail.com Sun Nov 5 22:01:48 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Mon, 6 Nov 2017 12:01:48 +0900 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org> <20171105183931.GA18442@bytereef.org> <20171105193029.GA19471@bytereef.org> Message-ID: On Mon, Nov 6, 2017 at 4:54 AM, Serhiy Storchaka wrote: ... > > I didn't try to implement this. But the current implementation requires > periodical reallocating if add and remove items. 
> The following loop
> reallocates the dict every len(d) iterations, while the size of the
> dict is not changed, and half of its storage is empty.
>
>     while True:
>         v = d.pop(k)
>         ...
>         d[k] = v
>

FYI, Raymond's original compact dict (moving the last item into the slot used by a deleted item) will break OrderedDict, so it's not as easy to implement as it looks.

OrderedDict uses a linked list to keep track of which slot is used for each key. Moving the last item will break it. It means odict.__delitem__ can't use PyDict_DelItem anymore, and OrderedDict would have to touch the internal structure of dict.

I think the current OrderedDict implementation is a fragile form of loose coupling. While the two objects live in different files (dictobject.c and odictobject.c), OrderedDict depends heavily on dict's internal behavior. It prevents optimizing dict. See the comment here:

https://github.com/python/cpython/blob/a5293b4ff2c1b5446947b4986f98ecf5d52432d4/Objects/dictobject.c#L1082

I don't have a strong opinion about what we should do about dict and OrderedDict. But I feel PyPy's approach (using the same implementation and just overriding __eq__ and adding a move_to_end() method) is the simplest.

Regards,
INADA Naoki

From ncoghlan at gmail.com  Sun Nov 5 22:07:51 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 13:07:51 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>
Message-ID:

On 6 November 2017 at 06:50, Peter Ludemann via Python-Dev wrote:
> Isn't ordered dict also useful for **kwargs?

3.6 already made this semantic change for **kwargs and class execution namespaces - it just left the door open to having implementations meet those requirements by way of substituting in collections.OrderedDict for those use cases, rather than using a regular builtin dictionary. So we're also already imposing the requirement on conformant Python implementations to have a reasonably performant insertion-ordered mapping implementation available.

The open questions around insertion ordering thus relate to what user expectations should be for:

- dictionary displays
- explicit invocations of the builtin dict constructor
- mutating methods on builtin dicts
- serialisation and deserialisation operations that use regular dicts (e.g. the JSON module, csv.DictReader/Writer)

It's that last category which is particularly problematic now, as in *C*Python 3.6, the dicts used for these operations actually *are* insertion ordered, so round trips from disk, through a CPython builtin dict, and then back to disk *will* be order preserving, whereas in previous versions there was no guarantee that that would be the case (and hash randomisation meant the key reordering wasn't even deterministic).

While such operations were already order preserving in PyPy (since their switch to an insertion-ordered builtin dict implementation served as prior art for the subsequent switch in CPython), we know from experience that it's incredibly easy for even things that are nominally implementation details of CPython to become expected behaviour for Python users (e.g. there's still plenty of code out there that relies on the use of an eager refcounting memory management strategy to handle external resources properly).
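To make the round-trip point concrete, a small sketch (assuming CPython 3.6's insertion-ordered builtin dict):

    import json

    text = '{"b": 1, "a": 2, "c": 3}'
    data = json.loads(text)          # deserialises into a plain builtin dict
    print(json.dumps(data) == text)  # True on CPython 3.6; not guaranteed on 3.5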
That means our choices for 3.7 boil down to:

* make this a language level guarantee that Python devs can reasonably rely on
* deliberately perturb dict iteration in CPython the same way the default Go runtime does [1]

When we did the "insertion ordered hash map" availability review, the main implementations we were checking on behalf of were Jython & VOC (JVM implementations), Batavia (JavaScript implementation), and MicroPython (C implementation). Adding IronPython (C# implementation) to the mix gives:

* for the JVM, the insertion ordered LinkedHashMap [2] has been available since J2SE 1.4 was released in 2002
* for JavaScript, ECMA 6 defines the Map type [3] as an insertion ordered key/value store
* for the .NET CLR, System.Collections.Specialized.OrderedDictionary [4] seems to be the best builtin option, but the Java implementations also map reasonably well to C# semantics
* for MicroPython, it's not yet clear whether or not the hash map design used in CPython could also be adapted to their design constraints (i.e. giving both insertion ordering and O(1) lookup without requiring excessive amounts of memory), but it should at least be feasible to make the semantics configurable based on a compile time option (since CPython has a working C implementation of the desired semantics already)

Since the round-trip behaviour that comes from guaranteed order preservation is genuinely useful, and we're comfortable with folks switching to more specialised containers when they need different performance characteristics from what the builtins provide, elevating insertion order preservation to a language level requirement makes sense.

Cheers,
Nick.

[1] https://blog.golang.org/go-maps-in-action (search for "Iteration Order")
[2] https://docs.oracle.com/javase/8/docs/api/java/util/LinkedHashMap.html
[3] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map
[4] https://docs.microsoft.com/en-us/dotnet/api/system.collections.specialized.ordereddictionary?view=netframework-4.7.1

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sun Nov 5 22:13:16 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 13:13:16 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: References: <20171104173013.GA4005@bytereef.org>
Message-ID:

On 6 November 2017 at 09:43, Raymond Hettinger wrote:
> On Nov 4, 2017, at 7:04 PM, Nick Coghlan wrote:
>> I don't think that situation should change the decision, but I do
>> think it would be helpful if folks that understand CPython's dict
>> implementation could take a look at MicroPython's dict implementation
>> and see if it might be possible for them to avoid having to make that
>> trade-off and instead be able to use a naturally insertion ordered
>> hashmap implementation.
>
> I've just looked at the MicroPython dictionary implementation and think they won't have a problem implementing O(1) compact dicts with ordering.
>
> The likely reason for the confusion is that they already have an option for an "ordered array" dict variant that does a brute-force linear search. However, their normal hashed lookup is very similar to ours and is easily amenable to being compact and ordered.
>
> See: https://github.com/micropython/micropython/blob/77a48e8cd493c0b0e0ca2d2ad58a110a23c6a232/py/map.c#L139
>
> Pretty much any implementation of hashed lookup of keys and values is amenable
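(A rough sketch of the index-table arrangement Raymond goes on to describe, with invented values, added here for illustration - it is not CPython's actual code:)

    # The hash probe sequence is unchanged, but the hash table now stores
    # small integers that index into a dense, insertion-ordered entry array.
    indices = [None, 0, None, 2, None, 1, None, None]   # sparse index table
    entries = [("a", hash("a"), 1),                     # dense key/hash/value
               ("b", hash("b"), 2),                     # records, kept in
               ("c", hash("c"), 3)]                     # insertion order
    # Iteration just walks entries[], so memory use shrinks and insertion
    # order falls out for free.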
Whatever existing logic that looks up an entry becomes a lookup into a table of indices which in turn references a sequential array of keys and values. This logic is independent of hashing scheme or density, and it has no effect on the number of probes or collision rate. > > The cost is an extra level of indirection and an extra array of indices (typically very small). The benefit is faster iteration over the smaller dense key/value array, overall memory savings resulting in improved cache utilization, and the side-effect of remembering insertion order. > > Summary: I think MicroPython will be just fine and if needed I will help create the patch that implements compact-and-ordered behavior. Nice! That's what I thought based on reading some of the design discussions about CPython's dict implementation, but I didn't know the details of either dict implementation well enough to be confident that the CPython changes would map cleanly to MicroPython's variant. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lukasz at langa.pl Sun Nov 5 22:32:30 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Sun, 5 Nov 2017 19:32:30 -0800 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <20171106022923.GA26858@phdru.name> References: <20171106022923.GA26858@phdru.name> Message-ID: > On 5 Nov, 2017, at 6:29 PM, Oleg Broytman wrote: > > On Sun, Nov 05, 2017 at 09:20:12PM -0500, Yury Selivanov wrote: >> Big +1 from me. The whole point of DeprecationWarnings is to be >> visible > > The whole point of DeprecationWarnings is to be visible to > developers while in reality they will be visible to users -- and what > the users would do with the warnings? Complain to the authors and make them remove the issue. https://github.com/requests/requests/issues/3954 https://github.com/scikit-learn-contrib/sklearn-pandas/issues/76 https://github.com/pandas-dev/pandas/issues/5824 https://github.com/pypa/setuptools/issues/472 +1 to re-enable this from me, too. At Facebook we are running unit tests and development builds with warnings. Just be aware that for some applications that will spew a lot. Python 3.6's warnings on invalid escapes is a big source of this. - ? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ncoghlan at gmail.com Sun Nov 5 22:38:59 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 13:38:59 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <20171106022923.GA26858@phdru.name> References: <20171106022923.GA26858@phdru.name> Message-ID: On 6 November 2017 at 12:29, Oleg Broytman wrote: > On Sun, Nov 05, 2017 at 09:20:12PM -0500, Yury Selivanov wrote: >> Big +1 from me. The whole point of DeprecationWarnings is to be >> visible > > The whole point of DeprecationWarnings is to be visible to > developers while in reality they will be visible to users -- and what > the users would do with the warnings? 
Hence the proposed documentation change: the responsibility for silencing these warnings (for both their own code and for their dependencies) should rest with *application* developers, with our responsibility as providers of the underlying platform then being to make it completely obvious how to actually do that (it's currently really unclear, with the relevant info being scattered across the list of builtin warnings, different parts of the warnings module and CPython command line usage documentation, with no explicit examples of exactly what you need to write anywhere). To put that another way: - if we ever write "import foo" ourselves, then we're a Python developer, and it's our responsibility to work out how to manage DeprecationWarning when it gets raised by either our own code, or the libraries and frameworks that we use - if we ship Python code in a "supply your own runtime" model, such that we have actual non-developer users and operators (rather than solely fellow Python developers) to worry about, then it's still our responsibility to decide whether or not we want to let deprecation warnings appear on stderr (based on what we think is most appropriate for our particular user base) - if we want to categorically ensure our users don't get unexpected deprecation warnings on stderr, then we should be bundling a Python runtime as well (e.g. via PyInstaller or a Linux container image, or by operating a network service), rather than asking users and operators to handle the runtime+application integration step We've been running the current experiment for 7 years, and the main observable outcome has been folks getting surprised by breaking changes in CPython releases, especially folks that primarily use Python interactively (e.g. for data analysis), or as a scripting engine (e.g. for systems administration). That means the status quo is defeating the main purpose of DeprecationWarnings (providing hard-to-miss advance notice of upcoming breaking changes in the language definition and standard library), for the sake of letting app developers duck responsibility for managing what their own software writes to stderr. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lukasz at langa.pl Sun Nov 5 22:41:33 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Sun, 5 Nov 2017 19:41:33 -0800 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: > On 4 Nov, 2017, at 3:39 AM, Paul Moore wrote: > > Lukasz Langa said: >> So, the difference is in perceived usability. It's psychological. > > Please, let's not start the "not in the stdlib isn't an issue" debate > again. If I concede it's a psychological issue, will you concede that > the fact that it's psychological doesn't mean that it's not a real, > difficult to solve, problem for some people? I'm also willing to > concede that it's a *minority* problem, if that helps. But can we stop > dismissing it as a non-existent problem? Paul, if you read the words I wrote in my e-mail verbatim, you will note that I am not saying it's not real or it's not important. Quite the opposite. Can you elaborate what made you think that my assertion that the issue is psychological made you think I'm being dismissive? To me it looks like you're aggressively agreeing with me, so I'd like to understand what caused your reaction. - ? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL:

From timothy.c.delaney at gmail.com  Sun Nov 5 22:51:33 2017
From: timothy.c.delaney at gmail.com (Tim Delaney)
Date: Mon, 6 Nov 2017 14:51:33 +1100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: References:
Message-ID:

On 6 November 2017 at 13:05, Nick Coghlan wrote:
> As part of this though, I'd suggest amending the documentation for
> DeprecationWarning [1] to specifically cover how to turn it off
> programmatically (`warnings.simplefilter("ignore",
> DeprecationWarning)`), at the command line (`python -W
> ignore::DeprecationWarning ...`), and via the environment
> (`PYTHONWARNINGS=ignore::DeprecationWarning`).

I'm wondering if it would be sensible to recommend only disabling the warnings if running with a known version of Python, e.g.

    if sys.version_info < (3, 8):
        with warnings.catch_warnings():
            warnings.simplefilter('ignore', DeprecationWarning)
            import module

The idea here is to prompt the developer to refactor to not use the deprecated functionality early enough that users aren't impacted.

Tim Delaney
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From barry at python.org  Sun Nov 5 23:14:59 2017
From: barry at python.org (Barry Warsaw)
Date: Sun, 5 Nov 2017 20:14:59 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: References:
Message-ID:

On Nov 5, 2017, at 18:05, Nick Coghlan wrote:

> So my proposal is simple (and not really new): let's revert back to
> the way things were in 2.6 and earlier, with DeprecationWarning being
> visible by default

+1

> As part of this though, I'd suggest amending the documentation for
> DeprecationWarning [1] to specifically cover how to turn it off
> programmatically (`warnings.simplefilter("ignore",
> DeprecationWarning)`), at the command line (`python -W
> ignore::DeprecationWarning ...`), and via the environment
> (`PYTHONWARNINGS=ignore::DeprecationWarning`).

+1

I'd also consider adding convenient shortcuts for each of these. I think DeprecationWarning is special enough to warrant it. Possibly:

    warnings.silence_deprecations()
    python -X silence-deprecations
    PYTHONSILENCEDEPRECATIONS=x

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL:

From lukasz at langa.pl  Sun Nov 5 23:18:07 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Sun, 5 Nov 2017 20:18:07 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID: <8436CF8D-562C-44A6-87C2-B48E5B551BE8@langa.pl>

> On 4 Nov, 2017, at 11:43 AM, Peter Ludemann via Python-Dev wrote:
>
> If type annotations are treated like implicit lambdas, then that's a first step to something similar to Lisp's "special forms". A full generalization of that would allow, for example, logging.debug to not evaluate its args unless debugging is turned on (I use a logging.debug wrapper that allows lambdas as args, and evaluates them if debugging is turned on).

Interestingly enough, at Facebook we found out that using f-strings is *faster* at runtime than the lazy form of logging.log("format with %s and %d", arg1, arg2), including for cases when the log message is not emitted.

- Ł
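(One way to sanity-check that claim locally - an illustrative micro-benchmark, not Facebook's actual measurement:)

    import logging
    import timeit

    logging.basicConfig(level=logging.INFO)  # DEBUG messages below are dropped
    log = logging.getLogger(__name__)
    arg1, arg2 = "spam", 42

    lazy = lambda: log.debug("format with %s and %d", arg1, arg2)
    eager = lambda: log.debug(f"format with {arg1} and {arg2}")

    print("lazy %-style :", timeit.timeit(lazy, number=100_000))
    print("eager f-string:", timeit.timeit(eager, number=100_000))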
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From lukasz at langa.pl Sun Nov 5 23:40:00 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Sun, 5 Nov 2017 20:40:00 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: > On 4 Nov, 2017, at 6:32 PM, Nick Coghlan wrote: > > The PEP's current attitude towards this is "Yes, it will break, but > that's OK, because it doesn't matter for the type annotation use case, > since static analysers will still understand it". Adopting such a > cavalier approach towards backwards compatibility with behaviour that > has been supported since Python 3.0 *isn't OK*, since it would mean we > were taking the step from "type annotations are the primary use case" > to "Other use cases for function annotations are no longer supported". Well, this is what the PEP literally says in "Deprecation policy": > In Python 4.0 this will become the default behavior. Use of annotations incompatible with this PEP is no longer supported. The rationale here is that type annotations as defined by PEP 484 and others is the only notable use case. Note that "type annotations" includes things like data classes, auto_attribs in attrs, the dependency injection frameworks mentioned before, etc. Those are compatible with PEP 484. So, despite the open nature of annotations since Python 3.0, no alternative use case emerged that requires eager evaluation and access to local state. PEP 563 is addressing the pragmatic issue of improving usability of type annotations, instead of worrying about some unknown theoretically possible use case. While function annotations were open to arbitrary use, typing was increasingly hinted (pun not intended) as *the* use case for them: 1. From Day 1, type checking is listed as the number one intended use case in PEP 3107 (and most others listed there are essentially type annotations by any other name). 2. PEP 484 says "We do hope that type hints will eventually become the sole use for annotations", and that "In order for maximal compatibility with offline type checking it may eventually be a good idea to change interfaces that rely on annotations to switch to a different mechanism, for example a decorator." 3. Variable annotations in PEP 526 were designed with type annotations as the sole stated purpose. PEP 563 simply brings this multi-PEP dance to its logical conclusion, stating in "Rationale and Goals" that "uses for annotations incompatible with the aforementioned PEPs should be considered deprecated." The timeline for full deprecation is Python 4.0. > The only workaround I can see for that breakage is that instead of > using strings, we could instead define a new "thunk" type that > consists of two things: > > 1. A code object to be run with eval() > 2. A dictionary mapping from variable names to closure cells (or None > for not yet resolved references to globals and builtins) This is intriguing. 1. Would that only be used for type annotations? Any other interesting things we could do with them? 2. It feels to me like that would make annotations *heavier* at runtime instead of leaner, since now we're forcing the relevant closures to stay in memory. 3. 
This form of lazy evaluation seems pretty implicit to me for the reader. Peter Ludemann's example of magic logging.debug() is a case in point here.

All in all, unless somebody else is ready to step up and write the PEP on this subject (and its implementation) right now, I think this idea will miss Python 3.7.

> Now, even without the introduction of the IndirectAttributeCell
> concept, this is amenable to a pretty simple workaround:
>
>     A = Optional[int]
>     class C:
>         field: A = 1
>         def method(self, arg: A) -> None: ...
>     C.A = A
>     del A

This is a poor workaround, worse in fact than using a string literal as a forward reference. It is more verbose and error-prone. Decorators address the same kind of construct, and their wild popularity suggests that this notation is inferior.

> But I genuinely can't see how breaking annotation evaluation at class
> scope can be seen as a deal-breaker for the implicit lambda based
> approach without breaking annotation evaluation for nested functions
> also being seen as a deal-breaker for the string based approach.

The main reason to use type annotations is readability, just like decorators. While there's nothing stopping the programmer from writing:

    class C:
        ...
        def method(self, arg1):
            ...
        method.__annotations__ = {'arg1': str, 'return': int}
    C.__annotations__ = {'attribute1': ...}

...this notation doesn't fit the bill. Since nested classes and types embedded as class attributes are popular among type hinting users, supporting this case is a no-brainer.

On the other hand, if you have a factory function that generates some class or function, then you either:

1. Use annotations in the generated class/function for type checking; OR
2. Add annotations in the generated class/function for them to be preserved in __annotations__ for some future runtime use.

In the former case, you are unlikely to use local state. But even if you were, that doesn't matter, since the static type checker doesn't resolve your annotation at runtime. In the latter case, you are free to assign __annotations__ directly, since clearly readability of the factory code isn't your goal, but rather the functionality that runtime annotations provide.

I was pretty careful surveying existing use cases, looking for things that would be made impossible by PEP 563. The backwards incompatibility it introduces requires source changes, but I couldn't find situations where there would be irrecoverable functionality loss.

- Ł
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL:

From ncoghlan at gmail.com  Sun Nov 5 23:47:10 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 14:47:10 +1000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: References:
Message-ID:

On 6 November 2017 at 14:14, Barry Warsaw wrote:
> On Nov 5, 2017, at 18:05, Nick Coghlan wrote:
>
>> So my proposal is simple (and not really new): let's revert back to
>> the way things were in 2.6 and earlier, with DeprecationWarning being
>> visible by default
>
> +1
>
>> As part of this though, I'd suggest amending the documentation for
>> DeprecationWarning [1] to specifically cover how to turn it off
>> programmatically (`warnings.simplefilter("ignore",
>> DeprecationWarning)`), at the command line (`python -W
>> ignore::DeprecationWarning ...`), and via the environment
>> (`PYTHONWARNINGS=ignore::DeprecationWarning`).
>
> +1
>
> I'd also consider adding convenient shortcuts for each of these. I think DeprecationWarning is special enough to warrant it. Possibly:
>
>     warnings.silence_deprecations()
>     python -X silence-deprecations
>     PYTHONSILENCEDEPRECATIONS=x

It could be interesting to combine this with Tim's suggestion of putting an upper version limit on the silencing, so the above may look like:

    warnings.ignore_deprecations((3, 7))
    python -X ignore-deprecations=3.7
    PYTHONIGNOREDEPRECATIONS=3.7

(Using "ignore" to match the existing action name so the intent is a bit more self-explanatory)

The ignore_deprecations function would then look like:

    def ignore_deprecations(max_version):
        """Ignore DeprecationWarning on Python versions up to & including the given one

        *max_version* is an iterable suitable for ordered comparison
        against sys.version_info
        """
        if sys.version_info <= max_version:
            warnings.simplefilter('ignore', DeprecationWarning)

So the conventional usage would be that if you were regularly updating your application, by the time Python 3.8 actually existed, the check would have been bumped to say 3.8. But if you stopped updating (or the publisher stopped releasing updates), you'd eventually start getting deprecation warnings again as the underlying Python version changed.

I could definitely see that working well for the community Linux distro use case, where there isn't necessarily anyone closely monitoring old packages to ensure they're actively tracking upstream releases (and attempting to institute more ruthless pruning processes can lead to potentially undesirable community dynamics)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Mon Nov 6 00:55:12 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 15:55:12 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info>
Message-ID:

On 6 November 2017 at 14:40, Lukasz Langa wrote:
> On 4 Nov, 2017, at 6:32 PM, Nick Coghlan wrote:
>> The only workaround I can see for that breakage is that instead of
>> using strings, we could instead define a new "thunk" type that
>> consists of two things:
>>
>> 1. A code object to be run with eval()
>> 2. A dictionary mapping from variable names to closure cells (or None
>> for not yet resolved references to globals and builtins)
>
> This is intriguing.
>
> 1. Would that only be used for type annotations?
Any other interesting > things we could do with them? Yes, they'd have the potential to replace strings for at least some data analysis use cases, where passing in lambdas is too awkward syntactically, since you have to spell out all the parameters. The pandas.DataFrame.query operation is a reasonable example of that kind of thing: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html (Not an exact example, since Pandas uses a python-like expression language, rather than specifically Python) Right now, folks tend to use strings for this kind of use case, which has the same performance problem that pre-f-string string formatting does: it defers the expression parsing and compilation step until runtime, rather than being able to do it once and then cache the result in __pycache__. > 2. It feels to me like that would make annotations *heavier* at runtime > instead of leaner, since now we're forcing the relevant closures to stay in > memory. Cells are pretty cheap (they're just a couple of pointers), and if they're references to module or class attributes, the object referenced by the cell would have remained alive regardless. Even for nonlocal variable references (which a solely string-based approach would disallow), the referenced objects will already be getting kept alive anyway by way of the typing machinery. > 3. This form of lazy evaluation seems pretty implicit to me for the reader. > Peter Ludemann's example of magic logging.debug() is a case in point here. One of the biggest advantages though is that just like functions, all of the necessary logic for doing the delayed evaluation can be captured in a __call__ method, rather than via elaborate instructions on how to appropriately invoke eval() based on knowledge of where the annotation came from. This is especially important if typing gets taken out of the standard library: you'll need a replacement for typing.get_type_hints() in PEP 563, and a thunk.__call__() method would be a good spelling for that. > All in all, unless somebody else is ready to step up and write the PEP on > this subject (and its implementation) right now, I think this idea will miss > Python 3.7. As long as we don't argue for that being an adequate excuse to rush into "We're using plain strings with ill-defined name resolution semantics because we couldn't be bothered coming up with a proper thunk-based design to evaluate", I'd be fine with that. None of this is urgent, and it's mainly of interest to large organisations that will see a direct economic benefit from implementing it, so the entire idea can easily be delayed to 3.8 if they're not prepared to fund a proper evaluation of the available design options over the next 3 months. Python's name resolution rules are already ridiculously complicated, and PEP 563 is proposing to make them *even worse*, purely for the sake of an optional feature primarily of interest to large enterprise users. If delayed evaluation of type annotations is deemed important enough to burden every future Pythonista with learning a second set of name resolution semantics purely for type annotations, then it's important enough to postpone implementing it until someone invests the time in coming up with a competing thunk-based alternative that is able to rely entirely on the *existing* name resolution semantics. Exploring that potential thunk-based approach a bit further: 1. We'd continue to eagerly compile annotations (as we do today), but treat them like a nested class body with a single expression. 
Unlike an implicit lambda, this compilation mode will allow the resulting code object to be used with the two-argument form of the exec builtin
2. That code object would be the main item stored on the thunk object
3. If __classcell__ is defined in the current namespace and names from the current namespace are referenced, then that can be captured on the thunk, giving its __call__ method access to any class attributes needed for name resolution
4. Closure references would be captured automatically, but class bodies already allow locals to override nonlocals (for compatibility with pre-populated namespaces returned from __prepare__)
5. A thunk's __globals__ reference would be implicitly captured the same way it is for a regular function

That's enough to leave nested classes as the main problematic case, since they can't see each other's attributes by default, and the proposed injected locals semantics in PEP 563 don't get this right either (they only account for MRO-based name resolution, not lexical nesting, even though the PEP claims the latter is expected to work).

To illustrate the problem:

```
>>> class C:
...     field = 1
...     class D:
...         field2 = field
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in C
  File "<stdin>", line 4, in D
NameError: name 'field' is not defined
```

The __class__ ref used for zero-arg super support doesn't currently solve this problem, as right now it only extends a single level - the inner class definition hides the outer one from method implementations (and deliberately so).

There are two main ways of resolving this, with the simplest being to say that type annotations still need to be resolvable using normal closure semantics. That is, the nested class example in the PEP would be changed as follows:

```
# C is defined at module or function scope, not inside another class
class C:
    field = 'c_field'
    def method(self, arg: field) -> None:  # this is OK
        ...
    def method2(self, arg: C.field) -> None:  # this is OK
        ...
    class D:
        field2 = 'd_field'
        def method(self, arg: C.field) -> C.D.field2:  # this is OK
            ...
        def method2(self, arg: C.field) -> field2:  # this is OK
            ...
        def method3(self, arg: field) -> field2:  # this fails (can't find 'field')
            ...
        def method4(self, arg: C.field) -> D.field2:  # this fails (can't find 'D')
            ...
```

This means the compiler needs to be involved at least enough to capture references to classes that aren't defined at the top level of a module. If you *don't* use existing closure semantics to solve it, then you'd instead need to either update the compiler to capture a stack of __class__ references, or else reverse engineer something based on __qualname__. However, the latter approach wouldn't work for classes defined inside a function (since there's no navigation path from the module namespace back down to the individual classes - you *need* a cell reference in that case).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From turnbull.stephen.fw at u.tsukuba.ac.jp  Mon Nov 6 01:00:50 2017
From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull)
Date: Mon, 6 Nov 2017 15:00:50 +0900
Subject: [Python-Dev] [python-committers] Reminder: 12 weeks to 3.7 feature code cutoff
In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
Message-ID: <23039.64146.228004.550064@turnbull.sk.tsukuba.ac.jp>

-committers and some individuals dropped from address list.

Nick Coghlan writes:

> Gah, seven years on from Python 2.7's release, I still get caught by
> that.
I'm tempted to propose we reverse that decision and go back to > enabling them by default :P > > If app devs don't want their users seeing deprecation warnings, they > can silence them globally during app startup, and end users can do the > same in PYTHONSTARTUP for their interactive sessions. This point was debated then, and there were good reasons why a lot of users can't/won't do this. The two I remember are (1) a lot of non-technical users use apps that aren't getting upgraded, and so will always emit those warnings, which often scare or confuse them, and (2) doing it in PYTHONSTARTUP is indeed global, and the kind of people who use interactive sessions typically *want* to see those warnings, but only some of them and only sometimes. FWIW YMMV Steve From ncoghlan at gmail.com Mon Nov 6 01:10:14 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 16:10:14 +1000 Subject: [Python-Dev] [python-committers] Reminder: 12 weeks to 3.7 feature code cutoff In-Reply-To: <23039.64146.228004.550064@turnbull.sk.tsukuba.ac.jp> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <23039.64146.228004.550064@turnbull.sk.tsukuba.ac.jp> Message-ID: On 6 November 2017 at 16:00, Stephen J. Turnbull wrote: > -committers and some individuals dropped from address list. > > Nick Coghlan writes: > > > Gah, seven years on from Python 2.7's release, I still get caught by > > that. I'm tempted to propose we reverse that decision and go back to > > enabling them by default :P > > > > If app devs don't want their users seeing deprecation warnings, they > > can silence them globally during app startup, and end users can do the > > same in PYTHONSTARTUP for their interactive sessions. > > This point was debated then, and there were good reasons why a lot of > users can't/won't do this. The two I remember are (1) a lot of > non-technical users use apps that aren't getting upgraded, and so will > always emit those warnings, which often scare or confuse them, and If folks get scared away from running unmaintained software, that's a good thing, not a bad thing. > (2) > doing it in PYTHONSTARTUP is indeed global, and the kind of people who > use interactive sessions typically *want* to see those warnings, but > only some of them and only sometimes. Then that's a good motivation to learn how to manage which deprecation warnings they actually see. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Nov 6 01:26:26 2017 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Mon, 6 Nov 2017 15:26:26 +0900 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: <20171106022923.GA26858@phdru.name> Message-ID: <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp> Nick Coghlan writes: > Hence the proposed documentation change: the responsibility for > silencing these warnings (for both their own code and for their > dependencies) should rest with *application* developers, How do you propose to handle users with legacy apps that they can't or their organization won't or they don't wanna upgrade? As I understand it, their only option would be something global, which they may not want to do. > We've been running the current experiment for 7 years, and the main > observable outcome Well, yeah. You can't observe something that doesn't happen, period. Bottom line: this is NOT a simple proposal, because it inherently deals in counterfactual reasoning. 
Steve -- Associate Professor Division of Policy and Planning Science http://turnbull/sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From lukasz at langa.pl Mon Nov 6 01:36:08 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Sun, 5 Nov 2017 22:36:08 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> Message-ID: <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> > On 5 Nov, 2017, at 9:55 PM, Nick Coghlan wrote: > > Python's name resolution rules are already ridiculously complicated, > and PEP 563 is proposing to make them *even worse*, purely for the > sake of an optional feature primarily of interest to large enterprise > users. Solving forward references in type annotations is one of the two explicit goals of the PEP. That alone changes how name resolution works. It sounds like you're -1 on that idea alone? > That's enough to leave nested classes as the main problematic case, > since they can't see each other's attributes by default, and the > proposed injected locals semantics in PEP 563 don't get this right > either (they only account for MRO-based name resolution, not lexical > nesting, even though the PEP claims the latter is expected to work) You're right! I went too far here. I meant to be as compatible as possible with how regular attribute access works for nested classes. Originally I didn't want to support any locals at all for simplicity. I was convinced this is important (documented in "Rejected ideas") but overdid the example. Since your example that illustrates the problem shows that those things fail for regular attribute lookup, too, that can be simply fixed in the PEP. I did that here: https://github.com/ambv/static-annotations/blob/master/pep-0563.rst#backwards-compatibility Note that I am still including the "def method(self) -> D.field2:" example as valid as including a class' own name in the locals() chain map provided by get_type_hints() is going to be trivial. In fact, I think I'll add this to get_type_hints() independently of this PEP since users of string literal forward references probably don't expect it to fail for nested classes. - ? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ncoghlan at gmail.com Mon Nov 6 01:38:46 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 16:38:46 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp> References: <20171106022923.GA26858@phdru.name> <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp> Message-ID: On 6 November 2017 at 16:26, Stephen J. Turnbull wrote: > Nick Coghlan writes: > > > Hence the proposed documentation change: the responsibility for > > silencing these warnings (for both their own code and for their > > dependencies) should rest with *application* developers, > > How do you propose to handle users with legacy apps that they can't or > their organization won't or they don't wanna upgrade? As I understand > it, their only option would be something global, which they may not > want to do. 
Put "PYTHONWARNINGS=ignore::DeprecationWarning" before whatever command is giving them the warnings. Even on Windows, you can put that in a batch file with the actual command you want to run and silence the warnings that way. This is the same philosophy we applied in PEP 493 to provide a smoother transition to HTTPS verification (via selective application of PYTHONHTTPSVERIFY=0) > > We've been running the current experiment for 7 years, and the main > > observable outcome > > Well, yeah. You can't observe something that doesn't happen, period. > > Bottom line: this is NOT a simple proposal, because it inherently > deals in counterfactual reasoning. No, it doesn't, as we've tried both approaches now: warning by default for the ~20 years leading up to Python 2.7, and not warning by default for the ~7 years since. Prior to 2.7, the complaints we received mainly related to app developers wanting to pass responsibility for *their* UX problems to us, and ditto for poorly maintained institutional infrastructure. So we've tried both ways now, and the status quo has led to *us* failing to provide adequate advance notice of breaking changes to *our* direct users. That's a far more important consideration for CPython's default behaviour than the secondary impact on users of applications that happen to be written in Python that are paying sufficient attention to stderr to be scared by DeprecationWarnings, but not paying sufficient attention to learn what those DeprecationWarnings actually mean. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From storchaka at gmail.com Mon Nov 6 02:08:38 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 6 Nov 2017 09:08:38 +0200 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: 06.11.17 04:05, Nick Coghlan ????: > On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both > mistakenly though the async/await deprecation warnings were missing > from 3.6. > > They weren't missing, we'd just both forgotten those warnings were off > by default (7 years after the change to the default settings in 2.7 & > 3.2). Following issues on GitHub related to new Python releases I have found that many projects try to fix deprecation warning, but there are projects that are surprised by ending of deprecation periods and removing features. > So my proposal is simple (and not really new): let's revert back to > the way things were in 2.6 and earlier, with DeprecationWarning being > visible by default, and app devs having to silence it explicitly > during application startup (before they start importing third party > modules) if they don't want their users seeing it when running on the > latest Python version (e.g. this would be suitable for open source > apps that get integrated into Linux distros and use the system Python > there). > > This will also restore the previously clear semantic and behavioural > different between PendingDeprecationWarning (hidden by default) and > DeprecationWarning (visible by default). There was a proposition to make DeprecationWarning visible by default in debug builds and in interactive interpreter. What if first implement this idea in 3.7 and make DeprecationWarning visible by default in production scripts only in 3.8? This will make less breakage. 
From guido at python.org Mon Nov 6 02:09:02 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 5 Nov 2017 23:09:02 -0800 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: <20171106022923.GA26858@phdru.name> <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp> Message-ID: I still find this unfriendly to users of Python scripts and small apps who are not the developers of those scripts. (Large apps tend to spit out so much logging it doesn't really matter.) Isn't there a better heuristic we can come up with so that the warnings tend to be on for developers but off for end users? -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Nov 6 02:28:14 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 17:28:14 +1000 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> Message-ID: On 6 November 2017 at 16:36, Lukasz Langa wrote: > > On 5 Nov, 2017, at 9:55 PM, Nick Coghlan wrote: > > Python's name resolution rules are already ridiculously complicated, > and PEP 563 is proposing to make them *even worse*, purely for the > sake of an optional feature primarily of interest to large enterprise > users. > > > Solving forward references in type annotations is one of the two explicit > goals of the PEP. That alone changes how name resolution works. It sounds > like you're -1 on that idea alone? That's just lazy evaluation, and no different from the way name resolution works for globals and builtins in functions, and for globals, builtins, and closure references in classes. So while I do expect that's going to be confusing, it's also entirely independent of how the lazy evaluation is implemented, and I think the basic goal of deferring annotation evaluation is a good one. [snip] > Since your example that illustrates the problem shows that those things fail > for regular attribute lookup, too, that can be simply fixed in the PEP. I > did that here: > https://github.com/ambv/static-annotations/blob/master/pep-0563.rst#backwards-compatibility Unfortunately, it isn't quite that trivial to fix. To see the remaining problem, take your nested class example from the PEP and move it inside a function. Today, that makes no difference, since "C" will transparently switch from being accessed via LOAD_NAME to instead being accessed with LOAD_CLASSDEREF. By contrast, without the ability to access the outer "C" class definition via a closure reference, you'll no longer have a generally applicable way to evaluate *any* of the annotations that reference it, since you won't have access to C from the module namespace any more. I've been persuaded that a nested *function* isn't the right answer for type annotations (since it doesn't let you tinker with locals() prior to execution, which in turn means you can't readily allow access to any class attributes at execution time), but that still leaves the more exec-friendly logic of class body compilation available. 
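To make that failure mode concrete (an illustrative sketch, not taken from the original message):

    # With string annotations, nothing keeps a closure reference to C
    # alive, so the annotation can't be evaluated once make_class returns.
    import typing

    def make_class():
        class C:
            def clone(self) -> 'C': ...
        return C

    cls = make_class()
    typing.get_type_hints(cls.clone)  # NameError: name 'C' is not defined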
The difference between this and class body creation is that instead of passing the compiled code object to MAKE_FUNCTION, and then passing the resulting function to __build_class__, we'd instead introduce a new MAKE_THUNK opcode that implemented __call__ differently from the way regular functions implement it.

That way, the compiler changes would be limited to:

- compile annotations like a small nested class body (but returning the expression result, rather than None)
- emit MAKE_THUNK instead of the expression's opcodes
- emit STORE_ANNOTATION as usual

From a name resolution perspective, the new things folks would need to learn are:

- annotations are now lazily evaluated (just like functions)
- but name resolution works the same way it does in class bodies (unlike lambda expressions)

To allow typing.get_type_hints() to provide access to attributes defined in the class, you'd need one final piece of the puzzle:

- rather than accepting regular function arguments (since thunks won't have parameter lists), thunk.__call__ would instead accept an optional pre-populated locals() namespace to use

With those changes, blindly calling annotations would usually just work - the one case that couldn't be handled that way would be annotations that implicitly accessed class level attributes, which would require passing in "vars(class)" when calling the thunk.

Cheers,
Nick.

P.S. Back when I made the implicit scope change for list comprehensions, I tried all sorts of potential tweaks to the compiler's name resolution logic before finally giving up and deciding that using a real nested function was the only way to make sure I avoided introducing any weird new edge cases. Lexically nested closures are generally great, but they make it *really* hard to emulate Python's name resolution logic without direct assistance from the compiler at the point where the name reference appears.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com  Mon Nov 6 02:44:12 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 6 Nov 2017 09:44:12 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org> <20171105183931.GA18442@bytereef.org> <20171105193029.GA19471@bytereef.org>
Message-ID:

06.11.17 05:01, INADA Naoki writes:
> FYI, Raymond's original compact dict (moving the last item into the
> slot used by a deleted item) will break OrderedDict, so it's not as
> easy to implement as it looks.
>
> OrderedDict uses a linked list to keep track of which slot is used for
> each key. Moving the last item will break it.
> It means odict.__delitem__ can't use PyDict_DelItem anymore, and
> OrderedDict would have to touch the internal structure of dict.

All of this is implementation details. We had little time to coordinate changes to the dict and OrderedDict implementations before releasing 3.6, and left the existing coupling in place. This also prevents in-place compaction of dict items. We could add a private flag to dict that denotes whether the dict is ordered or not. The ordinary dict could then be more compact, while OrderedDict stays ordered. And I like your issue31265.

The current dict implementation is still young. It gained several optimizations in 3.7 (thanks to you, Inada), and AFAIK there are still unmerged patches. I would wait until it becomes more stable before making changes to the language specification.

> I think the current OrderedDict implementation is a fragile form of
> loose coupling.
> While the two objects live in different files (dictobject.c and
> odictobject.c), OrderedDict depends heavily on dict's internal
> behavior. It prevents optimizing dict. See the comment here.
>
> https://github.com/python/cpython/blob/a5293b4ff2c1b5446947b4986f98ecf5d52432d4/Objects/dictobject.c#L1082
>
> I don't have a strong opinion about what we should do about dict and
> OrderedDict. But I feel PyPy's approach (using the same implementation
> and just overriding __eq__ and adding a move_to_end() method) is the
> simplest.

I think that the coupling between dict and OrderedDict should be made
more robust, but kept private. The dict implementation should be aware
of OrderedDict and provide some private methods for use in OrderedDict,
but this should not restrict the optimization of unordered dicts.

From storchaka at gmail.com Mon Nov 6 02:47:01 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 6 Nov 2017 09:47:01 +0200
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name>
 <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

06.11.17 09:09, Guido van Rossum wrote:
> I still find this unfriendly to users of Python scripts and small apps
> who are not the developers of those scripts. (Large apps tend to spit
> out so much logging it doesn't really matter.)
>
> Isn't there a better heuristic we can come up with so that the warnings
> tend to be on for developers but off for end users?

There was a proposal to make deprecation warnings visible by default in
debug builds and the interactive interpreter.

From ncoghlan at gmail.com Mon Nov 6 03:18:55 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 18:18:55 +1000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name>
 <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

On 6 November 2017 at 17:09, Guido van Rossum wrote:
> I still find this unfriendly to users of Python scripts and small apps who
> are not the developers of those scripts.

At a distro level, these warnings being off by default has actually
turned out to be a problem, as it's precisely those users of Python
scripts and small apps running in the system Python that don't find out
they're at risk of a future distro upgrade breaking their tools until
they hit the release where they actually break.

They then go to the developers of either the distro or those tools
saying "Everything is broken, now what do I do?", rather than less
urgently asking "Hey, what's up with this weird warning I'm getting
now?".

So compared to that current experience of "My distro upgrade broke my
stuff", getting back to the occasional "After my distro upgrade, a bunch
of my stuff is now emitting messages I don't understand on stderr"
sounds like a positive improvement to me :)

> Isn't there a better heuristic we can come up with so that the warnings tend
> to be on for developers but off for end users?

That depends on where you're drawing the line between "developer" and
"end user". Someone working on a new version of Django, for example,
would probably qualify as an end user from our perspective, while they'd
be a framework developer from the point of view of someone building a
website.
If we're talking about "Doesn't even know what Python is" end users, then the most reliable way for devs to ensure they never see a deprecation warning is to bundle Python with the application, instead of expecting end users to handle the task of integrating the two together. If we're talking about library and frameworks developers, then the only reliable way to distinguish between deprecation warnings that are encountered because a dependency has a future compatibility problem and those that are internal to the application is to use module filtering: warnings.filterwarnings("ignore", category=DeprecationWarning) warnings.filterwarnings("once", category=DeprecationWarning, module="myproject.*") warnings.filterwarnings("once", category=DeprecationWarning, module="__main__.*") This model allows folks to more selectively opt-in to getting warnings from their direct dependencies, while ignoring warnings from further down their dependency stack. As things currently stand though, there's little inherent incentive for new Python users to start learning how to do any of this - instead, the default behaviour for the last several years has been "Breaking API changes just happen sometimes without any prior warning", and you have to go find out how to say "Please tell me when breaking changes are coming" (and remember to opt in to that every time you start Python) before you get any prior notification. I do like Barry's suggestion of introducing a gentler API specifically for filtering deprecations warnings, though, as I find the warnings filtering system to be a bit like logging, in that it's sufficiently powerful and flexible that getting started with it can be genuinely confusing and intimidating. In relation to that, the "warn" module README at https://pypi.python.org/pypi/warn/ provides some additional examples of how it can currently be difficult to craft a good definition of which deprecation warnings someone actually wants to see. Cheers, Nick. P.S. That README also points out another problem with the status quo: DeprecationWarning still gets silenced by default when encountered in third party modules as well, meaning that also shows up as an abrupt compatibility break for anyone that didn't already know they needed to opt-in to get deprecation warnings. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Mon Nov 6 03:25:11 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 6 Nov 2017 02:25:11 -0600 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: <20171106022923.GA26858@phdru.name> Message-ID: On Sun, Nov 5, 2017 at 9:38 PM, Nick Coghlan wrote: > We've been running the current experiment for 7 years, and the main > observable outcome has been folks getting surprised by breaking > changes in CPython releases, especially folks that primarily use > Python interactively (e.g. for data analysis), or as a scripting > engine (e.g. for systems administration). It's also caused lots of projects to switch to using their own ad hoc warning types for deprecations, e.g. 
off the top of my head:

https://github.com/matplotlib/matplotlib/blob/6c51037864f9a4ca816b68ede78207f7ecec656c/lib/matplotlib/cbook/deprecation.py#L5
https://github.com/numpy/numpy/blob/d75b86c0c49f7eb3ec60564c2e23b3ff237082a2/numpy/_globals.py#L45
https://github.com/python-trio/trio/blob/f50aa8e00c29c7f2953b7bad38afc620772dca74/trio/_deprecate.py#L16

So in some ways the change has actually made it *harder* for end-user
applications/scripts to hide all deprecation warnings, because for each
package you use you have to somehow figure out which idiosyncratic type
it uses, and filter them each separately.

(In any changes though please do keep in mind that Python itself is not
the only one issuing deprecation warnings. I'm thinking in particular of
the filter-based-on-Python-version idea. Maybe you could have subclasses
like Py35DeprecationWarning and filter on those?)

-n

--
Nathaniel J. Smith -- https://vorpus.org

From victor.stinner at gmail.com Mon Nov 6 03:28:15 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 6 Nov 2017 09:28:15 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name>
 <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

2017-11-06 8:47 GMT+01:00 Serhiy Storchaka :
> 06.11.17 09:09, Guido van Rossum wrote:
>>
>> I still find this unfriendly to users of Python scripts and small apps who
>> are not the developers of those scripts. (Large apps tend to spit out so
>> much logging it doesn't really matter.)
>>
>> Isn't there a better heuristic we can come up with so that the warnings
>> tend to be on for developers but off for end users?
>
> There was a proposal to make deprecation warnings visible by default in
> debug builds and the interactive interpreter.

The problem is that outside CPython core developers, I expect that
almost nobody runs a Python compiled in debug mode. We should provide
debug features in the release build. For example, in Python 3.6, I added
debug hooks on memory allocation in release mode using
PYTHONMALLOC=debug. These hooks were already enabled by default in debug
mode.

Moreover, applications are not developed nor tested in the REPL.

Last year, I proposed a global "developer mode". The idea is to provide
the same experience as a Python debug build, but on a Python release
build:

python3 -X dev script.py
or
PYTHONDEV=1 python3 script.py
behaves as
PYTHONMALLOC=debug python3 -Wd -b -X faulthandler script.py

* Show DeprecationWarning and ResourceWarning warnings: python -Wd
* Show BytesWarning warning: python -b
* Enable Python assertions (assert) and set __debug__ to True: remove
(or just ignore) -O or -OO command line arguments
* faulthandler to get a Python traceback on segfault and fatal errors:
python -X faulthandler
* Debug hooks on Python memory allocators: PYTHONMALLOC=debug

If you don't follow CPython development, it's hard to be aware of "new"
options like -X faulthandler (Python 3.3) or PYTHONMALLOC=debug (Python
3.6). And it's easy to forget an option like -b.

Maybe we even need a -X dev=strict which would be stricter:

* use -Werror instead of -Wd: raise an exception when a warning is
emitted
* use -bb instead of -b: get BytesWarning exceptions
* Replace the "inconsistent use of tabs and spaces in indentation"
warning with an error in the Python parser
* etc.
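A rough approximation of that strict variant is already expressible with
the pieces that exist today (the tabs/spaces parser error is the
exception - it has no dedicated option):

    PYTHONMALLOC=debug python3 -X faulthandler -W error -bb script.py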
https://mail.python.org/pipermail/python-ideas/2016-March/039314.html

Victor

From victor.stinner at gmail.com Mon Nov 6 03:46:18 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 6 Nov 2017 09:46:18 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
Message-ID: 

2017-11-05 18:50 GMT+01:00 Guido van Rossum :
> I don't see this as a reason to not put this into the language spec at 3.7.

It can prevent some kinds of optimizations. Dictionaries are used
"everywhere" in Python, so they are very important for performance.

I would prefer to only keep the ordering guarantee for function keyword
arguments and class members, and to explicitly use an ordered dictionary
when needed.

Sorry, I don't have any example right now of a concrete optimization
which would not be possible with an ordered dictionary. But Serhiy
mentioned the performance impact of preserving order on deletion in the
Python 3.6 dictionary.

Victor

From ncoghlan at gmail.com Mon Nov 6 04:04:37 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 19:04:37 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <1684AADB-0D45-4C2B-A30F-189A11B5F356@python.org>
Message-ID: 

On 6 November 2017 at 18:46, Victor Stinner wrote:
> 2017-11-05 18:50 GMT+01:00 Guido van Rossum :
>> I don't see this as a reason to not put this into the language spec at 3.7.
>
> It can prevent some kinds of optimizations. Dictionaries are used
> "everywhere" in Python, so they are very important for performance.
>
> I would prefer to only keep the ordering guarantee for function keyword
> arguments and class members, and to explicitly use an ordered dictionary
> when needed.
>
> Sorry, I don't have any example right now of a concrete optimization
> which would not be possible with an ordered dictionary. But Serhiy
> mentioned the performance impact of preserving order on deletion in the
> Python 3.6 dictionary.

Note that I *don't* think we should mandate that regular dictionaries be
synonymous with collections.OrderedDict - I think it's fine to say that
regular dicts are insertion ordered *until* a key is deleted, and after
that their ordering is arbitrary.

Insertion-ordered-until-the-first-delete is sufficient to guarantee
serialisation round trips, dict display ordering, and keyword order
preservation in the dict constructor (it's not sufficient to guarantee
ordering for class namespaces, as those may contain "del" statements,
but class bodies are also permitted to use a non-default mapping type).

However, if we decide *not* to require that dictionaries be insertion
ordered, then I think we should do the same thing Go did, and have dict
iterators start at a random offset in the item sequence (that's not
actually my idea - Greg Smith suggested it at the core dev sprints).
Otherwise we'll end up standardising the 3.6 behaviour by default, as
folks come to rely on it, and decline to support Python implementations
that don't provide insertion ordering semantics.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From steve at pearwood.info Mon Nov 6 04:14:02 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 6 Nov 2017 20:14:02 +1100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>
Message-ID: <20171106091400.GD15990@ando.pearwood.info>

On Mon, Nov 06, 2017 at 01:07:51PM +1000, Nick Coghlan wrote:

> That means our choices for 3.7 boil down to:
>
> * make this a language level guarantee that Python devs can reasonably rely on
> * deliberately perturb dict iteration in CPython the same way the
> default Go runtime does [1]

I agree with this choice. My preference is for the first: having dicts
be unordered has never been a positive virtue in itself, but always the
cost we paid for fast O(1) access. Now that we have fast O(1) access
*without* dicts being unordered, we should make it a language guarantee.

Provided of course that we can be reasonably certain that other
implementations can do the same. And it looks like we can.

But if we wanted to still keep our options open, how about weakening the
requirement that globals() and object __dicts__ be specifically the same
type as the builtin dict? That way if we discover a super-fast and
compact dict implementation (maybe one that allows only string keys?)
that is unordered, we can use it for object namespaces without affecting
the builtin dict.

> When we did the "insertion ordered hash map" availability review, the
> main implementations we were checking on behalf of were Jython & VOC
> (JVM implementations), Batavia (JavaScript implementation), and
> MicroPython (C implementation). Adding IronPython (C# implementation)
> to the mix gives:

Shouldn't we check with Nuitka (C++) and Cython as well? I'd be
surprised if this is a problem for either of them, but we should ask.

> Since the round-trip behaviour that comes from guaranteed order
> preservation is genuinely useful, and we're comfortable with folks
> switching to more specialised containers when they need different
> performance characteristics from what the builtins provide, elevating
> insertion order preservation to a language level requirement makes
> sense.

+1

OrderedDict could then become a thin wrapper around regular dicts.

--
Steve

From p.f.moore at gmail.com Mon Nov 6 04:35:10 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 6 Nov 2017 09:35:10 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: 
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org>
Message-ID: 

On 6 November 2017 at 03:41, Lukasz Langa wrote:
>
>> On 4 Nov, 2017, at 3:39 AM, Paul Moore wrote:
>>
>> Lukasz Langa said:
>>> So, the difference is in perceived usability. It's psychological.
>>
>> Please, let's not start the "not in the stdlib isn't an issue" debate
>> again. If I concede it's a psychological issue, will you concede that
>> the fact that it's psychological doesn't mean that it's not a real,
>> difficult to solve, problem for some people? I'm also willing to
>> concede that it's a *minority* problem, if that helps. But can we stop
>> dismissing it as a non-existent problem?
>
> Paul, if you read the words I wrote in my e-mail verbatim, you will note
> that I am not saying it's not real or it's not important. Quite the
> opposite. Can you elaborate on why my assertion that the issue is
> psychological made you think I'm being dismissive? To me it looks like
> you're aggressively agreeing with me, so I'd like to understand what
> caused your reaction.

Apologies. On rereading your email, I can see how you meant that.
However, it didn't come across that way to me on first reading.
There have been a few other threads on various lists I've been involved
in recently where it's been really hard to get anyone to see that there
are any practical issues for people if a module is on PyPI rather than
being in the stdlib. So I'm afraid I'd got pretty tired of arguing the
same points over and over again, and over-reacted to what I thought you
said.

To explain my actual point a little more clearly:

1. Without typing available, some programs using type annotations won't
run. That is, using type annotations (a test-time/development-time
feature) introduces a runtime dependency on typing, and hence introduces
an extra deployment concern (unlike other development-time features like
test frameworks).

2. For some people, if something isn't in the Python standard library
(technically, in a standard install), it's not available (without
significant effort, or possibly not at all). For those people, a runtime
dependency on a non-stdlib typing module means "no use of type
annotations allowed".

3. Virtual environments typically only include the stdlib, and "use
system site-packages" affects more than just a single module, so
bundling still has issues for virtualenvs - and in the packaging
tutorials, we're actively encouraging people to use virtualenvs. We
(python-dev) can work around this issue for venv by auto-installing
typing, but that still leaves virtualenv (which is critically short of
resources, and needs to support older versions of Python, so a major
change like bundling typing is going to be a struggle to get
implemented).

None of these problems are really addressed by simply saying "pip
install typing". That's not for psychological reasons; there are genuine
barriers to having that work.

However, it's not at all clear how many people are affected by these
issues. My personal feeling is that the group of people participating in
open source development is biased towards environments where it's not a
problem, but that's more gut feeling than hard facts.

The place where barriers are perceived/psychological is when we try to
discuss workarounds or solutions - because there's a lot of guesswork
about what people's environments are like, there's a tendency to assume
scenarios that support our preferred solution, rather than those that
don't. Personally, I like to think that my arguments are "giving a voice
to people in constrained corporate environments like mine, who are
under-represented in open source environments". But it's very easy for a
view like that to turn into "banging on about a minor problem as if it
were a crisis", and I'm probably guilty of doing that. Worse still, that
results in me getting frustrated and then over-reacting further - which
is where we came in.

So again, apologies. I misunderstood what you were saying, but more as a
result of my personal biases than because you weren't sufficiently
clear.

Paul

From pmiscml at gmail.com Mon Nov 6 05:18:17 2017
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Mon, 6 Nov 2017 12:18:17 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
Message-ID: <20171106121817.4c3c2367@x230>

Hello,

On Sun, 5 Nov 2017 12:04:41 +1000
Nick Coghlan wrote:

> On 5 November 2017 at 04:35, Guido van Rossum wrote:
> > This sounds reasonable -- I think when we introduced this in 3.6 we
> > were worried that other implementations (e.g. Jython) would have a
> > problem with this, but AFAIK they've reported back that they can do
> > this just fine. So let's just document this as a language
> > guarantee.
>
> When I asked Damien George about this for MicroPython, he indicated
> that they'd have to choose between guaranteed order and O(1) lookups
> given their current dict implementation. That surprised me a bit

MicroPython's hashmap implementation is effectively O(n) (average and
worst case) due to the algorithm parameters chosen (like the load factor
of 1). Of course, the parameters could be tweaked, but the ones chosen
are so because memory usage is far more important for MicroPython
systems than performance characteristics (all due to small amounts of
memory). Like, MicroPython was twice as fast as Python 3.3 on average,
and 1000 times more efficient in memory usage.

> (since PyPy and CPython both *saved* memory by switching to their
> guaranteed order implementations, hence the name "compact dict

There's nothing to save in MicroPython's dict implementation, simply
because it doesn't waste anything in the first place - uPy's dict entry
is just (key, val) (2 machine words), and the load factor of the dict
can reach 1.0 as mentioned.

[]

> I don't think that situation should change the decision,

Indeed, it shouldn't. What may change it is the simple and obvious fact
that there's no need to change anything, as proven by the 25-year
history of the language.

What happens now borders on technological surrealism - CPython, after
many years of persuasion, switched its dict algorithm, rather
inefficient in terms of memory, to something else, less inefficient
(still quite inefficient, taking "no overhead" as the baseline). That
algorithm happened to have another property. Now there's a seemingly
serious talk of letting that property leak into the *language spec*,
despite the fact that there can be an unlimited number of dictionary
algorithms, most of them not having that property.

What it will lead to is further fragmentation of the community. The
Python 2 vs Python 3 split is far from over, and now there are splits
between:

* people who use "yield from" vs "await"
* people who use f-strings vs who don't
* people who rely on the ordered nature of dicts vs who don't

etc.

[]

> P.S. If anyone does want to explore MicroPython's dict implementation,
> and see if there might be an alternate implementation strategy that
> offers both O(1) lookup and guaranteed ordering without using
> additional memory

That would be the first programmer in history to have their cake and eat
it too. Memory efficiency, runtime efficiency, ordering: choose 2 of 3.

--
Best regards,
 Paul                          mailto:pmiscml at gmail.com

From ncoghlan at gmail.com Mon Nov 6 05:20:03 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 20:20:03 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171106091400.GD15990@ando.pearwood.info>
References: <20171104173013.GA4005@bytereef.org>
 <65f224a0-ee4d-fe00-a648-8210bb1c9022@mail.de>
 <20171106091400.GD15990@ando.pearwood.info>
Message-ID: 

On 6 November 2017 at 19:14, Steven D'Aprano wrote:
> On Mon, Nov 06, 2017 at 01:07:51PM +1000, Nick Coghlan wrote:
>> When we did the "insertion ordered hash map" availability review, the
>> main implementations we were checking on behalf of were Jython & VOC
>> (JVM implementations), Batavia (JavaScript implementation), and
>> MicroPython (C implementation). Adding IronPython (C# implementation)
>> to the mix gives:
>
> Shouldn't we check with Nuitka (C++) and Cython as well?
If you're using dicts as dicts, Cython just uses the CPython ones via
the C API, rather than defining its own.

I didn't do any specific research for C++ (since it's essentially the
king of fast low level data structure design, hence its popularity in
graphics programming), but did see a few references to C++ ordered hash
map implementations while looking into the available options for other
runtimes.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From p.f.moore at gmail.com Mon Nov 6 05:21:54 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 6 Nov 2017 10:21:54 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name>
 <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

On 6 November 2017 at 03:38, Nick Coghlan wrote:
> - if we ever write "import foo" ourselves, then we're a Python
> developer, and it's our responsibility to work out how to manage
> DeprecationWarning when it gets raised by either our own code, or the
> libraries and frameworks that we use

As someone who was bitten by this when deprecation warnings were
displayed by default, what's the process for suppressing deprecation
warnings in modules that I import (and hence have no control over)
*without* also suppressing them for my code (where I do want to fix
them, so that my users don't have a problem)?

That's the complicated bit that needs to be in the docs - more so than a
simple pointer to how to suppress the warning altogether.

On 6 November 2017 at 06:38, Nick Coghlan wrote:
> Put "PYTHONWARNINGS=ignore::DeprecationWarning" before whatever
> command is giving them the warnings.
>
> Even on Windows, you can put that in a batch file with the actual
> command you want to run and silence the warnings that way.

Batch files do not behave the same in Windows as standard executables.
Having to wrap a "normal application" (for example, a script wrapper
installed via "pip install package") in a bat file is (a) messy for
inexperienced users, and (b) likely to cause weird errors (for example
nesting bat files is broken, so you can't use a "wrapped" command
transparently in another bat file without silent errors).

Paul

From ncoghlan at gmail.com Mon Nov 6 05:35:29 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 20:35:29 +1000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name>
 <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

On 6 November 2017 at 20:21, Paul Moore wrote:
> On 6 November 2017 at 03:38, Nick Coghlan wrote:
>> - if we ever write "import foo" ourselves, then we're a Python
>> developer, and it's our responsibility to work out how to manage
>> DeprecationWarning when it gets raised by either our own code, or the
>> libraries and frameworks that we use
>
> As someone who was bitten by this when deprecation warnings were
> displayed by default, what's the process for suppressing deprecation
> warnings in modules that I import (and hence have no control over)
> *without* also suppressing them for my code (where I do want to fix
> them, so that my users don't have a problem)?
>
> That's the complicated bit that needs to be in the docs - more so than
> a simple pointer to how to suppress the warning altogether.

For "top level" deprecation warnings in the libraries you use (i.e.
those where the specific API you're calling from your code is either the one that calls warnings.warn, or else it adjusts the stack level argument so it acts that way), the warnings configuration looks like: warnings.filterwarnings("ignore", category=DeprecationWarning) warnings.filterwarnings("once", category=DeprecationWarning, module="myproject.*") warnings.filterwarnings("once", category=DeprecationWarning, module="__main__") So that could stand to be made cleaner in a few specific ways: 1. Provide a dedicated API for configuring the deprecation warnings filtering 2. When given a module name, also enable warnings for submodules of that module Given those design guidelines, an improvement may look like: warnings.ignoredeprecations(except_for=["myproject","__main__"]) A middle ground between the status quo and full re-enablement of deprecation warnings would also be to add the following to the default filter set used when neither -W nor PYTHONWARNINGS is set: warnings.filterwarnings("once", category=DeprecationWarning, module="__main__") That way, warnings would be emitted by default for the REPL and top-level scripts, but getting them for imported libraries would continue to be opt-in. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan at bytereef.org Mon Nov 6 05:36:59 2017 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 6 Nov 2017 11:36:59 +0100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106121817.4c3c2367@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> Message-ID: <20171106103659.GA8894@bytereef.org> On Mon, Nov 06, 2017 at 12:18:17PM +0200, Paul Sokolovsky wrote: > MicroPython hashmap implementation is effectively O(n) (average and > worst case) due to the algorithm parameters chosen (like the load factor > of 1). Of course, parameters could be tweaked, but the ones chosen are > so because the memory usage is far more important for MicroPython > systems than performance characteristics (all due to small amounts of > memory). Like, MicroPython was twice as fast than Python 3.3 on > average, and 1000 times more efficient in the memory usage. $ cat xxx.py def pi_float(): """native float""" lasts, t, s, n, na, d, da = 0, 3.0, 3, 1, 0, 0, 24 while s != lasts: lasts = s n, na = n+na, na+8 d, da = d+da, da+32 t = (t * n) / d s += t return s for i in range(100000): x = pi_float() $ time ./micropython xxx.py real 0m4.424s user 0m4.406s sys 0m0.016s $ $ time ../../cpython/python xxx.py real 0m1.066s user 0m1.056s sys 0m0.010s Congratulations ... Stefan Krah From rosuav at gmail.com Mon Nov 6 05:48:37 2017 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 6 Nov 2017 21:48:37 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106103659.GA8894@bytereef.org> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106103659.GA8894@bytereef.org> Message-ID: On Mon, Nov 6, 2017 at 9:36 PM, Stefan Krah wrote: > On Mon, Nov 06, 2017 at 12:18:17PM +0200, Paul Sokolovsky wrote: >> MicroPython hashmap implementation is effectively O(n) (average and >> worst case) due to the algorithm parameters chosen (like the load factor >> of 1). Of course, parameters could be tweaked, but the ones chosen are >> so because the memory usage is far more important for MicroPython >> systems than performance characteristics (all due to small amounts of >> memory). 
Like, MicroPython was twice as fast than Python 3.3 on
>> average, and 1000 times more efficient in the memory usage.
>
> $ cat xxx.py
>
> def pi_float():
>     """native float"""
>     lasts, t, s, n, na, d, da = 0, 3.0, 3, 1, 0, 0, 24
>     while s != lasts:
>         lasts = s
>         n, na = n+na, na+8
>         d, da = d+da, da+32
>         t = (t * n) / d
>         s += t
>     return s
>
> for i in range(100000):
>     x = pi_float()
>
> $ time ./micropython xxx.py
>
> real    0m4.424s
> user    0m4.406s
> sys     0m0.016s
> $
> $ time ../../cpython/python xxx.py
>
> real    0m1.066s
> user    0m1.056s
> sys     0m0.010s
>
>
> Congratulations ...

Maybe I'm misreading your demo, but I fail to see what this has to do
with dict performance?

ChrisA

From pmiscml at gmail.com Mon Nov 6 06:03:05 2017
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Mon, 6 Nov 2017 13:03:05 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171106103659.GA8894@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
 <20171106103659.GA8894@bytereef.org>
Message-ID: <20171106130305.38462510@x230>

Hello,

On Mon, 6 Nov 2017 11:36:59 +0100
Stefan Krah wrote:

> On Mon, Nov 06, 2017 at 12:18:17PM +0200, Paul Sokolovsky wrote:
> > MicroPython hashmap implementation is effectively O(n) (average and
> > worst case) due to the algorithm parameters chosen (like the load
> > factor of 1). Of course, parameters could be tweaked, but the ones
> > chosen are so because the memory usage is far more important for
> > MicroPython systems than performance characteristics (all due to
> > small amounts of memory). Like, MicroPython was twice as fast than
> > Python 3.3 on average, and 1000 times more efficient in the memory
> > usage.
>
[]
>
> $ time ./micropython xxx.py
> $ time ../../cpython/python xxx.py
>
> Congratulations ...

That's why I wrote "Python 3.3"; it's hard to argue that CPython hasn't
done anything about the "Python is slow" proverb. It still shouldn't be
hard to construct a test case where MicroPython is faster (by not
implementing features not needed by 90% of Python programs of course,
not "for free"). Anyway, where are the memory measurements?

(This is offtopic, so I shouldn't have replied.)

> Stefan Krah

--
Best regards,
 Paul                          mailto:pmiscml at gmail.com

From solipsis at pitrou.net Mon Nov 6 06:12:36 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 12:12:36 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl>
 <20171102154516.GG9068@ando.pearwood.info>
 <8436CF8D-562C-44A6-87C2-B48E5B551BE8@langa.pl>
Message-ID: <20171106121236.5eb4b050@fsol>

On Sun, 5 Nov 2017 20:18:07 -0800
Lukasz Langa wrote:
>
> Interestingly enough, at Facebook we found out that using f-strings is
> *faster* at runtime than the lazy form of logging.log("format with %s
> and %d", arg1, arg2), including for cases when the log message is not
> emitted.

I suspect this depends on how complex your f-strings are (or the
interpolated data).

Regards

Antoine.

From steve at holdenweb.com Mon Nov 6 06:18:50 2017
From: steve at holdenweb.com (Steve Holden)
Date: Mon, 6 Nov 2017 11:18:50 +0000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171106121817.4c3c2367@x230>
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
Message-ID: 

On Mon, Nov 6, 2017 at 10:18 AM, Paul Sokolovsky wrote:

> Hello,
>
> What happens now borders on technological surrealism - CPython, after
> many years of persuasion, switched its dict algorithm, rather
> inefficient in terms of memory, to something else, less inefficient
> (still quite inefficient, taking "no overhead" as the baseline). That
> algorithm happened to have another property. Now there's a seemingly
> serious talk of letting that property leak into the *language spec*,
> despite the fact that there can be an unlimited number of dictionary
> algorithms, most of them not having that property.
>

I have to agree: I find the elevation of a CPython implementation detail
to a language feature somewhat hard to comprehend. Maybe it's more to do
with the way it's been presented, but this is hardly an enhancement the
language has been screaming for, for years.

Presumably there is little concern that algorithms that rely on this
behaviour will be perfectly syntactically conformant with earlier
versions, but will fail subtly and without explanation? It's a small
concern, but a real one - particularly for learners.

regards
Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net Mon Nov 6 06:27:54 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 12:27:54 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
Message-ID: <20171106122754.23939971@fsol>

I think that Paul has a point. Interestingly, at the same time we're
talking about guaranteeing the order of dicts, we're talking about using
another, unordered, data structure (hash array mapped tries) to improve
the performance of something that resembles a namespace. It seems the
"unordered" part will be visible through ExecutionContext.vars().
https://www.python.org/dev/peps/pep-0550/#enumerating-context-vars

The ordered-ness of dicts could instead become one of those stable
CPython implementation details, such as the fact that resources are
cleaned up timely by reference counting, that people nevertheless should
not rely on if they're writing portable code.

Regards

Antoine.

On Mon, 6 Nov 2017 12:18:17 +0200
Paul Sokolovsky wrote:

> []
>
> > I don't think that situation should change the decision,
>
> Indeed, it shouldn't. What may change it is the simple and obvious fact
> that there's no need to change anything, as proven by the 25-year
> history of the language.
>
> What happens now borders on technological surrealism - CPython, after
> many years of persuasion, switched its dict algorithm, rather
> inefficient in terms of memory, to something else, less inefficient
> (still quite inefficient, taking "no overhead" as the baseline). That
> algorithm happened to have another property. Now there's a seemingly
> serious talk of letting that property leak into the *language spec*,
> despite the fact that there can be an unlimited number of dictionary
> algorithms, most of them not having that property.
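To make the "portable code" caveat concrete, a minimal sketch: code that
has to produce identical output on any implementation can impose an
explicit order instead of relying on dict iteration order:

    import json

    data = {"b": 2, "a": 1, "c": 3}
    # sort_keys makes the output independent of dict iteration order:
    print(json.dumps(data, sort_keys=True))  # {"a": 1, "b": 2, "c": 3}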
From solipsis at pitrou.net Mon Nov 6 06:45:27 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 12:45:27 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
References: 
Message-ID: <20171106124527.07ad7844@fsol>

On Mon, 6 Nov 2017 12:05:07 +1000
Nick Coghlan wrote:
>
> So my proposal is simple (and not really new): let's revert back to
> the way things were in 2.6 and earlier, with DeprecationWarning being
> visible by default, and app devs having to silence it explicitly
> during application startup (before they start importing third party
> modules) if they don't want their users seeing it when running on the
> latest Python version (e.g. this would be suitable for open source
> apps that get integrated into Linux distros and use the system Python
> there).
>
> This will also restore the previously clear semantic and behavioural
> difference between PendingDeprecationWarning (hidden by default) and
> DeprecationWarning (visible by default).

I'm on the fence on this.

I was part of the minority who opposed the original decision. So I
really appreciate your sentiment. Since then, I had to deal with a lot
of very diverse third-party libraries, and I learned that:

- most third-party libraries don't ever emit PendingDeprecationWarning;
they only emit DeprecationWarning. So all their warnings would now be
visible by default. (1)

- release cycles are much shorter on third-party libraries, so it's
easier not to notice that one of your dependencies has started changing
some of its APIs - maybe you'll notice in 3 months. Also, perhaps you
need a compatibility fallback anyway instead of unconditionally
switching to the new version of the API, which adds to the maintenance
cost.

- depending on not-well-maintained third-party libraries is a fact of
life; these libraries may induce a lot of DeprecationWarnings from their
dependencies, and still work fine until some maintainer comes out from
the grave (or steps briefly into it before returning to their normal
non-programming life) to apply a proper fix and make a release.

The one part where I think your proposal is good (apart from making
things a bit simpler for developers) is that I also noticed some authors
of third-party libraries don't notice until late that their code emits
DeprecationWarnings in dependencies. By knowing earlier (and having
their users affected) they may be enticed to fix those issues earlier.
But that's only true for third-party libraries with an active enough
maintainer, and a tight enough release schedule.

As for why (1) happens, I think it's partly because changing from one
warning to another is cumbersome; partly because many libraries don't
want to be constrained by a long deprecation cycle.

Regards

Antoine.

From solipsis at pitrou.net Mon Nov 6 06:58:56 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 12:58:56 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
References: <20171106124527.07ad7844@fsol>
Message-ID: <20171106125856.471c6163@fsol>

On Mon, 6 Nov 2017 12:45:27 +0100
Antoine Pitrou wrote:
>
> I'm on the fence on this.
>
> I was part of the minority who opposed the original decision. So I
> really appreciate your sentiment. Since then, I had to deal with a lot
> of very diverse third-party libraries, and I learned that:
>
> - most third-party libraries don't ever emit PendingDeprecationWarning;
> they only emit DeprecationWarning. So all their warnings would now be
> visible by default. (1)
>
> - release cycles are much shorter on third-party libraries, so it's
> easier not to notice that one of your dependencies has started
> changing some of its APIs - maybe you'll notice in 3 months. Also,
> perhaps you need a compatibility fallback anyway instead of
> unconditionally switching to the new version of the API, which adds
> to the maintenance cost.

Of course, there's also the case where it's one of your dependency's
dependencies, or your dependency's dependency's dependencies, or your
dependency's dependency's dependency's dependencies, that has started to
emit such a warning because of how your dependency, or your dependency's
dependency, or your dependency's dependency's dependency, calls into
that library. Now if you have several such dependencies (or
dependencies' dependencies, etc.), it becomes both likely to happen and
annoying to solve or work around.

I guess my takeaway point is that many situations are complicated, and
many third-party library developers are much less disciplined than what
some of us would idealistically expect them to be (those developers
probably often have good reasons for that). For someone who takes care
to only use selected third-party libraries of high maintenance quality,
I'm very +1 on your proposal. For the more murky (but rather common)
cases of relying on average quality third-party libraries, I'm +0.

Regards

Antoine.

From ncoghlan at gmail.com Mon Nov 6 07:09:18 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 6 Nov 2017 22:09:18 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
Message-ID: 

On 6 November 2017 at 21:18, Steve Holden wrote:
> I have to agree: I find the elevation of a CPython implementation detail to
> a language feature somewhat hard to comprehend. Maybe it's more to do with
> the way it's been presented, but this is hardly an enhancement the language
> has been screaming for, for years.
>
> Presumably there is little concern that algorithms that rely on this
> behaviour will be perfectly syntactically conformant with earlier versions,
> but will fail subtly and without explanation? It's a small concern, but a
> real one - particularly for learners.

A similar concern existed when we elevated sort stability to being a
language requirement - if you relied on that guarantee, your code was
technically buggy on versions prior to 2.3, but eventually 2.2 and
earlier aged out of general use, allowing such code to become correct in
general.

So the current discussion is mainly about deciding where we want the
compatibility burden to fall in relation to dict insertion ordering:

1. Do we deliberately revert CPython back to being harder to use
correctly for the sake of making Python easier to implement?
2. Do we make Python harder to implement for the sake of making it
easier to use?
3. Do we choose not to choose, thus implicitly choosing "2" by default
due to the fact that Python is defined by a language spec and a
reference implementation, rather than *just* a language spec?
Here's a more-complicated-than-a-doctest-for-a-dict-repr, but still
fairly straightforward, example regarding the "insertion ordering
dictionaries are easier to use correctly" argument:

import json
data = {"a":1, "b":2, "c":3}
rendered = json.dumps(data)
data2 = json.loads(rendered)
rendered2 = json.dumps(data2)
# JSON round trip
assert data == data2, "JSON round trip failed"
# Dict round trip
assert rendered == rendered2, "dict round trip failed"

Both of those assertions will always pass in CPython 3.6, as well as in
PyPy, because their dict implementations are insertion ordered, which
means the iteration order on the dictionaries is always "a", "b", "c".

If you try it on 3.5 though, you should fairly consistently see that
last assertion fail, since there's nothing in 3.5 that ensures that data
and data2 will iterate over their keys in the same order.

You can make that code implementation independent (and sufficiently
version dependent to pass both assertions) by using OrderedDict:

from collections import OrderedDict
import json
data = OrderedDict(a=1, b=2, c=3)
rendered = json.dumps(data)
data2 = json.loads(rendered, object_pairs_hook=OrderedDict)
rendered2 = json.dumps(data2)
# JSON round trip
assert data == data2, "JSON round trip failed"
# Dict round trip
assert rendered == rendered2, "dict round trip failed"

However, despite the way this code looks, the serialised key order
*might not* be "a, b, c" on 3.5 and earlier (it will be on 3.6+, since
that already requires that kwarg order be preserved).

So the formally correct version independent code that reliably ensures
that the key order in the JSON file is always "a, b, c" looks like this:

from collections import OrderedDict
import json
data = OrderedDict((("a",1), ("b",2), ("c",3)))
rendered = json.dumps(data)
data2 = json.loads(rendered, object_pairs_hook=OrderedDict)
rendered2 = json.dumps(data2)
# JSON round trip
assert data == data2, "JSON round trip failed"
# Dict round trip
assert rendered == rendered2, "dict round trip failed"
# Key order
assert "".join(data) == "".join(data2) == "abc", "key order failed"

Getting from the "Works on CPython 3.6+ but is technically non-portable"
state to a fully portable correct implementation that ensures a
particular key order in the JSON file thus currently requires the
following changes:

- don't use a dict display, use collections.OrderedDict
- make sure to set object_pairs_hook when using json.loads
- don't use kwargs to OrderedDict, use a sequence of 2-tuples

For 3.6, we've already said that we want the last constraint to age out,
such that the middle version of the code also ensures a particular key
order.

The proposal is that in 3.7 we retroactively declare that the first,
most obvious, version of this code should in fact reliably pass all
three assertions.

Failing that, the proposal is that we instead change the dict iteration
implementation such that the dict round trip will start failing
reasonably consistently again (the same as it did in 3.5), so that folks
realise almost immediately that they still need collections.OrderedDict
instead of the builtin dict.

Cheers,
Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Mon Nov 6 07:07:21 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Nov 2017 13:07:21 +0100 Subject: [Python-Dev] 3.5.4 doesn't appear in changelog Message-ID: <20171106130721.31e08bde@fsol> Hello, Is there a known reason why 3.5.4 doesn't appear in the changelogs at the bottom of https://docs.python.org/3.7/whatsnew/index.html ? (all releases until 3.5.3 do) Regards Antoine. From steve at pearwood.info Mon Nov 6 08:14:35 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 7 Nov 2017 00:14:35 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106122754.23939971@fsol> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> Message-ID: <20171106131434.GE15990@ando.pearwood.info> On Mon, Nov 06, 2017 at 12:27:54PM +0100, Antoine Pitrou wrote: > The ordered-ness of dicts could instead become one of those stable > CPython implementation details, such as the fact that resources are > cleaned up timely by reference counting, that people nevertheless > should not rely on if they're writing portable code. Given that (according to others) none of IronPython, Jython, Batavia, Nuitka, or even MicroPython, should have trouble implementing an insertion-order preserving dict, and that PyPy already has, why should we say it is a CPython implementation detail? -- Steve From ncoghlan at gmail.com Mon Nov 6 08:23:25 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 6 Nov 2017 23:23:25 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <20171106125856.471c6163@fsol> References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> Message-ID: On 6 November 2017 at 21:58, Antoine Pitrou wrote: > I guess my takeaway point is that many situations are complicated, and > many third-party library developers are much less disciplined than what > some of us would idealistically expect them to be (those developers > probably often have good reasons for that). For someone who takes care > to only use selected third-party libraries of high maintenance quality, > I'm very +1 on your proposal. For the more murky (but rather common) > cases of relying on average quality third-party libraries, I'm +0. Agreed, and I'm thinking there could be a lot of value in the variant of the idea that says: - tweak the default warning filters to turn DeprecationWarning back on for __main__ only - add a new warnings module API specifically for managing deprecation warnings The first change would restore DeprecationWarning-by-default for: - ad hoc single file scripts (potentially including Jupyter notebooks, depending on which execution namespace kernels use) - ad hoc experimentation at the REPL - working through outdated examples at the REPL For installed applications using setuptools (or similar), "__main__" is the script wrapper, not any of the application code, and those have been getting more minimal over time (and when they do have non-trivial code in them, it's calling into setuptools/pkg_resources rather than the standard library). The second change would be designed around making it easier for app developers to say "Always emit DeprecationWarnings for my own code, don't worry about anything else". With DeprecationWarning still off by default, that might look like: warnings.reportdeprecations("myproject") Cheers, Nick. P.S. 
For those interested, the issue where we switched to the current
behaviour is https://bugs.python.org/issue7319

And the related stdlib-sig thread is
https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html

That was apparently in the long gone era when I still read every
python-checkins message, so there's also a very short thread on
python-dev after it landed in SVN:
https://mail.python.org/pipermail/python-dev/2010-January/097178.html

The primary argument for the change in the stdlib-sig thread is
definitely "App devs don't hide deprecation warnings, and then their
users complain about seeing them". Guido even goes so far as to describe
app developers using the warnings filter interface as designed (to
manage what they emit on stderr) as "a special hack".

Later in the thread Georg Brandl brought up a fairly compelling argument
that Guido was right about that, which is that programmatic filters to
manage warnings currently don't compose well with the command line and
environment variable settings, since there's only one list of warning
filters, which means there's no way to say "put this *before* the
default filters, but *after* any filters that were specified explicitly
with -W or PYTHONWARNINGS".

Instead, your options are limited to prepending (the default behaviour),
which overrides both the defaults and any specific settings, or
appending, which means you can't even override the defaults.

When DeprecationWarnings were enabled by default, this meant there was
no native way to override application level filters that ignored them in
order to turn on DeprecationWarnings when running your test suite. By
contrast, having them be off by default with runtime programmatic filter
manipulation being rare offers a simple way to turn them on globally,
via "-Wd". Adding "-W once::DeprecationWarning:__main__" to the default
filter list doesn't change that substantially.

That said, this does make me wonder whether the warnings module should
either place a sentinel marker in its warning filter list to mark where
the default filters start (and adjust the append mode to insert filters
there), or else provide a new option for programmatic configuration
that's "higher priority than the defaults, lower priority than the
explicit configuration settings".

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stefan at bytereef.org Mon Nov 6 08:29:19 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Mon, 6 Nov 2017 14:29:19 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171106130305.38462510@x230>
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
 <20171106103659.GA8894@bytereef.org>
 <20171106130305.38462510@x230>
Message-ID: <20171106132919.GA21362@bytereef.org>

On Mon, Nov 06, 2017 at 01:03:05PM +0200, Paul Sokolovsky wrote:
> > $ time ./micropython xxx.py
> > $ time ../../cpython/python xxx.py
> >
> > Congratulations ...
>
> That's why I wrote "Python 3.3"; it's hard to argue that CPython hasn't
> done anything about the "Python is slow" proverb. It still shouldn't be
> hard to construct a test case where MicroPython is faster (by not
> implementing features not needed by 90% of Python programs of course,
> not "for free").

Sorry, that was a slightly mischievous benchmark indeed. -- Whether the
proposal is surreal or not depends on the likelihood that a) a
substantially faster dict algorithm will emerge and b) CPython, PyPy and
Jython will switch to it.
My proposal was based on the fact that for almost two release cycles the
ordering implementation detail hasn't changed.

Stefan Krah

From solipsis at pitrou.net Mon Nov 6 08:30:45 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 14:30:45 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230>
 <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info>
Message-ID: <20171106143045.3bc16405@fsol>

On Tue, 7 Nov 2017 00:14:35 +1100
Steven D'Aprano wrote:
> On Mon, Nov 06, 2017 at 12:27:54PM +0100, Antoine Pitrou wrote:
>
> > The ordered-ness of dicts could instead become one of those stable
> > CPython implementation details, such as the fact that resources are
> > cleaned up timely by reference counting, that people nevertheless
> > should not rely on if they're writing portable code.
>
> Given that (according to others) none of IronPython, Jython, Batavia,
> Nuitka, or even MicroPython, should have trouble implementing an
> insertion-order preserving dict, and that PyPy already has, why should
> we say it is a CPython implementation detail?

That's not what I'm taking away from Paul Sokolovsky's message I was
responding to. If you think otherwise then please expand and/or contact
Paul so that things are made clearer one way or the other.

From nad at python.org Mon Nov 6 08:45:54 2017
From: nad at python.org (Ned Deily)
Date: Mon, 6 Nov 2017 14:45:54 +0100
Subject: [Python-Dev] 3.5.4 doesn't appear in changelog
In-Reply-To: <20171106130721.31e08bde@fsol>
References: <20171106130721.31e08bde@fsol>
Message-ID: <7C6BE3CE-FF2B-43C4-941F-363A4B3DE9FE@python.org>

On Nov 6, 2017, at 13:07, Antoine Pitrou wrote:
> Is there a known reason why 3.5.4 doesn't appear in the changelogs at
> the bottom of https://docs.python.org/3.7/whatsnew/index.html ?
>
> (all releases until 3.5.3 do)

As things stand now, the changelogs from maintenance releases have to be
manually merged into feature release trees (e.g. master). That's
something I've done close to final feature release. With the switch to
blurb, it would be nice to better automate the sharing of changelogs
between branches but I don't think anyone has tried to tackle that yet.
Patches welcome.

--
  Ned Deily
  nad at python.org -- []

From solipsis at pitrou.net Mon Nov 6 08:39:52 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 6 Nov 2017 14:39:52 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning
 by default
In-Reply-To: 
References: <20171106124527.07ad7844@fsol>
 <20171106125856.471c6163@fsol>
Message-ID: <20171106143952.346e19d7@fsol>

On Mon, 6 Nov 2017 23:23:25 +1000
Nick Coghlan wrote:
> On 6 November 2017 at 21:58, Antoine Pitrou wrote:
> > I guess my takeaway point is that many situations are complicated, and
> > many third-party library developers are much less disciplined than what
> > some of us would idealistically expect them to be (those developers
> > probably often have good reasons for that). For someone who takes care
> > to only use selected third-party libraries of high maintenance quality,
> > I'm very +1 on your proposal. For the more murky (but rather common)
> > cases of relying on average quality third-party libraries, I'm +0.
>
> Agreed, and I'm thinking there could be a lot of value in the variant
> of the idea that says:
>
> - tweak the default warning filters to turn DeprecationWarning back on
> for __main__ only

That sounds error-prone.
I'd rather have them on by default everywhere. > - add a new warnings module API specifically for managing deprecation warnings +1 And I think we need to handle two different use cases: - silencing warnings *emitted by* a certain module (e.g. a widely-used module which recently introduced major API changes) - silencing warnings *reported in* a certain module (e.g. a sporadically-maintained library whose usage frequently emits deprecation warnings coming from other libraries) Ideally, we also need a CLI switch (or environment variable) to override these settings, so that one can run in "dev mode" and see all problematic usage across their library, application and third-party dependencies. Regards Antoine. From tds333 at mailbox.org Mon Nov 6 09:01:39 2017 From: tds333 at mailbox.org (Wolfgang) Date: Mon, 6 Nov 2017 15:01:39 +0100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106143045.3bc16405@fsol> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> Message-ID: <02b2ad90-d438-6a33-a48b-6bbca3aab944@mailbox.org> Hi, I may have missed something, but the guarantee that dict literals are ordered does not imply that dict must be ordered in all cases. The dict literal: d = {"a": 1, "b": 2} will keep the order of "a" and "b" because it is specified as a dict literal. But d["c"] = 3 can change this order, and that is allowed by the specification of guaranteed ordered dict literals. Please correct me if I am wrong. In Python 3.6 it does not, because dict is implemented as ordered and the insertion order is preserved. Also I think we should give the whole thing more time and wait before making this guarantee. There are valid concerns against this. Personally I like the ordering, but if I need an ordered dict it is ok for me to write this explicitly. The **kwargs are already guaranteed to be ordered, and this was useful because the OrderedDict constructor benefits, and it is useful in other places too. But making all dicts ordered by default is another level. Regards, Wolfgang From gjcarneiro at gmail.com Mon Nov 6 09:16:39 2017 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Mon, 6 Nov 2017 14:16:39 +0000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: Big +1 to turning warnings on by default again. When this behaviour first started, first I was surprised, then annoyed that warnings were being suppressed. For a few years I learned to have `export PYTHONWARNINGS=default` in my .bashrc. But eventually, the warnings in 3rd-party Python modules gradually increased because, since warnings are disabled by default, authors of command-line tools, or even python modules, don't even realise they are triggering the warning. So developers stop fixing warnings because they are hidden. Things get worse and worse over the years. Eventually I got fed up and removed the PYTHONWARNINGS env var. Showing warnings by default is good: 1. End users who don't understand what those warnings are are unlikely to see them, since they don't use command-line tools at all; 2. The users that do see them are sufficiently proficient to be able to submit a bug report; 3. If you file a bug report in a tool that uses a 3rd party module, the author of that tool should open a corresponding bug report on the 3rd party module that actually caused the warning; 4.
Given time, all the bug reports trickle down and create pressure on module maintainers to fix warnings; 5. If a module is being used and has no maintainer, it's a good indication it is time to fork it or find an alternative. Not fixing warnings is a form of technical debt that we should not encourage. It is not the Python way. On 6 November 2017 at 02:05, Nick Coghlan wrote: > On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both > mistakenly thought the async/await deprecation warnings were missing > from 3.6. > > They weren't missing, we'd just both forgotten those warnings were off > by default (7 years after the change to the default settings in 2.7 & > 3.2). > > So my proposal is simple (and not really new): let's revert back to > the way things were in 2.6 and earlier, with DeprecationWarning being > visible by default, and app devs having to silence it explicitly > during application startup (before they start importing third party > modules) if they don't want their users seeing it when running on the > latest Python version (e.g. this would be suitable for open source > apps that get integrated into Linux distros and use the system Python > there). > > This will also restore the previously clear semantic and behavioural > difference between PendingDeprecationWarning (hidden by default) and > DeprecationWarning (visible by default). > > As part of this though, I'd suggest amending the documentation for > DeprecationWarning [1] to specifically cover how to turn it off > programmatically (`warnings.simplefilter("ignore", > DeprecationWarning)`), at the command line (`python -W > ignore::DeprecationWarning ...`), and via the environment > (`PYTHONWARNINGS=ignore::DeprecationWarning`). > > (Structurally, I'd probably put that at the end of the warnings > listing as a short introduction to warnings management, with links out > to the relevant sections of the documentation, and just use > DeprecationWarning as the specific example) > > Cheers, > Nick. > > [1] https://docs.python.org/3/library/exceptions.html#DeprecationWarning > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > gjcarneiro%40gmail.com > -- Gustavo J. A. M. Carneiro Gambit Research "The universe is always one step beyond logic." -- Frank Herbert -------------- next part -------------- An HTML attachment was scrubbed... URL: From flying-sheep at web.de Mon Nov 6 09:51:43 2017 From: flying-sheep at web.de (Philipp A.) Date: Mon, 06 Nov 2017 14:51:43 +0000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: Hi! Just this minute I ran across a case where I'd want DeprecationWarnings on by default (We want to rename a property in an API I'm co-developing. It has mainly scientists as its target audience, so end users, not developers.) Gustavo Carneiro wrote on Mon, 6 Nov 2017 at 15:19: > Big +1 to turning warnings on by default again. > > When this behaviour first started, first I was surprised, then annoyed > that warnings were being suppressed. For a few years I learned to have > `export PYTHONWARNINGS=default` in my .bashrc.
> > But eventually, the warnings in 3rd-party Python modules gradually > increased because, since warnings are disabled by default, authors of > command-line tools, or even python modules, don't even realise they are > triggering the warning. > > So developers stop fixing warnings because they are hidden. Things get > worse and worse over the years. Eventually I got fed up and removed the > PYTHONWARNINGS env var. > > Showing warnings by default is good: > 1. End users who don't understand what those warnings are are unlikely to > see them since they don't command-line tools at all; > 2. The users that do see them are sufficiently proficient to be able to > submit a bug report; > 3. If you file a bug report in tool that uses a 3rd party module, the > author of that tool should open a corresponding bug report on the 3rd party > module that actually caused the warning; > 4. Given time, all the bug reports trickle down and create pressure on > module maintainers to fix warnings; > 5. If a module is being used and has no maintainer, it's a good > indication it is time to fork it or find an alternative. > > Not fixing warnings is a form of technical debt that we should not > encourage. It is not the Python way. > > > On 6 November 2017 at 02:05, Nick Coghlan wrote: > >> On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both >> mistakenly though the async/await deprecation warnings were missing >> from 3.6. >> >> They weren't missing, we'd just both forgotten those warnings were off >> by default (7 years after the change to the default settings in 2.7 & >> 3.2). >> >> So my proposal is simple (and not really new): let's revert back to >> the way things were in 2.6 and earlier, with DeprecationWarning being >> visible by default, and app devs having to silence it explicitly >> during application startup (before they start importing third party >> modules) if they don't want their users seeing it when running on the >> latest Python version (e.g. this would be suitable for open source >> apps that get integrated into Linux distros and use the system Python >> there). >> >> This will also restore the previously clear semantic and behavioural >> different between PendingDeprecationWarning (hidden by default) and >> DeprecationWarning (visible by default). >> >> As part of this though, I'd suggest amending the documentation for >> DeprecationWarning [1] to specifically cover how to turn it off >> programmatically (`warnings.simplefilter("ignore", >> DeprecationWarning)`), at the command line (`python -W >> ignore::DeprecationWarning ...`), and via the environment >> (`PYTHONWARNINGS=ignore::DeprecationWarning`). >> >> (Structurally, I'd probably put that at the end of the warnings >> listing as a short introduction to warnings management, with links out >> to the relevant sections of the documentation, and just use >> DeprecationWarning as the specific example) >> >> Cheers, >> Nick. >> >> [1] https://docs.python.org/3/library/exceptions.html#DeprecationWarning >> > >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com >> > > > > -- > Gustavo J. A. M. Carneiro > Gambit Research > "The universe is always one step beyond logic." 
-- Frank Herbert > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/flying-sheep%40web.de > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Nov 6 10:20:45 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 6 Nov 2017 15:20:45 +0000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: On 6 November 2017 at 14:16, Gustavo Carneiro wrote: > But eventually, the warnings in 3rd-party Python modules gradually increased > because, since warnings are disabled by default, authors of command-line > tools, or even python modules, don't even realise they are triggering the > warning. > > So developers stop fixing warnings because they are hidden. Things get > worse and worse over the years. Eventually I got fed up and removed the > PYTHONWARNINGS env var. Maybe it's worth running the test suites of a number of major packages like pip, requests, django, ... with warnings switched on, to see what the likely impact of making warnings display by default would be on those projects? Hopefully, it's zero, but hard data is always better than speculation :-) Paul From victor.stinner at gmail.com Mon Nov 6 11:02:54 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 17:02:54 +0100 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? Message-ID: Hi, While discussions on the typing module are still hot, what do you think of allowing annotations in the standard library, but limited to a few basic types: * None * bool, int, float, complex * bytes, bytearray * str I'm not sure about container types like tuple, list, dict, set, frozenset. If we allow them, some developers may want to describe the container content, like List[int] or Dict[int, str]. My intent is to enhance the builtin documentation of functions of the standard library, including functions implemented in C. For example, it's well known that id(obj) returns an integer. So I expect a signature like: id(obj) -> int Context: Tal Einat proposed a change to convert the select module to Argument Clinic: https://bugs.python.org/issue31938 https://github.com/python/cpython/pull/4265 The docstring currently documents the return value like this: --- haypo at selma$ pydoc3 select.epoll.fileno|cat Help on method_descriptor in select.epoll: select.epoll.fileno = fileno(...) fileno() -> int Return the epoll control file descriptor. --- I'm talking about "-> int", a nice way to document that the function returns an integer. Problem: even if Argument Clinic supports "return converters" like "int", it doesn't generate a docstring with the return type. So I created the issue: "Support return annotation in signature for Argument Clinic" https://bugs.python.org/issue31939 But now I am confused between docstrings, Argument Clinic, "return converters", "signature" and "annotations"... R. David Murray reminded me of the current policy: "the python standard library will not include type annotations, that those are provided by typeshed." While we are discussing removing (or not) typing from the stdlib (!), I propose to allow annotations in the stdlib, *but* only limited to the most basic types. Such annotations *shouldn't* have a significant impact on performance (startup time) or memory footprint.
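To make the contrast concrete, here is a rough sketch - hypothetical code, not an actual patch nor the exact typeshed stub - of the two kinds of signatures involved:

    # Basic types only: self-contained, no typing import needed.
    def fileno() -> int:
        ...

    # An accurate os.fspath() annotation needs the typing module
    # (approximately what typeshed provides today):
    from typing import Union
    import os

    def fspath(path: Union[str, bytes, os.PathLike]) -> Union[str, bytes]:
        ...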
The expected drawback is that users can be surprised that some functions get annotations, while others don't. For example, os.fspath() requires a complex annotation which needs the typing module and is currently done in typeshed, whereas id(obj) can get its return type documented ("-> int"). What do you think? Victor From victor.stinner at gmail.com Mon Nov 6 11:07:38 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 17:07:38 +0100 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: References: Message-ID: Related to annotations, are you ok with annotating basic types in the *documentation* and/or *docstrings* of the standard library? For example, I chose to document the return type of time.time() (float) and time.time_ns() (int). It's short and I like how it's formatted. See the current rendered documentation: https://docs.python.org/dev/library/time.html#time.time "Annotations" in the documentation and docstrings have no impact on Python runtime performance. Annotations in docstrings make them a few characters longer and so impact the memory footprint, but I consider that the overhead is negligible, especially when using python3 -OO. Victor 2017-11-06 17:02 GMT+01:00 Victor Stinner : > Hi, > > While discussions on the typing module are still hot, what do you > think of allowing annotations in the standard libraries, but limited > to a few basic types: > > * None > * bool, int, float, complex > * bytes, bytearray > * str > > I'm not sure about container types like tuple, list, dict, set, > frozenset. If we allow them, some developers may want to describe the > container content, like List[int] or Dict[int, str]. > > My intent is to enhance the builtin documentation of functions of the > standard library including functions implemented in C. For example, > it's well known that id(obj) returns an integer. So I expect a > signature like: > > id(obj) -> int > > > Context: Tal Einat proposed a change to convert the select module to > Argument Clinic: > > https://bugs.python.org/issue31938 > https://github.com/python/cpython/pull/4265 > > The docstring currently documents the return like: > --- > haypo at selma$ pydoc3 select.epoll.fileno|cat > Help on method_descriptor in select.epoll: > > select.epoll.fileno = fileno(...) > fileno() -> int > > Return the epoll control file descriptor. > --- > > I'm talking about "-> int", nice way to document that the function > returns an integer. > > Problem: even if Argument Clinic supports "return converters" like > "int", it doesn't generate a docstring with the return type. So I > created the issue: > > "Support return annotation in signature for Argument Clinic" > https://bugs.python.org/issue31939 > > But now I am confused between docstrings, Argument Clinic, "return > converters", "signature" and "annotations"... > > R. David Murray reminded me the current policy: > > "the python standard library will not include type annotations, that > those are provided by typeshed." > > While we are discussing removing (or not) typing from the stdlib (!), > I propose to allow annotations in the stdlib, *but* only limited to > the most basic types. > > Such annotations *shouldn't* have a significant impact on performances > (startup time) or memory footprint. > > The expected drawback is that users can be surprised that some > functions get annotations, while others don't.
For example, > os.fspath() requires a complex annotation which needs the typing > module and is currently done in typeshed, whereas id(obj) can get its > return type documented ("-> int"). > > What do you think? > > Victor From steve at holdenweb.com Mon Nov 6 11:22:23 2017 From: steve at holdenweb.com (Steve Holden) Date: Mon, 6 Nov 2017 16:22:23 +0000 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: References: Message-ID: While I appreciate the value of annotations I think that *any* addition of them to the stdlib would complicate an important learning resource unnecessarily. S Steve Holden On Mon, Nov 6, 2017 at 4:07 PM, Victor Stinner wrote: > Related to annotations, are you ok to annotate basic types in the > *documentation* and/or *docstrings* of the standard library? > > For example, I chose to document the return type of time.time() (int) > and time.time_ns() (float). It's short and I like how it's formatted. > See the current rendered documentation: > > https://docs.python.org/dev/library/time.html#time.time > > "Annotations" in the documentation and docstrings have no impact on > Python runtime performance. Annotations in docstrings makes them a few > characters longer and so impact the memory footprint, but I consider > that the overhead is negligible, especially when using python3 -OO. > > Victor > > 2017-11-06 17:02 GMT+01:00 Victor Stinner : > > Hi, > > > > While discussions on the typing module are still hot, what do you > > think of allowing annotations in the standard libraries, but limited > > to a few basic types: > > > > * None > > * bool, int, float, complex > > * bytes, bytearray > > * str > > > > I'm not sure about container types like tuple, list, dict, set, > > frozenset. If we allow them, some developers may want to describe the > > container content, like List[int] or Dict[int, str]. > > > > My intent is to enhance the builtin documentation of functions of the > > standard library including functions implemented in C. For example, > > it's well known that id(obj) returns an integer. So I expect a > > signature like: > > > > id(obj) -> int > > > > > > Context: Tal Einat proposed a change to convert the select module to > > Argument Clinic: > > > > https://bugs.python.org/issue31938 > > https://github.com/python/cpython/pull/4265 > > > > The docstring currently documents the return like: > > --- > > haypo at selma$ pydoc3 select.epoll.fileno|cat > > Help on method_descriptor in select.epoll: > > > > select.epoll.fileno = fileno(...) > > fileno() -> int > > > > Return the epoll control file descriptor. > > --- > > > > I'm talking about "-> int", nice way to document that the function > > returns an integer. > > > > Problem: even if Argument Clinic supports "return converters" like > > "int", it doesn't generate a docstring with the return type. So I > > created the issue: > > > > "Support return annotation in signature for Argument Clinic" > > https://bugs.python.org/issue31939 > > > > But now I am confused between docstrings, Argument Clinic, "return > > converters", "signature" and "annotations"... > > > > R. David Murray reminded me the current policy: > > > > "the python standard library will not include type annotations, that > > those are provided by typeshed." > > > > While we are discussing removing (or not) typing from the stdlib (!), > > I propose to allow annotations in the stdlib, *but* only limited to > > the most basic types. 
> > Such annotations *shouldn't* have a significant impact on performances > > (startup time) or memory footprint. > > > > The expected drawback is that users can be surprised that some > > functions get annotations, while others don't. For example, > > os.fspath() requires a complex annotation which needs the typing > > module and is currently done in typeshed, whereas id(obj) can get its > > return type documented ("-> int"). > > > > What do you think? > > > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > steve%40holdenweb.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmiscml at gmail.com Mon Nov 6 11:35:48 2017 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Mon, 6 Nov 2017 18:35:48 +0200 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106143045.3bc16405@fsol> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> Message-ID: <20171106183548.20fee86b@x230> Hello, On Mon, 6 Nov 2017 14:30:45 +0100 Antoine Pitrou wrote: > On Tue, 7 Nov 2017 00:14:35 +1100 > Steven D'Aprano wrote: > > On Mon, Nov 06, 2017 at 12:27:54PM +0100, Antoine Pitrou wrote: > > > > > The ordered-ness of dicts could instead become one of those stable > > > CPython implementation details, such as the fact that resources > > > are cleaned up timely by reference counting, that people > > > nevertheless should not rely on if they're writing portable > > > code. > > > > Given that (according to others) none of IronPython, Jython, > > Batavia, Nuitka, or even MicroPython, should have trouble > > implementing an insertion-order preserving dict, and that PyPy > > already has, why should we say it is a CPython implementation > > detail? > > That's not what I'm taking away from Paul Sokolovsky's message I was > responding to. If you think otherwise then please expand and/or > contact Paul so that things are made clearer one way or the other. For MicroPython, it would lead to quite an overhead to keep dictionary items in insertion order. As I mentioned, MicroPython optimizes for very low bookkeeping memory overhead, so lookups are effectively O(n), but orderedness will increase the constant factor significantly, perhaps 5x. Also, arguably any algorithm which would *maintain* insertion order over mutating operations would be more complex and/or require more memory than one which doesn't. (This is based on the idea that this problem is equivalent to maintaining a sorted data structure, where the sorting key is "insertion order"). CPython 3.6 gives **kwargs in the key specification order? That's fine if that's all that it says (note that it doesn't say what happens if someone takes **kwargs and starts to "del []" or "[] =" it). That allows different ways to implement it, per particular implementation's choice. That's even implementable in MicroPython. Like, lookups will be less efficient, but nobody realistically passes hundreds of kwargs. It's pretty different to go from that to dictionaries in general. Saying that a general dict always maintains insertion order is saying that "in Python, you have to use complex, memory-hungry algorithms for a simple mapping type".
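For reference, the kind of scheme that implies - roughly the "compact dict" layout CPython 3.6 adopted - needs two parallel structures instead of one. A toy sketch (hypothetical code, not CPython's or anyone's actual implementation):

    class CompactDict:
        # Toy sketch: a sparse table of slot indices plus a dense,
        # insertion-ordered entry list. No resizing and no deletion,
        # which is exactly where the real bookkeeping cost shows up.
        SIZE = 32

        def __init__(self):
            self._sparse = [None] * self.SIZE   # hash slot -> entry index
            self._entries = []                  # dense (key, value) pairs

        def _slot(self, key):
            i = hash(key) % self.SIZE
            while (self._sparse[i] is not None
                   and self._entries[self._sparse[i]][0] != key):
                i = (i + 1) % self.SIZE         # linear probing
            return i

        def __setitem__(self, key, value):
            i = self._slot(key)
            if self._sparse[i] is None:
                self._sparse[i] = len(self._entries)
                self._entries.append((key, value))
            else:
                self._entries[self._sparse[i]] = (key, value)

        def __getitem__(self, key):
            i = self._slot(key)
            if self._sparse[i] is None:
                raise KeyError(key)
            return self._entries[self._sparse[i]][1]

        def __iter__(self):
            # Iteration follows insertion order, not hash order.
            return (k for k, _ in self._entries)

Even in this stripped-down form, the sparse index table is pure bookkeeping overhead on top of the ordered entry list.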
Saying something like "dict maintains insertion order until first modification" is going down the rabbit hole, making it all confusing, hard to remember, crazy-sounding to novices. Why all that, if, since the beginning of times, Python offered a structure with guaranteed ordering - list, and structure for efficient mapping one values into other - dict. Then even marriage between the two - OrderedDict. Why suddenly once in 25 years there's a need to do something to dict's, violating computer science background behind them (one of the reason enough people loved Python comparing to other "practical hack" languages)? Alternatives were already presented on the thread - if people want more and easier ordered dictionaries, it calls to add a special literal initializer for them ("o{}" was proposed). -- Best regards, Paul mailto:pmiscml at gmail.com From chris.jerdonek at gmail.com Mon Nov 6 12:27:20 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 06 Nov 2017 17:27:20 +0000 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> Message-ID: On Mon, Nov 6, 2017 at 4:11 AM Nick Coghlan wrote: > Here's a more-complicated-than-a-doctest-for-a-dict-repo, but still > fairly straightforward, example regarding the "insertion ordering > dictionaries are easier to use correctly" argument: > > import json > data = {"a":1, "b":2, "c":3} > rendered = json.dumps(data) > data2 = json.loads(rendered) > rendered2 = json.dumps(data2) > # JSON round trip > assert data == data2, "JSON round trip failed" > # Dict round trip > assert rendered == rendered2, "dict round trip failed" > > Both of those assertions will always pass in CPython 3.6, as well as > in PyPy, because their dict implementations are insertion ordered, > which means the iteration order on the dictionaries is always "a", > "b", "c". > > If you try it on 3.5 though, you should fairly consistently see that > last assertion fail, since there's nothing in 3.5 that ensures that > data and data2 will iterate over their keys in the same order. > > You can make that code implementation independent (and sufficiently > version dependent to pass both assertions) by using OrderedDict: > > from collections import OrderedDict > import json > data = OrderedDict(a=1, b=2, c=3) > rendered = json.dumps(data) > data2 = json.loads(rendered, object_pairs_hook=OrderedDict) > rendered2 = json.dumps(data2) > # JSON round trip > assert data == data2, "JSON round trip failed" > # Dict round trip > assert rendered == rendered2, "dict round trip failed" > > However, despite the way this code looks, the serialised key order > *might not* be "a, b, c" on 3.5 and earlier (it will be on 3.6+, since > that already requires that kwarg order be preserved). 
> So the formally correct version independent code that reliably ensures > that the key order in the JSON file is always "a, b, c" looks like > this: > > from collections import OrderedDict > import json > data = OrderedDict((("a",1), ("b",2), ("c",3))) > rendered = json.dumps(data) > data2 = json.loads(rendered, object_pairs_hook=OrderedDict) > rendered2 = json.dumps(data2) > # JSON round trip > assert data == data2, "JSON round trip failed" > # Dict round trip > assert rendered == rendered2, "dict round trip failed" > # Key order > assert "".join(data) == "".join(data2) == "abc", "key order failed" > > Getting from the "Works on CPython 3.6+ but is technically > non-portable" state to a fully portable correct implementation that > ensures a particular key order in the JSON file thus currently > requires the following changes: Nick, it seems like this is more complicated than it needs to be. You can just pass sort_keys=True to json.dump() / json.dumps(). I use it for tests and human-readability all the time. -Chris > > - don't use a dict display, use collections.OrderedDict > - make sure to set object_pairs_hook when using json.loads > - don't use kwargs to OrderedDict, use a sequence of 2-tuples > > For 3.6, we've already said that we want the last constraint to age > out, such that the middle version of the code also ensures a > particular key order. > > The proposal is that in 3.7 we retroactively declare that the first, > most obvious, version of this code should in fact reliably pass all > three assertions. > > Failing that, the proposal is that we instead change the dict > iteration implementation such that the dict round trip will start > failing reasonably consistently again (the same as it did in 3.5), so > that folks realise almost immediately that they still need > collections.OrderedDict instead of the builtin dict. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Nov 6 12:41:30 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 18:41:30 +0100 Subject: [Python-Dev] Partial support of a platform Message-ID: Hi, I see more and more proposed changes to fix some parts of the code to "partially" support a platform. I remember that 5 years ago, the CPython code was "full" of #ifdef and other conditional code to support various platforms, and I was happy when we succeeded in removing support for all these old platforms like OS/2, DOS or VMS. PEP 11 has a nice description of what it takes to get *full* support for a new platform: https://www.python.org/dev/peps/pep-0011/ But the question here is more about "partial" support. While changes are usually short, I dislike applying them to Python 2.7 and/or Python 3.6 until a platform is fully supported. I prefer to first see a platform fully supported, to see how many changes are required and to make sure that we get someone involved to maintain the code (handle new issues). Example of platforms: MinGW, Cygwin, OpenBSD, NetBSD, xWorks RTOS, etc. By the way, is there an exhaustive list of platforms "officially" supported by CPython?
Victor From hodgestar+pythondev at gmail.com Mon Nov 6 12:42:16 2017 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Mon, 6 Nov 2017 19:42:16 +0200 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: I'm -1 on turning this on by default. As a Python developer, I want to be aware of when deprecations are introduced, but I don't want the users of my library or application to care or know if I don't address those deprecation warnings for a few months or a year. The right solution for me here seems to be to enable the warnings in CI pipelines / tests. As an end user, if I see deprecation warnings there's nothing I can really do to make them go away straight away except run Python with warnings turned off, which seems to defeat the point of turning them on by default. The right solution here seems to be for authors to test their software before releasing. I'm -2 on a complicated rule for when warnings are on because I'm going to forget the rule a week from now and probably no one I work with on a day to day basis will even know what the rule was to start with. Maybe there are ways around these things, but I'm not really seeing what's wrong with the current situation that can't be fixed with slightly better CI setups (which are good for other reasons too). Schiavo Simon From brett at python.org Mon Nov 6 12:58:47 2017 From: brett at python.org (Brett Cannon) Date: Mon, 06 Nov 2017 17:58:47 +0000 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106183548.20fee86b@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> Message-ID: On Mon, 6 Nov 2017 at 08:36 Paul Sokolovsky wrote: > Hello, > > On Mon, 6 Nov 2017 14:30:45 +0100 > Antoine Pitrou wrote: > > > On Tue, 7 Nov 2017 00:14:35 +1100 > > Steven D'Aprano wrote: > > > On Mon, Nov 06, 2017 at 12:27:54PM +0100, Antoine Pitrou wrote: > > > > > > > The ordered-ness of dicts could instead become one of those stable > > > > CPython implementation details, such as the fact that resources > > > > are cleaned up timely by reference counting, that people > > > > nevertheless should not rely on if they're writing portable > > > > code. > > > > > > Given that (according to others) none of IronPython, Jython, > > > Batavia, Nuitka, or even MicroPython, should have trouble > > > implementing an insertion-order preserving dict, and that PyPy > > > already has, why should we say it is a CPython implementation > > > detail? > > > > That's not what I'm taking away from Paul Sokolovsky's message I was > > responding to. If you think otherwise then please expand and/or > > contact Paul so that things are made clearer one way or the other. > > For MicroPython, it would lead to quite an overhead to make > dictionary items be in insertion order. As I mentioned, MicroPython > optimizes for very low bookkeeping memory overhead, so lookups are > effectively O(n), but orderedness will increase constant factor > significantly, perhaps 5x. > > Also, arguably any algorithm which would *maintain* insertion order > over mutating operations would be more complex and/or require more > memory that one which doesn't. (This is based on the idea that > this problem is equivalent to maintaining a sorted data structure, where > sorting key is "insertion order").
> > CPython 3.6 gives **kwargs in the key specification order? That's fine > if that's all that it says (note that it doesn't say what happens if > someone takes **kwargs and starts to "del []" or "[] =" it). That > allows different ways to implement it, per particular implementation's > choice. That's even implementable in MicroPython. Like, lookups will be > less efficient, but nobody realistically passes hundreds of kwargs. > > It's pretty different to go from that to dictionaries in general. > > Saying that a general dict always maintains insertion order is saying > that "in Python, you have to use complex, memory hungry algorithms for > a simple mapping type". > But that's an implementation detail that most folks won't care about. > > Saying something like "dict maintains insertion order until first > modification" is going down the rabbit hole, making it all confusing, > hard to remember, crazy-sounding to novices. > > > Why all that, if, since the beginning of times, Python offered a > structure with guaranteed ordering - list, and structure for efficient > mapping one values into other - dict. Then even marriage between the > two - OrderedDict. > > Why suddenly once in 25 years there's a need to do something to dict's, > violating computer science background behind them (one of the reason > enough people loved Python comparing to other "practical hack" > languages)? > I don't understand what "computer science background" is being violated? > > Alternatives were already presented on the thread - if people want more > and easier ordered dictionaries, it calls to add a special literal > initializer for them ("o{}" was proposed). > I don't see that happening. If OrderedDict didn't lead to syntax then I don't see why that's necessary now. I think worrying about future optimizations that order preservation might prevent is a premature optimization/red herring. None of us can predict the future so worrying about some algorithmic breakthrough that will suddenly materialize on how to implement dictionaries seems unnecessary. That line of thought suggests we should guarantee as little as possible and just make most stuff undefined behaviour. I also think Paul S. has made it clear that MicroPython won't be implementing order-preserving dicts due to their memory constraints. That's fine, but I think we also need to realize that MicroPython is already a slight hybrid by being based on Python 3.4 but with a backport of async/await and a subset of the stdlib. >From what I have read, this argument breaks down to this: Standardizing order preserving and ... - MicroPython won't implement it, but all other major implementations will - Those that have order preservation will implement PEPs 468 and 520 for free - Some minor practical benefits in terms of avoiding accidental reliance on ordering If we don't promise order preservation ... - CPython should probably add some perturbing to the iteration order - All implementations can handle this without issue or major implementation costs So to me this sounds like a decision between the pragmatism of choosing semantics that all Python implementations can handle or the developer benefits of an order-preserving dict everywhere (all the other arguments I think are minor compared to these two). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From barry at python.org Mon Nov 6 13:10:57 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 6 Nov 2017 10:10:57 -0800 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: <153D0B55-6429-4121-AB03-284DC2928C8F@python.org> On Nov 5, 2017, at 23:08, Serhiy Storchaka wrote: > Following issues on GitHub related to new Python releases I have found that many projects try to fix deprecation warning, but there are projects that are surprised by ending of deprecation periods and removing features. Like others here, I?ve also been bitten by silently ignored DeprecationWarnings. We had some admittedly dodgy code in a corner of Mailman that we could have fixed earlier if we?d seen the warnings. But we never did, so the first indication of a problem was when code actually *broke* with the new version of Python. The problem was compounded because it wasn?t us that saw it first, it was a user, so now they had a broken installation and we had to issue a hot fix. If we?d seen the DeprecationWarnings in the previous version of Python, we would have fixed them and all would have been good. It?s true that those warnings can cause problems though. There are certain build/CI environments, e.g. in Ubuntu, that fail when they see unexpected stderr output. So when we start seeing new deprecations, we got build(-ish) time failures. I still think that?s a minor price to pay for projects that *want* to do the right thing but don?t because those warnings are essentially hidden until they are breakages. We have tools to help, so let?s use them. Staying current and code clean is hard and never ending. Welcome to software development! -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From barry at python.org Mon Nov 6 13:12:51 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 6 Nov 2017 10:12:51 -0800 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: <2B81BE3B-AA10-4401-A629-5C6782444838@python.org> On Nov 5, 2017, at 20:47, Nick Coghlan wrote: >> warnings.silence_deprecations() >> python -X silence-deprecations >> PYTHONSILENCEDEPRECATIONS=x > > It could be interesting to combine this with Tim's suggestion of > putting an upper version limit on the silencing, so the above may look > like: > > warnings.ignore_deprecations((3, 7)) > python -X ignore-deprecations=3.7 > PYTHONIGNOREDEPRECATIONS=3.7 That could be cool as long as we also support wildcards, e.g. defaults along the lines of my suggestions above to ignore everything. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From brett at python.org Mon Nov 6 13:17:42 2017 From: brett at python.org (Brett Cannon) Date: Mon, 06 Nov 2017 18:17:42 +0000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: <20171106022923.GA26858@phdru.name> <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp> Message-ID: On Mon, 6 Nov 2017 at 00:30 Victor Stinner wrote: > 2017-11-06 8:47 GMT+01:00 Serhiy Storchaka : > > 06.11.17 09:09, Guido van Rossum ????: > >> > >> I still find this unfriendly to users of Python scripts and small apps > who > >> are not the developers of those scripts. 
(Large apps tend to spit out so > >> much logging it doesn't really matter.) > >> > >> Isn't there a better heuristic we can come up with so that the warnings > >> tend to be on for developers but off for end users? > > > > There was a proposition to make deprecation warnings visible by default > in > > debug build and interactive interpreter. > > The problem is that outside CPython core developers, I expect that > almost nobody runs a Python compiled in debug mode. We should provide > debug features in the release build. For example, in Python 3.6, I > added debug hooks on memory allocation in release mode using > PYTHONMALLOC=debug. These hooks were already enabled by default in > debug mode. > > Moreover, applications are not developed nor tested in the REPL. > > Last year, I proposed a global "developer mode". The idea is to > provide the same experience than a Python debug build, but on a Python > release build: > > python3 -X dev script.py > or > PYTHONDEV=1 python3 script.py > behaves as > PYTHONMALLOC=debug python3 -Wd -b -X faulthandler script.py > > * Show DeprecationWarning and ResourceWarning warnings: python -Wd > * Show BytesWarning warning: python -b > * Enable Python assertions (assert) and set __debug__ to True: remove > (or just ignore) -O or -OO command line arguments > * faulthandler to get a Python traceback on segfault and fatal errors: > python -X faulthandler > * Debug hooks on Python memory allocators: PYTHONMALLOC=debug > > If you don't follow the CPython development, it's hard to be aware of > "new" options like -X faulthandler (Python 3.3) or PYTHONMALLOC=debug > (Python 3.6). And it's easy to forget an option like -b. > > > Maybe we even a need -X dev=strict which would be stricter: > > * use -Werror instead of -Wd: raise an exception when a warning is emitted > * use -bb instead of -b: get BytesWarning exceptions > * Replace "inconsistent use of tabs and spaces in indentation" warning > with an error in the Python parser > * etc. > I like this idea and would argue that `-X dev` should encompass what's proposed for `-X dev=strict` and just have it be strict to begin with. Then we can add an equivalent environment variable and push people who use CI to just blindly set the environment variable in their tests. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Nov 6 13:19:59 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 6 Nov 2017 10:19:59 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106121817.4c3c2367@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> Message-ID: <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> On Nov 6, 2017, at 02:18, Paul Sokolovsky wrote: > What it will lead to is further fragmentation of the community. Python2 > vs Python3 split is far from being over, and now there're splits > between: > > * people who use "yield from" vs "await" > * people who use f-strings vs who don't > * people who rely on sorted nature of dict's vs who don't This is the classic argument of, do you proceed conservatively and use the lowest-common denominator that makes your code work with the widest range of versions, or do you ride the wave and adopt the latest and greatest features as soon as they?re available? Neither answer is wrong or right? for everyone. It?s also a debate as old as the software industry. Every package, project, company, developer, community will have to decide for themselves. 
Once you realize you can?t win, you?ve won! :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From rdmurray at bitdance.com Mon Nov 6 13:25:45 2017 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 06 Nov 2017 13:25:45 -0500 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: References: Message-ID: <20171106182548.0EFFF1310027@webabinitio.net> I agree with Steve. There is *cognitive* overhead to type annotations. I find that they make Python code harder to read and understand. So I object to them in the documentation and docstrings as well. (Note: while I agree that the notation is compact for the simple types, the fact that it would appear for some signatures and not for others is a show stopper from my point of view...consistency is important to reducing the cognitive overhead of reading the docs.) I'm dealing with the spread of annotations on my current project, having to ask programmers on the team to delete annotations that they've "helpfully" added that to my mind serve no purpose on a project of the size we're developing, where we aren't using static analysis for anything. Maybe I'm being a curmudgeon standing in the way of progress, but I'm pretty sure there are a number of people in my camp :) On Mon, 06 Nov 2017 16:22:23 +0000, Steve Holden wrote: > While I appreciate the value of annotations I think that *any* addition of > them to the stdlib would complicate an important learning resource > unnecessarily. S > > Steve Holden > > On Mon, Nov 6, 2017 at 4:07 PM, Victor Stinner > wrote: > > > Related to annotations, are you ok to annotate basic types in the > > *documentation* and/or *docstrings* of the standard library? > > > > For example, I chose to document the return type of time.time() (int) > > and time.time_ns() (float). It's short and I like how it's formatted. > > See the current rendered documentation: > > > > https://docs.python.org/dev/library/time.html#time.time > > > > "Annotations" in the documentation and docstrings have no impact on > > Python runtime performance. Annotations in docstrings makes them a few > > characters longer and so impact the memory footprint, but I consider > > that the overhead is negligible, especially when using python3 -OO. From brett at python.org Mon Nov 6 13:37:03 2017 From: brett at python.org (Brett Cannon) Date: Mon, 06 Nov 2017 18:37:03 +0000 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: <20171106182548.0EFFF1310027@webabinitio.net> References: <20171106182548.0EFFF1310027@webabinitio.net> Message-ID: On Mon, Nov 6, 2017, 10:27 R. David Murray, wrote: > I agree with Steve. There is *cognitive* overhead to type annotations. > I find that they make Python code harder to read and understand. So I > object to them in the documentation and docstrings as well. (Note: > while I agree that the notation is compact for the simple types, the > fact that it would appear for some signatures and not for others is a > show stopper from my point of view...consistency is important to reducing > the cognitive overhead of reading the docs.) 
I'm dealing with the spread of annotations on my current project, having to ask programmers on the team to delete annotations that they've "helpfully" added that to my mind serve no purpose on a project of the size we're developing, where we aren't using static analysis for anything. Maybe I'm being a curmudgeon standing in the way of progress, but I'm pretty sure there are a number of people in my camp :) On Mon, 06 Nov 2017 16:22:23 +0000, Steve Holden wrote: > While I appreciate the value of annotations I think that *any* addition of > them to the stdlib would complicate an important learning resource > unnecessarily. S > > Steve Holden > > On Mon, Nov 6, 2017 at 4:07 PM, Victor Stinner > > wrote: > > > Related to annotations, are you ok to annotate basic types in the > > > *documentation* and/or *docstrings* of the standard library? > > > > > > For example, I chose to document the return type of time.time() (int) > > > and time.time_ns() (float). It's short and I like how it's formatted. > > > See the current rendered documentation: > > > > > > https://docs.python.org/dev/library/time.html#time.time > > > > > > "Annotations" in the documentation and docstrings have no impact on > > > Python runtime performance. Annotations in docstrings makes them a few > > > characters longer and so impact the memory footprint, but I consider > > > that the overhead is negligible, especially when using python3 -OO. From brett at python.org Mon Nov 6 13:37:03 2017 From: brett at python.org (Brett Cannon) Date: Mon, 06 Nov 2017 18:37:03 +0000 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: <20171106182548.0EFFF1310027@webabinitio.net> References: <20171106182548.0EFFF1310027@webabinitio.net> Message-ID: On Mon, Nov 6, 2017, 10:27 R. David Murray, wrote: > I agree with Steve. There is *cognitive* overhead to type annotations. > I find that they make Python code harder to read and understand. So I > object to them in the documentation and docstrings as well. (Note: > while I agree that the notation is compact for the simple types, the > fact that it would appear for some signatures and not for others is a > show stopper from my point of view...consistency is important to reducing > the cognitive overhead of reading the docs.) > > I'm dealing with the spread of annotations on my current project, > having to ask programmers on the team to delete annotations that they've > "helpfully" added that to my mind serve no purpose on a project of the > size we're developing, where we aren't using static analysis for anything. I think this is the key point in your situation if the project is private. > Maybe I'm being a curmudgeon standing in the way of progress, but I'm > pretty sure there are a number of people in my camp :) The key thing here is there are people like me who are using your analyzers (and you are as well indirectly since the CLA bot is fully type hinted :). I think the key question is whether we expect typeshed to keep up with richer annotations using typing or are basic ones in the stdlib going to leave less of a gap in support in the long-term? To be honest, I suspect most Python code in the stdlib would require protocols to be accurate (C code is another matter), but return type hints could be reasonably accurate. -Brett > On Mon, 06 Nov 2017 16:22:23 +0000, Steve Holden > wrote: > > While I appreciate the value of annotations I think that *any* addition > of > > them to the stdlib would complicate an important learning resource > > unnecessarily. S > > > > Steve Holden > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Nov 6 13:40:19 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 6 Nov 2017 10:40:19 -0800 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: References: Message-ID: <82354225-7BFE-48D2-B54C-10BA129BB5E9@python.org> On Nov 6, 2017, at 08:02, Victor Stinner wrote: > > While discussions on the typing module are still hot, what do you > think of allowing annotations in the standard libraries, but limited > to a few basic types: I'm still -1 on adding annotations to the stdlib, despite their increasing use out in the wild, for the reasons that Steve and David have pointed out. (Let's let Eric be the one that breaks the mold with data classes. Then we can blame him!) -Barry -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From skip.montanaro at gmail.com Mon Nov 6 13:55:23 2017 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 6 Nov 2017 12:55:23 -0600 Subject: [Python-Dev] Partial support of a platform In-Reply-To: References: Message-ID: > The PEP 11 has a nice description to get a *full* support of a new platform: > https://www.python.org/dev/peps/pep-0011/ PEP 11 defines the endpoint, full support, and several requirements to call a platform fully supported. It would be nice if a process was defined for getting from "no support" to "full support." I think that to be as supportive as possible (no pun intended), it would make sense to grant people check-in privileges to a branch (if that's possible or desirable), or, if not, list forks which are intended to add support for various platforms which are moving toward full support. I don't know if PEP 11 is the right place to track such activity, relatively few updates should be required. I doubt it will be an onerous burden. Something like: * ButterflyOS - Victor Stinner (victor.stinner at gmail.com) is working to add CPython support for this platform on this Git branch: https://github.com/python/Butterfly * MothOS - Skip Montanaro (skip.montanaro at gmail.com) is working to add CPython support for this platform on this Git branch: https://github.com/smontanaro/Moth Interested parties would be directed to contact the pilots of those branches if they wanted to contribute. Skip From alex.gaynor at gmail.com Mon Nov 6 14:00:21 2017 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 6 Nov 2017 14:00:21 -0500 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: <20171106185145.mfgq6qylrugk6nqo@python.ca> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> Message-ID: I also feel this decision was a mistake. If there's a consensus to revert, I'm happy to draft a PEP. Alex On Nov 6, 2017 1:58 PM, "Neil Schemenauer" wrote: > On 2017-11-06, Nick Coghlan wrote: > > Gah, seven years on from Python 2.7's release, I still get caught by > > that. I'm tempted to propose we reverse that decision and go back to > > enabling them by default :P > > Either enable them by default or make them really easy to enable for > development evironments. I think some setting of the PYTHONWARNINGS > evironment variable should do it. It is not obvious to me how to do > it though. Maybe there should be an environment variable that does > it more directly. E.g. > > PYTHONWARNDEPRECATED=1 > > Another idea is to have venv to turn them on by default or, based on > a command-line option, do it. Or, maybe the unit testing frameworks > should turn on the warnings when they run. > > The current "disabled by default" behavior is obviously not working > very well. I had them turned on for a while and found quite a > number of warnings in what are otherwise high-quality Python > packages. Obviously the vast majority of developers don't have them > turned on. > > Regards, > > Neil > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nas-python at arctrix.com Mon Nov 6 13:51:45 2017 From: nas-python at arctrix.com (Neil Schemenauer) Date: Mon, 6 Nov 2017 12:51:45 -0600 Subject: [Python-Dev] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> Message-ID: <20171106185145.mfgq6qylrugk6nqo@python.ca> On 2017-11-06, Nick Coghlan wrote: > Gah, seven years on from Python 2.7's release, I still get caught by > that. I'm tempted to propose we reverse that decision and go back to > enabling them by default :P Either enable them by default or make them really easy to enable for development environments. I think some setting of the PYTHONWARNINGS environment variable should do it. It is not obvious to me how to do it though. Maybe there should be an environment variable that does it more directly. E.g. PYTHONWARNDEPRECATED=1 Another idea is to have venv turn them on by default or, based on a command-line option, do it. Or, maybe the unit testing frameworks should turn on the warnings when they run. The current "disabled by default" behavior is obviously not working very well. I had them turned on for a while and found quite a number of warnings in what are otherwise high-quality Python packages. Obviously the vast majority of developers don't have them turned on. Regards, Neil From pmiscml at gmail.com Mon Nov 6 14:07:57 2017 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Mon, 6 Nov 2017 21:07:57 +0200 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> Message-ID: <20171106210757.25e14be4@x230> Hello, On Mon, 06 Nov 2017 17:58:47 +0000 Brett Cannon wrote: [] > > Why suddenly once in 25 years there's a need to do something to > > dict's, violating computer science background behind them (one of > > the reason enough people loved Python comparing to other "practical > > hack" languages)? > > I don't understand what "computer science background" is being > violated? I tried to explain that in the previous mail; I can try a different angle. So, please open your favorite CS book (better, a few) and look up "abstract data types", then "mapping/associative array" and "list". We can use Wikipedia too: https://en.wikipedia.org/wiki/Associative_array. So, please look up: "Operations associated with this data type allow". And you'll see that no "ordering"-related operations are defined. Vice versa, looking at "sequence" operations, there will be "prev/next", maybe "get n'th" element operations, implying ordering. Python used to be a perfect application of these principles. Its dict was a perfect CS implementation of an abstract associative array, and its list - of the "sequence" abstract type (with an additional guarantee of O(1) random element access). People knew and rejoiced that Python is built on solid science principles, or could *learn* them from it. That will no longer be true, with a sound concept being replaced with an on-the-spot practical hack, choosing properties of a random associative array algorithm implementation over properties of a superset of such algorithms (many of which, again, don't offer any orderedness guarantees).
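To illustrate: a minimal mapping that honours the classic associative-array ADT - insert, look up, delete - while promising nothing about order. Hypothetical toy code in the spirit of the two-list approach, not MicroPython's actual implementation:

    class TinyMap:
        def __init__(self):
            self._keys = []
            self._values = []

        def __setitem__(self, key, value):
            try:
                i = self._keys.index(key)   # O(n) lookup, minimal memory
                self._values[i] = value
            except ValueError:
                self._keys.append(key)
                self._values.append(value)

        def __getitem__(self, key):
            try:
                return self._values[self._keys.index(key)]
            except ValueError:
                raise KeyError(key) from None

        def __delitem__(self, key):
            i = self._keys.index(key)
            # Swap-with-last keeps deletion cheap but scrambles insertion
            # order - and the ADT contract is still fully satisfied.
            self._keys[i] = self._keys[-1]
            self._values[i] = self._values[-1]
            self._keys.pop()
            self._values.pop()

Every operation the ADT defines works; iteration order simply isn't part of the contract.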
I know though what will be replied (based on the replies below): "all these are implementation details" - no, orderness vs non-orderness of a mapping algorithm is an implementation detail; "users shouldn't know all that" - they should, that's the real knowledge, and up until now, they could learn it from the *Python docs*; "we can't predict the future" - we don't need to, we just need to know the past (25 years in our case) and understand why it was done like that. I don't think Guido couldn't have coded it ordered in 1991; it's just not natural for a mapping type to be so, and in 2017, it's no more natural than it was in 1991. MicroPython in particular appeared because Python offered all the CS-sound properties and freedom and alternative choices for implementation (more so than any other scripting language). It's losing it, and not just to MicroPython's surprise. [] -- Best regards, Paul mailto:pmiscml at gmail.com From tseaver at palladion.com Mon Nov 6 14:07:56 2017 From: tseaver at palladion.com (Tres Seaver) Date: Mon, 6 Nov 2017 14:07:56 -0500 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: <20171106182548.0EFFF1310027@webabinitio.net> References: <20171106182548.0EFFF1310027@webabinitio.net> Message-ID: On 11/06/2017 01:25 PM, R. David Murray wrote: > Maybe I'm being a curmudgeon standing in the way of progress, but I'm > pretty sure there are a number of people in my camp :) I'm definitely there: anything which optimizes machine-readability over readability for the Mark 1 eyeball is a lose. Tres. -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From eric at trueblade.com Mon Nov 6 14:21:16 2017 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 6 Nov 2017 14:21:16 -0500 Subject: [Python-Dev] Allow annotations using basic types in the stdlib? In-Reply-To: <82354225-7BFE-48D2-B54C-10BA129BB5E9@python.org> References: <82354225-7BFE-48D2-B54C-10BA129BB5E9@python.org> Message-ID: On 11/6/2017 1:40 PM, Barry Warsaw wrote: > On Nov 6, 2017, at 08:02, Victor Stinner wrote: >> >> While discussions on the typing module are still hot, what do you >> think of allowing annotations in the standard libraries, but limited >> to a few basic types: > > I'm still -1 on adding annotations to the stdlib, despite their increasing use out in the wild, for the reasons that Steve and David have pointed out. (Let's let Eric be the one that breaks the mold with data classes. Then we can blame him!) Thanks for volunteering me! Note that dataclasses completely ignores the annotations (they could all be None, for all I care), except for the one specific case of ClassVar. And that's still up for discussion, see https://github.com/ericvsmith/dataclasses/issues/61. Eric. From paul at ganssle.io Mon Nov 6 14:23:13 2017 From: paul at ganssle.io (Paul G) Date: Mon, 6 Nov 2017 14:23:13 -0500 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> Message-ID: <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> Is there a major objection to just adding in explicit syntax for order-preserving dictionaries? To some extent that seems like a reasonable compromise position in an "explicit is better than implicit" sense.
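(A sketch of what that could look like -- note that the literal form below is purely hypothetical strawman syntax of mine, not something anyone has specified:)

from collections import OrderedDict

# Today, order-dependence has to be spelled out with an import:
headers = OrderedDict([('Host', 'example.com'), ('Via', 'proxy-a')])

# A hypothetical explicit order-preserving literal, by analogy with
# the b'' and f'' string prefixes:
#     headers = o{'Host': 'example.com', 'Via': 'proxy-a'}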
A whole lot of code is out there that doesn't require or expect order-preserving dictionaries - it would be nice to be able to differentiate out the parts where order actually *does* matter. (Of course, given that CPython's implementation is order-preserving, a bunch of code is probably now being written that implicitly relies on this detail, but at least having syntax that makes that clear would give people the *option* to make the assumption explicit). On 11/06/2017 01:19 PM, Barry Warsaw wrote: > On Nov 6, 2017, at 02:18, Paul Sokolovsky wrote: > >> What it will lead to is further fragmentation of the community. Python2 >> vs Python3 split is far from being over, and now there're splits >> between: >> >> * people who use "yield from" vs "await" >> * people who use f-strings vs who don't >> * people who rely on sorted nature of dict's vs who don't > > This is the classic argument of, do you proceed conservatively and use the lowest-common denominator that makes your code work with the widest range of versions, or do you ride the wave and adopt the latest and greatest features as soon as they're available? > > Neither answer is wrong or right... for everyone. It's also a debate as old as the software industry. Every package, project, company, developer, community will have to decide for themselves. Once you realize you can't win, you've won! :) > > -Barry > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From barry at python.org Mon Nov 6 14:33:10 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 6 Nov 2017 11:33:10 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> Message-ID: <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> On Nov 6, 2017, at 11:23, Paul G wrote: > > Is there a major objection to just adding in explicit syntax for order-preserving dictionaries? I don't think new syntax is necessary. We already have OrderedDict, which to me is a perfectly sensible way to spell "I need a mapping that preserves insertion order", and the extra import doesn't bother me. I'm not saying whether or not to make the language guarantee that built-in dict preserves order. I'm just saying that if we don't make that language change, we already have everything we need to support both use cases. If we did make the change, it's possible we would need a way to explicitly say that order is not preserved. That seems a little weird to me, but I suppose it could be useful. I like the idea previously brought up that iteration order be deliberately randomized in that case, but we'd still need a good way to spell that. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From pmiscml at gmail.com Mon Nov 6 14:56:38 2017 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Mon, 6 Nov 2017 21:56:38 +0200 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> Message-ID: <20171106215638.04031849@x230> Hello, On Mon, 6 Nov 2017 11:33:10 -0800 Barry Warsaw wrote: [] > If we did make the change, it's possible we would need a way to > explicitly say that order is not preserved. That seems a little weird I recently was working on a more or less complex dataflow propagation problem. It should converge to a fixed point, and it did, but on different runs, to different ones. So, I know that I'm a bad programmer, need to do more of my homework and grow. I know that if I rewrite it in C++ or C, it'll be unstable in the same way, because it's buggy. (Heck, over these years, I learned that I don't need to rewrite things in C/C++, because Python is the *real* language, which works the way computers do, without sugaring that up). I need to remember that, because with Python 3.7, I may become a good-programmer-in-a-ponyland-of-ordered-dicts. Btw, in all this discussion, I don't remember anyone mentioning sets. I don't recall the way they're implemented in CPython, but they have strong conceptual and semantic resemblance to dict's. So, what about them, do they become ordered too? > > Cheers, > -Barry > -- Best regards, Paul mailto:pmiscml at gmail.com From storchaka at gmail.com Mon Nov 6 15:28:36 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 6 Nov 2017 22:28:36 +0200 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106215638.04031849@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> <20171106215638.04031849@x230> Message-ID: 06.11.17 21:56, Paul Sokolovsky пише: > Btw, in all this discussion, I don't remember anyone mentioning sets. I > don't recall the way they're implemented in CPython, but they have > strong conceptual and semantic resemblance to dict's. So, what about > them, do they become ordered too? No, sets are not ordered, and it seems they will not be ordered for at least the next several years. From steve.dower at python.org Mon Nov 6 16:16:25 2017 From: steve.dower at python.org (Steve Dower) Date: Mon, 6 Nov 2017 13:16:25 -0800 Subject: [Python-Dev] Partial support of a platform In-Reply-To: References: Message-ID: <1a182518-2117-3aef-1a4d-c4f4973f60e3@python.org> On 06Nov2017 0941, Victor Stinner wrote: > [SNIP] > > But the question here is more about "partial" support. > > While changes are usually short, I dislike applying them to Python 2.7 > and/or Python 3.6, until a platform is fully supported. I prefer to > first see a platform fully supported to see how many changes are > required and to make sure that we get someone involved to maintain the > code (handle new issues). > > Example of platforms: MinGW, Cygwin, OpenBSD, NetBSD, xWorks RTOS, etc.
I appreciate the desire for changes to be made upstream, especially on code that changes frequently enough to make it difficult to patch reliably (this is basically the entire premise behind my PEP 551). At the same time, I don't like carrying the burden of code for platforms we do not support (hence PEP 551 doesn't really add any interesting code). There is a balance to be found here, though. I don't believe CPython *partially* supports any platforms - either they are fully supported or they are not supported. However, we can and should do things that help other people support their platforms, such as making sure build options can be specified by environment variables. At the same time, we can and should *exclude* things that require platform-specific testing in core (for example, predefined options for building for a specific platform). Similarly, if someone wanted to add specific platform support to a stdlib module via "if sys.platform ...", I'd be hesitant. However, if they refactored it such that it was easier *for a custom build of CPython* to provide/omit certain features (e.g. https://github.com/python/cpython/blob/30f4fa456ef626ad7a92759f492ec7a268f7af4e/Lib/threading.py#L1290-L1296 ) then I'd be more inclined to accept it - but only if there was no behavioural change on supported platforms. Does that make sense? I'm not sure whether we ought to capture some general guidelines somewhere on how to make decisions around this, but we should certainly have the discussion here first anyway. Cheers, Steve From solipsis at pitrou.net Mon Nov 6 16:24:47 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Nov 2017 22:24:47 +0100 Subject: [Python-Dev] Partial support of a platform References: Message-ID: <20171106222447.399e76fa@fsol> On Mon, 6 Nov 2017 18:41:30 +0100 Victor Stinner wrote: > > Example of platforms: MinGW, Cygwin, OpenBSD, NetBSD, xWorks RTOS, etc. We support POSIX-compatible platforms. Do OpenBSD and NetBSD need special care? Regards Antoine. From victor.stinner at gmail.com Mon Nov 6 16:49:06 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 22:49:06 +0100 Subject: [Python-Dev] Partial support of a platform In-Reply-To: <20171106222447.399e76fa@fsol> References: <20171106222447.399e76fa@fsol> Message-ID: 2017-11-06 22:24 GMT+01:00 Antoine Pitrou : > We support POSIX-compatible platforms. Do OpenBSD and NetBSD need > special care? Aha, "POSIX", you are funny Antoine :-D If it was a single #ifdef in the whole code base, I wouldn't have to start such a thread on python-dev :-) Open issues with "OpenBSD" in the title: 31630: math.tan has poor accuracy near pi/2 on OpenBSD and NetBSD 31631: test_c_locale_coercion fails on OpenBSD 31635: test_strptime failure on OpenBSD 31636: test_locale failure on OpenBSD 25342: test_json segfault on OpenBSD 25191: test_getsetlocale_issue1813 failed on OpenBSD 23470: OpenBSD buildbot uses wrong stdlib 12588: test_capi.test_subinterps() failed on OpenBSD (powerpc) 12589: test_long.test_nan_inf() failed on OpenBSD (powerpc) CPython already contains 11 "#ifdef (...) __OpenBSD__" in C and 11 sys.platform.startswith('openbsd') in Python.
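(To give a feel for what those guards look like in practice, here is an illustrative sketch -- mine, not an actual excerpt from the test suite -- of how the math.tan issue above (bpo-31630) could be guarded on the Python side:)

import math
import sys
import unittest

class TanAccuracyTest(unittest.TestCase):
    @unittest.skipIf(sys.platform.startswith(('openbsd', 'netbsd')),
                     'libm tan() loses accuracy near pi/2 (bpo-31630)')
    def test_tan_near_pi_over_two(self):
        # math.pi/2 is slightly below the true pi/2, so a good libm
        # returns a huge positive value here (about 1.6e16).
        self.assertGreater(math.tan(math.pi / 2), 1e15)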
Supporting a "POSIX" platform requires some changes :-) Open issues with "NetBSD" in the title: 31925: [NetBSD] test_socket creates too many locks 30512: CAN Socket support for NetBSD 31927: Fix compiling the socket module on NetBSD 8 and other issues 31630: math.tan has poor accuracy near pi/2 on OpenBSD and NetBSD 31894: test_timestamp_naive failed on NetBSD CPython already contains 17 "#ifdef (...) __NetBSD__" and 7 "platform.startswith('netbsd')" in Python. MinGW: ... hum, there are 57 open MinGW issues, use the bug tracker to list them ;-) Cygwin: ... hum, there are 23 open Cygwin issues, use the bug tracker to list them ;-) etc. Victor From victor.stinner at gmail.com Mon Nov 6 16:54:02 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 22:54:02 +0100 Subject: [Python-Dev] Partial support of a platform In-Reply-To: <1a182518-2117-3aef-1a4d-c4f4973f60e3@python.org> References: <1a182518-2117-3aef-1a4d-c4f4973f60e3@python.org> Message-ID: 2017-11-06 22:16 GMT+01:00 Steve Dower : > I don't believe CPython *partially* supports any platforms - either they are > fully supported or they are not supported. Ok. So there are two questions: * Where is the list of platforms "endorsed" by CPython ("fully supported") * How can we decide when to start to support support a new platform? Usually, developers begin with a nice looking change: short, not intrusive. But later, the real change comes, and it's much larger and uglier :-) And the rationale for the first change is "come on, it's just a few lines!". And I have troubles to justify to reject such patch. Slowly, I'm pointing people to the right section of the PEP 11. Victor From solipsis at pitrou.net Mon Nov 6 17:07:56 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Nov 2017 23:07:56 +0100 Subject: [Python-Dev] Partial support of a platform In-Reply-To: References: <20171106222447.399e76fa@fsol> Message-ID: <20171106230756.1ba50b05@fsol> On Mon, 6 Nov 2017 22:49:06 +0100 Victor Stinner wrote: > > CPython already contais 11 "#ifdef (...) __OpenBSD__" in C and 11 > sys.platform.startswith('openbsd') in Python. Supporting a "POSIX" > platform requires some changes :-) Yes... So, the question is: does OpenBSD already maintain a Python port? If so, then we should have them contribute their patches instead of fixing issues ourselves. If they don't want too, then we have no obligation to maintain them. OTOH, if you and Serhiy want to maintain them, you can do so. But we shouldn't promise anything for the future. The reason that testing on them is interesting, IMHO, is to chase potential Linux-isms in our code base. Circumventing {Free,Open,Net}BSD-specific deficiences is not. To put things differently: if a commit breaks {Free,Open,Net}BSD but leaves other platforms intact, I don't think it should be reverted. Regards Antoine. From victor.stinner at gmail.com Mon Nov 6 17:33:54 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 6 Nov 2017 23:33:54 +0100 Subject: [Python-Dev] Partial support of a platform In-Reply-To: <20171106230756.1ba50b05@fsol> References: <20171106222447.399e76fa@fsol> <20171106230756.1ba50b05@fsol> Message-ID: 2017-11-06 23:07 GMT+01:00 Antoine Pitrou : > The reason that testing on > them is interesting, IMHO, is to chase potential Linux-isms in our code > base. Circumventing {Free,Open,Net}BSD-specific deficiences is not. 
Serhiy found at least an interesting issue thanks to OpenBSD, a bug in memory debug hooks: https://bugs.python.org/issue31626 Victor From lukasz at langa.pl Mon Nov 6 18:20:52 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Mon, 6 Nov 2017 15:20:52 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> Message-ID: <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> > On Nov 5, 2017, at 11:28 PM, Nick Coghlan wrote: > > On 6 November 2017 at 16:36, Lukasz Langa wrote: > > - compile annotations like a small nested class body (but returning > the expression result, rather than None) > - emit MAKE_THUNK instead of the expression's opcodes > - emit STORE_ANNOTATION as usual > Is the motivation behind creating thunks vs. reusing lambdas just the difference in handling class-level scope? If so, would it be possible to just modify lambdas to behave thunk-like there? It sounds like this would strictly broaden the functionality of lambdas, in other words, wouldn't create backwards incompatibility for existing code. Reusing lambdas (extending them to support class-level scoping) would be a less scary endeavor than introducing a brand new language construct. With my current understanding I still think stringification is both easier to implement and understand by end users. The main usability win of thunks/lambdas is not very significant: evaluating them is as easy as calling them whereas strings require typing.get_type_hints(). I still think being able to access function-local state at time of definition is only theoretically useful. What would be significant though is if thunks/lambdas helped fix forward references in general. But I can't really see how that could work. - Ł -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From chris.barker at noaa.gov Mon Nov 6 18:23:18 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 6 Nov 2017 15:23:18 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> Message-ID: On Mon, Nov 6, 2017 at 11:23 AM, Paul G wrote: > (Of course, given that CPython's implementation is order-preserving, a > bunch of code is probably now being written that implicitly relies on > this detail, but at least having syntax that makes that clear would give > people the *option* to make the assumption explicit). This is a really key point -- a heck of a lot more people use cPython than read the language spec. And a great deal of code is developed with a certain level of ignorance -- try something, if it works, and your tests pass (if there are any), then you are done. So right now, there is more and more code out there that relies on a regular old dict being ordered. I've been struggling with teaching this right now -- my written-a-couple-years ago materials talk about dicts being arbitrary order, and then have a little demo of that fact.
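(The demo in question amounts to something like this sketch -- the sample outputs are illustrative, from runs under CPython 3.5 or earlier, where string hashing is randomized per process:)

# demo.py
d = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
print(list(d))
# One run might print:     ['four', 'one', 'three', 'two']
# Another run might print: ['two', 'three', 'one', 'four']
# CPython 3.6 prints ['one', 'two', 'three', 'four'] every time.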
Now I'm teaching with Python 3.6, and I had to add in something like: "cPython does, in fact, preserve order with dicts, but it should be considered an implementation detail, and not counted on ... (and by the way, so does PyPy, and ....)" I don't know, but I'm going to guess about 0% of my class is going to remember that... And if we added o{,,,} syntax it would be even worse, 'cause folks would forget to use it, as their code wouldn't behave differently (kind of like the 'b' flag on unix text files, or the u"string" where everything is ascii in that string...) in short -- we don't have a choice (unless we add an explicit randomization as some suggested -- but that just seems perverse...) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Mon Nov 6 18:32:53 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 7 Nov 2017 01:32:53 +0200 Subject: [Python-Dev] Partial support of a platform In-Reply-To: <20171106222447.399e76fa@fsol> References: <20171106222447.399e76fa@fsol> Message-ID: 06.11.17 23:24, Antoine Pitrou пише: > On Mon, 6 Nov 2017 18:41:30 +0100 > Victor Stinner wrote: >> >> Example of platforms: MinGW, Cygwin, OpenBSD, NetBSD, xWorks RTOS, etc. > > We support POSIX-compatible platforms. Do OpenBSD and NetBSD need > special care? Yes, because our support is GNU/Linux-centric. Features not supported on Linux degrade with time. From storchaka at gmail.com Mon Nov 6 18:41:02 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 7 Nov 2017 01:41:02 +0200 Subject: [Python-Dev] Partial support of a platform In-Reply-To: References: Message-ID: 06.11.17 19:41, Victor Stinner пише: > Example of platforms: MinGW, Cygwin, OpenBSD, NetBSD, xWorks RTOS, etc. > > By the way, is there an exhaustive list of platforms "officially" > supported by CPython? Several months ago there were a couple of buildbots including NetBSD and OpenBSD. What happened to them? Was the support of NetBSD and OpenBSD officially stopped as well as the support of OpenIndiana? From barry at python.org Mon Nov 6 18:56:24 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 06 Nov 2017 15:56:24 -0800 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: Nick Coghlan wrote: > On the 12-weeks-to-3.7-feature-freeze thread, Jose Bueno & I both > mistakenly thought the async/await deprecation warnings were missing > from 3.6. Sometimes the universe just throws synchronicity right in your face. I'm working on building an internal tool against Python 3.7 to take advantage of the very cool -X importtime feature. It's been a fun challenge, but mostly because of our external dependencies. For example, PyThreadState renamed its structure members, so both Cython and lxml needed new releases to adjust for this. That's the "easy" part; they've done it and those fixes work great. We also depend on ldap3. Suddenly we get a SyntaxError because ldap3 has a module ldap3/strategy/async.py. I say "suddenly" because of course *if* DeprecationWarnings had been enabled by default, I'm sure someone would have noticed that those imports were telling the developers about the impending problem in Python 3.6.
https://github.com/cannatag/ldap3/issues/428 This just reinforces my opinion that even though printing DeprecationWarning by default *can* be a hardship in some environments, it is on the whole a positive beneficial indicator that gives developers some runway to fix such problems. These types of apparently sudden breakages are the worst of all worlds. Cheers, -Barry From jsbueno at python.org.br Mon Nov 6 19:17:23 2017 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Mon, 6 Nov 2017 22:17:23 -0200 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> Message-ID: On 6 November 2017 at 17:23, Paul G wrote: > Is there a major objection to just adding in explicit syntax for order-preserving dictionaries? To some extent that seems like a reasonable compromise position in an "explicit is better than implicit" sense. A whole lot of code is out there that doesn't require or expect order-preserving dictionaries - it would be nice to be able to differentiate out the parts where order actually *does* matter. > > (Of course, given that CPython's implementation is order-preserving, a bunch of code is probably now being written that implicitly relies on this detail, but at least having syntax that makes that clear would give people the *option* to make the assumption explicit). I think the additional syntax would have the added benefit of preventing code that relies on the ordering of dict literals from being run on older versions, where it would trigger the subtle bugs that have already been mentioned. And also, forgotten along the way in this discussion, is the big disadvantage that mandatory ordered dicts would impose quite a significant overhead on other Python implementations. One that was mentioned along the way is transpilers, with Brython as an example - but there might be others. MicroPython is far from being the only implementation affected. js -><- > > On 11/06/2017 01:19 PM, Barry Warsaw wrote: >> On Nov 6, 2017, at 02:18, Paul Sokolovsky wrote: >> >>> What it will lead to is further fragmentation of the community. Python2 >>> vs Python3 split is far from being over, and now there're splits >>> between: >>> >>> * people who use "yield from" vs "await" >>> * people who use f-strings vs who don't >>> * people who rely on sorted nature of dict's vs who don't >> >> This is the classic argument of, do you proceed conservatively and use the lowest-common denominator that makes your code work with the widest range of versions, or do you ride the wave and adopt the latest and greatest features as soon as they're available? >> >> Neither answer is wrong or right... for everyone. It's also a debate as old as the software industry. Every package, project, company, developer, community will have to decide for themselves. Once you realize you can't win, you've won!
:) >> >> -Barry >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io >> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br > From donald at stufft.io Mon Nov 6 19:35:22 2017 From: donald at stufft.io (Donald Stufft) Date: Mon, 6 Nov 2017 19:35:22 -0500 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> Message-ID: <46A3BF96-E531-4A46-8171-464299774910@stufft.io> > On Nov 6, 2017, at 4:35 AM, Paul Moore wrote: > > 1. Without typing available, some programs using type annotations > won't run. That is, using type annotations (a > test-time/development-time feature) introduces a runtime dependency on > typing, and hence introduces an extra deployment concern (unlike other > development-type features like test frameworks). > 2. For some people, if something isn't in the Python standard library > (technically, in a standard install), it's not available (without > significant effort, or possibly not at all). For those people, a > runtime dependency on a non-stdlib typing module means "no use of type > annotations allowed". > 3. Virtual environments typically only include the stdlib, and "use > system site-packages" has affects more than just a single module, so > bundling still has issues for virtualenvs - and in the packaging > tutorials, we're actively encouraging people to use virtualenvs. We > (python-dev) can work around this issue for venv by auto-installing > typing, but that still leaves virtualenv (which is critically short of > resources, and needs to support older versions of Python, so a major > change like bundling typing is going to be a struggle to get > implemented). Maybe we just need to fully flesh out the idea of a "Python Core" (what exists now as "Python") and a "Python Platform" (Python Core + a select set of preinstalled libraries). Then typing can just be part of the Python Platform, and gets installed as part of your typical installation, but is otherwise an independent piece of code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Nov 6 19:42:24 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 7 Nov 2017 01:42:24 +0100 Subject: [Python-Dev] Partial support of a platform In-Reply-To: References: Message-ID: 2017-11-07 0:41 GMT+01:00 Serhiy Storchaka : > Several months ago there were a couple of buildbots including NetBSD and > OpenBSD. What happened to them? Was the support of NetBSD and OpenBSD > officially stopped as well as the support of OpenIndiana? While I don't recall seeing any NetBSD buildbot, I recall that we had an OpenBSD buildbot but it was red as far as I recall. Many tests were failing on OpenBSD: test_crypt, test_socket, and many others. Since we have many more "green" buildbot workers nowadays, I would prefer to wait until almost all tests pass on a "new" platform, before plugging a new buildbot. For example, I am now subscribed to the buildbot-status@ mailing list which may be flooded by failures until a buildbot becomes stable.
I'm ok with skipping enough tests for a "new" platform to get a "green" test suite, and fixing the skipped tests later. In my experience, keeping tests which fail forever is annoying and hides new regressions. Even if a buildbot which always fails shouldn't send email notifications, in practice, buildbot likes to send emails for various reasons, even if the previous build was already a failure and the new build is still a failure :-) But maybe we can adjust the buildbot configuration to not send notifications on some buildbot workers. Victor From eric at trueblade.com Mon Nov 6 19:44:03 2017 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 6 Nov 2017 19:44:03 -0500 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <2B81BE3B-AA10-4401-A629-5C6782444838@python.org> References: <2B81BE3B-AA10-4401-A629-5C6782444838@python.org> Message-ID: On 11/6/2017 1:12 PM, Barry Warsaw wrote: > On Nov 5, 2017, at 20:47, Nick Coghlan wrote: > >>> warnings.silence_deprecations() >>> python -X silence-deprecations >>> PYTHONSILENCEDEPRECATIONS=x >> >> It could be interesting to combine this with Tim's suggestion of >> putting an upper version limit on the silencing, so the above may look >> like: >> >> warnings.ignore_deprecations((3, 7)) >> python -X ignore-deprecations=3.7 >> PYTHONIGNOREDEPRECATIONS=3.7 > > That could be cool as long as we also support wildcards, e.g. defaults along the lines of my suggestions above to ignore everything. I'd like to see a command line or environment variable that says: "turn on deprecation warnings (and/or pending deprecation warnings), but do not show warnings for this list of modules (possibly regex's)". Like: PYTHONDEPRECATIONWARNINGSEXCEPTFOR=PIL,requests.* Then I'd just turn it on for all modules (empty string?), and when I got something that was flooding me with output I'd add it to the list. Eric. From ncoghlan at gmail.com Mon Nov 6 21:47:26 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Nov 2017 12:47:26 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> Message-ID: On 7 November 2017 at 05:00, Alex Gaynor wrote: > I also feel this decision was a mistake. If there's a consensus to revert, > I'm happy to draft a PEP. Even without consensus to revert, I think it would be great to have a PEP summarising some of the trade-offs between the different options. And I do think there's openness to improving the situation, it's just not clear yet that simple reversion of the previous change is the right approach to attaining that improvement. In particular, the point Georg Brandl raised about the poor interaction between the "-W" command line option and Python code that calls "warnings.filterwarnings" is still valid, and still a cause for concern (whereby if code disables deprecation warnings, there's no command line option or environment variable that can be used to override that setting and force all the warnings to be shown despite what the code says).
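(A minimal sketch of that failure mode, using only the documented warnings API:)

# app.py: some library code installs its own filter; filterwarnings()
# prepends it, so it outranks anything that -W or PYTHONWARNINGS set up
# at interpreter startup.
import warnings

warnings.filterwarnings('ignore', category=DeprecationWarning)

warnings.warn('this API is going away', DeprecationWarning)
# Prints nothing -- even when run as:
#     python -W error::DeprecationWarning app.py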
More generally, we seem to have (at least) 4 different usage models desired: - "show my users nothing about legacy calls" (the app dev with a user provided Python use case) - "show me all legacy calls" (the testing use case, needs to be a command line/environment setting) - "only show me legacy calls in *my* code" (the "I trust my deps to take care of themselves" use case) - "show me all legacy calls into a particular dependency" (the "I'm considering upgrading this" use case, handled by the warn module on PyPI) (Tangent: as indicated by my phrasing above, I'm wondering if "legacy calls"/"calls to legacy APIs" might be better user-centric terminology here, since those legacy calls are what we're actually trying to enable people to find and change - deprecation warnings are merely a tool designed to help serve that purpose) One of the biggest sources of tension is that silent-by-default is actually a good option when dependency upgrades are entirely in the hands of the application developer - if you're using something like pip-tools or pipenv to pin your dependencies to particular versions, and bundling a specific Python runtime with your application, and running your software through pre-merge CI, then "I am going to upgrade my dependencies now" is just another source code change, and your CI should pick up most major compatibility issues. In those scenarios, runtime deprecation warnings really are noise most of the time, since you already have a defined process for dealing with potential API breakages in dependencies. Where problems arise is when the decision on when to upgrade a dependency *isn't* in the hands of the application developer (e.g. folks using the system Python in a Linux distro, or an embedded Python scripting engine in a larger application). In those cases, the runtime deprecation warnings are important to notify users and maintainers that the next version upgrade for the underlying platform may break their code. This applies at higher levels, not just at the base Python platform layer. For example, if Fedora rebases matplotlib, then there's a chance that may require updating pitivi as well. Given that tension, it may be that this is an area where it makes sense for us to say "CPython uses [these warnings filters] by default, but redistributors may set the default warnings filter differently. Always use -W or PYTHONWARNINGS if you want to ensure a particular set of default filters are active." Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From songofacandy at gmail.com Mon Nov 6 21:50:32 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Tue, 7 Nov 2017 11:50:32 +0900 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: From the memory footprint and import time point of view, I prefer strings to thunks. We can intern strings, but not lambdas. A dict containing only strings is not tracked by the GC; a dict containing lambdas is.
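(On CPython that difference is easy to see directly -- a quick sketch:)

import gc

anns_as_strings = {'x': 'List[int]', 'y': 'Optional[str]'}
anns_as_lambdas = {'x': lambda: 'List[int]', 'y': lambda: 'Optional[str]'}

print(gc.is_tracked(anns_as_strings))  # False: only atomic contents
print(gc.is_tracked(anns_as_lambdas))  # True: functions are GC-tracked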
INADA Naoki On Tue, Nov 7, 2017 at 8:20 AM, Lukasz Langa wrote: > > >> On Nov 5, 2017, at 11:28 PM, Nick Coghlan wrote: >> >> On 6 November 2017 at 16:36, Lukasz Langa wrote: >> >> - compile annotations like a small nested class body (but returning >> the expression result, rather than None) >> - emit MAKE_THUNK instead of the expression's opcodes >> - emit STORE_ANNOTATION as usual >> > > Is the motivation behind creating thunks vs. reusing lambdas just the difference in handling class-level scope? If so, would it be possible to just modify lambdas to behave thunk-like there? It sounds like this would strictly broaden the functionality of lambdas, in other words, wouldn't create backwards incompatibility for existing code. > > Reusing lambdas (with extending them to support class-level scoping) would be a less scary endeavor than introducing a brand new language construct. > > With my current understanding I still think stringification is both easier to implement and understand by end users. The main usability win of thunks/lambdas is not very significant: evaluating them is as easy as calling them whereas strings require typing.get_type_hints(). I still think being able to access function-local state at time of definition is only theoretically useful. > > What would be significant though is if thunk/lambdas helped fixing forward references in general. But I can't really see how that could work. > > - Ł > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com > From tjreedy at udel.edu Mon Nov 6 22:23:49 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 6 Nov 2017 22:23:49 -0500 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> Message-ID: On 11/6/2017 9:47 PM, Nick Coghlan wrote: > On 7 November 2017 at 05:00, Alex Gaynor wrote: >> I also feel this decision was a mistake. If there's a consensus to revert, >> I'm happy to draft a PEP. > > Even without consensus to revert, I think it would be great to have a > PEP summarising some of the trade-offs between the different options. I hope that there is consensus that neither the old nor current default is necessarily the best we can do. > And I do think there's openness to improving the situation, it's just > not clear yet that simple reversion of the previous change is the > right approach to attaining that improvement. In particular, the point > Georg Brandl raised about the poor interaction between the "-W" > command line option and Python code that calls > "warnings.filterwarnings" is still valid, and still a cause for > concern (whereby if code disables deprecation warnings, there's no > command line option or environment variable that can be used to > override that setting and force all the warnings to be shown despite > what the code says). > > More generally, we seem to have (at least) 4 different usage models desired: > > - "show my users nothing about legacy calls" (the app dev with a user > provided Python use case) I believe that this is the current default. But in practice, it often also means 'show the developer nothing about legacy calls', and therein lies the problem with the current default.
> - "show me all legacy calls" (the testing use case, needs to be a > command line/environment setting) I believe that this was the old default. And I understand that it is the default when running the CPython test suite. For testing the stdlib, it works because we control all the dependencies of each module. But for other apps, the problem for both users and developers being overwhelmed with warnings from sources not in one's control. > - "only show me legacy calls in *my* code" (the "I trust my deps to > take care of themselves" use case) Perhaps this should be the new default, where 'my code' means everything under the directory containing the startup file. If an app developer either fixes or suppresses warnings from app code when they first appear, then users will seldom or never see warnings. So for users, this would then be close to the current default. Even for unmaintained code, the noise should be someone limited in most cases. > - "show me all legacy calls into a particular dependency" (the "I'm > considering upgrading this" use case, handled by the warn module on > PyPI) -- Terry Jan Reedy From brett at python.org Mon Nov 6 22:38:14 2017 From: brett at python.org (Brett Cannon) Date: Tue, 07 Nov 2017 03:38:14 +0000 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106210757.25e14be4@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> Message-ID: On Mon, 6 Nov 2017 at 11:08 Paul Sokolovsky wrote: > Hello, > > On Mon, 06 Nov 2017 17:58:47 +0000 > Brett Cannon wrote: > > [] > > > > Why suddenly once in 25 years there's a need to do something to > > > dict's, violating computer science background behind them (one of > > > the reason enough people loved Python comparing to other "practical > > > hack" languages)? > > > > I don't understand what "computer science background" is being > > violated? > > I tried to explain that in the previous mail, can try a different > angle. So, please open you favorite CS book (better few) and look up > "abstract data types", then "mapping/associative array" and "list". We > can use Wikipedia too: https://en.wikipedia.org/wiki/Associative_array. > So, please look up: "Operations associated with this data type allow". > And you'll see, that there're no "ordering" related operations are > defined. Vice versa, looking at "sequence" operations, there will be > "prev/next", maybe "get n'th" element operations, implying ordering. > I don't think you meant for this to come off as insulting, but telling me how to look up the definition of an associative array or map feels like you're putting me down. I also have a Ph.D. in computer science so I'm aware of the academic definitions of these data structures. > > Python used to be a perfect application of these principles. Its dict > was a perfect CS implementation of an abstract associative array, and > list - of "sequence" abstract type (with additional guarantee of O(1) > random element access). > People knew and rejoiced that Python is built on solid science > principles, or could *learn* them from it. 
That no longer will be true, > with a sound concept being replaced with on-the-spot practical hack, > choosing properties of a random associative array algorithm > implementation over properties of a superset of such algorithms (many > of which are again don't offer any orderness guarantees). > > I don't think it's fair to call the current dict implementation a hack. It's a sound design that has a certain property that we are discussing the masking of. As I said previously, I think this discussion comes down to whether we think there are pragmatic benefits to exposing the ordered aspects to the general developer versus not. -Brett > > > I know though what will be replied (based on the replies below): "all > these are implementation details" - no, orderness vs non-orderness of a > mapping algorithm is an implementation detail; "users shouldn't know all > that" - they should, that's the real knowledge, and up until now, they > could learn that from *Python docs*, "we can't predict future" - we > don't need, we just need to know the past (25 years in our case), and > understand why it was done like that, I don't think Guido couldn't code > it ordered in 1991, it's just not natural for a mapping type to be so, > and in 2017, it's not more natural than it was in 1991. > > MicroPython in particular appeared because Python offered all the > CS-sound properties and freedom and alternative choices for > implementation (more so than any other scripting language). It's losing > it, and not just for MicroPython's surprise. > > > [] > > > -- > Best regards, > Paul mailto:pmiscml at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Mon Nov 6 22:51:49 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 7 Nov 2017 14:51:49 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <20171106121817.4c3c2367@x230> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> Message-ID: <20171107035149.GF15990@ando.pearwood.info> On Mon, Nov 06, 2017 at 12:18:17PM +0200, Paul Sokolovsky wrote: > > I don't think that situation should change the decision, > > Indeed, it shouldn't. What may change it is the simple and obvious fact > that there's no need to change anything, as proven by the 25-year > history of the language. I disagree -- the history of Python shows that having dicts be unordered is a PITA for many Python programmers. Python eventually gained an ordered dict because it provides useful functionality that developers demand. Every new generation of Python programmers comes along and gets confused by why dicts mysteriously change their order from how they were entered, why doctests involving dicts break, why keyword arguments lose their order, why they have to import a module to get ordered dicts instead of having it be built-in, etc. Historically, we had things like ConfigParser reordering ini files when you write them. Having dicts be unordered is not a positive virtue, it is a limitation. Up until now, it was the price we've paid for having fast, O(1) dicts. Now we have a dict implementation which is fast, O(1) and ordered. Why pretend that we don't? This is a long-requested feature, and the cost appears to be small: by specifying this, all we do is rule out some, but not all, hypothetical future optimizations. Unordered dicts served CPython well for 20+ years, but I doubt many people will miss them. 
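(The doctest case alone shows the cost of arbitrary ordering -- a small sketch:)

def word_lengths(words):
    """Map each word to its length.

    >>> word_lengths(['spam', 'ham', 'eggs'])
    {'spam': 4, 'ham': 3, 'eggs': 4}
    """
    return {w: len(w) for w in words}

# With insertion-ordered dicts this doctest passes on every run; with
# arbitrary ordering it fails whenever the repr picks a different key
# order.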
> What happens now borders on technologic surrealism - the CPython, after > many years of persuasion, switched its dict algorithm, rather > inefficient in terms of memory, to something else, less inefficient > (still quite inefficient, taking "no overhead" as the baseline). Trading off space for time is a very common practice. You said that lookups on MicroPython's dicts are O(N). How efficient is µPy when doing a lookup of a dict with ten million keys? µPy has chosen to optimize for space, rather than time. That's great. But I don't think you should sneer at CPython's choice to optimize for time instead. And given that µPy's dicts already fail to meet the expected O(1) dict behaviour, and the already large number of functional differences (not just performance differences) between µPy and Python: http://docs.micropython.org/en/latest/pyboard/genrst/index.html I don't think that this will make much practical difference. MicroPython users already cannot expect to run arbitrary Python code that works in other implementations: the Python community is fragmented between µPy code written for tiny machines, and Python code for machines with lots of memory. > That > algorithm randomly had another property. Now there's a seemingly > serious talk of letting that property leak into the *language spec*, It will no more be a "leak" than any other deliberate design choice. > despite the fact that there can be unlimited number of dictionary > algorithms, most of them not having that property. Sure. So what? There's an unlimited number of algorithms that don't provide the functionality that we want. There are an unlimited number of sort algorithms, but Python guarantees that we're only going to use those that are stable. Something similar applies for method resolution (which µPy already violates), strings, etc. > What it will lead to is further fragmentation of the community. Aren't you concerned about fragmenting the community because of the functional differences between MicroPython and the specs? Sometimes a small amount of fragmentation is unavoidable, and not necessarily a bad thing. > > P.S. If anyone does want to explore MicroPython's dict implementation, > and see if there might be an alternate implementation strategy that > offers both O(1) lookup and guaranteed ordering without using > additional memory That would be the first programmer in the history to have a cake and eat it too. Memory efficiency, runtime efficiency, sorted order: choose 2 of 3. Given that you state that µPy dicts are O(N) and unordered, does that mean you picked only 1 out of 3? -- Steve From ncoghlan at gmail.com Mon Nov 6 23:01:23 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Nov 2017 14:01:23 +1000 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> Message-ID: On 6 November 2017 at 12:18, Nick Coghlan wrote: > That particular dependency could also be avoided by defining an > "is_class_var(annotation)" generic function and a "ClassVar" helper > object in the dataclasses module.
For example: > > class _ClassVar: > def __init__(self, annotation): > self.annotation = annotation > > class _MakeClassVar: > def __getitem__(self, key): > return _ClassVar(key) > > ClassVar = _MakeClassVar() > > @functools.singledispatch > def is_class_var(annotation): > return isinstance(annotation, _ClassVar) > > It would put the burden on static analysers and the typing module to > understand that `dataclasses.ClassVar` meant the same thing > conceptually as `typing.ClassVar`, but I think that's OK. Eric filed a new issue for this idea here: https://github.com/ericvsmith/dataclasses/issues/61 As I indicated in my comment there, I'm now wondering if there might be an opportunity here whereby we could use the *dataclasses* module to define a stable non-provisional syntactically compatible subset of the typing module, and require folks to start explicitly depending on the typing module (as a regular PyPI dependency) if they want anything more sophisticated than that (Unions, Generics, TypeVars, etc). Unlike the typing module, dataclasses *wouldn't* attempt to give annotations any meaningful runtime semantics, they'd just be ordinary class instances (hence no complex metaclasses required, hence a relatively fast import time). Beyond the API already proposed in PEP 557, this would mean adding: * dataclasses.ClassVar (as proposed above) * dataclasses.Any (probably just set to the literal string "dataclasses.Any") * dataclasses.NamedTuple (as a replacement for typing.NamedTuple) * potentially dataclasses.is_class_var (instead of dataclasses implicitly snooping in sys.modules looking for a "typing" module) In effect, where we now have "typing" and "typing_extensions", we'd instead have "dataclasses" in the standard library (with the subset of annotations needed to define data record classes), and "typing" as an independently versioned module on PyPI. I think such an approach could address a few problems: - the useful-at-runtime concepts (in dataclasses) would be clearly separated from the static-type-analysis enablers (in typing) - type hints would still have a clear presence in the regular standard library, but the more complex parts would be opt-in (similar to statistics vs SciPy, secrets vs cryptography, etc) - as various concepts in typing matured and became genuinely stable, they could potentially "graduate" into the standard library's dataclasses module Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mertz at gnosis.cx Mon Nov 6 23:05:07 2017 From: mertz at gnosis.cx (David Mertz) Date: Mon, 6 Nov 2017 20:05:07 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> Message-ID: I strongly opposed adding an ordered guarantee to regular dicts. If the implementation happens to keep that, great. Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor, and we have a perfectly good explicit OrderedDict for those who want that. 
On Nov 6, 2017 7:39 PM, "Brett Cannon" wrote: > > > On Mon, 6 Nov 2017 at 11:08 Paul Sokolovsky wrote: > >> Hello, >> >> On Mon, 06 Nov 2017 17:58:47 +0000 >> Brett Cannon wrote: >> >> [] >> >> > > Why suddenly once in 25 years there's a need to do something to >> > > dict's, violating computer science background behind them (one of >> > > the reason enough people loved Python comparing to other "practical >> > > hack" languages)? >> > >> > I don't understand what "computer science background" is being >> > violated? >> >> I tried to explain that in the previous mail, can try a different >> angle. So, please open you favorite CS book (better few) and look up >> "abstract data types", then "mapping/associative array" and "list". We >> can use Wikipedia too: https://en.wikipedia.org/wiki/Associative_array. >> So, please look up: "Operations associated with this data type allow". >> And you'll see, that there're no "ordering" related operations are >> defined. Vice versa, looking at "sequence" operations, there will be >> "prev/next", maybe "get n'th" element operations, implying ordering. >> > > I don't think you meant for this to come off as insulting, but telling me > how to look up the definition of an associative array or map feels like > you're putting me down. I also have a Ph.D. in computer science so I'm > aware of the academic definitions of these data structures. > > >> >> Python used to be a perfect application of these principles. Its dict >> was a perfect CS implementation of an abstract associative array, and >> list - of "sequence" abstract type (with additional guarantee of O(1) >> random element access). > > >> People knew and rejoiced that Python is built on solid science >> principles, or could *learn* them from it. > > That no longer will be true, >> with a sound concept being replaced with on-the-spot practical hack, >> choosing properties of a random associative array algorithm >> implementation over properties of a superset of such algorithms (many >> of which are again don't offer any orderness guarantees). >> >> > I don't think it's fair to call the current dict implementation a hack. > It's a sound design that has a certain property that we are discussing the > masking of. As I said previously, I think this discussion comes down to > whether we think there are pragmatic benefits to exposing the ordered > aspects to the general developer versus not. > > -Brett > > >> >> >> I know though what will be replied (based on the replies below): "all >> these are implementation details" - no, orderness vs non-orderness of a >> mapping algorithm is an implementation detail; "users shouldn't know all >> that" - they should, that's the real knowledge, and up until now, they >> could learn that from *Python docs*, "we can't predict future" - we >> don't need, we just need to know the past (25 years in our case), and >> understand why it was done like that, I don't think Guido couldn't code >> it ordered in 1991, it's just not natural for a mapping type to be so, >> and in 2017, it's not more natural than it was in 1991. > > >> >> MicroPython in particular appeared because Python offered all the >> CS-sound properties and freedom and alternative choices for >> implementation (more so than any other scripting language). It's losing >> it, and not just for MicroPython's surprise. 
>> >> >> [] >> >> >> -- >> Best regards, >> Paul mailto:pmiscml at gmail.com >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mertz%40gnosis.cx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Nov 6 23:09:17 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Nov 2017 14:09:17 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: On 7 November 2017 at 03:42, Simon Cross wrote: > Maybe there are ways around these things, but I'm not really seeing > what's wrong with the current situation that can't be fixed with > slightly better CI setups (which are good for other reasons too). Given the status quo, how do educators learn that the examples they're teaching to their students are using deprecated APIs? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From steve at pearwood.info Mon Nov 6 23:14:26 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 7 Nov 2017 15:14:26 +1100 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io> <431FCB2E-1713-4200-A497-84B470D4EEC9@python.org> Message-ID: <20171107041426.GG15990@ando.pearwood.info> On Mon, Nov 06, 2017 at 11:33:10AM -0800, Barry Warsaw wrote: > If we did make the change, it's possible we would need a way to > explicit say that order is not preserved. That seems a little weird > to me, but I suppose it could be useful. Useful for what? Given that we will hypothetically have order-preserving dicts that perform no worse than unordered dicts, I'm struggling to think of a reason (apart from performance) why somebody would intentionally use a non-ordered dict. If performance was an issue, sure, it makes sense to have a non-ordered dict for when you don't want to pay the cost of keeping insertion order. But performance seems to be a non-issue. I can see people wanting a SortedDict which automatically sorts the keys into some specified order. If I really work at it, I can imagine that there might even be a use-case for randomizing the key order (like calling random.shuffle on the keys). But if you are willing to use a dict with arbitrary order, that means that *you don't care* what order the keys are in. If you don't care, then insertion order should be no better or worse than any other implementation-defined arbitrary order. > I like the idea previously > brought up that iteration order be deliberately randomized in that > case, but we'd still need a good way to spell that. That would only be in the scenario that we decide *not* to guarantee insertion-order preserving semantics for dicts, in order to prevent users from relying on an implementation feature that isn't a language guarantee. -- Steve
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
Message-ID: <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>

> On Nov 6, 2017, at 8:05 PM, David Mertz wrote:
>
> I strongly opposed adding an ordered guarantee to regular dicts. If the implementation happens to keep that, great. Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor, and we have a perfectly good explicit OrderedDict for those who want that.

I think this post is dismissive of the value that users would get from having reliable ordering by default.

Having worked with Python 3.6 for a while, it is repeatedly delightful to encounter the effects of ordering. When debugging, it is a pleasure to be able to easily see what has changed in a dictionary. When creating XML, it is a joy to see the attribs show up in the same order you added them. When reading a configuration, modifying it, and writing it back out, it is a godsend to have it written out in about the same order you originally typed it in. The same applies to reading and writing JSON. When adding a VIA header in an HTTP proxy, it is nice to not permute the order of the other headers. When generating url query strings for REST APIs, it is nice to have the parameter order match documented examples.

We've lived without order for so long that it seems that some of us now think data scrambling is a virtue. But it isn't. Scrambled data is the opposite of human friendly.

Raymond

P.S. Especially during debugging, it is often inconvenient, difficult, or impossible to bring in an OrderedDict after the fact or to inject one into third-party code that is returning regular dicts. Just because we have OrderedDict in collections doesn't mean that we always get to take advantage of it. Plain dicts get served to us whether we want them or not.

From ethan at ethanhs.me Tue Nov 7 00:20:17 2017
From: ethan at ethanhs.me (Ethan Smith)
Date: Mon, 6 Nov 2017 21:20:17 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To:
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
Message-ID:

> Beyond the API already proposed in PEP 557, this would mean adding:
>
> * dataclasses.ClassVar (as proposed above)
> * dataclasses.Any (probably just set to the literal string "dataclasses.Any")
> * dataclasses.NamedTuple (as a replacement for typing.NamedTuple)
> * potentially dataclasses.is_class_var (instead of dataclasses implicitly snooping in sys.modules looking for a "typing" module)
>
> In effect, where we now have "typing" and "typing_extensions", we'd instead have "dataclasses" in the standard library (with the subset of annotations needed to define data record classes), and "typing" as an independently versioned module on PyPI.

I'm not so keen on this because I think some things in typing (such as NamedTuple) probably deserve to be in the collections module. And some of the ABCs could probably also be merged with collections.abc, but doing this correctly and not all at once would be quite difficult.

I do think the typing concepts should be better integrated into the standard library. However, a fair amount of the classes you list (such as NamedTuple) are in and of themselves dependent on parts of typing.
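As a quick sketch of that dependency (my own illustration, with made-up field names): even the class-based NamedTuple syntax immediately pulls in other typing constructs:

    from typing import NamedTuple, Optional

    class Employee(NamedTuple):
        name: str
        id: Optional[int] = None  # Optional itself lives in typing

    print(Employee("Guido"))  # -> Employee(name='Guido', id=None)

So moving NamedTuple into collections without the rest of typing would still leave it coupled to the typing module.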
Cheers,
Ethan

> I think such an approach could address a few problems:
>
> - the useful-at-runtime concepts (in dataclasses) would be clearly separated from the static-type-analysis enablers (in typing)
> - type hints would still have a clear presence in the regular standard library, but the more complex parts would be opt-in (similar to statistics vs SciPy, secrets vs cryptography, etc)
> - as various concepts in typing matured and became genuinely stable, they could potentially "graduate" into the standard library's dataclasses module
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ethan%40ethanhs.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From songofacandy at gmail.com Tue Nov 7 00:40:07 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Tue, 7 Nov 2017 14:40:07 +0900
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
Message-ID:

I agree with Raymond. dict ordered by default makes for a better developer experience.

So my concern is: how important is the "language spec" for minor (sorry about my bad vocabulary) implementations? What's the difference between "MicroPython is 100% compatible with the language spec" and "MicroPython is almost compatible with the Python language spec, but has some restrictions"?

If it's very important, how about a "strong recommendation for implementations" instead of "language spec"? Users who don't care about implementations other than CPython and PyPy can rely on its usability.

Regards,
INADA Naoki

On Tue, Nov 7, 2017 at 2:11 PM, Raymond Hettinger wrote:
>
>> On Nov 6, 2017, at 8:05 PM, David Mertz wrote:
>>
>> I strongly opposed adding an ordered guarantee to regular dicts. If the implementation happens to keep that, great. Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor, and we have a perfectly good explicit OrderedDict for those who want that.
>
> I think this post is dismissive of the value that users would get from having reliable ordering by default.
>
> Having worked with Python 3.6 for a while, it is repeatedly delightful to encounter the effects of ordering. When debugging, it is a pleasure to be able to easily see what has changed in a dictionary. When creating XML, it is a joy to see the attribs show up in the same order you added them. When reading a configuration, modifying it, and writing it back out, it is a godsend to have it written out in about the same order you originally typed it in. The same applies to reading and writing JSON. When adding a VIA header in an HTTP proxy, it is nice to not permute the order of the other headers. When generating url query strings for REST APIs, it is nice to have the parameter order match documented examples.
>
> We've lived without order for so long that it seems that some of us now think data scrambling is a virtue. But it isn't.
> Scrambled data is the opposite of human friendly.
>
> Raymond
>
> P.S. Especially during debugging, it is often inconvenient, difficult, or impossible to bring in an OrderedDict after the fact or to inject one into third-party code that is returning regular dicts. Just because we have OrderedDict in collections doesn't mean that we always get to take advantage of it. Plain dicts get served to us whether we want them or not.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com

From ncoghlan at gmail.com Tue Nov 7 01:09:23 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 7 Nov 2017 16:09:23 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl>
Message-ID:

On 7 November 2017 at 09:20, Lukasz Langa wrote:
>
>> On Nov 5, 2017, at 11:28 PM, Nick Coghlan wrote:
>>
>> On 6 November 2017 at 16:36, Lukasz Langa wrote:
>>
>> - compile annotations like a small nested class body (but returning the expression result, rather than None)
>> - emit MAKE_THUNK instead of the expression's opcodes
>> - emit STORE_ANNOTATION as usual
>
> Is the motivation behind creating thunks vs. reusing lambdas just the difference in handling class-level scope? If so, would it be possible to just modify lambdas to behave thunk-like there? It sounds like this would strictly broaden the functionality of lambdas; in other words, it wouldn't create backwards incompatibility for existing code.
>
> Reusing lambdas (extending them to support class-level scoping) would be a less scary endeavor than introducing a brand new language construct.

I want to say "yes", but it's more "sort of", and at least arguably "no" (while you'd be using building blocks that already exist inside the compiler and eval loop, you'd be putting them together in a slightly new way that's a hybrid of the way lambdas work and the way class bodies work).

For the code execution part, class creation currently uses MAKE_FUNCTION (just like lambdas and def statements), and is able to inject the namespace returned by __prepare__ as the code execution namespace due to differences in the way the function's code object gets compiled. These are *exactly* the semantics you'd want for deferred annotations, but the way they're currently structured internally is inconvenient for your use case.

Compilation details here: https://github.com/python/cpython/blob/master/Python/compile.c#L1895
Code execution details here: https://github.com/python/cpython/blob/master/Python/bltinmodule.c#L167

It's the combination of using COMPILER_SCOPE_CLASS at compilation time with the direct call to "PyEval_EvalCodeEx" in __build_class__ that means you can't just treat annotations as a regular lambda function - lambdas are compiled with CO_OPTIMIZED set, and that means they won't read local variable values from the provided locals namespace, which in turn means you wouldn't be able to easily inject "vars(cls)" to handle the method annotation case.
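As a standalone sketch of that scoping difference (my example, not from the PEP):

    class C:
        x = 1
        y = x + 1      # works: the class body reads x from the class namespace
        f = lambda: x  # compiles, but C.f() raises NameError, because the
                       # lambda's code skips the class scope when resolving x

Making that lambda see "x" would require exactly the kind of locals-injection step described above.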
So that's where the idea of finally adding a "thunk" object came from: it would be to a class body as a lambda expression is to a function body, except that instead of relying on a custom call to PyEval_EvalCodeEx in __build_class__ the way class bodies do, it would instead define a suitable implementation of tp_call that accepted the locals namespace to use as a parameter.

The nice part of this approach is that even though it would technically be a new execution primitive, it's still one with well-established name resolution semantics: the behaviour we already use for class bodies.

> With my current understanding I still think stringification is both easier to implement and easier for end users to understand.

No matter how you slice it, PEP 563 *is* defining a new delayed execution primitive as part of the core language syntax. The question is whether we define it as a fully compiler-integrated primitive, with clearly specified lexical name resolution semantics that align with other existing constructs, or something bolted on to the side of the language without integrating it properly, which we then have to live with forever.

"This isn't visibly quoted, but it's a string anyway, so the compiler won't check it for syntax errors" isn't easy to understand. Neither is the complex set of rules you're proposing for what people will need to do in order to actually evaluate those strings and turn them back into runtime objects.

By contrast, "parameter: annotation" can be explained as "it's like 'lambda: expression', but instead of being a regular function with an explicit parameter list, the annotation is a deferred expression that accepts a locals namespace to use when called".

> The main usability win of thunks/lambdas is not very significant: evaluating them is as easy as calling them, whereas strings require typing.get_type_hints(). I still think being able to access function-local state at time of definition is only theoretically useful.

Your current proposal means that this code will work:

    class C:
        field = 1
        def method(a: C.field): pass

But this will fail:

    def make_class():
        class C:
            field = 1
            def method(a: C.field): pass

Dropping the "C." prefix would make the single class case work, but wouldn't help with the nested classes case:

    def make_class():
        class C:
            field = 1
            class D:
                def method(a: C.field): pass

Confusingly, though, this would still work:

    def make_class():
        class C:
            field = 1
            class D:
                field2 = C.field
                def method(a: field2): pass

All of that potential for future confusion around which lexical references will and won't work for annotation expressions can be avoided if we impose "Annotations will fully participate in the regular lexical scoping rules at the point where they appear in the code" as a design constraint on PEP 563.

> What would be significant though is if thunks/lambdas helped fix forward references in general. But I can't really see how that could work.

They'd only be able to help with forward references in the general case if a dedicated thunk expression was added. For example, combining the generator expression "parentheses are required" constraint with PEP 312's "simple implicit lambda" syntax proposal would give:

    T = TypeVar('T', bound=(:UserId))
    UserId = NewType('UserId', (:SomeType))
    Employee = NamedTuple('Employee', [('name', str), ('id', UserId)])

    Alias = Optional[(:SomeType)]
    AnotherAlias = Union[(:SomeType), (:OtherType)]

    cast((:SomeType), value)

    class C(Tuple[(:SomeType), (:OtherType)]): ...
However, rather than producing a zero-argument lambda (as PEP 312 proposed), this would instead produce a thunk object, which could either be called with zero arguments (thus requiring all names used to be resolvable as nonlocal, global, or builtin references), or else with a local namespace to use (which would be checked first for all variable names).

The main advantage of such a syntax is that it would make it easy for both humans and computers to distinguish the actual strings from the lazily evaluated expressions. However, I'd also consider that out of scope for PEP 563 - it's just a potential future enhancement that pursuing a thunk-based *solution* to PEP 563 would enable, in a way that a string-based solution won't.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Tue Nov 7 01:23:05 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 7 Nov 2017 16:23:05 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230>
Message-ID:

On 7 November 2017 at 03:27, Chris Jerdonek wrote:
> On Mon, Nov 6, 2017 at 4:11 AM Nick Coghlan wrote:
>> Getting from the "Works on CPython 3.6+ but is technically non-portable" state to a fully portable correct implementation that ensures a particular key order in the JSON file thus currently requires the following changes:
>
> Nick, it seems like this is more complicated than it needs to be. You can just pass sort_keys=True to json.dump() / json.dumps(). I use it for tests and human-readability all the time.

sort_keys is only equivalent to order preservation if the key order you want *is* alphabetical order. While that's typically a good enough assumption for JSON, it's not the case for things like CSV column order, TOML or ini-file setting order, etc.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From steve at pearwood.info Tue Nov 7 01:21:06 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 7 Nov 2017 17:21:06 +1100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
Message-ID: <20171107062106.GI15990@ando.pearwood.info>

On Mon, Nov 06, 2017 at 08:05:07PM -0800, David Mertz wrote:

> I strongly opposed adding an ordered guarantee to regular dicts. If the implementation happens to keep that, great.

That's the worst of both worlds. The status quo is that unless we deliberately perturb the dictionary order, developers will come to rely on implementation order (because that's what the CPython reference implementation actually offers, regardless of what the docs say). Consequently:

- people will be writing non-portable code, whether they know it or not;
- CPython won't be able to change the implementation, because it will break too much code;
- other implementations will be pressured to match CPython's implementation.

The only difference is that on the one hand we are honest and up-front about requiring order-preserving dicts, and on the other we still require it, but pretend that we don't.

And frankly, it seems rather perverse to intentionally perturb dictionary order just to keep our options open that someday there might be some algorithm which offers sufficiently better performance but doesn't preserve order.
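To make that concrete, this is the kind of code people are already writing against CPython 3.6 (a sketch of mine; the keys are made up):

    import json

    config = {"host": "example.org", "port": 8080, "debug": False}

    # On CPython 3.6 the keys come out in the order they were written,
    # so code and tests quietly start depending on that behaviour.
    print(json.dumps(config))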
Preserving order is useful, desirable, often requested functionality, and now that we have it, it would have to be one hell of an optimization to justify dropping it again. (It is like Timsort and stability. How much faster would sorting have needed to be to justify giving up sort stability? 50% faster? 100%? We wouldn't have done it for a 1% speedup.)

It would be better to relax the requirement that the builtin dict is used for those things that would benefit from improved performance. Is there any need for globals() to be the same mapping type as dict? Probably not. If somebody comes up with a much more efficient, non-order-preserving map ideal for globals, it would be better to change globals than dict. In my opinion.

> Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor,

I think you have a different definition of "poor" to me :-)

Nick has already done a survey of PyPy (which already has insertion-order preserving dicts), Jython, VOC, and Batavia, and they don't have any problem with this. IronPython is built on C#, which has order-preserving mappings. Nuitka is built on C++, and if C++ can't implement an order-preserving mapping, there is something terribly wrong with the world. Cython (I believe) uses CPython's implementation, as does Stackless.

The only well-known implementation that may have trouble with this is MicroPython, but it already changes the functionality of a lot of builtins and core language features, e.g. it uses a different method resolution order (so multiple inheritance won't work right), some builtins don't support slicing with three arguments, etc.

I think the evidence is excellent that other implementations shouldn't have a problem with this, unless (like MicroPython) they are targeting machines with tiny memory resources. µPy runs on the PyBoard, which I believe has under 200K of memory. I think we can all forgive µPy if it only *approximately* matches Python semantics.

--
Steve

From steve at pearwood.info Tue Nov 7 01:33:03 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 7 Nov 2017 17:33:03 +1100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171106183548.20fee86b@x230>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230>
Message-ID: <20171107063303.GJ15990@ando.pearwood.info>

On Mon, Nov 06, 2017 at 06:35:48PM +0200, Paul Sokolovsky wrote:

> For MicroPython, it would lead to quite an overhead to make dictionary items be in insertion order. As I mentioned, MicroPython optimizes for very low bookkeeping memory overhead, so lookups are effectively O(n), but orderedness will increase the constant factor significantly, perhaps 5x.

Paul, it would be good if you could respond to Raymond's earlier comments where he wrote:

    I've just looked at the MicroPython dictionary implementation and think they won't have a problem implementing O(1) compact dicts with ordering.

    The likely reason for the confusion is that they already have an option for an "ordered array" dict variant that does a brute-force linear search. However, their normal hashed lookup is very similar to ours and is easily amenable to being compact and ordered.
    See: https://github.com/micropython/micropython/blob/77a48e8cd493c0b0e0ca2d2ad58a110a23c6a232/py/map.c#L139

Raymond has also volunteered to assist with this.

> Also, arguably any algorithm which would *maintain* insertion order over mutating operations would be more complex and/or require more memory than one which doesn't.

I think it would be reasonable to say that builtin dicts only maintain insertion order for insertions, lookups, and changing the value. Any mutation which deletes keys may arbitrarily re-order the dict. If the user wants a stronger guarantee, then they should use OrderedDict.

--
Steve

From steve at pearwood.info Tue Nov 7 01:39:13 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 7 Nov 2017 17:39:13 +1100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io>
Message-ID: <20171107063913.GK15990@ando.pearwood.info>

On Mon, Nov 06, 2017 at 10:17:23PM -0200, Joao S. O. Bueno wrote:
> And also, forgotten along the discussion, is the big disadvantage that other Python implementations would have quite a significant overhead on mandatory ordered dicts.

I don't think that is correct.

Nick already did a survey, and found that C# (IronPython), Java (Jython and VOC) and Javascript (Batavia) all have acceptable insertion-order preserving mappings. C++ (Nuitka) surely won't have any problem with this (if C++ cannot implement an efficient order-preserving map, there is something terribly wrong with the world).

As for other languages that somebody might choose to build Python on (the Parrot VM, Haskell, D, Rust, etc), surely we shouldn't be limiting what Python does for the sake of hypothetical implementations in "underpowered" languages? I don't mean to imply that any of those examples are necessarily underpowered, but if language Foo is incapable of supporting an efficient ordered map, then language Foo is simply not good enough for a serious Python implementation. We shouldn't allow Python's evolution to be hamstrung by the requirement to support arbitrarily weak implementation languages.

> One that was mentioned along the way is transpilers, with Brython as an example - but there might be others.

Since Brython transpiles to Javascript, couldn't it use the standard Map object, which preserves insertion order?

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map

Quote:

    Description
    A Map object iterates its elements in insertion order

The ECMAScript 6 standard specifies that Map.prototype.forEach operates over the key/value pairs in insertion order:

https://tc39.github.io/ecma262/#sec-map-objects

--
Steve

From ncoghlan at gmail.com Tue Nov 7 01:39:53 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 7 Nov 2017 16:39:53 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io>
Message-ID:

On 7 November 2017 at 09:23, Chris Barker wrote:
>> in short -- we don't have a choice (unless we add an explicit randomization as some suggested -- but that just seems perverse...)
And this is the key point for me: "choosing not to choose" is effectively the same as standardising the feature, as enough Python code will come to rely on CPython's behaviour that most alternative implementations will feel obliged to start behaving the same way CPython does (with MicroPython being the potential exception due to memory usage constraints always winning over algorithmic efficiency concerns in that context).

We added ResourceWarning a while back to help discourage reliance on CPython promptly calling __del__ methods when dropping the last reference to an object. An equivalent for this case would be for dict objects to randomize iteration (a la Go), once again requiring folks to opt in via collections.OrderedDict to get guaranteed ordering (perhaps with an "o{}" dict display as new syntactic sugar).

But unless someone actually writes a PEP and implementation for that in the next 12 weeks (and Guido accepts it), then we'll have 2 releases and 3 years of CPython working a particular way increasing the inertia against making such a change in 3.8 (and beyond that, I'd say we'd be well and truly into de facto standardisation territory, and the chances of ever introducing deliberate perturbation of dict iteration order would drop to nil).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Tue Nov 7 02:28:24 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 7 Nov 2017 17:28:24 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171107062106.GI15990@ando.pearwood.info>
References: <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <20171107062106.GI15990@ando.pearwood.info>
Message-ID:

On 7 November 2017 at 16:21, Steven D'Aprano wrote:
> On Mon, Nov 06, 2017 at 08:05:07PM -0800, David Mertz wrote:
>> Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor,
>
> I think you have a different definition of "poor" to me :-)

While I think "poor" is understating the case, I think "excellent" (which you use later on) is overstating it. My own characterisation would be "at least arguably good enough".

> Nick has already done a survey of PyPy (which already has insertion-order preserving dicts), Jython, VOC, and Batavia, and they don't have any problem with this.

For these, my research only showed that their respective platforms have an order-preserving hashmap implementation available. What's entirely unclear at this point is how switching wholesale to that may impact the *performance* of these implementations (both in terms of speed and memory usage), and how much code churn would be involved in actually making the change.

Making the builtin dict() order-preserving may also still impose an ongoing complexity cost on these implementations if they end up having to split "the mapping we use for code execution namespaces" away from "the mapping we provide as the builtin dict". (That said, I think at least Jython already makes that distinction - I believe their string-only namespace dict is a separate type, whereas CPython plays dynamic optimisation games inside the regular dict type.)
So Barry's suggestion of providing an explicit "collections.UnorderedDict" as a consistent spelling for "an associative mapping without any ordering guarantees" is a reasonable one, even if its primary usage in CPython ends up being to ensure algorithms are compatible with collections that don't provide an inherently stable iteration order, and any associated performance benefits are mostly seen on other implementations. (As Paul S notes, such a data type would also serve a pedagogical purpose when teaching computer science principles.)

> IronPython is built on C#, which has order-preserving mappings.

I couldn't actually find a clearly suitable existing collection type in the .NET CLR - the one I linked was just the one commonly referenced as "good enough for most purposes". It had some constraints that meant it may not be suitable as a builtin dict type in a Python implementation (e.g. it looked to have a 32-bit length limit).

> Nuitka is built on C++, and if C++ can't implement an order-preserving mapping, there is something terribly wrong with the world. Cython (I believe) uses CPython's implementation, as does Stackless.

Right, the other C/C++ implementations that also target environments with at least 128 MiB+ RAM (and typically more) can reasonably be expected to tolerate similar space/speed trade-offs to those that CPython itself makes (and that's assuming they aren't just using CPython's data type implementations in the first place).

> The only well-known implementation that may have trouble with this is MicroPython, but it already changes the functionality of a lot of builtins and core language features, e.g. it uses a different method resolution order (so multiple inheritance won't work right), some builtins don't support slicing with three arguments, etc.
>
> I think the evidence is excellent that other implementations shouldn't have a problem with this, unless (like MicroPython) they are targeting machines with tiny memory resources. µPy runs on the PyBoard, which I believe has under 200K of memory. I think we can all forgive µPy if it only *approximately* matches Python semantics.

It runs on the ESP8266 as well, and that only has 96 kiB data memory. This means we're already talking 3-4 orders of magnitude difference in memory capacity and typical data set sizes between CPython and MicroPython use cases, and that's only accounting for the *low* end of CPython's use cases - once you expand out to multi-terabyte data sets (the very upper end of what a single x86-64 server can handle if you can afford the RAM for it), then we're talking 9-10 orders of magnitude between CPython's high end and MicroPython's low end.

So for CPython's target use cases algorithmic efficiency dominates performance, and we can afford to invest extra memory usage and startup overhead in service of more efficient data access algorithms. MicroPython's the opposite - you're going to run out of memory for data storage long before algorithmic inefficiency becomes your biggest problem, but wasted bookkeeping memory and startup overhead can cause problems even with small data sets.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From steve at pearwood.info Tue Nov 7 02:39:24 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 7 Nov 2017 18:39:24 +1100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <20171107062106.GI15990@ando.pearwood.info>
Message-ID: <20171107073924.GL15990@ando.pearwood.info>

On Tue, Nov 07, 2017 at 05:28:24PM +1000, Nick Coghlan wrote:
> On 7 November 2017 at 16:21, Steven D'Aprano wrote:
> > On Mon, Nov 06, 2017 at 08:05:07PM -0800, David Mertz wrote:
> >> Maybe OrderedDict can be rewritten to use the dict implementation. But the evidence that all implementations will always be fine with this restraint feels poor,
> >
> > I think you have a different definition of "poor" to me :-)
>
> While I think "poor" is understating the case, I think "excellent" (which you use later on) is overstating it. My own characterisation would be "at least arguably good enough".

Fair enough, and thanks for elaborating.

--
Steve

From tds333 at mailbox.org Tue Nov 7 03:00:06 2017
From: tds333 at mailbox.org (Wolfgang)
Date: Tue, 7 Nov 2017 09:00:06 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171104173013.GA4005@bytereef.org>
References: <20171104173013.GA4005@bytereef.org>
Message-ID:

Hi,

I have to admit sometimes I don't understand why small things produce so much mail traffic. :-)

If I use a mapping like dict, most of the time I don't care if it is ordered. If I need an ordering, I use OrderedDict. In a library, if I need a dict to be ordered, most of the time there is a parameter allowing me to do this, or to specify the class used as the dict. If this is not the case it can be fixed. In rare cases a workaround is needed.

As of now I have dict and OrderedDict; it is clear and explicit. No need to change.

Yes, it is useful for debugging to have things ordered. Yes, in other places the implicit ordering is also fine. For PyPy and CPython it is good - take it and be happy.

Also, it is fine to teach people that dict (Mapping) is not ordered, but CPython has it ordered as an implementation detail. But if you want the guarantee, use OrderedDict.

To be really practical: even if this is guaranteed in 3.9, I cannot rely on it because of Python 2.7, 3.5, 3.6, ... compatibility. Even if these versions are retired in 10 years, if I then want to produce a small library running on another implementation, I still have to care, because of the list of differences from the CPython implementation, or because the project is not yet up to date with the implementation (Jython). To be safe, I will then still use OrderedDict, guaranteeing me what I want.

Finally, even when dict literals are guaranteed to be ordered, it is good to teach and use OrderedDict, because it is explicit.

For implementations (algorithms) I cannot foresee the future, so I cannot tell if it will be a burden or not. Finally, someone has to decide it. As long as OrderedDict is available for me to specify it explicitly, it will be fine. ;-)

Regards,

Wolfgang

On 04.11.2017 18:30, Stefan Krah wrote:
>
> Hello,
>
> would it be possible to guarantee that dict literals are ordered in v3.7?
>
> The issue is well-known and the workarounds are tedious, example:
>
> https://mail.python.org/pipermail/python-ideas/2015-December/037423.html
>
> If the feature is guaranteed now, people can rely on it around v3.9.
>
>
> Stefan Krah
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/tds333%40mailbox.org

From p.f.moore at gmail.com Tue Nov 7 04:30:19 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 09:30:19 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID:

On 7 November 2017 at 04:09, Nick Coghlan wrote:
> Given the status quo, how do educators learn that the examples they're teaching to their students are using deprecated APIs?

By reading the documentation on what they are teaching, and by testing their examples with new versions with deprecation warnings turned on? Better than having warnings appear the first time they run a course with a new version of Python, surely?

I understand the "but no-one actually does this" argument. And I understand that breakage as a result is worse than a few warnings. But enabling deprecation warnings by default feels to me like favouring the developer over the end user. I remember before the current behaviour was enabled, and it was *immensely* frustrating to try to use 3rd party code and get a load of warnings. The only options were:

1. Report the bug - usually not much help, as I want to run the program *now*, not when a new release is made.
2. Fix the code (and ideally submit a PR upstream) - I want to *use* the program, not debug it.
3. Find the right setting/environment variable, and tweak how I call the program to apply it - which doesn't fix the root cause, it's just a workaround.

I appreciate that this is open source, and using free programs comes with an obligation to contribute back or deal with issues like this, but even so, it's a pretty bad user experience.

I'd prefer it if, rather than simply switching warnings on by default, we worked on making it easier for the people in a position to actually *fix* the issue (coders writing programs, educators developing training materials, etc) to see the warnings. For example, encourage the various testing frameworks (unittest, pytest, nose, tox, ...) to enable warnings by default, promote "test with warnings enabled" in things like the packaging guide, ensure that all new deprecations are documented in the "Porting to Python 3.x" notes, etc.

Paul

From p.f.moore at gmail.com Tue Nov 7 04:38:25 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 09:38:25 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
Message-ID:

On 7 November 2017 at 00:35, Donald Stufft wrote:
> Maybe we just need to fully flesh out the idea of a "Python Core" (what exists now as "Python") and a "Python Platform" (Python Core + a select set of preinstalled libraries). Then typing can just be part of the Python Platform, and gets installed as part of your typical installation, but is otherwise an independent piece of code.

I quite like this idea, but at a minimum it would mean that the Windows installers for Python should include "core" and "platform" builds, with "platform" being the version that new users are directed to. And I'm not sure python-dev would be comfortable with agreeing and distributing what would essentially be a curated subset of PyPI packages.
The most recent discussion on "a recommended set of packages" didn't get much further than 3 or 4 candidates, which hardly constitutes a "platform". Conversely, something like Anaconda could be considered as the "Python Platform" (assuming they bundle typing...), but I'm not really comfortable with python-dev aligning themselves that strongly with a single distributor. (We're talking about a recommendation along the lines of "people wanting to gain, for example, corporate approval for "Python" should request approval for the Anaconda distribution rather than the python.org distribution".)

Paul

From solipsis at pitrou.net Tue Nov 7 04:40:28 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 7 Nov 2017 10:40:28 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
References:
Message-ID: <20171107104028.142b9729@fsol>

On Tue, 7 Nov 2017 09:30:19 +0000 Paul Moore wrote:
>
> I understand the "but no-one actually does this" argument. And I understand that breakage as a result is worse than a few warnings. But enabling deprecation warnings by default feels to me like favouring the developer over the end user.

I understand this characterization.

> I'd prefer it if, rather than simply switching warnings on by default, we worked on making it easier for the people in a position to actually *fix* the issue (coders writing programs, educators developing training materials, etc) to see the warnings. For example, encourage the various testing frameworks (unittest, pytest, nose, tox, ...) to enable warnings by default,

pytest does nowadays. That doesn't mean warnings get swiftly fixed, though. There are many reasons why (see my initial reply to Nick's proposal).

> ensure that all new deprecations are documented in the "Porting to Python 3.x" notes, etc.

In my experience, Python deprecations are in the minority. Most often you have to deal with deprecations in third-party libraries rather than Python core/stdlib, because we (Python) are more reluctant to change and deprecate APIs than the average library maintainer is.

Regards

Antoine.

From p.f.moore at gmail.com Tue Nov 7 04:40:49 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 09:40:49 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To:
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
Message-ID:

On 7 November 2017 at 05:20, Ethan Smith wrote:
> I'm not so keen on this because I think some things in typing (such as NamedTuple) probably deserve to be in the collections module. And some of the ABCs could probably also be merged with collections.abc, but doing this correctly and not all at once would be quite difficult.
> I do think the typing concepts should be better integrated into the standard library. However, a fair amount of the classes you list (such as NamedTuple) are in and of themselves dependent on parts of typing.

This is a good point. To what extent is it true that the stdlib *already* uses the typing module internally, and it's just that the usage is hidden by the fact that the examples of this are in the typing module - not because they "should" be there, but simply because that isolates the use of typing to that one module?

Paul

From encukou at gmail.com Tue Nov 7 04:51:46 2017
From: encukou at gmail.com (Petr Viktorin)
Date: Tue, 7 Nov 2017 10:51:46 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org>
Message-ID: <7b05ffda-fd10-864f-1225-9861decfcc4f@gmail.com>

On 11/07/2017 09:00 AM, Wolfgang wrote:
[...]
> Also, it is fine to teach people that dict (Mapping) is not ordered, but CPython has it ordered as an implementation detail. But if you want the guarantee, use OrderedDict.

I don't think that is fine. When I explained this in 3.5, dicts rearranging themselves seemed quite weird to the newcomers. This year, I'm not looking forward to saying that dicts behave "intuitively", but you shouldn't rely on that, because they're theoretically allowed to rearrange themselves.

The concept of "implementation detail" and language spec vs. multiple interpreter implementations isn't easy to explain to someone in a "basic coding literacy" course.

Today I can still show an example on Python 3.5. But most people I teach today won't run their code on 3.5, or on MicroPython or Brython, and quite soon they'll forget that there's no dict ordering guarantee.

Also: I happen to read python-dev and the language docs. I suspect not all teachers do, and when they see that dict order randomization was "fixed", they might just remove the explanation from the lesson and teach something practical instead.

From p.f.moore at gmail.com Tue Nov 7 04:59:06 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 09:59:06 +0000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io>
Message-ID:

On 7 November 2017 at 06:39, Nick Coghlan wrote:
> On 7 November 2017 at 09:23, Chris Barker wrote:
>> in short -- we don't have a choice (unless we add an explicit randomization as some suggested -- but that just seems perverse...)
>
> And this is the key point for me: "choosing not to choose" is effectively the same as standardising the feature, as enough Python code will come to rely on CPython's behaviour that most alternative implementations will feel obliged to start behaving the same way CPython does (with MicroPython being the potential exception due to memory usage constraints always winning over algorithmic efficiency concerns in that context).

Personally, I think that having an ordered implementation and then deliberately breaking that ordering is pretty silly. Not least because we chose not to do that for 3.6. So we're left with the simple question of whether we make the behaviour required in the documentation (which is basically where this thread started). I see 3 options.

First, we maintain the status quo, treat it as a CPython/PyPy implementation detail, and accept that this means that people will expect it - resulting in pressure on alternative implementations to conform, but without a language spec to support them.

Second, we document the requirement in the language spec, requiring alternative implementations to either implement it or document it as a way in which they don't conform to the language spec (which is unattractive for them, as "implements the official Python language spec" is a selling point).

Or third, we could document it as an optional behaviour, that language implementations are expected to implement if possible, and if they can't they should document the variance.
That is to some extent the best of both worlds, in that it allows implementations to claim conformance with the spec while still not implementing the feature if it causes them problems to do so - they just have to document that they don't implement this optional but recommended behaviour. The downside of this option is that there's no precedent for it in the Python spec (it's basically C's "implementation defined behaviour", which is something Python hasn't needed thus far).

Paul

From steve at holdenweb.com Tue Nov 7 05:18:40 2017
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 7 Nov 2017 10:18:40 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
Message-ID:

On Tue, Nov 7, 2017 at 12:35 AM, Donald Stufft wrote:
[..]

> Maybe we just need to fully flesh out the idea of a "Python Core" (what exists now as "Python") and a "Python Platform" (Python Core + a select set of preinstalled libraries). Then typing can just be part of the Python Platform, and gets installed as part of your typical installation, but is otherwise an independent piece of code.

Given that (type and other) annotations have been promoted as an optional feature of the language, it seems unfair and perhaps unwise to add a dependency specifically to support them to the stdlib and therefore the Python core.

Since type annotations are, as Paul pointed out, development-time features, it would appear to behoove those wishing to use them to separate them in such a way that the software can be installed without annotations, and therefore without the need for the typing module. Assuming they would like to see the widest possible distribution, of course. For selected audiences I am sure typing will be *de rigueur*.

In this scenario, surely the most "typical" installation would be a virtualenv. Do all virtualenvs rely on the same Platform? Who decides which additional libraries are required for the Platform? Doesn't this just add another "type of installation" distinction to confuse the unwary? How do I maintain the Platform separately from the Core? Does my sysadmin maintain either or both?

I'd like to see a little more clarity about the benefits such a schism would offer.

regards
Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Tue Nov 7 06:30:11 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 11:30:11 +0000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To:
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
Message-ID:

On 7 November 2017 at 10:18, Steve Holden wrote:
> On Tue, Nov 7, 2017 at 12:35 AM, Donald Stufft wrote:
> [..]
>
>> Maybe we just need to fully flesh out the idea of a "Python Core" (what exists now as "Python") and a "Python Platform" (Python Core + a select set of preinstalled libraries). Then typing can just be part of the Python Platform, and gets installed as part of your typical installation, but is otherwise an independent piece of code.
>
> Given that (type and other) annotations have been promoted as an optional feature of the language, it seems unfair and perhaps unwise to add a dependency specifically to support them to the stdlib and therefore the Python core.
> Since type annotations are, as Paul pointed out, development-time features, it would appear to behoove those wishing to use them to separate them in such a way that the software can be installed without annotations, and therefore without the need for the typing module. Assuming they would like to see the widest possible distribution, of course. For selected audiences I am sure typing will be de rigueur.

From my point of view, I take the same premise and come to the opposite conclusion.

Because type annotations are a development-time feature, they should *not* require a dependency in the final deployment (apart from Python itself). However, because they are a language syntax feature, they are of necessity written in the application source. And type specification of anything more complex than basic types (for example, List[int]) requires classes defined in the typing module.

Therefore, typing must be in the stdlib so that use of type annotations by the developer doesn't impose a runtime dependency on the end user.

If there were a way of including type annotations that had no runtime effect on the final deployed program, things would be different. But the decision to make annotations part of the language syntax precludes that. In particular, "it would appear to behoove those wishing to use them to separate them" - there's no way of doing that *precisely* because they are a language syntax feature.

Paul

From ncoghlan at gmail.com Tue Nov 7 07:22:47 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 7 Nov 2017 22:22:47 +1000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID:

On 7 November 2017 at 19:30, Paul Moore wrote:
> On 7 November 2017 at 04:09, Nick Coghlan wrote:
>> Given the status quo, how do educators learn that the examples they're teaching to their students are using deprecated APIs?
>
> By reading the documentation on what they are teaching, and by testing their examples with new versions with deprecation warnings turned on? Better than having warnings appear the first time they run a course with a new version of Python, surely?
>
> I understand the "but no-one actually does this" argument. And I understand that breakage as a result is worse than a few warnings. But enabling deprecation warnings by default feels to me like favouring the developer over the end user. I remember before the current behaviour was enabled, and it was *immensely* frustrating to try to use 3rd party code and get a load of warnings. The only options were:
>
> 1. Report the bug - usually not much help, as I want to run the program *now*, not when a new release is made.
> 2. Fix the code (and ideally submit a PR upstream) - I want to *use* the program, not debug it.
> 3. Find the right setting/environment variable, and tweak how I call the program to apply it - which doesn't fix the root cause, it's just a workaround.

Yes, this is why I've come around to the view that we need to come up with a viable definition of "third party code" and leave deprecation warnings triggered by that code disabled by default.

My suggestion for that definition is to have the *default* meaning of "third party code" be "everything that isn't __main__".

That way, if you get a deprecation warning at the REPL, it's necessarily because of something *you* did, not because of something a library you called did. Ditto for single file scripts.
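Expressed in terms of the existing warnings machinery, that default would look something like this (a sketch of the intent, not a settled implementation):

    import warnings

    # Report DeprecationWarning for code running directly in __main__,
    # while leaving it ignored for all imported modules.
    warnings.filterwarnings("default", category=DeprecationWarning,
                            module="__main__")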
We'd then offer some straightforward interfaces for people to say "Please also report legacy calls from 'module' as warnings".

You'd still get less-than-helpful warnings if you were running a single file script that someone *else* wrote (rather than one you wrote yourself), but that's an inherent flaw in that distribution model: as soon as you ask people to supply their own Python runtime, you're putting them in the position of acting as an application integrator (finding a combination of Python language runtime and your script that actually work together), rather than as a regular software user.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From lists at janc.be Tue Nov 7 07:34:50 2017
From: lists at janc.be (Jan Claeys)
Date: Tue, 07 Nov 2017 13:34:50 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To:
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <89AA34A9-B047-4E9A-AB35-84194A967563@python.org> <28b3da3b-1d71-a4bc-8e68-aaa6cfa19a4e@ganssle.io>
Message-ID: <1510058090.20749.290.camel@janc.be>

On Tue, 2017-11-07 at 16:39 +1000, Nick Coghlan wrote:
> And this is the key point for me: "choosing not to choose" is effectively the same as standardising the feature, as enough Python code will come to rely on CPython's behaviour that most alternative implementations will feel obliged to start behaving the same way CPython does (with MicroPython being the potential exception due to memory usage constraints always winning over algorithmic efficiency concerns in that context).

Maybe an UnorderedDict could be added which Python implementations _can_ implement as an optimized (less memory use, faster, ...) version without ordering guarantees if they have a need for it. In other implementations it could just be a synonym for a regular dict.

That way it would be explicit that the programmer doesn't care about the ordered behaviour. It would also avoid current mistakes some (especially those new to the language and occasional users) make because of assumptions from default dict behaviour.

(Maybe a commandline switch or other mechanisms to explicitly use that UnorderedDict as the default could also be useful. It would be a no-op in implementations which don't have differing implementations.)

--
Jan Claeys

From flying-sheep at web.de Tue Nov 7 08:35:48 2017
From: flying-sheep at web.de (Philipp A.)
Date: Tue, 07 Nov 2017 13:35:48 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID:

Sorry, I still don't understand how any of this is a problem.

1. If you're an application developer, google "python disable DeprecationWarning" and paste the code you found, so your users don't see the warnings.
2. If you're a library developer, and a library you depend on raises DeprecationWarnings without it being your fault, file an issue/bug there.

For super-increased convenience in case 2., we could also add a convenience API that blocks deprecation warnings raised from a certain module or its submodules.

Best, Philipp

On Tue, 7 Nov 2017 at 13:25, Nick Coghlan wrote:
> On 7 November 2017 at 19:30, Paul Moore wrote:
> > On 7 November 2017 at 04:09, Nick Coghlan wrote:
> >> Given the status quo, how do educators learn that the examples they're teaching to their students are using deprecated APIs?
> >
> > By reading the documentation on what they are teaching, and by testing their examples with new versions with deprecation warnings turned on? Better than having warnings appear the first time they run a course with a new version of Python, surely?
> >
> > I understand the "but no-one actually does this" argument. And I understand that breakage as a result is worse than a few warnings. But enabling deprecation warnings by default feels to me like favouring the developer over the end user. I remember before the current behaviour was enabled, and it was *immensely* frustrating to try to use 3rd party code and get a load of warnings. The only options were:
> >
> > 1. Report the bug - usually not much help, as I want to run the program *now*, not when a new release is made.
> > 2. Fix the code (and ideally submit a PR upstream) - I want to *use* the program, not debug it.
> > 3. Find the right setting/environment variable, and tweak how I call the program to apply it - which doesn't fix the root cause, it's just a workaround.
>
> Yes, this is why I've come around to the view that we need to come up with a viable definition of "third party code" and leave deprecation warnings triggered by that code disabled by default.
>
> My suggestion for that definition is to have the *default* meaning of "third party code" be "everything that isn't __main__".
>
> That way, if you get a deprecation warning at the REPL, it's necessarily because of something *you* did, not because of something a library you called did. Ditto for single file scripts.
>
> We'd then offer some straightforward interfaces for people to say "Please also report legacy calls from 'module' as warnings".
>
> You'd still get less-than-helpful warnings if you were running a single file script that someone *else* wrote (rather than one you wrote yourself), but that's an inherent flaw in that distribution model: as soon as you ask people to supply their own Python runtime, you're putting them in the position of acting as an application integrator (finding a combination of Python language runtime and your script that actually work together), rather than as a regular software user.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/flying-sheep%40web.de
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com Tue Nov 7 08:44:50 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 13:44:50 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID:

On 7 November 2017 at 13:35, Philipp A. wrote:
> Sorry, I still don't understand how any of this is a problem.
>
> If you're an application developer, google "python disable DeprecationWarning" and paste the code you found, so your users don't see the warnings.
> If you're a library developer, and a library you depend on raises DeprecationWarnings without it being your fault, file an issue/bug there.
>
> For super-increased convenience in case 2., we could also add a convenience API that blocks deprecation warnings raised from a certain module or its submodules.
> Best, Philipp If you're a user and your application developer didn't do (1) or a library developer developing one of the libraries your application developer chose to use didn't do (2), you're hosed. If you're a user who works in an environment where moving to a new version of the application is administratively complex, you're hosed. As I say, the proposal prioritises developer convenience over end user experience. Paul From stefan at bytereef.org Tue Nov 7 08:48:46 2017 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 7 Nov 2017 14:48:46 +0100 Subject: [Python-Dev] The current dict is not an "OrderedDict" Message-ID: <20171107134846.GA2683@bytereef.org> This is just a reminder that the current dict is not an "OrderedDict": >>> from collections import OrderedDict >>> OrderedDict(a=0, b=1) == OrderedDict(b=1, a=0) False >>> dict(a=0, b=1) == dict(b=1, a=0) True The recent proposal was primarily about guaranteeing the insertion order of dict literals. If further guarantees are proposed, perhaps it would be a good idea to open a new thread and state what exactly is being proposed. Stefan Krah From ncoghlan at gmail.com Tue Nov 7 08:57:46 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Nov 2017 23:57:46 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: References: Message-ID: On 7 November 2017 at 23:44, Paul Moore wrote: > As I say, the proposal prioritises developer convenience over end user > experience. Users of applications written in Python are not python-dev's users: they're the users of those applications, and hence the quality of that experience is up to the developers of those applications. This is no different from the user experience of Instagram being Facebook's problem, the user experience of RHEL being Red Hat's problem, the user experience of YouTube being Google's problem, etc. *python-dev's* users are developers, data analysts, educators, and so forth that are actually writing Python code, and at the moment we're making it hard for them to be suitably forewarned of upcoming breaking changes - they have to know the secret knock that says "I'd like to be warned about future breaking changes, please". Sure, a lot of people do learn what that knock is, and they often even remember to ask for it, but the entire reason this thread started was because *I* forgot that I needed to run "python3 -Wd" in order to check for async/await deprecation warnings in 3.6, and incorrectly assumed that their absence meant we'd forgotten to include them. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Nov 7 09:01:04 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Nov 2017 00:01:04 +1000 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171107134846.GA2683@bytereef.org> References: <20171107134846.GA2683@bytereef.org> Message-ID: On 7 November 2017 at 23:48, Stefan Krah wrote: > > > This is just a reminder that the current dict is not an "OrderedDict": > >>>> from collections import OrderedDict >>>> OrderedDict(a=0, b=1) == OrderedDict(b=1, a=0) > False >>>> dict(a=0, b=1) == dict(b=1, a=0) > True > > The recent proposal was primarily about guaranteeing the insertion order of > dict literals. > > If further guarantees are proposed, perhaps it would be a good idea to > open a new thread and state what exactly is being proposed. "Insertion ordered until the first key removal" is the only guarantee that's being proposed. 
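To make that concrete, here's a quick sketch at the REPL (this relies on
CPython 3.6's current behaviour, which is exactly the part the language
reference doesn't promise today):

>>> d = {}
>>> for k in "abcd":
...     d[k] = None
...
>>> list(d)                      # insertion order preserved so far
['a', 'b', 'c', 'd']
>>> d == dict.fromkeys("dcba")   # and, as Stefan notes, == still ignores order
True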
OrderedDict just comes into the discussion because reaching for its
stronger guarantees is currently the only way to obtain that guarantee
in a formally implementation-independent and future-proof way.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From stefan at bytereef.org Tue Nov 7 09:06:42 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Tue, 7 Nov 2017 15:06:42 +0100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org>
Message-ID: <20171107140642.GA3164@bytereef.org>

On Wed, Nov 08, 2017 at 12:01:04AM +1000, Nick Coghlan wrote:
> > The recent proposal was primarily about guaranteeing the insertion order of
> > dict literals.
> >
> > If further guarantees are proposed, perhaps it would be a good idea to
> > open a new thread and state what exactly is being proposed.
>
> "Insertion ordered until the first key removal" is the only guarantee
> that's being proposed.
>
> OrderedDict just comes into the discussion because reaching for its
> stronger guarantees is currently the only way to obtain that guarantee
> in a formally implementation-independent and future-proof way.

Ok good, I was primarily worried about collections.UnorderedDict coming
up and users thinking that OrderedDict could be replaced entirely by
dict().

Stefan Krah

From flying-sheep at web.de Tue Nov 7 09:17:08 2017
From: flying-sheep at web.de (Philipp A.)
Date: Tue, 07 Nov 2017 14:17:08 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: 

Nick Coghlan wrote on Tue, 7 Nov 2017 at 14:57:

> Users of applications written in Python are not python-dev's users:
> they're the users of those applications, and hence the quality of that
> experience is up to the developers of those applications. […]

Thank you, that's exactly what I'm talking about.

Besides: Nobody is "hosed"! There will be one occurrence of every
DeprecationWarning in the stderr of the application. Hardly the end of
the world for CLI applications and even invisible for GUI applications.

If the devs care about the user not seeing any warnings in their CLI
application, they'll have a test set up for that, which will tell them
that the newest python-dev would raise a new warning, once they turn on
testing for that release. That's completely fine!

Explicit is better than implicit! If I know lib X raises
DeprecationWarnings I don't care about, I want to explicitly silence
them, instead of missing out on all the valuable information in other
DeprecationWarnings.

Best, Philipp
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at stoneleaf.us Tue Nov 7 09:21:38 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 07 Nov 2017 06:21:38 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: <5A01C172.2010107@stoneleaf.us>

On 11/07/2017 05:44 AM, Paul Moore wrote:

> If you're a user and your application developer didn't do (1) or a
> library developer developing one of the libraries your application
> developer chose to use didn't do (2), you're hosed. If you're a user
> who works in an environment where moving to a new version of the
> application is administratively complex, you're hosed.

Suffering from DeprecationWarnings is not "being hosed".
Having your script/application/framework suddenly stop working because
nobody noticed something was being deprecated is "being hosed".

+1 to turn them back on.

--
~Ethan~

From p.f.moore at gmail.com Tue Nov 7 09:27:16 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 14:27:16 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <5A01C172.2010107@stoneleaf.us>
References: <5A01C172.2010107@stoneleaf.us>
Message-ID: 

On 7 November 2017 at 14:21, Ethan Furman wrote:
> On 11/07/2017 05:44 AM, Paul Moore wrote:
>
>> If you're a user and your application developer didn't do (1) or a
>> library developer developing one of the libraries your application
>> developer chose to use didn't do (2), you're hosed. If you're a user
>> who works in an environment where moving to a new version of the
>> application is administratively complex, you're hosed.
>
> Suffering from DeprecationWarnings is not "being hosed". Having your
> script/application/framework suddenly stop working because nobody noticed
> something was being deprecated is "being hosed".

OK, I overstated. Apologies. My recollection is of a lot more end user
complaints when deprecation warnings were previously switched on than
others seem to remember, but I can't find hard facts, so I'll assume
I'm misremembering.

Paul

From solipsis at pitrou.net Tue Nov 7 09:32:29 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 7 Nov 2017 15:32:29 +0100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
References: <20171107134846.GA2683@bytereef.org>
Message-ID: <20171107153229.70ad6ad6@fsol>

On Wed, 8 Nov 2017 00:01:04 +1000
Nick Coghlan wrote:

> On 7 November 2017 at 23:48, Stefan Krah wrote:
> >
> >
> > This is just a reminder that the current dict is not an "OrderedDict":
> >
> >>>> from collections import OrderedDict
> >>>> OrderedDict(a=0, b=1) == OrderedDict(b=1, a=0)
> > False
> >>>> dict(a=0, b=1) == dict(b=1, a=0)
> > True
> >
> > The recent proposal was primarily about guaranteeing the insertion order of
> > dict literals.
> >
> > If further guarantees are proposed, perhaps it would be a good idea to
> > open a new thread and state what exactly is being proposed.
>
> "Insertion ordered until the first key removal" is the only guarantee
> that's being proposed.

Is it? It seems to me that many arguments being made are only relevant
under the hypothesis that insertion is ordered even after the first key
removal. For example the user-friendliness argument, for I don't
think it's very user-friendly to have a guarantee that disappears
forever on the first __del__.

Regards

Antoine.

From a.badger at gmail.com Tue Nov 7 09:42:18 2017
From: a.badger at gmail.com (Toshio Kuratomi)
Date: Tue, 7 Nov 2017 06:42:18 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: 

On Nov 7, 2017 5:47 AM, "Paul Moore" wrote:

On 7 November 2017 at 13:35, Philipp A. wrote:
> Sorry, I still don't understand how any of this is a problem.
>
> If you're an application developer, google "python disable
> DeprecationWarning" and paste the code you found, so your users don't see
> the warnings.
> If you're a library developer, and a library you depend on raises
> DeprecationWarnings without it being your fault, file an issue/bug there.
>
> For super-increased convenience in case 2., we could also add a convenience
> API that blocks deprecation warnings raised from a certain module or its
> submodules.
> Best, Philipp

If you're a user and your application developer didn't do (1) or a
library developer developing one of the libraries your application
developer chose to use didn't do (2), you're hosed. If you're a user
who works in an environment where moving to a new version of the
application is administratively complex, you're hosed.

As I say, the proposal prioritises developer convenience over end user
experience.

I don't agree with this characterisation. Even if we assume a user isn't
going to fix a DeprecationWarning they still benefit: (1) if they're a
sysadmin it will warn them that they need to be careful when upgrading a
dependency. (2) if the developer never hears about the DeprecationWarning
then it is ultimately the user who suffers when the tool they depend on
breaks without warning so seeing and reporting the DeprecationWarning helps
the end user. (3) if DeprecationWarnings are allowed to linger through
multiple releases, it may tell the user about the quality of the software
they're using.

More information is helpful to end users. Developers are actually the ones
that it inconveniences as we'll be the ones grumbling when an end user who
hasn't evaluated the deprecation cycles of upstream projects as we have
demands immediate changes for deprecations that are still years away from
causing problems. But unlike end users, we do have the ability to solve
that by turning those deprecations off in our code if we've done our due
diligence (or even if we haven't done our due diligence).

-Toshio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yselivanov.ml at gmail.com Tue Nov 7 09:44:07 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 7 Nov 2017 09:44:07 -0500
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107153229.70ad6ad6@fsol>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID: 

On Tue, Nov 7, 2017 at 9:32 AM, Antoine Pitrou wrote:
> On Wed, 8 Nov 2017 00:01:04 +1000
> Nick Coghlan wrote:
>
>> On 7 November 2017 at 23:48, Stefan Krah wrote:
>> >
>> >
>> > This is just a reminder that the current dict is not an "OrderedDict":
>> >
>> >>>> from collections import OrderedDict
>> >>>> OrderedDict(a=0, b=1) == OrderedDict(b=1, a=0)
>> > False
>> >>>> dict(a=0, b=1) == dict(b=1, a=0)
>> > True
>> >
>> > The recent proposal was primarily about guaranteeing the insertion order of
>> > dict literals.
>> >
>> > If further guarantees are proposed, perhaps it would be a good idea to
>> > open a new thread and state what exactly is being proposed.
>>
>> "Insertion ordered until the first key removal" is the only guarantee
>> that's being proposed.
>
> Is it? It seems to me that many arguments being made are only relevant
> under the hypothesis that insertion is ordered even after the first key
> removal. For example the user-friendliness argument, for I don't
> think it's very user-friendly to have a guarantee that disappears
> forever on the first __del__.

One common pattern that I see frequently is this:

def foo(**kwargs):
    kwargs.pop('somekey', None)
    bar(**kwargs)

With ordering breaking on first pop/delete we essentially have no
guarantee about kwargs order, or at least it's very easy to break. It
would make writing wrappers like this extremely tedious -- we are
essentially forcing people to use OrderedDict to just pop an item from
kwargs. Not to mention that this isn't cheap in terms of performance.
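For illustration, here's roughly what such a wrapper has to turn into if
deletion is allowed to scramble the order (a sketch only; bar() is just a
stand-in for the real callee):

from collections import OrderedDict

def bar(**kwargs):
    print(list(kwargs))            # stand-in: just show the order we received

def foo(**kwargs):
    kwargs = OrderedDict(kwargs)   # re-wrap purely to keep iteration order stable
    kwargs.pop('somekey', None)
    bar(**kwargs)

foo(a=1, somekey=2, b=3)           # -> ['a', 'b']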
Is there a *real* motivation for saying that pop/delete can break the order? Yury From steve at pearwood.info Tue Nov 7 09:56:42 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 8 Nov 2017 01:56:42 +1100 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171107153229.70ad6ad6@fsol> References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> Message-ID: <20171107145641.GB19802@ando.pearwood.info> On Tue, Nov 07, 2017 at 03:32:29PM +0100, Antoine Pitrou wrote: [...] > > "Insertion ordered until the first key removal" is the only guarantee > > that's being proposed. > > Is it? It seems to me that many arguments being made are only relevant > under the hypothesis that insertion is ordered even after the first key > removal. For example the user-friendliness argument, for I don't > think it's very user-friendly to have a guarantee that disappears > forever on the first __del__. Don't let the perfect be the enemy of the good. For many applications, keys are never removed from the dict, so this doesn't matter. If you never delete a key, then the remaining keys will never be reordered. I think that Nick's intent was not to say that after a single deletion, the ordering guarantee goes away "forever", but that a deletion may be permitted to reorder the keys, after which further additions will honour insertion order. At least, that's how I interpret him. To clarify: if we start with an empty dict, add keys A...D, delete B, then add E...H, we could expect: {A: 1} {A: 1, B: 2} {A: 1, B: 2, C: 3} {A: 1, B: 2, C: 3, D: 4} {D: 4, A: 1, C: 3} # some arbitrary reordering {D: 4, A: 1, C: 3, E: 5} {D: 4, A: 1, C: 3, E: 5, F: 6} {D: 4, A: 1, C: 3, E: 5, F: 6, G: 7} {D: 4, A: 1, C: 3, E: 5, F: 6, G: 7, H: 8} Nick, am I correct that this was your intent? -- Steve From solipsis at pitrou.net Tue Nov 7 10:12:42 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 7 Nov 2017 16:12:42 +0100 Subject: [Python-Dev] The current dict is not an "OrderedDict" References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info> Message-ID: <20171107161242.169ad35b@fsol> On Wed, 8 Nov 2017 01:56:42 +1100 Steven D'Aprano wrote: > > I think that Nick's intent was not to say that after a single deletion, > the ordering guarantee goes away "forever", but that a deletion may be > permitted to reorder the keys, after which further additions will honour > insertion order. At least, that's how I interpret him. The problem is this is taking things to a level of precision that makes the guarantee tedious to remember and reason about. The only thing that's friendly to (non-expert) users is either "dicts are always ordered [by insertion order], point bar" or "dicts are not ordered, point bar". Anything in-between, with reservations depending on which operations are invoked and when, is not really helpful to the average (non-expert) user. Which is why I think the user-friendliness argument does not apply if order ceases to be guaranteed after __del__ is called. Regards Antoine. 
From solipsis at pitrou.net Tue Nov 7 10:14:12 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 7 Nov 2017 16:14:12 +0100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID: <20171107161412.3b85402f@fsol>

On Tue, 7 Nov 2017 09:44:07 -0500
Yury Selivanov wrote:
>
> One common pattern that I see frequently is this:
>
> def foo(**kwargs):
>     kwargs.pop('somekey', None)
>     bar(**kwargs)

I see it frequently too, but that's in code meant to be Python
2-compatible (and therefore cannot count on any ordering guarantee,
even de facto). On Python 3 you can write:

def foo(somekey=None, **kwargs):
    # do something with somekey...
    bar(**kwargs)

Regards

Antoine.

From tomuxiong at gmx.com Tue Nov 7 04:12:17 2017
From: tomuxiong at gmx.com (Thomas Nyberg)
Date: Tue, 7 Nov 2017 10:12:17 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
Message-ID: 

On 11/07/2017 09:00 AM, Wolfgang wrote:
> Hi,
>
> I have to admit sometimes I don't understand why small things produce so much
> mail traffic. :-)
>
> If I use a mapping like dict most of the time I don't care if it is ordered.
> If I need an ordering I use OrderedDict. In a library if I need a dict to be
> ordered most of the time there is a parameter allowing me to do this or to
> specify the used class as dict.
> If this is not the case it can be fixed. In rare cases a workaround is needed.
>
> As of now I have dict and OrderedDict, it is clear and explicit.
> No need to change.
>
> [...]
>
> Regards,
>
> Wolfgang

I feel a bit out of place on this list since I'm a lurker and not a core
developer, but I just wanted to add my agreement with this as well. So
maybe take my opinion with a grain of salt...

I agree with Wolfgang. I just don't understand why this change is
needed. We have dict and we have OrderedDict. Why does dict need to
provide the extra ordering constraints? I've read many posts in this
discussion and find none of them convincing.

Python already guarantees things like ordering of keyword arguments.
I've seen some people point out confusion of newcomers (e.g. they are
surprised when order is not preserved), but that just seems to me the
natural confusion that comes about when learning. I would argue that a
better solution to that problem is exactly the go solution: i.e.
purposely perturbing the ordering in a way that shows up immediately so
that users realize the problems in their thinking earlier.

The dict provides a mapping from key to value. I personally think that
that is mentally a much simpler object than a mapping from key to value
with certain ordering guarantees. If I want extra guarantees, I import
OrderedDict and read what the guarantees are. This seems totally fine to
me. I don't really see any advantages to this change but a lack of
implementation flexibility and a more complicated core object in Python.

Cheers,
Thomas

From evpok.padding at gmail.com Tue Nov 7 04:56:53 2017
From: evpok.padding at gmail.com (Evpok Padding)
Date: Tue, 7 Nov 2017 10:56:53 +0100
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <7b05ffda-fd10-864f-1225-9861decfcc4f@gmail.com>
References: <20171104173013.GA4005@bytereef.org> <7b05ffda-fd10-864f-1225-9861decfcc4f@gmail.com>
Message-ID: 

Hello,

I agree with Wolfgang here.
From what I gathered of the discussion, the argument started from "It
would be nice if dict literals returned ordered dicts instead of
unordered ones", which is mostly a good thing, as it allows e.g.
`OrderedDict({'spam': 'ham', 'sausages': 'eggs'})` instead of having to
rely on lists of couples to create an OrderedDict. It is not of utmost
utility, but it would be nice to have and not dissimilar to what we
already have with kwargs being ordered dicts, also a matter of slightly
better usability. Possibly, in order to avoid implying that all dicts
guarantee ordering at all times, a syntax such as `o{}` might be used,
mostly to help newcomers. So far, so good.

Where it started to go all haywire is when that became conflated with
the fact that CPython and PyPy dicts are actually ordered (up to a point
at least) and it suddenly became "Let's guarantee the ordering of all
dicts", which apparently is an issue for at least one implementation of
Python, and still has to be implemented in several others (said
implementation would be trivial, or so it is said, but it still has to
be written, along with appropriate tests, regression checks...).

So far, the arguments I have seen for that are
1. It is useful in contexts where one has to guarantee the ordering of
some mapping (such as in json)
2. It is useful in contexts where ordering is facultative, but nice to
have (debugging has been mentioned)
3. It is already this way in CPython, so people are going to use that
anyway

I take issue with all of those arguments.

1. If the order should be guaranteed, why would it be so hard to use
OrderedDict?
 - Just write `d = OrderedDict((key, value) for key, value in ...)`
instead of `{key: value for key, value in ...}`. It is not that hard,
and at least it is explicit that the order is important. And if it is
really so hard, we could have dict comprehensions be ordered too in
addition to literals; it still doesn't mean that dicts should all be
ordered
 - It is even easier if you fill your dict value-per-value: just
initialise it as `d = OrderedDict()` instead of `d = {}` and voilà!
2. I would like to see some examples of cases where this is really much
more convenient than any other solution, but even then I suspect that
these cases are not sufficiently relevant to wed all Python backends to
ordered dicts forever.
3. This is just a pure fallacy. The language has a documented API that
says that if order of insertion is important, you should explicitly use
an OrderedDict. If people stray away from it and use implementation
details such as the ordering of dict in CPython, they are on their own
and shouldn't expect it to be portable to any other version. Again,
it's not as if OrderedDict did not exist or was much more inconvenient
to use than dict. Also, since the proposed implementation does not keep
ordering on deletion, those relying implicitly on the ordering of dicts
without reading the docs might get bitten by it later in much more
subtle ways.

Note that I don't suggest mandatory shuffling of dicts to advertise
their non-guaranteed ordering, either. Just that reading the docs (or
having your instructor tell you that dict does not guarantee the order)
is the responsibility of the end user.

To sum it up
- Ordered dict literals are a nice idea, but probably not that
important. If it happens, it would be nice if it could be extended to
dict comprehensions, though.
- Guaranteeing the ordering of all `dicts` does not make a lot of sense
- Changing the API to guarantee the order of dicts **is** an API change,
which still means work

Am I missing something?

Cheers,
E

On 7 November 2017 at 10:51, Petr Viktorin wrote:

> On 11/07/2017 09:00 AM, Wolfgang wrote:
> [...]
>
>> Also it is fine to teach people that dict (Mapping) is not ordered
>> but CPython has an implementation detail and it is ordered.
>> But if you want the guarantee use OrderedDict.
>
> I don't think that is fine.
> When I explained this in 3.5, dicts rearranging themselves seemed quite
> weird to the newcomers.
> This year, I'm not looking forward to saying that dicts behave
> "intuitively", but you shouldn't rely on that, because they're
> theoretically allowed to rearrange themselves.
> The concept of "implementation detail" and language spec vs. multiple
> interpreter implementations isn't easy to explain to someone in a "basic
> coding literacy" course.
>
> Today I can still show an example on Python 3.5. But most people I teach
> today won't run their code on 3.5, or on MicroPython or Brython, and quite
> soon they'll forget that there's no dict ordering guarantee.
>
> Also: I happen to read python-dev and the language docs. I suspect not all
> teachers do, and when they see that dict order randomization was "fixed",
> they might just remove the explanation from the lesson and teach something
> practical instead.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/evpok.padding%40gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mertz at gnosis.cx Tue Nov 7 10:21:21 2017
From: mertz at gnosis.cx (David Mertz)
Date: Tue, 7 Nov 2017 07:21:21 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
Message-ID: 

On Nov 6, 2017 9:11 PM, "Raymond Hettinger" wrote:

> On Nov 6, 2017, at 8:05 PM, David Mertz wrote:
> I strongly opposed adding an ordered guarantee to regular dicts. If the
implementation happens to keep that, great. Maybe OrderedDict can be
rewritten to use the dict implementation. But the evidence that all
implementations will always be fine with this restraint feels poor, and we
have a perfectly good explicit OrderedDict for those who want that.

I think this post is dismissive of the value that users would get from
having reliable ordering by default.

Dismissive seems like an overly strong word. I recognize I disagree with
Raymond on best official semantics. Someone else points out that if
someday an "even more efficient unordered dict" is discovered, user-facing
"dict" doesn't strictly have to be the same data structure as "internal
dict". The fact they are is admittedly an implementation detail also.

I've had all those same use cases around round-tripping serialization that
Raymond mentions. I know the standard workarounds (which are not
difficult, but DO require a little extra code if we don't have order). But
like Raymond, I make most of my living TEACHING Python.
I feel like the extra order guarantee would make teaching slightly harder.
I'm sure he feels the contrary. It is true that with 3.6 I can no longer
show an example where the dict display is oddly changed when printed. But
then, unordered sets also wind up sorting small integers on printing, even
though that's not a guarantee.

Ordering by insertion order (possibly "only until first deletion") is
simply not obvious to beginners. If we had, hypothetically, a dict that
"always alphabetized keys" that would be more intuitive to them, for
example. Insertion order feels obvious to us experts, but it really is an
extra cognitive burden to learners beyond understanding "key/value
association".

Having worked with Python 3.6 for a while, it is repeatedly delightful to
encounter the effects of ordering. When debugging, it is a pleasure to be
able to easily see what has changed in a dictionary. When creating XML, it
is a joy to see the attribs show in the same order you added them. When
reading a configuration, modifying it, and writing it back out, it is a
godsend to have it written out in about the same order you originally
typed it in. The same applies to reading and writing JSON. When adding a
VIA header in an HTTP proxy, it is nice to not permute the order of the
other headers. When generating url query strings for REST APIs, it is nice
to have the parameter order match documented examples.

We've lived without order for so long that it seems that some of us now
think data scrambling is a virtue. But it isn't. Scrambled data is the
opposite of human friendly.

Raymond

P.S. Especially during debugging, it is often inconvenient, difficult, or
impossible to bring in an OrderedDict after the fact or to inject one into
third-party code that is returning regular dicts. Just because we have
OrderedDict in collections doesn't mean that we always get to take
advantage of it. Plain dicts get served to us whether we want them or not.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From storchaka at gmail.com Tue Nov 7 10:37:15 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 7 Nov 2017 17:37:15 +0200
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107145641.GB19802@ando.pearwood.info>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info>
Message-ID: 

07.11.17 16:56, Steven D'Aprano wrote:
> To clarify: if we start with an empty dict, add keys A...D, delete B,
> then add E...H, we could expect:
>
> {A: 1}
> {A: 1, B: 2}
> {A: 1, B: 2, C: 3}
> {A: 1, B: 2, C: 3, D: 4}
> {D: 4, A: 1, C: 3} # some arbitrary reordering
> {D: 4, A: 1, C: 3, E: 5}
> {D: 4, A: 1, C: 3, E: 5, F: 6}
> {D: 4, A: 1, C: 3, E: 5, F: 6, G: 7}
> {D: 4, A: 1, C: 3, E: 5, F: 6, G: 7, H: 8}

Rather

{A: 1, D: 4, C: 3} # move the last item in place of the removed one
{A: 1, D: 4, C: 3, E: 5}
{A: 1, D: 4, C: 3, E: 5, F: 6}
{A: 1, D: 4, C: 3, E: 5, F: 6, G: 7}
{A: 1, D: 4, C: 3, E: 5, F: 6, G: 7, H: 8}

or

{A: 1, C: 3, D: 4}
{A: 1, E: 5, C: 3, D: 4} # place the new item in the slot of the removed one
{A: 1, E: 5, C: 3, D: 4, F: 6}
{A: 1, E: 5, C: 3, D: 4, F: 6, G: 7}
{A: 1, E: 5, C: 3, D: 4, F: 6, G: 7, H: 8}

or

{A: 1, C: 3, D: 4}
{A: 1, C: 3, D: 4, E: 5} # add new items at the end until the array is full
{A: 1, F: 6, C: 3, D: 4, E: 5} # and fill holes after that
{A: 1, F: 6, C: 3, D: 4, E: 5, G: 7} # reallocate the array
{A: 1, F: 6, C: 3, D: 4, E: 5, G: 7, H: 8}

These scenarios are more probable.
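The first scenario is easy to model with a toy table (an illustrative
sketch only, nothing like the real C implementation):

entries = [('A', 1), ('B', 2), ('C', 3), ('D', 4)]

def delete(key):
    i = next(i for i, (k, v) in enumerate(entries) if k == key)
    entries[i] = entries[-1]   # move the last entry into the hole
    entries.pop()              # and shrink the array

delete('B')
print(entries)   # [('A', 1), ('D', 4), ('C', 3)]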
From songofacandy at gmail.com Tue Nov 7 11:05:11 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 8 Nov 2017 01:05:11 +0900
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107153229.70ad6ad6@fsol>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID: 

>> > If further guarantees are proposed, perhaps it would be a good idea to
>> > open a new thread and state what exactly is being proposed.
>>
>> "Insertion ordered until the first key removal" is the only guarantee
>> that's being proposed.
>
> Is it? It seems to me that many arguments being made are only relevant
> under the hypothesis that insertion is ordered even after the first key
> removal. For example the user-friendliness argument, for I don't
> think it's very user-friendly to have a guarantee that disappears
> forever on the first __del__.
>

I agree with Antoine. It's "harder to explain" than "preserving
insertion order".

Dict performance is important because it's used for namespaces. But
delete-heavy workloads don't happen for namespaces.

It may make workloads like LRU caching slightly faster. But I don't
think the performance gain is large enough. Much of the overhead comes
from the API layer wrapping the LRU cache (e.g. functools.lru_cache).
So I expect the performance difference would show up only in some
micro-benchmarks.

Additionally, class namespaces must keep insertion order; that has been
part of the language spec since 3.6. So we would need two modes for such
an optimization, which makes dict more complicated.

So I'm +0.5 on making dict order part of the language spec, and -1 on
the "preserves insertion order until deletion" idea.

But my expectation may be wrong. Serhiy is working on it, so I'm waiting
for it to benchmark.

Regards,
INADA Naoki

From barry at python.org Tue Nov 7 11:36:47 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 08:36:47 -0800
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107161242.169ad35b@fsol>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info> <20171107161242.169ad35b@fsol>
Message-ID: <2D960266-8F80-477F-B152-AABF43706B73@python.org>

On Nov 7, 2017, at 07:12, Antoine Pitrou wrote:

> The problem is this is taking things to a level of precision that makes
> the guarantee tedious to remember and reason about.
>
> The only thing that's friendly to (non-expert) users is either "dicts
> are always ordered [by insertion order], point bar" or "dicts are not
> ordered, point bar". Anything in-between, with reservations depending
> on which operations are invoked and when, is not really helpful to the
> average (non-expert) user.
>
> Which is why I think the user-friendliness argument does not apply if
> order ceases to be guaranteed after __del__ is called.

That's a very important point. If it's difficult to explain, teach, and
retain the different ordering guarantees between built-in dict and
OrderedDict, it might in fact be better to not guarantee any ordering
for built-in dict *in the language specification*. Otherwise we might
need a section as big as chapter 5 in the Python Language Reference just
to cover dict ordering semantics. ;)

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From barry at python.org Tue Nov 7 11:41:12 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 08:41:12 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171107063303.GJ15990@ando.pearwood.info>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171107063303.GJ15990@ando.pearwood.info>
Message-ID: <85F20F95-D7CA-45B4-93AA-4B3AC1146E5D@python.org>

On Nov 6, 2017, at 22:33, Steven D'Aprano wrote:

> I think it would be reasonable to say that builtin dicts only maintain
> insertion order for insertions, lookups, and changing the value. Any
> mutation which deletes keys may arbitrarily re-order the dict.
>
> If the user wants a stronger guarantee, then they should use
> OrderedDict.

In fact, that *is* leaking CPython's implementation into the language
specification. If by chance CPython's implementation preserved order
even after key deletion, either now or in the future, would that be
defined as the ordering guarantees for built-in dict in the Python
Language Reference?

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From njs at pobox.com Tue Nov 7 11:45:22 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 7 Nov 2017 10:45:22 -0600
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: 

On Nov 7, 2017 06:24, "Nick Coghlan" wrote:

On 7 November 2017 at 19:30, Paul Moore wrote:
> On 7 November 2017 at 04:09, Nick Coghlan wrote:
>> Given the status quo, how do educators learn that the examples they're
>> teaching to their students are using deprecated APIs?
>
> By reading the documentation on what they are teaching, and by testing
> their examples with new versions with deprecation warnings turned on?
> Better than having warnings appear the first time they run a course
> with a new version of Python, surely?
>
> I understand the "but no-one actually does this" argument. And I
> understand that breakage as a result is worse than a few warnings. But
> enabling deprecation warnings by default feels to me like favouring
> the developer over the end user. I remember before the current
> behaviour was enabled and it was *immensely* frustrating to try to use
> 3rd party code and get a load of warnings. The only options were:
>
> 1. Report the bug - usually not much help, as I want to run the
> program *now*, not when a new release is made.
> 2. Fix the code (and ideally submit a PR upstream) - I want to *use*
> the program, not debug it.
> 3. Find the right setting/environment variable, and tweak how I call
> the program to apply it - which doesn't fix the root cause, it's just
> a workaround.

Yes, this is why I've come around to the view that we need to come up
with a viable definition of "third party code" and leave deprecation
warnings triggered by that code disabled by default.

My suggestion for that definition is to have the *default* meaning of
"third party code" be "everything that isn't __main__".

That way, if you get a deprecation warning at the REPL, it's
necessarily because of something *you* did, not because of something a
library you called did.
Ditto for single file scripts.

IPython actually made this change a few years ago; since 2015 I think it
has shown DeprecationWarnings by default if they're triggered by
__main__. It's helpful but I haven't noticed it eliminating this
problem. One limitation in particular is that it requires that the
warnings are correctly attributed to the code that triggered them, which
means that whoever is issuing the warning has to set the stacklevel=
correctly, and most people don't. (The default of stacklevel=1 is always
wrong for DeprecationWarning.) Also, IIRC it's actually impossible to
set the stacklevel= correctly when you're deprecating a whole module and
issue the warning at import time, because you need to know how many
stack frames the import system uses.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org Tue Nov 7 11:50:30 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 08:50:30 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <7b05ffda-fd10-864f-1225-9861decfcc4f@gmail.com>
References: <20171104173013.GA4005@bytereef.org> <7b05ffda-fd10-864f-1225-9861decfcc4f@gmail.com>
Message-ID: <407DC5A4-01CB-48A3-9B44-52BBE0804E04@python.org>

On Nov 7, 2017, at 01:51, Petr Viktorin wrote:
>
> When I explained this in 3.5, dicts rearranging themselves seemed quite weird to the newcomers.
> This year, I'm not looking forward to saying that dicts behave "intuitively", but you shouldn't rely on that, because they're theoretically allowed to rearrange themselves.
> The concept of "implementation detail" and language spec vs. multiple interpreter implementations isn't easy to explain to someone in a "basic coding literacy" course.

Perhaps, but IME, it's not hard to teach someone that in a code review.
Today, if I see someone submit a change that includes an implicit
assumption about ordering, I'll call that out. I can say "you can't rely
on dicts being ordered, so if that's what you want, use an OrderedDict
or sort your test data". That's usually a localized observation,
meaning, I can look at the diff and see that they are assuming dict
iteration ordering.

What happens when built-in dict's implementation behavior becomes a
language guarantee? Now the review is much more difficult because I
probably won't be able to tell just from a diff whether the ordering
guarantees are preserved. Do they delete a key somewhere? Who knows? I'm
not even sure that would be statically determinable since I'd have to
trace the use of that dictionary at run time to see if some "del
d[key]" is deleting the key in the dict under review or not. I can
probably only tell that at run time. So how do I accurately review that
code? Is the order presumption valid or invalid?

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From barry at python.org Tue Nov 7 11:55:50 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 08:55:50 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: 

On Nov 7, 2017, at 05:44, Paul Moore wrote:
> If you're a user and your application developer didn't do (1) or a
> library developer developing one of the libraries your application
> developer chose to use didn't do (2), you're hosed.
If you're a user
> who works in an environment where moving to a new version of the
> application is administratively complex, you're hosed.

"hosed" feels like too strong of a word here. DeprecationWarnings
usually don't break anything. Sure, they're annoying but they can
usually be easily ignored. Yes, there are some situations where DWs do
actively break things (as I've mentioned, some Debuntu build/test
environments). But those are also relatively easier to silence, or at
least the folks running those environments, or writing the code for
those environments, are usually more advanced developers for whom
setting an environment variable or flag isn't that big of a deal.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From pmiscml at gmail.com Tue Nov 7 12:09:32 2017
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Tue, 7 Nov 2017 19:09:32 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171107063303.GJ15990@ando.pearwood.info>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171107063303.GJ15990@ando.pearwood.info>
Message-ID: <20171107190932.09c34c57@x230>

Hello,

On Tue, 7 Nov 2017 17:33:03 +1100
Steven D'Aprano wrote:

> On Mon, Nov 06, 2017 at 06:35:48PM +0200, Paul Sokolovsky wrote:
>
> > For MicroPython, it would lead to quite an overhead to make
> > dictionary items be in insertion order. As I mentioned, MicroPython
> > optimizes for very low bookkeeping memory overhead, so lookups are
> > effectively O(n), but orderedness will increase constant factor
> > significantly, perhaps 5x.
>
> Paul, it would be good if you could respond to Raymond's earlier
> comments where he wrote:
>
> I've just looked at the MicroPython dictionary implementation and
> think they won't have a problem implementing O(1) compact dicts
> with ordering.
>
> The likely reason for the confusion is that they already have
> an option for an "ordered array" dict variant that does a brute-force
> linear search. However, their normal hashed lookup is very
> similar to ours and is easily amenable to being compact and ordered.
>
> See:
> https://github.com/micropython/micropython/blob/77a48e8cd493c0b0e0ca2d2ad58a110a23c6a232/py/map.c#L139
>
> Raymond has also volunteered to assist with this.

I tried to do that, so let me summarize the previous points and give my
take on contributing an alternative implementation:

MicroPython's dict implementation is optimized for the least
bookkeeping overhead, not performance on overlarge datasets. For the
heap sizes we target (64KB on average), that's a good choice (to put it
differently, MicroPython's motto is "the system's memory (all few dozen
kilobytes of it) belongs to the user, not to MicroPython").

Adding insertion order would either:

1. Lead to significant (several times) slowdown, or
2. Cause noticeable memory overhead.

Note that MicroPython uses the absolute minimum for a dictionary entry
- 2 words (key and value). Adding even one extra word (e.g. some
indirection pointer) means increasing overhead by 50%, cutting useful
user memory size by a third.

With all that in mind, MicroPython is a very configurable project (215
config options as of this moment). We can have a config option for dict
implementation too.
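The arithmetic behind that 50% figure is trivial, but worth spelling
out (illustrative numbers for a typical 32-bit microcontroller, not
measurements of the actual C code):

WORD = 4                      # bytes per machine word on a 32-bit MCU

def table_bytes(n_entries, words_per_entry):
    return n_entries * words_per_entry * WORD

print(table_bytes(100, 2))    # current layout: key + value = 800 bytes
print(table_bytes(100, 3))    # one extra ordering word = 1200 bytes, +50%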
But, the points above still hold - MicroPython targets low-memory systems and doesn't really target plenty-of-memory systems (there was never an aim to compete with CPython, the aim was to bring Python (the language) where CPython could never go). Put it another way, the alternative dict implementation is not expected to be used by default. If, with all the above in mind, someone, especially a CPython developer, wants to contribute an alternative dict implementation, it would be gladly accepted. (Note that if CPython followed the same policy, i.e. allowed compile-time selection of old vs new dict algorithm, we wouldn't have this thread.) (Disclaimer: all the above is just my IMHO as a long-time contributor, I'm not a MicroPython BDFL). And I really appreciate all the attention to MicroPython - that's the biggest case on my memory on python-dev. [] -- Best regards, Paul mailto:pmiscml at gmail.com From brett at python.org Tue Nov 7 12:39:22 2017 From: brett at python.org (Brett Cannon) Date: Tue, 07 Nov 2017 17:39:22 +0000 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> <46A3BF96-E531-4A46-8171-464299774910@stufft.io> Message-ID: On Tue, 7 Nov 2017 at 03:34 Paul Moore wrote: > On 7 November 2017 at 10:18, Steve Holden wrote: > > On Tue, Nov 7, 2017 at 12:35 AM, Donald Stufft wrote: > > [..] > > > >> > >> Maybe we just need to fully flesh out the idea of a "Python Core" (What > >> exists now as ?Python?) and a ?Python Platform? (Python Core + A select > set > >> of preinstalled libraries). Then typing can just be part of the Python > >> Platform, and gets installed as part of your typical installation, but > is > >> otherwise an independent piece of code. > >> > > Given that (type and other) annotations have been promoted as an optional > > feature of the language it seems unfair and perhaps unwise to add a > > dependency specifically to support > > them > > to the stdlib and therefore the Python core. > > Since type annotations are, as Paul pointed out, development-time > features, > > it would appear to behoove those wishing to use them to separate them in > > such a way that the software can be installed without annotations, and > > therefore without the need for the typing module. Assuming they would > like > > to see the widest possible distribution, of course. For selected > audiences I > > am sure typing will be de rigeur, > > From my point of view, I take the same premise and come to the > opposite conclusion. > > Because type annotations are a development-time feature, they should > *not* require a dependency in the final deployment (apart from Python > itself). However, because they are a language syntax feature they are > of necessity written in the application source. And type specification > of anything more complex than basic types (for example, List[int]) > requires classes defined in the typing module. Therefore, typing must > be in the stdlib so that use of type annotations by the developer > doesn't impose a runtime dependency on the end user. > That's not *necessarily* true if Lukasz's PEP lands and annotations become strings. The dependency would only need to be installed if someone chose to introspect the annotations and then "instantiate" them into actual objects. And that only comes up if someone does it from outside by a 3rd-party, who would then need to install the type annotation dependencies themselves. 
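As a rough sketch of the mechanics (with the annotations spelled as
strings, the way the PEP would effectively make all of them behave):

from typing import List, get_type_hints

def tally(scores: "List[int]") -> "int":   # stored as plain strings
    return sum(scores)

tally([1, 2, 3])              # runs fine; the annotations are never evaluated
get_type_hints(tally)         # only this introspection step needs the names
                              # (and hence 'typing') to actually resolve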
I fully admit that could get messy if introspection by third parties
happens at all regularly, and managing that kind of situation could get
tricky. In this instance I would argue that any code meant to facilitate
creating an object from an annotation should exist outside of the
stdlib if 'typing' gets removed, to prevent this sort of situation
without the user of such code being fully aware of what they are up
against.

-Brett

>
> If there were a way of including type annotations that had no runtime
> effect on the final deployed program, things would be different. But
> the decision to make annotations part of the language syntax precludes
> that. In particular, "it would appear to behoove those wishing to use
> them to separate them" - there's no way of doing that *precisely*
> because they are a language syntax feature.
>
> Paul
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pmiscml at gmail.com Tue Nov 7 12:39:41 2017
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Tue, 7 Nov 2017 19:39:41 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
Message-ID: <20171107193941.0c86fa7f@x230>

Hello,

On Tue, 7 Nov 2017 14:40:07 +0900
INADA Naoki wrote:

> I agree with Raymond. dict ordered by default makes for a better
> developer experience.
>
> So, my concern is: how important is the "language spec" for a minor
> (sorry about my bad vocabulary) implementation?
> What's the difference between "MicroPython is 100% compatible with the
> language spec" and
> "MicroPython is almost compatible with the Python language spec, but
> has some restrictions"?

So, the problem is that there's no "Python language spec". And over
time, that becomes a problem for alternative implementations,
especially not mainstream ("we have infinite amount of memory to burn")
ones. What we have is just the CPython documentation, which mixes the
Python language spec and CPython implementation details. And it is
being changed (including the "language spec" part) on the fiat of
CPython developers, apparently without any guarantees of platform
stability and backward compatibility. Over time, this really becomes a
visible drawback compared to close competitors. For example, year after
year, in JavaScript, [] + [] is still:

''

That's stability!

[]

--
Best regards,
Paul

mailto:pmiscml at gmail.com

From breamoreboy at gmail.com Tue Nov 7 04:39:50 2017
From: breamoreboy at gmail.com (Mark Lawrence)
Date: Tue, 7 Nov 2017 09:39:50 +0000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
Message-ID: 

On 07/11/17 04:05, David Mertz wrote:
> I strongly opposed adding an ordered guarantee to regular dicts. If the
> implementation happens to keep that, great.
Maybe OrderedDict can be
> rewritten to use the dict implementation. But the evidence that all
> implementations will always be fine with this restraint feels poor, and
> we have a perfectly good explicit OrderedDict for those who want that.
>

If there is an ordered guarantee for regular dicts but not for dict
literals, which is the subject of this thread, then haven't we got a
recipe for the kind of confusion that will lead to the number of
questions from newbies going off of the Richter scale?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

From barry at python.org Tue Nov 7 12:58:33 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 09:58:33 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <20171107193941.0c86fa7f@x230>
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

On Nov 7, 2017, at 09:39, Paul Sokolovsky wrote:
> So, the problem is that there's no "Python language spec".

There is a language specification:
https://docs.python.org/3/reference/index.html

But there are still corners that are undocumented, or topics that are
deliberately left as implementation details.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Tue Nov 7 13:41:46 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Tue, 7 Nov 2017 10:41:46 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: 
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
Message-ID: <1D824500-740F-4029-8235-47E961335789@langa.pl>

> On Nov 6, 2017, at 8:01 PM, Nick Coghlan wrote:
>
> As I indicated in my comment there, I'm now wondering if there might
> be an opportunity here whereby we could use the *dataclasses* module
> to define a stable non-provisional syntactically compatible subset of
> the typing module, and require folks to start explicitly depending on
> the typing module (as a regular PyPI dependency) if they want anything
> more sophisticated than that (Unions, Generics, TypeVars, etc).

I have an issue open about essentially this idea:
https://github.com/python/typing/issues/496

The consensus is that this is too expensive to do in time for Python
3.7. On the other hand, not doing anything is terrible in terms of
usability: now users will be forced to use both `typing` (a frozen
non-provisional version in the stdlib) and `typing_extensions` (new
features that we couldn't add directly to `typing`). This is the reason
why Guido asked about moving `typing` out of the standard library.

Currently it seems that the best thing that we can do is to ship an
upgradeable version of `typing` with Python 3.7.0. There are many
details to be figured out here:

1. Does that mean `typing` is going to be provisional forever? The
answer is surprisingly "yes" since it makes sense to keep updating the
bundled `typing` in 3.7.1, 3.7.2, etc. We need to update PEP 411 to talk
about this new case.

2. Does that mean `typing` can now become backwards incompatible
randomly? The answer should be "no" and already is "no".
If you look at typing.py, it already ships in Python 3.6 with some
crazy dances that make it work even on Python 3.3.

3. Does that mean that Debian is going to rip it out and make people
install a `python-typing` .deb? Sadly, probably yes. We need to figure
out what that means for us.

4. How do we even version this library then? Probably like this:
3.7.0.0, 3.7.0.1, 3.7.1.0, and so on. But that depends on answers to
the other questions above.

5. What will happen to typing_extensions and mypy_extensions? They can
probably be folded into typing 3.7.0.0. This is a huge win for
usability.

6. Where will we track issues for it? Which repo will be the source of
truth? Fortunately, we already have https://github.com/python/typing/

There's more things to figure out for sure. And the clock is ticking.
But this is a great case study to figure out since more packages in the
standard library could use a release model like this, and maybe in the
future this will finally lead to unbundling of the standard library
from the runtime. Yeah, it's a long shot and the dependency graph
between libraries in the standard library is rather extensive. But this
is the first step.

- Ł
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From lukasz at langa.pl Tue Nov 7 13:55:35 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Tue, 7 Nov 2017 10:55:35 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: 
References: <76e1a6c1-fd7a-e6cb-c061-852b5999aec8@python.org> <46A3BF96-E531-4A46-8171-464299774910@stufft.io>
Message-ID: 

> On Nov 7, 2017, at 9:39 AM, Brett Cannon wrote:
>
> On Tue, 7 Nov 2017 at 03:34 Paul Moore > wrote:
>
> Because type annotations are a development-time feature, they should
> *not* require a dependency in the final deployment (apart from Python
> itself). However, because they are a language syntax feature they are
> of necessity written in the application source. And type specification
> of anything more complex than basic types (for example, List[int])
> requires classes defined in the typing module. Therefore, typing must
> be in the stdlib so that use of type annotations by the developer
> doesn't impose a runtime dependency on the end user.
>
> That's not necessarily true if Lukasz's PEP lands and annotations
> become strings. The dependency would only need to be installed if
> someone chose to introspect the annotations and then "instantiate"
> them into actual objects. And that only comes up if someone does it
> from outside by a 3rd-party, who would then need to install the type
> annotation dependencies themselves.

PEP 563 is the first step there but it's not enough. For this idea to
work, `typing.TYPE_CHECKING` would need to be moved to the Python
runtime, so that you can do

    if TYPE_CHECKING:
        from typing import *

More importantly, you would put type aliases, type variables and
new-types in this `if` block, too.

But even if all this is true, there are still two remaining features
that require runtime availability:

1. `cast()`; and
2. creating generic classes by subclassing `Generic[T]` or any of its
subclasses. And soon enough, Protocols.

I hope I'm not forgetting any other cases. For `cast()` I have an idea
how to move it to a variable annotation. For generic classes, there's
no magic, you need `typing` at runtime. Fortunately, that latter case
is (I hope?)
relatively unlikely.

All the above is summarized here: https://github.com/python/typing/issues/496

- Ł

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From rosuav at gmail.com  Tue Nov 7 14:19:46 2017
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 8 Nov 2017 06:19:46 +1100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107153229.70ad6ad6@fsol>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID: 

On Wed, Nov 8, 2017 at 1:32 AM, Antoine Pitrou wrote:
> On Wed, 8 Nov 2017 00:01:04 +1000
> Nick Coghlan wrote:
>
>> On 7 November 2017 at 23:48, Stefan Krah wrote:
>> >
>> >
>> > This is just a reminder that the current dict is not an "OrderedDict":
>> >
>> >>>> from collections import OrderedDict
>> >>>> OrderedDict(a=0, b=1) == OrderedDict(b=1, a=0)
>> > False
>> >>>> dict(a=0, b=1) == dict(b=1, a=0)
>> > True
>> >
>> > The recent proposal was primarily about guaranteeing the insertion order of
>> > dict literals.
>> >
>> > If further guarantees are proposed, perhaps it would be a good idea to
>> > open a new thread and state what exactly is being proposed.
>>
>> "Insertion ordered until the first key removal" is the only guarantee
>> that's being proposed.
>
> Is it? It seems to me that many arguments being made are only relevant
> under the hypothesis that insertion is ordered even after the first key
> removal. For example the user-friendliness argument, for I don't
> think it's very user-friendly to have a guarantee that disappears
> forever on the first __del__.

I've used a good few dictionary objects in my time, but most of them
have literally never had any items deleted from them. Consider a
simple anagram finder:

    from collections import defaultdict

    anagrams = defaultdict(list)
    for word in words:
        anagrams[''.join(sorted(word))].append(word)

    for words in anagrams.values():
        if len(words) > 5:
            print(words)

New items get added to the dictionary, but nothing is ever removed. I
can assume, with CPython's current semantics, that the final iteration
will be in order of first seen; whatever order the incoming word list
was in, the output will be in too. This IS a useful guarantee, even
with the caveats.

ChrisA

From solipsis at pitrou.net  Tue Nov 7 14:28:03 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 7 Nov 2017 20:28:03 +0100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID: <20171107202803.24931cf6@fsol>

On Wed, 8 Nov 2017 06:19:46 +1100
Chris Angelico wrote:
>
> I've used a good few dictionary objects in my time, but most of them
> have literally never had any items deleted from them.

Well... It really depends what kind of problem you're solving. I
certainly delete or pop items from dicts quite often.

Let's not claim that deleting items from a dict is a rare or advanced
feature. It is not.

Regards

Antoine.
From pludemann at google.com Tue Nov 7 14:32:17 2017 From: pludemann at google.com (Peter Ludemann) Date: Tue, 7 Nov 2017 11:32:17 -0800 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171107202803.24931cf6@fsol> References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107202803.24931cf6@fsol> Message-ID: Does it matter whether the dict order after pop/delete is explicitly specified, or just specified that it's deterministic? On 7 November 2017 at 11:28, Antoine Pitrou wrote: > On Wed, 8 Nov 2017 06:19:46 +1100 > Chris Angelico wrote: > > > > I've used a good few dictionary objects in my time, but most of them > > have literally never had any items deleted from them. > > Well... It really depends what kind of problem you're solving. I > certainly delete or pop items from dicts quite often. > > Let's not claim that deleting items from a dict is a rare or advanced > feature. It is not. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > pludemann%40google.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Nov 7 14:38:56 2017 From: barry at python.org (Barry Warsaw) Date: Tue, 7 Nov 2017 11:38:56 -0800 Subject: [Python-Dev] Remove typing from the stdlib In-Reply-To: <1D824500-740F-4029-8235-47E961335789@langa.pl> References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com> <1D824500-740F-4029-8235-47E961335789@langa.pl> Message-ID: <03078661-776D-4D91-8083-29150B5C01D5@python.org> On Nov 7, 2017, at 10:41, Lukasz Langa wrote: > 3. Does that mean that Debian is going to rip it out and make people install a `python-typing` .deb? Sadly, probably yes. We need to figure out what that means for us. Maybe. Full disclosure, I?ve recently scaled back my contributions to Debian, so I won?t be doing this work, but if I was, I?d probably handle it very similarly to other replaceable external dependencies (e.g. pip). There is a small loophole in policy to allow for the building and use of wheels for just this *limited* purpose. So roughly I would propose packaging the external python-typing package as a separately installable deb, but also to build this into a wheel that we can pull in at python3.7 interpreter package build time. It?s fairly easy to do, and all the infrastructure is already there. What would be useful is for upstream CPython to make it easy to import an externally built and installed wheel, from some location outside of its own installed tree (/usr/share/python-wheels). Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From barry at python.org Tue Nov 7 14:48:01 2017 From: barry at python.org (Barry Warsaw) Date: Tue, 07 Nov 2017 11:48:01 -0800 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171107202803.24931cf6@fsol> References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107202803.24931cf6@fsol> Message-ID: Antoine Pitrou wrote: > Well... It really depends what kind of problem you're solving. I > certainly delete or pop items from dicts quite often. > > Let's not claim that deleting items from a dict is a rare or advanced > feature. It is not. +1. 
It's a pretty common pattern for handling optional keyword arguments, e.g. in subclass methods.

    class Foo(Bar):
        def foo(self, *args, **kws):
            # Pull out our own optional keyword before delegating the rest.
            mine = kws.pop('mine', None)
            super().foo(*args, **kws)
            do_something_myself(mine)

Now the question is, what guarantees does the language make about the ordering of kws that Foo.foo() is passing to Bar.foo()?

-Barry

From tim.peters at gmail.com  Tue Nov 7 14:50:11 2017
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 7 Nov 2017 13:50:11 -0600
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
 <20171107202803.24931cf6@fsol>
Message-ID: 

[Peter Ludemann]
> Does it matter whether the dict order after pop/delete is explicitly
> specified, or just specified that it's deterministic?

Any behavior whatsoever becomes essential after it becomes known ;-)

For example, dicts as currently ordered easily support LRU (least
recently used) purging like so:

On access:

    result = d.pop(key)
    d[key] = result

This moves `key` from wherever it was to the _most_ recently used
position. To purge the `numtopurge` least recently used keys (since
traversing the dict is always from least-recently to most-recently
added):

    topurge = tuple(itertools.islice(d, numtopurge))
    for key in topurge:
        del d[key]

Is it worth guaranteeing that will always "work" (as intended)? Not
to me, but I do have code that relies on it now - and we can count on
someone else saying it's utterly crucial ;-)

From njs at pobox.com  Tue Nov 7 15:05:50 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 7 Nov 2017 14:05:50 -0600
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
 <20171107193941.0c86fa7f@x230>
Message-ID: 

On Nov 7, 2017 12:02 PM, "Barry Warsaw" wrote:

On Nov 7, 2017, at 09:39, Paul Sokolovsky wrote:

> So, the problem is that there's no "Python language spec".

There is a language specification:
https://docs.python.org/3/reference/index.html

But there are still corners that are undocumented, or topics that are
deliberately left as implementation details.


Also, specs don't mean that much unless there are multiple implementations
in widespread use. In JS the spec matters because it describes the common
subset of the language you can expect to see across browsers, and lets the
browser vendors coordinate on future changes. Since users actually target
and test against multiple implementations, this is useful.

In python, CPython's dominance means that most libraries are written
against CPython's behavior instead of the spec, and alternative
implementations generally don't care about the spec; they care about
whether they can run the code their users want to run. So PyPy has found
that for their purposes, the python spec includes all kinds of obscure
internal implementation details like CPython's static type/heap type
distinction, the exact tricks CPython uses to optimize local variable
access, the CPython C API, etc. The Pyston devs found that for their
purposes, refcounting actually was a mandatory part of the python
language. Jython, MicroPython, etc make a different set of compatibility
tradeoffs again.

I'm not saying the spec is useless, but it's not magic either.
It only matters to the extent that it solves some problem for people.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org  Tue Nov 7 15:32:23 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 12:32:23 -0800
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

On Mon, Nov 6, 2017 at 7:23 PM, Terry Reedy wrote:

> On 11/6/2017 9:47 PM, Nick Coghlan wrote:
> [...]
>
>> - "only show me legacy calls in *my* code" (the "I trust my deps to
>> take care of themselves" use case)
>>
>
> Perhaps this should be the new default, where 'my code' means everything
> under the directory containing the startup file. If an app developer
> either fixes or suppresses warnings from app code when they first appear,
> then users will seldom or never see warnings. So for users, this would
> then be close to the current default.
>

Yes, this or a close variant sounds like a decent option.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul at ganssle.io  Tue Nov 7 15:35:54 2017
From: paul at ganssle.io (Paul G)
Date: Tue, 7 Nov 2017 15:35:54 -0500
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
 <20171107193941.0c86fa7f@x230>
Message-ID: 

If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic.

It could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order. That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property".

On 11/07/2017 12:58 PM, Barry Warsaw wrote:
> On Nov 7, 2017, at 09:39, Paul Sokolovsky wrote:
>
>> So, the problem is that there's no "Python language spec".
>
> There is a language specification: https://docs.python.org/3/reference/index.html
>
> But there are still corners that are undocumented, or topics that are deliberately left as implementation details.
>
> Cheers,
> -Barry
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From chris.barker at noaa.gov  Tue Nov 7 15:47:03 2017
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 7 Nov 2017 12:47:03 -0800
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
 <20171107202803.24931cf6@fsol>
Message-ID: 

On Tue, Nov 7, 2017 at 11:50 AM, Tim Peters wrote:

> Is it worth guaranteeing that will always "work" (as intended)? Not
> to me, but I do have code that relies on it now -

This is critically important -- no one looks at the language spec to
figure out how something works -- they try it, and if it works assume it
will continue to work.

if dict order is preserved in cPython, people WILL count on it!

And similarly, having order preserved only until a delete is going to
cause bugs in people's code that are less than careful :-( -- and that's
most of us.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From timothy.c.delaney at gmail.com  Tue Nov 7 16:08:51 2017
From: timothy.c.delaney at gmail.com (Tim Delaney)
Date: Wed, 8 Nov 2017 08:08:51 +1100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: 

On 8 November 2017 at 03:55, Barry Warsaw wrote:

> On Nov 7, 2017, at 05:44, Paul Moore wrote:
>
> > If you're a user and your application developer didn't do (1) or a
> > library developer developing one of the libraries your application
> > developer chose to use didn't do (2), you're hosed. If you're a user
> > who works in an environment where moving to a new version of the
> > application is administratively complex, you're hosed.
>
> "hosed" feels like too strong of a word here. DeprecationWarnings usually
> don't break anything. Sure, they're annoying but they can usually be
> easily ignored.
>
> Yes, there are some situations where DWs do actively break things (as I've
> mentioned, some Debuntu build/test environments). But those are also
> relatively easier to silence, or at least the folks running those
> environments, or writing the code for those environments, are usually more
> advanced developers for whom setting an environment variable or flag isn't
> that big of a deal.
>

One other case would be if you've got an application with no stderr (e.g.
a GUI application) - with enough deprecation warnings the stderr buffer
could become full and block, preventing the application from progressing.
I've just had a similar issue where a process was running as a service and
used subprocess.check_output() - stderr was written to the parent's
stderr, which didn't exist and caused the program to hang.

However, I'm definitely +1 on enabling DeprecationWarning by default, but
with mechanisms or recommendations for the application developer to
silence them selectively for the current release.

Tim Delaney
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Tue Nov 7 16:13:32 2017
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 7 Nov 2017 13:13:32 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
Message-ID: 

On Tue, Nov 7, 2017 at 7:21 AM, David Mertz wrote:

> But like Raymond, I make most of my living TEACHING Python.

and I make least of my living TEACHING Python ( :-) ), and:

> I feel like the extra order guarantee would make teaching slightly harder.

I can't understand how this possibly makes python (or dicts) harder to
teach -- you can simply say: "dicts insertion order is preserved" or not
mention it at all -- I think most people kind of expect it to be
preserved, which is why I (used to) always bring up the lack-of-ordering
of dicts early on -- but I suspect I simply won't bother mentioning it if
it's decided as a language feature.

> I'm sure he feels contrarily. It is true that with 3.6 I can no longer show
> an example where the dict display is oddly changed when printed.

Exactly! I have a really hard time deciding how to handle this --
explaining that ordering is not guaranteed, but not being able to
demonstrate it! And frankly, my students are all going to forget what I
"explained" soon enough, and replace it with their experience -- which
will be that dicts retain their order.

> But then, unordered sets also wind up sorting small integers on printing,
> even though that's not a guarantee.

but it's not hard to make an easy example with order not preserved -- just
start with a non-ordered example:

    In [6]: s = {3,7,4}
    In [7]: s
    Out[7]: {3, 4, 7}

or use other types... And "set" is a mathematical concept that has no
order, whereas the "dictionary" metaphor DOES have order...

> Ordering by insertion order (possibly "only until first deletion") is
> simply not obvious to beginners.

the "only until first deletion" part is really hard -- I hope we don't go
that route. But I don't think insertion-order is non-obvious --
particularly with literals.

> If we had, hypothetically, a dict that "always alphabetized keys" that
> would be more intuitive to them, for example. Insertion order feels obvious
> to us experts, but it really is an extra cognitive burden to learners
> beyond understanding "key/Val association".

again, I don't think so -- I kind of agree if dicts did not preserve order
in practice -- demonstrating that right out of the gate does help make the
"key/Val association" clear -- but if you can't demonstrate it, I think
we're looking at more confusion...

Maybe I'll ask my students this evening -- this is the first class I'm
teaching with py3.6 ....

> We've lived without order for so long that it seems that some of us now
> think data scrambling is a virtue. But it isn't. Scrambled data is the
> opposite of human friendly.

exactly!

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Tue Nov 7 16:15:15 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 7 Nov 2017 21:15:15 +0000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230> Message-ID: On 7 November 2017 at 20:35, Paul G wrote: > If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic. > > I could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order. That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property". This seems like overkill to me. By the same logic, we should add a "delay garbage collection" mode, that allows people to test that their code doesn't make unwarranted assumptions that a reference-counting garbage collector is in use. Most public projects (which are the only ones that really need to worry about this sort of detail) will probably be supporting Python 3.5 and likely even Python 2.7 for some time yet. So they test under non-order-preserving dictionary implementations anyway. And if code that's only targeted for Python 3.7 assumes order preserving dictionaries, it's likely not got a huge user base anyway, so what's the problem? Paul From barry at python.org Tue Nov 7 16:15:46 2017 From: barry at python.org (Barry Warsaw) Date: Tue, 7 Nov 2017 13:15:46 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230> Message-ID: <08810DC9-DDF7-423F-B918-2E4190CEA450@python.org> On Nov 7, 2017, at 12:35, Paul G wrote: > > If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic. As has been suggested elsewhere, if we decide not to make that guarantee, then we should probably deliberately randomize iteration order. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From evpok.padding at gmail.com Tue Nov 7 16:19:56 2017 From: evpok.padding at gmail.com (Evpok Padding) Date: Tue, 7 Nov 2017 22:19:56 +0100 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107202803.24931cf6@fsol> Message-ID: On 7 November 2017 at 21:47, Chris Barker wrote: > On Tue, Nov 7, 2017 at 11:50 AM, Tim Peters wrote: > >> Is it worth guaranteeing that will always "work" (as intended)? Not >> to me, but I do have code that relies on it now - > > > This is critically important -- no one looks at the language spec to > figure out how something works -- they try it, and if it works assume it > will continue to work. > > if dict order is preserved in cPython , people WILL count on it! > I won't, and if people do and their code break, they'll have only themselves to blame. Also, what proof do you have of that besides anecdotal evidence?? E -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Nov 7 16:23:13 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 7 Nov 2017 21:23:13 +0000 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> Message-ID: On 7 November 2017 at 21:13, Chris Barker wrote: > the "only until first deletion" part is really hard -- I hope we don't go > that route. But I don't think insertion-order is non-obvious -- particularly > with literals. But I thought we *had* gone that route. Actually, there's no "route" to go here. We're only talking about documenting the existing semantics that cPython has, and I thought that included no longer guaranteeing insertion order after a delete. Although I can't prove that by experiment at the moment. I don't know whether my confusion above is an argument for or against documenting the behaviour :-) Paul From paul at ganssle.io Tue Nov 7 16:29:38 2017 From: paul at ganssle.io (Paul G) Date: Tue, 7 Nov 2017 16:29:38 -0500 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230> Message-ID: @ Paul Moore > This seems like overkill to me. By the same logic, we should add a > "delay garbage collection" mode, that allows people to test that their > code doesn't make unwarranted assumptions that a reference-counting > garbage collector is in use. But you can get pretty fine-grained control of garbage collection with judicious use of gc.disable(). If there were a similar mechanism for changing the ordering properties of dictionaries in code, I wouldn't suggest it as an interpreter flag / option. 
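For illustration, here's roughly what that fine-grained control looks like today (a minimal sketch; the loop is just a stand-in for allocation-heavy work that creates reference cycles):

    import gc

    gc.disable()        # pause the cyclic collector for a critical section
    try:
        for _ in range(100000):
            node = []
            node.append(node)   # cyclic garbage; refcounting alone can't free it
    finally:
        gc.enable()     # restore automatic collection
        gc.collect()    # explicitly reclaim the cycles created above

Nothing comparable exists for dict ordering, since that behaviour isn't a tunable property of the type.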
And you're right - it's not pressing - the people likely to test with dictionaries scrambled are exactly the people likely to be supporting 2.7 and 3.5, but that won't be true forever, and it would be nice to have *some* mechanism to test that you're not relying on this property.

@Barry Warsaw
> As has been suggested elsewhere, if we decide not to make that guarantee, then we should probably deliberately randomize iteration order.

This was my suggestion of a middle way, since deliberate randomization seems like a step too far just to avoid people relying on implementation details. I've seen plenty of code that assumes that `assert` statements will always throw `AssertionError`, but that's not guaranteed, and some people run their test suites with -O just to check that they aren't making that assumption.

On 11/07/2017 04:15 PM, Paul Moore wrote:
> On 7 November 2017 at 20:35, Paul G wrote:
>> If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic.
>>
>> I could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order. That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property".
>
> This seems like overkill to me. By the same logic, we should add a
> "delay garbage collection" mode, that allows people to test that their
> code doesn't make unwarranted assumptions that a reference-counting
> garbage collector is in use.
>
> Most public projects (which are the only ones that really need to
> worry about this sort of detail) will probably be supporting Python
> 3.5 and likely even Python 2.7 for some time yet. So they test under
> non-order-preserving dictionary implementations anyway. And if code
> that's only targeted for Python 3.7 assumes order preserving
> dictionaries, it's likely not got a huge user base anyway, so what's
> the problem?
>
> Paul
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From python at mrabarnett.plus.com  Tue Nov 7 16:53:54 2017
From: python at mrabarnett.plus.com (MRAB)
Date: Tue, 7 Nov 2017 21:53:54 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: <6889a10a-7fd6-3143-c7a0-9d4feaa6fac3@mrabarnett.plus.com>

On 2017-11-07 14:17, Philipp A. wrote:
> Nick Coghlan wrote on Tue, 7 Nov 2017 at 14:57:
>
> Users of applications written in Python are not python-dev's users:
> they're the users of those applications, and hence the quality of
> that experience is up to the developers of those applications. [...]
>
> Thank you, that's exactly what I'm talking about. Besides: Nobody is
> "hosed"! There will be one occurrence of every DeprecationWarning in the
> stderr of the application. Hardly the end of the world for CLI
> applications and even invisible for GUI applications.
>
> If the devs care about the user not seeing any warnings in their CLI
> application, they'll have a test set up for that, which will tell them
> that the newest python-dev would raise a new warning, once they turn on
> testing for that release. That's completely fine!
>
> Explicit is better than implicit!
If I know lib X raises
> DeprecationWarnings I don't care about, I want to explicitly silence
> them, instead of missing out on all the valuable information in other
> DeprecationWarnings.

Also, errors should never pass silently. Deprecation warnings are future
errors.

From ncoghlan at gmail.com  Tue Nov 7 17:16:53 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 08:16:53 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

On 8 November 2017 at 06:32, Guido van Rossum wrote:
> On Mon, Nov 6, 2017 at 7:23 PM, Terry Reedy wrote:
>>
>> On 11/6/2017 9:47 PM, Nick Coghlan wrote:
>> [...]
>>>
>>> - "only show me legacy calls in *my* code" (the "I trust my deps to
>>> take care of themselves" use case)
>>
>>
>> Perhaps this should be the new default, where 'my code' means everything
>> under the directory containing the startup file. If an app developer either
>> fixes or suppresses warnings from app code when they first appear, then
>> users will seldom or never see warnings. So for users, this would then be
>> close to the current default.
>
> Yes, this or a close a variant sounds like a decent option.

Unfortunately, there are a lot of common directory layouts where a
simple filesystem based assumption like that one will lead to warnings
from third party code:

1. zipapp archives, where everything, including __main__.py, shares a
common path prefix (the zip archive)
2. Working directories that include a ".venv" link to the default
virtual environment for a project (this is a not uncommon way of
telling IDEs which venv to use)
3. Package execution, when the package includes a "_vendor" or "_bundle" subtree

The one thing we can be reasonably confident counts as "the
developer's code" is "__main__", but even that isn't completely
certain in cases where folks are publishing single file scripts for
use by others (e.g. a DeprecationWarning from get-pip.py would be
useful to pip developers, but almost entirely unhelpful to users of
the script itself).

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Tue Nov 7 17:33:09 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 08:33:09 +1000
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <1D824500-740F-4029-8235-47E961335789@langa.pl>
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
 <1D824500-740F-4029-8235-47E961335789@langa.pl>
Message-ID: 

On 8 November 2017 at 04:41, Lukasz Langa wrote:
> 4. How do we even version this library then? Probably like this: 3.7.0.0,
> 3.7.0.1, 3.7.1.0, and so on. But that depends on answers to the other
> questions above.

Something you may want to consider is switching to CalVer for typing
itself, such that we end up saying something like "Python 3.7.0
includes typing 2017.12.1".

My experience has been that CalVer is just *better* for
interoperability specifications, since it inherently conveys
information about the age of the specification. Saying "We target
typing 2017.12.1" in 2018 immediately lets people know they're going
to need some pretty up to date software to run a project that has that
caveat on it. By contrast, saying the same thing in 2021 means most
things released in the past 3 years should be able to handle it.
Such an approach also avoids future confusion if the final version of
Python 3.7 were to start bundling Python 3.8's version of typing.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From jwilk at jwilk.net  Tue Nov 7 16:34:45 2017
From: jwilk at jwilk.net (Jakub Wilk)
Date: Tue, 7 Nov 2017 22:34:45 +0100
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: 
Message-ID: <20171107213445.lxvpyxavvr4qd5l2@jwilk.net>

* Barry Warsaw , 2017-11-06, 15:56:
>We also depend on ldap3 . Suddenly we
>get a SyntaxError because ldap3 has a module ldap3/strategy/async.py.
>I say "suddenly" because of course *if* DeprecationWarnings had been
>enabled by default, I'm sure someone would have noticed that those
>imports were telling the developers about the impending problem in
>Python 3.6.
>
>https://github.com/cannatag/ldap3/issues/428

"import async" would indeed cause a deprecation warning, but that's not
what ldap3 does. The only uses of the now-keyword "async" in their
codebase are like this:

    from ..strategy.async import AsyncStrategy
    from .async import AsyncStrategy

These do not provoke deprecation warnings from Python 3.6. (They
probably should!)

I'm afraid that showing deprecation warnings by default wouldn't have
helped in this particular case.

--
Jakub Wilk

From lukasz at langa.pl  Tue Nov 7 17:53:34 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Tue, 7 Nov 2017 14:53:34 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: 
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
 <1D824500-740F-4029-8235-47E961335789@langa.pl>
Message-ID: <3B7084B4-39C7-4666-A6E8-27F081CEE3CA@langa.pl>

> On Nov 7, 2017, at 2:33 PM, Nick Coghlan wrote:
>
> On 8 November 2017 at 04:41, Lukasz Langa wrote:
>> 4. How do we even version this library then? Probably like this: 3.7.0.0,
>> 3.7.0.1, 3.7.1.0, and so on. But that depends on answers to the other
>> questions above.
>
> Something you may want to consider is switching to CalVer for typing
> itself, such that we end up saying something like "Python 3.7.0
> includes typing 2017.12.1".

You don't need to sell me on CalVer; all my private packages use this versioning scheme (just with the shorthand 17) :-) And yes, this is a good suggestion.

- Ł

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From barry at python.org  Tue Nov 7 18:16:28 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 15:16:28 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <20171107213445.lxvpyxavvr4qd5l2@jwilk.net>
References: <20171107213445.lxvpyxavvr4qd5l2@jwilk.net>
Message-ID: <07C5CA3F-8D68-4820-A5E2-FF4E6DE7B542@python.org>

On Nov 7, 2017, at 13:34, Jakub Wilk wrote:

> "import async" would indeed cause a deprecation warning, but that's not what ldap3 does. The only uses of the now-keyword "async" in their codebase are like this:
>
> from ..strategy.async import AsyncStrategy
> from .async import AsyncStrategy
>
> These do not provoke deprecation warnings from Python 3.6. (They probably should!)
>
> I'm afraid that showing deprecation warnings by default wouldn't have helped in this particular case.

Oh gosh, I should have tried that instead of assuming it would generate the same warning. Yes, that's definitely a bug.
I wonder if we should push back making async/await reserved words until Python 3.8? https://bugs.python.org/issue31973 Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From guido at python.org Tue Nov 7 19:03:49 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 7 Nov 2017 16:03:49 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> Message-ID: OK, so let's come up with a set of heuristics that does the right thing for those cases specifically. I'd say whenever you're executing code from a zipfile or some such it's not considered your own code (by default). On Tue, Nov 7, 2017 at 2:16 PM, Nick Coghlan wrote: > On 8 November 2017 at 06:32, Guido van Rossum wrote: > > On Mon, Nov 6, 2017 at 7:23 PM, Terry Reedy wrote: > >> > >> On 11/6/2017 9:47 PM, Nick Coghlan wrote: > >> [...] > >>> > >>> - "only show me legacy calls in *my* code" (the "I trust my deps to > >>> take care of themselves" use case) > >> > >> > >> Perhaps this should be the new default, where 'my code' means everything > >> under the directory containing the startup file. If an app developer > either > >> fixes or suppresses warnings from app code when they first appear, then > >> users will seldom or never see warnings. So for users, this would then > be > >> close to the current default. > > > > Yes, this or a close a variant sounds like a decent option. > > Unfortunately, there are a lot of common directory layouts where a > simple filesystem based assumption like that one will lead to warnings > from third party code: > > 1. zipapp archives, where everything, including __main__.py, shares a > common path prefix (the zip archive) > 2. Working directories that include a ".venv" link to the default > virtual environment for a project (this is a not uncommon way of > telling IDEs which venv to use) > 3. Package execution, when the package includes a "_vendor" or "_bundle" > subtree > > The one thing we can be reasonably confident counts as "the > developer's code" is "__main__", but even that isn't completely > certain in cases where folks are publishing single file scripts for > use by others (e.g. a DeprecationWarning from get-pip.py would be > useful to pip developers, but almost entirely unhelpful to users of > the script itself). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Nov 7 19:15:53 2017 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 7 Nov 2017 16:15:53 -0800 Subject: [Python-Dev] Guarantee ordered dict literals in v3.7? In-Reply-To: References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230> Message-ID: <-2724031332965349247@unknownmsgid> This seems like overkill to me. 
By the same logic, we should add a
"delay garbage collection" mode, that allows people to test that their
code doesn't make unwarranted assumptions that a reference-counting
garbage collector is in use.

Actually, there is a LOT of code out there that expects reference
counting. I know a lot of my code does. So if cPython does abandon it
some day, there will be issues.

Another thought:

Let's say Python declares that dict literals are order preserving. Then,
one day, someone invents a massively more efficient non-order preserving
implementation, and we want to use it for Python 4. So the Python 4
language spec says that dicts are not order preserving. And this is
flagged as an INCOMPATIBLE CHANGE. Now everyone knows to go and check
their code, and the 3to4 tool adds an "o" to all dict literals. People
will complain, but it won't be unexpected breakage.

Compare to leaving it as an implementation detail -- now we will have a
lot of code in the wild that expects order-preservation (because people
don't read the language spec) that will break with such a change without
any real warning.

I think we really do need to accept that cPython is a reference
implementation. Because it is.

By the way, I only just realized I can delete a key to demonstrate
non-order-preservation on py 3.6. So at least I know what to tell
students now.

-CHB

But you can get pretty fine-grained control of garbage collection with
judicious use of gc.disable(). If there were a similar mechanism for
changing the ordering properties of dictionaries in code, I wouldn't
suggest it as an interpreter flag / option.

And you're right - it's not pressing - the people likely to test with
dictionaries scrambled are exactly the people likely to be supporting
2.7 and 3.5, but that won't be true forever, and it would be nice to
have *some* mechanism to test that you're not relying on this property.

@Barry Warsaw
> As has been suggested elsewhere, if we decide not to make that
guarantee, then we should probably deliberately randomize iteration
order.

This was my suggestion of a middle way, since deliberate randomization
seems like a step too far just to avoid people relying on implementation
details. I've seen plenty of code that assumes that `assert` statements
will always throw `AssertionError`, but that's not guaranteed, and some
people run their test suites with -O just to check that they aren't
making that assumption.

On 11/07/2017 04:15 PM, Paul Moore wrote:
> On 7 November 2017 at 20:35, Paul G wrote:
>> If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic.
>>
>> I could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order. That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property".
>
> Most public projects (which are the only ones that really need to
> worry about this sort of detail) will probably be supporting Python
> 3.5 and likely even Python 2.7 for some time yet. So they test under
> non-order-preserving dictionary implementations anyway. And if code
> that's only targeted for Python 3.7 assumes order preserving
> dictionaries, it's likely not got a huge user base anyway, so what's
> the problem?
>
Paul
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org  Tue Nov 7 19:51:07 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 16:51:07 -0800
Subject: [Python-Dev] Remove typing from the stdlib
In-Reply-To: <3B7084B4-39C7-4666-A6E8-27F081CEE3CA@langa.pl>
References: <54936674-7d61-582e-2390-61b5d45d47b5@trueblade.com>
 <1D824500-740F-4029-8235-47E961335789@langa.pl>
 <3B7084B4-39C7-4666-A6E8-27F081CEE3CA@langa.pl>
Message-ID: 

Let me just cut this short. typing.py will stay, and it will stay provisional.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From songofacandy at gmail.com  Tue Nov 7 20:01:56 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 8 Nov 2017 10:01:56 +0900
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <-2724031332965349247@unknownmsgid>
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
 <20171107193941.0c86fa7f@x230> <-2724031332965349247@unknownmsgid>
Message-ID: 

> By the way, I only just realized I can delete a key to demonstrate
> non-order-preservation on py 3.6. So at least I know what to tell students
> now.

You can't. dict in Python 3.6 preserves insertion order even after key
deletion.

From lukasz at langa.pl  Tue Nov 7 20:17:18 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Tue, 7 Nov 2017 17:17:18 -0800
Subject: [Python-Dev] Reminder: PEP 479's __future__ about to become the default behavior
Message-ID: <5C97340E-E03E-481E-ADD9-22AC925A7E58@langa.pl>

This is according to https://www.python.org/dev/peps/pep-0479/#transition-plan but looking at Objects/genobject.c that hasn't been implemented yet. Is this as simple as removing the `else` clause here?

https://github.com/python/cpython/blob/master/Objects/genobject.c#L277-L298

- Ł

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From ncoghlan at gmail.com  Tue Nov 7 20:35:13 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 11:35:13 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org>
 <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

On 8 November 2017 at 10:03, Guido van Rossum wrote:
> OK, so let's come up with a set of heuristics that does the right thing for
> those cases specifically. I'd say whenever you're executing code from a
> zipfile or some such it's not considered your own code (by default).

My current preferred heuristic is just to add a new default filter to the list:

    once::DeprecationWarning:__main__

Which says to warn specifically for the __main__ module, and continue
ignoring everything else.
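For anyone who wants to experiment with that behaviour today, a rough runtime equivalent is (a sketch; the module pattern is matched against the module a warning gets attributed to, so this assumes libraries pass sensible stacklevel values):

    import warnings

    # Report each unique DeprecationWarning location in __main__ once,
    # leaving the default "ignore" behaviour in place everywhere else.
    warnings.filterwarnings("once", category=DeprecationWarning,
                            module="__main__")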
That way ad hoc scripts and the REPL will get warnings by default,
while zipapps and packages can avoid warnings by keeping their
__main__.py simple, and importing a CLI helper function from another
module. Entry point wrapper scripts will implicitly have the same
effect for installed packages.

If folks want to get warnings for other modules as well, then they can
either pass "-Wd" to get warnings for everything, or else enable them
selectively using the default main module filter as an example.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Tue Nov 7 20:44:12 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 11:44:12 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
 <20171107193941.0c86fa7f@x230>
Message-ID: 

On 8 November 2017 at 07:15, Paul Moore wrote:
> On 7 November 2017 at 20:35, Paul G wrote:
>> If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic.
>>
>> I could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order. That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property".
>
> This seems like overkill to me. By the same logic, we should add a
> "delay garbage collection" mode, that allows people to test that their
> code doesn't make unwarranted assumptions that a reference-counting
> garbage collector is in use.

Quite a few projects these days include PyPy in their CI test matrix,
and one of the things that does is test whether or not you're relying
on refcounting semantics.

We also offer ResourceWarning natively in CPython, which means if you
run under "python3 -Wd", you'll get a warning when external resources
like files are cleaned up implicitly:

    $ python3 -Wd
    Python 3.6.2 (default, Oct  2 2017, 16:51:32)
    [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> f = open(".bashrc")
    >>> del f
    __main__:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='.bashrc' mode='r' encoding='UTF-8'>
    >>>

> Most public projects (which are the only ones that really need to
> worry about this sort of detail) will probably be supporting Python
> 3.5 and likely even Python 2.7 for some time yet. So they test under
> non-order-preserving dictionary implementations anyway. And if code
> that's only targeted for Python 3.7 assumes order preserving
> dictionaries, it's likely not got a huge user base anyway, so what's
> the problem?

The concern is that if we don't explicitly perturb dict iteration
order (or offer a way to opt-in to that), then insertion ordering will
end up becoming a *de facto* compatibility requirement for Python
implementations as CPython 2.7 and other releases prior to 3.6 start
dropping out of typical test matrices.
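For example, nothing in the language reference currently promises that the following assertion holds, but it reliably passes on CPython 3.6 (a contrived sketch of the kind of implicit reliance that creeps in):

    fields = {}
    fields["id"] = int
    fields["name"] = str
    fields["email"] = str

    # Holds on CPython 3.6 because the implementation happens to preserve
    # insertion order, but only collections.OrderedDict guarantees it.
    assert list(fields) == ["id", "name", "email"]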
With both 2.7 and 3.5 going end-of-life in 2020, that means 3.7 (mid 2018) and 3.8 (late 2019 or early 2020) are our available opportunities to make that decision - beyond that, it starts getting a lot harder to change course away from implicit standardisation, as there'll be a lot more 3.6+-only code in the world by then. The way golang handled this problem is in their dict iterator: they added an extra field to hold a randomly generated hash, and used that hash as the starting point in their iteration sequence. We wouldn't be able to implement per-iterator order randomisation in CPython due to the way the PyDict_Next API works: that uses a caller-provided Py_ssize_t entry to track the current position in the iteration sequence. This means the simplest change we can make is to adjust the code in _PyDict_Next that reads the "current iteration index" from the user supplied pointer to instead interpret that as having a constant offset (e.g. starting with the last item in the "natural" iteration order, and then looping back around to the first one). "Simplest" isn't the same as "simple" though, as: 1. We can't change this globally for all dicts, as we actually *do* need keyword argument dicts and class body execution namespaces to be insertion ordered. That makes it either a per-instance setting, or else a subtly different dict type. 2. So far, I haven't actually come up with a perturbed iteration implementation that doesn't segfault the interpreter. The dict internals are nicely laid out to be iteration friendly, but they really do assume that you're going to start at index zero, and then iterate through to the end of the array. The bounds checking and pointer validity testing becomes relatively fiddly if you try to push against that and instead start iteration from a point partway through the storage array. That second point also becomes a concern from a performance perspective because this is code that runs on *each* iteration of a loop, rather than purely as part of the loop setup. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Tue Nov 7 20:46:04 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 7 Nov 2017 17:46:04 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> Message-ID: On Tue, Nov 7, 2017 at 5:35 PM, Nick Coghlan wrote: > On 8 November 2017 at 10:03, Guido van Rossum wrote: > > OK, so let's come up with a set of heuristics that does the right thing > for > > those cases specifically. I'd say whenever you're executing code from a > > zipfile or some such it's not considered your own code (by default). > > My current preferred heuristic is just to add a new default filter to the > list: > > once::DeprecationWarning:__main__ > > Which says to warn specifically for the __main__ module, and continue > ignoring everything else. > OK, that sounds great. > That way ad hoc scripts and the REPL will get warnings by default, > while zipapps and packages can avoid warnings by keeping their > __main__.py simple, and importing a CLI helper function from another > module. Entry point wrapper scripts will implicitly have the same > effect for installed packages. > That's fine. 
If folks want to get warnings for other modules as well, then they can
> either pass "-Wd" to get warnings for everything, or else enable them
> selectively using the default main module filter as an example.

Assuming that's how it already works, we're done here. :-)

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org  Tue Nov 7 20:50:58 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 7 Nov 2017 17:50:58 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: <-2724031332965349247@unknownmsgid>
References: <20171104173013.GA4005@bytereef.org>
 <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol>
 <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol>
 <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230>
 <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com>
 <20171107193941.0c86fa7f@x230> <-2724031332965349247@unknownmsgid>
Message-ID: 

On Nov 7, 2017, at 16:15, Chris Barker - NOAA Federal wrote:

> Actually, there is a LOT of code out there that expects reference counting. I know a lot of my code does. So if cPython does abandon it some day, there will be issues.

I see this all the time in code reviews:

    content = open(some_file).read()

I never let that go uncommented. So while you're right that CPython is the reference implementation, and few people read the language spec, it's still incumbent on us to point out broken code, code with implicit assumptions, and code that is not portable between implementations. Having the reference manual to point to chapter and verse is critical to avoid Python just devolving into an ad-hoc language ruled by its most popular implementation.

This is something I believe Guido realized early on when JPython was first invented, and it was an important distinction that Python held, e.g. versus Perl. I still believe it's an important principle to maintain.

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From barry at python.org  Tue Nov 7 21:04:00 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 07 Nov 2017 18:04:00 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <20171106143952.346e19d7@fsol>
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol>
 <20171106143952.346e19d7@fsol>
Message-ID: 

Antoine Pitrou wrote:
> On Mon, 6 Nov 2017 23:23:25 +1000
>> - tweak the default warning filters to turn DeprecationWarning back on
>> for __main__ only
>
> Thats sounds error-prone. I'd rather have them on by default
> everywhere.

If DeprecationWarnings were on by default, and setuptools were modified to silence them in entry point generated mains, and we had a simple API to easily silence them for manually written mains, wouldn't that handle the majority of relevant use cases nicely?

-Barry

From guido at python.org  Tue Nov 7 21:27:53 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 18:27:53 -0800
Subject: [Python-Dev] Reminder: PEP 479's __future__ about to become the default behavior
In-Reply-To: <5C97340E-E03E-481E-ADD9-22AC925A7E58@langa.pl>
References: <5C97340E-E03E-481E-ADD9-22AC925A7E58@langa.pl>
Message-ID: 

Thanks for the reminder! I don't know if it's that simple -- did you grep for occurrences of that flag (CO_FUTURE_GENERATOR_STOP)?
And I suppose there are tests for both sides that need to be adjusted?

On Tue, Nov 7, 2017 at 5:17 PM, Lukasz Langa wrote:
> This is according to https://www.python.org/dev/peps/pep-0479/#transition-plan
> but looking at Objects/genobject.c that hasn't been implemented yet.
> Is this as simple as removing the `else` clause here?
>
> https://github.com/python/cpython/blob/master/Objects/genobject.c#L277-L298
>
> - Ł

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Tue Nov 7 21:33:38 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 12:33:38 +1000
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

On 8 November 2017 at 11:44, Nick Coghlan wrote:
> 2. So far, I haven't actually come up with a perturbed iteration
> implementation that doesn't segfault the interpreter. The dict
> internals are nicely laid out to be iteration friendly, but they
> really do assume that you're going to start at index zero, and then
> iterate through to the end of the array. The bounds checking and
> pointer validity testing becomes relatively fiddly if you try to push
> against that and instead start iteration from a point partway through
> the storage array.

In case anyone else wants to experiment with a proof of concept:
https://github.com/ncoghlan/cpython/commit/6a8a6fa32f0a9cd71d9078fbb2b5ea44d5c5c14d

I think we've probably exhausted the utility of discussing this as a purely hypothetical change, and so the only way to move the discussion forward will be for someone to draft a patch that:

1. Perturbs iteration for regular dicts (it's OK for our purposes if it's still deterministic - it just shouldn't match insertion order the way odict does)
2. Switches keyword args and class body execution namespaces over to odict so the test suite passes again
3. Measures the impact such a change would have on the benchmark suite

My experiment is a starting point, but it will still be a fair bit of work to get it from there to a viable proof of concept that can be assessed against the status quo.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From songofacandy at gmail.com Tue Nov 7 21:40:38 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 8 Nov 2017 11:40:38 +0900
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

> 2. Switches keyword args and class body execution namespaces over to
> odict so the test suite passes again
> 3. Measures the impact such a change would have on the benchmark suite

For now, odict uses twice the memory and is 2x slower on iteration.
https://bugs.python.org/issue31265#msg301942

INADA Naoki

On Wed, Nov 8, 2017 at 11:33 AM, Nick Coghlan wrote:
> On 8 November 2017 at 11:44, Nick Coghlan wrote:
>> 2. So far, I haven't actually come up with a perturbed iteration
>> implementation that doesn't segfault the interpreter. The dict
>> internals are nicely laid out to be iteration friendly, but they
>> really do assume that you're going to start at index zero, and then
>> iterate through to the end of the array. The bounds checking and
>> pointer validity testing becomes relatively fiddly if you try to push
>> against that and instead start iteration from a point partway through
>> the storage array.
>
> In case anyone else wants to experiment with a proof of concept:
> https://github.com/ncoghlan/cpython/commit/6a8a6fa32f0a9cd71d9078fbb2b5ea44d5c5c14d
>
> I think we've probably exhausted the utility of discussing this as a
> purely hypothetical change, and so the only way to move the discussion
> forward will be for someone to draft a patch that:
>
> 1. Perturbs iteration for regular dicts (it's OK for our purposes if
> it's still deterministic - it just shouldn't match insertion order the
> way odict does)
> 2. Switches keyword args and class body execution namespaces over to
> odict so the test suite passes again
> 3. Measures the impact such a change would have on the benchmark suite
>
> My experiment is a starting point, but it will still be a fair bit of
> work to get it from there to a viable proof of concept that can be
> assessed against the status quo.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com

From guido at python.org Tue Nov 7 21:41:28 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 18:41:28 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol>
Message-ID: 

To cut this thread short, I say we should use Nick's proposal to turn these warnings on for __main__ but off elsewhere. (See https://mail.python.org/pipermail/python-dev/2017-November/150364.html.)

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From songofacandy at gmail.com Tue Nov 7 19:59:11 2017
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 8 Nov 2017 09:59:11 +0900
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

On Wed, Nov 8, 2017 at 5:35 AM, Paul G wrote:
> If dictionary order is *not* guaranteed in the spec and the dictionary order isn't randomized (which I think everyone agrees is a bit messed up), it would probably be useful if you could enable "random order mode" in CPython, so you can stress-test that your code isn't making any assumptions about dictionary ordering without having to use an implementation where order isn't deterministic.
>
> It could either be something like an environment variable SCRAMBLE_DICT_ORDER or a flag like --scramble-dict-order.
> That would probably help somewhat with the very real problem of "everyone's going to start counting on this ordered property".

Namespace is ordered by language spec. What would SCRAMBLE_DICT_ORDER do in this code?

    class A:
        def __init__(self):
            self.a, self.b, self.c = 1, 2, 3

    a = A()
    print(a.__dict__)
    a.__dict__.pop('a')
    print(a.__dict__)

Anyway, I'm -1 on adding such an option to dict. dict in CPython is already complicated for performance and compatibility reasons. I don't want to add more complexity to dict for such a reason.

Regards,
INADA Naoki

From ncoghlan at gmail.com Tue Nov 7 21:57:33 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 12:57:33 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

On 8 November 2017 at 11:46, Guido van Rossum wrote:
> On Tue, Nov 7, 2017 at 5:35 PM, Nick Coghlan wrote:
>>
>> On 8 November 2017 at 10:03, Guido van Rossum wrote:
>> > OK, so let's come up with a set of heuristics that does the right thing
>> > for
>> > those cases specifically. I'd say whenever you're executing code from a
>> > zipfile or some such it's not considered your own code (by default).
>>
>> My current preferred heuristic is just to add a new default filter to the
>> list:
>>
>> once::DeprecationWarning:__main__
>>
>> Which says to warn specifically for the __main__ module, and continue
>> ignoring everything else.
>
> OK, that sounds great.
>
>> That way ad hoc scripts and the REPL will get warnings by default,
>> while zipapps and packages can avoid warnings by keeping their
>> __main__.py simple, and importing a CLI helper function from another
>> module. Entry point wrapper scripts will implicitly have the same
>> effect for installed packages.
>
> That's fine.
>
>> If folks want to get warnings for other modules as well, then they can
>> either pass "-Wd" to get warnings for everything, or else enable them
>> selectively using the default main module filter as an example.
>
> Assuming that's how it already works, we're done here. :-)

Cool :)

RFE filed here for that specific change to the default filter set:
https://bugs.python.org/issue31975

Cheers,
Nick.

P.S. If anyone wants to follow up on some of the other more esoteric ideas we've discussed in the past few days, they can be separate RFEs.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Tue Nov 7 22:07:12 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 13:07:12 +1000
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107202803.24931cf6@fsol>
Message-ID: 

On 8 November 2017 at 07:19, Evpok Padding wrote:
> On 7 November 2017 at 21:47, Chris Barker wrote:
>> if dict order is preserved in cPython , people WILL count on it!
>
> I won't, and if people do and their code breaks, they'll have only themselves
> to blame.
> Also, what proof do you have of that besides anecdotal evidence?

~27 calendar years of anecdotal evidence across a multitude of CPython API behaviours (as well as API usage in other projects).
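To make that failure mode concrete, here is a minimal sketch (illustrative only, not taken from the thread) of code that passes on CPython 3.6 purely because of the insertion-ordered dict implementation, while stating nothing about that assumption:

    # Passes on CPython 3.6+, but relies on an implementation detail
    # that the pre-3.7 language spec does not guarantee.
    headers = {}
    headers['Host'] = 'example.com'
    headers['Accept'] = 'text/plain'
    headers['User-Agent'] = 'demo'
    assert list(headers) == ['Host', 'Accept', 'User-Agent']

Code like this runs cleanly on CPython and then misbehaves on any implementation that chooses a different dict strategy, which is exactly the portability trap being described.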
Other implementation developers don't say "CPython's runtime behaviour is the real Python specification" for the fun of it - they say it because "my code works on CPython, but it does the wrong thing on your interpreter, so I'm going to stick with CPython" is a real barrier to end user adoption, no matter what the language specification says.

Blaming users for not writing portable code doesn't achieve anything in that scenario - it just puts an extra road block in the way of those users trying out the alternative interpreter implementation.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From jsbueno at python.org.br Tue Nov 7 22:24:02 2017
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Wed, 8 Nov 2017 01:24:02 -0200
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name> <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

On 6 November 2017 at 05:09, Guido van Rossum wrote:
> I still find this unfriendly to users of Python scripts and small apps who
> are not the developers of those scripts. (Large apps tend to spit out so
> much logging it doesn't really matter.)
>
> Isn't there a better heuristic we can come up with so that the warnings tend
> to be on for developers but off for end users?
>
> --
> --Guido van Rossum (python.org/~guido)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br
>

From jsbueno at python.org.br Tue Nov 7 22:27:11 2017
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Wed, 8 Nov 2017 01:27:11 -0200
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106022923.GA26858@phdru.name> <23040.146.864543.206484@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

Sorry - trigger happy on the previous message.

On 6 November 2017 at 05:09, Guido van Rossum wrote:
> I still find this unfriendly to users of Python scripts and small apps who
> are not the developers of those scripts. (Large apps tend to spit out so
> much logging it doesn't really matter.)
>
> Isn't there a better heuristic we can come up with so that the warnings tend
> to be on for developers but off for end users?

So, I don't know who is the "Jose Bueno" mentioned in the first message - :-)

I just conveyed a concern from the Brython developers, as I follow the project - and I'd rather have my terminal clean when using programs.

Making something happen when one does "python setup.py develop" to flip the switches so that the deprecation warnings get turned on might be a nice idea, and would help people overall, though.
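For reference, the switch itself already exists in CPython today; the missing piece would only be flipping it automatically for development installs. A minimal sketch of what such a hook would amount to:

    # Existing opt-in mechanisms:
    #
    #   $ python -W default::DeprecationWarning myscript.py
    #   $ PYTHONWARNINGS=default::DeprecationWarning python myscript.py
    #
    # or programmatically, from a development-only entry point:
    import warnings
    warnings.simplefilter('default', DeprecationWarning)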
joao bueno -><-

>
> --
> --Guido van Rossum (python.org/~guido)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br
>

From random832 at fastmail.com Tue Nov 7 23:38:23 2017
From: random832 at fastmail.com (Random832)
Date: Tue, 07 Nov 2017 23:38:23 -0500
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171107145641.GB19802@ando.pearwood.info>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info>
Message-ID: <1510115903.3881092.1165295520.3B1417EC@webmail.messagingengine.com>

On Tue, Nov 7, 2017, at 09:56, Steven D'Aprano wrote:
> Don't let the perfect be the enemy of the good.
>
> For many applications, keys are never removed from the dict, so this
> doesn't matter. If you never delete a key, then the remaining keys will
> never be reordered.
>
> I think that Nick's intent was not to say that after a single deletion,
> the ordering guarantee goes away "forever", but that a deletion may be
> permitted to reorder the keys, after which further additions will honour
> insertion order. At least, that's how I interpret him.

Honestly, I think the more intuitive way would be the "other way around" - deletions don't themselves change the order, but they do cause subsequent insertions to be allowed to insert at places other than the end. Does the implementation in CPython have this property?

One way I have seen this done is that the items themselves live in an array, and each new item is always inserted in the first empty slot in the array (empty slots form a freelist). The hash buckets contain indices into the array.

From steve at pearwood.info Tue Nov 7 23:47:36 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 8 Nov 2017 15:47:36 +1100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: 
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info>
Message-ID: <20171108044736.GD19802@ando.pearwood.info>

On Tue, Nov 07, 2017 at 05:37:15PM +0200, Serhiy Storchaka wrote:
> 07.11.17 16:56, Steven D'Aprano wrote:
> >To clarify: if we start with an empty dict, add keys A...D, delete B,
> >then add E...H, we could expect:
[...]
> Rather
>
> {A: 1, D: 4, C: 3} # move the last item in place of removed
> {A: 1, D: 4, C: 3, E: 5}

Thanks for the correction.

-- 
Steve

From guido at python.org Tue Nov 7 23:58:39 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 20:58:39 -0800
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

What does "RFE" mean? I don't recall hearing that term before on the Python bug tracker. Request For E-what? (I presume it's a RedHat internal term?)

On Tue, Nov 7, 2017 at 6:57 PM, Nick Coghlan wrote:
> On 8 November 2017 at 11:46, Guido van Rossum wrote:
> > On Tue, Nov 7, 2017 at 5:35 PM, Nick Coghlan wrote:
> >>
> >> On 8 November 2017 at 10:03, Guido van Rossum wrote:
> >> > OK, so let's come up with a set of heuristics that does the right thing
> >> > for those cases specifically.
> >> > I'd say whenever you're executing code from a
> >> > zipfile or some such it's not considered your own code (by default).
> >>
> >> My current preferred heuristic is just to add a new default filter to
> >> the list:
> >>
> >> once::DeprecationWarning:__main__
> >
> > OK, that sounds great.
> >
> >> That way ad hoc scripts and the REPL will get warnings by default,
> >> while zipapps and packages can avoid warnings by keeping their
> >> __main__.py simple, and importing a CLI helper function from another
> >> module. Entry point wrapper scripts will implicitly have the same
> >> effect for installed packages.
> >
> > That's fine.
> >
> >> If folks want to get warnings for other modules as well, then they can
> >> either pass "-Wd" to get warnings for everything, or else enable them
> >> selectively using the default main module filter as an example.
> >
> > Assuming that's how it already works, we're done here. :-)
>
> Cool :)
>
> RFE filed here for that specific change to the default filter set:
> https://bugs.python.org/issue31975
>
> Cheers,
> Nick.
>
> P.S. If anyone wants to follow up on some of the other more esoteric
> ideas we've discussed in the past few days, they can be separate RFEs.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org Wed Nov 8 00:23:20 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 21:23:20 -0800
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171108044736.GD19802@ando.pearwood.info>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107145641.GB19802@ando.pearwood.info> <20171108044736.GD19802@ando.pearwood.info>
Message-ID: 

I'll probably get complaints because I'm not waiting for the benchmark results to come in, but I think I've seen enough. To me the only defensible behavior *other than the pre-3.6 behavior* is that after deletions the order remains preserved and new insertions happen at the end -- i.e. the same as where they would go if the deleted items were never inserted.

I find it hard to believe that there would be a speed difference that's noticeable outside micro-benchmarks or applications making extreme use of dicts.

PS. It seems odd that people arguing that the behavior after deletions doesn't matter are also arguing that deletions are uncommon? Surely there should be no speed penalty if you never delete anything from a dict, so if you believe deletions are rare anyway, why would you care about paying a bit extra for them?

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Wed Nov 8 00:28:36 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 8 Nov 2017 15:28:36 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

On 8 November 2017 at 14:58, Guido van Rossum wrote:
> What does "RFE" mean? I don't recall hearing that term before on the Python
> bug tracker. Request For E-what? (I presume it's a RedHat internal term?)

Request for Enhancement (as opposed to a bug report).
It's not Red Hat specific, but shortening it to the initialism is a bit enterprisey :)

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Wed Nov 8 00:45:05 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 21:45:05 -0800
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

It seems there must be at least two threads for each topic worth discussing at all. Therefore I feel compelled to point to https://mail.python.org/pipermail/python-dev/2017-November/150381.html, where I state my own conclusion about dict order.

I know Paul Sokolovsky does not claim to speak for MicroPython, but I think he had better shut up or he's nevertheless going to damage its reputation beyond repair. I note that micropython.org claims "MicroPython aims to be as compatible with normal Python as possible to allow you to transfer code with ease from the desktop to a microcontroller or embedded system." To me this implies that it is entirely up to the MicroPython project to decide what they'll do about the order of dict elements -- they can keep doing what they are doing, or choose a new dict implementation that satisfies their space and performance needs while also preserving order, or give developers a compile-time choice, or give users a choice at startup time, or something else I haven't thought of yet. Any of those options is better than continuing the flamewar that Paul is waging.

Finally: the dict type should not be endowed with other parts of the OrderedDict API, nor should other API changes to dict be considered.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org Wed Nov 8 00:46:24 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 21:46:24 -0800
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: 

I'd call that a feature request. :-)

On Tue, Nov 7, 2017 at 9:28 PM, Nick Coghlan wrote:
> On 8 November 2017 at 14:58, Guido van Rossum wrote:
> > What does "RFE" mean? I don't recall hearing that term before on the Python
> > bug tracker. Request For E-what? (I presume it's a RedHat internal term?)
>
> Request for Enhancement (as opposed to a bug report). It's not Red Hat
> specific, but shortening it to the initialism is a bit enterprisey :)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org Wed Nov 8 01:24:10 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 7 Nov 2017 22:24:10 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: 
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl>
Message-ID: 

To top off today's set of hopeful thread closures I want to put this one to rest. While I'm not ready to accept PEP 563 yet (I'm waiting for the first cut of the implementation) I *am* hereby nixing the thunk idea. When I said I was more worried about that than about the stringification that wasn't about compatibility so much as about conceptual simplicity.

Right now the contents of __annotations__ can be one of two things: an object created by evaluating an annotation, or a string literal. With stringification these two possibilities remain the only two possibilities (and they will still both occur in files that don't use the __future__ import). With thunks, there would be a third option, a 0-argument callable. I've always disliked APIs (like Django templates, IIRC) that "transparently" support callables by calling them and treat all other values as-is. (What if I have a value that just *happens* to be callable?) I don't want to add such an API. I also don't like the idea that there's nothing you can do with a thunk besides calling it -- you can't meaningfully introspect it (not without building your own bytecode interpreter anyway).

Using an AST instead of a string is also undesirable -- the AST changes in each release, and the usual strong compatibility guarantees don't apply here. And how are you going to do anything with it? If you've got a string and you want an AST node, it's one call away. But if you've got an AST node and you want either a string *or* the object to which that string would evaluate, you've got a lot of work to do. Plus the AST takes up a lot more space than the string, and we don't have a way to put an AST in a bytecode file. (And as Inada-san pointed out a thunk *also* takes up more space than a string.)

Nick, please don't try to save the thunk proposal by carefully dissecting every one of my objections. That will just prolong its demise.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From flying-sheep at web.de Wed Nov 8 02:47:07 2017
From: flying-sheep at web.de (Philipp A.)
Date: Wed, 08 Nov 2017 07:47:07 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol>
Message-ID: 

Hi Guido,

As far as I can see, the general consensus seems to be to turn them back on in general: The last person to argue against it was Paul Moore, and he since said:

"OK, I overstated [that you're "hosed" by DeprecationWarnings appearing]. Apologies. My recollection is of a lot more end user complaints when deprecation warnings were previously switched on than others seem to remember, but I can't find hard facts, so I'll assume I'm misremembering."

Besides, quite some of the problems people mention would only be fixed by turning them on in general, not with the compromise.

So I don't think we need a compromise, right?

Best, Philipp

Guido van Rossum wrote on Wed., 8 Nov. 2017 at 03:46:
> To cut this thread short, I say we should use Nick's proposal to turn
> these warnings on for __main__ but off elsewhere. (See
> https://mail.python.org/pipermail/python-dev/2017-November/150364.html.)
>
> --
> --Guido van Rossum (python.org/~guido)
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/flying-sheep%40web.de

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From storchaka at gmail.com Wed Nov 8 03:33:07 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 8 Nov 2017 10:33:07 +0200
Subject: [Python-Dev] Guarantee ordered dict literals in v3.7?
In-Reply-To: 
References: <20171104173013.GA4005@bytereef.org> <20171106121817.4c3c2367@x230> <20171106122754.23939971@fsol> <20171106131434.GE15990@ando.pearwood.info> <20171106143045.3bc16405@fsol> <20171106183548.20fee86b@x230> <20171106210757.25e14be4@x230> <83E68BC6-186B-4EB1-9F91-37F6E5792715@gmail.com> <20171107193941.0c86fa7f@x230>
Message-ID: 

08.11.17 04:33, Nick Coghlan wrote:
> On 8 November 2017 at 11:44, Nick Coghlan wrote:
>> 2. So far, I haven't actually come up with a perturbed iteration
>> implementation that doesn't segfault the interpreter. The dict
>> internals are nicely laid out to be iteration friendly, but they
>> really do assume that you're going to start at index zero, and then
>> iterate through to the end of the array. The bounds checking and
>> pointer validity testing becomes relatively fiddly if you try to push
>> against that and instead start iteration from a point partway through
>> the storage array.
>
> In case anyone else wants to experiment with a proof of concept:
> https://github.com/ncoghlan/cpython/commit/6a8a6fa32f0a9cd71d9078fbb2b5ea44d5c5c14d
>
> I think we've probably exhausted the utility of discussing this as a
> purely hypothetical change, and so the only way to move the discussion
> forward will be for someone to draft a patch that:
>
> 1. Perturbs iteration for regular dicts (it's OK for our purposes if
> it's still deterministic - it just shouldn't match insertion order the
> way odict does)
> 2. Switches keyword args and class body execution namespaces over to
> odict so the test suite passes again
> 3. Measures the impact such a change would have on the benchmark suite
>
> My experiment is a starting point, but it will still be a fair bit of
> work to get it from there to a viable proof of concept that can be
> assessed against the status quo.

It may be easier and more efficient to break the order at insertion:

1. Insert in the reversed order.
2. Add at the end or at the beginning, changing the order on every insertion.
3. Swap with an arbitrary item.

From solipsis at pitrou.net Wed Nov 8 04:21:57 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 8 Nov 2017 10:21:57 +0100
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca>
Message-ID: <20171108102157.2170b59e@fsol>

On Wed, 8 Nov 2017 11:35:13 +1000 Nick Coghlan wrote:
> On 8 November 2017 at 10:03, Guido van Rossum wrote:
> > OK, so let's come up with a set of heuristics that does the right thing for
> > those cases specifically. I'd say whenever you're executing code from a
> > zipfile or some such it's not considered your own code (by default).
> > My current preferred heuristic is just to add a new default filter to the list: > > once::DeprecationWarning:__main__ Special cases are not special enough to break the rules. In other words, I'm -1 on this. Not only does it add complication and inconsistency (bound to catch people by surprise) to an already non-trivial set of default warning settings, but it doesn't even solve any problem that I'm aware of. The idea that __main__ scripts should get special treatment here is entirely gratuitous. Regards Antoine. From solipsis at pitrou.net Wed Nov 8 04:28:57 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 8 Nov 2017 10:28:57 +0100 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol> <20171107202803.24931cf6@fsol> Message-ID: <20171108102857.649651c9@fsol> On Wed, 8 Nov 2017 13:07:12 +1000 Nick Coghlan wrote: > On 8 November 2017 at 07:19, Evpok Padding wrote: > > On 7 November 2017 at 21:47, Chris Barker wrote: > >> if dict order is preserved in cPython , people WILL count on it! > > > > I won't, and if people do and their code break, they'll have only themselves > > to blame. > > Also, what proof do you have of that besides anecdotal evidence?? > > ~27 calendar years of anecdotal evidence across a multitude of CPython > API behaviours (as well as API usage in other projects). > > Other implementation developers don't say "CPython's runtime behaviour > is the real Python specification" for the fun of it - they say it > because "my code works on CPython, but it does the wrong thing on your > interpreter, so I'm going to stick with CPython" is a real barrier to > end user adoption, no matter what the language specification says. Yet, PyPy has no reference counting, and it doesn't seem to be a cause of concern. Broken code is fixed along the way, when people notice. Regards Antoine. From erik.m.bray at gmail.com Wed Nov 8 09:39:55 2017 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 8 Nov 2017 15:39:55 +0100 Subject: [Python-Dev] Clarifying Cygwin support in CPython Message-ID: Hi folks, As some people here know I've been working off and on for a while to improve CPython's support of Cygwin. I'm motivated in part by a need to have software working on Python 3.x on Cygwin for the foreseeable future, preferably with minimal graft. (As an incidental side-effect Python's test suite--especially of system-level functionality--serves as an interesting test suite for Cygwin itself too.) This is partly what motivated PEP 539 [1], although that PEP had the advantage of benefiting other POSIX-compatible platforms as well (and in fact was fixing an aspect of CPython that made it unfriendly to supporting other platforms). As far as I can tell, the first commit to Python to add any kind of support for Cygwin was made by Guido (committing a contributed patch) back in 1999 [2]. Since then, bits and pieces have been added for Cygwin's benefit over time, with varying degrees of impact in terms of #ifdefs and the like (for the most part Cygwin does not require *much* in the way of special support, but it does have some differences from a "normal" POSIX-compliant platform, such as the possibility for case-insensitive filesystems and executables that end in .exe). I don't know whether it's ever been "officially supported" but someone with a longer memory of the project can comment on that. I'm not sure if it was discussed at all or not in the context of PEP 11. 
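As a small illustration of the kind of platform accommodation involved, here is a hypothetical helper (illustrative only, not code from CPython itself) that allows for the .exe suffix:

    import os
    import sys

    def find_built_interpreter(build_dir):
        # On Cygwin (and Windows) the freshly built interpreter is
        # "python.exe"; on most other POSIX platforms it is plain "python".
        name = 'python.exe' if sys.platform in ('cygwin', 'win32') else 'python'
        path = os.path.join(build_dir, name)
        return path if os.path.exists(path) else None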
I have personally put in a fair amount of effort already in either fixing issues on Cygwin (many of these issues also impact MinGW), or more often than not fixing issues in the CPython test suite on Cygwin--these are mostly tests that are broken due to invalid assumptions about the platform (for example, that there is always a "root" user with uid=0; this is not the case on Cygwin). In other cases some tests need to be skipped or worked around due to platform-specific bugs, and Cygwin is hardly the only case of this in the test suite.

I also have an experimental AppVeyor configuration for running the tests on Cygwin [3], as well as an experimental buildbot (not available on the internet, but working). These currently rely on a custom branch that includes fixes needed for the test suite to run to completion without crashing or hanging (e.g. https://bugs.python.org/issue31885). It would be nice to add this as an official buildbot, but I'm not sure if it makes sense to do that until it's "green", or at least not crashing. I have several other patches to the tests toward this goal, and am currently down to ~22 tests failing.

Before I do any more work on this, however, it would be best to once and for all clarify the support for Cygwin in CPython, as it has never been "officially supported" nor unsupported--this way we can avoid having this discussion every time a patch related to Cygwin comes up. I could provide some arguments for why I believe Cygwin should be supported, but before this gets too long I'd just like to float the idea of having the discussion in the first place. It's also not exactly clear to me how to meet the standards in PEP 11 for supporting a platform--in particular it's not clear when a buildbot is considered "stable", or how to achieve that without getting necessary fixes merged into the main branch in the first place.

Thanks,
Erik

[1] https://www.python.org/dev/peps/pep-0539/
[2] https://github.com/python/cpython/commit/717d1fdf2acbef5e6b47d9b4dcf48ef1829be685
[3] https://ci.appveyor.com/project/embray/cpython

From wes.turner at gmail.com Wed Nov 8 10:31:23 2017
From: wes.turner at gmail.com (Wes Turner)
Date: Wed, 8 Nov 2017 10:31:23 -0500
Subject: [Python-Dev] OrderedDict(kwargs) optimization?
Message-ID: 

On Wednesday, November 8, 2017, Guido van Rossum wrote:
> It seems there must be at least two threads for each topic worth
> discussing at all. Therefore I feel compelled to point to
> https://mail.python.org/pipermail/python-dev/2017-November/150381.html,
> where I state my own conclusion about dict order.
>
> .
>
> Finally: the dict type should not be endowed with other parts of the
> OrderedDict API, nor should other API changes to dict be considered.

Is there an opportunity to support a fast cast to OrderedDict from 3.6 dict? Can it just copy .keys() into the OrderedDict linked list? Or is there more overhead to the transition?

That'd be great for preserving kwargs' order after a pop() or a del?

    def func(**kwargs):
        kwargs = OrderedDict(kwargs)
        arg2 = kwargs.pop('arg2')

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zachary.ware+pydev at gmail.com Wed Nov 8 11:28:01 2017
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Wed, 8 Nov 2017 10:28:01 -0600
Subject: [Python-Dev] Clarifying Cygwin support in CPython
In-Reply-To: 
References: 
Message-ID: 

On Wed, Nov 8, 2017 at 8:39 AM, Erik Bray wrote:
> a platform--in particular it's not clear when a buildbot is considered
> "stable", or how to achieve that without getting necessary fixes
> merged into the main branch in the first place.

I think in this context, "stable" just means "keeps a connection to the buildbot master and doesn't blow up when told to build" :). As such, I'm ready to get you added to the fleet whenever you are.

-- 
Zach

From storchaka at gmail.com Wed Nov 8 11:30:25 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 8 Nov 2017 18:30:25 +0200
Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
Message-ID: 

Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all maintained versions ([1] and [2]). Despite their names they are private. I think that they are stable enough now and would be helpful in third-party code. Are there any objections against adding them to the stable C API? [3]

[1] https://bugs.python.org/issue20440
[2] https://bugs.python.org/issue26200
[3] https://bugs.python.org/issue31983

From victor.stinner at gmail.com Wed Nov 8 11:37:20 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 8 Nov 2017 17:37:20 +0100
Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
In-Reply-To: 
References: 
Message-ID: 

I like these macros!

Technically, would it be possible to use an inline function instead of a macro for Python 3.7?

Victor

2017-11-08 17:30 GMT+01:00 Serhiy Storchaka :
> Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all
> maintained versions ([1] and [2]). Despite their names they are private. I
> think that they are stable enough now and would be helpful in third-party
> code. Are there any objections against adding them to the stable C API? [3]
>
> [1] https://bugs.python.org/issue20440
> [2] https://bugs.python.org/issue26200
> [3] https://bugs.python.org/issue31983
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com

From storchaka at gmail.com Wed Nov 8 11:47:52 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 8 Nov 2017 18:47:52 +0200
Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
In-Reply-To: 
References: 
Message-ID: 

08.11.17 18:37, Victor Stinner wrote:
> I like these macros!
>
> Technically, would it be possible to use an inline function instead of
> a macro for Python 3.7?

No, unless there is a way to pass arguments by reference in C99. Py_SETREF(x, y) is the safe equivalent of the sequence: Py_DECREF(x); x = y;

From guido at python.org Wed Nov 8 11:47:23 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 8 Nov 2017 08:47:23 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol>
Message-ID: 

Philipp,

You seem to have missed Nick's posts where he clearly accepts that a middle ground is necessary. R D Murray is also still unconvinced. (And obviously I myself am against reverting to the behavior from 7 years ago.)
If we can't agree on some middle ground, the status quo will be maintained.

On Tue, Nov 7, 2017 at 11:47 PM, Philipp A. wrote:
> Hi Guido,
>
> As far as I can see, the general consensus seems to be to turn them back
> on in general: The last person to argue against it was Paul Moore, and he
> since said:
>
> "OK, I overstated [that you're "hosed" by DeprecationWarnings appearing].
> Apologies. My recollection is of a lot more end user complaints when
> deprecation warnings were previously switched on than others seem to
> remember, but I can't find hard facts, so I'll assume I'm misremembering."
>
> Besides, quite some of the problems people mention would only be fixed by
> turning them on in general, not with the compromise.
>
> So I don't think we need a compromise, right?
>
> Best, Philipp
>
> Guido van Rossum wrote on Wed., 8 Nov. 2017 at 03:46:
>
>> To cut this thread short, I say we should use Nick's proposal to turn
>> these warnings on for __main__ but off elsewhere. (See
>> https://mail.python.org/pipermail/python-dev/2017-November/150364.html.)
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/flying-sheep%40web.de

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org Wed Nov 8 13:56:51 2017
From: barry at python.org (Barry Warsaw)
Date: Wed, 8 Nov 2017 10:56:51 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: 
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol>
Message-ID: <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org>

On Nov 8, 2017, at 08:47, Guido van Rossum wrote:
>
> You seem to have missed Nick's posts where he clearly accepts that a middle ground is necessary. R D Murray is also still unconvinced. (And obviously I myself am against reverting to the behavior from 7 years ago.) If we can't agree on some middle ground, the status quo will be maintained.

I haven't seen a response to my suggestion, so it's possible that it got missed in the flurry. With coordination with setuptools, we could:

* Re-enable DeprecationWarning by default
* Add a simplified API for specifically silencing DeprecationWarnings
* Modify setuptools to call this API for generated entry point scripts

I think this would mean that most application users would still not see the warnings. The simplified API would be available for handcrafted scripts to call to accomplish the same thing the setuptools enhancement would provide. Developers would see DeprecationWarnings in their development and test environments.

The simplified API would be the equivalent of ignore::DeprecationWarning, so with some additional documentation even versions of applications running on versions of Python < 3.7 would still have an "out". (Yes, the simplified API is just a convenience moving forward.)

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From guido at python.org Wed Nov 8 15:02:45 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 8 Nov 2017 12:02:45 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org>
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol> <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org>
Message-ID: 

I hadn't seen that, but it requires too much cooperation of library owners.

On Wed, Nov 8, 2017 at 10:56 AM, Barry Warsaw wrote:
> On Nov 8, 2017, at 08:47, Guido van Rossum wrote:
> >
> > You seem to have missed Nick's posts where he clearly accepts that a
> middle ground is necessary. R D Murray is also still unconvinced. (And
> obviously I myself am against reverting to the behavior from 7 years ago.)
> If we can't agree on some middle ground, the status quo will be maintained.
>
> I haven't seen a response to my suggestion, so it's possible that it got
> missed in the flurry. With coordination with setuptools, we could:
>
> * Re-enable DeprecationWarning by default
> * Add a simplified API for specifically silencing DeprecationWarnings
> * Modify setuptools to call this API for generated entry point scripts
>
> I think this would mean that most application users would still not see
> the warnings. The simplified API would be available for handcrafted
> scripts to call to accomplish the same thing the setuptools enhancement
> would provide. Developers would see DeprecationWarnings in their
> development and test environments.
>
> The simplified API would be the equivalent of ignore::DeprecationWarning,
> so with some additional documentation even versions of applications running
> on versions of Python < 3.7 would still have an "out". (Yes, the
> simplified API is just a convenience moving forward.)
>
> Cheers,
> -Barry
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com Wed Nov 8 15:12:55 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 8 Nov 2017 20:12:55 +0000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org>
References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol> <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org>
Message-ID: 

On 8 November 2017 at 18:56, Barry Warsaw wrote:
> On Nov 8, 2017, at 08:47, Guido van Rossum wrote:
>>
>> You seem to have missed Nick's posts where he clearly accepts that a middle ground is necessary. R D Murray is also still unconvinced. (And obviously I myself am against reverting to the behavior from 7 years ago.) If we can't agree on some middle ground, the status quo will be maintained.
>
> I haven't seen a response to my suggestion, so it's possible that it got missed in the flurry.
> With coordination with setuptools, we could:
>
> * Re-enable DeprecationWarning by default
> * Add a simplified API for specifically silencing DeprecationWarnings
> * Modify setuptools to call this API for generated entry point scripts
>
> I think this would mean that most application users would still not see the warnings. The simplified API would be available for handcrafted scripts to call to accomplish the same thing the setuptools enhancement would provide. Developers would see DeprecationWarnings in their development and test environments.
>
> The simplified API would be the equivalent of ignore::DeprecationWarning, so with some additional documentation even versions of applications running on versions of Python < 3.7 would still have an "out". (Yes, the simplified API is just a convenience moving forward.)

pip uses distutils for its script wrappers, but uses its own script template, so it'd need a pip change too (which means it'd be in pip 10 but not earlier versions).

Paul

From ncoghlan at gmail.com Wed Nov 8 15:33:11 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 9 Nov 2017 06:33:11 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: <20171108102157.2170b59e@fsol>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol>
Message-ID: 

On 8 November 2017 at 19:21, Antoine Pitrou wrote:
> On Wed, 8 Nov 2017 11:35:13 +1000
> Nick Coghlan wrote:
>
>> On 8 November 2017 at 10:03, Guido van Rossum wrote:
>> > OK, so let's come up with a set of heuristics that does the right thing for
>> > those cases specifically. I'd say whenever you're executing code from a
>> > zipfile or some such it's not considered your own code (by default).
Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From hodgestar+pythondev at gmail.com Wed Nov 8 16:09:45 2017 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Wed, 8 Nov 2017 23:09:45 +0200 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> Message-ID: On Wed, Nov 8, 2017 at 10:33 PM, Nick Coghlan wrote: > For interactive use, the principle ends up being "Code you write gives > deprecation warnings, code you import doesn't" (which is the main > aspect I care about, since it's the one that semi-regularly trips me > up when I forget that DeprecationWarning is off by default). I with Antoine here. The idea that "code in __main__" is the set of code someone wrote really seems a lot like guessing (and not even very good guessing). If everyone follows the "keep __main__ small" then scripts won't automatically display deprecation warnings by default and so the original problem of "warnings are easy to miss" remains. Counter proposal -- why don't testing frameworks turn on warnings by default? E.g. like pytest-warnings? That way people running tests will see warnings and others won't unless they ask to see them. From ncoghlan at gmail.com Wed Nov 8 16:43:29 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 07:43:29 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> Message-ID: On 9 November 2017 at 07:09, Simon Cross wrote: > On Wed, Nov 8, 2017 at 10:33 PM, Nick Coghlan wrote: >> For interactive use, the principle ends up being "Code you write gives >> deprecation warnings, code you import doesn't" (which is the main >> aspect I care about, since it's the one that semi-regularly trips me >> up when I forget that DeprecationWarning is off by default). > > I with Antoine here. The idea that "code in __main__" is the set of code someone > wrote really seems a lot like guessing (and not even very good guessing). > > If everyone follows the "keep __main__ small" then scripts won't automatically > display deprecation warnings by default and so the original problem of "warnings > are easy to miss" remains. That's an intended outcome - the goal is to have applications that are packaged in any way (installed Python package, importable packages with a __main__ submodule, zip archives with a __main__.py file) continue to silence deprecation warnings by default, as in those cases, the folks running them are *users* of those applications, rather than developers on them. > > Counter proposal -- why don't testing frameworks turn on warnings by default? > E.g. like pytest-warnings? That way people running tests will see warnings and > others won't unless they ask to see them. They do. The problem is that this only works for things that actually have test suites, which misses a lot of "Interactive Python is part of the user experience" use cases, like CPython's own REPL, and personal scripting on a platform like Linux or conda. 
However, between them, the following two guidelines should provide pretty good deprecation warning coverage for the world's Python code: 1. If it's in __main__, it will emit deprecation warnings at runtime 2. If it's not in __main__, it should have a test suite Thus the future answer to "I don't want deprecation warnings at runtime" becomes "Move it out of __main__ into an importable module, and give it a test suite". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From antoine at python.org Wed Nov 8 16:46:27 2017 From: antoine at python.org (Antoine Pitrou) Date: Wed, 8 Nov 2017 22:46:27 +0100 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> Message-ID: <99945e14-5099-1e28-fad8-08be204766ef@python.org> Le 08/11/2017 ? 22:43, Nick Coghlan a ?crit?: > > However, between them, the following two guidelines should provide > pretty good deprecation warning coverage for the world's Python code: > > 1. If it's in __main__, it will emit deprecation warnings at runtime > 2. If it's not in __main__, it should have a test suite Nick, have you actually read the discussion and the complaints people had with the current situation? Most of them *don't* specifically talk about __main__ scripts. Regards Antoine. From jeanpatrick.francoia at gmail.com Wed Nov 8 17:01:42 2017 From: jeanpatrick.francoia at gmail.com (Jean-Patrick Francoia) Date: Wed, 8 Nov 2017 22:01:42 +0000 Subject: [Python-Dev] PEP 484: difference between tuple and () Message-ID: This is my first post on this list, so please don't kill me if I ask it in the wrong place, or if the question is stupid. I asked this question on Stack Overflow already: https://stackoverflow.com/questions/47163048/python-annotations-difference-between-tuple-and In very short, which form is correct ? |deffunc()->Tuple[int,int] ||| || But this requires to import the typing module. Or this (doesn't crash): |deffunc()->(int,int): | || || -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelle.zijlstra at gmail.com Wed Nov 8 17:26:54 2017 From: jelle.zijlstra at gmail.com (Jelle Zijlstra) Date: Wed, 8 Nov 2017 14:26:54 -0800 Subject: [Python-Dev] PEP 484: difference between tuple and () In-Reply-To: References: Message-ID: 2017-11-08 14:01 GMT-08:00 Jean-Patrick Francoia < jeanpatrick.francoia at gmail.com>: > This is my first post on this list, so please don't kill me if I ask it in > the wrong place, or if the question is stupid. > > > I asked this question on Stack Overflow already: > > https://stackoverflow.com/questions/47163048/python- > annotations-difference-between-tuple-and > > > In very short, which form is correct ? > > > def func() -> Tuple[int, int] > > > > But this requires to import the typing module. > > > Or this (doesn't crash): > > > def func() -> (int, int): > > > The former is correct. Type checkers should reject the second one. But because type checking in Python is through static analysis, either will work at runtime?you need to run a separate static analysis tool like mypy or pytype to find type errors in your code. Also, python-dev is a mailing list for the development of Python, not for questions about Python. The Gitter chatroom at https://gitter.im/python/typing and the typing issue tracker at https://github.com/python/typing are better places for questions about typing in Python. 
From ncoghlan at gmail.com Wed Nov 8 19:10:43 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 10:10:43 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: <99945e14-5099-1e28-fad8-08be204766ef@python.org> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On 9 November 2017 at 07:46, Antoine Pitrou wrote: > > On 08/11/2017 at 22:43, Nick Coghlan wrote: >> >> However, between them, the following two guidelines should provide >> pretty good deprecation warning coverage for the world's Python code: >> >> 1. If it's in __main__, it will emit deprecation warnings at runtime >> 2. If it's not in __main__, it should have a test suite > > Nick, have you actually read the discussion and the complaints people > had with the current situation? Most of them *don't* specifically talk > about __main__ scripts. I have, and I've also re-read the discussions regarding why the default got changed in the first place. Behaviour up until 2.6 & 3.1:

    once::DeprecationWarning

Behaviour since 2.7 & 3.2:

    ignore::DeprecationWarning

With test runners overriding the default filters to set it back to "once::DeprecationWarning". The rationale for that change was so that end users of applications that merely happened to be written in Python wouldn't see deprecation warnings when Linux distros (or the end user) updated to a new Python version. It had the downside that you had to remember to opt-in to deprecation warnings in order to see them, which is a problem if you mostly use Python for ad hoc personal scripting. Proposed behaviour for Python 3.7+:

    once::DeprecationWarning:__main__
    ignore::DeprecationWarning

With test runners still overriding the default filters to set them back to "once::DeprecationWarning". This is a partial reversion back to the pre-2.7 behaviour, focused specifically on interactive use and ad hoc personal scripting. For ad hoc *distributed* scripting, the changed default encourages upgrading from single-file scripts to the zipapp model, and then minimising the amount of code that runs directly in __main__.py. I expect this will be a sufficient change to solve the specific problem I'm personally concerned by, so I'm no longer inclined to argue for anything more complicated. Other folks may have other concerns that this tweak to the default filters doesn't address - they can continue to build their case for more complex options using this as the new baseline behaviour. Cheers, Nick.
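In warnings-module terms, the proposed default amounts to something like the following sketch (an approximation for illustration only, not the actual patch):

    import warnings

    # Filters are matched front to back, so the catch-all "ignore" entry is
    # added first and the more specific __main__ rule is then inserted ahead
    # of it, mirroring "once::DeprecationWarning:__main__" followed by
    # "ignore::DeprecationWarning".
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    warnings.filterwarnings("once", category=DeprecationWarning, module="__main__")

    warnings.warn("example", DeprecationWarning)  # shown when this line runs in __main__

A DeprecationWarning attributed to any other module still falls through to the catch-all "ignore" entry and stays silent.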
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Nov 8 20:49:01 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 11:49:01 +1000 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: On 8 November 2017 at 16:24, Guido van Rossum wrote: > I also don't like the idea that there's nothing you can do with a thunk > besides calling it -- you can't meaningfully introspect it (not without > building your own bytecode interpreter anyway). Wait, that wasn't what I was suggesting at all - with thunks exposing their code object the same way a function does (i.e. as a `__code__` attribute), the introspection functions in `dis` would still work on them, so you'd be able to look at things like which variable names they referenced, thus granting the caller complete control over *how* they resolved those variable names (by setting them in the local namespace passed to the call). This is why they'd have interesting potential future use cases as general purpose callbacks - every local, nonlocal, global, and builtin name reference would implicitly be an optional parameter (or a required parameter if the name couldn't be resolved as a nonlocal, global, or builtin). > Using an AST instead of a string is also undesirable -- the AST changes in > each release, and the usual strong compatibility guarantees don't apply > here. And how are you going to do anything with it? If you've got a string > and you want an AST node, it's one call away. But if you've got an AST node > and you want either a string *or* the object to which that string would > evaluate, you've got a lot of work to do. Plus the AST takes up a lot more > space than the string, and we don't have a way to put an AST in a bytecode > file. (And as Inada-san pointed out a thunk *also* takes up more space than > a string.) > > Nick, please don't try to save the thunk proposal by carefully dissecting > every one of my objections. That will just prolong its demise. Just the one objection, since you seem to be rejecting something I didn't suggest (i.e. adding an opaque callable type that the dis and inspect modules didn't understand). I agree that would be a bad idea, but it's also not what I was suggesting we do. Instead, thunks would offer all the same introspection features as lambda expressions do, they'd just differ in the following ways: * the parameter list on their code objects would always be empty * the parameter list for their __call__ method would always be "ns=None" * they'd be compiled without CO_OPTIMIZED (the same as a class namespace) * they'd look up their closure references using LOAD_CLASSDEREF (the same as a class namespace) That said, even without a full-fledged thunk based solution to handling lexical scoping I think there's a way to resolve the nested class problem in PEP 563 that works for both explicitly and implicitly quoted strings, while still leaving the door open to replacing implicitly quoted strings with thunks at a later date: stating that *if* users want such nested references to be resolvable at runtime, they need to inject a runtime reference to the outermost class into the inner class namespace. That is, if you want to take:

    class C:
        field = 1
        class D:
            def method(a: C.field):
                ...
and move it inside a function, that would actually look like:

    def f():
        class C:
            field = 1
            class D:
                def method(a: C.field):
                    ...
        C.D.C = C  # Make annotations work at runtime
        return f

That leaves the door open to a future PEP that proposes thunk-based annotations as part of proposing thunks as a new low level delayed evaluation primitive. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From songofacandy at gmail.com Wed Nov 8 20:48:50 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 9 Nov 2017 10:48:50 +0900 Subject: [Python-Dev] OrderedDict(kwargs) optimization? In-Reply-To: References: Message-ID: > That'd be great for preserving kwargs' order after a pop() or a del? To clarify, order is preserved after pop in Python 3.6 (and maybe 3.7). There is discussion about breaking it to optimize for limited use cases, but I don't think it's worth enough to discuss more until it demonstrates real performance gain. > Is there an opportunity to support a fast cast to OrderedDict from 3.6 dict? > Can it just copy .keys() into the OrderedDict linked list? Or is there more overhead to the transition? https://bugs.python.org/issue31265 Regards, INADA Naoki From ncoghlan at gmail.com Wed Nov 8 21:00:06 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 12:00:06 +1000 Subject: [Python-Dev] Clarifying Cygwin support in CPython In-Reply-To: References: Message-ID: On 9 November 2017 at 02:28, Zachary Ware wrote: > On Wed, Nov 8, 2017 at 8:39 AM, Erik Bray wrote: >> a platform--in particular it's not clear when a buildbot is considered >> "stable", or how to achieve that without getting necessary fixes >> merged into the main branch in the first place. > > I think in this context, "stable" just means "keeps a connection to > the buildbot master and doesn't blow up when told to build" :). As > such, I'm ready to get you added to the fleet whenever you are. Yep, and from a PEP 11 perspective, it's fine to have the order be: 1. add the new buildbot worker while it's still red 2. make the changes needed to turn it green in 3.X 3. keep it green for future releases of 3.X+ There just needs to be enough lead time between step 1 and the release of 3.Xb1 to land the compatibility fixes without causing problems for the release manager (and you're currently fine on that front with respect to 3.7). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From raymond.hettinger at gmail.com Wed Nov 8 21:08:44 2017 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 8 Nov 2017 18:08:44 -0800 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: References: Message-ID: > On Nov 8, 2017, at 8:30 AM, Serhiy Storchaka wrote: > > Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all maintained versions ([1] and [2]). Despite their names they are private. I think that they are stable enough now and would be helpful in third-party code. Are there any objections against adding them to the stable C API? [3] I have mixed feelings about this. You and Victor seem to really like these macros, but they have been problematic for me. I'm not sure whether it is a conceptual issue or a naming issue, but the presence of these macros impairs my ability to read code and determine whether the refcounts are correct. I usually end up replacing the code with the unrolled macro so that I can count the refs across all the code paths.
The other issue is that when there are multiple occurrences of these macros for multiple variables, it interferes with my best practice of deferring all decrefs until the data structures are in a fully consistent state. Any one of these can cause arbitrary code to run. I greatly prefer putting all the decrefs at the end to increase my confidence that it is okay to run other code that might reenter the current code. Pure python functions effectively have this built-in because the locals all get decreffed at the end of the function when a return-statement is encountered. That practice helps me avoid hard-to-spot re-entrancy issues. Lastly, I think we should have a preference to not grow the stable C API. Bigger APIs are harder to learn and remember, not so much for you and Victor who use these frequently, but for everyone else who has to look up all the macros whose function isn't immediately self-evident. Raymond From barry at python.org Wed Nov 8 21:17:28 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 8 Nov 2017 18:17:28 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: <18B6C5BB-BC56-4CA0-8E9C-17A4E9677942@python.org> On Nov 8, 2017, at 16:10, Nick Coghlan wrote: > The rationale for that change was so that end users of applications > that merely happened to be written in Python wouldn't see deprecation > warnings when Linux distros (or the end user) updated to a new Python > version. Instead they'd see breakage as DeprecationWarnings turned into errors. :( I'm not saying that Python apps, regardless of who they are provided by, should expose DeprecationWarnings to their end users. I actually think it's good that they don't because I don't think most users care if their apps are written in Python, and wouldn't know what to do about those warnings anyway. And they do cause anxiety. I suppose there are lots of ways to do this, but at least I'm pretty sure we all agree that end users shouldn't see DeprecationWarnings, while developers should. -Barry From ncoghlan at gmail.com Wed Nov 8 21:18:18 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 12:18:18 +1000 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org> References: <20171106124527.07ad7844@fsol> <20171106125856.471c6163@fsol> <20171106143952.346e19d7@fsol> <994BFBA5-4A31-4FE7-A3C4-F7EF1B0CE3C2@python.org> Message-ID: On 9 November 2017 at 04:56, Barry Warsaw wrote: > On Nov 8, 2017, at 08:47, Guido van Rossum wrote: >> >> You seem to have missed Nick's posts where he clearly accepts that a middle ground is necessary. R D Murray is also still unconvinced. (And obviously I myself am against reverting to the behavior from 7 years ago.) If we can't agree on some middle ground, the status quo will be maintained. > > I haven't seen a response to my suggestion, so it's possible that it got missed in the flurry.
With coordination with setuptools, we could: > > * Re-enable DeprecationWarning by default > * Add a simplified API for specifically silencing DeprecationWarnings > * Modify setuptools to call this API for generated entry point scripts > > I think this would mean that most application users would still not see the warnings. The simplified API would be available for handcrafted scripts to call to accomplish the same thing the setuptools enhancement would provide. Developers would see DeprecationWarnings in their development and test environments. > > The simplified API would be the equivalent of ignore::DeprecationWarning, so with some additional documentation even versions of applications running on versions of Python < 3.7 would still have an 'out'. (Yes, the simplified API is just a convenience moving forward.) I did see that, but I think a "once::DeprecationWarning:__main__" filter provides a comparable benefit in a simpler way, as the recommended idiom to turn off deprecation warnings at runtime becomes:

    from elsewhere import main

    if __name__ == "__main__":
        import sys
        sys.exit(main(sys.argv))

That same idiom will then work for:

* entry point wrapper scripts
* __main__ submodules in executable packages
* __main__.py files in executable directories and zip archives

And passing "-Wd" will still correctly override the default filter set. It doesn't resolve the problem Nathaniel pointed out that "stacklevel" can be hard to set correctly when emitting a warning (especially at import time), but it also opens a new way of dealing with that: using warnings.warn_explicit to claim that the reporting module is "__main__" if you want to increase the chances of the warning being seen by default. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Wed Nov 8 23:16:53 2017 From: guido at python.org (Guido van Rossum) Date: Wed, 8 Nov 2017 20:16:53 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: On Wed, Nov 8, 2017 at 5:49 PM, Nick Coghlan wrote: > On 8 November 2017 at 16:24, Guido van Rossum wrote: > > I also don't like the idea that there's nothing you can do with a thunk > > besides calling it -- you can't meaningfully introspect it (not without > > building your own bytecode interpreter anyway). > > Wait, that wasn't what I was suggesting at all - with thunks exposing > their code object the same way a function does (i.e. as a `__code__` > attribute), the introspection functions in `dis` would still work on > them, so you'd be able to look at things like which variable names > they referenced, thus granting the caller complete control over *how* > they resolved those variable names (by setting them in the local > namespace passed to the call). > I understood that they would be translated to `lambda: <expression>`. It seems you have a slightly more complex idea but if you're suggesting introspection through dis, that's too complicated for my taste. This is why they'd have interesting potential future use cases as > general purpose callbacks - every local, nonlocal, global, and builtin > name reference would implicitly be an optional parameter (or a > required parameter if the name couldn't be resolved as a nonlocal, > global, or builtin). > Yeah, but that's scope creep for PEP 563.
Łukasz and I are interested in gradually restricting the use of annotations to static typing with an optional runtime component. We're not interested in adding different use cases. (We're committed to backwards compatibility, but only until 4.0, with a clear deprecation path.) > > Using an AST instead of a string is also undesirable -- the AST changes in > each release, and the usual strong compatibility guarantees don't apply > here. And how are you going to do anything with it? If you've got a string > and you want an AST node, it's one call away. But if you've got an AST node > and you want either a string *or* the object to which that string would > evaluate, you've got a lot of work to do. Plus the AST takes up a lot more > space than the string, and we don't have a way to put an AST in a bytecode > file. (And as Inada-san pointed out a thunk *also* takes up more space than > a string.) > > Nick, please don't try to save the thunk proposal by carefully dissecting > every one of my objections. That will just prolong its demise. > > Just the one objection, since you seem to be rejecting something I > didn't suggest (i.e. adding an opaque callable type that the dis and > inspect modules didn't understand). I agree that would be a bad idea, > but it's also not what I was suggesting we do. > I did not assume totally opaque -- but code objects are not very introspection friendly (and they have no strong compatibility guarantees). Instead, thunks would offer all the same introspection features as > lambda expressions do, they'd just differ in the following ways: > > * the parameter list on their code objects would always be empty > * the parameter list for their __call__ method would always be "ns=None" > * they'd be compiled without CO_OPTIMIZED (the same as a class namespace) > * they'd look up their closure references using LOAD_CLASSDEREF (the > same as a class namespace) > I don't understand the __call__ with "ns=None" thing but I don't expect it matters. > That said, even without a full-fledged thunk based solution to > handling lexical scoping I think there's a way to resolve the nested > class problem in PEP 563 that works for both explicitly and implicitly > quoted strings, while still leaving the door open to replacing > implicitly quoted strings with thunks at a later date: stating that > *if* users want such nested references to be resolvable at runtime, > they need to inject a runtime reference to the outermost class into > the inner class namespace. > > That is, if you want to take: > > class C: > field = 1 > class D: > def method(a: C.field): > ... > > and move it inside a function, that would actually look like: > > def f(): > class C: > field = 1 > class D: > def method(a: C.field): > ... > C.D.C = C # Make annotations work at runtime > return f > > That leaves the door open to a future PEP that proposes thunk-based > annotations as part of proposing thunks as a new low level delayed > evaluation primitive. > Sorry, that's not a door I'd like to leave open. -- --Guido van Rossum (python.org/~guido)
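For readers following along, here is roughly what the string-based behaviour under discussion looks like in practice (a sketch using the future-import spelling from the PEP draft; the spelling itself is debated just below):

    from __future__ import annotations  # draft PEP 563 spelling

    import typing

    class C:
        field: int = 1

    def method(a: C) -> int:
        return a.field

    print(method.__annotations__)
    # {'a': 'C', 'return': 'int'} -- stored as plain strings, never evaluated
    print(typing.get_type_hints(method))
    # {'a': <class '__main__.C'>, 'return': <class 'int'>} -- evaluated on demand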
From ncoghlan at gmail.com Thu Nov 9 02:57:39 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Nov 2017 17:57:39 +1000 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: TL;DR version: I'm now +1 on a string-based PEP 563, with one relatively small quibble regarding the future flag's name. Putting that quibble first: could we adjust the feature flag to be either "from __future__ import lazy_annotations" or "from __future__ import str_annotations"? Every time I see "from __future__ import annotations" I think "But we've had annotations since 3.0, why would they need a future import?". Adding the "lazy_" or "str_" prefix makes the feature flag self-documenting: it isn't the annotations support that's new, it's the fact the interpreter will avoid evaluating them at runtime by treating them as implicitly quoted strings at compile time. See inline comments for clarifications on what I was attempting to propose in relation to thunks, and more details on why I changed my mind :) On 9 November 2017 at 14:16, Guido van Rossum wrote: > On Wed, Nov 8, 2017 at 5:49 PM, Nick Coghlan wrote: >> >> On 8 November 2017 at 16:24, Guido van Rossum wrote: >> > I also don't like the idea that there's nothing you can do with a thunk >> > besides calling it -- you can't meaningfully introspect it (not without >> > building your own bytecode interpreter anyway). >> >> Wait, that wasn't what I was suggesting at all - with thunks exposing >> their code object the same way a function does (i.e. as a `__code__` >> attribute), the introspection functions in `dis` would still work on >> them, so you'd be able to look at things like which variable names >> they referenced, thus granting the caller complete control over *how* >> they resolved those variable names (by setting them in the local >> namespace passed to the call). > > I understood that they would be translated to `lambda: <expression>`. It seems you > have a slightly more complex idea but if you're suggesting introspection > through dis, that's too complicated for my taste. Substituting in a lambda expression wouldn't work for the reasons you gave when you objected to that idea (there wouldn't be any way for typing.get_type_hints() to inject "vars(cls)" when evaluating the annotations for method definitions, and enabling a cell-based alternative would be a really intrusive change). >> This is why they'd have interesting potential future use cases as >> general purpose callbacks - every local, nonlocal, global, and builtin >> name reference would implicitly be an optional parameter (or a >> required parameter if the name couldn't be resolved as a nonlocal, >> global, or builtin). > > Yeah, but that's scope creep for PEP 563. Łukasz and I are interested in > gradually restricting the use of annotations to static typing with an > optional runtime component. We're not interested in adding different use > cases. (We're committed to backwards compatibility, but only until 4.0, with > a clear deprecation path.) Sorry, that was ambiguous wording on my part: the "potential future use cases" there related to thunks in general, not their use for annotations in particular.
APIs like pandas.query are a more meaningful example of where thunks are potentially useful (and that's a problem I've been intermittently pondering since Fernando Perez explained it to me at SciPy a few years back - strings are an OK'ish workaround, but losing syntax highlighting, precompiled code object caching, and other benefits of real Python expressions means they *are* a workaround). >> Instead, thunks would offer all the same introspection features as >> lambda expressions do, they'd just differ in the following ways: >> >> * the parameter list on their code objects would always be empty >> * the parameter list for their __call__ method would always be "ns=None" >> * they'd be compiled without CO_OPTIMIZED (the same as a class namespace) >> * they'd look up their closure references using LOAD_CLASSDEREF (the >> same as a class namespace) > > I don't understand the __call__ with "ns=None" thing but I don't expect it > matters. It was an attempted shorthand for the way thunks could handle the method annotations use case in a way that regular lambda expressions can't: "thunk(vars(cls))" would be roughly equivalent to "exec(thunk.__code__, thunk.__globals__, vars(cls))", similar to the way class body evaluation works like "exec(body.__code__, body.__globals__, mcl.__prepare__())" That doesn't make a difference to your decision in relation to PEP 563, though. >> That leaves the door open to a future PEP that proposes thunk-based >> annotations as part of proposing thunks as a new low level delayed >> evaluation primitive. > > Sorry, that's not a door I'd like to leave open. At this point, I'd expect any successful PEP for the thunks idea to offer far more compelling use cases than type annotations - the key detail for me is that even if PEP 563 says "Lazy evaluation as strings means that type annotations do not support lexical closures", injecting attributes into class namespaces will still offer a way for devs to emulate closure references if they really want them. It's also the case that not supporting lexical closures would likely be *better* for potential thunk use cases like pandas.query, as in those kinds of use cases, the desired name resolution sequence is "table columns, module globals, builtins" - picking up random function locals as a closure reference and hence keeping them alive indefinitely isn't actually desirable. So I've reached a point where I'm happy that name resolution in simple string-based delayed evaluation in annotations can be mapped to existing name resolution concepts ("it's like a nested class body, just without lexical closure support, and with method definitions receiving their defining class namespace as a read-only locals()"), *and* that it won't prevent us from considering potential future enhancements to our syntactic support for ad hoc query APIs. Those are the two major things that were worrying me about the current draft, so with those concerns resolved, +1 from me. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Thu Nov 9 04:07:18 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 9 Nov 2017 09:07:18 +0000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: <18B6C5BB-BC56-4CA0-8E9C-17A4E9677942@python.org> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> <18B6C5BB-BC56-4CA0-8E9C-17A4E9677942@python.org> Message-ID: On 9 November 2017 at 02:17, Barry Warsaw wrote: > I suppose there are lots of ways to do this, but at least I?m pretty sure we all agree that end users shouldn?t see DeprecationWarnings, while developers should. Agreed. Most of the debate to me seems to be around who is an end user and who is a developer (and whether someone can be both at the same time). In my opinion, I am a developer of any code I write, but an end user of any code I get from others (whether that be a library or a full-blown application). However, the problem is that Python can't tell what code I wrote. Enabling warnings just for __main__ takes a conservative view of what counts as "code I wrote", while not being as conservative as the current approach (which is basically "assume I'm an end user unless I explicitly say I'm not"). My preference is to be conservative, so the proposed change is OK with me. Paul From greg.ewing at canterbury.ac.nz Thu Nov 9 02:13:56 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 09 Nov 2017 20:13:56 +1300 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: <5A040034.9090502@canterbury.ac.nz> Guido van Rossum wrote: > I did not assume totally opaque -- but code objects are not very > introspection friendly (and they have no strong compatibility guarantees). If I understand the proposal correctly, there wouldn't be any point in trying to introspect the lambdas/thunks/whatever. They're only there to provide a level of lazy evaluation. You would evaluate them and then introspect the returned data structure. -- Greg From greg.ewing at canterbury.ac.nz Thu Nov 9 01:49:15 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 09 Nov 2017 19:49:15 +1300 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> Message-ID: <5A03FA6B.5040206@canterbury.ac.nz> > On 8 November 2017 at 19:21, Antoine Pitrou wrote: >> The idea that __main__ scripts should >> get special treatment here is entirely gratuitous. When I'm writing an app in Python, very often my __main__ is just a stub that imports the actual functionality from another module to get the benefits of a pyc. So enabling deprecation warnings for __main__ only wouldn't result in me seeing any more warnings. 
-- Greg From storchaka at gmail.com Thu Nov 9 05:44:01 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 9 Nov 2017 12:44:01 +0200 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: References: Message-ID: 09.11.17 04:08, Raymond Hettinger writes: > >> On Nov 8, 2017, at 8:30 AM, Serhiy Storchaka wrote: >> >> Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all maintained versions ([1] and [2]). Despite their names they are private. I think that they are stable enough now and would be helpful in third-party code. Are there any objections against adding them to the stable C API? [3] > > I have mixed feelings about this. You and Victor seem to really like these macros, but they have been problematic for me. I'm not sure whether it is a conceptual issue or a naming issue, but the presence of these macros impairs my ability to read code and determine whether the refcounts are correct. I usually end up replacing the code with the unrolled macro so that I can count the refs across all the code paths. If the problem is with naming, what names do you prefer? This already was bikeshedded (I insisted on discussing names before introducing the macros), but maybe now you have better ideas? The current code contains 212 usages of Py_SETREF and Py_XSETREF. Maybe 10% of them correctly used temporary variables before introducing these macros, and these macros just made the code shorter. But in the rest of the cases the refcount was decremented before setting the new value. This did not always cause a problem, but it is too hard to prove that using such code is safe in every concrete case. Unrolling all these invocations will make the code larger and more cumbersome, and it is hard to do automatically. > The other issue is that when there are multiple occurrences of these macros for multiple variables, it interferes with my best practice of deferring all decrefs until the data structures are in a fully consistent state. Any one of these can cause arbitrary code to run. I greatly prefer putting all the decrefs at the end to increase my confidence that it is okay to run other code that might reenter the current code. Pure python functions effectively have this built-in because the locals all get decreffed at the end of the function when a return-statement is encountered. That practice helps me avoid hard-to-spot re-entrancy issues. I agree with you. If you need to set two or more attributes synchronously, Py_SETREF will not help you. This should be clearly explained in the documentation. Several subsequent Py_SETREFs may be an error. When I created my patches for using Py_SETREF I encountered several such cases and used different code for them. Maybe some code is still not completely correct, but in any case it is better now than before introducing Py_SETREF. But in many cases you need to set only one attribute, or the different attributes are not tightly related. > Lastly, I think we should have a preference to not grow the stable C API. Bigger APIs are harder to learn and remember, not so much for you and Victor who use these frequently, but for everyone else who has to look up all the macros whose function isn't immediately self-evident. I agree with you again. But these macros are pretty helpful. They make it easy to write safer code. And they are more used than any other C API addition in 3.5, 3.6, and 3.7.
3.7:

    Py_X?SETREF -- 216
    Py_UNREACHABLE -- 65
    Py_RETURN_RICHCOMPARE -- 15
    PyImport_GetModule -- 28
    PyTraceMalloc_(T|Unt)rack -- 9
    PyOS_(BeforeFork|AfterFork_(Parent|Child)) -- 24
    Py_tss_NEEDS_INIT -- 6
    PyInterpreterState_GetID -- 3

3.6:

    PySlice_Unpack -- 22
    PySlice_AdjustIndices -- 22
    PyErr_SetImportErrorSubclass -- 3
    PyErr_ResourceWarning -- 6
    PyOS_FSPath -- 9
    Py_FinalizeEx -- 21
    Py_MEMBER_SIZE -- 4

3.5:

    PyCodec_NameReplaceErrors -- 3
    PyErr_FormatV -- 6
    PyCoro_New -- 3
    PyCoro_CheckExact -- 14
    PyModuleDef_Init -- 30
    PyModule_FromDefAndSpec2 -- 10
    PyModule_ExecDef -- 3
    PyModule_SetDocString -- 4
    PyModule_AddFunctions -- 3
    PyNumber_MatrixMultiply -- 4
    PyNumber_InPlaceMatrixMultiply -- 4
    Py_DecodeLocale -- 25
    Py_EncodeLocale -- 11

Some older macros:

    Py_STRINGIFY -- 15
    Py_ABS -- 54
    Py_MIN -- 66
    Py_MAX -- 56

The above numbers include declarations and definitions, hence subtract 2 or 3 per name to get the number of actual usages. Only Py_UNUSED (added in 3.4) beats them (291), because it is used in generated Argument Clinic code. From erik.m.bray at gmail.com Thu Nov 9 05:55:12 2017 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 9 Nov 2017 11:55:12 +0100 Subject: [Python-Dev] Clarifying Cygwin support in CPython In-Reply-To: References: Message-ID: On Wed, Nov 8, 2017 at 5:28 PM, Zachary Ware wrote: > On Wed, Nov 8, 2017 at 8:39 AM, Erik Bray wrote: >> a platform--in particular it's not clear when a buildbot is considered >> "stable", or how to achieve that without getting necessary fixes >> merged into the main branch in the first place. > > I think in this context, "stable" just means "keeps a connection to > the buildbot master and doesn't blow up when told to build" :). As > such, I'm ready to get you added to the fleet whenever you are. "Doesn't blow up when told to build" is the tricky part, because there are a few tests that are known to cause the test suite process to hang until killed. It's not clear to me whether, even with the --timeout option, the test runner will kill hanging processes (I haven't actually tried this though so I'll double-check, but I'm pretty sure it does not). So until at least those issues are resolved I'd hesitate to call it "stable". Thanks, Erik From victor.stinner at gmail.com Thu Nov 9 06:06:12 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 9 Nov 2017 12:06:12 +0100 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: References: Message-ID: Recently, Oren Milman fixed multiple bugs when an __init__() method was called twice. IMHO Py_SETREF() was nicely used in __init__(): https://github.com/python/cpython/commit/e56ab746a965277ffcc4396d8a0902b6e072d049 https://github.com/python/cpython/commit/c0cabc23bbe474d542ff8a4f1243f4ec3cce5549 While it's possible to rewrite the code *correctly* without Py_SETREF(), it would be much more verbose. Here the fix remains a single line:

    - self->archive = filename;
    + Py_XSETREF(self->archive, filename);

Victor 2017-11-09 3:08 GMT+01:00 Raymond Hettinger : > >> On Nov 8, 2017, at 8:30 AM, Serhiy Storchaka wrote: >> >> Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all maintained versions ([1] and [2]). Despite their names they are private. I think that they are stable enough now and would be helpful in third-party code. Are there any objections against adding them to the stable C API? [3] > > I have mixed feelings about this. You and Victor seem to really like these macros, but they have been problematic for me.
> I'm not sure whether it is a conceptual issue or a naming issue, but the presence of these macros impairs my ability to read code and determine whether the refcounts are correct. I usually end up replacing the code with the unrolled macro so that I can count the refs across all the code paths. > > The other issue is that when there are multiple occurrences of these macros for multiple variables, it interferes with my best practice of deferring all decrefs until the data structures are in a fully consistent state. Any one of these can cause arbitrary code to run. I greatly prefer putting all the decrefs at the end to increase my confidence that it is okay to run other code that might reenter the current code. Pure python functions effectively have this built-in because the locals all get decreffed at the end of the function when a return-statement is encountered. That practice helps me avoid hard-to-spot re-entrancy issues. > > Lastly, I think we should have a preference to not grow the stable C API. Bigger APIs are harder to learn and remember, not so much for you and Victor who use these frequently, but for everyone else who has to look up all the macros whose function isn't immediately self-evident. > > Raymond From victor.stinner at gmail.com Thu Nov 9 06:26:28 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 9 Nov 2017 12:26:28 +0100 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: References: Message-ID: 2017-11-09 3:08 GMT+01:00 Raymond Hettinger : > I greatly prefer putting all the decrefs at the end to increase my confidence that it is okay to run other code that might reenter the current code. There are 3 patterns to update C attributes of an object:

(1)

    Py_XDECREF(obj->attr);  // can call Python code
    obj->attr = new_value;

or (2)

    old_value = obj->attr;
    obj->attr = new_value;
    Py_XDECREF(old_value);  // can call Python code

or (3)

    old_value = obj->attr;
    obj->attr = new_value;
    ...  // The assumption here is that nothing here
    ...  // can call arbitrary Python code
    // Finally, after setting all other attributes
    Py_XDECREF(old_value);  // can call Python code

Pattern (1) is likely to be vulnerable to reentrancy issues: Py_XDECREF() can call arbitrary Python code indirectly via the garbage collector, while the object being modified contains a *borrowed* reference instead of a *strong* reference, or can even refer to an object which was just destroyed. Pattern (2) is better: the object always keeps a strong reference, *but* the modified attribute can be inconsistent with other attributes. At least, you prevent hard crashes. Pattern (3) is likely the most correct way to write C code to implement a Python object... but it's harder to write such code correctly :-( You have to be careful to not leak a reference. If I understood correctly, the purpose of the Py_SETREF() macro is not to replace (3) with (2), but to fix all incorrect code written as (1). If I recall correctly, Serhiy modified a *lot* of code written as (1) when he implemented Py_SETREF(). > Pure python functions effectively have this built-in because the locals all get decreffed at the end of the function when a return-statement is encountered. That practice helps me avoid hard-to-spot re-entrancy issues.
Except if you use a lock, all Python methods are written as (2): a different thread or a signal handler is likely to see the object as inconsistent when accessed between two instructions modifying the object's attributes. Example:

    def __init__(self, value):
        self.value = value
        self.double = value * 2

    def increment(self):
        self.value += 1
        # object inconsistent here
        self.double += 2

The increment() method is not atomic: if the object is accessed at "# object inconsistent here", the object is seen in an inconsistent state. Victor From raymond.hettinger at gmail.com Thu Nov 9 07:22:20 2017 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 9 Nov 2017 04:22:20 -0800 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: References: Message-ID: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> > On Nov 9, 2017, at 2:44 AM, Serhiy Storchaka wrote: > > If the problem is with naming, what names do you prefer? This already was bikeshedded (I insisted on discussing names before introducing the macros), but maybe now you have better ideas? It didn't really seem like a bad idea until after you swept through the code with 200+ applications of the macro and I saw how unclear the results were. Even code that I wrote myself is now harder for me to grok (for example, the macro was applied 17 times to already correct code in itertools). We used to employ a somewhat plain coding style that was easy to walk through, but the following examples seem opaque.
Raymond From solipsis at pitrou.net Thu Nov 9 07:35:07 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 9 Nov 2017 13:35:07 +0100 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API References: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> Message-ID: <20171109133507.15c0dd88@fsol> On Thu, 9 Nov 2017 04:22:20 -0800 Raymond Hettinger wrote: > > Probably, we're the wrong people to be talking about this. The proposal is to make these macros part of the official API so that it starts to appear in source code everywhere. The question isn't whether the above makes sense to you and me; instead, it is whether other people can make heads or tails out the above examples. Generally I would advocate anyone wanting to write a third-party C extension, but not very familiar with the C API and its quirks, use Cython instead. I'm not sure if that's an argument for the SETREF APIs to remain private or to become public :-) Regards Antoine. From victor.stinner at gmail.com Thu Nov 9 07:35:38 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 9 Nov 2017 13:35:38 +0100 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> References: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> Message-ID: Hum, to give more context to the discussion, the two discussed macros are documented this way: #ifndef Py_LIMITED_API /* Safely decref `op` and set `op` to `op2`. * * As in case of Py_CLEAR "the obvious" code can be deadly: * * Py_DECREF(op); * op = op2; * * The safe way is: * * Py_SETREF(op, op2); * * That arranges to set `op` to `op2` _before_ decref'ing, so that any code * triggered as a side-effect of `op` getting torn down no longer believes * `op` points to a valid object. * * Py_XSETREF is a variant of Py_SETREF that uses Py_XDECREF instead of * Py_DECREF. */ #define Py_SETREF(op, op2) \ do { \ PyObject *_py_tmp = (PyObject *)(op); \ (op) = (op2); \ Py_DECREF(_py_tmp); \ } while (0) #define Py_XSETREF(op, op2) \ do { \ PyObject *_py_tmp = (PyObject *)(op); \ (op) = (op2); \ Py_XDECREF(_py_tmp); \ } while (0) #endif /* ifndef Py_LIMITED_API */ Victor 2017-11-09 13:22 GMT+01:00 Raymond Hettinger : > >> On Nov 9, 2017, at 2:44 AM, Serhiy Storchaka wrote: >> >> If the problem is with naming, what names do you prefer? This already was bikeshedded (I insisted on discussing names before introducing the macros), but may now you have better ideas? > > It didn't really seem like a bad idea until after you swept through the code with 200+ applications of the macro and I saw how unclear the results were. Even code that I wrote myself is now harder for me to grok (for example, the macro was applied 17 times to already correct code in itertools). > > We used to employ a somewhat plain coding style that was easy to walk through, but the following examples seem opaque. 
I find it takes practice to look at any one of these and say that it is unequivocally correct (were the function error return arguments handled correctly, are the typecasts proper, at what point can a reentrant call occur, which is the source operand and which is the destination, is the macro using either of the operands twice, is the destination operand an allowable lvalue, do I need to decref the source operand afterwards, etc): > > Py_SETREF(((PyHeapTypeObject*)type)->ht_name, value) > Py_SETREF(newconst, PyFrozenSet_New(newconst)); > Py_XSETREF(c->u->u_private, s->v.ClassDef.name); > Py_SETREF(*p, t); > Py_XSETREF(self->lineno, PyTuple_GET_ITEM(info, 1)); > Py_SETREF(entry->path, PyUnicode_EncodeFSDefault(entry->path)); > Py_XSETREF(self->checker, PyObject_GetAttrString(ob, "_check_retval_")); > Py_XSETREF(fut->fut_source_tb, _PyObject_CallNoArg(traceback_extract_stack)); > > Stylistically, all of these seem awkward and I think there is more to it than just the name. I'm not sure it is wise to pass complex inputs into a two-argument macro that makes an assignment and has a conditional refcount side-effect. Even now, one of the above looks to me like it might not be correct. > > Probably, we're the wrong people to be talking about this. The proposal is to make these macros part of the official API so that it starts to appear in source code everywhere. The question isn't whether the above makes sense to you and me; instead, it is whether other people can make heads or tails out the above examples. As a result of making the macros official, will the Python world have a net increase in complexity or decrease in complexity? > > My personal experience with the macros hasn't been positive. Perhaps everyone else thinks it's fine. If so, I won't stand in your way. > > > Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From storchaka at gmail.com Thu Nov 9 08:24:48 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 9 Nov 2017 15:24:48 +0200 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> References: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> Message-ID: 09.11.17 14:22, Raymond Hettinger ????: > Stylistically, all of these seem awkward and I think there is more to it than just the name. I'm not sure it is wise to pass complex inputs into a two-argument macro that makes an assignment and has a conditional refcount side-effect. Even now, one of the above looks to me like it might not be correct. If you have found an incorrect code, please open an issue and provide a patch. But recently you have rewrote the correct code (Py_SETREF was not involved) in more complicated way [1] and have rejected my patch that gets rid of the duplication of this complicated code [2]. Please don't "fix" the code that is not broken. [1] https://bugs.python.org/issue26491 [2] https://bugs.python.org/issue31585 > Probably, we're the wrong people to be talking about this. The proposal is to make these macros part of the official API so that it starts to appear in source code everywhere. The question isn't whether the above makes sense to you and me; instead, it is whether other people can make heads or tails out the above examples. 
As a result of making the macros official, will the Python world have a net increase in complexity or decrease in complexity? I afraid that these macros will be used in any case, even when they are not the part of an official C API, because they are handy. The main purpose of documenting them officially is documenting in what cases these macros are appropriate and make the code more reliable, and in what cases they are not enough and a more complex code should be used. This would be a lesson about correct replacing references. I didn't write this in the source comment because it was purposed for experienced Python core developers, and all usages under our control and passes a peer review. From tseaver at palladion.com Thu Nov 9 10:27:45 2017 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 9 Nov 2017 10:27:45 -0500 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: <5A03FA6B.5040206@canterbury.ac.nz> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <5A03FA6B.5040206@canterbury.ac.nz> Message-ID: On 11/09/2017 01:49 AM, Greg Ewing wrote: >> On 8 November 2017 at 19:21, Antoine Pitrou wrote: >>> The idea that __main__ scripts should >>> get special treatment here is entirely gratuitous. > > When I'm writing an app in Python, very often my __main__ is > just a stub that imports the actual functionality from another > module to get the benefits of a pyc. So enabling deprecation > warnings for __main__ only wouldn't result in me seeing any > more warnings. IIUC, that would be as expected: you would see the warnings when running your test suite exercising that imported code (which should run with all warnings enabled), but not when running the app. Seems like a reasonable choice to me. Tres. -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From barry at python.org Thu Nov 9 10:45:52 2017 From: barry at python.org (Barry Warsaw) Date: Thu, 9 Nov 2017 07:45:52 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <5A03FA6B.5040206@canterbury.ac.nz> Message-ID: <9EB1623B-A69F-4C6E-BE8A-67347D314DDC@python.org> On Nov 9, 2017, at 07:27, Tres Seaver wrote: > IIUC, that would be as expected: you would see the warnings when running > your test suite exercising that imported code (which should run with all > warnings enabled), but not when running the app. > > Seems like a reasonable choice to me. I?m coming around to that view too. FWIW, I definitely do want to see the DeprecationWarnings in libraries I use, even if I didn?t write them. That let?s me help that package?s author identify them, maybe even provide a fix, and let?s me evaluate whether maybe some other library is better suited to my needs. It probably does strike the right balance to see that in my own test suite only. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From wes.turner at gmail.com Thu Nov 9 13:50:12 2017 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 9 Nov 2017 13:50:12 -0500 Subject: [Python-Dev] OrderedDict(kwargs) optimization? In-Reply-To: References: Message-ID: Got it. Thanks! On Wednesday, November 8, 2017, INADA Naoki wrote: > > That'd be great for preserving kwargs' order after a pop() or a del? > > To clarify, order is preserved after pop in Python 3.6 (and maybe 3.7). > > There is discussion about breaking it to optimize for limited use cases, > but I don't think it's worth enough to discuss more until it demonstrates > real performance gain. > > > > Is there an opportunity to support a fast cast to OrderedDict from 3.6 > dict? > > Can it just copy .keys() into the OrderedDict linked list?Or is there > more overhead to the transition? > > https://bugs.python.org/issue31265 > > Regards, > > INADA Naoki > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Nov 9 14:39:20 2017 From: barry at python.org (Barry Warsaw) Date: Thu, 9 Nov 2017 11:39:20 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> Message-ID: <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> On Nov 8, 2017, at 23:57, Nick Coghlan wrote: > Putting that quibble first: could we adjust the feature flag to be > either "from __future__ import lazy_annotations" or "from __future__ > import str_annotations"? > > Every time I see "from __future__ import annotations" I think "But > we've had annotations since 3.0, why would they need a future > import?". +1 for lazy_annotations for the same reason. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From guido at python.org Thu Nov 9 14:51:54 2017 From: guido at python.org (Guido van Rossum) Date: Thu, 9 Nov 2017 11:51:54 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID: If we have to change the name I'd vote for string_annotations -- "lazy" has too many other connotations (e.g. it might cause people to think it's the thunks). I find str_annotations too abbreviated, and stringify_annotations is too hard to spell. On Thu, Nov 9, 2017 at 11:39 AM, Barry Warsaw wrote: > On Nov 8, 2017, at 23:57, Nick Coghlan wrote: > > > Putting that quibble first: could we adjust the feature flag to be > > either "from __future__ import lazy_annotations" or "from __future__ > > import str_annotations"? > > > > Every time I see "from __future__ import annotations" I think "But > > we've had annotations since 3.0, why would they need a future > > import?". > > +1 for lazy_annotations for the same reason. 
> > -Barry > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From cs at cskk.id.au Thu Nov 9 16:46:23 2017 From: cs at cskk.id.au (Cameron Simpson) Date: Fri, 10 Nov 2017 08:46:23 +1100 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171108102857.649651c9@fsol> References: <20171108102857.649651c9@fsol> Message-ID: <20171109214623.GA13810@cskk.homeip.net> On 08Nov2017 10:28, Antoine Pitrou wrote: >On Wed, 8 Nov 2017 13:07:12 +1000 >Nick Coghlan wrote: >> On 8 November 2017 at 07:19, Evpok Padding wrote: >> > On 7 November 2017 at 21:47, Chris Barker wrote: >> >> if dict order is preserved in cPython , people WILL count on it! >> > >> > I won't, and if people do and their code break, they'll have only themselves >> > to blame. >> > Also, what proof do you have of that besides anecdotal evidence?? >> >> ~27 calendar years of anecdotal evidence across a multitude of CPython >> API behaviours (as well as API usage in other projects). >> >> Other implementation developers don't say "CPython's runtime behaviour >> is the real Python specification" for the fun of it - they say it >> because "my code works on CPython, but it does the wrong thing on your >> interpreter, so I'm going to stick with CPython" is a real barrier to >> end user adoption, no matter what the language specification says. > >Yet, PyPy has no reference counting, and it doesn't seem to be a cause >of concern. Broken code is fixed along the way, when people notice. I'd expect that this may be because that would merely to cause temporary memory leakage or differently timed running of __del__ actions. Neither of which normally affects semantics critical to the end result of most programs. However, code which relies on an ordering effect which works in the usual case but (often subtly) breaks in some unusual case can be hard to debug, because (a) recognising the salient error situation may be hard to do and (b) reasoning about the failure is difficult when the language semantics are not what you thought they were. I think the two situations are not as parallel as you think. Cheers, Cameron Simpson (formerly cs at zip.com.au) From njs at pobox.com Thu Nov 9 18:54:12 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 9 Nov 2017 15:54:12 -0800 Subject: [Python-Dev] The current dict is not an "OrderedDict" In-Reply-To: <20171109214623.GA13810@cskk.homeip.net> References: <20171108102857.649651c9@fsol> <20171109214623.GA13810@cskk.homeip.net> Message-ID: On Thu, Nov 9, 2017 at 1:46 PM, Cameron Simpson wrote: > On 08Nov2017 10:28, Antoine Pitrou wrote: >> >> On Wed, 8 Nov 2017 13:07:12 +1000 >> Nick Coghlan wrote: >>> >>> On 8 November 2017 at 07:19, Evpok Padding >>> wrote: >>> > On 7 November 2017 at 21:47, Chris Barker >>> > wrote: >>> >> if dict order is preserved in cPython , people WILL count on it! >>> > >>> > I won't, and if people do and their code break, they'll have only >>> > themselves >>> > to blame. >>> > Also, what proof do you have of that besides anecdotal evidence?? >>> >>> ~27 calendar years of anecdotal evidence across a multitude of CPython >>> API behaviours (as well as API usage in other projects). 
>>>
>>> Other implementation developers don't say "CPython's runtime behaviour
>>> is the real Python specification" for the fun of it - they say it
>>> because "my code works on CPython, but it does the wrong thing on your
>>> interpreter, so I'm going to stick with CPython" is a real barrier to
>>> end user adoption, no matter what the language specification says.
>>
>> Yet, PyPy has no reference counting, and it doesn't seem to be a cause
>> of concern. Broken code is fixed along the way, when people notice.
>
> I'd expect that this may be because that would merely cause temporary
> memory leakage or differently timed running of __del__ actions. Neither of
> which normally affects semantics critical to the end result of most
> programs.

It's actually a major problem when porting apps to PyPy. The common case is
servers that crash because they rely on the GC to close file descriptors,
and then run out of file descriptors. IIRC this is the major obstacle to
supporting OpenStack-on-PyPy. NumPy is currently going through the process
to deprecate and replace a core bit of API [1] because it turns out to
assume a refcounting GC.

-n

[1] See:
https://github.com/numpy/numpy/pull/9639
https://mail.python.org/pipermail/numpy-discussion/2017-November/077367.html

-- 
Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Thu Nov 9 19:46:40 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 10 Nov 2017 10:46:40 +1000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: <9EB1623B-A69F-4C6E-BE8A-67347D314DDC@python.org>
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <5A03FA6B.5040206@canterbury.ac.nz> <9EB1623B-A69F-4C6E-BE8A-67347D314DDC@python.org>
Message-ID:

On 10 November 2017 at 01:45, Barry Warsaw wrote:
> On Nov 9, 2017, at 07:27, Tres Seaver wrote:
>
>> IIUC, that would be as expected: you would see the warnings when running
>> your test suite exercising that imported code (which should run with all
>> warnings enabled), but not when running the app.
>>
>> Seems like a reasonable choice to me.
>
> I'm coming around to that view too. FWIW, I definitely do want to see the
> DeprecationWarnings in libraries I use, even if I didn't write them. That
> lets me help that package's author identify them, maybe even provide a
> fix, and lets me evaluate whether some other library is better suited to
> my needs. It probably does strike the right balance to see that in my own
> test suite only.

Right, this was my reasoning as well: if someone has gone to the trouble
of factoring their code out into a support library, it's reasonable to
expect that they'll also write at least a rudimentary test suite for that
code.

(The one case where that argument falls down is when they only have an
integration test suite, and hence run their application in a subprocess,
rather than directly in the test runner. However, that's a question for
test frameworks to consider: the case can be made that test runners should
be setting PYTHONWARNINGS in addition to setting the warning filter in the
current process)

By contrast, I have quite a bit of __main__-only code, and I routinely use
the REPL to check the validity of snippets of code that I plan to use (or
advise someone else to use). Those are the cases where the status quo
sometimes trips me up, because I forget that I'm *not* getting deprecation
warnings.

Cheers,
Nick.
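P.S. For anyone that wants something along the lines of the proposed
behaviour in current releases, a rough equivalent of the filter change
being discussed in https://bugs.python.org/issue31975 can be set up by
hand (treat the exact spelling below as a sketch, not the final design):

    import warnings
    # Report DeprecationWarning when the triggering code runs directly
    # in __main__, while leaving the default "ignore" in place for
    # warnings coming from inside imported modules.
    warnings.filterwarnings("default", category=DeprecationWarning,
                            module="__main__")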
-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From njs at pobox.com Thu Nov 9 20:32:14 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 9 Nov 2017 17:32:14 -0800
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To:
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org>
Message-ID:

On Nov 8, 2017 16:12, "Nick Coghlan" wrote:

On 9 November 2017 at 07:46, Antoine Pitrou wrote:
>
> On 08/11/2017 at 22:43, Nick Coghlan wrote:
>>
>> However, between them, the following two guidelines should provide
>> pretty good deprecation warning coverage for the world's Python code:
>>
>> 1. If it's in __main__, it will emit deprecation warnings at runtime
>> 2. If it's not in __main__, it should have a test suite
>
> Nick, have you actually read the discussion and the complaints people
> had with the current situation? Most of them *don't* specifically talk
> about __main__ scripts.

I have, and I've also re-read the discussions regarding why the default
got changed in the first place.

Behaviour up until 2.6 & 3.1:

    once::DeprecationWarning

Behaviour since 2.7 & 3.2:

    ignore::DeprecationWarning

With test runners overriding the default filters to set it back to
"once::DeprecationWarning".


Is this intended to be a description of the current state of affairs?
Because I've never encountered a test runner that does this... Which
runners are you thinking of?


The rationale for that change was so that end users of applications that
merely happened to be written in Python wouldn't see deprecation warnings
when Linux distros (or the end user) updated to a new Python version. It
had the downside that you had to remember to opt in to deprecation
warnings in order to see them, which is a problem if you mostly use Python
for ad hoc personal scripting.

Proposed behaviour for Python 3.7+:

    once::DeprecationWarning:__main__
    ignore::DeprecationWarning

With test runners still overriding the default filters to set them back to
"once::DeprecationWarning".

This is a partial reversion back to the pre-2.7 behaviour, focused
specifically on interactive use and ad hoc personal scripting. For ad hoc
*distributed* scripting, the changed default encourages upgrading from
single-file scripts to the zipapp model, and then minimising the amount of
code that runs directly in __main__.py.

I expect this will be a sufficient change to solve the specific problem
I'm personally concerned by, so I'm no longer inclined to argue for
anything more complicated. Other folks may have other concerns that this
tweak to the default filters doesn't address - they can continue to build
their case for more complex options using this as the new baseline
behaviour.


I think most people's concern is that we've gotten into a state where
DeprecationWarnings are largely useless in practice, because no one sees
them. Effectively the norm now is that developers (both the Python core
team and downstream libraries) think they're following some sensible
deprecation cycle, but often they're actually making changes without any
warning -- they just wait a year to do it. It's not clear why we bother
stretching deprecations across multiple releases -- which adds major
overhead -- if in practice we aren't going to actually warn most people.
Enabling them for another 1% of code doesn't really address this.
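Concretely, the pattern library authors think they're relying on looks
something like this (a minimal sketch, with made-up names):

    import warnings

    def new_api():
        return 42

    def old_api():
        # stacklevel=2 attributes the warning to the caller's code,
        # not to the module issuing it.
        warnings.warn("old_api() is deprecated; use new_api() instead",
                      DeprecationWarning, stacklevel=2)
        return new_api()

    old_api()  # prints nothing under the default "ignore" filter

Unless you run with something like -Wd, or a test runner resets the
filters for you, that message is silently dropped.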
As I mentioned above, it's also having the paradoxical effect of making it so that end-users are *more* likely to see deprecation warnings, since major libraries are giving up on using DeprecationWarning. Most recently it looks like pyca/cryptography is going to switch, partly as a result of this thread: https://github.com/pyca/cryptography/pull/4014 Some more ideas to throw out there: - if an envvar CI=true is set, then by default make deprecation warnings into errors. (This is an informal standard that lots of CI systems use. Error instead of "once" because most people don't look at CI output at all unless there's an error.) - provide some mechanism that makes it easy to have a deprecation warning that starts out as invisible, but then becomes visible as you get closer to the switchover point. (E.g. CPython might make the deprecation warnings that it issues be invisible in 3.x.0 and 3.x.1 but become visible in 3.x.2+.) Maybe: # in warnings.py def deprecation_warning(library_version, visible_in_version, change_in_version, msg, stacklevel): ... Then a call like: deprecation_warning(my_library.__version__, "1.3", "1.4", "This function is deprecated", 2) issues an InvisibleDeprecationWarning if my_library.__version__ < 1.3, and a VisibleDeprecationWarning otherwise. (The stacklevel argument is mandatory because the usual default of 1 is always wrong for deprecation warnings.) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 9 20:53:40 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Nov 2017 11:53:40 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On 10 November 2017 at 11:32, Nathaniel Smith wrote: > Is this intended to be a description of the current state of affairs? > Because I've never encountered a test runner that does this... Which runners > are you thinking of? Ah, you're right, pytest currently still requires individual developers to opt-in, rather than switching the defaults: https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings That's not the intention - we expect test runners to switch the defaults the same way unittest does: https://docs.python.org/3/library/unittest.html#unittest.TextTestRunner So that's likely part of the problem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Nov 9 20:58:45 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Nov 2017 11:58:45 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On 10 November 2017 at 11:53, Nick Coghlan wrote: > On 10 November 2017 at 11:32, Nathaniel Smith wrote: >> Is this intended to be a description of the current state of affairs? >> Because I've never encountered a test runner that does this... Which runners >> are you thinking of? 
> > Ah, you're right, pytest currently still requires individual > developers to opt-in, rather than switching the defaults: > https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings > > That's not the intention - we expect test runners to switch the > defaults the same way unittest does: > https://docs.python.org/3/library/unittest.html#unittest.TextTestRunner Issue filed for pytest here: https://github.com/pytest-dev/pytest/issues/2908 I haven't checked nose2's behaviour. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Nov 9 20:38:41 2017 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Fri, 10 Nov 2017 10:38:41 +0900 Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default In-Reply-To: <5A01C172.2010107@stoneleaf.us> References: <5A01C172.2010107@stoneleaf.us> Message-ID: <23045.801.961486.339287@turnbull.sk.tsukuba.ac.jp> Ethan Furman writes: > Suffering from DeprecationWarnings is not "being hosed". Having > your script/application/framework suddenly stop working because > nobody noticed something was being deprecated is "being hosed". OK, so suffering from DeprecationWarnings is not "being hosed". Nevertheless, it's a far greater waste of my time (supervising students in business and economics with ~50% annual turnover) than is "suddenly stop working", even though it only takes 1-5 minutes each time to explain how to do whatever seems appropriate. "Suddenly stopped working", in fact, hasn't happened to me yet in that environment. It's not hard to understand why: the student downloads Python, and doesn't upgrade within the life cycle of the software they've written. It becomes obsolete upon graduation, and is archived, never to be used again. I don't know how common this kind of environment is, so I can't say it's terribly important, but AIUI Python should be pleasant to use in this context. Unfortunately I have no time to contribute code or even useful ideas to the end of making it more likely that Those Who Can Do Something (a) see the DeprecationWarning and (b) are made sufficiently itchy that they actually scratch, and that Those Who Cannot Do Anything, or are limited to suggesting that something be done, not see it. So I'll shut up now, having contributed this user story. Steve -- Associate Professor Division of Policy and Planning Science http://turnbull/sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From ncoghlan at gmail.com Thu Nov 9 21:09:12 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Nov 2017 12:09:12 +1000 Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API In-Reply-To: <20171109133507.15c0dd88@fsol> References: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> <20171109133507.15c0dd88@fsol> Message-ID: On 9 November 2017 at 22:35, Antoine Pitrou wrote: > On Thu, 9 Nov 2017 04:22:20 -0800 > Raymond Hettinger wrote: >> >> Probably, we're the wrong people to be talking about this. The proposal is to make these macros part of the official API so that it starts to appear in source code everywhere. The question isn't whether the above makes sense to you and me; instead, it is whether other people can make heads or tails out the above examples. 
>
> Generally I would advocate that anyone wanting to write a third-party C
> extension, but not very familiar with the C API and its quirks, use
> Cython instead. I'm not sure if that's an argument for the SETREF APIs
> to remain private or to become public :-)

I'm with Antoine on this - we should be pushing folks writing
extension modules towards code generators like Cython, cffi, SWIG, and
SIP, support libraries like Boost::Python, or safer languages like
Rust (which can then be wrapped with cffi), rather than encouraging
more bespoke C/C++ extension modules with handcrafted refcount
management.

There's a reason the only parts of
https://packaging.python.org/guides/packaging-binary-extensions/ that
have actually been filled in are the ones explaining how to use a tool
to write the extension module for you :)

For me, that translates to being -1 on making these part of the public
API - for code outside CPython, our message should consistently be "Do
not roll your own refcount management, get a code generator or library
to handle it for you".

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Thu Nov 9 21:11:53 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 10 Nov 2017 12:11:53 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

On 10 November 2017 at 05:51, Guido van Rossum wrote:
> If we have to change the name I'd vote for string_annotations -- "lazy" has
> too many other connotations (e.g. it might cause people to think it's the
> thunks). I find str_annotations too abbreviated, and stringify_annotations
> is too hard to spell.

Aye, I'd be fine with "from __future__ import string_annotations" -
that's even more explicitly self-documenting than either of my
suggestions.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Thu Nov 9 22:30:51 2017
From: guido at python.org (Guido van Rossum)
Date: Thu, 9 Nov 2017 19:30:51 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

So... Łukasz?

On Thu, Nov 9, 2017 at 6:11 PM, Nick Coghlan wrote:
> On 10 November 2017 at 05:51, Guido van Rossum wrote:
> > If we have to change the name I'd vote for string_annotations -- "lazy"
> > has too many other connotations (e.g. it might cause people to think
> > it's the thunks). I find str_annotations too abbreviated, and
> > stringify_annotations is too hard to spell.
>
> Aye, I'd be fine with "from __future__ import string_annotations" -
> that's even more explicitly self-documenting than either of my
> suggestions.

-- 
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From greg.ewing at canterbury.ac.nz Thu Nov 9 23:34:10 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 10 Nov 2017 17:34:10 +1300 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <5A03FA6B.5040206@canterbury.ac.nz> Message-ID: <5A052C42.1090200@canterbury.ac.nz> Tres Seaver wrote: > IIUC, that would be as expected: you would see the warnings when running > your test suite exercising that imported code (which should run with all > warnings enabled), but not when running the app. But then what benefit is there in turning on deprecation warnings automatically for __main__? -- Greg From ncoghlan at gmail.com Thu Nov 9 23:49:13 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Nov 2017 14:49:13 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: <5A052C42.1090200@canterbury.ac.nz> References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <5A03FA6B.5040206@canterbury.ac.nz> <5A052C42.1090200@canterbury.ac.nz> Message-ID: On 10 November 2017 at 14:34, Greg Ewing wrote: > Tres Seaver wrote: >> >> IIUC, that would be as expected: you would see the warnings when running >> your test suite exercising that imported code (which should run with all >> warnings enabled), but not when running the app. > > But then what benefit is there in turning on deprecation > warnings automatically for __main__? Not all code has test suites, most notably: - code entered at the REPL - personal automation scripts - single file Python scripts (as opposed to structured applications) The tests for these are generally either "Did it do what I wanted?" or else a dry-run mode where it prints out what it *would* have done in normal operation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tjreedy at udel.edu Fri Nov 10 01:29:30 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 10 Nov 2017 01:29:30 -0500 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID: On 11/9/2017 9:11 PM, Nick Coghlan wrote: > On 10 November 2017 at 05:51, Guido van Rossum wrote: >> If we have to change the name I'd vote for string_annotations -- "lazy" has >> too many other connotations (e.g. it might cause people to think it's the >> thunks). I find str_annotations too abbreviated, and stringify_annotations >> is too hard to spell. > > Aye, I'd be fine with "from __future__ import string_annotations" - > that's even more explicitly self-documenting than either of my > suggestions. I think this is the best proposed so far. 
-- 
Terry Jan Reedy

From victor.stinner at gmail.com Fri Nov 10 01:42:36 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 10 Nov 2017 07:42:36 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

I didn't follow the discussion on the PEP, but I was surprised to read
"from __future__ import annotations" in an example. Annotations have
existed since Python 3.0, so why would Python 3.7 require a future import
for them? Well, I was aware of the PEP, but I was confused anyway.

I really prefer "from __future__ import string_annotations"!

Victor

On 10 Nov 2017 at 03:14, "Nick Coghlan" wrote:
> On 10 November 2017 at 05:51, Guido van Rossum wrote:
> > If we have to change the name I'd vote for string_annotations -- "lazy"
> > has too many other connotations (e.g. it might cause people to think
> > it's the thunks). I find str_annotations too abbreviated, and
> > stringify_annotations is too hard to spell.
>
> Aye, I'd be fine with "from __future__ import string_annotations" -
> that's even more explicitly self-documenting than either of my
> suggestions.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Fri Nov 10 02:36:25 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 10 Nov 2017 17:36:25 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

On 10 November 2017 at 16:42, Victor Stinner wrote:
> I didn't follow the discussion on the PEP, but I was surprised to read
> "from __future__ import annotations" in an example. Annotations have
> existed since Python 3.0, so why would Python 3.7 require a future import
> for them? Well, I was aware of the PEP, but I was confused anyway.
>
> I really prefer "from __future__ import string_annotations"!

At risk of complicating matters, I now see that this could be read as
"annotations on strings", just as variable annotations are annotations on
variable names, and function annotations are annotations on functions.

If we decide we care about that possible misreading, then an alternative
would be to swap the word order and use "from __future__ import
annotation_strings".

Cheers,
Nick.

P.S. I don't think this really matters either way, it just struck me that
the reversed order might be marginally clearer, so it seemed worthwhile to
mention it.
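P.P.S. For anyone skimming the thread: the runtime behaviour being named
here can already be emulated today by quoting the annotations by hand,
which may help make the discussion concrete. A sketch:

    from typing import get_type_hints

    # With the future import, unquoted annotations would be stored the
    # same way these explicitly quoted ones are: as plain strings.
    def f(x: "int") -> "str":
        return str(x)

    print(f.__annotations__)  # {'x': 'int', 'return': 'str'}
    print(get_type_hints(f))  # {'x': <class 'int'>, 'return': <class 'str'>}

get_type_hints() is what evaluates the strings back to actual objects on
demand.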
-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From lukasz at langa.pl Fri Nov 10 04:20:24 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Fri, 10 Nov 2017 01:20:24 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

Alright, we're on bikeshed territory now. Finally! :-)

I was always thinking about this as "static annotations". The fact they're
strings at runtime is irrelevant for most people who will use this future.
They don't want string annotations, they want them to not be evaluated at
import time... they want them to be static. Also, "static typing" et al. I
think it has a nice vibe to it.

I admit "annotations" is too broad but "static_annotations" (or
"string_annotations" ¯\_(ツ)_/¯) will be the longest __future__ name so
far. That was my main motivation behind using the shorter name. And a bit
of megalomania I guess.

- Ł

> On 9 Nov, 2017, at 7:30 PM, Guido van Rossum wrote:
>
> So... Łukasz?
>
> On Thu, Nov 9, 2017 at 6:11 PM, Nick Coghlan wrote:
> > On 10 November 2017 at 05:51, Guido van Rossum wrote:
> > > If we have to change the name I'd vote for string_annotations --
> > > "lazy" has too many other connotations (e.g. it might cause people to
> > > think it's the thunks). I find str_annotations too abbreviated, and
> > > stringify_annotations is too hard to spell.
> >
> > Aye, I'd be fine with "from __future__ import string_annotations" -
> > that's even more explicitly self-documenting than either of my
> > suggestions.
>
> --
> --Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From njs at pobox.com Fri Nov 10 04:51:06 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 10 Nov 2017 01:51:06 -0800
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID:

On Tue, Nov 7, 2017 at 8:45 AM, Nathaniel Smith wrote:
> Also, IIRC it's actually impossible to set the stacklevel= correctly when
> you're deprecating a whole module and issue the warning at import time,
> because you need to know how many stack frames the import system uses.

Doh, I didn't remember correctly. Actually Brett fixed this in 3.5:
https://bugs.python.org/issue24305
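So on 3.5+ the whole-module case can actually be handled -- a sketch of
what that looks like at the top of the deprecated module itself ("spam"
being a made-up name here):

    import warnings

    # Since the bpo-24305 fix, the import system's own frames are skipped
    # when counting stacklevel, so stacklevel=2 points at the import site.
    warnings.warn("the spam module is deprecated",
                  DeprecationWarning, stacklevel=2)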
-n

-- 
Nathaniel J. Smith -- https://vorpus.org

From stefan at bytereef.org Fri Nov 10 06:25:55 2017
From: stefan at bytereef.org (Stefan Krah)
Date: Fri, 10 Nov 2017 12:25:55 +0100
Subject: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
In-Reply-To:
References: <0420BC81-A363-4217-AB91-01A6921ACC35@gmail.com> <20171109133507.15c0dd88@fsol>
Message-ID: <20171110112555.GA3215@bytereef.org>

On Fri, Nov 10, 2017 at 12:09:12PM +1000, Nick Coghlan wrote:
> I'm with Antoine on this - we should be pushing folks writing
> extension modules towards code generators like Cython, cffi, SWIG, and
> SIP, support libraries like Boost::Python, or safer languages like
> Rust (which can then be wrapped with cffi), rather than encouraging
> more bespoke C/C++ extension modules with handcrafted refcount
> management.
>
> There's a reason the only parts of
> https://packaging.python.org/guides/packaging-binary-extensions/ that
> have actually been filled in are the ones explaining how to use a tool
> to write the extension module for you :)

They will be slower and in my experience not easier to maintain -- quite
the opposite.

Stefan Krah

From ncoghlan at gmail.com Fri Nov 10 07:38:11 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 10 Nov 2017 22:38:11 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

On 10 November 2017 at 19:20, Lukasz Langa wrote:
> Alright, we're on bikeshed territory now. Finally! :-)
>
> I was always thinking about this as "static annotations". The fact
> they're strings at runtime is irrelevant for most people who will use
> this future.

It's highly relevant to anyone currently using annotations for a purpose
*other than* type hints, though - they're going to have to work out how to
cope with annotations being strings rather than eagerly evaluated
expressions.

It's also a hopefully useful mnemonic as to what the new runtime semantics
are: the feature flag makes it as if all your annotations were quoted
strings, just without the actual quote markers.

> They don't want string annotations, they want them to not be evaluated
> at import time... they want them to be static. Also, "static typing" et
> al. I think it has a nice vibe to it.

Getting folks to *not* call type hinting static typing is an ongoing
challenge though, so it doesn't seem like a good idea to encourage that
link to me.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From k7hoven at gmail.com Fri Nov 10 10:04:17 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Fri, 10 Nov 2017 17:04:17 +0200
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

On Thu, Nov 9, 2017 at 9:51 PM, Guido van Rossum wrote:
> If we have to change the name I'd vote for string_annotations -- "lazy"
> has too many other connotations (e.g. it might cause people to think
> it's the thunks). I find str_annotations too abbreviated, and
> stringify_annotations is too hard to spell.

I can't say I disagree. And maybe importing string_annotations from the
__future__ doesn't sound quite as sad as importing something from the
__past__.

Anyway, it's not obvious to me that it is the module author that should
decide how the annotations are handled. See also this quote below:

(Quoted from the end of
https://mail.python.org/pipermail/python-ideas/2017-October/047311.html )

On Thu, Oct 12, 2017 at 3:59 PM, Koos Zevenhoven wrote:
>
> [*] Maybe somehow make the existing functionality a phantom easter
> egg -- a blast from the past which you can import and use, but which is
> otherwise invisible :-). Then later give warnings and finally remove it
> completely.
>
> But we need better smooth upgrade paths anyway, maybe something like:
>
> from __compat__ import unintuitive_decimal_contexts
>
> with unintuitive_decimal_contexts:
>     do_stuff()
>
> Now code bases can more quickly switch to new python versions and make
> the occasional compatibility adjustments more lazily, while already
> benefiting from other new language features.
>
> -- Koos

-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org Fri Nov 10 10:48:37 2017
From: guido at python.org (Guido van Rossum)
Date: Fri, 10 Nov 2017 07:48:37 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

On Fri, Nov 10, 2017 at 1:20 AM, Lukasz Langa wrote:
> Alright, we're on bikeshed territory now. Finally! :-)
>
> I was always thinking about this as "static annotations". The fact
> they're strings at runtime is irrelevant for most people who will use
> this future. They don't want string annotations, they want them to not
> be evaluated at import time... they want them to be static. Also,
> "static typing" et al. I think it has a nice vibe to it.
>
> I admit "annotations" is too broad but "static_annotations" (or
> "string_annotations" ¯\_(ツ)_/¯) will be the longest __future__ name so
> far. That was my main motivation behind using the shorter name. And a
> bit of megalomania I guess.

I don't mind the long name. Of all the options so far I really only like
'string_annotations' so let's go with that.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From random832 at fastmail.com Fri Nov 10 11:02:43 2017
From: random832 at fastmail.com (Random832)
Date: Fri, 10 Nov 2017 11:02:43 -0500
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To:
References:
Message-ID: <1510329763.3444970.1168317136.193F9B74@webmail.messagingengine.com>

On Tue, Nov 7, 2017, at 07:22, Nick Coghlan wrote:
> My suggestion for that definition is to have the *default* meaning of
> "third party code" be "everything that isn't __main__".

What is __main__? Or, rather, how do you determine when it is to blame?
For syntax it's easy, but any deprecated function necessarily belongs to
its own module and not to main. Main may have called it, which can be
detected from the stack trace, or it may have used it in some other way
(passed it to some builtin, or e.g. an itertools function that takes a
callable argument).

Maybe the DeprecationWarning should be issued at the name lookup* rather
than the call? What if "calling this function with some particular
combination of arguments" is deprecated?

*i.e. something like:

import sys
import warnings
from types import ModuleType

class deprecated:
    def __init__(self, obj):
        self.obj = obj

class DeprecatableModule(ModuleType):
    # Module attributes live in the instance __dict__, so a plain
    # __getattr__ would never fire; __getattribute__ is needed here.
    def __getattribute__(self, name):
        obj = ModuleType.__getattribute__(self, name)
        if type(obj) is deprecated:
            if ...:  # detect somehow that the caller is __main__
                warnings.warn(name + " is deprecated",
                              DeprecationWarning, stacklevel=2)
            return obj.obj
        return obj

    def __dir__(self):
        return [k for k in self.__dict__
                if not isinstance(self.__dict__[k], deprecated)]

sys.modules[__name__].__class__ = DeprecatableModule

@deprecated
def some_deprecated_function():
    ...
SOME_DEPRECATED_CONSTANT = deprecated(42) From status at bugs.python.org Fri Nov 10 12:09:42 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 10 Nov 2017 18:09:42 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20171110170942.6D16511A8C4@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-11-03 - 2017-11-10) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 6247 ( -8) closed 37507 (+76) total 43754 (+68) Open issues with patches: 2404 Issues opened (44) ================== #10049: Add a "no-op" (null) context manager to contextlib https://bugs.python.org/issue10049 reopened by ncoghlan #31486: calling a _json.Encoder object raises a SystemError in case ob https://bugs.python.org/issue31486 reopened by serhiy.storchaka #31937: Add the term "dunder" to the glossary https://bugs.python.org/issue31937 opened by brett.cannon #31938: Convert selectmodule.c to Argument Clinic https://bugs.python.org/issue31938 opened by taleinat #31939: Support return annotation in signature for Argument Clinic https://bugs.python.org/issue31939 opened by haypo #31940: copystat on symlinks fails for alpine -- faulty lchmod impleme https://bugs.python.org/issue31940 opened by Anthony Sottile #31942: Document that support of start and stop parameters in the Sequ https://bugs.python.org/issue31942 opened by serhiy.storchaka #31943: Add asyncio.Handle.cancelled() method https://bugs.python.org/issue31943 opened by decaz #31946: mailbox.MH.add loses status info from other formats https://bugs.python.org/issue31946 opened by shai #31947: names=None case is not handled by EnumMeta._create_ method https://bugs.python.org/issue31947 opened by anentropic #31948: [EASY] Broken MSDN links in msilib docs https://bugs.python.org/issue31948 opened by berker.peksag #31949: Bugs in PyTraceBack_Print() https://bugs.python.org/issue31949 opened by serhiy.storchaka #31951: import curses is broken on windows https://bugs.python.org/issue31951 opened by joe m2 #31954: Don't prevent dict optimization by coupling with OrderedDict https://bugs.python.org/issue31954 opened by serhiy.storchaka #31956: Add start and stop parameters to the array.index() https://bugs.python.org/issue31956 opened by niki.spahiev #31958: UUID versions are not validated to lie in the documented range https://bugs.python.org/issue31958 opened by David MacIver #31961: subprocess._execute_child doesn't accept a single PathLike arg https://bugs.python.org/issue31961 opened by Roy Williams #31962: test_importlib double free or corruption https://bugs.python.org/issue31962 opened by DNSGeek #31964: [3.4][3.5] pyexpat: compilaton of libexpat fails with: ISO C90 https://bugs.python.org/issue31964 opened by haypo #31966: [EASY C][Windows] print('hello\n', end='', flush=True) raises https://bugs.python.org/issue31966 opened by Guillaume Aldebert #31967: [Windows] test_distutils: fatal error LNK1158: cannot run 'rc. 
https://bugs.python.org/issue31967 opened by haypo #31968: exec(): method's default arguments from dict-inherited globals https://bugs.python.org/issue31968 opened by Ilya Polyakovskiy #31971: idle_test: failures on x86 Windows7 3.x https://bugs.python.org/issue31971 opened by haypo #31972: Inherited docstrings for pathlib classes are confusing https://bugs.python.org/issue31972 opened by eric.araujo #31973: Incomplete DeprecationWarning for async/await keywords https://bugs.python.org/issue31973 opened by barry #31975: Add a default filter for DeprecationWarning in __main__ https://bugs.python.org/issue31975 opened by ncoghlan #31976: Segfault when closing BufferedWriter from a different thread https://bugs.python.org/issue31976 opened by benfogle #31978: make it simpler to round fractions https://bugs.python.org/issue31978 opened by wolma #31979: Simplify converting non-ASCII strings to int, float and comple https://bugs.python.org/issue31979 opened by serhiy.storchaka #31982: 8.3. collections ??? Container datatypes https://bugs.python.org/issue31982 opened by Sasha Kacanski #31983: Officially add Py_SETREF and Py_XSETREF https://bugs.python.org/issue31983 opened by serhiy.storchaka #31984: startswith and endswith leak implementation details https://bugs.python.org/issue31984 opened by Ronan.Lamy #31985: Deprecate openfp() in aifc, sunau and wave https://bugs.python.org/issue31985 opened by brian.curtin #31986: [2.7] test_urllib2net.test_sites_no_connection_close() randoml https://bugs.python.org/issue31986 opened by haypo #31988: Saving bytearray to binary plist file doesn't work https://bugs.python.org/issue31988 opened by serhiy.storchaka #31990: Pickling deadlocks in thread with python -m https://bugs.python.org/issue31990 opened by Werner Smidt #31991: Race condition in wait with timeout for multiprocessing.Event https://bugs.python.org/issue31991 opened by tomMoral #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 opened by Olivier.Grisel #31994: json encoder exception could be better https://bugs.python.org/issue31994 opened by Jason Hihn #31995: Set operations documentation error https://bugs.python.org/issue31995 opened by Alexander Mentis #31997: SSL lib does not handle trailing dot (period) in hostname or c https://bugs.python.org/issue31997 opened by samiam #32001: @lru_cache needs to be called with () https://bugs.python.org/issue32001 opened by ataraxy #32002: test_c_locale_coercion fails when the default LC_CTYPE != "C" https://bugs.python.org/issue32002 opened by erik.bray #32003: multiprocessing.Array("b", 1), multiprocessing.Array("c",1 ) https://bugs.python.org/issue32003 opened by snwokenk Most recent 15 issues with no replies (15) ========================================== #32003: multiprocessing.Array("b", 1), multiprocessing.Array("c",1 ) https://bugs.python.org/issue32003 #31995: Set operations documentation error https://bugs.python.org/issue31995 #31994: json encoder exception could be better https://bugs.python.org/issue31994 #31991: Race condition in wait with timeout for multiprocessing.Event https://bugs.python.org/issue31991 #31990: Pickling deadlocks in thread with python -m https://bugs.python.org/issue31990 #31988: Saving bytearray to binary plist file doesn't work https://bugs.python.org/issue31988 #31986: [2.7] test_urllib2net.test_sites_no_connection_close() randoml https://bugs.python.org/issue31986 #31979: Simplify converting non-ASCII strings to int, float and comple 
https://bugs.python.org/issue31979 #31976: Segfault when closing BufferedWriter from a different thread https://bugs.python.org/issue31976 #31973: Incomplete DeprecationWarning for async/await keywords https://bugs.python.org/issue31973 #31972: Inherited docstrings for pathlib classes are confusing https://bugs.python.org/issue31972 #31962: test_importlib double free or corruption https://bugs.python.org/issue31962 #31948: [EASY] Broken MSDN links in msilib docs https://bugs.python.org/issue31948 #31947: names=None case is not handled by EnumMeta._create_ method https://bugs.python.org/issue31947 #31946: mailbox.MH.add loses status info from other formats https://bugs.python.org/issue31946 Most recent 15 issues waiting for review (15) ============================================= #32002: test_c_locale_coercion fails when the default LC_CTYPE != "C" https://bugs.python.org/issue32002 #31997: SSL lib does not handle trailing dot (period) in hostname or c https://bugs.python.org/issue31997 #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 #31985: Deprecate openfp() in aifc, sunau and wave https://bugs.python.org/issue31985 #31979: Simplify converting non-ASCII strings to int, float and comple https://bugs.python.org/issue31979 #31978: make it simpler to round fractions https://bugs.python.org/issue31978 #31976: Segfault when closing BufferedWriter from a different thread https://bugs.python.org/issue31976 #31971: idle_test: failures on x86 Windows7 3.x https://bugs.python.org/issue31971 #31968: exec(): method's default arguments from dict-inherited globals https://bugs.python.org/issue31968 #31961: subprocess._execute_child doesn't accept a single PathLike arg https://bugs.python.org/issue31961 #31954: Don't prevent dict optimization by coupling with OrderedDict https://bugs.python.org/issue31954 #31949: Bugs in PyTraceBack_Print() https://bugs.python.org/issue31949 #31942: Document that support of start and stop parameters in the Sequ https://bugs.python.org/issue31942 #31940: copystat on symlinks fails for alpine -- faulty lchmod impleme https://bugs.python.org/issue31940 #31938: Convert selectmodule.c to Argument Clinic https://bugs.python.org/issue31938 Top 10 most discussed issues (10) ================================= #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 17 msgs #31415: Add -X option to show import time https://bugs.python.org/issue31415 15 msgs #31975: Add a default filter for DeprecationWarning in __main__ https://bugs.python.org/issue31975 13 msgs #31939: Support return annotation in signature for Argument Clinic https://bugs.python.org/issue31939 10 msgs #31978: make it simpler to round fractions https://bugs.python.org/issue31978 10 msgs #31937: Add the term "dunder" to the glossary https://bugs.python.org/issue31937 9 msgs #30952: [Windows] include Math extension in SQlite https://bugs.python.org/issue30952 7 msgs #31984: startswith and endswith leak implementation details https://bugs.python.org/issue31984 7 msgs #31966: [EASY C][Windows] print('hello\n', end='', flush=True) raises https://bugs.python.org/issue31966 6 msgs #31985: Deprecate openfp() in aifc, sunau and wave https://bugs.python.org/issue31985 6 msgs Issues closed (63) ================== #17852: Built-in module _io can lose data from buffered files at exit https://bugs.python.org/issue17852 closed by nascheme #18669: curses.chgat() moves cursor, documentation says it shouldn't 
https://bugs.python.org/issue18669 closed by serhiy.storchaka #20486: msilib: can't close opened database https://bugs.python.org/issue20486 closed by berker.peksag #21423: concurrent.futures.ThreadPoolExecutor/ProcessPoolExecutor shou https://bugs.python.org/issue21423 closed by pitrou #21457: NetBSD curses support improvements https://bugs.python.org/issue21457 closed by serhiy.storchaka #21790: Change blocksize in http.client to the value of resource.getpa https://bugs.python.org/issue21790 closed by berker.peksag #21862: cProfile command-line should accept "-m module_name" as an alt https://bugs.python.org/issue21862 closed by pitrou #28340: [py2] TextIOWrapper.tell extremely slow https://bugs.python.org/issue28340 closed by haypo #28564: shutil.rmtree is inefficient due to listdir() instead of scand https://bugs.python.org/issue28564 closed by serhiy.storchaka #28706: msvc9compiler does not find a vcvarsall.bat of Visual C++ for https://bugs.python.org/issue28706 closed by skrah #28907: test_pydoc fails if build is in sub-directory https://bugs.python.org/issue28907 closed by nascheme #28997: test_readline.test_nonascii fails on Android https://bugs.python.org/issue28997 closed by xdegaye #29179: Py_UNUSED is not documented https://bugs.python.org/issue29179 closed by haypo #30057: signal.signal should check tripped signals https://bugs.python.org/issue30057 closed by pitrou #31222: datetime.py implementation of .replace inconsistent with C imp https://bugs.python.org/issue31222 closed by haypo #31271: an assertion failure in io.TextIOWrapper.write https://bugs.python.org/issue31271 closed by haypo #31523: Windows build file fixes https://bugs.python.org/issue31523 closed by steve.dower #31530: Python 2.7 readahead feature of file objects is not thread saf https://bugs.python.org/issue31530 closed by serhiy.storchaka #31609: PCbuild\clean.bat fails if the path contains whitespaces https://bugs.python.org/issue31609 closed by steve.dower #31668: "fixFirefoxAnchorBug" function in doctools.js causes navigatin https://bugs.python.org/issue31668 closed by berker.peksag #31678: Incorrect C Function name for timedelta https://bugs.python.org/issue31678 closed by berker.peksag #31764: sqlite3.Cursor.close() crashes in case the Cursor object is un https://bugs.python.org/issue31764 closed by haypo #31770: crash and refleaks when calling sqlite3.Cursor.__init__() more https://bugs.python.org/issue31770 closed by haypo #31793: Allow to specialize smart quotes in documentation translations https://bugs.python.org/issue31793 closed by inada.naoki #31843: sqlite3.connect() should accept PathLike objects https://bugs.python.org/issue31843 closed by haypo #31884: [Windows] subprocess set priority on windows https://bugs.python.org/issue31884 closed by haypo #31889: difflib SequenceMatcher ratio() still have unpredictable behav https://bugs.python.org/issue31889 closed by tim.peters #31895: Native hijri calendar support https://bugs.python.org/issue31895 closed by terry.reedy #31896: In function define class inherit ctypes.structure, and using c https://bugs.python.org/issue31896 closed by berker.peksag #31910: test_socket.test_create_connection() failed with EADDRNOTAVAIL https://bugs.python.org/issue31910 closed by haypo #31921: Bring together logic for entering/leaving a frame in frameobje https://bugs.python.org/issue31921 closed by pdox #31923: Misspelled "loading" in Doc/includes/sqlite3/load_extension.py https://bugs.python.org/issue31923 closed by berker.peksag #31924: Fix test_curses on 
NetBSD 8 https://bugs.python.org/issue31924 closed by serhiy.storchaka #31927: Fix compiling the socket module on NetBSD 8 and other issues https://bugs.python.org/issue31927 closed by serhiy.storchaka #31932: setup.py cannot find vcversall.bat on MSWin 8.1 if installed i https://bugs.python.org/issue31932 closed by steve.dower #31933: some Blake2 parameters are encoded backwards on big-endian pla https://bugs.python.org/issue31933 closed by christian.heimes #31934: Failure to build out of source from a not clean source https://bugs.python.org/issue31934 closed by haypo #31936: "5. The import system" grammatical error https://bugs.python.org/issue31936 closed by barry #31941: ImportError: DLL Load Failure: The specified module cannot be https://bugs.python.org/issue31941 closed by r.david.murray #31944: Windows Apps and Features items only have "Uninstall" https://bugs.python.org/issue31944 closed by steve.dower #31945: Configurable blocksize in HTTP(S)Connection https://bugs.python.org/issue31945 closed by haypo #31950: Default event loop policy doc lacks precision https://bugs.python.org/issue31950 closed by pitrou #31952: Weird behavior on tupple item assignment https://bugs.python.org/issue31952 closed by ebarry #31953: Dedicated place for security announcements? https://bugs.python.org/issue31953 closed by Mariatta #31955: distutils C compiler: set_executables() incorrectly parse valu https://bugs.python.org/issue31955 closed by haypo #31957: [Windows] PCbuild error: A numeric comparison was attempted https://bugs.python.org/issue31957 closed by haypo #31959: Directory at `TemporaryDirectory().name` does not exist https://bugs.python.org/issue31959 closed by serhiy.storchaka #31960: Protection against using a Future with another loop only works https://bugs.python.org/issue31960 closed by pitrou #31963: AMD64 Debian PGO 3.x buildbot: compilation failed with an inte https://bugs.python.org/issue31963 closed by gregory.p.smith #31965: Incorrect documentation for multiprocessing.connection.{Client https://bugs.python.org/issue31965 closed by pitrou #31969: re.groups() is not checking the arguments https://bugs.python.org/issue31969 closed by serhiy.storchaka #31970: asyncio debug mode is very slow https://bugs.python.org/issue31970 closed by pitrou #31974: Cursor misbahavior with Tkinter 3.6.1/tk 8.5 Text on Mac Sierr https://bugs.python.org/issue31974 closed by ned.deily #31977: threading.Condition can not work with threading.Semaphore https://bugs.python.org/issue31977 closed by pitrou #31980: Special case log(x, 2), log(x, 10) and pow(2, x) https://bugs.python.org/issue31980 closed by serhiy.storchaka #31981: os.mkdirs does not exist on 2.7.5+ https://bugs.python.org/issue31981 closed by serhiy.storchaka #31987: Ctypes Packing Bitfields Incorrectly - GCC both Linux and Cygw https://bugs.python.org/issue31987 closed by berker.peksag #31989: setattr on a property gives a very unhelpful exception https://bugs.python.org/issue31989 closed by r.david.murray #31992: Make iteration over dict_items yield namedtuples https://bugs.python.org/issue31992 closed by r.david.murray #31996: `setuptools.setup` parameter `py_modules` is undocumented https://bugs.python.org/issue31996 closed by berker.peksag #31998: test_zipapp failed when the zlib module is not available https://bugs.python.org/issue31998 closed by serhiy.storchaka #31999: test_venv failed when the zlib module is not available https://bugs.python.org/issue31999 closed by serhiy.storchaka #32000: test_undecodable_filename in 
test_httpservers failed on Mac OS
https://bugs.python.org/issue32000 closed by serhiy.storchaka

From ethan at stoneleaf.us Fri Nov 10 12:50:34 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 10 Nov 2017 09:50:34 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID: <5A05E6EA.7050602@stoneleaf.us>

On 11/10/2017 07:48 AM, Guido van Rossum wrote:

> I don't mind the long name. Of all the options so far I really only like
> 'string_annotations' so let's go with that.

As someone else mentioned, we have function annotations and variable
annotations already, which makes string_annotations sound like it's
annotations for strings.

Contrariwise, "annotation_strings" sounds like a different type of
annotation -- they are now being stored as strings, instead of something
else.

--
~Ethan~

From k7hoven at gmail.com Fri Nov 10 13:07:42 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Fri, 10 Nov 2017 20:07:42 +0200
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <5A05E6EA.7050602@stoneleaf.us>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A05E6EA.7050602@stoneleaf.us>
Message-ID:

On Fri, Nov 10, 2017 at 7:50 PM, Ethan Furman wrote:
> On 11/10/2017 07:48 AM, Guido van Rossum wrote:
>> I don't mind the long name. Of all the options so far I really only like
>> 'string_annotations' so let's go with that.
>
> As someone else mentioned, we have function annotations and variable
> annotations already, which makes string_annotations sound like it's
> annotations for strings.
>
> Contrariwise, "annotation_strings" sounds like a different type of
> annotation -- they are now being stored as strings, instead of something
> else.

Or a step further (longer), with annotations_as_strings.

-- Koos

-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org Fri Nov 10 13:13:23 2017
From: guido at python.org (Guido van Rossum)
Date: Fri, 10 Nov 2017 10:13:23 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <5A05E6EA.7050602@stoneleaf.us>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A05E6EA.7050602@stoneleaf.us>
Message-ID:

On Fri, Nov 10, 2017 at 9:50 AM, Ethan Furman wrote:
> As someone else mentioned, we have function annotations and variable
> annotations already, which makes string_annotations sound like it's
> annotations for strings.

We can't strive to encode the full documentation in the future's name.

> Contrariwise, "annotation_strings" sounds like a different type of
> annotation -- they are now being stored as strings, instead of something
> else.

Trust me, that can also be misinterpreted. Let's stop the bikeshedding.
-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lukasz at langa.pl Fri Nov 10 13:17:22 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Fri, 10 Nov 2017 19:17:22 +0100
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To:
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID:

> On 10 Nov, 2017, at 4:48 PM, Guido van Rossum wrote:
>
> On Fri, Nov 10, 2017 at 1:20 AM, Lukasz Langa wrote:
> > Alright, we're on bikeshed territory now. Finally! :-)
> >
> > I was always thinking about this as "static annotations". The fact
> > they're strings at runtime is irrelevant for most people who will use
> > this future. They don't want string annotations, they want them to not
> > be evaluated at import time... they want them to be static. Also,
> > "static typing" et al. I think it has a nice vibe to it.
> >
> > I admit "annotations" is too broad but "static_annotations" (or
> > "string_annotations" ¯\_(ツ)_/¯) will be the longest __future__ name so
> > far. That was my main motivation behind using the shorter name. And a
> > bit of megalomania I guess.
>
> I don't mind the long name. Of all the options so far I really only like
> 'string_annotations' so let's go with that.

Done.

- Ł

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From larry.chen at microsemi.com Fri Nov 10 14:54:57 2017
From: larry.chen at microsemi.com (Larry Chen)
Date: Fri, 10 Nov 2017 19:54:57 +0000
Subject: [Python-Dev] Python 3.6.3 venv FAILURE
Message-ID: <06FCCD5E975F7F48BC1BC9CB5F461DBB9D944C7E@avsrvexchmbx2.microsemi.net>

Upgraded from 3.6.1 to 3.6.3, but got an error when trying to create my
virtual environment.

[larrchen at rslab239 Larry]$ /opt/python3.6.3/bin/python3.6 -m venv /u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363
Error: Command '['/u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363/bin/python3.6', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
[larrchen at rslab239 Larry]$

Regards,
Larry Chen
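P.S. If the underlying traceback would help with diagnosis, it should be
reproducible by re-running the ensurepip step from the error message by
hand:

    /u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363/bin/python3.6 -Im ensurepip --upgrade --default-pip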
--
Greg

From pludemann at google.com  Fri Nov 10 22:32:24 2017
From: pludemann at google.com (Peter Ludemann)
Date: Fri, 10 Nov 2017 19:32:24 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: <5A066BC7.9040708@canterbury.ac.nz>
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A05E6EA.7050602@stoneleaf.us> <5A066BC7.9040708@canterbury.ac.nz>
Message-ID: 

On 10 November 2017 at 19:17, Greg Ewing wrote:

> Ethan Furman wrote:
>
>> Contrariwise, "annotation_strings" sounds like a different type of
>> annotation -- they are now being stored as strings, instead of something
>> else.
>>
>
> How about "annotations_as_strings"?

That feels unambiguous.

"annotations_to_str" is shorter, given that "str" is a type in Python, and
"to" says that it's converting *to* string (it's given *as* an expression).

> --
> Greg
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bigobangux at gmail.com  Fri Nov 10 22:53:44 2017
From: bigobangux at gmail.com (Ben Usman)
Date: Fri, 10 Nov 2017 22:53:44 -0500
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
Message-ID: 

The following works now:

seq = [1, 2]
d = {'c': 3, 'a': 1, 'b': 2}

(el1, el2) = seq
el1, el2 = seq
head, *tail = seq

seq_new = (*seq, *tail)
dict_new = {**d, **{'c': 4}}

def f(arg1, arg2, a, b, c):
    pass

f(*seq, **d)

It seems like dict unpacking syntax would not be fully coherent with
list unpacking syntax without something like:

{b, a, **other} = **d

Because iterables have both syntax for function call unpacking and
"rhs in assignment unpacking" and dict has only function call
unpacking syntax.

I was not able to find any PEPs that suggest this (search keywords:
"PEP 448 dicts", "dictionary unpacking assignment", checked PEP-0),
however, let me know if I am wrong.

The main use-case, in my understanding, is getting shortcuts to
elements of a dictionary if they are going to be used more than
once later in the scope. A made-up example is using a config to
initialize a bunch of things with many config arguments with long
names that have overlap in keywords used in initialization.

One should either write long calls like

start_a(config['parameter1'], config['parameter2'],
        config['parameter3'], config['parameter4'])

start_b(config['parameter3'], config['parameter2'],
        config['parameter3'], config['parameter4'])

many times or use a list-comprehension solution mentioned above.

It becomes even worse (in terms of readability) with nested structures.
start_b(config['group2']['parameter3'], config['parameter2'],
        config['parameter3'], config['group2']['parameter3'])


## Rationale

Right now this problem is often solved using [list] comprehensions,
but this is somewhat verbose:

a, b = (d[k] for k in ['a', 'b'])

or direct per-instance assignment (looks simple with
single-character keys, but often becomes very verbose with
real-world long key names)

a = d['a']
b = d['b']

Alternatively one could have a very basic method\function
get_n() or __getitem__() accepting more than a single argument

a, b = d.get_n('a', 'b')
a, b = get_n(d, 'a', 'b')
a, b = d['a', 'b']

All these approaches require verbose double-mentioning of the same
key. It becomes even worse if you have nested structures
of dictionaries.

## Concerns and questions:

0. This is the most troubling part, imho; other questions
are more like common thoughts. It seems (to put it mildly)
weird that execution flow depends on names of local variables.

For example, one can not easily refactor these variable names. However,
same is true for dictionary keys anyway: you can not suddenly decide
and refactor your code to expect dictionaries with keys 'c' and
'd' whereas your entire system still expects you to use dictionaries
with keys 'a' and 'b'. A counter-objection is that this specific
scenario is usually handled with record\struct-like classes with
fixed members rather than dicts, so this is not an issue.

Quite a few languages (Clojure and JavaScript, to name a few) seem
to have this feature now and it seems like they did not suffer too
much from refactoring hell. This does not mean that their approach
is good, just that it is "manageable".

1. This line seems coherent with sequence syntax, but redundant:
{b, a, **other} = **d

and the following use of "throwaway" variable just looks poor visually
{b, a, **_} = **d

could it be less verbose like this
{b, a} = **d

but it is not very coherent with lists behavior.

E.g. what if that line did not raise something like "ValueError:
Too many keys to unpack, got an unexpected keyword argument 'c'".

2. Unpacking in other contexts

{self.a, b, **other} = **d

should it be interpreted as
self.a, b = d['a'], d['b']

or

self.a, b = d['self.a'], d['b']

probably the first, but what I am saying is that these name-extracting
rules should be strictly specified and it might not be trivial.

---
Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jelle.zijlstra at gmail.com  Sat Nov 11 01:22:41 2017
From: jelle.zijlstra at gmail.com (Jelle Zijlstra)
Date: Fri, 10 Nov 2017 22:22:41 -0800
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
In-Reply-To: 
References: 
Message-ID: 

2017-11-10 19:53 GMT-08:00 Ben Usman :

> It seems like dict unpacking syntax would not be fully coherent with
> list unpacking syntax without something like:
>
> {b, a, **other} = **d
>
> I was not able to find any PEPs that suggest this (search keywords:
> "PEP 448 dicts", "dictionary unpacking assignment", checked PEP-0),
> however, let me know if I am wrong.
>
It was discussed at great length on Python-ideas about a year ago. There
is a thread called "Unpacking a dict" from May 2016.

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bigobangux at gmail.com  Sat Nov 11 01:26:19 2017
From: bigobangux at gmail.com (Ben Usman)
Date: Sat, 11 Nov 2017 01:26:19 -0500
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
In-Reply-To: 
References: 
Message-ID: 

Got it, thank you. I'll go and check it out!

On Nov 11, 2017 01:22, "Jelle Zijlstra"  wrote:

> It was discussed at great length on Python-ideas about a year ago. There
> is a thread called "Unpacking a dict" from May 2016.
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Sat Nov 11 02:00:36 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 11 Nov 2017 17:00:36 +1000
Subject: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
In-Reply-To: <1510329763.3444970.1168317136.193F9B74@webmail.messagingengine.com>
References: <1510329763.3444970.1168317136.193F9B74@webmail.messagingengine.com>
Message-ID: 

On 11 November 2017 at 02:02, Random832 wrote:
> On Tue, Nov 7, 2017, at 07:22, Nick Coghlan wrote:
>> My suggestion for that definition is to have the *default* meaning of
>> "third party code" be "everything that isn't __main__".
>
> What is __main__? Or, rather, how do you determine when it is to blame?
> For syntax it's easy, but any deprecated function necessarily belongs to
> its own module and not to main. Main may have called it, which can be
> detected from the stack trace, or it may have used it in some other way
> (pass to some builtin or e.g. itertools function that takes a callable
> argument, for example).

The warnings machinery already defines how this works (look for
"stacklevel"). For callbacks defined as Python code, the deprecated
call will be attributed to whichever module defined the callback, not
the machinery that called the callback.

Cheers, Nick.
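(To make the "stacklevel" mechanics concrete, here is a minimal sketch; the function name is illustrative, not from the thread. Passing stacklevel=2 attributes the warning to the *caller's* frame, so the warning's nominal location is the calling module -- e.g. __main__ for a script:

    import warnings

    def old_api():
        # stacklevel=2 attributes the warning to old_api()'s caller
        # rather than to this line, so a filter keyed on the calling
        # module (such as one matching __main__) can see it.
        warnings.warn("old_api() is deprecated", DeprecationWarning,
                      stacklevel=2)

    old_api()  # with warnings enabled, reported against this line

Run as "python -W default::DeprecationWarning script.py", the warning is printed against the old_api() call site, not the body of old_api().)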
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sat Nov 11 02:02:15 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 11 Nov 2017 17:02:15 +1000
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: 
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID: 

On 11 November 2017 at 01:48, Guido van Rossum wrote:
> I don't mind the long name. Of all the options so far I really only like
> 'string_annotations' so let's go with that.

+1 from me.

Cheers, Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From brett at python.org  Sat Nov 11 02:34:05 2017
From: brett at python.org (Brett Cannon)
Date: Sat, 11 Nov 2017 07:34:05 +0000
Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
In-Reply-To: 
References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org>
Message-ID: 

On Thu, Nov 9, 2017, 17:33 Nathaniel Smith, wrote:

> On Nov 8, 2017 16:12, "Nick Coghlan" wrote:
>
> On 9 November 2017 at 07:46, Antoine Pitrou wrote:
> >
> > On 08/11/2017 at 22:43, Nick Coghlan wrote:
> >>
> >> However, between them, the following two guidelines should provide
> >> pretty good deprecation warning coverage for the world's Python code:
> >>
> >> 1. If it's in __main__, it will emit deprecation warnings at runtime
> >> 2. If it's not in __main__, it should have a test suite
> >
> > Nick, have you actually read the discussion and the complaints people
> > had with the current situation? Most of them *don't* specifically talk
> > about __main__ scripts.
>
> I have, and I've also re-read the discussions regarding why the
> default got changed in the first place.
>
> Behaviour up until 2.6 & 3.1:
>
>     once::DeprecationWarning
>
> Behaviour since 2.7 & 3.2:
>
>     ignore::DeprecationWarning
>
> With test runners overriding the default filters to set it back to
> "once::DeprecationWarning".
>
>
> Is this intended to be a description of the current state of affairs?
> Because I've never encountered a test runner that does this... Which
> runners are you thinking of?
>
>
> The rationale for that change was so that end users of applications
> that merely happened to be written in Python wouldn't see deprecation
> warnings when Linux distros (or the end user) updated to a new Python
> version. It had the downside that you had to remember to opt in to
> deprecation warnings in order to see them, which is a problem if you
> mostly use Python for ad hoc personal scripting.
>
> Proposed behaviour for Python 3.7+:
>
>     once::DeprecationWarning:__main__
>     ignore::DeprecationWarning
>
> With test runners still overriding the default filters to set them
> back to "once::DeprecationWarning".
>
> This is a partial reversion back to the pre-2.7 behaviour, focused
> specifically on interactive use and ad hoc personal scripting. For ad
> hoc *distributed* scripting, the changed default encourages upgrading
> from single-file scripts to the zipapp model, and then minimising the
> amount of code that runs directly in __main__.py.
> > I expect this will be a sufficient change to solve the specific
> problem I'm personally concerned by, so I'm no longer inclined to
> argue for anything more complicated. Other folks may have other
> concerns that this tweak to the default filters doesn't address - they
> can continue to build their case for more complex options using this
> as the new baseline behaviour.
>
>
> I think most people's concern is that we've gotten into a state where
> DeprecationWarnings are largely useless in practice, because no one sees
> them. Effectively the norm now is that developers (both the Python core
> team and downstream libraries) think they're following some sensible
> deprecation cycle, but often they're actually making changes without any
> warning, they just wait a year to do it. It's not clear why we're bothering
> to deprecate through multiple releases -- which adds major overhead -- if
> in practice we aren't going to actually warn most people. Enabling them
> for another 1% of code doesn't really address this.
>
> As I mentioned above, it's also having the paradoxical effect of making it
> so that end-users are *more* likely to see deprecation warnings, since
> major libraries are giving up on using DeprecationWarning. Most recently it
> looks like pyca/cryptography is going to switch, partly as a result of this
> thread:
> https://github.com/pyca/cryptography/pull/4014
>
> Some more ideas to throw out there:
>
> - if an envvar CI=true is set, then by default make deprecation warnings
> into errors. (This is an informal standard that lots of CI systems use.
> Error instead of "once" because most people don't look at CI output at all
> unless there's an error.)
>

One problem with that is I don't want e.g. mypy to start spewing out
warnings while checking my code. That's why I like Victor's idea of a -X
option that also flips on other test/debug features. Yes, this would also
trigger for test runners, but that's at least a smaller amount of affected
code.

-Brett

> - provide some mechanism that makes it easy to have a deprecation warning
> that starts out as invisible, but then becomes visible as you get closer to
> the switchover point. (E.g. CPython might make the deprecation warnings
> that it issues be invisible in 3.x.0 and 3.x.1 but become visible in
> 3.x.2+.) Maybe:
>
> # in warnings.py
> def deprecation_warning(library_version, visible_in_version,
> change_in_version, msg, stacklevel):
>     ...
>
> Then a call like:
>
> deprecation_warning(my_library.__version__, "1.3", "1.4", "This function
> is deprecated", 2)
>
> issues an InvisibleDeprecationWarning if my_library.__version__ < 1.3, and
> a VisibleDeprecationWarning otherwise.
>
> (The stacklevel argument is mandatory because the usual default of 1 is
> always wrong for deprecation warnings.)
>
> -n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brett at python.org  Sat Nov 11 02:36:20 2017
From: brett at python.org (Brett Cannon)
Date: Sat, 11 Nov 2017 07:36:20 +0000
Subject: [Python-Dev] Python 3.6.3 venv FAILURE
In-Reply-To: <06FCCD5E975F7F48BC1BC9CB5F461DBB9D944C7E@avsrvexchmbx2.microsemi.net>
References: <06FCCD5E975F7F48BC1BC9CB5F461DBB9D944C7E@avsrvexchmbx2.microsemi.net>
Message-ID: 

Please file bugs at bugs.python.org.
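(A note for anyone hitting the same failure: venv only reports that ensurepip exited with a non-zero status; re-running the command from the error message by hand will usually show the underlying traceback. The invocation below is copied verbatim from the report quoted underneath:

    /u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363/bin/python3.6 -Im ensurepip --upgrade --default-pip

)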
On Fri, Nov 10, 2017, 16:40 Larry Chen, wrote: > Upgraded from 3.6.1 to 3.6.3; but got an error when trying to create my > virtual environment. > > > > [larrchen at rslab239 Larry]$ /opt/python3.6.3/bin/python3.6 -m venv > /u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363 > > Error: Command > '['/u/larrchen/work2/SAN/Users/Larry/rslab239_myENV_363/bin/python3.6', > '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit > status 1. > > [larrchen at rslab239 Larry]$ > > > > Regards, > > Larry Chen > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Nov 11 06:34:56 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 11 Nov 2017 12:34:56 +0100 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: <20171111123456.5ffcebc1@fsol> On Sat, 11 Nov 2017 07:34:05 +0000 Brett Cannon wrote: > > One problem with that is I don't want e.g. mypy to start spewing out > warnings while checking my code. It's rather trivial for mypy (or any other code analysis tool) to turn warnings off when importing the code under analysis. And since there are other warnings out there than DeprecationWarnings, it should do it anyway even if we don't change DeprecationWarning's default behaviour. Regards Antoine. From ncoghlan at gmail.com Sat Nov 11 18:29:36 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 12 Nov 2017 09:29:36 +1000 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On 11 November 2017 at 17:34, Brett Cannon wrote: > On Thu, Nov 9, 2017, 17:33 Nathaniel Smith, wrote: >> Some more ideas to throw out there: >> >> - if an envvar CI=true is set, then by default make deprecation warnings >> into errors. (This is an informal standard that lots of CI systems use. >> Error instead of "once" because most people don't look at CI output at all >> unless there's an error.) > > > One problem with that is I don't want e.g. mypy to start spewing out > warnings while checking my code. That's why I like Victor's idea of a -X > option that also flips on other test/debug features. Yes, this would also > trigger for test runners, but that's at least a smaller amount of affected > code. For mypy itself, the CLI is declared as a console_scripts entry point, so none of mypy's own code actually runs in __main__ - it's all part of an imported module. 
And given that one of the key benefits of static analysis is that it *doesn't* run the code, I'd be surprised if mypy could ever trigger a runtime warning in the code under tests :) For other test runners that do import the code under test, I think that *our* responsibility is to make it clearer that the default warning state isn't something that test runner designers should passively inherit from the interpreter - deciding what the default warning state should be (and how to get subprocesses to inherit that setting by default) is part of the process of designing the test runner. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Sat Nov 11 18:55:39 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 11 Nov 2017 15:55:39 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On Fri, Nov 10, 2017 at 11:34 PM, Brett Cannon wrote: > On Thu, Nov 9, 2017, 17:33 Nathaniel Smith, wrote: >> - if an envvar CI=true is set, then by default make deprecation warnings >> into errors. (This is an informal standard that lots of CI systems use. >> Error instead of "once" because most people don't look at CI output at all >> unless there's an error.) > > One problem with that is I don't want e.g. mypy to start spewing out > warnings while checking my code. That's why I like Victor's idea of a -X > option that also flips on other test/debug features. Yes, this would also > trigger for test runners, but that's at least a smaller amount of affected > code. Ah, yeah, you're right -- often CI systems use Python programs for infrastructure, beyond the actual code under test. pip is maybe a more obvious example than mypy -- we probably don't want pip to stop working in CI runs just because it happens to use a deprecated API somewhere :-). So this idea doesn't work. -n -- Nathaniel J. Smith -- https://vorpus.org From jsbueno at python.org.br Sat Nov 11 19:09:58 2017 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Sat, 11 Nov 2017 22:09:58 -0200 Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs) In-Reply-To: References: Message-ID: Ben, I have a small package which enables one to do: with MapGetter(my_dictionary): from my_dictionary import a, b, parameter3 If this interests you, contributions so it can get hardenned for mainstram acceptance are welcome. https://github.com/jsbueno/extradict On 11 November 2017 at 04:26, Ben Usman wrote: > Got it, thank you. I'll go and check it out! > > On Nov 11, 2017 01:22, "Jelle Zijlstra" wrote: >> >> >> >> 2017-11-10 19:53 GMT-08:00 Ben Usman : >>> >>> The following works now: >>> >>> seq = [1, 2] >>> d = {'c': 3, 'a': 1, 'b': 2} >>> >>> (el1, el2) = *seq >>> el1, el2 = *seq >>> head, *tail = *seq >>> >>> seq_new = (*seq, *tail) >>> dict_new = {**d, **{'c': 4}} >>> >>> def f(arg1, arg2, a, b, c): >>> pass >>> >>> f(*seq, **d) >>> >>> It seems like dict unpacking syntax would not be fully coherent with >>> list unpacking syntax without something like: >>> >>> {b, a, **other} = **d >>> >>> Because iterables have both syntax for function call unpacking and >>> "rhs in assignment unpacking" and dict has only function call >>> unpacking syntax. 
>>> >>> I was not able to find any PEPs that suggest this (search keywords: >>> "PEP 445 dicts", "dictionary unpacking assignment", checked PEP-0), >>> however, let me know if I am wrong. >>> >> It was discussed at great length on Python-ideas about a year ago. There >> is a thread called "Unpacking a dict" from May 2016. >> >>> >>> The main use-case, in my understating, is getting shortcuts to >>> elements of a dictionary if they are going to be used more then >>> ones later in the scope. A made-up example is using a config to >>> initiate a bunch of things with many config arguments with long >>> names that have overlap in keywords used in initialization. >>> >>> One should either write long calls like >>> >>> start_a(config['parameter1'], config['parameter2'], >>> config['parameter3'], config['parameter4']) >>> >>> start_b(config['parameter3'], config['parameter2'], >>> config['parameter3'], config['parameter4']) >>> >>> many times or use a list-comprehension solution mentioned above. >>> >>> It becomes even worse (in terms of readability) with nested structures. >>> >>> start_b(config['group2']['parameter3'], config['parameter2'], >>> config['parameter3'], config['group2']['parameter3']) >>> >>> >>> ## Rationale >>> >>> Right now this problem is often solved using [list] comprehensions, >>> but this is somewhat verbose: >>> >>> a, b = (d[k] for k in ['a', 'b']) >>> >>> or direct per-instance assignment (looks simple for with >>> single-character keys, but often becomes very verbose with >>> real-world long key names) >>> >>> a = d['a'] >>> b = d['b'] >>> >>> Alternatively one could have a very basic method\function >>> get_n() or __getitem__() accepting more then a single argument >>> >>> a, b = d.get_n('a', 'b') >>> a, b = get_n(d, 'a', 'b') >>> a, b = d['a', 'b'] >>> >>> All these approaches require verbose double-mentioning of same >>> key. It becomes even worse if you have nested structures >>> of dictionaries. >>> >>> ## Concerns and questions: >>> >>> 0. This is the most troubling part, imho, other questions >>> are more like common thoughts. It seems (to put it mildly) >>> weird that execution flow depends on names of local variables. >>> >>> For example, one can not easily refactor these variable names. However, >>> same is true for dictionary keys anyway: you can not suddenly decide >>> and refactor your code to expect dictionaries with keys 'c' and >>> 'd' whereas your entire system still expects you to use dictionaries >>> with keys 'a' and 'b'. A counter-objection is that this specific >>> scenario is usually handled with record\struct-like classes with >>> fixed members rather then dicts, so this is not an issue. >>> >>> Quite a few languages (closure and javascript to name a few) seem >>> to have this feature now and it seems like they did not suffer too >>> much from refactoring hell. This does not mean that their approach >>> is good, just that it is "manageable". >>> >>> 1. This line seems coherent with sequence syntax, but redundant: >>> {b, a, **other} = **d >>> >>> and the following use of "throwaway" variable just looks poor visually >>> {b, a, **_} = **d >>> >>> could it be less verbose like this >>> {b, a} = **d >>> >>> but it is not very coherent with lists behavior. >>> >>> E.g. what if that line did not raise something like "ValueError: >>> Too many keys to unpack, got an unexpected keyword argument 'c'". >>> >>> 2. 
Unpacking in other contexts >>> >>> {self.a, b, **other} = **d >>> >>> should it be interpreted as >>> self.a, b = d['a'], d['b'] >>> >>> or >>> >>> self.a, b = d['self.a'], d['b'] >>> >>> probably the first, but what I am saying is that these name-extracting >>> rules should be strictly specified and it might not be trivial. >>> >>> --- >>> Ben >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/jelle.zijlstra%40gmail.com >>> >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br > From k7hoven at gmail.com Sat Nov 11 20:40:13 2017 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Sun, 12 Nov 2017 03:40:13 +0200 Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs) In-Reply-To: References: Message-ID: Oops, forgot to reply to the list. On Nov 12, 2017 03:35, "Koos Zevenhoven" wrote: On Nov 12, 2017 02:12, "Joao S. O. Bueno" wrote: Ben, I have a small package which enables one to do: with MapGetter(my_dictionary): from my_dictionary import a, b, parameter3 If this interests you, contributions so it can get hardenned for mainstram acceptance are welcome. https://github.com/jsbueno/extradict Your VersionDict in fact has some similarities to what I have thought of implementing using the PEP 555 machinery, but it is also a bit different. Interesting... -- Koos (mobile) On 11 November 2017 at 04:26, Ben Usman wrote: > Got it, thank you. I'll go and check it out! > > On Nov 11, 2017 01:22, "Jelle Zijlstra" wrote: >> >> >> >> 2017-11-10 19:53 GMT-08:00 Ben Usman : >>> >>> The following works now: >>> >>> seq = [1, 2] >>> d = {'c': 3, 'a': 1, 'b': 2} >>> >>> (el1, el2) = *seq >>> el1, el2 = *seq >>> head, *tail = *seq >>> >>> seq_new = (*seq, *tail) >>> dict_new = {**d, **{'c': 4}} >>> >>> def f(arg1, arg2, a, b, c): >>> pass >>> >>> f(*seq, **d) >>> >>> It seems like dict unpacking syntax would not be fully coherent with >>> list unpacking syntax without something like: >>> >>> {b, a, **other} = **d >>> >>> Because iterables have both syntax for function call unpacking and >>> "rhs in assignment unpacking" and dict has only function call >>> unpacking syntax. >>> >>> I was not able to find any PEPs that suggest this (search keywords: >>> "PEP 445 dicts", "dictionary unpacking assignment", checked PEP-0), >>> however, let me know if I am wrong. >>> >> It was discussed at great length on Python-ideas about a year ago. There >> is a thread called "Unpacking a dict" from May 2016. >> >>> >>> The main use-case, in my understating, is getting shortcuts to >>> elements of a dictionary if they are going to be used more then >>> ones later in the scope. A made-up example is using a config to >>> initiate a bunch of things with many config arguments with long >>> names that have overlap in keywords used in initialization. >>> >>> One should either write long calls like >>> >>> start_a(config['parameter1'], config['parameter2'], >>> config['parameter3'], config['parameter4']) >>> >>> start_b(config['parameter3'], config['parameter2'], >>> config['parameter3'], config['parameter4']) >>> >>> many times or use a list-comprehension solution mentioned above. 
>>> >>> It becomes even worse (in terms of readability) with nested structures. >>> >>> start_b(config['group2']['parameter3'], config['parameter2'], >>> config['parameter3'], config['group2']['parameter3']) >>> >>> >>> ## Rationale >>> >>> Right now this problem is often solved using [list] comprehensions, >>> but this is somewhat verbose: >>> >>> a, b = (d[k] for k in ['a', 'b']) >>> >>> or direct per-instance assignment (looks simple for with >>> single-character keys, but often becomes very verbose with >>> real-world long key names) >>> >>> a = d['a'] >>> b = d['b'] >>> >>> Alternatively one could have a very basic method\function >>> get_n() or __getitem__() accepting more then a single argument >>> >>> a, b = d.get_n('a', 'b') >>> a, b = get_n(d, 'a', 'b') >>> a, b = d['a', 'b'] >>> >>> All these approaches require verbose double-mentioning of same >>> key. It becomes even worse if you have nested structures >>> of dictionaries. >>> >>> ## Concerns and questions: >>> >>> 0. This is the most troubling part, imho, other questions >>> are more like common thoughts. It seems (to put it mildly) >>> weird that execution flow depends on names of local variables. >>> >>> For example, one can not easily refactor these variable names. However, >>> same is true for dictionary keys anyway: you can not suddenly decide >>> and refactor your code to expect dictionaries with keys 'c' and >>> 'd' whereas your entire system still expects you to use dictionaries >>> with keys 'a' and 'b'. A counter-objection is that this specific >>> scenario is usually handled with record\struct-like classes with >>> fixed members rather then dicts, so this is not an issue. >>> >>> Quite a few languages (closure and javascript to name a few) seem >>> to have this feature now and it seems like they did not suffer too >>> much from refactoring hell. This does not mean that their approach >>> is good, just that it is "manageable". >>> >>> 1. This line seems coherent with sequence syntax, but redundant: >>> {b, a, **other} = **d >>> >>> and the following use of "throwaway" variable just looks poor visually >>> {b, a, **_} = **d >>> >>> could it be less verbose like this >>> {b, a} = **d >>> >>> but it is not very coherent with lists behavior. >>> >>> E.g. what if that line did not raise something like "ValueError: >>> Too many keys to unpack, got an unexpected keyword argument 'c'". >>> >>> 2. Unpacking in other contexts >>> >>> {self.a, b, **other} = **d >>> >>> should it be interpreted as >>> self.a, b = d['a'], d['b'] >>> >>> or >>> >>> self.a, b = d['self.a'], d['b'] >>> >>> probably the first, but what I am saying is that these name-extracting >>> rules should be strictly specified and it might not be trivial. >>> >>> --- >>> Ben >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/jelle. 
zijlstra%40gmail.com >>> >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br > _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/k7hoven% 40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Nov 11 20:47:20 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 11 Nov 2017 17:47:20 -0800 Subject: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171106185145.mfgq6qylrugk6nqo@python.ca> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: On Sat, Nov 11, 2017 at 3:29 PM, Nick Coghlan wrote: > And given that one of the key benefits of static analysis is that it > *doesn't* run the code, I'd be surprised if mypy could ever trigger a > runtime warning in the code under tests :) > Actually there are a few cases where mypy *will* generate deprecation warnings: when the warning is produced by the standard Python parser. Mypy's parser (typed_ast) is a fork of the stdlib ast module and it preserves the code that generates such warnings. I found two cases in particular that generate them: - In Python 2 code, the `<>` operator gives "DeprecationWarning: <> not supported in 3.x; use !="/ - In Python 3 code, using `\u` escapes in a b'...' literal gives "DeprecationWarning: invalid escape sequence '\u'" In both cases these warnings are currently only generated if you run mypy with these warnings enabled, e.g. `python3 -Wd -m mypy `. But this means that mypy would start generating these by default if those warnings were enabled everywhere by default (per Antoine's preference). And while it's debatable whether they are useful, there should at least be a way to turn them off (e.g. when checking Python 2 code that's never going to be ported). Running mypy in the above way is awkward; mypy would likely have to grow a new flag to control this. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbueno at python.org.br Sat Nov 11 22:08:39 2017 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Sun, 12 Nov 2017 01:08:39 -0200 Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs) In-Reply-To: References: Message-ID: On 11 November 2017 at 23:40, Koos Zevenhoven wrote: > Oops, forgot to reply to the list. > > > On Nov 12, 2017 03:35, "Koos Zevenhoven" wrote: > > On Nov 12, 2017 02:12, "Joao S. O. Bueno" wrote: > > Ben, I have a small package which enables one to do: > > with MapGetter(my_dictionary): > from my_dictionary import a, b, parameter3 > > If this interests you, contributions so it can get hardenned for > mainstram acceptance are welcome. > https://github.com/jsbueno/extradict > > > Your VersionDict in fact has some similarities to what I have thought of > implementing using the PEP 555 machinery, but it is also a bit different. > Interesting... > My main issue with that VersionDict is that after I did it, I didn't had a real case where to use it. 
So, it has never been used beyond the unit tests and some playing around.
(I wrote it when dicts in Python got versioning, and that was only visible
from the C-API.) I remember the idea of retrieving versioned values
occurred to me quite naturally when I wrote it, so it is probably close to
the "OOWTDI".

From guido at python.org  Sun Nov 12 00:07:07 2017
From: guido at python.org (Guido van Rossum)
Date: Sat, 11 Nov 2017 21:07:07 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: 
References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org>
Message-ID: 

On Fri, Nov 10, 2017 at 11:02 PM, Nick Coghlan wrote:

> On 11 November 2017 at 01:48, Guido van Rossum wrote:
> > I don't mind the long name. Of all the options so far I really only like
> > 'string_annotations' so let's go with that.
>
> +1 from me.
>

I'd like to reverse my stance on this. We had `from __future__ import
division` for many years in Python 2, and nobody argued that it implied
that Python 2 doesn't have division -- it just meant to import the future
*version* of division. So I think the original idea, `from __future__
import annotations` is fine. I don't expect there will be *other* things
related to annotations that we'll be importing from the future.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Sun Nov 12 04:24:12 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 12 Nov 2017 19:24:12 +1000
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
Message-ID: 

I've written a short(ish) PEP for the proposal to change the default
warnings filters to show DeprecationWarning in __main__:
https://www.python.org/dev/peps/pep-0565/

The core proposal itself is just the idea in
https://bugs.python.org/issue31975 (i.e. adding
"default::DeprecationWarning:__main__" to the default filter set), but
the PEP fills in some details on the motivation for the original change
to the defaults, and why the current proposal is to add a new filter for
__main__, rather than dropping the default DeprecationWarning filter
entirely.

The PEP also proposes repurposing the existing FutureWarning category to
explicitly mean "backwards compatibility warnings that should be shown
to users of Python applications" since:

- we don't tend to use FutureWarning for its original nominal purpose
(changes that will continue to run but will do something different)
- FutureWarning was added in 2.3, so it's available in all still
supported versions of Python, and is shown by default in all of them
- it's at least arguably a less-jargony spelling of DeprecationWarning,
and hence more appropriate for displaying to end users who may not have
encountered the specific notion of "API deprecation"

Cheers, Nick.

==============

PEP: 565
Title: Show DeprecationWarning in __main__
Author: Nick Coghlan
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Nov-2017
Python-Version: 3.7
Post-History: 12-Nov-2017

Abstract
========

In Python 2.7 and Python 3.2, the default warning filters were updated
to hide DeprecationWarning by default, such that deprecation warnings in
development tools that were themselves written in Python (e.g.
linters, static analysers, test runners, code generators) wouldn't be
visible to their users unless they explicitly opted in to seeing them.

However, this change has had the unfortunate side effect of making
DeprecationWarning markedly less effective at its primary intended
purpose: providing advance notice of breaking changes in APIs (whether
in CPython, the standard library, or in third party libraries) to users
of those APIs.

To improve this situation, this PEP proposes a single adjustment to the
default warnings filter: displaying deprecation warnings attributed to
the main module by default.

This change will mean that code entered at the interactive prompt and
code in single file scripts will revert to reporting these warnings by
default, while they will continue to be silenced by default for packaged
code distributed as part of an importable module.

The PEP also proposes a number of small adjustments to the reference
interpreter and standard library documentation to help make the warnings
subsystem more approachable for new Python developers.

Specification
=============

The current set of default warnings filters consists of::

    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning

The default ``unittest`` test runner then uses
``warnings.catch_warnings()`` and ``warnings.simplefilter('default')``
to override the default filters while running test cases.

The change proposed in this PEP is to update the default warning filter
list to be::

    default::DeprecationWarning:__main__
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning

This means that in cases where the nominal location of the warning (as
determined by the ``stacklevel`` parameter to ``warnings.warn``) is in
the ``__main__`` module, the first occurrence of each DeprecationWarning
will once again be reported.

This change will lead to DeprecationWarning being displayed by default for:

* code executed directly at the interactive prompt
* code executed directly as part of a single-file script

While continuing to be hidden by default for:

* code imported from another module in a ``zipapp`` archive's
  ``__main__.py`` file
* code imported from another module in an executable package's
  ``__main__`` submodule
* code imported from an executable script wrapper generated at
  installation time based on a ``console_scripts`` or ``gui_scripts``
  entry point definition

As a result, API deprecation warnings encountered by development tools
written in Python should continue to be hidden by default for users of
those tools.

While not its originally intended purpose, the standard library
documentation will also be updated to explicitly recommend the use of
``FutureWarning`` (rather than ``DeprecationWarning``) for backwards
compatibility warnings that are intended to be seen by *users* of an
application.

This will give the following three distinct categories of backwards
compatibility warning, with three different intended audiences:

* ``PendingDeprecationWarning``: reported by default only in test runners
  that override the default set of warning filters. The intended audience
  is Python developers that take an active interest in ensuring the future
  compatibility of their software (e.g. professional Python application
  developers with specific support obligations).
* ``DeprecationWarning``: reported by default for code that runs directly
  in the ``__main__`` module (as such code is considered relatively
  unlikely to have a dedicated test suite), but relies on test suite based
  reporting for code in other modules. The intended audience is Python
  developers that are at risk of upgrades to their dependencies (including
  upgrades to Python itself) breaking their software (e.g. developers
  using Python to script environments where someone else is in control of
  the timing of dependency upgrades).
* ``FutureWarning``: always reported by default. The intended audience is
  users of applications written in Python, rather than other Python
  developers (e.g. warning about use of a deprecated setting in a
  configuration file format).

Given its presence in the standard library since Python 2.3,
``FutureWarning`` would then also have a secondary use case for libraries
and frameworks that support multiple Python versions: as a more reliably
visible alternative to ``DeprecationWarning`` in Python 2.7 and versions
of Python 3.x prior to 3.7.

Motivation
==========

As discussed in [1]_ and mentioned in [2]_, Python 2.7 and Python 3.2
changed the default handling of ``DeprecationWarning`` such that:

* the warning was hidden by default during normal code execution
* the ``unittest`` test runner was updated to re-enable it when running tests

The intent was to avoid cases of tooling output like the following::

    $ devtool mycode/
    /usr/lib/python3.6/site-packages/devtool/cli.py:1: DeprecationWarning:
    'async' and 'await' will become reserved keywords in Python 3.7
      async = True
    ... actual tool output ...

Even when ``devtool`` is a tool specifically for Python programmers, this
is not a particularly useful warning, as it will be shown on every
invocation, even though the main helpful step an end user can take is to
report a bug to the developers of ``devtool``.

The warning is even less helpful for general purpose developer tools that
are used across more languages than just Python.

However, this change proved to have unintended consequences for the
following audiences:

* anyone using a test runner other than the default one built into
  ``unittest`` (since the request for third party test runners to change
  their default warnings filters was never made explicitly)
* anyone using the default ``unittest`` test runner to test their Python
  code in a subprocess (since even ``unittest`` only adjusts the warnings
  settings in the current process)
* anyone writing Python code at the interactive prompt or as part of a
  directly executed script that didn't have a Python level test suite at
  all

In these cases, ``DeprecationWarning`` ended up becoming almost entirely
equivalent to ``PendingDeprecationWarning``: it was simply never seen at
all.

Limitations on PEP Scope
========================

This PEP exists specifically to explain both the proposed addition to the
default warnings filter for 3.7, *and* to more clearly articulate the
rationale for the original change to the handling of DeprecationWarning
back in Python 2.7 and 3.2.

This PEP does not solve all known problems with the current approach to
handling deprecation warnings. Most notably:

* the default ``unittest`` test runner does not currently report
  deprecation warnings emitted at module import time, as the warnings
  filter override is only put in place during test execution, not during
  test discovery and loading.
* the default ``unittest`` test runner does not currently report
  deprecation warnings in subprocesses, as the warnings filter override
  is applied directly to the loaded ``warnings`` module, not to the
  ``PYTHONWARNINGS`` environment variable.
* the standard library doesn't provide a straightforward way to opt in
  to seeing all warnings emitted *by* a particular dependency prior to
  upgrading it (the third-party ``warn`` module [3]_ does provide this,
  but enabling it involves monkeypatching the standard library's
  ``warnings`` module).
* re-enabling deprecation warnings by default in ``__main__`` doesn't
  help in handling cases where software has been factored out into
  support modules, but those modules still have little or no automated
  test coverage. Near term, the best currently available answer is to
  run such applications with
  ``PYTHONWARNINGS=default::DeprecationWarning`` or
  ``python -W default::DeprecationWarning`` and pay attention to their
  ``stderr`` output. Longer term, this is really a question for
  researchers working on static analysis of Python code: how to reliably
  find usage of deprecated APIs, and how to infer that an API or
  parameter is deprecated based on ``warnings.warn`` calls, without
  actually running either the code providing the API or the code
  accessing it.

While these are real problems with the status quo, they're excluded from
consideration in this PEP because they're going to require more complex
solutions than a single additional entry in the default warnings filter,
and resolving them at least potentially won't require going through the
PEP process.

For anyone interested in pursuing them further, the first two would be
``unittest`` module enhancement requests, the third would be a
``warnings`` module enhancement request, while the last would only
require a PEP if inferring API deprecations from their contents was
deemed to be an intractable code analysis problem, and an explicit
function and parameter marker syntax in annotations was proposed instead.

References
==========

.. [1] stdlib-sig thread proposing the original default filter change
   (https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html)

.. [2] Python 2.7 notification of the default warnings filter change
   (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings)

.. [3] Emitting warnings based on the location of the warning itself
   (https://pypi.org/project/warn/)

Copyright
=========

This document has been placed in the public domain.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sun Nov 12 05:06:06 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 12 Nov 2017 20:06:06 +1000
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
In-Reply-To: 
References: 
Message-ID: 

On 11 November 2017 at 16:22, Jelle Zijlstra wrote:
> 2017-11-10 19:53 GMT-08:00 Ben Usman :
>> I was not able to find any PEPs that suggest this (search keywords:
>> "PEP 448 dicts", "dictionary unpacking assignment", checked PEP-0),
>> however, let me know if I am wrong.
>>
> It was discussed at great length on Python-ideas about a year ago. There is
> a thread called "Unpacking a dict" from May 2016.

I tend to post this every time the topic comes up, but: it's highly
unlikely we'll get syntax for this when we don't even have a builtin to
extract multiple items from a mapping in a single operation.
So if folks would like dict unpacking syntax, then a suitable place to start would be a proposal for a "getitems" builtin that allowed operations like:

b, a = getitems(d, ("b", "a"))

operator.itemgetter and operator.attrgetter may provide some inspiration for possible proposals.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ethan at ethanhs.me Sun Nov 12 06:40:21 2017
From: ethan at ethanhs.me (Ethan Smith)
Date: Sun, 12 Nov 2017 03:40:21 -0800
Subject: [Python-Dev] PEP 561 rework
Message-ID:

Hello,

I re-wrote my PEP to have typing opt-in be per-package rather than per-distribution. This greatly simplifies things, and thanks to the feedback and suggestions of Nick Coghlan, it is entirely compatible with older packaging tooling.

The main idea is there are two types of packages:
- types are packaged with runtime code (inline or stubs in the same package)
- types are in a separate package (a third party or maintainer wants to ship type information, but not with runtime code).

The PEP is live on python.org: https://www.python.org/dev/peps/pep-0561/

And as always, duplicated below.

Cheers,

Ethan Smith

---------------------------------------------------

PEP: 561
Title: Distributing and Packaging Type Information
Author: Ethan Smith
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Sep-2017
Python-Version: 3.7
Post-History: 10-Sep-2017, 12-Sep-2017, 06-Oct-2017, 26-Oct-2017

Abstract
========

PEP 484 introduced type hinting to Python, with goals of making typing gradual and easy to adopt. Currently, typing information must be distributed manually. This PEP provides a standardized means to leverage existing tooling to package and distribute type information with minimal work, and an ordering for type checkers to resolve modules and collect this information for type checking.

Rationale
=========

Currently, package authors wish to distribute code that has inline type information. Additionally, maintainers would like to distribute stub files to keep Python 2 compatibility while using newer annotation syntax. However, there is no standard method to distribute packages with type information. Also, if one wished to ship stub files privately, the only method available would be via setting ``MYPYPATH`` or the equivalent to manually point to stubs. If the package can be released publicly, it can be added to typeshed [1]_. However, this does not scale and becomes a burden on the maintainers of typeshed. In addition, it ties bug fixes in stubs to releases of the tool using typeshed.

PEP 484 has a brief section on distributing typing information. In this section [2]_ the PEP recommends using ``shared/typehints/pythonX.Y/`` for shipping stub files. However, manually adding a path to stub files for each third party library does not scale. The simplest approach people have taken is to add ``site-packages`` to their ``MYPYPATH``, but this causes type checkers to fail on packages that are highly dynamic (e.g. sqlalchemy and Django).

Definition of Terms
===================

The keywords "MAY", "MUST", "SHOULD", and "SHOULD NOT" are to be interpreted as described in RFC 2119.

"inline" - the types are part of the runtime code using PEP 526 and 3107 syntax.

"stubs" - files containing only type information, empty of runtime code.

"Distributions" are the packaged files which are used to publish and distribute a release. [3]_

"Module" - a file containing Python runtime code or stubbed type information.
"Package" a directory or directories that namespace Python modules. Specification ============= There are several motivations and methods of supporting typing in a package. This PEP recognizes three (3) types of packages that users of typing wish to create: 1. The package maintainer would like to add type information inline. 2. The package maintainer would like to add type information via stubs. 3. A third party or package maintainer would like to share stub files for a package, but the maintainer does not want to include them in the source of the package. This PEP aims to support these scenarios and make them simple to add to packaging and deployment. The two major parts of this specification are the packaging specifications and the resolution order for resolving module type information. The type checking spec is meant to replace the ``shared/typehints/pythonX.Y/`` spec of PEP 484 [2]_. New third party stub libraries SHOULD distribute stubs via the third party packaging methods proposed in this PEP in place of being added to typeshed. Typeshed will remain in use, but if maintainers are found, third party stubs in typeshed MAY be split into their own package. Packaging Type Information -------------------------- In order to make packaging and distributing type information as simple and easy as possible, packaging and distribution is done through existing frameworks. Package maintainers who wish to support type checking of their code MUST add a ``py.typed`` file to their package supporting typing. This marker is recursive, if a top-level package includes it, all sub-packages MUST support type checking as well. To have this file installed with the package, maintainers can use existing packaging options such as ``package_data`` in distutils, shown below. Distutils option example:: ... package_data = { 'pkg': ['py.typed'], }, ... For namespace packages, the ``py.typed`` file should be in the submodules of the namespace, to avoid conflicts and for clarity. Stub Only Packages '''''''''''''''''' For package maintainers wishing to ship stub files containing all of their type information, it is preferred that the ``*.pyi`` stubs are alongside the corresponding ``*.py`` files. However, the stubs can also be put in a separate package and distributed separately. Third parties can also find this method useful if they wish to distribute stub files. The name of the stub package MUST follow the scheme ``pkg_stubs`` for type stubs for the package named ``pkg``. The normal resolution order of checking ``*.pyi`` before ``*.py`` will be maintained. Third parties seeking to distribute stub files are encouraged to contact the maintainer of the package about distribution alongside the package. If the maintainer does not wish to maintain or package stub files or type information inline, then a third party stub only package can be created. In addition, stub only distributions SHOULD indicate which version(s) of the runtime package are supported by indicating the runtime distribution's version(s) through normal dependency data. For example, if there was a stub package ``flyingcircus_stubs``, it can indicate the versions of the runtime ``Flyingcircus`` distribution supported through ``install_requires`` in distutils based tools, or the equivalent in other packaging tools. Type Checker Module Resolution Order ------------------------------------ The following is the order that type checkers supporting this PEP SHOULD resolve modules containing type information: 1. User code - the files the type checker is running on. 
2. Stubs or Python source manually put at the beginning of the path. Type checkers SHOULD provide this to allow the user complete control of which stubs to use, and patch broken stubs/inline types from packages.

3. Stub packages - these packages can supersede the installed packages. They can be found at ``pkg_stubs`` for package ``pkg``.

4. Inline packages - if there is nothing overriding the installed package, and it opts into type checking, inline types SHOULD be used.

5. Typeshed (if used) - Provides the stdlib types and several third party libraries.

Type checkers that check a different Python version than the version they run on MUST find the type information in the ``site-packages``/``dist-packages`` of that Python version. This can be queried with, e.g., ``pythonX.Y -c 'import site; print(site.getsitepackages())'``. It is also recommended that the type checker allow the user to point to a particular Python binary, in case it is not in the path.

Implementation
==============

The proposed scheme of indicating support for typing is completely backwards compatible, and requires no modification to tooling. A sample package with inline types is available [typed_pkg]_, as well as a sample package checker [pkg_checker]_ which reads the metadata of installed packages and reports on their status as either not typed, inline typed, or a stub package.

Acknowledgements
================

This PEP would not have been possible without the ideas, feedback, and support of Ivan Levkivskyi, Jelle Zijlstra, Nick Coghlan, Daniel F Moisset, Nathaniel Smith, and Guido van Rossum.

Version History
===============

* 2017-11-12

  * Rewritten to use existing tooling only
  * No need to indicate kind of type information in metadata
  * Name of marker file changed from ``.typeinfo`` to ``py.typed``

* 2017-11-10

  * Specification re-written to use package metadata instead of distribution metadata.
  * Removed stub only packages and merged into third party packages spec.
  * Removed suggestion for typecheckers to consider checking runtime versions
  * Implementations updated to reflect PEP changes.

* 2017-10-26

  * Added implementation references.
  * Added acknowledgements and version history.

* 2017-10-06

  * Rewritten to use .distinfo/METADATA over a distutils specific command.
  * Clarify versioning of third party stub packages.

* 2017-09-11

  * Added information about current solutions and typeshed.
  * Clarify rationale.

References
==========

.. [1] Typeshed (https://github.com/python/typeshed)

.. [2] PEP 484, Storing and Distributing Stub Files (https://www.python.org/dev/peps/pep-0484/#storing-and-distributing-stub-files)

.. [3] PEP 426 definitions (https://www.python.org/dev/peps/pep-0426/)

.. [typed_pkg] Sample typed package (https://github.com/ethanhs/sample-typed-package)

.. [pkg_checker] Sample package checker (https://github.com/ethanhs/check_typedpkg)

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
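As referenced in the PEP text above, a rough sketch of that resolution order as a type checker might implement it (the helper and every name in it are illustrative only, not part of the PEP):

    from pathlib import Path

    def find_type_info(pkg_name, search_paths, typeshed_dir=None):
        """Locate type information for pkg_name per the ordering above.

        search_paths is the ordered list of directories to consult:
        user code and manually supplied stub paths first, then the
        relevant site-packages/dist-packages directories.
        """
        for base in search_paths:
            # A separate stub package (``pkg_stubs``) supersedes the
            # installed runtime package.
            stub_pkg = Path(base) / (pkg_name + "_stubs")
            if stub_pkg.is_dir():
                return stub_pkg
            # Otherwise use the installed package itself, but only if it
            # opted in to type checking via the py.typed marker; within
            # it, *.pyi files are still preferred over *.py.
            inline_pkg = Path(base) / pkg_name
            if (inline_pkg / "py.typed").is_file():
                return inline_pkg
        # Finally, fall back to typeshed, if the checker uses it.
        if typeshed_dir is not None:
            candidate = Path(typeshed_dir) / pkg_name
            if candidate.is_dir():
                return candidate
        return None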
From k7hoven at gmail.com Sun Nov 12 07:14:50 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Sun, 12 Nov 2017 14:14:50 +0200
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID:

On Sun, Nov 12, 2017 at 7:07 AM, Guido van Rossum wrote:
> On Fri, Nov 10, 2017 at 11:02 PM, Nick Coghlan wrote:
>> On 11 November 2017 at 01:48, Guido van Rossum wrote:
>> > I don't mind the long name. Of all the options so far I really only like 'string_annotations' so let's go with that.
>>
>> +1 from me.
>
> I'd like to reverse my stance on this. We had `from __future__ import division` for many years in Python 2, and nobody argued that it implied that Python 2 doesn't have division -- it just meant to import the future *version* of division. So I think the original idea, `from __future__ import annotations` is fine. I don't expect there will be *other* things related to annotations that we'll be importing from the future.

Furthermore, *nobody* expects the majority of programmers to look at __annotations__ either. But those who do need to care about the 'implementation detail' of whether it's a string won't be surprised to find nested strings like "'ForwardReferencedThing'". But one might fear that those cases get ruthlessly converted into being equivalent to just "ForwardReferencedThing".

So actually my question is: What should happen when the annotation is already a string literal?

-- Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From storchaka at gmail.com Sun Nov 12 10:17:51 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 12 Nov 2017 17:17:51 +0200
Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],)
Message-ID:

Initially generator expressions always had to be written inside parentheses, as documented in PEP 289 [1]. The additional parentheses could be omitted on calls with only one argument, because in this case the generator expression already is written inside parentheses. You could write just `list(x for x in [1])` instead of `list((x for x in [1]))`. The following code was an error:

>>> list(x for x in [1], *[])
  File "<stdin>", line 1
SyntaxError: invalid syntax
>>> list(x for x in [1],)
  File "<stdin>", line 1
SyntaxError: invalid syntax

You needed to add explicit parentheses in these cases:

>>> list((x for x in [1]), *[])
[1]
>>> list((x for x in [1]),)
[1]

But in Python 2.5 the following examples were accepted:

>>> list(x for x in [1], *[])
[1]
>>> list(x for x in [1], *{})
[1]
>>> list(x for x in [1],)
[1]

However I haven't found anything about this change in the "What's New In Python 2.5" document [2].

The former two cases were found to be a mistake and this was fixed in Python 3.5.

>>> list(x for x in [1], *[])
  File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> list(x for x in [1], *{})
  File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument

But `list(x for x in [1],)` still is accepted. I think it would be better if this raised a SyntaxError.
1. This syntax is ambiguous, because at first glance it is not clear whether it is equivalent to `list((x for x in [1]),)` or to `list(x for x in ([1],))`.

2. It is bad from the aesthetic point of view, because this is the only case where the generator expression is not written inside parentheses. I believe that allowing the parentheses to be omitted in a call with a single generator expression argument was done for aesthetic reasons.

3. I believe the trailing comma in a function call was allowed because it simplifies adding, removing and commenting out arguments.

func(first_argument,
     second_argument,
     #third_argument,
     )

You shouldn't have to touch other lines by adding or removing a comma when you add or remove arguments. But this reason is not applicable to the case of `list((x for x in [1]),)`, because the generator expression without parentheses should be the only argument. Therefore there are no reasons to allow this syntax.

4. 2to3 didn't support this syntax until recently [4]. That was finally changed, but I think that it would be better to disallow this syntax for the reasons mentioned above.

[1] https://www.python.org/dev/peps/pep-0289/
[2] https://docs.python.org/2.5/whatsnew/whatsnew25.html
[3] https://docs.python.org/3.5/whatsnew/3.5.html#changes-in-python-behavior
[4] https://bugs.python.org/issue27494

From storchaka at gmail.com Sun Nov 12 10:43:53 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 12 Nov 2017 17:43:53 +0200
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
In-Reply-To: References: Message-ID:

12.11.17 12:06, Nick Coghlan wrote:
> So if folks would like dict unpacking syntax, then a suitable place to start would be a proposal for a "getitems" builtin that allowed operations like:
>
> b, a = getitems(d, ("b", "a"))
>
> operator.itemgetter and operator.attrgetter may provide some inspiration for possible proposals.

I don't see any relation between this getitems and operator.itemgetter or operator.attrgetter. getitems can be implemented as (the most obvious way):

def getitems(mapping, keys):
    for key in keys:
        yield mapping[key]

or:

def getitems(mapping, keys):
    return map(functools.partial(operator.getitem, mapping), keys)

or (a simpler but rough equivalent):

def getitems(mapping, keys):
    return map(mapping.__getitem__, keys)

From guido at python.org Sun Nov 12 10:45:52 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 12 Nov 2017 07:45:52 -0800
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote:
> In Python 2.7 and Python 3.2, the default warning filters were updated to hide DeprecationWarning by default, such that deprecation warnings in development tools that were themselves written in Python (e.g. linters, static analysers, test runners, code generators) wouldn't be visible to their users unless they explicitly opted in to seeing them.

Looking at the official What's New entry for the change (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings) it's not just about development tools. It's about any app (or tool, or utility, or program) written in Python whose users just treat it as "some app", not as something that's necessarily part of their Python environment. While in extreme cases such apps can *bundle* their own Python interpreter (like Dropbox does), many developers opt to assume or ensure that Python is available, perhaps via the OS package management system.
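To make that concrete: an application in this category can already shield its users from deprecation noise at its entry point, regardless of the interpreter's defaults. A minimal sketch (the MYAPP_DEBUG escape hatch is purely illustrative):

    import os
    import warnings

    def main():
        # Hide Python-level deprecation warnings from end users of the
        # app, while leaving its own developers a way to re-enable them.
        if not os.environ.get("MYAPP_DEBUG"):
            warnings.simplefilter("ignore", DeprecationWarning)
        print("... actual tool output ...")

    if __name__ == "__main__":
        main()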
(Because my day job is software development I am having a hard time coming up with concrete examples that aren't development tools, but AFAIK at Dropbox the *deployment* of e.g. Go binaries is managed through utilities written in Python. The Go developers couldn't care less about that.)

--
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From solipsis at pitrou.net Sun Nov 12 10:54:14 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 12 Nov 2017 16:54:14 +0100
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
References: Message-ID: <20171112165414.5d1cb6d9@fsol>

On Sun, 12 Nov 2017 19:24:12 +1000 Nick Coghlan wrote:
> I've written a short(ish) PEP for the proposal to change the default warnings filters to show DeprecationWarning in __main__:
> https://www.python.org/dev/peps/pep-0565/

Thank you for writing this. This is a nice summary. You finally convinced me that it was a slight improvement rather than a pointless complication. So I'm +0.5 :-)

Regards
Antoine.

From mariocj89 at gmail.com Sun Nov 12 09:56:23 2017
From: mariocj89 at gmail.com (Mario Corchero)
Date: Sun, 12 Nov 2017 14:56:23 +0000
Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)
In-Reply-To: References: Message-ID:

Do you mean making getitems call itemgetter? At the moment we can already do this with itemgetter:

from operator import itemgetter

a, b = itemgetter("a", "b")(d)

> I tend to post this every time the topic comes up, but: it's highly unlikely we'll get syntax for this when we don't even have a builtin to extract multiple items from a mapping in a single operation.

You mean subitems, as attrgetter does? That would actually be quite cool!

d = dict(a=dict(b=1), b=dict(c=2))
ab, ac = itemgetter("a.b", "b.c", separator=".")(d)

I've created an issue in case something like that is desired: https://bugs.python.org/issue32010. No real strong push for it; happy to just close it if it doesn't attract interest.

That said, I am not sure it solves Ben's request, as he seemed to be targeting a way to bind the variable names to the dictionary keys implicitly.

On 12 November 2017 at 10:06, Nick Coghlan wrote:
> On 11 November 2017 at 16:22, Jelle Zijlstra wrote:
> > 2017-11-10 19:53 GMT-08:00 Ben Usman :
> >> I was not able to find any PEPs that suggest this (search keywords: "PEP 445 dicts", "dictionary unpacking assignment", checked PEP-0), however, let me know if I am wrong.
> >>
> > It was discussed at great length on Python-ideas about a year ago. There is a thread called "Unpacking a dict" from May 2016.
>
> I tend to post this every time the topic comes up, but: it's highly unlikely we'll get syntax for this when we don't even have a builtin to extract multiple items from a mapping in a single operation.
>
> So if folks would like dict unpacking syntax, then a suitable place to start would be a proposal for a "getitems" builtin that allowed operations like:
>
> b, a = getitems(d, ("b", "a"))
>
> operator.itemgetter and operator.attrgetter may provide some inspiration for possible proposals.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mariocj89%40gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Sun Nov 12 11:57:46 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 12 Nov 2017 08:57:46 -0800
Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],)
In-Reply-To: References: Message-ID:

Sounds good to me.

On Sun, Nov 12, 2017 at 7:17 AM, Serhiy Storchaka wrote:
> Initially generator expressions always had to be written inside parentheses, as documented in PEP 289 [1]. The additional parentheses could be omitted on calls with only one argument, because in this case the generator expression already is written inside parentheses. You could write just `list(x for x in [1])` instead of `list((x for x in [1]))`. The following code was an error:
>
> >>> list(x for x in [1], *[])
>   File "<stdin>", line 1
> SyntaxError: invalid syntax
> >>> list(x for x in [1],)
>   File "<stdin>", line 1
> SyntaxError: invalid syntax
>
> You needed to add explicit parentheses in these cases:
>
> >>> list((x for x in [1]), *[])
> [1]
> >>> list((x for x in [1]),)
> [1]
>
> But in Python 2.5 the following examples were accepted:
>
> >>> list(x for x in [1], *[])
> [1]
> >>> list(x for x in [1], *{})
> [1]
> >>> list(x for x in [1],)
> [1]
>
> However I haven't found anything about this change in the "What's New In Python 2.5" document [2].
>
> The former two cases were found to be a mistake and this was fixed in Python 3.5.
>
> >>> list(x for x in [1], *[])
>   File "<stdin>", line 1
> SyntaxError: Generator expression must be parenthesized if not sole argument
> >>> list(x for x in [1], *{})
>   File "<stdin>", line 1
> SyntaxError: Generator expression must be parenthesized if not sole argument
>
> But `list(x for x in [1],)` still is accepted. I think it would be better if this raised a SyntaxError.
>
> 1. This syntax is ambiguous, because at first glance it is not clear whether it is equivalent to `list((x for x in [1]),)` or to `list(x for x in ([1],))`.
>
> 2. It is bad from the aesthetic point of view, because this is the only case where the generator expression is not written inside parentheses. I believe that allowing the parentheses to be omitted in a call with a single generator expression argument was done for aesthetic reasons.
>
> 3. I believe the trailing comma in a function call was allowed because it simplifies adding, removing and commenting out arguments.
>
> func(first_argument,
>      second_argument,
>      #third_argument,
>      )
>
> You shouldn't have to touch other lines by adding or removing a comma when you add or remove arguments. But this reason is not applicable to the case of `list((x for x in [1]),)`, because the generator expression without parentheses should be the only argument. Therefore there are no reasons to allow this syntax.
>
> 4. 2to3 didn't support this syntax until recently [4]. That was finally changed, but I think that it would be better to disallow this syntax for the reasons mentioned above.
> [1] https://www.python.org/dev/peps/pep-0289/
> [2] https://docs.python.org/2.5/whatsnew/whatsnew25.html
> [3] https://docs.python.org/3.5/whatsnew/3.5.html#changes-in-python-behavior
> [4] https://bugs.python.org/issue27494
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From storchaka at gmail.com Sun Nov 12 12:10:08 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 12 Nov 2017 19:10:08 +0200
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

12.11.17 11:24, Nick Coghlan wrote:
> The PEP also proposes repurposing the existing FutureWarning category to explicitly mean "backwards compatibility warnings that should be shown to users of Python applications" since:
>
> - we don't tend to use FutureWarning for its original nominal purpose (changes that will continue to run but will do something different)

FutureWarning is currently used for its original nominal purpose in the re and ElementTree modules. It was even added in 2.7 for behavior that has already been changed in Python 3 or will be changed in future versions (emitted only with the -3 option).

From guido at python.org Sun Nov 12 12:10:05 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 12 Nov 2017 09:10:05 -0800
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID:

On Sun, Nov 12, 2017 at 4:14 AM, Koos Zevenhoven wrote:
> So actually my question is: What should happen when the annotation is already a string literal?

The PEP answers that clearly (under Implementation):

> If an annotation was already a string, this string is preserved verbatim.

--
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jelle.zijlstra at gmail.com Sun Nov 12 12:53:34 2017
From: jelle.zijlstra at gmail.com (Jelle Zijlstra)
Date: Sun, 12 Nov 2017 09:53:34 -0800
Subject: [Python-Dev] PEP 561 rework
In-Reply-To: References: Message-ID:

2017-11-12 3:40 GMT-08:00 Ethan Smith :
> Hello,
>
> I re-wrote my PEP to have typing opt-in be per-package rather than per-distribution. This greatly simplifies things, and thanks to the feedback and suggestions of Nick Coghlan, it is entirely compatible with older packaging tooling.
>
> The main idea is there are two types of packages:
> - types are packaged with runtime code (inline or stubs in the same package)
> - types are in a separate package (a third party or maintainer wants to ship type information, but not with runtime code).
>
> The PEP is live on python.org: https://www.python.org/dev/peps/pep-0561/
>
> And as always, duplicated below.
> > Cheers, > > Ethan Smith > > --------------------------------------------------- > > PEP: 561 > Title: Distributing and Packaging Type Information > Author: Ethan Smith > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 09-Sep-2017 > Python-Version: 3.7 > Post-History: 10-Sep-2017, 12-Sep-2017, 06-Oct-2017, 26-Oct-2017 > > > Abstract > ======== > > PEP 484 introduced type hinting to Python, with goals of making typing > gradual and easy to adopt. Currently, typing information must be distributed > manually. This PEP provides a standardized means to leverage existing tooling > to package and distribute type information with minimal work and an ordering > for type checkers to resolve modules and collect this information for type > checking. > > > Rationale > ========= > > Currently, package authors wish to distribute code that has inline type > information. Additionally, maintainers would like to distribute stub files > to keep Python 2 compatibility while using newer annotation syntax. However, > there is no standard method to distribute packages with type information. > Also, if one wished to ship stub files privately the only method available > would be via setting ``MYPYPATH`` or the equivalent to manually point to > stubs. If the package can be released publicly, it can be added to > typeshed [1]_. However, this does not scale and becomes a burden on the > maintainers of typeshed. In addition, it ties bug fixes in stubs to releases > of the tool using typeshed. > > PEP 484 has a brief section on distributing typing information. In this > section [2]_ the PEP recommends using ``shared/typehints/pythonX.Y/`` for > shipping stub files. However, manually adding a path to stub files for each > third party library does not scale. The simplest approach people have taken > is to add ``site-packages`` to their ``MYPYPATH``, but this causes type > checkers to fail on packages that are highly dynamic (e.g. sqlalchemy > and Django). > > > Definition of Terms > =================== > > The definition of "MAY", "MUST", and "SHOULD", and "SHOULD NOT" are > to be interpreted as described in RFC 2119. > > "inline" - the types are part of the runtime code using PEP 526 and 3107 > syntax. > > "stubs" - files containing only type information, empty of runtime code. > > "Distributions" are the packaged files which are used to publish and distribute > a release. [3]_ > > "Module" a file containing Python runtime code or stubbed type information. > > "Package" a directory or directories that namespace Python modules. > > > Specification > ============= > > There are several motivations and methods of supporting typing in a package. > This PEP recognizes three (3) types of packages that users of typing wish to > create: > > 1. The package maintainer would like to add type information inline. > > 2. The package maintainer would like to add type information via stubs. > > 3. A third party or package maintainer would like to share stub files for > a package, but the maintainer does not want to include them in the source > of the package. > > This PEP aims to support these scenarios and make them simple to add to > packaging and deployment. > > The two major parts of this specification are the packaging specifications > and the resolution order for resolving module type information. The type > checking spec is meant to replace the ``shared/typehints/pythonX.Y/`` spec > of PEP 484 [2]_. 
> > New third party stub libraries SHOULD distribute stubs via the third party > packaging methods proposed in this PEP in place of being added to typeshed. > Typeshed will remain in use, but if maintainers are found, third party stubs > in typeshed MAY be split into their own package. > > > Packaging Type Information > -------------------------- > > In order to make packaging and distributing type information as simple and > easy as possible, packaging and distribution is done through existing > frameworks. > > Package maintainers who wish to support type checking of their code MUST add > a ``py.typed`` file to their package supporting typing. This marker is > recursive, if a top-level package includes it, all sub-packages MUST support > type checking as well. To have this file installed with the package, > maintainers can use existing packaging options such as ``package_data`` in > distutils, shown below. > > Distutils option example:: > > ... > package_data = { > 'pkg': ['py.typed'], > }, > ... > > For namespace packages, the ``py.typed`` file should be in the submodules of > the namespace, to avoid conflicts and for clarity. > > Stub Only Packages > '''''''''''''''''' > > For package maintainers wishing to ship stub files containing all of their > type information, it is preferred that the ``*.pyi`` stubs are alongside the > corresponding ``*.py`` files. However, the stubs can also be put in a separate > package and distributed separately. Third parties can also find this method > useful if they wish to distribute stub files. The name of the stub package > MUST follow the scheme ``pkg_stubs`` for type stubs for the package named > ``pkg``. The normal resolution order of checking ``*.pyi`` before ``*.py`` > will be maintained. > > This is very minor, but what do you think of using "pkg-stubs" instead of "pkg_stubs" (using a hyphen rather than an underscore)? This would make the name illegal to import as a normal Python package, which makes it clear that it's not a normal package. Also, there could be real packages named "_stubs". > > Third parties seeking to distribute stub files are encouraged to contact the > maintainer of the package about distribution alongside the package. If the > maintainer does not wish to maintain or package stub files or type information > inline, then a third party stub only package can be created. > > In addition, stub only distributions SHOULD indicate which version(s) > of the runtime package are supported by indicating the runtime distribution's > version(s) through normal dependency data. For example, if there was a > stub package ``flyingcircus_stubs``, it can indicate the versions of the > runtime ``Flyingcircus`` distribution supported through ``install_requires`` > in distutils based tools, or the equivalent in other packaging tools. > > > Type Checker Module Resolution Order > ------------------------------------ > > The following is the order that type checkers supporting this PEP SHOULD > resolve modules containing type information: > > 1. User code - the files the type checker is running on. > > 2. Stubs or Python source manually put in the beginning of the path. Type > checkers SHOULD provide this to allow the user complete control of which > stubs to use, and patch broken stubs/inline types from packages. > > 3. Stub packages - these packages can supersede the installed packages. > They can be found at ``pkg_stubs`` for package ``pkg``. > > 4. 
Inline packages - if there is nothing overriding the installed > package, and it opts into type checking, inline types SHOULD be used. > > 5. Typeshed (if used) - Provides the stdlib types and several third party > libraries. > > Type checkers that check a different Python version than the version they run > on MUST find the type information in the ``site-packages``/``dist-packages`` > of that Python version. This can be queried e.g. > ``pythonX.Y -c 'import site; print(site.getsitepackages())'``. It is also recommended > that the type checker allow for the user to point to a particular Python > binary, in case it is not in the path. > > > Implementation > ============== > > The proposed scheme of indicating support for typing is completely backwards > compatible, and requires no modification to tooling. A sample package with > inline types is available [typed_pkg]_, as well as a sample package checker > [pkg_checker]_ which reads the metadata of installed packages and reports on > their status as either not typed, inline typed, or a stub package. > > > Acknowledgements > ================ > > This PEP would not have been possible without the ideas, feedback, and support > of Ivan Levkivskyi, Jelle Zijlstra, Nick Coghlan, Daniel F Moisset, Nathaniel > Smith, and Guido van Rossum. > > > Version History > =============== > > * 2017-11-12 > > * Rewritten to use existing tooling only > * No need to indicate kind of type information in metadata > * Name of marker file changed from ``.typeinfo`` to ``py.typed`` > > * 2017-11-10 > > * Specification re-written to use package metadata instead of distribution > metadata. > * Removed stub only packages and merged into third party packages spec. > * Removed suggestion for typecheckers to consider checking runtime versions > * Implementations updated to reflect PEP changes. > > * 2017-10-26 > > * Added implementation references. > * Added acknowledgements and version history. > > * 2017-10-06 > > * Rewritten to use .distinfo/METADATA over a distutils specific command. > * Clarify versioning of third party stub packages. > > * 2017-09-11 > > * Added information about current solutions and typeshed. > * Clarify rationale. > > > References > ========== > .. [1] Typeshed (https://github.com/python/typeshed) > > .. [2] PEP 484, Storing and Distributing Stub Files > (https://www.python.org/dev/peps/pep-0484/#storing-and-distributing-stub-files) > > .. [3] PEP 426 definitions > (https://www.python.org/dev/peps/pep-0426/) > > .. [typed_pkg] Sample typed package > (https://github.com/ethanhs/sample-typed-package) > > .. [pkg_checker] Sample package checker > (https://github.com/ethanhs/check_typedpkg) > > Copyright > ========= > > This document has been placed in the public domain. > > > > .. > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > jelle.zijlstra%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From k7hoven at gmail.com Sun Nov 12 13:22:07 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Sun, 12 Nov 2017 20:22:07 +0200
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID:

On Nov 12, 2017 19:10, "Guido van Rossum" wrote:

On Sun, Nov 12, 2017 at 4:14 AM, Koos Zevenhoven wrote:
> So actually my question is: What should happen when the annotation is already a string literal?

The PEP answers that clearly (under Implementation):

> If an annotation was already a string, this string is preserved verbatim.

Oh sorry, I was looking for a spec, so I somehow assumed I could ignore the gory implementation details, just like I routinely ignore things like headers and footers of emails.

There are two things I don't understand here:

* What does it mean to preserve the string verbatim? No matter how I read it, I can't tell if it's with quotes or without. Maybe I'm missing some context.

-- Koos (mobile)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ethan at ethanhs.me Sun Nov 12 14:21:33 2017
From: ethan at ethanhs.me (Ethan Smith)
Date: Sun, 12 Nov 2017 11:21:33 -0800
Subject: [Python-Dev] PEP 561 rework
In-Reply-To: References: Message-ID:

On Sun, Nov 12, 2017 at 9:53 AM, Jelle Zijlstra wrote:
> 2017-11-12 3:40 GMT-08:00 Ethan Smith :
>> Hello,
>>
>> I re-wrote my PEP to have typing opt-in be per-package rather than per-distribution. This greatly simplifies things, and thanks to the feedback and suggestions of Nick Coghlan, it is entirely compatible with older packaging tooling.
>>
>> The main idea is there are two types of packages:
>> - types are packaged with runtime code (inline or stubs in the same package)
>> - types are in a separate package (a third party or maintainer wants to ship type information, but not with runtime code).
>>
>> The PEP is live on python.org: https://www.python.org/dev/peps/pep-0561/
>>
>> And as always, duplicated below.
>>
>> Cheers,
>>
>> Ethan Smith
>>
>> ---------------------------------------------------
>>
>> PEP: 561
>> Title: Distributing and Packaging Type Information
>> Author: Ethan Smith
>> Status: Draft
>> Type: Standards Track
>> Content-Type: text/x-rst
>> Created: 09-Sep-2017
>> Python-Version: 3.7
>> Post-History: 10-Sep-2017, 12-Sep-2017, 06-Oct-2017, 26-Oct-2017
>>
>> Abstract
>> ========
>>
>> PEP 484 introduced type hinting to Python, with goals of making typing gradual and easy to adopt. Currently, typing information must be distributed manually. This PEP provides a standardized means to leverage existing tooling to package and distribute type information with minimal work and an ordering for type checkers to resolve modules and collect this information for type checking.
>>
>> Rationale
>> =========
>>
>> Currently, package authors wish to distribute code that has inline type information. Additionally, maintainers would like to distribute stub files to keep Python 2 compatibility while using newer annotation syntax. However, there is no standard method to distribute packages with type information. Also, if one wished to ship stub files privately the only method available would be via setting ``MYPYPATH`` or the equivalent to manually point to stubs.
If the package can be released publicly, it can be added to >> typeshed [1]_. However, this does not scale and becomes a burden on the >> maintainers of typeshed. In addition, it ties bug fixes in stubs to releases >> of the tool using typeshed. >> >> PEP 484 has a brief section on distributing typing information. In this >> section [2]_ the PEP recommends using ``shared/typehints/pythonX.Y/`` for >> shipping stub files. However, manually adding a path to stub files for each >> third party library does not scale. The simplest approach people have taken >> is to add ``site-packages`` to their ``MYPYPATH``, but this causes type >> checkers to fail on packages that are highly dynamic (e.g. sqlalchemy >> and Django). >> >> >> Definition of Terms >> =================== >> >> The definition of "MAY", "MUST", and "SHOULD", and "SHOULD NOT" are >> to be interpreted as described in RFC 2119. >> >> "inline" - the types are part of the runtime code using PEP 526 and 3107 >> syntax. >> >> "stubs" - files containing only type information, empty of runtime code. >> >> "Distributions" are the packaged files which are used to publish and distribute >> a release. [3]_ >> >> "Module" a file containing Python runtime code or stubbed type information. >> >> "Package" a directory or directories that namespace Python modules. >> >> >> Specification >> ============= >> >> There are several motivations and methods of supporting typing in a package. >> This PEP recognizes three (3) types of packages that users of typing wish to >> create: >> >> 1. The package maintainer would like to add type information inline. >> >> 2. The package maintainer would like to add type information via stubs. >> >> 3. A third party or package maintainer would like to share stub files for >> a package, but the maintainer does not want to include them in the source >> of the package. >> >> This PEP aims to support these scenarios and make them simple to add to >> packaging and deployment. >> >> The two major parts of this specification are the packaging specifications >> and the resolution order for resolving module type information. The type >> checking spec is meant to replace the ``shared/typehints/pythonX.Y/`` spec >> of PEP 484 [2]_. >> >> New third party stub libraries SHOULD distribute stubs via the third party >> packaging methods proposed in this PEP in place of being added to typeshed. >> Typeshed will remain in use, but if maintainers are found, third party stubs >> in typeshed MAY be split into their own package. >> >> >> Packaging Type Information >> -------------------------- >> >> In order to make packaging and distributing type information as simple and >> easy as possible, packaging and distribution is done through existing >> frameworks. >> >> Package maintainers who wish to support type checking of their code MUST add >> a ``py.typed`` file to their package supporting typing. This marker is >> recursive, if a top-level package includes it, all sub-packages MUST support >> type checking as well. To have this file installed with the package, >> maintainers can use existing packaging options such as ``package_data`` in >> distutils, shown below. >> >> Distutils option example:: >> >> ... >> package_data = { >> 'pkg': ['py.typed'], >> }, >> ... >> >> For namespace packages, the ``py.typed`` file should be in the submodules of >> the namespace, to avoid conflicts and for clarity. 
>> >> Stub Only Packages >> '''''''''''''''''' >> >> For package maintainers wishing to ship stub files containing all of their >> type information, it is preferred that the ``*.pyi`` stubs are alongside the >> corresponding ``*.py`` files. However, the stubs can also be put in a separate >> package and distributed separately. Third parties can also find this method >> useful if they wish to distribute stub files. The name of the stub package >> MUST follow the scheme ``pkg_stubs`` for type stubs for the package named >> ``pkg``. The normal resolution order of checking ``*.pyi`` before ``*.py`` >> will be maintained. >> >> This is very minor, but what do you think of using "pkg-stubs" instead of > "pkg_stubs" (using a hyphen rather than an underscore)? This would make the > name illegal to import as a normal Python package, which makes it clear > that it's not a normal package. Also, there could be real packages named > "_stubs". > I suppose this makes sense. I checked PyPI and as of a few weeks ago there were no packages with the name pattern, but I like the idea of making it explicitly non-runtime importable. I cannot think of any reason not to do it, and the avoidance of confusion about the package being importable is a benefit. I will make the change with my next round of edits. > Third parties seeking to distribute stub files are encouraged to contact the >> maintainer of the package about distribution alongside the package. If the >> maintainer does not wish to maintain or package stub files or type information >> inline, then a third party stub only package can be created. >> >> In addition, stub only distributions SHOULD indicate which version(s) >> of the runtime package are supported by indicating the runtime distribution's >> version(s) through normal dependency data. For example, if there was a >> stub package ``flyingcircus_stubs``, it can indicate the versions of the >> runtime ``Flyingcircus`` distribution supported through ``install_requires`` >> in distutils based tools, or the equivalent in other packaging tools. >> >> >> Type Checker Module Resolution Order >> ------------------------------------ >> >> The following is the order that type checkers supporting this PEP SHOULD >> resolve modules containing type information: >> >> 1. User code - the files the type checker is running on. >> >> 2. Stubs or Python source manually put in the beginning of the path. Type >> checkers SHOULD provide this to allow the user complete control of which >> stubs to use, and patch broken stubs/inline types from packages. >> >> 3. Stub packages - these packages can supersede the installed packages. >> They can be found at ``pkg_stubs`` for package ``pkg``. >> >> 4. Inline packages - if there is nothing overriding the installed >> package, and it opts into type checking, inline types SHOULD be used. >> >> 5. Typeshed (if used) - Provides the stdlib types and several third party >> libraries. >> >> Type checkers that check a different Python version than the version they run >> on MUST find the type information in the ``site-packages``/``dist-packages`` >> of that Python version. This can be queried e.g. >> ``pythonX.Y -c 'import site; print(site.getsitepackages())'``. It is also recommended >> that the type checker allow for the user to point to a particular Python >> binary, in case it is not in the path. >> >> >> Implementation >> ============== >> >> The proposed scheme of indicating support for typing is completely backwards >> compatible, and requires no modification to tooling. 
A sample package with >> inline types is available [typed_pkg]_, as well as a sample package checker >> [pkg_checker]_ which reads the metadata of installed packages and reports on >> their status as either not typed, inline typed, or a stub package. >> >> >> Acknowledgements >> ================ >> >> This PEP would not have been possible without the ideas, feedback, and support >> of Ivan Levkivskyi, Jelle Zijlstra, Nick Coghlan, Daniel F Moisset, Nathaniel >> Smith, and Guido van Rossum. >> >> >> Version History >> =============== >> >> * 2017-11-12 >> >> * Rewritten to use existing tooling only >> * No need to indicate kind of type information in metadata >> * Name of marker file changed from ``.typeinfo`` to ``py.typed`` >> >> * 2017-11-10 >> >> * Specification re-written to use package metadata instead of distribution >> metadata. >> * Removed stub only packages and merged into third party packages spec. >> * Removed suggestion for typecheckers to consider checking runtime versions >> * Implementations updated to reflect PEP changes. >> >> * 2017-10-26 >> >> * Added implementation references. >> * Added acknowledgements and version history. >> >> * 2017-10-06 >> >> * Rewritten to use .distinfo/METADATA over a distutils specific command. >> * Clarify versioning of third party stub packages. >> >> * 2017-09-11 >> >> * Added information about current solutions and typeshed. >> * Clarify rationale. >> >> >> References >> ========== >> .. [1] Typeshed (https://github.com/python/typeshed) >> >> .. [2] PEP 484, Storing and Distributing Stub Files >> (https://www.python.org/dev/peps/pep-0484/#storing-and-distributing-stub-files) >> >> .. [3] PEP 426 definitions >> (https://www.python.org/dev/peps/pep-0426/) >> >> .. [typed_pkg] Sample typed package >> (https://github.com/ethanhs/sample-typed-package) >> >> .. [pkg_checker] Sample package checker >> (https://github.com/ethanhs/check_typedpkg) >> >> Copyright >> ========= >> >> This document has been placed in the public domain. >> >> >> >> .. >> Local Variables: >> mode: indented-text >> indent-tabs-mode: nil >> sentence-end-double-space: t >> fill-column: 70 >> coding: utf-8 >> End: >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle. >> zijlstra%40gmail.com >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bigobangux at gmail.com Sun Nov 12 14:33:41 2017 From: bigobangux at gmail.com (Ben Usman) Date: Sun, 12 Nov 2017 14:33:41 -0500 Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs) In-Reply-To: References: Message-ID: Sounds like that happens quite often. Yep, I totally agree with your point, I think I mentioned something like this in the post as a possible partial solution: a drop-in replacement for an ugly list compression people seem to be using now to solve the problem. It's easy to implement, but the adoption by community is questionable. I mean, if this is a relatively rare use case, but those who need it seem to have their own one-liners for that already, is there even a need for a method or function like this in standard library? To unify to improve readability (single standard "getitems" instead of many different get_n, gets, get_mutliple)? The only motivation I can think of, and even it is questionable. 
On Nov 12, 2017 05:06, "Nick Coghlan" wrote: On 11 November 2017 at 16:22, Jelle Zijlstra wrote: > 2017-11-10 19:53 GMT-08:00 Ben Usman : >> I was not able to find any PEPs that suggest this (search keywords: >> "PEP 445 dicts", "dictionary unpacking assignment", checked PEP-0), >> however, let me know if I am wrong. >> > It was discussed at great length on Python-ideas about a year ago. There is > a thread called "Unpacking a dict" from May 2016. I tend to post this every time the topic comes up, but: it's highly unlikely we'll get syntax for this when we don't even have a builtin to extract multiple items from a mapping in a single operation. So if folks would like dict unpacking syntax, then a suitable place to start would be a proposal for a "getitems" builtin that allowed operations like: b, a = getitems(d, ("b", "a")) operator.itemgetter and operator.attrgetter may provide some inspiration for possible proposals. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From bigobangux at gmail.com Sun Nov 12 14:43:11 2017 From: bigobangux at gmail.com (Ben Usman) Date: Sun, 12 Nov 2017 14:43:11 -0500 Subject: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs) In-Reply-To: References: Message-ID: Anyway, considering that this has been discussed a lot in the original post in 2016, I suggest stopping any further discussions here to avoid littering dev mailing list. Sorry for starting the thread in the first place and thank you, Jelle, for pointing me to the original discussion. On Nov 12, 2017 14:33, "Ben Usman" wrote: Sounds like that happens quite often. Yep, I totally agree with your point, I think I mentioned something like this in the post as a possible partial solution: a drop-in replacement for an ugly list compression people seem to be using now to solve the problem. It's easy to implement, but the adoption by community is questionable. I mean, if this is a relatively rare use case, but those who need it seem to have their own one-liners for that already, is there even a need for a method or function like this in standard library? To unify to improve readability (single standard "getitems" instead of many different get_n, gets, get_mutliple)? The only motivation I can think of, and even it is questionable. On Nov 12, 2017 05:06, "Nick Coghlan" wrote: On 11 November 2017 at 16:22, Jelle Zijlstra wrote: > 2017-11-10 19:53 GMT-08:00 Ben Usman : >> I was not able to find any PEPs that suggest this (search keywords: >> "PEP 445 dicts", "dictionary unpacking assignment", checked PEP-0), >> however, let me know if I am wrong. >> > It was discussed at great length on Python-ideas about a year ago. There is > a thread called "Unpacking a dict" from May 2016. I tend to post this every time the topic comes up, but: it's highly unlikely we'll get syntax for this when we don't even have a builtin to extract multiple items from a mapping in a single operation. So if folks would like dict unpacking syntax, then a suitable place to start would be a proposal for a "getitems" builtin that allowed operations like: b, a = getitems(d, ("b", "a")) operator.itemgetter and operator.attrgetter may provide some inspiration for possible proposals. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg.ewing at canterbury.ac.nz Sun Nov 12 17:39:05 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 13 Nov 2017 11:39:05 +1300 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID: <5A08CD89.90406@canterbury.ac.nz> Guido van Rossum wrote: > The PEP answers that clearly (under Implementation): > > > If an annotation was already a string, this string is preserved > > verbatim. This bothers me, because it means the transformation from what you write in the source and the object you get at run time is not reversible. Something interpreting the annotations as Python expressions at run time has no way to know whether a Python expression was written in the source or a string literal whose contents happen to look like an expression. I still think that the run-time form of a non-evaluated annotation should be some form of AST. That's been rejected on the grounds that the structure of an AST is considered an implementation detail that can change between Python versions. However, that in itself seems like a bad thing to me. There *should* be a standard and stable form of AST provided that doesn't change unless the syntax of Python changes. It doesn't have to be the same as what's used internally by the compiler. Proponents of Lisp point to the advantages of easily being able to express Lisp programs using Lisp data structures. There would also be benefits in having a standard way to represent Python programs using Python data structures. -- Greg From storchaka at gmail.com Sun Nov 12 17:57:53 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 00:57:53 +0200 Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],) In-Reply-To: References: Message-ID: 12.11.17 18:57, Guido van Rossum ????: > Sounds good to me. Thanks! Here is an implementation: https://bugs.python.org/issue32012. I have found that formally trailing comma after generator expression is not allowed by the grammar defined in the language reference: call: `primary` "(" [`argument_list` [","] | `comprehension`] ")" But the actual Grammar file contains different rules. From ncoghlan at gmail.com Sun Nov 12 18:34:06 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Nov 2017 09:34:06 +1000 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: On 13 November 2017 at 03:10, Serhiy Storchaka wrote: > 12.11.17 11:24, Nick Coghlan ????: >> >> The PEP also proposes repurposing the existing FutureWarning category >> to explicitly mean "backwards compatibility warnings that should be >> shown to users of Python applications" since: >> >> - we don't tend to use FutureWarning for its original nominal purpose >> (changes that will continue to run but will do something different) > > FutureWarning currently is used for its original nominal purpose in the re > and ElementTree modules. If the future warnings relate to regex and XML parsing, they'd still fall under the "for display to users" category, since those modules can't tell if the input data was application provided or part of an end user interface like a configuration file. > It even had been added in 2.7 for behavior that > already have been changed in Python 3 or will be changed in future versions > (emitted only with the -3 option). 
That's closer to the original purpose, but with them being 2.7 only, and gated behind the -3 switch, I think we can ignore them when it comes to defining the expected usage in 3.7+ Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Sun Nov 12 19:00:18 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 12 Nov 2017 16:00:18 -0800 Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],) In-Reply-To: References: Message-ID: It's hard to keep those two in sync, since the actual Grammar file is constrained by being strictly LL(1)... Can you get someone else to review the implementation? -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Nov 12 19:26:07 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 12 Nov 2017 16:26:07 -0800 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: <5A08CD89.90406@canterbury.ac.nz> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A08CD89.90406@canterbury.ac.nz> Message-ID: On Sun, Nov 12, 2017 at 2:39 PM, Greg Ewing wrote: > Guido van Rossum wrote: > >> The PEP answers that clearly (under Implementation): >> >> > If an annotation was already a string, this string is preserved >> > verbatim. >> > > This bothers me, because it means the transformation from > what you write in the source and the object you get at > run time is not reversible. Something interpreting the > annotations as Python expressions at run time has no way > to know whether a Python expression was written in the > source or a string literal whose contents happen to look > like an expression. > The rule is a form of normalization. It is the best way to ensure that the following two will mean the same thing: def foo(a: int): ... def foo(a: 'int'): ... To reverse the transformation you have two options: always generate string literals, or always generate non-string annotations. The resulting code could look like either of the above two, and the meaning should be identical. (There are edge cases, e.g.: def foo(a: 'int)'): ... def foo(a: '"int"'): ... but they aren't currently valid as PEP 484 type annotations and it's no burden to keep them out of future syntactic extensions of annotations.) > I still think that the run-time form of a non-evaluated > annotation should be some form of AST. That's been rejected > on the grounds that the structure of an AST is considered > an implementation detail that can change between Python > versions. > > However, that in itself seems like a bad thing to me. > There *should* be a standard and stable form of AST > provided that doesn't change unless the syntax of Python > changes. It doesn't have to be the same as what's used > internally by the compiler. > But Python's syntax changes in nearly every release. And even though we nearly always take pains not to invalidate source code that used to be valid, that's not so easy at the AST level, which elides many details (such as whitespace and parentheses). > Proponents of Lisp point to the advantages of easily > being able to express Lisp programs using Lisp data > structures. There would also be benefits in having a > standard way to represent Python programs using Python > data structures. > But we have to weigh the advantages against other requirements. IIRC Lisp had almost no syntax so I presume the mapping to data structures was nearly trivial compared to Python. 
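To make the normalization concrete, here is a minimal sketch of what the two spellings from above would look like at runtime under the draft PEP 563 semantics (assuming annotations end up stored as strings in __annotations__, as the PEP describes):

    def foo(a: int): ...
    def bar(a: 'int'): ...

    # With postponed evaluation, both functions are expected to store
    # the same string, so the two spellings are indistinguishable:
    print(foo.__annotations__)  # {'a': 'int'}
    print(bar.__annotations__)  # {'a': 'int'}

    # Evaluating the stored string recovers the annotation object:
    import typing
    print(typing.get_type_hints(foo))  # {'a': <class 'int'>}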
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Nov 12 20:42:33 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Nov 2017 11:42:33 +1000 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A08CD89.90406@canterbury.ac.nz> Message-ID: On 13 November 2017 at 10:26, Guido van Rossum wrote: > [Greg] >> Proponents of Lisp point to the advantages of easily >> being able to express Lisp programs using Lisp data >> structures. There would also be benefits in having a >> standard way to represent Python programs using Python >> data structures. > > But we have to weigh the advantages against other requirements. IIRC Lisp > had almost no syntax so I presume the mapping to data structures was nearly > trivial compared to Python. As far as I recall, the base primitives in Lisp are something like UnaryOp, BinOp, and an asymmetric head/tail BinOp variant for working with sequences. That said, I think it could be genuinely useful to formally define a "Simple Python expression" concept that: - allowed symbolic Python expressions (including literals & displays) - prohibited the use of keywords (avoiding the harder cases like lambda, yield, await, and comprehensions) - allowed name references (but didn't resolve them at compile time) - defined an "escape prefix" to explicitly indicate references to Python globals & builtins That specific set of characteristics is drawn from the syntax used for queries on pandas data frames: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html Right now, the data frame API relies on strings for that capability, and pandas.eval to do the evaluation. pandas.eval in turn provides two levels of configurability: * which parser/compiler to use (which affects operand precedence) * which execution engine to use (numexpr is much faster for SciPy components than a regular Python eval) For an integrated-into-Python variant, we presumably wouldn't allow configurable operand precedence (we'd use the same rules as normal expressions), but we could still offer a runtime expression type that was compiled at the same time as everything else, but rather than accepting parameters like a regular function, instead accepted a namespace to use when evaluating the expression. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Sun Nov 12 22:48:28 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 12 Nov 2017 19:48:28 -0800 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: > This change will lead to DeprecationWarning being displayed by default for: > > * code executed directly at the interactive prompt > * code executed directly as part of a single-file script Technically it's orthogonal, but if you're trying to get better warnings in the REPL, then you might also want to look at: https://bugs.python.org/issue1539925 https://github.com/ipython/ipython/issues/6611 -n -- Nathaniel J. 
Smith -- https://vorpus.org From njs at pobox.com Sun Nov 12 23:07:12 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 12 Nov 2017 20:07:12 -0800 Subject: [Python-Dev] PEP 561 rework In-Reply-To: References: Message-ID: On Sun, Nov 12, 2017 at 11:21 AM, Ethan Smith wrote: > > > On Sun, Nov 12, 2017 at 9:53 AM, Jelle Zijlstra > wrote: >> >> 2017-11-12 3:40 GMT-08:00 Ethan Smith : >>> The name of the stub >>> package >>> MUST follow the scheme ``pkg_stubs`` for type stubs for the package named >>> ``pkg``. The normal resolution order of checking ``*.pyi`` before >>> ``*.py`` >>> will be maintained. >> >> This is very minor, but what do you think of using "pkg-stubs" instead of >> "pkg_stubs" (using a hyphen rather than an underscore)? This would make the >> name illegal to import as a normal Python package, which makes it clear that >> it's not a normal package. Also, there could be real packages named >> "_stubs". > > I suppose this makes sense. I checked PyPI and as of a few weeks ago there > were no packages with the name pattern, but I like the idea of making it > explicitly non-runtime importable. I cannot think of any reason not to do > it, and the avoidance of confusion about the package being importable is a > benefit. I will make the change with my next round of edits. PyPI doesn't distinguish between the names 'foo-stubs' and 'foo_stubs' -- they get normalized together. So even if you use 'foo-stubs' as the directory name on sys.path to avoid collisions at import time, it still won't allow someone to distribute a separate 'foo_stubs' package on PyPI. If you do go with a fixed naming convention like this, the PEP should probably also instruct the PyPI maintainers that whoever owns 'foo' automatically has the right to control the name 'foo-stubs' as well. Or maybe some tweak to PEP 541 is needed. -n -- Nathaniel J. Smith -- https://vorpus.org From storchaka at gmail.com Mon Nov 13 02:22:11 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 09:22:11 +0200 Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],) In-Reply-To: References: Message-ID: 13.11.17 02:00, Guido van Rossum wrote: > It's hard to keep those two in sync, since the actual Grammar file is > constrained by being strictly LL(1)... Can you get someone else to > review the implementation? I haven't changed the grammar, just the checks in the CST to AST transformer. Maybe it would be better to change the grammar, but I'm not an expert in this. Maybe Benjamin could provide a better solution. There are other related differences between the language specification and the implementation. The following examples are valid syntax: @deco(x for x in [1]) def f(): ... class C(x for x in [1]): ... The latter always raises a TypeError at runtime ("cannot create 'generator' instances"), but is compiled successfully. From ethan at ethanhs.me Mon Nov 13 02:33:39 2017 From: ethan at ethanhs.me (Ethan Smith) Date: Sun, 12 Nov 2017 23:33:39 -0800 Subject: [Python-Dev] PEP 561 rework In-Reply-To: References: Message-ID: On Sun, Nov 12, 2017 at 8:07 PM, Nathaniel Smith wrote: > On Sun, Nov 12, 2017 at 11:21 AM, Ethan Smith wrote: > > > > > > On Sun, Nov 12, 2017 at 9:53 AM, Jelle Zijlstra < > jelle.zijlstra at gmail.com> > > wrote: > >> > >> 2017-11-12 3:40 GMT-08:00 Ethan Smith : > >>> The name of the stub > >>> package > >>> MUST follow the scheme ``pkg_stubs`` for type stubs for the package > named > >>> ``pkg``. 
The normal resolution order of checking ``*.pyi`` before > >>> ``*.py`` > >>> will be maintained. > >> > >> This is very minor, but what do you think of using "pkg-stubs" instead > of > >> "pkg_stubs" (using a hyphen rather than an underscore)? This would make > the > >> name illegal to import as a normal Python package, which makes it clear > that > >> it's not a normal package. Also, there could be real packages named > >> "_stubs". > > > > I suppose this makes sense. I checked PyPI and as of a few weeks ago > there > > were no packages with the name pattern, but I like the idea of making it > > explicitly non-runtime importable. I cannot think of any reason not to do > > it, and the avoidance of confusion about the package being importable is > a > > benefit. I will make the change with my next round of edits. > > PyPI doesn't distinguish between the names 'foo-stubs' and 'foo_stubs' > -- they get normalized together. So even if you use 'foo-stubs' as the > directory name on sys.path to avoid collisions at import time, it > still won't allow someone to distribute a separate 'foo_stubs' package > on PyPI. > > If you do go with a fixed naming convention like this, the PEP should > probably also instruct the PyPI maintainers that whoever owns 'foo' > automatically has the right to control the name 'foo-stubs' as well. > Or maybe some tweak to PEP 541 is needed. > As I understand it however, the distribution name need not map to the package name in any way. So regardless of whether foo-stubs is seen as foo_stubs, I could name the distribution Albatross if I wished, and install the foo-stubs package into site/dist-packages, and it would work.
Also I'm not > sure if the PyPI change would require an edict from a PEP, but if so, I > wouldn't be opposed to the idea. I think it would be nice to default the > stub packages to the owners of the normal packages (people should, to my > understanding, be able to make alternate distributions without hassle). Questions like the following aren't new ones: - If I am responsible for the name 'foo' on PyPI, how much influence, if any, should I have over the use of 'foo' as a prefix in other distribution package names? - If I ship a distribution package through PyPI containing an import package named 'bar', how much influence, if any, should I have over the use of 'bar' as an import package or module name in other distribution packages? I expect that the PSF will need to address them directly some day, but I don't think PEP 561 itself needs to address them (and the first version of PEP 541 probably won't either). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From storchaka at gmail.com Mon Nov 13 05:11:12 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 12:11:12 +0200 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: 13.11.17 05:48, Nathaniel Smith wrote: > On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: >> This change will lead to DeprecationWarning being displayed by default for: >> >> * code executed directly at the interactive prompt >> * code executed directly as part of a single-file script > > Technically it's orthogonal, but if you're trying to get better > warnings in the REPL, then you might also want to look at: > > https://bugs.python.org/issue1539925 > https://github.com/ipython/ipython/issues/6611 The idea LGTM. Would you mind creating a pull request, Nathaniel? From storchaka at gmail.com Mon Nov 13 05:33:00 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 12:33:00 +0200 Subject: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff In-Reply-To: References: <60B5B0C5-7688-428C-980E-1D08AEFD076A@python.org> <20171108102157.2170b59e@fsol> <99945e14-5099-1e28-fad8-08be204766ef@python.org> Message-ID: 12.11.17 03:47, Guido van Rossum wrote: > - In Python 3 code, using `\u` escapes in a b'...' literal gives > "DeprecationWarning: invalid escape sequence '\u'" > > In both cases these warnings are currently only generated if you run > mypy with these warnings enabled, e.g. `python3 -Wd -m mypy `. > But this means that mypy would start generating these by default if > those warnings were enabled everywhere by default (per Antoine's > preference). And while it's debatable whether they are useful, there > should at least be a way to turn them off (e.g. when checking Python 2 > code that's never going to be ported). Running mypy in the above way is > awkward; mypy would likely have to grow a new flag to control this. It is hard to determine what category is better for these warnings: DeprecationWarning or SyntaxWarning. On one hand, SyntaxWarning looks more appropriate since the warning is generated by the Python parser. On the other hand, numerous warnings can confuse end users of Python applications, and DeprecationWarning is silenced by default. This warning is a special case. When it is converted to an error, it becomes a SyntaxError, because the latter contains information about the location of an invalid escape sequence, and produces a better error report (containing the source line, cursor, etc.). 
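For reference, a small transcript of the behaviour described above (output abbreviated; assuming a CPython 3.6 build):

    $ python3 -Wd -c "x = b'\u0041'"
    <string>:1: DeprecationWarning: invalid escape sequence '\u'

    $ python3 -W error -c "x = b'\u0041'"
      File "<string>", line 1
        x = b'\u0041'
    SyntaxError: invalid escape sequence '\u'

The error form includes the offending source line, which the plain warning does not.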
From solipsis at pitrou.net Mon Nov 13 05:46:37 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 13 Nov 2017 11:46:37 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ References: Message-ID: <20171113114637.280f3c54@fsol> On Sun, 12 Nov 2017 19:48:28 -0800 Nathaniel Smith wrote: > On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: > > This change will lead to DeprecationWarning being displayed by default for: > > > > * code executed directly at the interactive prompt > > * code executed directly as part of a single-file script > > Technically it's orthogonal, but if you're trying to get better > warnings in the REPL, then you might also want to look at: > > https://bugs.python.org/issue1539925 > https://github.com/ipython/ipython/issues/6611 Depends what you call "better". Personally, I don't want to see warnings each and every time I use a deprecated or questionable construct or API from the REPL. Regards Antoine. From rosuav at gmail.com Mon Nov 13 06:37:46 2017 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 13 Nov 2017 22:37:46 +1100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: <20171113114637.280f3c54@fsol> References: <20171113114637.280f3c54@fsol> Message-ID: On Mon, Nov 13, 2017 at 9:46 PM, Antoine Pitrou wrote: > On Sun, 12 Nov 2017 19:48:28 -0800 > Nathaniel Smith wrote: >> On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: >> > This change will lead to DeprecationWarning being displayed by default for: >> > >> > * code executed directly at the interactive prompt >> > * code executed directly as part of a single-file script >> >> Technically it's orthogonal, but if you're trying to get better >> warnings in the REPL, then you might also want to look at: >> >> https://bugs.python.org/issue1539925 >> https://github.com/ipython/ipython/issues/6611 > > Depends what you call "better". Personally, I don't want to see > warnings each and every time I use a deprecated or questionable > construct or API from the REPL. Isn't that the entire *point* of warnings? When you're working at the REPL, you're the one in control of which APIs you use, so you should be the one to know about deprecations. ChrisA From solipsis at pitrou.net Mon Nov 13 07:29:30 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 13 Nov 2017 13:29:30 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ References: <20171113114637.280f3c54@fsol> Message-ID: <20171113132930.5496e80b@fsol> On Mon, 13 Nov 2017 22:37:46 +1100 Chris Angelico wrote: > On Mon, Nov 13, 2017 at 9:46 PM, Antoine Pitrou wrote: > > On Sun, 12 Nov 2017 19:48:28 -0800 > > Nathaniel Smith wrote: > >> On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: > >> > This change will lead to DeprecationWarning being displayed by default for: > >> > > >> > * code executed directly at the interactive prompt > >> > * code executed directly as part of a single-file script > >> > >> Technically it's orthogonal, but if you're trying to get better > >> warnings in the REPL, then you might also want to look at: > >> > >> https://bugs.python.org/issue1539925 > >> https://github.com/ipython/ipython/issues/6611 > > > > Depends what you call "better". Personally, I don't want to see > > warnings each and every time I use a deprecated or questionable > > construct or API from the REPL. > > Isn't that the entire *point* of warnings? 
When you're working at the > REPL, you're the one in control of which APIs you use, so you should > be the one to know about deprecations. If I see a warning once every REPL session, I know about the deprecation already, thank you. I don't need to be taken by the hand like a little child. Besides, the code I write in the REPL is not meant for durable use. Regards Antoine. From stefan at bytereef.org Mon Nov 13 08:55:00 2017 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 13 Nov 2017 14:55:00 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: <20171113114637.280f3c54@fsol> Message-ID: <20171113135459.GA2788@bytereef.org> On Mon, Nov 13, 2017 at 10:37:46PM +1100, Chris Angelico wrote: > >> https://bugs.python.org/issue1539925 > >> https://github.com/ipython/ipython/issues/6611 > > > > Depends what you call "better". Personally, I don't want to see > > warnings each and every time I use a deprecated or questionable > > construct or API from the REPL. > > Isn't that the entire *point* of warnings? When you're working at the > REPL, you're the one in control of which APIs you use, so you should > be the one to know about deprecations. I haven't followed the long discussions, so this is probably not a very novel observation. But it seems to me that we have a problem getting users to treat the python command like e.g. gcc. If I want gcc warnings, I use -Wall -Wextra. I think the whole problem is that python warnings are a bit of an obscure feature (they were to me for a long time). Stefan Krah From storchaka at gmail.com Mon Nov 13 09:09:01 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 16:09:01 +0200 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: <20171113132930.5496e80b@fsol> References: <20171113114637.280f3c54@fsol> <20171113132930.5496e80b@fsol> Message-ID: 13.11.17 14:29, Antoine Pitrou wrote: > On Mon, 13 Nov 2017 22:37:46 +1100 > Chris Angelico wrote: >> On Mon, Nov 13, 2017 at 9:46 PM, Antoine Pitrou wrote: >>> On Sun, 12 Nov 2017 19:48:28 -0800 >>> Nathaniel Smith wrote: >>>> On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote: >>>>> This change will lead to DeprecationWarning being displayed by default for: >>>>> >>>>> * code executed directly at the interactive prompt >>>>> * code executed directly as part of a single-file script >>>> >>>> Technically it's orthogonal, but if you're trying to get better >>>> warnings in the REPL, then you might also want to look at: >>>> >>>> https://bugs.python.org/issue1539925 >>>> https://github.com/ipython/ipython/issues/6611 >>> >>> Depends what you call "better". Personally, I don't want to see >>> warnings each and every time I use a deprecated or questionable >>> construct or API from the REPL. >> >> Isn't that the entire *point* of warnings? When you're working at the >> REPL, you're the one in control of which APIs you use, so you should >> be the one to know about deprecations. > > If I see a warning once every REPL session, I know about the deprecation > already, thank you. I don't need to be taken by the hand like a little > child. Besides, the code I write in the REPL is not meant for durable > use. Hmm, now I see that Nathaniel's simple solution is not completely correct. If the warning action is 'module', it should be emitted only once if used directly in the REPL, because '__main__' is the same module. 
But if you use functions foo() and bar(), and both emit the same warning, you should get a warning from every entered command, because after the first warning you know only about the first function. From levkivskyi at gmail.com Mon Nov 13 10:56:03 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 13 Nov 2017 16:56:03 +0100 Subject: [Python-Dev] Disallow ambiguous syntax f(x for x in [1],) In-Reply-To: References: Message-ID: FWIW, it is common to have syntax checks in ast.c. Especially situations like ``class C(x for x in [1]): ...`` would, I think, be hard to prohibit in the Grammar. Since we have many checks in ast.c already anyway, I wouldn't care much about implementing these corner cases in the Grammar. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Mon Nov 13 10:58:24 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 13 Nov 2017 16:58:24 +0100 Subject: [Python-Dev] PEP 561 rework In-Reply-To: References: Message-ID: Thanks Ethan for all the work! I will be glad to see this accepted and implemented in mypy. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Nov 13 11:08:06 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 13 Nov 2017 17:08:06 +0100 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option Message-ID: Hi, The discussion on DeprecationWarning reminded me of my old idea of a "development mode" for CPython: -X dev. Since Brett likes it, I post it on python-dev. Last year, I posted this idea to python-ideas but my idea was rejected: https://mail.python.org/pipermail/python-ideas/2016-March/039314.html In short: python3.7 -X dev script.py behaves as: PYTHONMALLOC=debug python3.7 -Wd -b -X faulthandler script.py The idea of -X dev is to enable "debug checks". Some of these checks are already enabled by default if CPython is compiled in debug mode. These checks help to detect bugs earlier. The problem is that outside CPython development, Python developers use a Python binary from their Linux distributor or a binary installed from python.org: a binary compiled in release mode. Well, some Linux distributions provide a debug build, but the ABI is incompatible, and so it is hard to use in practice. Running the tests of your code in -X dev mode should help you to catch subtle bugs and prevent regressions if you upgrade Python. The -X dev mode doesn't raise hard exceptions, but usually only emits warnings. It allows you to fix bugs one by one in your own code, without having to fix bugs in third party code. If you control your whole code base, you may want to enable -X dev=strict which raises hard exceptions, to make sure that you are really safe. My "-X dev" idea is not incompatible with Nick's PEP 565 "Show DeprecationWarning in __main__" and it's different: it's an opt-in option, while Nick wants to change the default behaviour. --- Full copy of my email sent last year to python-ideas. When I develop on CPython, I'm always building Python in debug mode using ./configure --with-pydebug. This mode enables a *lot* of extra checks which help me to detect bugs earlier. The debug mode makes Python much slower and so is not the default. "python3" in Linux distributions is a binary compiled in release mode. 
When they also provide a binary compiled in debug mode, you will probably have issues using your existing C extensions, since all of them are compiled in release mode, which is not compatible (you must recompile C extensions in debug mode). I propose to add a "development mode" to Python, to get a few checks to detect bugs earlier: a new -X dev command line option. Example: python3.6 -X dev script.py Checks enabled by this flag must: * Not flood the stdout/stderr (ex: don't write one message per second) * Have a low overhead in terms of CPU and memory (ex: max 10%) I propose to enable: * Show DeprecationWarning and ResourceWarning warnings: python -Wd * Show BytesWarning warning: python -b * Enable Python assertions (assert) and set __debug__ to True: remove (or just ignore) -O or -OO command line arguments * faulthandler to get a Python traceback on segfault and fatal errors: python -X faulthandler * Debug hooks on Python memory allocators: PYTHONMALLOC=debug For example, I consider that enabling tracemalloc is too expensive (CPU & memory) and must not be enabled in -X dev. I wrote a proof-of-concept: if -X dev is used, execute Python once again with more parameters. Basically, replace: python3.6 -X dev ... with PYTHONMALLOC=debug python3.6 -Wd -b -X faulthandler ... The list of checks can be extended later. For example, we may enable the debug mode of asyncio: PYTHONASYNCIODEBUG=1. The scope of the "developer mode" is unclear to me. Many modules (ex: asyncio) already have a debug mode. Would it be a bad idea to enable *all* debug modes of *all* modules? For example, http.client has a debug level: do you expect debug traces of the HTTP client when you develop your application? IMHO the scope must be well defined: don't modify modules of the stdlib, only enable more debug flags in the Python core (warnings, unicode, memory allocators, etc.). Maybe we even need a -X dev=strict which would be stricter: * use -Werror instead of -Wd: raise an exception when a warning is emitted * use -bb instead of -b: get BytesWarning exceptions * Replace the "inconsistent use of tabs and spaces in indentation" warning with an error in the Python parser * etc. Again, this list can be extended later. Or maybe we need multiple levels to control the quantity of debug traces, warnings, ... written into stdout/stderr? In my experience, the problem with converting warnings to errors is that you don't control the code of your whole application. It's common that third party modules raise DeprecationWarning, ResourceWarning, etc. Even some modules of the Python stdlib raise DeprecationWarning! For example, I recall that distutils raises a DeprecationWarning on Python 3.4 when importing the imp module. "Similar" ideas in other programming languages: * Perl: http://perldoc.perl.org/strict.html * PHP: https://secure.php.net/manual/fr/function.error-reporting.php Victor From victor.stinner at gmail.com Mon Nov 13 11:27:17 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 13 Nov 2017 17:27:17 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: > The change proposed in this PEP is to update the default warning filter list > to be:: > > default::DeprecationWarning:__main__ > ignore::DeprecationWarning > ignore::PendingDeprecationWarning > ignore::ImportWarning > ignore::BytesWarning > ignore::ResourceWarning This PEP can break applications parsing Python stderr, applications which don't expect to get DeprecationWarning in their output. 
Is it possible to disable this PEP using a command line option and/or environment variable to get the Python 3.6 behaviour (always ignore DeprecationWarning)? I guess that it's "PYTHONWARNINGS=ignore::DeprecationWarning:__main__". Am I right? Would you mind mentioning that in the PEP, please? Sorry, I'm not an expert on the warnings module. Is it possible to also configure Python to ignore DeprecationWarning using the warnings module, at the start of the __main__ script? Something like warnings.filterwarnings("ignore", '', DeprecationWarning)? Again, maybe explain that in the PEP? Victor From victor.stinner at gmail.com Mon Nov 13 11:33:24 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 13 Nov 2017 17:33:24 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: Hi, I'm not convinced that this PEP 565 will prevent developers from being surprised when upgrading Python, since more and more applications are using an entry point: an import + a single function call. For example, *all* OpenStack applications use an entry point and so will be unaffected by this PEP. There is no surprise, it's documented in the PEP: "code imported from an executable script wrapper generated at installation time based on a console_scripts or gui_scripts entry point definition". It's hard to find a compromise between two incompatible use cases: "run an application" ("user") and "develop an application" ("developer"). I proposed again my "-X dev" idea in another thread for the "develop an application" use case ;-) If the Python REPL is included in the "run an application" use case, the frontier between user and developer becomes blurry :-) Is the REPL designed for users or developers? Should Python guess the intent of the human connected to the keyboard? ... Victor 2017-11-12 10:24 GMT+01:00 Nick Coghlan : > I've written a short(ish) PEP for the proposal to change the default > warnings filters to show DeprecationWarning in __main__: > https://www.python.org/dev/peps/pep-0565/ > > The core proposal itself is just the idea in > https://bugs.python.org/issue31975 (i.e. adding > "default::DeprecationWarning:__main__" to the default filter set), but > the PEP fills in some details on the motivation for the original > change to the defaults, and why the current proposal is to add a new > filter for __main__, rather than dropping the default > DeprecationWarning filter entirely. > > The PEP also proposes repurposing the existing FutureWarning category > to explicitly mean "backwards compatibility warnings that should be > shown to users of Python applications" since: > > - we don't tend to use FutureWarning for its original nominal purpose > (changes that will continue to run but will do something different) > - FutureWarning was added in 2.3, so it's available in all still > supported versions of Python, and is shown by default in all of them > - it's at least arguably a less-jargony spelling of > DeprecationWarning, and hence more appropriate for displaying to end > users that may not have encountered the specific notion of "API > deprecation" > > Cheers, > Nick. 
> > ============== > PEP: 565 > Title: Show DeprecationWarning in __main__ > Author: Nick Coghlan > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 12-Nov-2017 > Python-Version: 3.7 > Post-History: 12-Nov-2017 > > > Abstract > ======== > > In Python 2.7 and Python 3.2, the default warning filters were updated to hide > DeprecationWarning by default, such that deprecation warnings in development > tools that were themselves written in Python (e.g. linters, static analysers, > test runners, code generators) wouldn't be visible to their users unless they > explicitly opted in to seeing them. > > However, this change has had the unfortunate side effect of making > DeprecationWarning markedly less effective at its primary intended purpose: > providing advance notice of breaking changes in APIs (whether in CPython, the > standard library, or in third party libraries) to users of those APIs. > > To improve this situation, this PEP proposes a single adjustment to the > default warnings filter: displaying deprecation warnings attributed to the main > module by default. > > This change will mean that code entered at the interactive prompt and code in > single file scripts will revert to reporting these warnings by default, while > they will continue to be silenced by default for packaged code distributed as > part of an importable module. > > The PEP also proposes a number of small adjustments to the reference > interpreter and standard library documentation to help make the warnings > subsystem more approachable for new Python developers. > > > Specification > ============= > > The current set of default warnings filters consists of:: > > ignore::DeprecationWarning > ignore::PendingDeprecationWarning > ignore::ImportWarning > ignore::BytesWarning > ignore::ResourceWarning > > The default ``unittest`` test runner then uses ``warnings.catch_warnings()`` and > ``warnings.simplefilter('default')`` to override the default filters while > running test cases. > > The change proposed in this PEP is to update the default warning filter list > to be:: > > default::DeprecationWarning:__main__ > ignore::DeprecationWarning > ignore::PendingDeprecationWarning > ignore::ImportWarning > ignore::BytesWarning > ignore::ResourceWarning > > This means that in cases where the nominal location of the warning (as > determined by the ``stacklevel`` parameter to ``warnings.warn``) is in the > ``__main__`` module, the first occurrence of each DeprecationWarning will once > again be reported. 
> > This change will lead to DeprecationWarning being displayed by default for: > > * code executed directly at the interactive prompt > * code executed directly as part of a single-file script > > While continuing to be hidden by default for: > > * code imported from another module in a ``zipapp`` archive's ``__main__.py`` > file > * code imported from another module in an executable package's ``__main__`` > submodule > * code imported from an executable script wrapper generated at installation time > based on a ``console_scripts`` or ``gui_scripts`` entry point definition > > As a result, API deprecation warnings encountered by development tools written > in Python should continue to be hidden by default for users of those tools. > > While not its originally intended purpose, the standard library documentation > will also be updated to explicitly recommend the use of > ``FutureWarning`` (rather > than ``DeprecationWarning``) for backwards compatibility warnings that are > intended to be seen by *users* of an application. > > This will give the following three distinct categories of backwards > compatibility warning, with three different intended audiences: > > * ``PendingDeprecationWarning``: reported by default only in test runners that > override the default set of warning filters. The intended audience is Python > developers that take an active interest in ensuring the future compatibility > of their software (e.g. professional Python application developers with > specific support obligations). > * ``DeprecationWarning``: reported by default for code that runs directly in > the ``__main__`` module (as such code is considered relatively unlikely to > have a dedicated test suite), but relies on test suite based reporting for > code in other modules. The intended audience is Python developers that are at > risk of upgrades to their dependencies (including upgrades to Python itself) > breaking their software (e.g. developers using Python to script environments > where someone else is in control of the timing of dependency upgrades). > * ``FutureWarning``: always reported by default. The intended audience is users > of applications written in Python, rather than other Python developers > (e.g. warning about use of a deprecated setting in a configuration file > format). > > Given its presence in the standard library since Python 2.3, ``FutureWarning`` > would then also have a secondary use case for libraries and frameworks that > support multiple Python versions: as a more reliably visible alternative to > ``DeprecationWarning`` in Python 2.7 and versions of Python 3.x prior to 3.7. > > > Motivation > ========== > > As discussed in [1_] and mentioned in [2_], Python 2.7 and Python 3.2 changed > the default handling of ``DeprecationWarning`` such that: > > * the warning was hidden by default during normal code execution > * the ``unittest`` test runner was updated to re-enable it when running tests > > The intent was to avoid cases of tooling output like the following:: > > $ devtool mycode/ > /usr/lib/python3.6/site-packages/devtool/cli.py:1: > DeprecationWarning: 'async' and 'await' will become reserved keywords > in Python 3.7 > async = True > ... actual tool output ... > > Even when ``devtool`` is a tool specifically for Python programmers, this is not > a particularly useful warning, as it will be shown on every invocation, even > though the main helpful step an end user can take is to report a bug to the > developers of ``devtool``. 
The warning is even less helpful for general purpose > developer tools that are used across more languages than just Python. > > However, this change proved to have unintended consequences for the following > audiences: > > * anyone using a test runner other than the default one built into ``unittest`` > (since the request for third party test runners to change their default > warnings filters was never made explicitly) > * anyone using the default ``unittest`` test runner to test their Python code > in a subprocess (since even ``unittest`` only adjusts the warnings settings > in the current process) > * anyone writing Python code at the interactive prompt or as part of a directly > executed script that didn't have a Python level test suite at all > > In these cases, ``DeprecationWarning`` ended up becoming almost entirely > equivalent to ``PendingDeprecationWarning``: it was simply never seen at all. > > > Limitations on PEP Scope > ======================== > > This PEP exists specifically to explain both the proposed addition to the > default warnings filter for 3.7, *and* to more clearly articulate the rationale > for the original change to the handling of DeprecationWarning back in Python 2.7 > and 3.2. > > This PEP does not solve all known problems with the current approach to handling > deprecation warnings. Most notably: > > * the default ``unittest`` test runner does not currently report deprecation > warnings emitted at module import time, as the warnings filter > override is only > put in place during test execution, not during test discovery and loading. > * the default ``unittest`` test runner does not currently report deprecation > warnings in subprocesses, as the warnings filter override is applied directly > to the loaded ``warnings`` module, not to the ``PYTHONWARNINGS`` environment > variable. > * the standard library doesn't provide a straightforward way to opt in to seeing > all warnings emitted *by* a particular dependency prior to upgrading it > (the third-party ``warn`` module [3_] does provide this, but enabling it > involves monkeypatching the standard library's ``warnings`` module). > * re-enabling deprecation warnings by default in __main__ doesn't help in > handling cases where software has been factored out into support modules, but > those modules still have little or no automated test coverage. Near term, the > best currently available answer is to run such applications with > ``PYTHONWARNINGS=default::DeprecationWarning`` or > ``python -W default::DeprecationWarning`` and pay attention to their > ``stderr`` output. Longer term, this is really a question for researchers > working on static analysis of Python code: how to reliably find usage of > deprecated APIs, and how to infer that an API or parameter is deprecated > based on ``warnings.warn`` calls, without actually running either the code > providing the API or the code accessing it. > > While these are real problems with the status quo, they're excluded from > consideration in this PEP because they're going to require more complex > solutions than a single additional entry in the default warnings filter, > and resolving them at least potentially won't require going through the PEP > process. 
> > For anyone interested in pursuing them further, the first two would be > ``unittest`` module enhancement requests, the third would be a ``warnings`` > module enhancement request, while the last would only require a PEP if > inferring API deprecations from their contents was deemed to be an intractable > code analysis problem, and an explicit function and parameter marker syntax in > annotations was proposed instead. > > > References > ========== > > .. [1] stdlib-sig thread proposing the original default filter change > (https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html) > > .. [2] Python 2.7 notification of the default warnings filter change > (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings) > > .. [3] Emitting warnings based on the location of the warning itself > (https://pypi.org/project/warn/) > > Copyright > ========= > > This document has been placed in the public domain. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From solipsis at pitrou.net Mon Nov 13 11:40:14 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 13 Nov 2017 17:40:14 +0100 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option References: Message-ID: <20171113174014.2c141979@fsol> On Mon, 13 Nov 2017 17:08:06 +0100 Victor Stinner wrote: > Hi, > > The discussion on DeprecationWarning reminded me of my old idea of a > "development mode" for CPython: -X dev. Since Brett likes it, I post > it on python-dev. Last year, I posted this idea to python-ideas but my > idea was rejected: > > https://mail.python.org/pipermail/python-ideas/2016-March/039314.html > > In short: > python3.7 -X dev script.py > behaves as: > PYTHONMALLOC=debug python3.7 -Wd -b -X faulthandler script.py I would personally not add `-b` in those options. I think it was useful while porting stuff to 3.x, but not so much these days. Regards Antoine. From victor.stinner at gmail.com Mon Nov 13 11:46:58 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 13 Nov 2017 17:46:58 +0100 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option In-Reply-To: <20171113174014.2c141979@fsol> References: <20171113174014.2c141979@fsol> Message-ID: 2017-11-13 17:40 GMT+01:00 Antoine Pitrou : > I would personally not add `-b` in those options. I think it was > useful while porting stuff to 3.x, but not so much these days. You should consider yourself lucky if you have finished porting all your code to Python 3. It's not my case yet :-) (I'm thinking of code that I have to port, not only code that I wrote myself.) I confirm that I usually use -b while I'm porting code from Python 2 to Python 3. So, usually I know that I will get Python 3 compatibility issues. Sometimes, you may still get Python 3 compatibility issues while the project is already tagged as "compatible with Python 3", just because you get into a code path which wasn't properly tested before. Well, I don't have a strong opinion on -b. 
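For illustration, this is the kind of mixed comparison that -b flags during a port (check_path.py is a hypothetical example):

    $ cat check_path.py
    names = [b'PATH']          # bytes, e.g. obtained from os.environb
    print('PATH' in names)     # str vs bytes: always False on Python 3

    $ python3 -b check_path.py
    check_path.py:2: BytesWarning: Comparison between bytes and string
    False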
Victor From storchaka at gmail.com Mon Nov 13 11:50:51 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 13 Nov 2017 18:50:51 +0200 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option In-Reply-To: <20171113174014.2c141979@fsol> References: <20171113174014.2c141979@fsol> Message-ID: 13.11.17 18:40, Antoine Pitrou wrote: > On Mon, 13 Nov 2017 17:08:06 +0100 > Victor Stinner wrote: >> In short: >> python3.7 -X dev script.py >> behaves as: >> PYTHONMALLOC=debug python3.7 -Wd -b -X faulthandler script.py > > I would personally not add `-b` in those options. I think it was > useful while porting stuff to 3.x, but not so much these days. I concur with Antoine. From antoine at python.org Mon Nov 13 11:51:06 2017 From: antoine at python.org (Antoine Pitrou) Date: Mon, 13 Nov 2017 17:51:06 +0100 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option In-Reply-To: References: <20171113174014.2c141979@fsol> Message-ID: On 13/11/2017 17:46, Victor Stinner wrote: > 2017-11-13 17:40 GMT+01:00 Antoine Pitrou : >> I would personally not add `-b` in those options. I think it was >> useful while porting stuff to 3.x, but not so much these days. > > You should consider yourself lucky if you have finished porting all your > code to Python 3. It's not my case yet :-) (I'm thinking of code that > I have to port, not only code that I wrote myself.) The main issue I have with `-b` is actually that you can get spurious warnings about properly working code. You can also get warnings in well-tested third-party libraries, e.g.: distributed/tests/test_client.py::test_get_versions /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/pandas/core/dtypes/common.py:20: BytesWarning: Comparison between bytes and string for t in ['O', 'int8', 'uint8', 'int16', 'uint16', /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/pandas/io/packers.py:231: BytesWarning: Comparison between bytes and string 7: np.dtype('int64'), distributed/tests/test_client.py::test_serialize_collections_of_futures /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/numpy/core/numeric.py:583: BytesWarning: Comparison between bytes and string return array(a, dtype, copy=False, order=order, subok=True) distributed/tests/test_client.py::test_serialize_collections_of_futures /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/numpy/core/numeric.py:583: BytesWarning: Comparison between bytes and string return array(a, dtype, copy=False, order=order, subok=True) (this is an excerpt of the warnings I got by running our test suite with "python -b") Regards Antoine. From victor.stinner at gmail.com Mon Nov 13 12:19:07 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 13 Nov 2017 18:19:07 +0100 Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option In-Reply-To: References: <20171113174014.2c141979@fsol> Message-ID: 2017-11-13 17:51 GMT+01:00 Antoine Pitrou : > The main issue I have with `-b` is actually that you can get spurious > warnings about properly working code. 
You can also get warnings in > well-tested third-party libraries, e.g.: > > distributed/tests/test_client.py::test_get_versions > /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/pandas/core/dtypes/common.py:20: BytesWarning: Comparison between bytes and string > for t in ['O', 'int8', 'uint8', 'int16', 'uint16', > /home/antoine/miniconda3/envs/dask36/lib/python3.6/site-packages/pandas/io/packers.py:231: BytesWarning: Comparison between bytes and string > 7: np.dtype('int64'), Oh right, that's a very good reason to not include the -b option in the -X dev mode ;-) Usually, I mostly care about ResourceWarning and DeprecationWarning warnings. PYTHONMALLOC=debug and -X faulthandler just come "for free", they don't change the behaviour like -b does, and should help to debug crashes. --- By the way, my worst memory of BytesWarning is when I implemented/fixed (I don't recall) os.get_exec_path(): # {b'PATH': ...}.get('PATH') and {'PATH': ...}.get(b'PATH') emit a # BytesWarning when using python -b or python -bb: ignore the warning with warnings.catch_warnings(): warnings.simplefilter("ignore", BytesWarning) ... I really dislike this code since warnings.catch_warnings() is process-wide and so impacts other threads :-( (Maybe Yury's PEP "context variables" would help here? ;-)) Victor From solipsis at pitrou.net Mon Nov 13 12:27:40 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 13 Nov 2017 18:27:40 +0100 Subject: [Python-Dev] process-wide warning settings References: <20171113174014.2c141979@fsol> Message-ID: <20171113182740.5ba09cdb@fsol> On Mon, 13 Nov 2017 18:19:07 +0100 Victor Stinner wrote: > > I really dislike this code since warnings.catch_warnings() is > process-wide and so impacts other threads :-( > > (Maybe Yury's PEP "context variables" would help here? ;-)) They can't really help unless we break the catch_warnings() API to make it affect some kind of thread-local warning settings (which don't exist currently). Regards Antoine. From greg.ewing at canterbury.ac.nz Mon Nov 13 16:36:32 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 14 Nov 2017 10:36:32 +1300 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A08CD89.90406@canterbury.ac.nz> Message-ID: <5A0A1060.7080506@canterbury.ac.nz> Guido van Rossum wrote: > But Python's syntax changes in nearly every release. The changes are almost always additions, so there's no reason why the AST can't remain backwards compatible. > the AST level ... elides many details > (such as whitespace and parentheses). That's okay, because the AST is only expected to represent the semantics of Python code, not its exact lexical representation in the source. It's the same with Lisp -- comments and whitespace have been stripped out by the time you get to Lisp data. > Lisp had almost no syntax so I presume the mapping to data structures > was nearly trivial compared to Python. Yes, the Python AST is more complicated, but we already have that much complexity in the AST being used by the compiler. If I understand correctly, we also have a process for converting that internal structure to and from an equally complicated set of Python objects, that isn't needed by the compiler and exists purely for the convenience of Python code. I can't see much complexity being added if we were to decide to standardise the Python representation. 
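For what it's worth, the ast module already exposes such a representation today; the open question is only whether some form of it can be guaranteed stable across versions. A quick illustration (Python 3.6 output):

    >>> import ast
    >>> tree = ast.parse("x + 1", mode="eval")
    >>> ast.dump(tree)
    "Expression(body=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Num(n=1)))"
    >>> compile(tree, "<ast>", "eval")  # and it round-trips back into code
    <code object <module> at 0x...>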
-- Greg From brett at python.org Mon Nov 13 16:46:22 2017 From: brett at python.org (Brett Cannon) Date: Mon, 13 Nov 2017 21:46:22 +0000 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: <5A0A1060.7080506@canterbury.ac.nz> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A08CD89.90406@canterbury.ac.nz> <5A0A1060.7080506@canterbury.ac.nz> Message-ID: On Mon, Nov 13, 2017, 13:37 Greg Ewing, wrote: > Guido van Rossum wrote: >> But Python's syntax changes in nearly every release. >> > > The changes are almost always additions, so there's no > reason why the AST can't remain backwards compatible. > > > the AST level ... elides many details > > (such as whitespace and parentheses). > > That's okay, because the AST is only expected to > represent the semantics of Python code, not its > exact lexical representation in the source. It's > the same with Lisp -- comments and whitespace have > been stripped out by the time you get to Lisp > data. > > > Lisp had almost no syntax so I presume the mapping to data structures > > was nearly trivial compared to Python. > > Yes, the Python AST is more complicated, but we > already have that much complexity in the AST being > used by the compiler. > > If I understand correctly, we also have a process > for converting that internal structure to and from > an equally complicated set of Python objects, that > isn't needed by the compiler and exists purely for > the convenience of Python code. > The internal and stdlib AST are generated from the AST definition, which is written in a DSL (ASDL). The conversion code is also auto-generated. (The devguide has the details.) > I can't see much complexity being added if we were > to decide to standardise the Python representation. > Do you have a specific example in a recent release where a change was made that you disapproved of? Those of us who have mutated the AST have tried to not be gratuitous in the changes to begin with while also not letting the AST make maintaining Python harder. Plus there are external libraries like typed_ast and astroid that try to make the AST uniform across releases so we don't have to worry quite as much about this explicitly. -brett > -- > Greg > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Nov 13 16:59:22 2017 From: brett at python.org (Brett Cannon) Date: Mon, 13 Nov 2017 21:59:22 +0000 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID: On Sun, Nov 12, 2017, 10:22 Koos Zevenhoven, wrote: > On Nov 12, 2017 19:10, "Guido van Rossum" wrote: > > On Sun, Nov 12, 2017 at 4:14 AM, Koos Zevenhoven > wrote: > >> So actually my question is: What should happen when the annotation is >> already a string literal? >> > > The PEP answers that clearly (under Implementation): > > > If an annotation was already a string, this string is preserved > > verbatim. 
> > > Oh sorry, I was looking for a spec, so I somehow assumed I could ignore the > gory implementation details just like I routinely ignore things like > headers and footers of emails. > > There are two things I don't understand here: > > * What does it mean to preserve the string verbatim? No matter how I read > it, I can't tell if it's with quotes or without. > > Maybe I'm missing some context. > I believe the string passes through unchanged (i.e. no quotes). Think of the PEP as simply turning all non-string annotations into string ones. -brett > > -- Koos (mobile) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Nov 13 18:29:35 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 13 Nov 2017 15:29:35 -0800 Subject: [Python-Dev] PEP 561 rework In-Reply-To: References: Message-ID: Hi Ethan! This is a nice piece of work. I expect to accept it pretty much verbatim (with some small edits, see https://github.com/python/peps/pull/467). I agree with Nick that we don't have to do anything specifically about control of foo_stubs packages -- nor do I think we need to worry about foo_stubs vs. foo-stubs. Everyone else: if you think this should not go through, now's the time to reply-all here! --Guido On Mon, Nov 13, 2017 at 7:58 AM, Ivan Levkivskyi wrote: > Thanks Ethan for all the work! > > I will be glad to see this accepted and implemented in mypy. > > -- > Ivan > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Nov 13 18:40:22 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 13 Nov 2017 15:40:22 -0800 Subject: [Python-Dev] Standardise the AST (Re: PEP 563: Postponed Evaluation of Annotations) In-Reply-To: <5A0A1060.7080506@canterbury.ac.nz> References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> <5A08CD89.90406@canterbury.ac.nz> <5A0A1060.7080506@canterbury.ac.nz> Message-ID: Can you give any examples of problems caused by the ast not being standardized? The original motivation of being able to distinguish between foo(x: int) foo(x: "int") isn't very compelling -- it's not clear it's a problem in the first place, and even if it is then all we need is some kind of boolean flag, not an ast standard. On Nov 13, 2017 13:38, "Greg Ewing" wrote: > Guido van Rossum wrote: >> But Python's syntax changes in nearly every release. >> > > The changes are almost always additions, so there's no > reason why the AST can't remain backwards compatible. > > the AST level ... elides many details (such as whitespace and parentheses). >> > > That's okay, because the AST is only expected to > represent the semantics of Python code, not its > exact lexical representation in the source. It's > the same with Lisp -- comments and whitespace have > been stripped out by the time you get to Lisp > data. 
> Lisp had almost no syntax so I presume the mapping to data structures was
>> nearly trivial compared to Python.
>>
>
> Yes, the Python AST is more complicated, but we
> already have that much complexity in the AST being
> used by the compiler.
>
> If I understand correctly, we also have a process
> for converting that internal structure to and from
> an equally complicated set of Python objects, that
> isn't needed by the compiler and exists purely for
> the convenience of Python code.
>
> I can't see much complexity being added if we were
> to decide to standardise the Python representation.
>
> --
> Greg
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/njs%40pobox.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guido at python.org  Mon Nov 13 19:23:03 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 13 Nov 2017 16:23:03 -0800
Subject: [Python-Dev] PEP 549 vs. PEP 562
Message-ID:

I've pondered two PEPs that are in (friendly) competition with each other:

- PEP 549 -- Instance Descriptors (Larry Hastings)
- PEP 562 -- Module __getattr__ (Ivan Levkivskyi)

In the end I am *rejecting* PEP 549 and I hope to *accept* PEP 562, with a
small addition to the latter to also support overriding __dir__ (so that
one can provide a __dir__ implementation that matches the __getattr__
implementation, or perhaps counteracts it, in case one wants deprecated
attributes to be omitted from dir() but still usable). The __dir__
addition is mentioned here:
https://github.com/ilevkivskyi/cpython/pull/3#issuecomment-343591293 .

A bit more motivation for my choice: re-reading PEP 549 reminded me of how
its implementation is remarkably subtle (invoking Armin Rigo; for more
details read https://www.python.org/dev/peps/pep-0549/#implementation). On
the contrary, the implementation of PEP 562 is much simpler. With the Zen
of Python in mind, this gives a hint that it is the better idea, and
possibly even a good idea.

Ivan, once you've added the __dir__ thing to your PEP, please post it to
python-dev to solicit review from its (larger) readership.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From srittau at rittau.biz  Mon Nov 13 18:50:28 2017
From: srittau at rittau.biz (Sebastian Rittau)
Date: Tue, 14 Nov 2017 00:50:28 +0100
Subject: [Python-Dev] PEP 561 rework
In-Reply-To:
References:
Message-ID: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>

Hello everyone,

On 14.11.2017 00:29, Guido van Rossum wrote:
> This is a nice piece of work. I expect to accept it pretty much
> verbatim (with some small edits, see
> https://github.com/python/peps/pull/467). I agree with Nick that we
> don't have to do anything specifically about control of foo_stubs
> packages -- nor do I think we need to worry about foo_stubs vs. foo-stubs.
>
> Everyone else: if you think this should not go through, now's the time
> to reply-all here!

I am really looking forward to the implementation of this PEP and I am
glad that it is close to acceptance. One thing that is not really clear
to me is how module-only packages are handled. Say I have a package "foo"
that installs the file "foo.py" to site-packages, where would I install
"foo.pyi" and py.typed to?
Or is this case not supported and I have to convert the foo module into a
package containing just __init__.py?

 - Sebastian

From ncoghlan at gmail.com  Mon Nov 13 20:05:59 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 14 Nov 2017 11:05:59 +1000
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To:
References:
Message-ID:

On 14 November 2017 at 02:27, Victor Stinner wrote:
>> The change proposed in this PEP is to update the default warning filter list
>> to be::
>>
>>     default::DeprecationWarning:__main__
>>     ignore::DeprecationWarning
>>     ignore::PendingDeprecationWarning
>>     ignore::ImportWarning
>>     ignore::BytesWarning
>>     ignore::ResourceWarning
>
> This PEP can break applications parsing Python stderr, applications
> which don't expect to get DeprecationWarning in their output.

Right, but anything affected by this would eventually have broken anyway,
when the change being warned about actually happens.

That said, reducing the likelihood of breaking application output parsers
is part of the rationale for restricting the change to unpackaged scripts
(we know from the initial PEP 538 implementation that there's plenty of
code out there that doesn't cope well with unexpected lines appearing on
stderr).

> Is it possible to disable this PEP using a command line option and/or
> environment variable to get the Python 3.6 behaviour (always
> DeprecationWarning)?

The patch for the PEP will also update the documentation for the
`PYTHONWARNINGS` environment variable to explicitly call out the following
settings:

    PYTHONWARNINGS=error   # Convert to exceptions
    PYTHONWARNINGS=always  # Warn every time
    PYTHONWARNINGS=default # Warn once per call location
    PYTHONWARNINGS=module  # Warn once per calling module
    PYTHONWARNINGS=once    # Warn once per Python process
    PYTHONWARNINGS=ignore  # Never warn

And then also cover their respective shorthand command line equivalents
(`-We`, `-Wa`, `-Wd`, `-Wm`, `-Wo`, `-Wi`).

While https://docs.python.org/3/using/cmdline.html#cmdoption-w does
currently explain this, neither it nor
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONWARNINGS shows
specific examples. They also don't link directly to
https://docs.python.org/3/library/warnings.html#the-warnings-filter, and
that section doesn't explain the shorthand `:`-separated notation used in
sys.warnoptions.

> I guess that it's
> "PYTHONWARNINGS=ignore::DeprecationWarning:__main__". Am I right?

`PYTHONWARNINGS=ignore::DeprecationWarning` would be the shortest way to
suppress deprecations everywhere again while still getting other warnings.

> Would you mind mentioning that in the PEP, please?

Will do - I actually meant to cover this anyway (hence the reference to
docs changes in the abstract), but I missed it in the initial draft.

> Sorry, I'm not an expert of the warnings module. Is it possible to
> also configure Python to ignore DeprecationWarning using the warnings
> module, at the start of the __main__ script? Something like
> warnings.filterwarnings("ignore", '', DeprecationWarning)? Again,
> maybe explain that in the PEP?

`warnings.simplefilter("ignore", DeprecationWarning)` is the simplest
runtime option for ensuring that deprecation warnings are never displayed.
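For instance, an application author who never wants their users to see
deprecation warnings could put something like this at the top of the main
module (a minimal sketch):

    import warnings
    # Silence all DeprecationWarnings for this process, regardless of
    # which module triggers them
    warnings.simplefilter("ignore", DeprecationWarning)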
The downside of doing this programmatically is that you can end up
overriding environmental and command line settings, so the best
application level "Only show warnings if my users ask for them" snippet
actually looks like:

    if not sys.warnoptions:
        warnings.simplefilter("ignore")

(I'll mention this in the PEP and docs patch as well)

Cheers, Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From njs at pobox.com  Mon Nov 13 20:38:39 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 13 Nov 2017 17:38:39 -0800
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To:
References: <20171113114637.280f3c54@fsol> <20171113132930.5496e80b@fsol>
Message-ID:

On Mon, Nov 13, 2017 at 6:09 AM, Serhiy Storchaka wrote:
> 13.11.17 14:29, Antoine Pitrou wrote:
>> On Mon, 13 Nov 2017 22:37:46 +1100
>> Chris Angelico wrote:
>>> On Mon, Nov 13, 2017 at 9:46 PM, Antoine Pitrou wrote:
>>>> On Sun, 12 Nov 2017 19:48:28 -0800
>>>> Nathaniel Smith wrote:
>>>>> On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote:
>>>>>> This change will lead to DeprecationWarning being displayed by default
>>>>>> for:
>>>>>>
>>>>>> * code executed directly at the interactive prompt
>>>>>> * code executed directly as part of a single-file script
>>>>>
>>>>> Technically it's orthogonal, but if you're trying to get better
>>>>> warnings in the REPL, then you might also want to look at:
>>>>>
>>>>> https://bugs.python.org/issue1539925
>>>>> https://github.com/ipython/ipython/issues/6611
>>>>
>>>> Depends what you call "better". Personally, I don't want to see
>>>> warnings each and every time I use a deprecated or questionable
>>>> construct or API from the REPL.
>>>
>>> Isn't that the entire *point* of warnings? When you're working at the
>>> REPL, you're the one in control of which APIs you use, so you should
>>> be the one to know about deprecations.
>>
>> If I see a warning once every REPL session, I know about the deprecation
>> already, thank you. I don't need to be taken by the hand like a little
>> child. Besides, the code I write in the REPL is not meant for durable
>> use.
>
> Hmm, now I see that Nathaniel's simple solution is not completely correct.
> If the warning action is 'module', it should be emitted only once if used
> directly in the REPL, because '__main__' is the same module.

True. The fundamental problem is that generally, Python uses (filename,
lineno) pairs to identify lines of code. But (a) the warning module
assumes that for each namespace dict, there is a unique mapping between
line numbers and lines of code, so it ignores filename and just keys off
lineno, and (b) the REPL re-uses the same (file, lineno) for different
lines of code anyway. So I guess the fully correct solution would be to
use a unique "filename" when compiling each block of code -- e.g. the REPL
could do the equivalent of compile(<code>, "REPL[1]", ...) for the first
line, compile(<code>, "REPL[2]", ...) for the second line, etc. -- and
then also teach the warnings module's duplicate detection logic to key off
of (file, lineno) pairs instead of just lineno.
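As a very rough sketch of the compilation half (the "REPL[n]"
pseudo-filenames here are purely for illustration):

    entry_count = 0
    while True:
        source = input(">>> ")   # heavily simplified REPL loop
        entry_count += 1
        # Give every entry its own pseudo-filename, so that warnings
        # raised at "line 1" of different entries are not conflated.
        exec(compile(source, f"REPL[{entry_count}]", "single"))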
-n

-- 
Nathaniel J. Smith -- https://vorpus.org

From guido at python.org  Mon Nov 13 20:38:02 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 13 Nov 2017 17:38:02 -0800
Subject: [Python-Dev] PEP 561 rework
In-Reply-To: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
References: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
Message-ID:

On Mon, Nov 13, 2017 at 3:50 PM, Sebastian Rittau wrote:

> On 14.11.2017 00:29, Guido van Rossum wrote:
>
>> This is a nice piece of work. I expect to accept it pretty much verbatim
>> (with some small edits, see https://github.com/python/peps/pull/467). I
>> agree with Nick that we don't have to do anything specifically about
>> control of foo_stubs packages -- nor do I think we need to worry about
>> foo_stubs vs. foo-stubs.
>>
>> Everyone else: if you think this should not go through, now's the time to
>> reply-all here!
>>
>
> I am really looking forward to the implementation of this PEP and I am
> glad that it is close to acceptance. One thing that is not really clear to
> me is how module-only packages are handled. Say I have a package "foo" that
> installs the file "foo.py" to site-packages, where would I install
> "foo.pyi" and py.typed to? Or is this case not supported and I have to
> convert the foo module into a package containing just __init__.py?
>

Good call. I think that conversion to a package is indeed the best
approach -- it doesn't seem worth it to add more special-casing for this
scenario.

Ethan, if you agree, you should just add a sentence about this to the PEP.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at ethanhs.me  Mon Nov 13 20:38:49 2017
From: ethan at ethanhs.me (Ethan Smith)
Date: Mon, 13 Nov 2017 17:38:49 -0800
Subject: [Python-Dev] PEP 561 rework
In-Reply-To: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
References: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
Message-ID:

On Mon, Nov 13, 2017 at 3:50 PM, Sebastian Rittau wrote:

> Hello everyone,
>
> On 14.11.2017 00:29, Guido van Rossum wrote:
>
>> This is a nice piece of work. I expect to accept it pretty much verbatim
>> (with some small edits, see https://github.com/python/peps/pull/467). I
>> agree with Nick that we don't have to do anything specifically about
>> control of foo_stubs packages -- nor do I think we need to worry about
>> foo_stubs vs. foo-stubs.
>>
>> Everyone else: if you think this should not go through, now's the time to
>> reply-all here!
>>
> I am really looking forward to the implementation of this PEP and I am
> glad that it is close to acceptance. One thing that is not really clear to
> me is how module-only packages are handled. Say I have a package "foo" that
> installs the file "foo.py" to site-packages, where would I install
> "foo.pyi" and py.typed to? Or is this case not supported and I have to
> convert the foo module into a package containing just __init__.py?
>
The PEP as of right now provides no support for module-only distributions.
I don't think it would be possible to support inline typing in module-only
distributions, as a random mymod.py in site/dist-packages is hard to
trust. Refactoring into a package is probably the best option.
Alternatively, I believe a mymod_stubs with just an __init__.pyi might
work as a workaround as well.
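Something like the following layout, I imagine (an untested sketch; the
names are made up):

    mymod.py              # the original untyped module, unchanged
    mymod_stubs/          # a separate, stub-only package
        __init__.pyi      # stubs covering mymod's public API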
Ethan

> - Sebastian
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ethan%40ethanhs.me
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Stephen.Michell at maurya.on.ca  Mon Nov 13 15:55:03 2017
From: Stephen.Michell at maurya.on.ca (Stephen Michell)
Date: Mon, 13 Nov 2017 15:55:03 -0500
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
Message-ID: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>

I am looking for one or two experts to discuss with me how Python
concurrency features fit together, and possible vulnerabilities associated
with that.

TR 24772 lists 5 vulnerabilities associated with

1. activating threads, tasks or pico-threads
2. Directed termination of threads, tasks or pico-threads
3. Premature termination of threads, tasks or pico-threads
4. Concurrent access to data shared between threads, tasks or
   pico-threads, and
5. Lock protocol errors for concurrent entities

I need to document how these appear (or don't appear) in Python. The
writeups would possibly swamp this email reflector, so I am looking for a
small number of people to review these sections of our
language-independent document and discuss with me how these are handled in
Python. I have a good background in these issues, but no relevant
experience with Python.

Please contact me at stephen.michell at maurya.on.ca to respond directly.

Thank you

--stephen michell
Convenor
ISO/IEC/JTC 1/SC 22/WG 23
Programming Language Vulnerabilities Working Group
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Mon Nov 13 21:57:53 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 14 Nov 2017 12:57:53 +1000
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option
In-Reply-To:
References:
Message-ID:

On 14 November 2017 at 02:08, Victor Stinner wrote:
> My "-X dev" idea is not incompatible with Nick's PEP 565 "Show
> DeprecationWarning in __main__" and it's different: it's an opt-in
> option, while Nick wants to change the default behaviour.

I'm +1 on a `-X dev` mode, since it enables a lot of things that are
useful for making an application more robust (extension module debugging,
explicit scope-controlled resource management) that I wouldn't want turned
on at the REPL by default. It also implicitly adjusts over time as we add
more debugging capabilities.

I don't consider it a replacement for tweaking how we handle
DeprecationWarning by default, though :)

Cheers, Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From srittau at rittau.biz  Tue Nov 14 04:02:07 2017
From: srittau at rittau.biz (Sebastian Rittau)
Date: Tue, 14 Nov 2017 10:02:07 +0100
Subject: [Python-Dev] PEP 561 rework
In-Reply-To:
References: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
Message-ID: <5f6d8000-eaed-51fe-b5ce-7e4d5c90470f@rittau.biz>

On 14.11.2017 02:38, Guido van Rossum wrote:
> On Mon, Nov 13, 2017 at 3:50 PM, Sebastian Rittau wrote:
>
>     I am really looking forward to the implementation of this PEP and
>     I am glad that it is close to acceptance. One thing that is not
>     really clear to me is how module-only packages are handled. Say I
>     have a package "foo" that installs the file "foo.py" to
>     site-packages, where would I install "foo.pyi" and py.typed to?
>     Or is this case not supported and I have to convert the foo module
>     into a package containing just __init__.py?
>
> Good call. I think that conversion to a package is indeed the best
> approach -- it doesn't seem worth it to add more special-casing for
> this scenario.
>
> Ethan, if you agree, you should just add a sentence about this to the PEP.
>
This seems like the best solution, especially since setuptools does not
really support installing package data for pure modules.

 - Sebastian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at ethanhs.me  Tue Nov 14 05:10:07 2017
From: ethan at ethanhs.me (Ethan Smith)
Date: Tue, 14 Nov 2017 02:10:07 -0800
Subject: [Python-Dev] PEP 561 rework
In-Reply-To: <5f6d8000-eaed-51fe-b5ce-7e4d5c90470f@rittau.biz>
References: <96fe3b14-dc31-f132-ccc5-346c7d072a66@rittau.biz>
 <5f6d8000-eaed-51fe-b5ce-7e4d5c90470f@rittau.biz>
Message-ID:

A note was added [1] about the solution for module-only distributions and
is live on Python.org.

[1] https://github.com/python/peps/pull/468

Ethan Smith

On Tue, Nov 14, 2017 at 1:02 AM, Sebastian Rittau wrote:

> On 14.11.2017 02:38, Guido van Rossum wrote:
>
>> On Mon, Nov 13, 2017 at 3:50 PM, Sebastian Rittau wrote:
>>
>>     I am really looking forward to the implementation of this PEP and
>>     I am glad that it is close to acceptance. One thing that is not
>>     really clear to me is how module-only packages are handled. Say I
>>     have a package "foo" that installs the file "foo.py" to
>>     site-packages, where would I install "foo.pyi" and py.typed to?
>>     Or is this case not supported and I have to convert the foo module
>>     into a package containing just __init__.py?
>>
>> Good call. I think that conversion to a package is indeed the best
>> approach -- it doesn't seem worth it to add more special-casing for this
>> scenario.
>>
>> Ethan, if you agree, you should just add a sentence about this to the PEP.
>>
> This seems like the best solution, especially since setuptools does not
> really support installing package data for pure modules.
>
> - Sebastian
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ethan%40ethanhs.me
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net  Tue Nov 14 07:15:09 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 14 Nov 2017 13:15:09 +0100
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
Message-ID: <20171114131509.26125738@fsol>

Hi Stephen,

On Mon, 13 Nov 2017 15:55:03 -0500
Stephen Michell wrote:
> I am looking for one or two experts to discuss with me how Python
> concurrency features fit together, and possible vulnerabilities
> associated with that.
>
> TR 24772 lists 5 vulnerabilities associated with

Can you explain what "TR 24772" is? (and/or give a link to a
publicly-available resource)

Regards

Antoine.
From lists at janc.be  Tue Nov 14 08:55:20 2017
From: lists at janc.be (Jan Claeys)
Date: Tue, 14 Nov 2017 14:55:20 +0100
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: <20171114131509.26125738@fsol>
References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol>
Message-ID: <1510667720.21117.137.camel@janc.be>

On Tue, 2017-11-14 at 13:15 +0100, Antoine Pitrou wrote:
> On Mon, 13 Nov 2017 15:55:03 -0500
> Stephen Michell wrote:
> > I am looking for one or two experts to discuss with me how Python
> > concurrency features fit together, and possible vulnerabilities
> > associated with that.
> >
> > TR 24772 lists 5 vulnerabilities associated with
>
> Can you explain what "TR 24772" is? (and/or give a link to a
> publicly-available resource)

Sounds like https://www.iso.org/standard/71094.html
which is updating https://www.iso.org/standard/61457.html
(which you can download from there if you search a bit; clearly either
ISO doesn't have a UI/UX "standard" or they aren't following it...)

-- 
Jan Claeys

From brent.bejot at gmail.com  Tue Nov 14 07:49:58 2017
From: brent.bejot at gmail.com (brent bejot)
Date: Tue, 14 Nov 2017 07:49:58 -0500
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To:
References:
Message-ID:

On Mon, Nov 13, 2017 at 11:33 AM, Victor Stinner wrote:

> If the Python REPL is included in the "run an application" use case,
> the frontier between user and developer becomes blurry :-) Is REPL
> designed for users or developers? Should Python guess the intent of
> the human connected to the keyboard? ...
>
> Victor
>

I don't think folks are counting the Python REPL as "run an application".
People who use the REPL are at least writing Python and should definitely
be informed of deprecations.

+1 for deprecations in the REPL from me; I think it's a great way to
inform a good percentage of the Python devs of upcoming changes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From levkivskyi at gmail.com  Tue Nov 14 15:26:20 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Tue, 14 Nov 2017 21:26:20 +0100
Subject: [Python-Dev] PEP 560
Message-ID:

After some discussion on python-ideas, see
https://mail.python.org/pipermail/python-ideas/2017-September/047220.html,
this PEP received positive comments. The updated version that takes into
account the comments that appeared in the discussion so far is available
at https://www.python.org/dev/peps/pep-0560/

Here I post the full text for convenience:

++++++++++++++++++++++++++

PEP: 560
Title: Core support for typing module and generic types
Author: Ivan Levkivskyi
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 03-Sep-2017
Python-Version: 3.7
Post-History: 09-Sep-2017

Abstract
========

Initially PEP 484 was designed in such a way that it would not introduce
*any* changes to the core CPython interpreter. Now type hints and
the ``typing`` module are extensively used by the community, e.g. PEP 526
and PEP 557 extend the usage of type hints, and the backport of ``typing``
on PyPI has 1M downloads/month. Therefore, this restriction can be
removed. It is proposed to add two special methods ``__class_getitem__``
and ``__mro_entries__`` to the core CPython for better support of generic
types.

Rationale
=========

The restriction to not modify the core CPython interpreter led to some
design decisions that became questionable when the ``typing`` module
started to be widely used.
There are three main points of concern: performance of the ``typing``
module, metaclass conflicts, and the large number of hacks currently used
in ``typing``.

Performance
-----------

The ``typing`` module is one of the heaviest and slowest modules in
the standard library even with all the optimizations made. Mainly this is
because subscripted generic types (see PEP 484 for definition of terms
used in this PEP) are class objects (see also [1]_). There are three main
ways in which performance can be improved with the help of the proposed
special methods:

- Creation of generic classes is slow since ``GenericMeta.__new__`` is
  very slow; we will not need it anymore.

- The very long MROs for generic classes will be roughly halved; they are
  present because we duplicate the ``collections.abc`` inheritance chain
  in ``typing``.

- Instantiation of generic classes will be faster (this is minor however).

Metaclass conflicts
-------------------

All generic types are instances of ``GenericMeta``, so if a user uses
a custom metaclass, then it is hard to make a corresponding class generic.
This is particularly hard for library classes that a user doesn't control.
A workaround is to always mix-in ``GenericMeta``::

    class AdHocMeta(GenericMeta, LibraryMeta):
        pass

    class UserClass(LibraryBase, Generic[T], metaclass=AdHocMeta):
        ...

but this is not always practical or even possible. With the help of the
proposed special attributes the ``GenericMeta`` metaclass will not be
needed.

Hacks and bugs that will be removed by this proposal
----------------------------------------------------

- ``_generic_new`` hack that exists because ``__init__`` is not called on
  instances with a type differing from the type whose ``__new__`` was
  called, ``C[int]().__class__ is C``.

- ``_next_in_mro`` speed hack will not be necessary since subscription
  will not create new classes.

- Ugly ``sys._getframe`` hack. This one is particularly nasty since it
  looks like we can't remove it without changes outside ``typing``.

- Currently generics do dangerous things with private ABC caches to fix
  large memory consumption that grows at least as O(N\ :sup:`2`), see
  [2]_. This point is also important because it was recently proposed to
  re-implement ``ABCMeta`` in C.

- Problems with sharing attributes between subscripted generics, see
  [3]_. The current solution already uses ``__getattr__`` and
  ``__setattr__``, but it is still incomplete, and solving this without
  the current proposal will be hard and will need ``__getattribute__``.

- ``_no_slots_copy`` hack, where we clean up the class dictionary on every
  subscription thus allowing generics with ``__slots__``.

- General complexity of the ``typing`` module. The new proposal will not
  only allow removing the above-mentioned hacks/bugs, but also simplify
  the implementation, so that it will be easier to maintain.

Specification
=============

``__class_getitem__``
---------------------

The idea of ``__class_getitem__`` is simple: it is an exact analog of
``__getitem__`` with an exception that it is called on a class that
defines it, not on its instances. This allows us to avoid
``GenericMeta.__getitem__`` for things like ``Iterable[int]``. The
``__class_getitem__`` is automatically a class method and does not require
the ``@classmethod`` decorator (similar to ``__init_subclass__``) and is
inherited like normal attributes.
For example::

    class MyList:
        def __getitem__(self, index):
            return index + 1
        def __class_getitem__(cls, item):
            return f"{cls.__name__}[{item.__name__}]"

    class MyOtherList(MyList):
        pass

    assert MyList()[0] == 1
    assert MyList[int] == "MyList[int]"
    assert MyOtherList()[0] == 1
    assert MyOtherList[int] == "MyOtherList[int]"

Note that this method is used as a fallback, so if a metaclass defines
``__getitem__``, then that will have the priority.

``__mro_entries__``
-------------------

If an object that is not a class object appears in the bases of a class
definition, then ``__mro_entries__`` is searched on it. If found,
it is called with the original tuple of bases as an argument. The result
of the call must be a tuple, that is unpacked in the bases classes in place
of this object. (If the tuple is empty, this means that the original bases
is simply discarded.) Using the method API instead of just an attribute is
necessary to avoid inconsistent MRO errors, and to perform other
manipulations that are currently done by ``GenericMeta.__new__``.

After creating the class, the original bases are saved in
``__orig_bases__`` (currently this is also done by the metaclass).
For example::

    class GenericAlias:
        def __init__(self, origin, item):
            self.origin = origin
            self.item = item
        def __mro_entries__(self, bases):
            return (self.origin,)

    class NewList:
        def __class_getitem__(cls, item):
            return GenericAlias(cls, item)

    class Tokens(NewList[int]):
        ...

    assert Tokens.__bases__ == (NewList,)
    assert Tokens.__orig_bases__ == (NewList[int],)
    assert Tokens.__mro__ == (Tokens, NewList, object)

NOTE: These two method names are reserved for use by the ``typing`` module
and the generic types machinery, and any other use is discouraged.
The reference implementation (with tests) can be found in [4]_, and
the proposal was originally posted and discussed on the ``typing``
tracker, see [5]_.

Dynamic class creation and ``types.resolve_bases``
--------------------------------------------------

``type.__new__`` will not perform any MRO entry resolution, so a direct
call ``type('Tokens', (List[int],), {})`` will fail. This is done for
performance reasons and to minimize the number of implicit
transformations. Instead, a helper function ``resolve_bases`` will be
added to the ``types`` module to allow an explicit ``__mro_entries__``
resolution in the context of dynamic class creation. Correspondingly,
``types.new_class`` will be updated to reflect the new class creation
steps while maintaining the backwards compatibility::

    def new_class(name, bases=(), kwds=None, exec_body=None):
        resolved_bases = resolve_bases(bases)  # This step is added
        meta, ns, kwds = prepare_class(name, resolved_bases, kwds)
        if exec_body is not None:
            exec_body(ns)
        cls = meta(name, resolved_bases, ns, **kwds)
        cls.__orig_bases__ = bases  # This step is added
        return cls
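For example, with the ``NewList`` class defined above, dynamic creation
would then work as follows (an illustrative sketch based on the reference
implementation)::

    import types

    # Same result as ``class Tokens(NewList[int]): ...``
    Tokens = types.new_class('Tokens', (NewList[int],))
    assert Tokens.__bases__ == (NewList,)
    assert Tokens.__orig_bases__ == (NewList[int],)

    # ``type()`` can still be used, but only after explicit resolution
    resolved = types.resolve_bases((NewList[int],))
    assert resolved == (NewList,)
    Plain = type('Plain', resolved, {})
    assert Plain.__bases__ == (NewList,)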
Backwards compatibility and impact on users who don't use ``typing``
====================================================================

This proposal may break code that currently uses the names
``__class_getitem__`` and ``__mro_entries__``. (But the language reference
explicitly reserves *all* undocumented dunder names, and allows "breakage
without warning"; see [6]_.)

This proposal will support almost complete backwards compatibility with
the current public generic types API; moreover the ``typing`` module is
still provisional. The only two exceptions are that currently
``issubclass(List[int], List)`` returns True, while with this proposal it
will raise ``TypeError``, and ``repr()`` of unsubscripted user-defined
generics cannot be tweaked and will coincide with ``repr()`` of normal
(non-generic) classes.

With the reference implementation I measured negligible performance
effects (under 1% on a micro-benchmark) for regular (non-generic) classes.
At the same time performance of generics is significantly improved:

* ``importlib.reload(typing)`` is up to 7x faster

* Creation of user defined generic classes is up to 4x faster (on a
  micro-benchmark with an empty body)

* Instantiation of generic classes is up to 5x faster (on a
  micro-benchmark with an empty ``__init__``)

* Other operations with generic types and instances (like method lookup
  and ``isinstance()`` checks) are improved by around 10-20%

* The only aspect that gets slower with the current proof of concept
  implementation is the subscripted generics cache look-up. However it was
  already very efficient, so this aspect gives negligible overall impact.

References
==========

.. [1] Discussion following Mark Shannon's presentation at Language Summit
   (https://github.com/python/typing/issues/432)

.. [2] Pull Request to implement shared generic ABC caches (merged)
   (https://github.com/python/typing/pull/383)

.. [3] An old bug with setting/accessing attributes on generic types
   (https://github.com/python/typing/issues/392)

.. [4] The reference implementation
   (https://github.com/ilevkivskyi/cpython/pull/2/files,
   https://github.com/ilevkivskyi/cpython/tree/new-typing)

.. [5] Original proposal
   (https://github.com/python/typing/issues/468)

.. [6] Reserved classes of identifiers
   (https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers)

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From levkivskyi at gmail.com  Tue Nov 14 15:34:30 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Tue, 14 Nov 2017 21:34:30 +0100
Subject: [Python-Dev] PEP 562
Message-ID:

After discussion on python-ideas, it looks like this PEP is moving towards
a favorable decision. For a recent discussion see
https://mail.python.org/pipermail/python-ideas/2017-November/047806.html.
The PEP is available at https://www.python.org/dev/peps/pep-0562/

The most important recent change is the addition of __dir__, as proposed
by Guido. Here is the full text:

+++++++++++++++++++++

PEP: 562
Title: Module __getattr__ and __dir__
Author: Ivan Levkivskyi
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Sep-2017
Python-Version: 3.7
Post-History: 09-Sep-2017

Abstract
========

It is proposed to support ``__getattr__`` and ``__dir__`` functions
defined on modules to provide basic customization of module attribute
access.

Rationale
=========

It is sometimes convenient to customize or otherwise have control over
access to module attributes. A typical example is managing deprecation
warnings. Typical workarounds are assigning ``__class__`` of a module
object to a custom subclass of ``types.ModuleType`` or replacing the
``sys.modules`` item with a custom wrapper instance.
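The first of these workarounds, for instance, currently looks roughly like
this (a sketch)::

    # lib.py

    import sys
    import types

    class _ModuleWrapper(types.ModuleType):
        def __getattr__(self, name):
            # deprecation warnings, lazy loading, etc. would go here
            raise AttributeError(f"module {self.__name__!r} "
                                 f"has no attribute {name!r}")

    sys.modules[__name__].__class__ = _ModuleWrapper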
It would be convenient to simplify this
procedure by recognizing ``__getattr__`` defined directly in a module that
would act like a normal ``__getattr__`` method, except that it will be
defined on module *instances*. For example::

    # lib.py

    from warnings import warn

    deprecated_names = ["old_function", ...]

    def _deprecated_old_function(arg, other):
        ...

    def __getattr__(name):
        if name in deprecated_names:
            warn(f"{name} is deprecated", DeprecationWarning)
            return globals()[f"_deprecated_{name}"]
        raise AttributeError(f"module {__name__} has no attribute {name}")

    # main.py

    from lib import old_function  # Works, but emits the warning

Another widespread use case for ``__getattr__`` would be lazy submodule
imports. Consider a simple example::

    # lib/__init__.py

    import importlib

    __all__ = ['submod', ...]

    def __getattr__(name):
        if name in __all__:
            return importlib.import_module("." + name, __name__)
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

    # lib/submod.py

    print("Submodule loaded")
    class HeavyClass:
        ...

    # main.py

    import lib
    lib.submod.HeavyClass  # prints "Submodule loaded"

There is a related proposal PEP 549 that proposes to support instance
properties for a similar functionality. The difference is this PEP
proposes a faster and simpler mechanism, but provides more basic
customization. An additional motivation for this proposal is that PEP 484
already defines the use of module ``__getattr__`` for this purpose in
Python stub files, see [1]_.

In addition, to allow modifying the result of a ``dir()`` call on a module
to show deprecated and other dynamically generated attributes, it is
proposed to support a module level ``__dir__`` function. For example::

    # lib.py

    deprecated_names = ["old_function", ...]
    __all__ = ["new_function_one", "new_function_two", ...]

    def new_function_one(arg, other):
        ...
    def new_function_two(arg, other):
        ...

    def __dir__():
        return sorted(__all__ + deprecated_names)

    # main.py

    import lib
    dir(lib)  # prints ["new_function_one", "new_function_two", "old_function", ...]

Specification
=============

The ``__getattr__`` function at the module level should accept one
argument which is the name of an attribute and return the computed value
or raise an ``AttributeError``::

    def __getattr__(name: str) -> Any: ...

This function will be called only if ``name`` is not found in the module
through the normal attribute lookup.

The ``__dir__`` function should accept no arguments, and return a list of
strings that represents the names accessible on the module::

    def __dir__() -> List[str]: ...

If present, this function overrides the standard ``dir()`` search on
a module.

The reference implementation for this PEP can be found in [2]_.

Backwards compatibility and impact on performance
=================================================

This PEP may break code that uses module level (global) names
``__getattr__`` and ``__dir__``. The performance implications of this PEP
are minimal, since ``__getattr__`` is called only for missing attributes.

Discussion
==========

Note that the use of module ``__getattr__`` requires care to keep the
referred objects pickleable. For example, the ``__name__`` attribute of
a function should correspond to the name with which it is accessible via
``__getattr__``::

    def keep_pickleable(func):
        func.__name__ = func.__name__.replace('_deprecated_', '')
        func.__qualname__ = func.__qualname__.replace('_deprecated_', '')
        return func

    @keep_pickleable
    def _deprecated_old_function(arg, other):
        ...
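With these names kept consistent, the deprecated function survives a
pickle round-trip (an illustrative check using the ``lib.py`` example
above)::

    import pickle
    import lib

    func = lib.old_function     # emits the DeprecationWarning
    assert func.__name__ == "old_function"
    assert pickle.loads(pickle.dumps(func)) is func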
One should also be careful to avoid recursion, as one would with a class
level ``__getattr__``.

References
==========

.. [1] PEP 484 section about ``__getattr__`` in stub files
   (https://www.python.org/dev/peps/pep-0484/#stub-files)

.. [2] The reference implementation
   (https://github.com/ilevkivskyi/cpython/pull/3/files)

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From armin.rigo at gmail.com  Tue Nov 14 18:09:29 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Wed, 15 Nov 2017 00:09:29 +0100
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To: <20171108102857.649651c9@fsol>
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
 <20171107202803.24931cf6@fsol> <20171108102857.649651c9@fsol>
Message-ID:

Hi Antoine,

On 8 November 2017 at 10:28, Antoine Pitrou wrote:
> Yet, PyPy has no reference counting, and it doesn't seem to be a cause
> of concern. Broken code is fixed along the way, when people notice.

It is a major cause of concern. This is the main blocker for pure-Python
code compatibility between CPython and all other implementations of
Python, but most of the Python community plays nice and has long said
"don't do that". As a result, nowadays, people generally know better than
to rely on deterministic __del__, and the language evolution usually cares
not to add new dependencies on that. The problem is mostly confined to
some pre-existing large code bases (like OpenStack), where no good
solution exists.

It's roughly OK to have one big blocker that people need to know about.
I don't think it's anywhere close to OK to have a myriad of small ones.
PyPy has worked very hard to get where it is now, and continues to
regularly "fix" very obscure compatibility issues where CPython's
behaviour differs from its own documentation---by copying the CPython
actual behaviour, of course.

A bientôt,

Armin.

From ericsnowcurrently at gmail.com  Tue Nov 14 18:54:15 2017
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Tue, 14 Nov 2017 16:54:15 -0700
Subject: [Python-Dev] The current dict is not an "OrderedDict"
In-Reply-To:
References: <20171107134846.GA2683@bytereef.org> <20171107153229.70ad6ad6@fsol>
Message-ID:

On Nov 7, 2017 08:12, "INADA Naoki" wrote:

Additionally, class namespace should keep insertion order. It's in the
language spec from 3.6. So we should have two modes for such optimization.
It makes dict more complicated.

FWIW, PEP 520 (Preserving Class Attribute Definition Order) originally
specified leaving the class namespace alone. Instead, the default class
*definition* namespace was changed to OrderedDict, and the ordering from
that namespace was stored as a tuple of names in a new
__definition_order__ attribute on classes. That approach effectively
decoupled the final class namespace from the proposed feature.

If it's an issue now then we might consider reviving __definition_order__
(which, as a bonus, has other minor benefits). However, I expect we will
make the current dict implementation's behavior official, which renders
any changes unnecessary.

-eric
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Wed Nov 15 02:06:46 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 15 Nov 2017 17:06:46 +1000
Subject: [Python-Dev] PEP 560
In-Reply-To:
References:
Message-ID:

On 15 November 2017 at 06:26, Ivan Levkivskyi wrote:

> After some discussion on python-ideas, see
> https://mail.python.org/pipermail/python-ideas/2017-September/047220.html,
> this PEP received positive comments. The updated version that takes into
> account the comments that appeared in the discussion so far is available
> at https://www.python.org/dev/peps/pep-0560/
>

I don't have anything to add to the python-ideas comments you already
incorporated, so +1 for this version from me.

> * ``importlib.reload(typing)`` is up to 7x faster
>

Nice! That's getting much closer to the "negligible" range, even for
command line apps.

Cheers, Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From storchaka at gmail.com  Wed Nov 15 02:43:47 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 15 Nov 2017 09:43:47 +0200
Subject: [Python-Dev] PEP 562
In-Reply-To:
References:
Message-ID:

14.11.17 22:34, Ivan Levkivskyi wrote:
> This function will be called only if ``name`` is not found in the module
> through the normal attribute lookup.

It is worth mentioning that using a name as a module global will bypass
__getattr__. And this is intentional, otherwise calling __getattr__ for
builtins will harm performance.

> Backwards compatibility and impact on performance
> =================================================

What is the effect on pydoc, word completion, inspect, pkgutil, unittest?

>   def keep_pickleable(func):
>       func.__name__ = func.__name__.replace('_deprecated_', '')
>       func.__qualname__ = func.__qualname__.replace('_deprecated_', '')
>       return func
>
>   @keep_pickleable
>   def _deprecated_old_function(arg, other):
>       ...

I would create more standard helpers (for deprecation, for lazy
importing). This feature is helpful not by itself, but because it will be
used for implementing new features. Using __getattr__ directly will
require writing boilerplate code. Maybe when implementing these helpers
you will discover that this PEP needs some additions.

From levkivskyi at gmail.com  Wed Nov 15 05:53:19 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 15 Nov 2017 11:53:19 +0100
Subject: [Python-Dev] PEP 562
In-Reply-To:
References:
Message-ID:

On 15 November 2017 at 08:43, Serhiy Storchaka wrote:

> 14.11.17 22:34, Ivan Levkivskyi wrote:
>
>> This function will be called only if ``name`` is not found in the module
>> through the normal attribute lookup.
>>
>
> It is worth mentioning that using a name as a module global will bypass
> __getattr__. And this is intentional, otherwise calling __getattr__ for
> builtins will harm performance.
>

Good point!

>> Backwards compatibility and impact on performance
>> =================================================
>>
>
> What is the effect on pydoc, word completion, inspect, pkgutil, unittest?
>

This is rather gray area. I am not sure that we need to update them in any
way, just the people who use __getattr__ should be aware that some tools
might not yet expect it. I will add a note to the PEP about this.
>   def keep_pickleable(func):
>       func.__name__ = func.__name__.replace('_deprecated_', '')
>       func.__qualname__ = func.__qualname__.replace('_deprecated_', '')
>       return func
>
>   @keep_pickleable
>   def _deprecated_old_function(arg, other):
>       ...
>
> I would create more standard helpers (for deprecation, for lazy
> importing). This feature is helpful not by itself, but because it will be
> used for implementing new features. Using __getattr__ directly will
> require writing boilerplate code. Maybe when implementing these helpers
> you will discover that this PEP needs some additions.
>

But in which module should these helpers live?

-- 
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From storchaka at gmail.com  Wed Nov 15 06:59:55 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 15 Nov 2017 13:59:55 +0200
Subject: [Python-Dev] PEP 562
In-Reply-To:
References:
Message-ID:

15.11.17 12:53, Ivan Levkivskyi wrote:
> On 15 November 2017 at 08:43, Serhiy Storchaka wrote:
>
>     It is worth mentioning that using a name as a module global will
>     bypass __getattr__. And this is intentional, otherwise calling
>     __getattr__ for builtins will harm performance.
>
> Good point!

And please document the idiomatic way of using a module global that
triggers __getattr__. For example, if you want to use a lazily loaded
submodule.

    sys.modules[__name__].foobar

or

    from . import foobar

The difference between them is that the latter sets the module attribute,
thus __getattr__ will be called only once.

>     Backwards compatibility and impact on performance
>     =================================================
>
>     What is the effect on pydoc, word completion, inspect, pkgutil,
>     unittest?
>
> This is rather gray area. I am not sure that we need to update them in
> any way, just the people who use __getattr__ should be aware that some
> tools might not yet expect it. I will add a note to the PEP about this.

This problem is not new, since it was possible to replace a module with a
module subclass with overridden __getattr__ and __dir__ before, but now
this problem can occur more often.

>     I would create more standard helpers (for deprecation, for lazy
>     importing). This feature is helpful not by itself, but because it
>     will be used for implementing new features. Using __getattr__
>     directly will require writing boilerplate code. Maybe when
>     implementing these helpers you will discover that this PEP needs
>     some additions.
>
> But in which module should these helpers live?

Good question. lazy_import() could be added in importlib (or
importlib.util?). The helper that just adds a deprecation on importing a
name could be added in importlib too. But I think that it would be better
if the deprecated() helper also created a wrapper that raises a
deprecation warning on the use of the deprecated function. It could be
added in the warnings or functools modules.

I would also add a more general lazy_initialized(). It is something like a
cached module property: it executes the specified code on first use, and
caches the result as a module attribute.

In all these cases the final __getattr__ method should be automatically
constructed from different chunks. At the end it could call a user
supplied __getattr__. Or maybe the module method __getattr__ should look
first at a special registry before calling the instance attribute
__getattr__()?
    def ModuleType.__getattr__(self, name):  # pseudocode
        if name in self.__properties__:
            return self.__properties__[name]()
        elif '__getattr__' in self.__dict__:
            return self.__dict__['__getattr__'](name)
        else:
            raise AttributeError

I'm wondering if the __set_name__ mechanism can be extended to modules.
What if we call the __set_name__() method for all items in a module dict
after finishing importing the module?

From k7hoven at gmail.com  Wed Nov 15 07:55:53 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Wed, 15 Nov 2017 14:55:53 +0200
Subject: [Python-Dev] PEP 562
In-Reply-To:
References:
Message-ID:

On Tue, Nov 14, 2017 at 10:34 PM, Ivan Levkivskyi wrote:

[..]

> Rationale
> =========
>
> It is sometimes convenient to customize or otherwise have control over
> access to module attributes. A typical example is managing deprecation
> warnings. Typical workarounds are assigning ``__class__`` of a module
> object to a custom subclass of ``types.ModuleType`` or replacing the
> ``sys.modules`` item with a custom wrapper instance. It would be
> convenient to simplify this procedure by recognizing ``__getattr__``
> defined directly in a module that would act like a normal
> ``__getattr__`` method, except that it will be defined on module
> *instances*. For example::
>
>     # lib.py
>
>     from warnings import warn
>
>     deprecated_names = ["old_function", ...]
>
>     def _deprecated_old_function(arg, other):
>         ...
>
>     def __getattr__(name):
>         if name in deprecated_names:
>             warn(f"{name} is deprecated", DeprecationWarning)
>             return globals()[f"_deprecated_{name}"]
>         raise AttributeError(f"module {__name__} has no attribute {name}")
>
>     # main.py
>
>     from lib import old_function  # Works, but emits the warning
>

Deprecating functions is already possible, so I assume the reason for this
would be performance? If so, are you sure this would help for performance?

Deprecating module attributes / globals is indeed difficult to do at
present. This PEP would allow deprecation warnings for accessing
attributes, which is nice! However, as thread-unsafe as it is, many
modules use module attributes to configure the state of the module. In
that case, the user is more likely to *set* the attribute than to *get*
it. Is this outside the scope of the PEP?

[..]

> There is a related proposal PEP 549 that proposes to support instance
> properties for a similar functionality. The difference is this PEP
> proposes a faster and simpler mechanism, but provides more basic
> customization.
>

I'm not surprised that the comparison is in favor of this PEP ;-).

[..]

> Specification
> =============
>
> The ``__getattr__`` function at the module level should accept one
> argument which is the name of an attribute and return the computed value
> or raise an ``AttributeError``::
>
>     def __getattr__(name: str) -> Any: ...
>
> This function will be called only if ``name`` is not found in the module
> through the normal attribute lookup.
>

The Rationale (quoted in the beginning of this email) easily leaves a
different impression of this.

[..]

> Discussion
> ==========
>
> Note that the use of module ``__getattr__`` requires care to keep the
> referred objects pickleable.
For example, the ``__name__`` attribute of a function
> should correspond to the name with which it is accessible via
> ``__getattr__``::
>
>     def keep_pickleable(func):
>         func.__name__ = func.__name__.replace('_deprecated_', '')
>         func.__qualname__ = func.__qualname__.replace('_deprecated_', '')
>         return func
>
>     @keep_pickleable
>     def _deprecated_old_function(arg, other):
>         ...
>
> One should also be careful to avoid recursion, as one would with a class
> level ``__getattr__``.
>

Off-topic: In some sense, I'm happy to hear something about pickleability.
But in some sense not. I think there are three kinds of people regarding
pickleability:

1. Those who don't care about anything being pickleable
2. Those who care about some things being pickleable
3. Those who care about all things being pickleable

Personally, I'd like to belong to group 3, but because group 3 cannot even
attempt to coexist with groups 1 and 2, I actually belong to group 1 most
of the time.

-- Koos

> References
> ==========
>
> .. [1] PEP 484 section about ``__getattr__`` in stub files
>    (https://www.python.org/dev/peps/pep-0484/#stub-files)
>
> .. [2] The reference implementation
>    (https://github.com/ilevkivskyi/cpython/pull/3/files)
>
> Copyright
> =========
>
> This document has been placed in the public domain.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/k7hoven%40gmail.com
>

-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimjjewett at gmail.com  Wed Nov 15 09:20:09 2017
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Wed, 15 Nov 2017 09:20:09 -0500
Subject: [Python-Dev] PEP 560: bases classes / confusion
Message-ID:

(1) I found the following (particularly "bases classes") very confusing:

"""
If an object that is not a class object appears in the bases of a class
definition, then ``__mro_entries__`` is searched on it. If found,
it is called with the original tuple of bases as an argument. The result
of the call must be a tuple, that is unpacked in the bases classes in place
of this object. (If the tuple is empty, this means that the original bases
is simply discarded.)
"""

Based on the following GenericAlias/NewList/Tokens example, I think I now
understand what you mean, and would have had somewhat less difficulty if
it were expressed as:

"""
When an object that is not a class object appears in the (tuple of) bases
of a class definition, then attribute ``__mro_entries__`` is searched on
that non-class object. If ``__mro_entries__`` is found, it is called with
the entire original tuple of bases as an argument. The result of the call
must be a tuple, which is unpacked and replaces only the non-class object
in the tuple of bases. (If the tuple is empty, this means that the
original bases is simply discarded.)
"""

Note that this makes some assumptions about the __mro_entries__ signature
that I wasn't quite sure about from the example.
So building on that:

class ABList(A, NewList[int], B):

I *think* the following will happen:

"NewList[int]" will be evaluated, and __class_getitem__ called, so
that the bases tuple will be (A, GenericAlias(NewList, int), B)

# (A) I *think* __mro_entries__ gets called with the full tuple,
#     instead of just the object it is found on.
# (B) I *think* it is called on the results of evaluating
#     the terms within the tuple, instead of the original
#     string representation.
_tmp = __mro_entries__(A, GenericAlias(NewList, int), B)

# (C) I *think* __mro_entries__ returns a replacement for
#     just the single object, even though it was called on
#     the whole tuple, without knowing which object it
#     represents.
bases = (A, _tmp, B)

# (D) If there are two non-class objects, I *think* the
#     second one gets the same arguments as the first,
#     rather than an intermediate tuple with the first such
#     object already substituted out.

-jJ
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/k7hoven%40gmail.com

From k7hoven at gmail.com  Wed Nov 15 09:50:35 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Wed, 15 Nov 2017 16:50:35 +0200
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To:
References:
Message-ID:

For anyone confused about similar things, you may be interested in my post
on python-ideas from today:
https://mail.python.org/pipermail/python-ideas/2017-November/047896.html

-- Koos

On Wed, Nov 15, 2017 at 4:20 PM, Jim J. Jewett wrote:

> (1) I found the following (particularly "bases classes") very confusing:
>
> """
> If an object that is not a class object appears in the bases of a class
> definition, then ``__mro_entries__`` is searched on it. If found,
> it is called with the original tuple of bases as an argument. The result
> of the call must be a tuple, that is unpacked in the bases classes in
> place of this object. (If the tuple is empty, this means that the
> original bases is simply discarded.)
> """
>
> Based on the following GenericAlias/NewList/Tokens example, I think I now
> understand what you mean, and would have had somewhat less difficulty if
> it were expressed as:
>
> """
> When an object that is not a class object appears in the (tuple of) bases
> of a class definition, then attribute ``__mro_entries__`` is searched on
> that non-class object. If ``__mro_entries__`` is found, it is called with
> the entire original tuple of bases as an argument. The result of the call
> must be a tuple, which is unpacked and replaces only the non-class object
> in the tuple of bases. (If the tuple is empty, this means that the
> original bases is simply discarded.)
> """
>
> Note that this makes some assumptions about the __mro_entries__ signature
> that I wasn't quite sure about from the example. So building on that:
>
> class ABList(A, NewList[int], B):
>
> I *think* the following will happen:
>
> "NewList[int]" will be evaluated, and __class_getitem__ called, so
> that the bases tuple will be (A, GenericAlias(NewList, int), B)
>
> # (A) I *think* __mro_entries__ gets called with the full tuple,
> #     instead of just the object it is found on.
> # (B) I *think* it is called on the results of evaluating
> #     the terms within the tuple, instead of the original
> #     string representation.
> _tmp = __mro_entries__(A, GenericAlias(NewList, int), B)
>
> # (C) I *think* __mro_entries__ returns a replacement for
> #     just the single object, even though it was called on
> #     the whole tuple, without knowing which object it
> #     represents.
>     bases = (A, _tmp, B)
>
>     # (D) If there are two non-class objects, I *think* the
>     # second one gets the same arguments as the first,
>     # rather than an intermediate tuple with the first such
>     # object already substituted out.
>
> -jJ

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

From ncoghlan at gmail.com Wed Nov 15 10:37:06 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 16 Nov 2017 01:37:06 +1000
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

On 16 November 2017 at 00:20, Jim J. Jewett wrote:
> I *think* the following will happen:
>
> "NewList[int]" will be evaluated, and __class_getitem__ called, so
> that the bases tuple will be (A, GenericAlias(NewList, int), B)
>
>     # (A) I *think* __mro_entries__ gets called with the full tuple,
>     # instead of just the object it is found on.
>     # (B) I *think* it is called on the results of evaluating
>     # the terms within the tuple, instead of the original
>     # string representation.
>     _tmp = __mro_entries__(A, GenericAlias(NewList, int), B)
>
>     # (C) I *think* __mro_entries__ returns a replacement for
>     # just the single object, even though it was called on
>     # the whole tuple, without knowing which object it
>     # represents.
>     bases = (A, _tmp, B)

My understanding of the method signature:

    def __mro_entries__(self, orig_bases):
        ...
        return replacement_for_self

My assumption as to the purpose of the extra complexity was:

- given orig_bases, a method could avoid injecting bases already listed
  if it wanted to
- allowing multiple items to be returned provides a way to
  programmatically combine mixins without having to define a new subclass
  for each combination

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Wed Nov 15 11:27:04 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 15 Nov 2017 08:27:04 -0800
Subject: [Python-Dev] PEP 562
In-Reply-To: References: Message-ID:

I think it's reasonable for the PEP to include some examples, consequences
and best practices. I don't think it's reasonable for the PEP to also
define the API and implementation of helper functions that might be added
once the mechanisms are in place. Those are better developed as 3rd party
packages first.

On Wed, Nov 15, 2017 at 3:59 AM, Serhiy Storchaka wrote:
> 15.11.17 12:53, Ivan Levkivskyi wrote:
>> On 15 November 2017 at 08:43, Serhiy Storchaka wrote:
>>
>>     It is worth to mention that using name as a module global will
>>     bypass __getattr__. And this is intentional, otherwise calling
>>     __getattr__ for builtins would harm performance.
>>
>> Good point!
>
> And please document the idiomatic way of using a module global that
> triggers __getattr__. For example, if you want to use a lazily loaded
> submodule:
>
>     sys.modules[__name__].foobar
>
> or
>
>     from . import foobar
>
> The difference between them is that the latter sets the module attribute,
> thus __getattr__ will be called only once.
>> Backwards compatibility and impact on performance
>> =================================================
>>
>> What is the effect on pydoc, word completion, inspect, pkgutil, unittest?
>>
>> This is a rather gray area. I am not sure that we need to update them in
>> any way, just the people who use __getattr__ should be aware that
>> some tools might not yet expect it. I will add a note to the PEP about
>> this.
>
> This problem is not new, since it was possible to replace a module with a
> module subclass with overridden __getattr__ and __dir__ before, but now
> this problem can occur more often.
>
>> I would create more standard helpers (for deprecation, for lazy
>> importing). This feature is helpful not by itself, but because it
>> will be used for implementing new features. Using __getattr__
>> directly will require writing boilerplate code. Maybe when
>> implementing these helpers you will discover that this PEP needs
>> some additions.
>>
>> But in which module should these helpers live?
>
> Good question. lazy_import() could be added in importlib (or
> importlib.util?). The helper that just adds deprecation on importing a
> name could be added in importlib too. But I think that it would be better
> if the deprecated() helper also creates a wrapper that raises a
> deprecation warning on the use of the deprecated function. It could be
> added in the warnings or functools modules.
>
> I would also add a more general lazy_initialized(). It is something like
> a cached module property: it executes the specified code on first use,
> and caches the result as a module attribute.
>
> In all these cases the final __getattr__ method should be automatically
> constructed from different chunks. At the end it could call a
> user-supplied __getattr__. Or maybe the module method __getattr__ should
> look first at a special registry before calling the instance attribute
> __getattr__()?
>
>     def ModuleType.__getattr__(self, name):
>         if name in self.__properties__:
>             call self.__properties__[name]()
>         elif '__getattr__' in self.__dict__:
>             call self.__dict__['__getattr__'](name)
>         else:
>             raise AttributeError
>
> I'm wondering if the __set_name__ mechanism can be extended to modules.
> What if we call the __set_name__() method for all items in a module dict
> after finishing importing the module?

--
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us Wed Nov 15 13:02:34 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 15 Nov 2017 10:02:34 -0800
Subject: [Python-Dev] PEP 562
In-Reply-To: References: Message-ID: <5A0C813A.6000508@stoneleaf.us>

On 11/15/2017 04:55 AM, Koos Zevenhoven wrote:
> On Tue, Nov 14, 2017 at 10:34 PM, Ivan Levkivskyi wrote:
>
>> Rationale
>> =========
>>
>> [...] It would be convenient to simplify this
>> procedure by recognizing ``__getattr__`` defined directly in a module
>> that would act like a normal ``__getattr__`` method
>>
>> [...]
>>
>> Specification
>> =============
>>
>> The ``__getattr__`` function at the module level should accept one
>> argument which is the name of an attribute and return the computed
>> value or raise an ``AttributeError``::
>>
>>     def __getattr__(name: str) -> Any: ...
>> This function will be called only if ``name`` is not found in the module
>> through the normal attribute lookup.
>
> The Rationale (quoted in the beginning of this email) easily leaves a
> different impression of this.

I don't see how. This is exactly the way normal __getattr__ works.

--
~Ethan~

From levkivskyi at gmail.com Wed Nov 15 13:39:11 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 15 Nov 2017 19:39:11 +0100
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

Nick is exactly right here. Jim, if you want to propose alternative
wording, then we could consider it.

-- Ivan

On 15 November 2017 at 16:37, Nick Coghlan wrote:
> On 16 November 2017 at 00:20, Jim J. Jewett wrote:
>> I *think* the following will happen:
>>
>> "NewList[int]" will be evaluated, and __class_getitem__ called, so
>> that the bases tuple will be (A, GenericAlias(NewList, int), B)
>>
>>     # (A) I *think* __mro_entries__ gets called with the full tuple,
>>     # instead of just the object it is found on.
>>     # (B) I *think* it is called on the results of evaluating
>>     # the terms within the tuple, instead of the original
>>     # string representation.
>>     _tmp = __mro_entries__(A, GenericAlias(NewList, int), B)
>>
>>     # (C) I *think* __mro_entries__ returns a replacement for
>>     # just the single object, even though it was called on
>>     # the whole tuple, without knowing which object it
>>     # represents.
>>     bases = (A, _tmp, B)
>
> My understanding of the method signature:
>
>     def __mro_entries__(self, orig_bases):
>         ...
>         return replacement_for_self
>
> My assumption as to the purpose of the extra complexity was:
>
> - given orig_bases, a method could avoid injecting bases already listed
>   if it wanted to
> - allowing multiple items to be returned provides a way to
>   programmatically combine mixins without having to define a new
>   subclass for each combination
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From k7hoven at gmail.com Wed Nov 15 14:04:25 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Wed, 15 Nov 2017 21:04:25 +0200
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

On Wed, Nov 15, 2017 at 5:37 PM, Nick Coghlan wrote:
> On 16 November 2017 at 00:20, Jim J. Jewett wrote:
>> I *think* the following will happen:
>>
>> "NewList[int]" will be evaluated, and __class_getitem__ called, so
>> that the bases tuple will be (A, GenericAlias(NewList, int), B)
>>
>>     # (A) I *think* __mro_entries__ gets called with the full tuple,
>>     # instead of just the object it is found on.
>>     # (B) I *think* it is called on the results of evaluating
>>     # the terms within the tuple, instead of the original
>>     # string representation.
>>     _tmp = __mro_entries__(A, GenericAlias(NewList, int), B)
>>
>>     # (C) I *think* __mro_entries__ returns a replacement for
>>     # just the single object, even though it was called on
>>     # the whole tuple, without knowing which object it
>>     # represents.
>>     bases = (A, _tmp, B)
>
> My understanding of the method signature:
>
>     def __mro_entries__(self, orig_bases):
>         ...
>         return replacement_for_self
>
> My assumption as to the purpose of the extra complexity was:
>
> - given orig_bases, a method could avoid injecting bases already listed
>   if it wanted to
> - allowing multiple items to be returned provides a way to
>   programmatically combine mixins without having to define a new
>   subclass for each combination

Thanks, this might provide an answer to my question about multiple mro
entries here:
https://mail.python.org/pipermail/python-ideas/2017-November/047897.html

-- Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

From k7hoven at gmail.com Wed Nov 15 16:45:34 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Wed, 15 Nov 2017 23:45:34 +0200
Subject: [Python-Dev] PEP 562
In-Reply-To: <5A0C813A.6000508@stoneleaf.us>
References: <5A0C813A.6000508@stoneleaf.us>
Message-ID:

On Wed, Nov 15, 2017 at 8:02 PM, Ethan Furman wrote:
> On 11/15/2017 04:55 AM, Koos Zevenhoven wrote:
>> On Tue, Nov 14, 2017 at 10:34 PM, Ivan Levkivskyi wrote:
>>
>>> Rationale
>>> =========
>>>
>>> [...] It would be convenient to simplify this
>>> procedure by recognizing ``__getattr__`` defined directly in a module
>>> that would act like a normal ``__getattr__`` method
>>>
>>> [...]
>>>
>>> Specification
>>> =============
>>>
>>> The ``__getattr__`` function at the module level should accept one
>>> argument which is the name of an attribute and return the computed
>>> value or raise an ``AttributeError``::
>>>
>>>     def __getattr__(name: str) -> Any: ...
>>>
>>> This function will be called only if ``name`` is not found in the
>>> module through the normal attribute lookup.
>>
>> The Rationale (quoted in the beginning of this email) easily leaves a
>> different impression of this.
>
> I don't see how. This is exactly the way normal __getattr__ works.

Oh sorry, I think I put this email together too quickly. I was writing
down a bunch of thoughts I had earlier but hadn't written down. I think I
was mixing this up in my head with overriding __getitem__ for the module
namespace dict and __class_getitem__ from PEP 560, which only gets called
if the metaclass doesn't implement __getitem__ (IIRC).

But I did have another thought related to this. I was wondering whether
the lack of passing the module to the methods as `self` would harm future
attempts to generalize these ideas.

-- Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

From tjreedy at udel.edu Wed Nov 15 17:36:41 2017
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 15 Nov 2017 17:36:41 -0500
Subject: [Python-Dev] PEP 560
In-Reply-To: References: Message-ID:

On 11/14/2017 3:26 PM, Ivan Levkivskyi wrote:
> After some discussion on python-ideas, see
> https://mail.python.org/pipermail/python-ideas/2017-September/047220.html,
> this PEP received positive comments. The updated version that takes into
> account the comments that appeared in the discussion so far is available
> at https://www.python.org/dev/peps/pep-0560/
>
> Here I post the full text for convenience:
>
> ++++++++++++++++++++++++++
>
> PEP: 560
> Title: Core support for typing module and generic types
> Author: Ivan Levkivskyi
>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 03-Sep-2017
> Python-Version: 3.7
> Post-History: 09-Sep-2017
...
Suggested wording improvements:

> Performance
> -----------
>
> The ``typing`` module is one of the heaviest and slowest modules in
> the standard library even with all the optimizations made.
> Mainly this is
> because of subscripted generic types (see PEP 484 for definition of terms
> used in this PEP) are class objects (see also [1]_).

Delete 'of' after 'because' to make this a proper sentence.

> The three main ways how

"There are three ..." reads better to me.

> the performance can be improved with the help of the proposed special
> methods:
>
> - Creation of generic classes is slow since the ``GenericMeta.__new__`` is
>   very slow; we will not need it anymore.
>
> - Very long MROs for generic classes will be twice shorter;

I believe by 'twice shorter', which is meaningless by itself, you mean
'half as long'. If so, please say the latter.

> they are present
>   because we duplicate the ``collections.abc`` inheritance chain
>   in ``typing``.
>
> - Time of instantiation of generic classes will be improved

Instantiation of generic classes will be faster.

>   (this is minor however).

--
Terry Jan Reedy

From ethan at stoneleaf.us Wed Nov 15 19:27:31 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 15 Nov 2017 16:27:31 -0800
Subject: [Python-Dev] module customization
Message-ID: <5A0CDB73.3030507@stoneleaf.us>

So there are currently two ways to customize a module, with PEP 562
proposing a third.

The first method involves creating a standard class object, instantiating
it, and replacing the sys.modules entry with it.

The second way is fairly similar, but instead of replacing the entire
sys.modules entry, its class is updated to be the class just created --
something like sys.modules['mymod'].__class__ = MyNewClass .

My request: Can someone write a better example of the second method? And
include __getattr__ ?

My question: Does that __getattr__ method have 'self' as the first
parameter? If not, why not, and if so, shouldn't PEP 562's __getattr__
also take a 'self'?

--
~Ethan~

From guido at python.org Wed Nov 15 20:49:07 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 15 Nov 2017 17:49:07 -0800
Subject: [Python-Dev] module customization
In-Reply-To: <5A0CDB73.3030507@stoneleaf.us>
References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID:

On Wed, Nov 15, 2017 at 4:27 PM, Ethan Furman wrote:
> So there are currently two ways to customize a module, with PEP 562
> proposing a third.
>
> The first method involves creating a standard class object, instantiating
> it, and replacing the sys.modules entry with it.
>
> The second way is fairly similar, but instead of replacing the entire
> sys.modules entry, its class is updated to be the class just created --
> something like sys.modules['mymod'].__class__ = MyNewClass .
>
> My request: Can someone write a better example of the second method? And
> include __getattr__ ?
>
> My question: Does that __getattr__ method have 'self' as the first
> parameter?

It does.

> If not, why not, and if so, shouldn't PEP 562's __getattr__ also take a
> 'self'?

Not really, since there's only one module (the one containing the
__getattr__ function). Plus we already have a 1-argument module-level
__getattr__ in mypy. See PEP 484.

--
--Guido van Rossum (python.org/~guido)
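A minimal sketch of what such a one-argument module-level __getattr__
looks like under PEP 562 -- the module contents and the deprecated name
are made up purely for illustration:

    # lib.py
    def new_function():
        return 42

    def __getattr__(name):  # note: no 'self'
        if name == "old_function":  # illustrative deprecated alias
            import warnings
            warnings.warn("old_function is deprecated, use new_function",
                          DeprecationWarning, stacklevel=2)
            return new_function
        raise AttributeError(
            "module {!r} has no attribute {!r}".format(__name__, name))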
From armin.rigo at gmail.com Wed Nov 15 21:37:23 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Thu, 16 Nov 2017 03:37:23 +0100
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: <1510667720.21117.137.camel@janc.be>
References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

Hi,

On 14 November 2017 at 14:55, Jan Claeys wrote:
> Sounds like https://www.iso.org/standard/71094.html
> which is updating https://www.iso.org/standard/61457.html
> (which you can download from there if you search a bit; clearly either
> ISO doesn't have a UI/UX "standard" or they aren't following it...)

Just for completeness, I think that what you can download for free
from that second page only contains the first few sections ("Terms and
definitions"). It doesn't even go to "Purpose of this technical
report"---we need to pay $200 just to learn what the purpose is...

*Shrug*

Armin

From guido at python.org Wed Nov 15 21:50:27 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 15 Nov 2017 18:50:27 -0800
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

On Wed, Nov 15, 2017 at 6:37 PM, Armin Rigo wrote:
> Hi,
>
> On 14 November 2017 at 14:55, Jan Claeys wrote:
> > Sounds like https://www.iso.org/standard/71094.html
> > which is updating https://www.iso.org/standard/61457.html
> > (which you can download from there if you search a bit; clearly either
> > ISO doesn't have a UI/UX "standard" or they aren't following it...)
>
> Just for completeness, I think that what you can download for free
> from that second page only contains the first few sections ("Terms and
> definitions"). It doesn't even go to "Purpose of this technical
> report"---we need to pay $200 just to learn what the purpose is...
>
> *Shrug*

Actually it linked to
http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html from
which I managed to download what looks like the complete
c061457_ISO_IEC_TR_24772_2013.pdf (336 pages) after clicking on an "I
accept" button (I didn't read what I accepted :-). The $200 is for the
printed copy I presume.

--
--Guido van Rossum (python.org/~guido)

From victor.stinner at gmail.com Wed Nov 15 21:57:49 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 03:57:49 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: Message-ID:

Hi,

Since Brett and Nick like the idea and nobody complained about it, I
implemented the -X dev option:
https://bugs.python.org/issue32043
(Right now, it's a pull request.)

I removed the -b option.

Victor

2017-11-14 3:57 GMT+01:00 Nick Coghlan :
> On 14 November 2017 at 02:08, Victor Stinner wrote:
>> My "-X dev" idea is not incompatible with Nick's PEP 565 "Show
>> DeprecationWarning in __main__" and it's different: it's an opt-in
>> option, while Nick wants to change the default behaviour.
>
> I'm +1 on a `-X dev` mode, since it enables a lot of things that are
> useful for making an application more robust (extension module
> debugging, explicit scope-controlled resource management) that I
> wouldn't want turned on at the REPL by default. It also implicitly
> adjusts over time as we add more debugging capabilities.
> I don't consider it a replacement for tweaking how we handle
> DeprecationWarning by default, though :)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Wed Nov 15 23:53:15 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 15 Nov 2017 20:53:15 -0800
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

On Wed, Nov 15, 2017 at 6:50 PM, Guido van Rossum wrote:
> On Wed, Nov 15, 2017 at 6:37 PM, Armin Rigo wrote:
>> Hi,
>>
>> On 14 November 2017 at 14:55, Jan Claeys wrote:
>> > Sounds like https://www.iso.org/standard/71094.html
>> > which is updating https://www.iso.org/standard/61457.html
>> > (which you can download from there if you search a bit; clearly either
>> > ISO doesn't have a UI/UX "standard" or they aren't following it...)
>>
>> Just for completeness, I think that what you can download for free
>> from that second page only contains the first few sections ("Terms and
>> definitions"). It doesn't even go to "Purpose of this technical
>> report"---we need to pay $200 just to learn what the purpose is...
>>
>> *Shrug*
>
> Actually it linked to
> http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html from
> which I managed to download what looks like the complete
> c061457_ISO_IEC_TR_24772_2013.pdf (336 pages) after clicking on an "I
> accept" button (I didn't read what I accepted :-). The $200 is for the
> printed copy I presume.

So far I learned one thing from the report. They use the term
"vulnerabilities" liberally, defining it essentially as "bug":

> All programming languages contain constructs that are incompletely
> specified, exhibit undefined behaviour, are implementation-dependent, or
> are difficult to use correctly. The use of those constructs may therefore
> give rise to *vulnerabilities*, as a result of which, software programs
> can execute differently than intended by the writer.

They then go on to explain that sometimes vulnerabilities can be
exploited, but I object to calling all bugs vulnerabilities -- that's just
using a scary word to get attention for a sleep-inducing document
containing such gems as "Use floating-point arithmetic only when
absolutely needed" (page 230).

--
--Guido van Rossum (python.org/~guido)

From wes.turner at gmail.com Thu Nov 16 01:05:12 2017
From: wes.turner at gmail.com (Wes Turner)
Date: Thu, 16 Nov 2017 01:05:12 -0500
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

CWE (Common Weakness Enumeration) has numbers (and URLs) and a graph
model, and code examples, and mitigations for bugs, vulnerabilities,
faults, design flaws, weaknesses.
https://cwe.mitre.org/

Research Concepts
https://cwe.mitre.org/data/definitions/1000.html

Development Concepts
https://cwe.mitre.org/data/definitions/699.html

CWE CATEGORY: Time and State
https://cwe.mitre.org/data/definitions/361.html

CWE CATEGORY: Concurrency Issues
https://cwe.mitre.org/data/definitions/557.html

17. Concurrent Execution
https://docs.python.org/3/library/concurrency.html
> 1. activating threads, tasks or pico-threads
https://docs.python.org/3/library/threading.html#threading.Thread.start
https://docs.python.org/3/library/threading.html#threading.Thread.run

> 2. Directed termination of threads, tasks or pico-threads
So, I looked this up:
https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python

Do asynchronous programming patterns actually make this basically never
necessary? (asyncio coroutines, greenlet (eventlet, gevent), twisted)
https://docs.python.org/3/library/asyncio.html
https://docs.python.org/3/library/asyncio-task.html

If you really feel like you need the overhead of threads instead of or in
addition to coroutines (they won't use multiple cores without going to
IPC, anyway), you can.

> 3. Premature termination of threads, tasks or pico-threads
What is this referring to? Does it release handles and locks on exception?
(try/finally?)

> 4. Concurrent access to data shared between threads, tasks or
> pico-threads, and
CWE-362: Concurrent Execution using Shared Resource with Improper
Synchronization ('Race Condition')
https://cwe.mitre.org/data/definitions/362.html

CWE-567: Unsynchronized Access to Shared Data in a Multithreaded Context
https://cwe.mitre.org/data/definitions/567.html

> 5. Lock protocol errors for concurrent entities
CWE-667: Improper Locking
https://cwe.mitre.org/data/definitions/667.html

CWE-366: Race Condition within a Thread
https://cwe.mitre.org/data/definitions/366.html

The ``mutex`` module is removed in Python 3:
https://docs.python.org/2/library/mutex.html

17.1. threading -- Thread-based parallelism
https://docs.python.org/3/library/threading.html

...

Are there other good resources (in addition to Chapter 17) for concurrency
in CPython and/or PyPy and/or Stackless Python, MicroPython, IronPython,
Jython?

- [ ] How do we add Python to the excellent CWE reference?
- How can/could/should one add the things with labels (*) from the ISO PDF
  you speak of to the CWE graph? (* schema:name, rdfs:label)

On Wednesday, November 15, 2017, Guido van Rossum wrote:
> On Wed, Nov 15, 2017 at 6:50 PM, Guido van Rossum wrote:
>> On Wed, Nov 15, 2017 at 6:37 PM, Armin Rigo wrote:
>>> Hi,
>>>
>>> On 14 November 2017 at 14:55, Jan Claeys wrote:
>>> > Sounds like https://www.iso.org/standard/71094.html
>>> > which is updating https://www.iso.org/standard/61457.html
>>> > (which you can download from there if you search a bit; clearly
>>> > either ISO doesn't have a UI/UX "standard" or they aren't following
>>> > it...)
>>>
>>> Just for completeness, I think that what you can download for free
>>> from that second page only contains the first few sections ("Terms and
>>> definitions"). It doesn't even go to "Purpose of this technical
>>> report"---we need to pay $200 just to learn what the purpose is...
>>>
>>> *Shrug*
>>
>> Actually it linked to
>> http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
>> from which I managed to download what looks like the complete
>> c061457_ISO_IEC_TR_24772_2013.pdf (336 pages) after clicking on an "I
>> accept" button (I didn't read what I accepted :-). The $200 is for the
>> printed copy I presume.
>
> So far I learned one thing from the report. They use the term
> "vulnerabilities" liberally, defining it essentially as "bug":
>
>> All programming languages contain constructs that are incompletely
>> specified, exhibit undefined behaviour, are implementation-dependent, or
>> are difficult to use correctly.
>> The use of those constructs may therefore
>> give rise to *vulnerabilities*, as a result of which, software programs
>> can execute differently than intended by the writer.
>
> They then go on to explain that sometimes vulnerabilities can be
> exploited, but I object to calling all bugs vulnerabilities -- that's
> just using a scary word to get attention for a sleep-inducing document
> containing such gems as "Use floating-point arithmetic only when
> absolutely needed" (page 230).
>
> --
> --Guido van Rossum (python.org/~guido)

From greg.ewing at canterbury.ac.nz Thu Nov 16 01:11:56 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 16 Nov 2017 19:11:56 +1300
Subject: [Python-Dev] module customization
In-Reply-To: <5A0CDB73.3030507@stoneleaf.us>
References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID: <5A0D2C2C.1070109@canterbury.ac.nz>

Ethan Furman wrote:
> The second way is fairly similar, but instead of replacing the entire
> sys.modules entry, its class is updated to be the class just created --
> something like sys.modules['mymod'].__class__ = MyNewClass .

If the recent suggestion to replace the global namespace dict with the
module object goes ahead, maybe it would enable using this idiom to
reassign the module class:

    class __class__(__class__):
        # module methods here

Badger-badger-badger-ly,
Greg

From njs at pobox.com Thu Nov 16 01:12:30 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 15 Nov 2017 22:12:30 -0800
Subject: [Python-Dev] module customization
In-Reply-To: References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID:

On Wed, Nov 15, 2017 at 5:49 PM, Guido van Rossum wrote:
>> If not, why not, and if so, shouldn't PEP 562's __getattr__ also take a
>> 'self'?
>
> Not really, since there's only one module (the one containing the
> __getattr__ function). Plus we already have a 1-argument module-level
> __getattr__ in mypy. See PEP 484.

I guess the benefit of taking 'self' would be that it would make it
possible (though still a bit odd-looking) to have reusable __getattr__
implementations, like:

    # mymodule.py
    from auto_importer import __getattr__, __dir__

    auto_import_modules = {"foo", "bar"}

    # auto_importer.py
    def __getattr__(self, name):
        if name in self.auto_import_modules:
            ...

-n

--
Nathaniel J. Smith -- https://vorpus.org

From njs at pobox.com Thu Nov 16 01:14:20 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 15 Nov 2017 22:14:20 -0800
Subject: [Python-Dev] module customization
In-Reply-To: <5A0CDB73.3030507@stoneleaf.us>
References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID:

On Wed, Nov 15, 2017 at 4:27 PM, Ethan Furman wrote:
> The second way is fairly similar, but instead of replacing the entire
> sys.modules entry, its class is updated to be the class just created --
> something like sys.modules['mymod'].__class__ = MyNewClass .
>
> My request: Can someone write a better example of the second method? And
> include __getattr__ ?

Here's a fairly straightforward example:

https://github.com/python-trio/trio/blob/master/trio/_deprecate.py#L114-L140

(Intentionally doesn't include __dir__ because I didn't want deprecated
attributes to show up in tab completion. For other use cases like lazy
imports, you would implement __dir__ too.)

Example usage:

https://github.com/python-trio/trio/blob/master/trio/__init__.py#L66-L98

-n

--
Nathaniel J. Smith -- https://vorpus.org

From njs at pobox.com Thu Nov 16 01:16:11 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 15 Nov 2017 22:16:11 -0800
Subject: [Python-Dev] module customization
In-Reply-To: References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID:

On Wed, Nov 15, 2017 at 10:14 PM, Nathaniel Smith wrote:
> On Wed, Nov 15, 2017 at 4:27 PM, Ethan Furman wrote:
>> The second way is fairly similar, but instead of replacing the entire
>> sys.modules entry, its class is updated to be the class just created --
>> something like sys.modules['mymod'].__class__ = MyNewClass .
>>
>> My request: Can someone write a better example of the second method?
>> And include __getattr__ ?

Doh, I forgot to permalinkify those. Better links for anyone reading this
in the future:

> Here's a fairly straightforward example:
>
> https://github.com/python-trio/trio/blob/master/trio/_deprecate.py#L114-L140

https://github.com/python-trio/trio/blob/3edfafeedef4071646a9015e28be01f83dc02f94/trio/_deprecate.py#L114-L140

> (Intentionally doesn't include __dir__ because I didn't want
> deprecated attributes to show up in tab completion. For other use
> cases like lazy imports, you would implement __dir__ too.)
>
> Example usage:
>
> https://github.com/python-trio/trio/blob/master/trio/__init__.py#L66-L98

https://github.com/python-trio/trio/blob/3edfafeedef4071646a9015e28be01f83dc02f94/trio/__init__.py#L66-L98

> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org

--
Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Thu Nov 16 01:56:29 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 16 Nov 2017 16:56:29 +1000
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

On 16 November 2017 at 04:39, Ivan Levkivskyi wrote:
> Nick is exactly right here. Jim, if you want to propose alternative
> wording, then we could consider it.

Jim also raised an important point that needs clarification at the spec
level: given multiple entries in "orig_bases" with __mro_entries__
methods, do all such methods get passed the *same* orig_bases tuple? Or do
they receive partially resolved ones, such that bases listed before them
have already been resolved to their MRO entries by the time they run?

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From mark at hotpy.org Thu Nov 16 04:34:53 2017
From: mark at hotpy.org (Mark Shannon)
Date: Thu, 16 Nov 2017 09:34:53 +0000
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca>
 <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID: <539eaedf-0cf8-6a52-1a45-a4a90b371da5@hotpy.org>

On 16/11/17 04:53, Guido van Rossum wrote:
[snip]
>
> They then go on to explain that sometimes vulnerabilities can be
> exploited, but I object to calling all bugs vulnerabilities -- that's
> just using a scary word to get attention for a sleep-inducing document
> containing such gems as "Use floating-point arithmetic only when
> absolutely needed" (page 230).

Thanks for reading it, so we don't have to :)

As Wes said, cwe.mitre.org is the place to go if you care about this
stuff, although it can be a bit opaque. For non-experts,
https://www.owasp.org/index.php/Top_10_2013-Top_10 is a good starting
point to learn about software vulnerabilities.

Cheers,
Mark.
From victor.stinner at gmail.com Thu Nov 16 06:53:24 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 12:53:24 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: Message-ID:

Hi,

Ok, I merged my PR adding -X dev: you can now test in Python 3.7 ;-)

> The list of checks can be extended later. For example, we may enable
> the debug mode of asyncio: PYTHONASYNCIODEBUG=1.

I opened https://bugs.python.org/issue32047 to propose enabling asyncio
debug mode using the Python developer mode.

What do you think? Is it ok to include asyncio in the global "developer
mode"?

Victor

From solipsis at pitrou.net Thu Nov 16 07:09:43 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 16 Nov 2017 13:09:43 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
References: Message-ID: <20171116130943.484d2807@fsol>

On Thu, 16 Nov 2017 12:53:24 +0100
Victor Stinner wrote:
> Ok, I merged my PR adding -X dev: you can now test in Python 3.7 ;-)
>
> > The list of checks can be extended later. For example, we may enable
> > the debug mode of asyncio: PYTHONASYNCIODEBUG=1.
>
> I opened https://bugs.python.org/issue32047 to propose enabling asyncio
> debug mode using the Python developer mode.
>
> What do you think? Is it ok to include asyncio in the global "developer
> mode"?

I'd rather not. Those are two orthogonal things. In particular, asyncio
debug mode is quite expensive.

Regards

Antoine.

From solipsis at pitrou.net Thu Nov 16 07:11:32 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 16 Nov 2017 13:11:32 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
References: Message-ID: <20171116131132.3ef65380@fsol>

On Thu, 16 Nov 2017 03:57:49 +0100
Victor Stinner wrote:
> Hi,
>
> Since Brett and Nick like the idea and nobody complained about it, I
> implemented the -X dev option:
> https://bugs.python.org/issue32043
> (Right now, it's a pull request.)

Could you measure and perhaps document the expected effect on
performance and memory consumption?
(it can be a very rough ballpark estimate)

Regards

Antoine.

From levkivskyi at gmail.com Thu Nov 16 07:22:43 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 16 Nov 2017 13:22:43 +0100
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

On 16 November 2017 at 07:56, Nick Coghlan wrote:
> On 16 November 2017 at 04:39, Ivan Levkivskyi wrote:
>> Nick is exactly right here. Jim, if you want to propose alternative
>> wording, then we could consider it.
>
> Jim also raised an important point that needs clarification at the spec
> level: given multiple entries in "orig_bases" with __mro_entries__
> methods, do all such methods get passed the *same* orig_bases tuple? Or
> do they receive partially resolved ones, such that bases listed before
> them have already been resolved to their MRO entries by the time they
> run?

Yes, they all get the same initial bases tuple as an argument. Passing
updated ones will cost a bit more and I don't think it will be needed (in
the worst case a base can resolve another base by calling its
__mro_entries__ manually).
I will clarify this in the PEP.

-- Ivan
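A small runnable sketch of those semantics on an interpreter with PEP 560
support -- the class names here are made up for illustration:

    class Base: ...

    class Proxy:
        def __init__(self, real):
            self.real = real

        def __mro_entries__(self, bases):
            # every Proxy in the class statement sees the same original
            # tuple, i.e. the two Proxy instances below, unresolved
            print("called with", bases)
            return (self.real,)

    class C(Proxy(Base), Proxy(object)):  # both calls get the same 'bases'
        pass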
From victor.stinner at gmail.com Thu Nov 16 07:38:01 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 13:38:01 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: <20171116130943.484d2807@fsol>
References: <20171116130943.484d2807@fsol>
Message-ID:

2017-11-16 13:09 GMT+01:00 Antoine Pitrou :
>> What do you think? Is it ok to include asyncio in the global "developer
>> mode"?
>
> I'd rather not. Those are two orthogonal things. In particular, asyncio
> debug mode is quite expensive.

Is it really an issue? When you develop an application, the performance of
the application shouldn't be an issue, no? From my point of view, it's the
purpose of the opt-in developer mode: enable "expensive" checks at
runtime. But you are right that the cost of the checks should be
evaluated.

About asyncio debug mode, if it's too expensive to be used to develop an
application, maybe there is an issue with the additional checks? Should we
remove some of them to be able to use asyncio debug mode in more cases?

Victor

From antoine at python.org Thu Nov 16 07:43:30 2017
From: antoine at python.org (Antoine Pitrou)
Date: Thu, 16 Nov 2017 13:43:30 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID:

On 16/11/2017 at 13:38, Victor Stinner wrote:
> 2017-11-16 13:09 GMT+01:00 Antoine Pitrou :
>>> What do you think? Is it ok to include asyncio in the global
>>> "developer mode"?
>>
>> I'd rather not. Those are two orthogonal things. In particular,
>> asyncio debug mode is quite expensive.
>
> Is it really an issue? When you develop an application, the
> performance of the application shouldn't be an issue, no?

When you develop an application, you can run functional tests which have
timing requirements (or simply be too annoying to run if runtimes are
multiplied by 2 or more). In that case it is good to enable "cheap" debug
checks (those that have less than a 20% cost) while leaving the expensive
ones disabled.

> About asyncio debug mode, if it's too expensive to be used to develop
> an application, maybe there is an issue with the additional checks?
> Should we remove some of them to be able to use asyncio debug mode in
> more cases?

Well, I'm sure some people like them, otherwise they wouldn't have been
added to the codebase in the first place :-) For example, knowing where a
Future was created can make debug logs much more informative.
(see https://bugs.python.org/issue31970)

Regards

Antoine.

From victor.stinner at gmail.com Thu Nov 16 07:48:41 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 13:48:41 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: <20171116131132.3ef65380@fsol>
References: <20171116131132.3ef65380@fsol>
Message-ID:

2017-11-16 13:11 GMT+01:00 Antoine Pitrou :
> Could you measure and perhaps document the expected effect on
> performance and memory consumption?
> (it can be a very rough ballpark estimate)

Currently "python3 -X dev script.py" behaves as "PYTHONMALLOC=debug
python3 -W default -X faulthandler script.py".

faulthandler has a negligible cost on performance/memory.

For -W default, I guess that your question is the cost of emitting a
warning: overhead when a warning is displayed, and overhead when the
warning is filtered. Right?

IMHO the most expensive check is PYTHONMALLOC=debug which increases a lot
the memory usage.
You can measure the difference using tracemalloc and PYTHONMALLOC:

haypo at selma$ ./python -X tracemalloc -i -m test test_os
(...)
>>> import tracemalloc; tracemalloc.get_traced_memory()
(10719623, 10981725)

haypo at selma$ PYTHONMALLOC=debug ./python -X tracemalloc -i -m test test_os
(...)
>>> import tracemalloc; tracemalloc.get_traced_memory()
(10724064, 16577338)

For example, on test_os, PYTHONMALLOC=debug increases the peak memory
usage from 10.5 MiB to 15.8 MiB: +50%.

PYTHONMALLOC=debug adds 4 * sizeof(size_t) bytes to each allocated memory
block. For example, an empty tuple uses 64 bytes, but PYTHONMALLOC=debug
allocates 96 bytes (+ 32 bytes) in 64-bit mode.

Victor

From antoine at python.org Thu Nov 16 07:54:14 2017
From: antoine at python.org (Antoine Pitrou)
Date: Thu, 16 Nov 2017 13:54:14 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116131132.3ef65380@fsol>
Message-ID: <2ce3e311-35b0-90dd-6d74-615145bef3d3@python.org>

Hi Victor,

Thanks for the answer!

On 16/11/2017 at 13:48, Victor Stinner wrote:
>
> faulthandler has a negligible cost on performance/memory.
>
> For -W default, I guess that your question is the cost of emitting a
> warning: overhead when a warning is displayed, and overhead when the
> warning is filtered. Right?

-Wdefault means -Wonce or -Walways? If the former, I don't expect many
warnings to be emitted.

> For example, on test_os, PYTHONMALLOC=debug increases the peak memory
> usage from 10.5 MiB to 15.8 MiB: +50%.

I see. For my use cases, this would be acceptable :-)

But I think this should be documented, for example:

"""Currently, developer mode adds negligible CPU time overhead, but can
increase memory consumption significantly if many small objects are
allocated. This is subject to change in the future."""

Regards

Antoine.

From victor.stinner at gmail.com Thu Nov 16 07:56:16 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 13:56:16 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID:

2017-11-16 13:43 GMT+01:00 Antoine Pitrou :
>> About asyncio debug mode, if it's too expensive to be used to develop
>> an application, maybe there is an issue with the additional checks?
>> Should we remove some of them to be able to use asyncio debug mode in
>> more cases?
>
> Well, I'm sure some people like them, otherwise they wouldn't have been
> added to the codebase in the first place :-) For example, knowing where
> a Future was created can make debug logs much more informative.

The most expensive part of asyncio debug mode is the code that extracts
the current stack when a coroutine or a handle is created.

Would it make sense to modify asyncio debug mode to skip the traceback by
default, but add a second debug level which extracts the traceback?

Victor

From victor.stinner at gmail.com Thu Nov 16 08:04:43 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 14:04:43 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: <2ce3e311-35b0-90dd-6d74-615145bef3d3@python.org>
References: <20171116131132.3ef65380@fsol>
 <2ce3e311-35b0-90dd-6d74-615145bef3d3@python.org>
Message-ID:

2017-11-16 13:54 GMT+01:00 Antoine Pitrou :
> -Wdefault means -Wonce or -Walways? If the former, I don't expect many
> warnings to be emitted.
It's kind of funny that passing "-W default" changes the "default"
behaviour.

"-W default" is documented as: "Explicitly request the default behavior
(printing each warning once per source line)."
https://docs.python.org/dev/using/cmdline.html#cmdoption-w

Default warnings filters in release mode:

$ python3 -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('ignore', None, <class 'DeprecationWarning'>, None, 0),
 ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0),
 ('ignore', None, <class 'ImportWarning'>, None, 0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('ignore', None, <class 'ResourceWarning'>, None, 0)]

-Wd adds a warnings filter with the "default" action matching all
warnings (any kind, any message, any line number) if I understand
correctly:

$ python3 -Wd -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('default', re.compile('', re.IGNORECASE), <class 'Warning'>,
  re.compile(''), 0),
 ('ignore', None, <class 'DeprecationWarning'>, None, 0),
 ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0),
 ('ignore', None, <class 'ImportWarning'>, None, 0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('ignore', None, <class 'ResourceWarning'>, None, 0)]

"default" and "once" actions are different:

default: "print the first occurrence of matching warnings for **each
location where the warning is issued**"
once: "print only the first occurrence of matching warnings, **regardless
of location**"

>> For example, on test_os, PYTHONMALLOC=debug increases the peak memory
>> usage from 10.5 MiB to 15.8 MiB: +50%.
>
> I see. For my use cases, this would be acceptable :-)
>
> But I think this should be documented, for example:
>
> """Currently, developer mode adds negligible CPU time overhead, but can
> increase memory consumption significantly if many small objects are
> allocated. This is subject to change in the future."""

+50% memory is unacceptable to develop on embedded devices, or more
generally on low-memory systems. But in that case, you can enable the
options enabled by -X dev manually, without PYTHONMALLOC=debug :-)

Ok, I will document that, I like your proposed paragraph.

Victor

From ncoghlan at gmail.com Thu Nov 16 08:05:35 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 16 Nov 2017 23:05:35 +1000
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: <2ce3e311-35b0-90dd-6d74-615145bef3d3@python.org>
References: <20171116131132.3ef65380@fsol>
 <2ce3e311-35b0-90dd-6d74-615145bef3d3@python.org>
Message-ID:

On 16 November 2017 at 22:54, Antoine Pitrou wrote:
>
> Hi Victor,
>
> Thanks for the answer!
>
> On 16/11/2017 at 13:48, Victor Stinner wrote:
> >
> > faulthandler has a negligible cost on performance/memory.
> >
> > For -W default, I guess that your question is the cost of emitting a
> > warning: overhead when a warning is displayed, and overhead when the
> > warning is filtered. Right?
>
> -Wdefault means -Wonce or -Walways? If the former, I don't expect many
> warnings to be emitted.

Confusingly, neither of these: default, once, module, and always are all
different settings.

once: once per process (regardless of location)
module: once per module (regardless of line)
default: once per location (line+module combination)
always: every time

Still, even with once-per-location behaviour, the warning overhead should
be minimal.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
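A quick way to see the difference between two of those actions in a fresh
interpreter (the warning messages here are arbitrary):

    import warnings

    warnings.simplefilter("always")
    for _ in range(2):
        warnings.warn("noisy")   # printed on both iterations

    warnings.simplefilter("default")
    for _ in range(2):
        warnings.warn("quiet")   # printed only once for this location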
From nad at python.org Thu Nov 16 09:45:00 2017
From: nad at python.org (Ned Deily)
Date: Thu, 16 Nov 2017 15:45:00 +0100
Subject: [Python-Dev] [python-committers] IPv6 issues on *.python.org
In-Reply-To: <4d5124a3-7283-8b37-b4ff-f06daec0c6c0@python.org>
References: <4d5124a3-7283-8b37-b4ff-f06daec0c6c0@python.org>
Message-ID: <33A44BF9-BA9D-4B80-B0B3-B75FF0487640@python.org>

On Nov 16, 2017, at 15:07, Antoine Pitrou wrote:
> I'm having IPv6 issues on *.python.org. Is anyone having the same
> issues or is it just me? Who should I report this to?

The PSF Infrastructure team is available via email or IRC:
http://psf-salt.readthedocs.io/overview/#the-infrastructure-team

--
Ned Deily
nad at python.org -- []

From yselivanov.ml at gmail.com Thu Nov 16 09:47:31 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 16 Nov 2017 09:47:31 -0500
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID:

On Thu, Nov 16, 2017 at 7:56 AM, Victor Stinner wrote:
> 2017-11-16 13:43 GMT+01:00 Antoine Pitrou :
>>> About asyncio debug mode, if it's too expensive to be used to develop
>>> an application, maybe there is an issue with the additional checks?
>>> Should we remove some of them to be able to use asyncio debug mode in
>>> more cases?
>>
>> Well, I'm sure some people like them, otherwise they wouldn't have been
>> added to the codebase in the first place :-) For example, knowing where
>> a Future was created can make debug logs much more informative.
>
> The most expensive part of asyncio debug mode is the code that extracts
> the current stack when a coroutine or a handle is created.

Probably the most expensive part of asyncio debug mode is all coroutines
wrapped with CoroWrapper. This makes every "await" and coroutine
instantiation much slower (think 2-3x).

> Would it make sense to modify asyncio debug mode to skip the traceback
> by default, but add a second debug level which extracts the traceback?

Let's keep it simple. I'm big -1 on adding different "debug levels", they
are always confusing.

Overall I don't see an issue with enabling asyncio debug mode when python
is executed with "-X dev". If the purpose of the flag is to make Python
super verbose and it will not be recommended to use it in production --
then why not.

Yury

From barry at python.org Thu Nov 16 10:32:58 2017
From: barry at python.org (Barry Warsaw)
Date: Thu, 16 Nov 2017 10:32:58 -0500
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: Message-ID:

On Nov 16, 2017, at 06:53, Victor Stinner wrote:

> What do you think? Is it ok to include asyncio in the global "developer
> mode"?

I'm +1 on that, and the performance hit doesn't bother me for a developer
mode.

-Barry

From barry at python.org Thu Nov 16 10:35:25 2017
From: barry at python.org (Barry Warsaw)
Date: Thu, 16 Nov 2017 10:35:25 -0500
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: Message-ID: <9C0B98D5-1427-4512-8394-38D71DABBEF8@python.org>

On Nov 15, 2017, at 21:57, Victor Stinner wrote:
>
> Since Brett and Nick like the idea and nobody complained about it, I
> implemented the -X dev option:

Cool!
What would you think about printing a summary of the settings under the
standard banner when you run the REPL under -X dev? I'd rather not have to
look it up in some obscure docs page whenever I use it.

If not that, then what about having a -msettings module or some such that
prints it out?

-Barry

From barry at python.org Thu Nov 16 10:39:15 2017
From: barry at python.org (Barry Warsaw)
Date: Thu, 16 Nov 2017 10:39:15 -0500
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID: <8A1CD389-A3A4-4D04-89E2-AE13B37AB643@python.org>

On Nov 16, 2017, at 09:47, Yury Selivanov wrote:

> Let's keep it simple. I'm big -1 on adding different "debug levels",
> they are always confusing.

Oh, this one's easy.

-X dev == some debugging
-X deve == a little more
-X devel == give it to me!
-X develo == now you're talking (literally)
-X develop == thank you sir, may I have another?
-X develope == here comes the flood
-X developer == needle inna haystack!

Cheers,
-Barry

From brent.bejot at gmail.com Thu Nov 16 11:28:45 2017
From: brent.bejot at gmail.com (brent bejot)
Date: Thu, 16 Nov 2017 11:28:45 -0500
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

Hello all,

Noticed that "MRO" is not actually defined in the PEP and it seems like it
should be. Probably in the Performance section where the abbreviation is
first used outside of a function name.

-Brent

On Thu, Nov 16, 2017 at 7:22 AM, Ivan Levkivskyi wrote:
> On 16 November 2017 at 07:56, Nick Coghlan wrote:
>> On 16 November 2017 at 04:39, Ivan Levkivskyi wrote:
>>> Nick is exactly right here. Jim, if you want to propose alternative
>>> wording, then we could consider it.
>>
>> Jim also raised an important point that needs clarification at the spec
>> level: given multiple entries in "orig_bases" with __mro_entries__
>> methods, do all such methods get passed the *same* orig_bases tuple? Or
>> do they receive partially resolved ones, such that bases listed before
>> them have already been resolved to their MRO entries by the time they
>> run?
>
> Yes, they all get the same initial bases tuple as an argument. Passing
> updated ones will cost a bit more and I don't think it will be needed
> (in the worst case a base can resolve another base by calling its
> __mro_entries__ manually).
> I will clarify this in the PEP.
>
> --
> Ivan
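For anyone who wants the abbreviation spelled out: MRO is the Method
Resolution Order, the linearized chain of classes that Python computes for
every class and exposes directly:

    >>> class A: pass
    ...
    >>> class B(A): pass
    ...
    >>> B.__mro__
    (<class '__main__.B'>, <class '__main__.A'>, <class 'object'>)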
From victor.stinner at gmail.com Thu Nov 16 11:42:11 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 17:42:11 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID:

2017-11-16 15:47 GMT+01:00 Yury Selivanov :
> Overall I don't see an issue with enabling asyncio debug mode when
> python is executed with "-X dev".

Cool!

> If the purpose of the flag is to
> make Python super verbose and it will not be recommended to use it in
> production -- then why not.

Running an application in debug mode or "developer mode" was never
recommended by anyone. Who runs Django in debug mode on production? :-)
(Nobody, I hope.)

I'm working on the -X dev documentation to clarify its purpose. No, the
developer mode must not flood stdout with debug messages. It should only
emit warnings if a potential or real bug is detected.

For example, I don't want to enable the debug mode of ftplib in the
developer mode, since this option logs each FTP command to stdout. It's a
different use case.

About performance, IMHO the purpose of the developer mode is to enable
additional checks to ease debugging, checks which are too expensive to be
enabled by default. This definition fits well with the asyncio debug mode.

Victor

From antoine at python.org Thu Nov 16 11:52:25 2017
From: antoine at python.org (Antoine Pitrou)
Date: Thu, 16 Nov 2017 17:52:25 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: References: <20171116130943.484d2807@fsol>
Message-ID: <9ec82a54-f4e0-a4d3-a6a1-5683de474af9@python.org>

On 16/11/2017 at 17:42, Victor Stinner wrote:
>
> Running an application in debug mode or "developer mode" was never
> recommended by anyone.

I don't know. Almost everyone runs Python with __debug__ set to True :-)

Regards

Antoine.

From victor.stinner at gmail.com Thu Nov 16 12:00:38 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 16 Nov 2017 18:00:38 +0100
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line
 option
In-Reply-To: <9ec82a54-f4e0-a4d3-a6a1-5683de474af9@python.org>
References: <20171116130943.484d2807@fsol>
 <9ec82a54-f4e0-a4d3-a6a1-5683de474af9@python.org>
Message-ID:
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com

From v+python at g.nevcal.com Thu Nov 16 12:16:21 2017
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 16 Nov 2017 09:16:21 -0800
Subject: [Python-Dev] Add a developer mode to Python: -X dev command line option
In-Reply-To: <8A1CD389-A3A4-4D04-89E2-AE13B37AB643@python.org>
References: <20171116130943.484d2807@fsol> <8A1CD389-A3A4-4D04-89E2-AE13B37AB643@python.org>
Message-ID: <753980a7-84f7-3469-123c-b076321d0b30@g.nevcal.com>

On 11/16/2017 7:39 AM, Barry Warsaw wrote:
> On Nov 16, 2017, at 09:47, Yury Selivanov wrote:
>
>> Let's keep it simple. I'm big -1 on adding different "debug levels",
>> they are always confusing.
> Oh, this one's easy.
>
> -X dev == some debugging
> -X deve == a little more
> -X devel == give it to me!

This is the "devel level", where you solve those bugs from hell...

> -X develo == now you're talking (literally)
> -X develop == thank you sir, may I have another?
> -X develope == here comes the flood
> -X developer == needle inna haystack!
>
> Cheers,
> -Barry
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/v%2Bpython%40g.nevcal.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From k7hoven at gmail.com Thu Nov 16 12:41:26 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Thu, 16 Nov 2017 19:41:26 +0200
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID:

On Thu, Nov 16, 2017 at 6:28 PM, brent bejot wrote:

> Hello all,
>
> Noticed that "MRO" is not actually defined in the PEP and it seems like it
> should be. Probably in the Performance section where the abbreviation is
> first used outside of a function name.

I don't think it will hurt if I suggest that __bases__, bases, "original bases", mro, __orig_bases__, MRO, __mro__ and "concatenated mro entries" are all defined as synonyms of each other, except with different meanings :-)

-- Koos

> -Brent
>
> On Thu, Nov 16, 2017 at 7:22 AM, Ivan Levkivskyi wrote:
>
>> On 16 November 2017 at 07:56, Nick Coghlan wrote:
>>
>>> On 16 November 2017 at 04:39, Ivan Levkivskyi wrote:
>>>
>>>> Nick is exactly right here. Jim, if you want to propose alternative
>>>> wording, then we could consider it.
>>>
>>> Jim also raised an important point that needs clarification at the spec
>>> level: given multiple entries in "orig_bases" with __mro_entries__ methods,
>>> do all such methods get passed the *same* orig_bases tuple? Or do they
>>> receive partially resolved ones, such that bases listed before them have
>>> already been resolved to their MRO entries by the time they run.
>>
>> Yes, they all get the same initial bases tuple as an argument. Passing
>> updated ones will cost a bit more and I don't think it will be needed (in
>> the worst case a base can resolve another base by calling its
>> __mro_entries__ manually).
>> I will clarify this in the PEP.
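PS. More seriously: a minimal sketch of the protocol as I currently read the PEP draft (made-up names, not tested against the reference implementation):

    class Alias:
        def __init__(self, real):
            self.real = real

        def __mro_entries__(self, bases):
            # every entry gets the same original 'bases' tuple
            return (self.real,)

    class Base:
        pass

    class C(Alias(Base)):   # after substitution, behaves like: class C(Base)
        pass

    # C.__mro__ is (C, Base, object), and the original tuple containing
    # the Alias instance is preserved as C.__orig_bases__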
>>
>> --
>> Ivan
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brent.bejot%40gmail.com

> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/k7hoven%40gmail.com

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ethan at stoneleaf.us Thu Nov 16 12:58:59 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 16 Nov 2017 09:58:59 -0800
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: References: Message-ID: <5A0DD1E3.5090508@stoneleaf.us>

On 11/16/2017 04:22 AM, Ivan Levkivskyi wrote:
> On 16 November 2017 at 07:56, Nick Coghlan wrote:
>
>> Jim also raised an important point that needs clarification at the spec
>> level: given multiple entries in "orig_bases" with __mro_entries__ methods,
>> do all such methods get passed the *same* orig_bases tuple? Or do they
>> receive partially resolved ones, such that bases listed before them have
>> already been resolved to their MRO entries by the time they run.
>
> Yes, they all get the same initial bases tuple as an argument. Passing
> updated ones will cost a bit more and I don't think it will be needed
> (in the worst case a base can resolve another base by calling its
> __mro_entries__ manually). I will clarify this in the PEP.

If the extra complexity is to:

> - given orig_bases, a method could avoid injecting bases already listed
>   if it wanted to
> - allowing multiple items to be returned provides a way to programmatically
>   combine mixins without having to define a new subclass for each combination

And each method is passed the same original tuple (without other methods' updates) then don't we end up in a situation where we can have duplicate base classes?

--
~Ethan~

From brett at python.org Thu Nov 16 14:18:28 2017
From: brett at python.org (Brett Cannon)
Date: Thu, 16 Nov 2017 19:18:28 +0000
Subject: [Python-Dev] module customization
In-Reply-To: <5A0CDB73.3030507@stoneleaf.us>
References: <5A0CDB73.3030507@stoneleaf.us>
Message-ID:

On Wed, 15 Nov 2017 at 16:27 Ethan Furman wrote:

> So there are currently two ways to customize a module, with PEP 562
> proposing a third.
>
> The first method involves creating a standard class object, instantiating
> it, and replacing the sys.modules entry with it.
>
> The second way is fairly similar, but instead of replacing the entire
> sys.modules entry, its class is updated to be the
> class just created -- something like sys.modules['mymod'].__class__ =
> MyNewClass .
>
> My request: Can someone write a better example of the second method? And
> include __getattr__ ?

There's actually an example in the stdlib thanks to importlib.util.LazyLoader, although it uses __getattribute__() and not __getattr__(): https://github.com/python/cpython/blob/d505a29a15a6f9315d8c46445b8a0cccfc2048b8/Lib/importlib/util.py#L212

-Brett

> My question: Does that __getattr__ method have 'self' as the first
> parameter? If not, why not, and if so, shouldn't
> PEP 562's __getattr__ also take a 'self'?
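Since it's defined as a method on a ModuleType subclass, it does take 'self' -- 'self' is simply the module instance. PEP 562's module-level __getattr__, on the other hand, is a plain function living in the module's namespace, so there is no instance to pass. A rough, untested sketch of the second method (the attribute and helper are made up):

    import sys
    import types

    class MyModule(types.ModuleType):
        def __getattr__(self, name):
            # 'self' is the module object itself
            if name == "expensive_attr":            # hypothetical attribute
                value = compute_expensive_attr()    # hypothetical helper
                setattr(self, name, value)          # cache it for next time
                return value
            raise AttributeError(
                "module {!r} has no attribute {!r}".format(self.__name__, name))

    # as the last line of the module being customized:
    sys.modules[__name__].__class__ = MyModule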
>
> --
> ~Ethan~
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From levkivskyi at gmail.com Thu Nov 16 17:57:06 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 16 Nov 2017 23:57:06 +0100
Subject: [Python-Dev] PEP 560: bases classes / confusion
In-Reply-To: <5A0DD1E3.5090508@stoneleaf.us>
References: <5A0DD1E3.5090508@stoneleaf.us>
Message-ID:

On 16 November 2017 at 18:58, Ethan Furman wrote:

> On 11/16/2017 04:22 AM, Ivan Levkivskyi wrote:
>> On 16 November 2017 at 07:56, Nick Coghlan wrote:
>>
>>> Jim also raised an important point that needs clarification at the spec
>>> level: given multiple entries in "orig_bases" with __mro_entries__ methods,
>>> do all such methods get passed the *same* orig_bases tuple? Or do they
>>> receive partially resolved ones, such that bases listed before them have
>>> already been resolved to their MRO entries by the time they run.
>>
>> Yes, they all get the same initial bases tuple as an argument. Passing
>> updated ones will cost a bit more and I don't think it will be needed
>> (in the worst case a base can resolve another base by calling its
>> __mro_entries__ manually). I will clarify this in the PEP.
>
> If the extra complexity is to:
>
>> - given orig_bases, a method could avoid injecting bases already listed
>>   if it wanted to
>> - allowing multiple items to be returned provides a way to programmatically
>>   combine mixins without having to define a new subclass for each combination
>
> And each method is passed the same original tuple (without other methods'
> updates) then don't we end up in a situation where we can have duplicate
> base classes?

Not that it is impossible now (in a certain sense):

class MultiMeta(type):
    def __new__(mcls, name, bases, ns):
        return super().__new__(mcls, name, (), ns)

class MultiBase(metaclass=MultiMeta):
    pass

class C(MultiBase, list, list, MultiBase, dict, dict, dict):  # OK
    pass

What is probably confusing in the current PEP text is that it doesn't say clearly that the substitution happens before any other steps in __build_class__. Therefore all normal checks (like duplicate bases and MRO consistency) happen, and e.g.

class C(List[int], List[str]):
    pass

will fail with:

TypeError: duplicate base class list

(by the way, while playing with this I found a bug in the reference implementation)

--
Ivan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tismer at stackless.com Fri Nov 17 04:15:24 2017
From: tismer at stackless.com (Christian Tismer)
Date: Fri, 17 Nov 2017 10:15:24 +0100
Subject: [Python-Dev] unittest isolation and warnings
Message-ID: <28703b31-40fe-56f0-dd56-a1b7d104b50d@stackless.com>

Hi guys,

when writing tests, I suddenly discovered that unittest is not isolated to warnings.

Example:
One of my tests emits warnings when a certain condition is met. Instead of reporting the error immediately, it uses warnings, and at the end of the test, an error is produced if there were warnings.

    if hasattr(__main__, "__warningregistry__"):
        raise RuntimeError("There are errors, see above.")

By chance, I discovered that an error was suddenly triggered without a warning. That must mean the warning existed already from another test as a left-over.
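For reference, the pattern looks roughly like this (heavily simplified, with made-up helper names):

    import __main__
    import unittest
    import warnings

    class RemoteServerTest(unittest.TestCase):
        def test_many_conditions(self):
            for condition in all_conditions():      # made-up helper
                if not check(condition):            # made-up helper
                    warnings.warn("check failed: %s" % condition)
            if hasattr(__main__, "__warningregistry__"):
                raise RuntimeError("There are errors, see above.")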
My question:
Is that known, and is that intended?
To what extent are the test cases isolated from each other?

I do admit that my usage of warnings is somewhat special. But it is very convenient to report many errors on remote servers.

Cheers -- Chris

--
Christian Tismer             :^)   tismer at stackless.com
Software Consulting          :     http://www.stackless.com/
Karl-Liebknecht-Str. 121     :     https://github.com/PySide
14482 Potsdam                :     GPG key -> 0xFB7BEE0E
phone +49 173 24 18 776  fax +49 (30) 700143-0023

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: OpenPGP digital signature
URL: 

From k7hoven at gmail.com Fri Nov 17 08:40:58 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Fri, 17 Nov 2017 15:40:58 +0200
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca> <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

On Thu, Nov 16, 2017 at 6:53 AM, Guido van Rossum wrote:

> On Wed, Nov 15, 2017 at 6:50 PM, Guido van Rossum wrote:
>>
>> Actually it linked to http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
>> from which I managed to download what looks like the complete
>> c061457_ISO_IEC_TR_24772_2013.pdf (336 pages) after clicking on an
>> "I accept" button (I didn't read what I accepted :-). The $200 is for
>> the printed copy I presume.
>
> So far I learned one thing from the report. They use the term
> "vulnerabilities" liberally, defining it essentially as "bug":
>
>> All programming languages contain constructs that are incompletely
>> specified, exhibit undefined behaviour, are implementation-dependent, or
>> are difficult to use correctly. The use of those constructs may therefore
>> give rise to *vulnerabilities*, as a result of which, software programs
>> can execute differently than intended by the writer.
>
> They then go on to explain that sometimes vulnerabilities can be
> exploited, but I object to calling all bugs vulnerabilities -- that's just
> using a scary word to get attention for a sleep-inducing document
> containing such gems as "Use floating-point arithmetic only when absolutely
> needed" (page 230).

I don't like such a definition of "vulnerability" either. Some bugs can be vulnerabilities (those that can be exploited) and some vulnerabilities can be bugs. But there are definitely types of vulnerabilities that are not bugs -- the DoS vulnerability that is eliminated by hash randomization is one.

There may also be a gray area of bugs that can be vulnerabilities but only in some special situation. I think it's ok to call those vulnerabilities too.

-- Koos

PS. How come I haven't seen a proposal to remove the float type from builtins yet? ;-)

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From k7hoven at gmail.com Fri Nov 17 08:53:52 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Fri, 17 Nov 2017 15:53:52 +0200
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca> <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

On Fri, Nov 17, 2017 at 3:40 PM, Koos Zevenhoven wrote:

> On Thu, Nov 16, 2017 at 6:53 AM, Guido van Rossum wrote:
>
>> On Wed, Nov 15, 2017 at 6:50 PM, Guido van Rossum wrote:
>>>
>>> Actually it linked to http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
>>> from which I managed to download what looks like the complete
>>> c061457_ISO_IEC_TR_24772_2013.pdf (336 pages) after clicking on an
>>> "I accept" button (I didn't read what I accepted :-). The $200 is for
>>> the printed copy I presume.
>>
>> So far I learned one thing from the report. They use the term
>> "vulnerabilities" liberally, defining it essentially as "bug":
>>
>>> All programming languages contain constructs that are incompletely
>>> specified, exhibit undefined behaviour, are implementation-dependent, or
>>> are difficult to use correctly. The use of those constructs may therefore
>>> give rise to *vulnerabilities*, as a result of which, software programs
>>> can execute differently than intended by the writer.
>>
>> They then go on to explain that sometimes vulnerabilities can be
>> exploited, but I object to calling all bugs vulnerabilities -- that's just
>> using a scary word to get attention for a sleep-inducing document
>> containing such gems as "Use floating-point arithmetic only when absolutely
>> needed" (page 230).
>
> I don't like such a definition of "vulnerability" either. Some bugs can
> be vulnerabilities (those that can be exploited) and some vulnerabilities
> can be bugs. But there are definitely types of vulnerabilities that are not
> bugs -- the DoS vulnerability that is eliminated by hash randomization is one.
>
> There may also be a gray area of bugs that can be vulnerabilities but only
> in some special situation. I think it's ok to call those vulnerabilities
> too.

Just to clarify the obvious: By the above, I *don't* mean that one could use the word "vulnerability" for any functionality that can be used in such a way that it creates a vulnerability. For example, `eval` or `exec` or `open` by themselves are not vulnerabilities.

-- Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brett at python.org Fri Nov 17 10:44:06 2017
From: brett at python.org (Brett Cannon)
Date: Fri, 17 Nov 2017 15:44:06 +0000
Subject: [Python-Dev] unittest isolation and warnings
In-Reply-To: <28703b31-40fe-56f0-dd56-a1b7d104b50d@stackless.com>
References: <28703b31-40fe-56f0-dd56-a1b7d104b50d@stackless.com>
Message-ID:

Tests are not isolated from the warnings system, so things will leak out. Your best option is to use the context manager in the warnings module to temporarily make all warnings raise exceptions and test for the exception (I'm at the airport, hence why I don't know the name of the context manager; the warnings module docs actually have a sample on how best to write tests that involve warnings).

On Fri, Nov 17, 2017, 01:34 Christian Tismer, wrote:

> Hi guys,
>
> when writing tests, I suddenly discovered that unittest
> is not isolated to warnings.
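Something like this is what I mean (untested and from memory, so double-check it against the warnings docs):

    import unittest
    import warnings

    class MyTest(unittest.TestCase):
        def test_emits_warning(self):
            with warnings.catch_warnings():
                warnings.simplefilter("error")   # every warning becomes an exception
                with self.assertRaises(UserWarning):
                    do_the_thing()               # made-up code under test

    # or, more directly, unittest's own helper:
    #       with self.assertWarns(UserWarning):
    #           do_the_thing()

catch_warnings() restores the previous filter state on exit, so the test shouldn't leak warning settings into the next one.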
>
> Example:
> One of my tests emits warnings when a certain condition is
> met. Instead of reporting the error immediately, it uses
> warnings, and at the end of the test, an error is produced
> if there were warnings.
>
>         if hasattr(__main__, "__warningregistry__"):
>             raise RuntimeError("There are errors, see above.")
>
> By chance, I discovered that an error was suddenly triggered without
> a warning. That must mean the warning existed already from
> another test as a left-over.
>
> My question:
> Is that known, and is that intended?
> To what extent are the test cases isolated from each other?
>
> I do admit that my usage of warnings is somewhat special.
> But it is very convenient to report many errors on remote servers.
>
> Cheers -- Chris
>
> --
> Christian Tismer             :^)   tismer at stackless.com
> Software Consulting          :     http://www.stackless.com/
> Karl-Liebknecht-Str. 121     :     https://github.com/PySide
> 14482 Potsdam                :     GPG key -> 0xFB7BEE0E
> phone +49 173 24 18 776  fax +49 (30) 700143-0023
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From donghee.na92 at gmail.com Fri Nov 17 11:56:06 2017
From: donghee.na92 at gmail.com (Dong-hee Na)
Date: Sat, 18 Nov 2017 01:56:06 +0900
Subject: [Python-Dev] Clarify the compatibility policy of lib2to3.
Message-ID: <481B9B19-BC52-4939-925E-D27F8313BA65@gmail.com>

Hi,

A few days ago, I submitted a patch (https://github.com/python/cpython/pull/4417) which updates 2to3 to convert `operator.isCallable(obj)` to `isinstance(obj, collections.abc.Callable)`.

This was Serhiy Storchaka's idea (https://bugs.python.org/issue32046) and I agree with his idea since `callable` is not available in all 3.x versions.

However, some people would like to clarify the policy of 2to3. Should 2to3 be compatible with every branch that is currently maintained, or does it convert to code for a specific Python 3.x version, tied to the specific version of 2to3?

Cordially,
Dong-hee

--
Dong-hee Na
Chungnam National University | Computer Science & Engineering
Tel: +82 010-3353-9127
Email: donghee.na92 at gmail.com
Linkedin: https://www.linkedin.com/in/dong-hee-na-2b713b49/
Github: https://github.com/corona10

From status at bugs.python.org Fri Nov 17 12:09:50 2017
From: status at bugs.python.org (Python tracker)
Date: Fri, 17 Nov 2017 18:09:50 +0100 (CET)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20171117170950.71CF511A8E1@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (2017-11-10 - 2017-11-17)
Python tracker at https://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.
Issues counts and deltas: open 6261 (+14) closed 37553 (+46) total 43814 (+60) Open issues with patches: 2409 Issues opened (39) ================== #27494: 2to3 parser failure caused by a comma after a generator expres https://bugs.python.org/issue27494 reopened by serhiy.storchaka #31963: AMD64 Debian PGO 3.x buildbot: compilation failed with an inte https://bugs.python.org/issue31963 reopened by vstinner #32004: Allow specifying code packing order in audioop adpcm functions https://bugs.python.org/issue32004 opened by MosesofEgypt #32005: multiprocessing.Array misleading error message in slice assign https://bugs.python.org/issue32005 opened by steven.daprano #32006: multiprocessing.Array 'c' code is not documented https://bugs.python.org/issue32006 opened by steven.daprano #32007: nis module fails to build against glibc-2.26 https://bugs.python.org/issue32007 opened by floppymaster #32008: Example suggest to use a TLSv1 socket https://bugs.python.org/issue32008 opened by kroeckx #32014: multiprocessing Server's shutdown method useless send message https://bugs.python.org/issue32014 opened by stevezh #32016: Python 3.6.3 venv FAILURE https://bugs.python.org/issue32016 opened by nihon2000 #32017: profile.Profile() has no method enable() https://bugs.python.org/issue32017 opened by pitrou #32019: Interactive shell doesn't work with readline bracketed paste https://bugs.python.org/issue32019 opened by Aaron.Meurer #32021: Brotli encoding is not recognized by mimetypes https://bugs.python.org/issue32021 opened by Andrey #32022: Python problem - == RESTART: Shell ===== https://bugs.python.org/issue32022 opened by shamon51 #32024: Nominal decorator function call syntax is inconsistent with re https://bugs.python.org/issue32024 opened by ncoghlan #32025: Add time.thread_time() https://bugs.python.org/issue32025 opened by pitrou #32026: Memory leaks in Python on Windows https://bugs.python.org/issue32026 opened by pjna #32028: Syntactically wrong suggestions by the new custom print statem https://bugs.python.org/issue32028 opened by mdraw #32030: PEP 432: Rewrite Py_Main() https://bugs.python.org/issue32030 opened by vstinner #32031: Do not use the canonical path in pydoc test_mixed_case_module_ https://bugs.python.org/issue32031 opened by xdegaye #32033: The pwd module implementation incorrectly sets some attributes https://bugs.python.org/issue32033 opened by xdegaye #32035: Documentation of zipfile.ZipFile().writestr() fails to mention https://bugs.python.org/issue32035 opened by Daniel5148 #32038: Add API to intercept socket.close() https://bugs.python.org/issue32038 opened by yselivanov #32039: timeit documentation should describe caveats https://bugs.python.org/issue32039 opened by barry #32041: Cannot cast '\0' to c_void_p https://bugs.python.org/issue32041 opened by Ilya.Kulakov #32042: Option for comparing values instead of reprs in doctest https://bugs.python.org/issue32042 opened by Tom???? Pet??????ek #32043: Add a new -X dev option: "developer mode" https://bugs.python.org/issue32043 opened by vstinner #32045: Does json.dumps have a memory leak? 
https://bugs.python.org/issue32045 opened by rohandsa #32046: 2to3 fix for operator.isCallable() https://bugs.python.org/issue32046 opened by serhiy.storchaka #32047: asyncio: enable debug mode when -X dev is used https://bugs.python.org/issue32047 opened by vstinner #32049: 2.7.14 does not uninstall cleanly if installation was run as S https://bugs.python.org/issue32049 opened by niemalsnever #32050: Deprecated python3 -x option https://bugs.python.org/issue32050 opened by vstinner #32051: Possible issue in multiprocessing doc https://bugs.python.org/issue32051 opened by 1a1a11a #32052: Provide access to buffer of asyncio.StreamReader https://bugs.python.org/issue32052 opened by Bruce Merry #32054: Creating RPM on Python 2 works, but Python 3 fails because of https://bugs.python.org/issue32054 opened by pgacv2 #32055: Reconsider comparison chaining for containment tests https://bugs.python.org/issue32055 opened by ncoghlan #32056: bug in Lib/wave.py https://bugs.python.org/issue32056 opened by BT123 #32059: detect_modules() in setup.py must also search the sysroot path https://bugs.python.org/issue32059 opened by xdegaye #32060: Should an ABC fail if no abstract methods are defined? https://bugs.python.org/issue32060 opened by Alex Corcoles #32063: test_multiprocessing_forkserver failed with OSError: [Errno 48 https://bugs.python.org/issue32063 opened by vstinner Most recent 15 issues with no replies (15) ========================================== #32063: test_multiprocessing_forkserver failed with OSError: [Errno 48 https://bugs.python.org/issue32063 #32056: bug in Lib/wave.py https://bugs.python.org/issue32056 #32054: Creating RPM on Python 2 works, but Python 3 fails because of https://bugs.python.org/issue32054 #32052: Provide access to buffer of asyncio.StreamReader https://bugs.python.org/issue32052 #32049: 2.7.14 does not uninstall cleanly if installation was run as S https://bugs.python.org/issue32049 #32046: 2to3 fix for operator.isCallable() https://bugs.python.org/issue32046 #32035: Documentation of zipfile.ZipFile().writestr() fails to mention https://bugs.python.org/issue32035 #32019: Interactive shell doesn't work with readline bracketed paste https://bugs.python.org/issue32019 #32017: profile.Profile() has no method enable() https://bugs.python.org/issue32017 #32016: Python 3.6.3 venv FAILURE https://bugs.python.org/issue32016 #32014: multiprocessing Server's shutdown method useless send message https://bugs.python.org/issue32014 #32008: Example suggest to use a TLSv1 socket https://bugs.python.org/issue32008 #32006: multiprocessing.Array 'c' code is not documented https://bugs.python.org/issue32006 #32005: multiprocessing.Array misleading error message in slice assign https://bugs.python.org/issue32005 #32004: Allow specifying code packing order in audioop adpcm functions https://bugs.python.org/issue32004 Most recent 15 issues waiting for review (15) ============================================= #32056: bug in Lib/wave.py https://bugs.python.org/issue32056 #32050: Deprecated python3 -x option https://bugs.python.org/issue32050 #32047: asyncio: enable debug mode when -X dev is used https://bugs.python.org/issue32047 #32046: 2to3 fix for operator.isCallable() https://bugs.python.org/issue32046 #32043: Add a new -X dev option: "developer mode" https://bugs.python.org/issue32043 #32031: Do not use the canonical path in pydoc test_mixed_case_module_ https://bugs.python.org/issue32031 #32030: PEP 432: Rewrite Py_Main() https://bugs.python.org/issue32030 #32025: Add 
time.thread_time() https://bugs.python.org/issue32025 #32002: test_c_locale_coercion fails when the default LC_CTYPE != "C" https://bugs.python.org/issue32002 #31997: SSL lib does not handle trailing dot (period) in hostname or c https://bugs.python.org/issue31997 #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 #31985: Deprecate openfp() in aifc, sunau and wave https://bugs.python.org/issue31985 #31978: make it simpler to round fractions https://bugs.python.org/issue31978 #31971: idle_test: failures on x86 Windows7 3.x https://bugs.python.org/issue31971 #31968: exec(): method's default arguments from dict-inherited globals https://bugs.python.org/issue31968 Top 10 most discussed issues (10) ================================= #32038: Add API to intercept socket.close() https://bugs.python.org/issue32038 19 msgs #31975: Add a default filter for DeprecationWarning in __main__ https://bugs.python.org/issue31975 18 msgs #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 10 msgs #32021: Brotli encoding is not recognized by mimetypes https://bugs.python.org/issue32021 10 msgs #31701: faulthandler dumps 'Windows fatal exception: code 0xe06d7363' https://bugs.python.org/issue31701 8 msgs #32025: Add time.thread_time() https://bugs.python.org/issue32025 7 msgs #32030: PEP 432: Rewrite Py_Main() https://bugs.python.org/issue32030 7 msgs #32050: Deprecated python3 -x option https://bugs.python.org/issue32050 6 msgs #31356: Add context manager to temporarily disable GC https://bugs.python.org/issue31356 5 msgs #32022: Python problem - == RESTART: Shell ===== https://bugs.python.org/issue32022 5 msgs Issues closed (46) ================== #15606: re.VERBOSE whitespace behavior not completely documented https://bugs.python.org/issue15606 closed by serhiy.storchaka #24555: Python logic error when deal with re and muti-threading https://bugs.python.org/issue24555 closed by serhiy.storchaka #24896: It is undocumented that re.UNICODE and re.LOCALE affect re.IGN https://bugs.python.org/issue24896 closed by serhiy.storchaka #29180: skip tests that raise PermissionError in test_os (non-root use https://bugs.python.org/issue29180 closed by xdegaye #29181: skip tests that raise PermissionError in test_tarfile (non-roo https://bugs.python.org/issue29181 closed by xdegaye #30133: Strings that end with properly escaped backslashes cause error https://bugs.python.org/issue30133 closed by serhiy.storchaka #30143: Using collections ABC from collections.abc rather than collect https://bugs.python.org/issue30143 closed by serhiy.storchaka #30148: Pathological regex behaviour https://bugs.python.org/issue30148 closed by serhiy.storchaka #30349: Preparation for advanced set syntax in regular expressions https://bugs.python.org/issue30349 closed by serhiy.storchaka #30399: Get rid of trailing comma in the repr() of BaseException https://bugs.python.org/issue30399 closed by serhiy.storchaka #30696: infinite loop in PyRun_InteractiveLoopFlags() https://bugs.python.org/issue30696 closed by xdegaye #30950: Convert round() to Arument Clinic https://bugs.python.org/issue30950 closed by serhiy.storchaka #31691: Include missing info on required build steps and how to build https://bugs.python.org/issue31691 closed by steve.dower #31702: Allow to specify the number of rounds for SHA-* hashing in cry https://bugs.python.org/issue31702 closed by serhiy.storchaka #31824: Missing default argument detail in documentation of StreamRead 
https://bugs.python.org/issue31824 closed by berker.peksag #31867: Duplicated keys in MIME type_map with different values https://bugs.python.org/issue31867 closed by asvetlov #31948: [EASY] Broken MSDN links in msilib docs https://bugs.python.org/issue31948 closed by Mariatta #31949: Bugs in PyTraceBack_Print() https://bugs.python.org/issue31949 closed by serhiy.storchaka #31976: Segfault when closing BufferedWriter from a different thread https://bugs.python.org/issue31976 closed by pitrou #31979: Simplify converting non-ASCII strings to int, float and comple https://bugs.python.org/issue31979 closed by serhiy.storchaka #31994: json encoder exception could be better https://bugs.python.org/issue31994 closed by serhiy.storchaka #31995: Set operations documentation error https://bugs.python.org/issue31995 closed by rhettinger #32001: @lru_cache needs to be called with () https://bugs.python.org/issue32001 closed by rhettinger #32009: seg fault when using Cntrl-q keymap to exit app https://bugs.python.org/issue32009 closed by martin.panter #32010: Multiple get "itemgetter" https://bugs.python.org/issue32010 closed by rhettinger #32011: Restore loading of TYPE_INT64 in marshal https://bugs.python.org/issue32011 closed by serhiy.storchaka #32012: Disallow ambiguous syntax f(x for x in [1],) https://bugs.python.org/issue32012 closed by ncoghlan #32013: _pickle: Py_DECREF seems to be missing from a failure case in https://bugs.python.org/issue32013 closed by serhiy.storchaka #32015: Asyncio looping during simultaneously socket read/write and re https://bugs.python.org/issue32015 closed by asvetlov #32018: inspect.signature does not respect PEP 8 https://bugs.python.org/issue32018 closed by yselivanov #32020: arraymodule: Missing Py_DECREF in failure case of make_array() https://bugs.python.org/issue32020 closed by serhiy.storchaka #32023: Always require parentheses for genexps in base class lists https://bugs.python.org/issue32023 closed by serhiy.storchaka #32027: argparse allow_abbrev option also controls short flag combinat https://bugs.python.org/issue32027 closed by berker.peksag #32029: cgi: TypeError when no argument string is found https://bugs.python.org/issue32029 closed by berker.peksag #32032: Module-level pickle tests test only default implementation https://bugs.python.org/issue32032 closed by serhiy.storchaka #32034: Error when unpickling IncompleteReadError https://bugs.python.org/issue32034 closed by yselivanov #32036: error mixing positional and non-positional arguments with `arg https://bugs.python.org/issue32036 closed by r.david.murray #32037: Pickle 32-bit integers with protocol 0 as INT instead of LONG https://bugs.python.org/issue32037 closed by serhiy.storchaka #32040: Sorting pahtlib.Paths does give the same order as sorting the https://bugs.python.org/issue32040 closed by r.david.murray #32044: pip3 install 3.6.3 crashes on startup; 3.5.4 works; on OS X 10 https://bugs.python.org/issue32044 closed by berker.peksag #32048: Misprint in the unittest.mock documentation. https://bugs.python.org/issue32048 closed by ?????????????? ???????????????? 
#32053: Inconsistent use of tabs and spaces in indentation not always
 https://bugs.python.org/issue32053  closed by skrah

#32057: time.sleep(n) interrupted by signal
 https://bugs.python.org/issue32057  closed by Bogdan Popa

#32058: Faulty behaviour in email.utils.parseaddr if square brackets i
 https://bugs.python.org/issue32058  closed by r.david.murray

#32061: test_httpservers.test_undecodable_filename() fails with [Errno
 https://bugs.python.org/issue32061  closed by vstinner

#32062: test_venv fails if zlib is not available
 https://bugs.python.org/issue32062  closed by vstinner

From steve.dower at python.org Fri Nov 17 14:11:16 2017
From: steve.dower at python.org (Steve Dower)
Date: Fri, 17 Nov 2017 11:11:16 -0800
Subject: [Python-Dev] Python possible vulnerabilities in concurrency
In-Reply-To: References: <6D3CB350-9475-4A8D-888A-CEEB95ADBCB2@maurya.on.ca> <20171114131509.26125738@fsol> <1510667720.21117.137.camel@janc.be>
Message-ID:

On 15Nov2017 2053, Guido van Rossum wrote:
> On Wed, Nov 15, 2017 at 6:50 PM, Guido van Rossum wrote:
>
> So far I learned one thing from the report. They use the term
> "vulnerabilities" liberally, defining it essentially as "bug":
>
>> All programming languages contain constructs that are incompletely
>> specified, exhibit undefined behaviour, are implementation-dependent,
>> or are difficult to use correctly. The use of those constructs may
>> therefore give rise to /vulnerabilities/, as a result of which,
>> software programs can execute differently than intended by the writer.
>
> They then go on to explain that sometimes vulnerabilities can be
> exploited, but I object to calling all bugs vulnerabilities -- that's
> just using a scary word to get attention for a sleep-inducing document
> containing such gems as "Use floating-point arithmetic only when
> absolutely needed" (page 230).

I looked at this report the first time it was posted and came to the same conclusion. It's only valuable in the sense that it makes clear just how perfect your code has to be to avoid being vulnerable, and since that level of perfection can never be achieved, the takeaway is that you can't achieve security solely within the application/framework/runtime.

It is convenient to have formally researched and collated it, so the rest of us can just write blog posts/PEPs stating it as fact, but I think most people will intuitively get the main point without referring to the report.

(Yes, I'm still interested in pushing PEP 551 forward :) I've been trying to get some actual companies other than Microsoft using it for the real-world experience, and I have a couple of conference talks coming up about it. There are implementations against v3.7.0a2 at https://github.com/zooba/cpython/tree/pep551 and against v3.6.3 at https://github.com/zooba/cpython/tree/pep551_36 )

Cheers,
Steve

From guido at python.org Fri Nov 17 14:29:25 2017
From: guido at python.org (Guido van Rossum)
Date: Fri, 17 Nov 2017 11:29:25 -0800
Subject: [Python-Dev] Clarify the compatibility policy of lib2to3.
In-Reply-To: <481B9B19-BC52-4939-925E-D27F8313BA65@gmail.com>
References: <481B9B19-BC52-4939-925E-D27F8313BA65@gmail.com>
Message-ID:

On Fri, Nov 17, 2017 at 8:56 AM, Dong-hee Na wrote:

> A few days ago, I submitted a patch (https://github.com/python/cpython/pull/4417)
> which updates 2to3 to convert `operator.isCallable(obj)` to
> `isinstance(obj, collections.abc.Callable)`.
> This was Serhiy Storchaka's idea (https://bugs.python.org/issue32046) and
> I agree with his idea since `callable` is not available in all 3.x versions.
>
> However, some people would like to clarify the policy of 2to3.
> Should 2to3 be compatible with every branch that is currently maintained,
> or does it convert to code for a specific Python 3.x version, tied to the
> specific version of 2to3?

Ideally it would generate code that works under all *still supported* versions of Python, not just under the version that happens to be used to run the converter. In the context of that specific bug, callable() appeared in Python 3.2, but the oldest Python 3 version that's still supported is 3.4, so we're safe generating callable().

--
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com Fri Nov 17 19:01:47 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 18 Nov 2017 01:01:47 +0100
Subject: [Python-Dev] Python initialization and embedded Python
Message-ID:

Hi,

The CPython internals evolved during the Python 3.7 cycle. I would like to know if we broke the C API or not.

Nick Coghlan and Eric Snow are working on cleaning up the Python initialization with the ongoing PEP 432: https://www.python.org/dev/peps/pep-0432/

Many global variables used by the "Python runtime" were moved to a new single "_PyRuntime" variable (a big structure made of sub-structures). See Include/internal/pystate.h.

A side effect of moving variables from random files into header files is that it's no longer possible to fully initialize _PyRuntime at "compilation time". For example, previously, it was possible to refer to local C functions (functions declared with "static", so only visible in the current file). Now a new "initialization function" must be called first.

In short, it means that using the "Python runtime" before it's initialized by _PyRuntime_Initialize() is now likely to crash. For example, calling PyMem_RawMalloc() before calling _PyRuntime_Initialize() now calls a NULL function pointer: it dereferences a NULL pointer, and so immediately crashes with a segmentation fault.

I'm writing this email to ask if this change is an issue or not for embedded Python and the Python C API. Is it still possible to call "all" functions of the C API before calling Py_Initialize()?

I was bitten by the bug while reworking the Py_Main() function to split it into subfunctions and clean up the code to handle the command line arguments and environment variables. I fixed the issue in main() by calling _PyRuntime_Initialize() as soon as possible: it's now the first instruction of main() :-) (See Programs/python.c)

To give a more concrete example: Py_DecodeLocale() is the recommended function to decode bytes from the operating system, but this function calls PyMem_RawMalloc(), which crashes before _PyRuntime_Initialize() is called. Is Py_DecodeLocale() used to initialize Python?

For example, "void Py_SetProgramName(wchar_t *);" expects a text string, whereas main() gives argv as bytes. Calling Py_SetProgramName() from argv requires decoding bytes... So use Py_DecodeLocale()...

Should we do something in Py_DecodeLocale()? Maybe crash if _PyRuntime_Initialize() wasn't called yet?

Maybe, the minimum change is to expose _PyRuntime_Initialize() in the public C API?
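To illustrate, the canonical embedding example from Doc/extending would now have to start with something like this (a sketch, not tested; _PyRuntime_Initialize() is currently a private function):

    #include <Python.h>

    int
    main(int argc, char *argv[])
    {
        /* would now have to come first, before anything that uses
           PyMem_RawMalloc() */
        _PyRuntime_Initialize();

        wchar_t *program = Py_DecodeLocale(argv[0], NULL);
        if (program == NULL) {
            fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
            exit(1);
        }
        Py_SetProgramName(program);
        Py_Initialize();
        PyRun_SimpleString("print('hello')");
        Py_Finalize();
        PyMem_RawFree(program);
        return 0;
    }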
Victor

From steve.dower at python.org Fri Nov 17 19:17:25 2017
From: steve.dower at python.org (Steve Dower)
Date: Fri, 17 Nov 2017 16:17:25 -0800
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID: <374687f1-f79a-edff-8092-1e41c5e1f1c6@python.org>

On 17Nov2017 1601, Victor Stinner wrote:
> In short, it means that using the "Python runtime" before it's
> initialized by _PyRuntime_Initialize() is now likely to crash. For
> example, calling PyMem_RawMalloc() before calling
> _PyRuntime_Initialize() now calls a NULL function pointer: it
> dereferences a NULL pointer, and so immediately crashes with a
> segmentation fault.
>
> I'm writing this email to ask if this change is an issue or not for
> embedded Python and the Python C API. Is it still possible to call
> "all" functions of the C API before calling Py_Initialize()?

I thought it was never possible to call most of the C API without initializing, except for certain APIs that are documented as being safe. I've certainly crashed many times calling C APIs before initialization. My intuition was that the only safe ones before were those that were used to initialize the runtime (Py_SetPath and such), which are also the ones being "upgraded" as part of this work.

If we have a good idea of which ones are [un]safe now, perhaps we should tag them explicitly in the docs? Do we know which ones are [un]safe?

Cheers,
Steve

From victor.stinner at gmail.com Fri Nov 17 20:05:06 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 18 Nov 2017 02:05:06 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
Message-ID:

Hi,

tl; dr I propose to extend the existing "stable API" to make it almost as complete as the current API. For example, add back PyTuple_GET_ITEM() to be stable API, but it becomes a function call rather than a macro. The final question is if it's not too late to iterate on an implementation of this idea for Python 3.7? Knowing that the stable API doesn't affect the "current API" at all, since the "new C API" (extended stable API) would only be accessible using an *opt-in* flag.

Since I failed to find time to write a proper PEP (sorry about that), and the upcoming deadline for Python 3.7 features is getting closer, I decided to dump my notes into this email. Sorry about the raw formatting.

This is the third round of feedback. The first one was on python-ideas last July, the second was at the CPython sprint last September at Instagram. Both were positive.

https://mail.python.org/pipermail/python-ideas/2017-July/046399.html

The C API of CPython is amazing. It made CPython as powerful as we know it today. Without the C API, Python wouldn't have numpy. Without numpy, Python wouldn't be as popular as it is nowadays.

Ok, enough for the introduction. The C API is awful. I hate it so much! The C API is old, error-prone, too big, and exposes every single CPython implementation detail... The C API is likely the first reason why faster implementations of Python (PyPy?) are not as popular as they should be. The fact that the C API is so widely used prevents many evolutions of CPython. CPython is stuck with its own (public) C API.

PyPy developers spent a lot of time making cffi great and advertising it all around the world. While it's a great project, the fact is that the C API *is* still used and the motivation to rewrite old code written with the C API to cffi, Cython or whatever else is too low. Maybe because the carrot is not big enough. Let's make a bigger carrot! (bigger than PyPy's amazing performance?
I'm not sure that it's doable :-( we will see)

Using Cython, it's simpler to write C extensions, and Cython code is more "portable" across different Python versions, since the C API evolved over the last 10 years. For example, Python 3 renamed PyString to PyBytes and dropped the PyInt type. Writing a C extension working on Python 2 and Python 3 requires "polluting" the code with many #ifdef. Cython helps to reduce them (or even avoid them? sorry, I don't know Cython well).

The C *API* is tightly linked to the *ABI*. I tried to explain it with an example in my article: https://vstinner.github.io/new-python-c-api.html

A known ABI issue is that it's not possible to load C extensions compiled in "release mode" on a Python compiled in "debug mode". The debug mode changes the API in a subtle way which changes the ABI and so makes C extensions incompatible. This issue prevents many people from using a Python debug build, whereas debug builds are very helpful to detect bugs earlier.

Another issue is that C extensions must be recompiled for each Python release (3.5, 3.6, 3.7, etc.). Linux vendors like Red Hat cannot provide a single binary for multiple Python versions, which prevents upgrading the "system" Python in the lifecycle of a distribution major version (or at least, it makes things much more complicated in terms of packaging: it multiplies packages for each Python version...).

. . .

Don't worry (be happy!), I'm not writing this email to complain, but to propose a solution. Aha!

I propose to modify the API step by step to add more functions to the "stable ABI" (PEP 384) to be able to compile more and more C extensions using the "stable API" (please try to follow, first it was B, now it's P... seriously, the difference between the ABI and the API is subtle, to simplify let's say that it's the same thing, ok? :-)).

I wrote a draft PEP, but I never found time to update it after the two rounds of feedback (python-ideas and the sprint) to write a proper PEP. So I will only give a link to my draft, sorry!

https://github.com/vstinner/misc/blob/master/python/pep_c_api.rst

In short, the plan is to add back the PyTuple_GET_ITEM() *API* to the "stable API" but change its implementation to a *function call* rather than the existing macro, so the compiled C extension will use a function call and so no longer rely on the ABI.

My plan is to have two main milestones:

(1) Python 3.7: Extend the *existing* opt-in "stable API" which requires to compile C extensions in a special mode. Maybe add an option in distutils to ease the compilation of a C extension with the "stable API"?

(2) In Python 3.8 --if the project is successful and the performance overhead is acceptable compared to the advantages of having C extensions working on multiple Python versions--, make the "stable API (without implementation details)" the default, but add a new opt-in option to give access to the "full API (with implementation details)" for debuggers and other people who understand what they do (like Cython?).

Note: currently, the "stable API" is accessible using the Py_LIMITED_API define, and the "full API" is accessible using the Py_BUILD_CORE define. No define gives the current C API.

My problem is more on the concrete implementation:

* Need to provide two different APIs using the same filenames (like: #include "Python.h")
* Need to extend distutils to have a flag to compile a C extension with one specific API (define Py_LIMITED_API or Py_BUILD_CORE?)
* Need to test many C extensions and check how many extensions are broken

My plan for Python 3.7 is to not touch the current API at all. There is no risk of backward incompatibility. You should only get issues if you opt in for the new API without implementation details.

Final note: Nothing new under the sun: PyPy already implemented my "idea"! Where the idea is a C API without macros; PyTuple_GET_ITEM() is already a function call in PyPy ;-)

Final question: Is it acceptable to iterate with many small changes on the C API to implement this idea in Python 3.7? Maybe only write a partial implementation, and finish it in Python 3.8?

Victor

From victor.stinner at gmail.com Fri Nov 17 20:22:27 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 18 Nov 2017 02:22:27 +0100
Subject: [Python-Dev] Show DeprecationWarning in debug mode?
Message-ID:

Hi,

I noticed that Python not only hides DeprecationWarning, but also PendingDeprecationWarning and ImportWarning by default. While I understand why we decided to hide these warnings from users for a Python compiled in release mode, why are they hidden in Python debug builds?

I'm asking the question because in debug mode, Python shows ResourceWarning warnings (whereas these warnings are hidden in release mode). Why display only ResourceWarning, but not other warnings, in debug mode?

At least, with the new Python 3.7 "developer mode" (-X dev), now you can be totally annoyed^W^W appreciate *all* these warnings :-)

Example:
------------------
$ cat x.py
import warnings
warnings.warn('Resource warning', ResourceWarning)
warnings.warn('Deprecation warning', DeprecationWarning)

# Release build: ignore all :-(
$ python3 x.py

# Debug build: ignore deprecation :-|
$ ./python x.py
x.py:2: ResourceWarning: Resource warning
  warnings.warn('Resource warning', ResourceWarning)

# Developer mode: show all :-)
$ ./python -X dev x.py
x.py:2: ResourceWarning: Resource warning
  warnings.warn('Resource warning', ResourceWarning)
x.py:3: DeprecationWarning: Deprecation warning
  warnings.warn('Deprecation warning', DeprecationWarning)
------------------

Or maybe we should start adding new modes like -X all-warnings-except-PendingDeprecationWarning, -X I-really-really-love-warnings and -X warnings-hater, as Barry proposed?

Victor

From storchaka at gmail.com Sat Nov 18 02:15:37 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 18 Nov 2017 09:15:37 +0200
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

18.11.17 02:01, Victor Stinner ????:
> Many global variables used by the "Python runtime" were moved to a new
> single "_PyRuntime" variable (a big structure made of sub-structures).
> See Include/internal/pystate.h.
>
> A side effect of moving variables from random files into header files
> is that it's no longer possible to fully initialize _PyRuntime at
> "compilation time". For example, previously, it was possible to refer
> to local C functions (functions declared with "static", so only visible
> in the current file). Now a new "initialization function" must be
> called first.
>
> In short, it means that using the "Python runtime" before it's
> initialized by _PyRuntime_Initialize() is now likely to crash. For
> example, calling PyMem_RawMalloc() before calling
> _PyRuntime_Initialize() now calls a NULL function pointer: it
> dereferences a NULL pointer, and so immediately crashes with a
> segmentation fault.

Wouldn't it be better to revert (at least part of) the moving of global variables? I still don't see a benefit of it.
> To give a more concrete example: Py_DecodeLocale() is the recommended
> function to decode bytes from the operating system, but this function
> calls PyMem_RawMalloc(), which crashes before
> _PyRuntime_Initialize() is called. Is Py_DecodeLocale() used to
> initialize Python?
>
> For example, "void Py_SetProgramName(wchar_t *);" expects a text
> string, whereas main() gives argv as bytes. Calling
> Py_SetProgramName() from argv requires decoding bytes... So use
> Py_DecodeLocale()...
>
> Should we do something in Py_DecodeLocale()? Maybe crash if
> _PyRuntime_Initialize() wasn't called yet?

I think Py_DecodeLocale() should be usable before calling Py_Initialize(). In the example in Doc/extending/extending.rst it is used before Py_Initialize(). If third-party code is based on this example, it will crash now.

From storchaka at gmail.com Sat Nov 18 02:30:06 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 18 Nov 2017 09:30:06 +0200
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: References: Message-ID:

18.11.17 03:05, Victor Stinner ????:
> tl; dr I propose to extend the existing "stable API" to make it almost
> as complete as the current API. For example, add back
> PyTuple_GET_ITEM() to be stable API, but it becomes a function call
> rather than a macro. The final question is if it's not too late to
> iterate on an implementation of this idea for Python 3.7? Knowing that
> the stable API doesn't affect the "current API" at all, since the "new
> C API" (extended stable API) would only be accessible using an
> *opt-in* flag.

There is the PyTuple_GetItem() function. The benefit of using PyTuple_GET_ITEM() in tight loops: a) it avoids redundant argument checks; b) it avoids function call overhead. Making PyTuple_GET_ITEM() a function will destroy half of the benefit. And this will make the ABI larger.

First of all, we need to document the whole API: what is stable, and in what version it became stable. Then I would separate three kinds of API physically: limited API, extended unstable API, and internal private API, and place their declarations in different headers. The headers with the internal API should not even be visible to third-party developers.

From storchaka at gmail.com Sat Nov 18 02:33:39 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 18 Nov 2017 09:33:39 +0200
Subject: [Python-Dev] Show DeprecationWarning in debug mode?
In-Reply-To: References: Message-ID:

18.11.17 03:22, Victor Stinner ????:
> I noticed that Python not only hides DeprecationWarning, but also
> PendingDeprecationWarning and ImportWarning by default. While I
> understand why we decided to hide these warnings from users for a Python
> compiled in release mode, why are they hidden in Python debug builds?
>
> I'm asking the question because in debug mode, Python shows
> ResourceWarning warnings (whereas these warnings are hidden in release
> mode). Why display only ResourceWarning, but not other warnings, in
> debug mode?

+1 for showing all warnings (except maybe PendingDeprecationWarning) in the debug build! I constantly forget about this.

From victor.stinner at gmail.com Sat Nov 18 04:13:36 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 18 Nov 2017 10:13:36 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: References: Message-ID:

Le 18 nov. 2017 08:32, "Serhiy Storchaka" a écrit :

Making PyTuple_GET_ITEM() a function will destroy half of the benefit. And this will make the ABI larger.
Sorry if I wasn't explicit about it: my idea of changing the API has an obvious impact on performance. That's why the first step of the second milestone of my plan is to spend time measuring the slowdown. The other part of my overall plan is to experiment with new optimizations. See my draft PEP for ideas.

The idea behind adding PyTuple_GET_ITEM() is to be able to compile C extensions using it, without having to modify the code. If modifying the code is required, I don't expect that you will be able to compile more than half of the C extensions on PyPI, like old code with no active maintainer...

Anyway, the PyTuple_GET_ITEM() will remain a macro in the default API for Python 3.7.

See also my blog post which explains why the fact that it is a macro prevents us from optimizing it, like having specialized compact tuple for small integers.

Victor

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tismer at stackless.com Sat Nov 18 04:25:35 2017
From: tismer at stackless.com (Christian Tismer)
Date: Sat, 18 Nov 2017 10:25:35 +0100
Subject: [Python-Dev] unittest isolation and warnings
In-Reply-To: References: <28703b31-40fe-56f0-dd56-a1b7d104b50d@stackless.com>
Message-ID:

Thanks a lot! Good to know.

Ciao -- Chris

On 17.11.17 16:44, Brett Cannon wrote:
> Tests are not isolated from the warnings system, so things will leak
> out. Your best option is to use the context manager in the warnings
> module to temporarily make all warnings raise exceptions and test for
> the exception (I'm at the airport, hence why I don't know the name of
> the context manager; the warnings module docs actually have a sample on
> how best to write tests that involve warnings).
>
> On Fri, Nov 17, 2017, 01:34 Christian Tismer, wrote:
>
>     Hi guys,
>
>     when writing tests, I suddenly discovered that unittest
>     is not isolated to warnings.
>
>     Example:
>     One of my tests emits warnings when a certain condition is
>     met. Instead of reporting the error immediately, it uses
>     warnings, and at the end of the test, an error is produced
>     if there were warnings.
>
>         if hasattr(__main__, "__warningregistry__"):
>             raise RuntimeError("There are errors, see above.")
>
>     By chance, I discovered that an error was suddenly triggered without
>     a warning. That must mean the warning existed already from
>     another test as a left-over.
>
>     My question:
>     Is that known, and is that intended?
>     To what extent are the test cases isolated from each other?
>
>     I do admit that my usage of warnings is somewhat special.
>     But it is very convenient to report many errors on remote servers.
>
>     Cheers -- Chris
>
>     --
>     Christian Tismer             :^)   tismer at stackless.com
>     Software Consulting          :     http://www.stackless.com/
>     Karl-Liebknecht-Str. 121     :     https://github.com/PySide
>     14482 Potsdam                :     GPG key -> 0xFB7BEE0E
>     phone +49 173 24 18 776  fax +49 (30) 700143-0023
>
>     _______________________________________________
>     Python-Dev mailing list
>     Python-Dev at python.org
>     https://mail.python.org/mailman/listinfo/python-dev
>     Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org

--
Christian Tismer             :^)   tismer at stackless.com
Software Consulting          :     http://www.stackless.com/
Karl-Liebknecht-Str. 121     :     https://github.com/PySide
14482 Potsdam                :     GPG key -> 0xFB7BEE0E
phone +49 173 24 18 776  fax +49 (30) 700143-0023

-------------- next part --------------
A non-text attachment was scrubbed...
From storchaka at gmail.com  Sat Nov 18 04:42:36 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 18 Nov 2017 11:42:36 +0200
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: References: Message-ID:

18.11.17 11:13, Victor Stinner wrote:
> The idea behind adding PyTuple_GET_ITEM() is to be able to compile C
> extensions using it, without having to modify the code.

The simplest way to do this:

#define PyTuple_GET_ITEM PyTuple_GetItem

This will not add new names to the ABI. Such defines can be added in a separate header file included for compatibility.

In any case making PyTuple_GET_ITEM() a function will break the following code:

    PyObject **items = &PyTuple_GET_ITEM(tuple, 0);
    Py_ssize_t size = PyTuple_GET_SIZE(tuple);
    foo(items, size);

From solipsis at pitrou.net  Sat Nov 18 06:27:54 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 18 Nov 2017 12:27:54 +0100
Subject: [Python-Dev] unittest isolation and warnings
References: <28703b31-40fe-56f0-dd56-a1b7d104b50d@stackless.com> Message-ID: <20171118122754.53817c18@fsol>

Hi Christian,

On Fri, 17 Nov 2017 10:15:24 +0100 Christian Tismer wrote:
>
> Example:
> One of my tests emits warnings when a certain condition is
> met. Instead of reporting the error immediately, it uses
> warnings, and at the end of the test, an error is produced
> if there were warnings.

I suggest you try using subtests. This would allow you to report several errors from a given test method.
https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests

Regards

Antoine.

From solipsis at pitrou.net  Sat Nov 18 06:31:06 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 18 Nov 2017 12:31:06 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
References: Message-ID: <20171118123106.5f0db3ea@fsol>

I agree with Serhiy. It doesn't make sense to add PyTuple_GET_ITEM to the stable ABI. People who want to benefit from the stable ABI should use PyTuple_GetItem. That's not very complicated.

Regards

Antoine.

On Sat, 18 Nov 2017 11:42:36 +0200 Serhiy Storchaka wrote:
> 18.11.17 11:13, Victor Stinner wrote:
> > The idea behind adding PyTuple_GET_ITEM() is to be able to compile C
> > extensions using it, without having to modify the code.
>
> The simplest way to do this:
>
> #define PyTuple_GET_ITEM PyTuple_GetItem
>
> This will not add new names to the ABI. Such defines can be added in a
> separate header file included for compatibility.
>
> In any case making PyTuple_GET_ITEM() a function will break the
> following code:
>
>     PyObject **items = &PyTuple_GET_ITEM(tuple, 0);
>     Py_ssize_t size = PyTuple_GET_SIZE(tuple);
>     foo(items, size);
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org

From solipsis at pitrou.net  Sat Nov 18 06:38:48 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 18 Nov 2017 12:38:48 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
References: Message-ID: <20171118123848.4484704b@fsol>

On Sat, 18 Nov 2017 10:13:36 +0100 Victor Stinner wrote:
>
> Anyway, PyTuple_GET_ITEM() will remain a macro in the default API for
> Python 3.7.
> > See also my blog post which explains why the fact that it is a macro > prevents us from optimizing it, like having specialized compact tuple for > small integers. I'm not sure this would be an optimization. You'll add checks to PyTuple_GET_ITEM() to select the "kind" of tuple at runtime, and this may very well make things slower. You would need a JIT with type constraints to remove the overhead and truly gain from the optimization. Besides, PyTuple_GET_ITEM() can't really get any faster if all you care about is the *objects* in the tuple, not their value. Regards Antoine. From solipsis at pitrou.net Sat Nov 18 06:32:53 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 18 Nov 2017 12:32:53 +0100 Subject: [Python-Dev] Python initialization and embedded Python References: Message-ID: <20171118123253.10250ead@fsol> On Sat, 18 Nov 2017 01:01:47 +0100 Victor Stinner wrote: > > Maybe, the minimum change is to expose _PyRuntime_Initialize() in the > public C API? +1. Also a symmetric PyRuntime_Finalize() function (even if it's a no-op currently). Regards Antoine. From ncoghlan at gmail.com Sat Nov 18 08:50:01 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 18 Nov 2017 23:50:01 +1000 Subject: [Python-Dev] Make the stable API-ABI usable In-Reply-To: References: Message-ID: On 18 November 2017 at 11:05, Victor Stinner wrote: > Hi, > > tl; dr I propose to extend the existing "stable API" to make it almost > as complete as the current API. For example, add back > PyTuple_GET_ITEM() to be stable API, but it becomes a function call > rather than a macro. The final question is if it's not too late to > iterate on an implementation of this idea for Python 3.7? Knowing that > the stable API doesn't affect the "current API" at all, since the "new > C API" (extended stable API) would only be accessible using an > *opt-in* flag. I'm -1 on expanding the stable API/ABI in 3.7 (especially without a PEP), but I'm +1 on refactoring the way we maintain it, with a view to expanding it (with function calls substituting in for the macros in Py_LIMITED_API mode) in 3.8. This isn't an urgent change, but the strict backwards compatibility policy means it's one where we'll be stuck with any mistakes we make for a long time. (Proper use of symbol versioning might offer a subsequent escape clause, but that introduces its own cross-platform compatibility problems). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Nov 18 08:58:46 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 18 Nov 2017 23:58:46 +1000 Subject: [Python-Dev] Make the stable API-ABI usable In-Reply-To: References: Message-ID: On 18 November 2017 at 23:50, Nick Coghlan wrote: > On 18 November 2017 at 11:05, Victor Stinner wrote: >> Hi, >> >> tl; dr I propose to extend the existing "stable API" to make it almost >> as complete as the current API. For example, add back >> PyTuple_GET_ITEM() to be stable API, but it becomes a function call >> rather than a macro. The final question is if it's not too late to >> iterate on an implementation of this idea for Python 3.7? Knowing that >> the stable API doesn't affect the "current API" at all, since the "new >> C API" (extended stable API) would only be accessible using an >> *opt-in* flag. 
> I'm -1 on expanding the stable API/ABI in 3.7 (especially without a
> PEP), but I'm +1 on refactoring the way we maintain it, with a view to
> expanding it (with function calls substituting in for the macros in
> Py_LIMITED_API mode) in 3.8.

Expanding on this concept: I think PEP 432 may be a reasonable model here, whereby you can write a mostly-core-developer focused PEP that sets out your vision for where you'd like to get to eventually, and then that PEP provides context for issue-level refactorings that might otherwise seem like code churn with no apparent purpose.

Each refactoring will still need to stand as beneficial on its own (usually on grounds of code clarity, or otherwise making future maintenance easier, even if it makes near term backports harder), but that process tends to run more smoothly when there's a shared understanding of the overarching goal.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From k7hoven at gmail.com  Sat Nov 18 09:13:28 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Sat, 18 Nov 2017 16:13:28 +0200
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: References: Message-ID:

Your email didn't compile. The compiler says that it's a naming conflict, but actually I think you forgot a semicolon! (Don't worry, it happens to all of us, whether we be happy or not :-)

--Koos

On Sat, Nov 18, 2017 at 3:05 AM, Victor Stinner wrote:

> Hi,
>
> tl; dr I propose to extend the existing "stable API" to make it almost
> as complete as the current API. For example, add back
> PyTuple_GET_ITEM() to be stable API, but it becomes a function call
> rather than a macro. The final question is if it's not too late to
> iterate on an implementation of this idea for Python 3.7? Knowing that
> the stable API doesn't affect the "current API" at all, since the "new
> C API" (extended stable API) would only be accessible using an
> *opt-in* flag.
>
>
> Since I failed to find time to write a proper PEP (sorry about that),
> and the approaching deadline for Python 3.7 features is getting closer, I
> decided to dump my dump into this email. Sorry about the raw
> formatting.
>
> It's a third round of feedback. The first one was done in python-ideas
> last July, the second was at the CPython sprint last September at
> Instagram. Both were positive.
>
> https://mail.python.org/pipermail/python-ideas/2017-July/046399.html
>
> The C API of CPython is amazing. It made CPython as powerful as we
> know it today. Without the C API, Python wouldn't have numpy. Without
> numpy, Python wouldn't be as popular as it is nowadays. Ok, enough for
> the introduction.
>
> The C API is awful. I hate it so much! The C API is old, error prone,
> too big, and exposes every single CPython implementation detail... The
> C API is likely the first reason why faster implementations of Python
> (PyPy?) are not as popular as they should be.
>
> The fact that the C API is so widely used prevents many evolutions of
> CPython. CPython is stuck by its own (public) C API.
>
> PyPy developers spent a lot of time to make cffi great and advertise
> it all around the world. While it's a great project, the fact is that
> the C API *is* still used and the motivation to rewrite old code
> written with the C API to cffi, Cython or whatever else is too low.
> Maybe because the carrot is not big enough. Let's make a bigger
> carrot! (bigger than PyPy's amazing performance?
> I'm not sure that it's
> doable :-( we will see)
>
> Using Cython, it's simpler to write C extensions, and Cython code is
> more "portable" across different Python versions, since the C API has
> evolved over the last 10 years. For example, Python 3 renamed PyString
> to PyBytes and dropped the PyInt type. Writing a C extension working
> on Python 2 and Python 3 requires "polluting" the code with many
> #ifdef. Cython helps to reduce them (or even avoid them? sorry, I
> don't know Cython well).
>
> The C *API* is tightly linked to the *ABI*. I tried to explain it with
> an example in my article:
>
> https://vstinner.github.io/new-python-c-api.html
>
> A known ABI issue is that it's not possible to load C extensions
> compiled in "release mode" on a Python compiled in "debug mode". The
> debug mode changes the API in a subtle way which changes the ABI and
> so makes C extensions incompatible. This issue prevents many people
> from using a Python debug build, whereas debug builds are very helpful
> to detect bugs earlier.
>
> Another issue is that C extensions must be recompiled for each Python
> release (3.5, 3.6, 3.7, etc.). Linux vendors like Red Hat cannot
> provide a single binary for multiple Python versions, which prevents
> upgrading the "system" Python in the lifecycle of a distribution major
> version (or at least, it makes things much more complicated in terms
> of packaging, and it multiplies packages for each Python version...).
>
> .
> .
> .
>
> Don't worry (be happy!), I'm not writing this email to complain, but
> to propose a solution. Aha!
>
> I propose to modify the API step by step to add more functions to the
> "stable ABI" (PEP 384) to be able to compile more and more C
> extensions using the "stable API" (please try to follow, first it was
> B, now it's P... seriously, the difference between the ABI and the API
> is subtle, to simplify let's say that it's the same thing, ok? :-)).
>
> I wrote a draft PEP, but I never found time to update it after the two
> rounds of feedback (python-ideas and the sprint) to write a proper
> PEP. So I will only give a link to my draft, sorry!
>
> https://github.com/vstinner/misc/blob/master/python/pep_c_api.rst
>
> In short, the plan is to add back the PyTuple_GET_ITEM() *API* to the
> "stable API" but change its implementation to a *function call* rather
> than the existing macro, so the compiled C extension will use a
> function call and so doesn't rely on the ABI anymore.
>
>
> My plan is to have two main milestones:
>
> (1) Python 3.7: Extend the *existing* opt-in "stable API", which
> requires compiling C extensions in a special mode. Maybe add an
> option in distutils to ease the compilation of a C extension with the
> "stable API"?
>
> (2) In Python 3.8 --if the project is successful and the performance
> overhead is acceptable compared to the advantages of having C
> extensions working on multiple Python versions--, make the "stable API
> (without implementation details)" the default, but add a new opt-in
> option to give access to the "full API (with implementation details)"
> for debuggers and other people who understand what they do (like
> Cython?).
>
> Note: currently, the "stable API" is accessible using the
> Py_LIMITED_API define, and the "full API" is accessible using the
> Py_BUILD_CORE define. No define gives the current C API.
>
> My problem is more on the concrete implementation:
>
> * Need to provide two different APIs using the same filenames (like:
> #include "Python.h")
>
> * Need to extend distutils to have a flag to compile a C extension
> with one specific API (define Py_LIMITED_API or Py_BUILD_CORE?)
>
> * Need to test many C extensions and check how many extensions are broken
>
>
> My plan for Python 3.7 is to not touch the current API at all. There
> is no risk of backward incompatibility. You should only get issues if
> you opt in for the new API without implementation details.
>
>
> Final note: Nothing new under the sun: PyPy already implemented my
> "idea"! Where the idea is a C API without macros; PyTuple_GET_ITEM()
> is already a function call in PyPy ;-)
>
>
> Final question: Is it acceptable to iterate on many small changes on
> the C API to implement this idea in Python 3.7? Maybe only write a
> partial implementation, and finish it in Python 3.8?
>
> Victor
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/k7hoven%40gmail.com
>

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

From ncoghlan at gmail.com  Sat Nov 18 09:17:59 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 19 Nov 2017 00:17:59 +1000
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

On 18 November 2017 at 10:01, Victor Stinner wrote:
> I'm writing this email to ask if this change is an issue or not for
> embedded Python and the Python C API. Is it still possible to call
> "all" functions of the C API before calling Py_Initialize()?

It isn't technically permitted to call any of them, unless their documentation specifically says that calling them before `Py_Initialize` is permitted (and that permission is only given for a select few configuration APIs in https://docs.python.org/3/c-api/init.html).

While it's still PEP 432's intention to eventually expose a public multi-phase start-up API, it's *also* the case that we're not actually ready to do that yet - we're not sure we have the data model right, and we don't want to commit to a supported API until that's resolved.

So for Python 3.7, I'd suggest pursuing one of the following options:

1. Add a variant of Py_DecodeLocale that accepts a memory allocation function directly and reports back both the allocated pointer and its size (allowing the calling program to manage that memory); or
2. Offer a new `Py_SetProgramNameFromString` API that accepts a `char *` directly. That way, CPython can take care of lazily decoding it after the decoding machinery has been fully set up, rather than expecting the embedding application to always do it;

(While we could also make the promise that PyMem_RawMalloc and Py_DecodeLocale will be callable before Py_Initialize, I don't think we're far enough into the startup refactoring process to be making those kinds of promises).

Cheers,
Nick.
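(For reference, the embedding pattern being protected here — adapted from the Doc/extending example — is roughly the following; a sketch, not the verbatim documentation code:)

```c
#include <Python.h>

int main(int argc, char *argv[])
{
    /* Py_DecodeLocale() runs before Py_Initialize(), which is why it
       must not depend on the runtime already being initialized. */
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);   /* must come before Py_Initialize() */
    Py_Initialize();
    PyRun_SimpleString("print('hello from embedded Python')\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}
```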
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com  Sat Nov 18 10:45:56 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 18 Nov 2017 17:45:56 +0200
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

18.11.17 16:17, Nick Coghlan wrote:
> On 18 November 2017 at 10:01, Victor Stinner wrote:
>> I'm writing this email to ask if this change is an issue or not for
>> embedded Python and the Python C API. Is it still possible to call
>> "all" functions of the C API before calling Py_Initialize()?
>
> It isn't technically permitted to call any of them, unless their
> documentation specifically says that calling them before
> `Py_Initialize` is permitted (and that permission is only given for a
> select few configuration APIs in
> https://docs.python.org/3/c-api/init.html).

The documentation of Py_Initialize() is not complete. It mentions only Py_SetProgramName(), Py_SetPythonHome() and Py_SetPath(). But in other places it is documented that Py_SetStandardStreamEncoding(), PyImport_AppendInittab() and PyImport_ExtendInittab() should be called before Py_Initialize(). And the embedding examples call Py_DecodeLocale() before Py_Initialize(). PyMem_RawMalloc(), PyMem_RawFree() and PyInitFrozenExtensions() are called before Py_Initialize() in Py_FrozenMain(). Also these functions call _PyMem_RawStrdup().

Hence, the minimal set of functions that can be called before Py_Initialize() is:

* Py_SetProgramName()
* Py_SetPythonHome()
* Py_SetPath()
* Py_SetStandardStreamEncoding()
* PyImport_AppendInittab()
* PyImport_ExtendInittab()
* Py_DecodeLocale()
* PyMem_RawMalloc()
* PyMem_RawFree()
* PyInitFrozenExtensions()

From brett at python.org  Sat Nov 18 12:13:06 2017
From: brett at python.org (Brett Cannon)
Date: Sat, 18 Nov 2017 17:13:06 +0000
Subject: [Python-Dev] Show DeprecationWarning in debug mode?
In-Reply-To: References: Message-ID:

+1 from me as well.

On Sat, Nov 18, 2017, 02:36 Serhiy Storchaka wrote:

> 18.11.17 03:22, Victor Stinner wrote:
> > I noticed that Python not only hides DeprecationWarning, but also
> > PendingDeprecationWarning and ImportWarning by default. While I
> > understand why we decided to hide these warnings to users for a Python
> > compiled in release mode, why are they hidden in Python debug builds?
> >
> > I'm asking the question because in debug mode, Python shows
> > ResourceWarning warnings (whereas these warnings are hidden in release
> > mode). Why only display ResourceWarning, but not other warnings, in
> > debug mode?
>
> +1 for showing all warnings (except maybe PendingDeprecationWarning) in
> the debug build! I constantly forget about this.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org
>

From barry at python.org  Sat Nov 18 16:02:27 2017
From: barry at python.org (Barry Warsaw)
Date: Sat, 18 Nov 2017 16:02:27 -0500
Subject: [Python-Dev] Show DeprecationWarning in debug mode?
In-Reply-To: References: Message-ID:

On Nov 17, 2017, at 20:22, Victor Stinner wrote:
>
> Or maybe we should start adding new modes like -X
> all-warnings-except-PendingDeprecationWarning, -X
> I-really-really-love-warnings and -X warnings-hater, as Barry
> proposed?
Well, if I can't convince you about a `-X the-flufls-gonna-gitcha` mode, +1 for turning on all those other warnings in debug mode.

-Barry

From victor.stinner at gmail.com  Sat Nov 18 18:18:28 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 19 Nov 2017 00:18:28 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: References: Message-ID:

On 18 Nov 2017 10:44, "Serhiy Storchaka" wrote:

The simplest way to do this:

#define PyTuple_GET_ITEM PyTuple_GetItem

This will not add new names to the ABI. Such defines can be added in a separate header file included for compatibility.


It is exactly what I am proposing :-)

Victor

From ncoghlan at gmail.com  Sat Nov 18 21:17:22 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 19 Nov 2017 12:17:22 +1000
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

On 19 November 2017 at 01:45, Serhiy Storchaka wrote:
> 18.11.17 16:17, Nick Coghlan wrote:
>> On 18 November 2017 at 10:01, Victor Stinner wrote:
>>> I'm writing this email to ask if this change is an issue or not for
>>> embedded Python and the Python C API. Is it still possible to call
>>> "all" functions of the C API before calling Py_Initialize()?
>>
>> It isn't technically permitted to call any of them, unless their
>> documentation specifically says that calling them before
>> `Py_Initialize` is permitted (and that permission is only given for a
>> select few configuration APIs in
>> https://docs.python.org/3/c-api/init.html).
>
> The documentation of Py_Initialize() is not complete. It mentions only
> Py_SetProgramName(), Py_SetPythonHome() and Py_SetPath(). But in other
> places it is documented that Py_SetStandardStreamEncoding(),
> PyImport_AppendInittab() and PyImport_ExtendInittab() should be called
> before Py_Initialize(). And the embedding examples call Py_DecodeLocale()
> before Py_Initialize(). PyMem_RawMalloc(), PyMem_RawFree() and
> PyInitFrozenExtensions() are called before Py_Initialize() in
> Py_FrozenMain(). Also these functions call _PyMem_RawStrdup().
>
> Hence, the minimal set of functions that can be called before
> Py_Initialize() is:
>
> * Py_SetProgramName()
> * Py_SetPythonHome()
> * Py_SetPath()
> * Py_SetStandardStreamEncoding()
> * PyImport_AppendInittab()
> * PyImport_ExtendInittab()
> * Py_DecodeLocale()
> * PyMem_RawMalloc()
> * PyMem_RawFree()
> * PyInitFrozenExtensions()

OK, in that case I think the answer to Victor's question is:

1. Breaking the ability to call Py_DecodeLocale() before calling Py_Initialize() is a compatibility break with the API implied by our own usage examples, and we'll need to revert the breakage for 3.7, and ensure at least one release's worth of DeprecationWarning before requiring either the use of an alternative API (where the caller controls the memory management), or else a new lower level pre-initialization API (i.e. making `PyRuntime_Initialize` a public API)
2. We should provide a consolidated list of these functions in the C API initialization docs
3. We should add more test cases to _testembed.c that ensure they all work correctly prior to Py_Initialize (some of them are already tested there, but definitely not all of them)

Cheers,
Nick.
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sun Nov 19 01:36:20 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 19 Nov 2017 16:36:20 +1000
Subject: [Python-Dev] Show DeprecationWarning in debug mode?
In-Reply-To: References: Message-ID:

On 18 November 2017 at 11:22, Victor Stinner wrote:
> Hi,
>
> I noticed that Python not only hides DeprecationWarning, but also
> PendingDeprecationWarning and ImportWarning by default. While I
> understand why we decided to hide these warnings to users for a Python
> compiled in release mode, why are they hidden in Python debug builds?
>
> I'm asking the question because in debug mode, Python shows
> ResourceWarning warnings (whereas these warnings are hidden in release
> mode). Why only display ResourceWarning, but not other warnings, in
> debug mode?

I don't recall the exact reasoning (if I ever even knew it), but if I had to guess, it would just be because ResourceWarning is newer than the others.

BytesWarning has its own control flags and shouldn't be on by default even in debug builds, but I think it would be reasonable to have at least ImportWarning (which our own test suite should never be hitting) and DeprecationWarning on by default in debug builds.

For PendingDeprecationWarning, I don't really mind either way, but "everything except BytesWarning, because that has its own dedicated command line flags" would be an easier guideline to remember.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sun Nov 19 02:22:26 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 19 Nov 2017 17:22:26 +1000
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

On 13 November 2017 at 01:45, Guido van Rossum wrote:
> On Sun, Nov 12, 2017 at 1:24 AM, Nick Coghlan wrote:
>>
>> In Python 2.7 and Python 3.2, the default warning filters were updated to hide
>> DeprecationWarning by default, such that deprecation warnings in development
>> tools that were themselves written in Python (e.g. linters, static analysers,
>> test runners, code generators) wouldn't be visible to their users unless they
>> explicitly opted in to seeing them.
>
> Looking at the official What's New entry for the change
> (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings)
> it's not just about development tools. It's about any app (or tool, or
> utility, or program) written in Python whose users just treat it as "some
> app", not as something that's necessarily part of their Python environment.
> While in extreme cases such apps can *bundle* their own Python interpreter
> (like Dropbox does), many developers opt to assume or ensure that Python is
> available, perhaps via the OS package management system. (Because my day job
> is software development I am having a hard time coming up with concrete
> examples that aren't development tools, but AFAIK at Dropbox the
> *deployment* of e.g. Go binaries is managed through utilities written in
> Python. The Go developers couldn't care less about that.)

It occurs to me that the original Dropbox client itself would be an example of deprecation warnings in deployed code being unhelpful noise - assuming that the runtime warnings were reported back to the developers, the deprecation warnings would be entirely irrelevant to a developer trying to debug a production issue with client/server communication.
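For reference, the behavioural change in the PEP essentially amounts to one extra entry in the default filter list, roughly equivalent to the following sketch:

```python
import warnings

# PEP 565's proposed default ("default::DeprecationWarning:__main__"):
# DeprecationWarning is shown once per location again, but only when it
# is attributed to code running directly in __main__.
warnings.filterwarnings("default", category=DeprecationWarning,
                        module="__main__")
```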
I've updated the PEP to try to make the explanation of the historical rationale more accurate: https://github.com/python/peps/commit/30daada7867dd7f0e008545c7fd98612282ec602

I still emphasise the developer tooling case, as I think that's the easiest one for Python developers to follow when it comes to appreciating the problems with the old defaults (I know it is for me), but I've added some references to regular user facing applications as well.

I've also added a new "Documentation Updates" section, which doesn't technically need to be in the PEP, but I think is useful to include in terms of giving the existing features more visibility than they might otherwise receive (I learned quite a few new things myself about the basics of warnings control researching and working on implementing this PEP, so I assume plenty of end users are in a similar situation, where we know the warnings module exists, but aren't necessarily completely familiar with how to reconfigure it at runtime).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Sun Nov 19 02:29:10 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 19 Nov 2017 17:29:10 +1000
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

On 19 November 2017 at 17:22, Nick Coghlan wrote:
> I've updated the PEP to try to make the explanation of the historical
> rationale more accurate:
> https://github.com/python/peps/commit/30daada7867dd7f0e008545c7fd98612282ec602

With these changes, I think the version now live at https://www.python.org/dev/peps/pep-0565/ is ready for pronouncement.

Are you happy to review and pronounce on that in this thread, or would you prefer that I started a new one?

Cheers,
Nick.

P.S. The actual implementation still needs some work, due to details of the dual Python/C warnings module implementation, and the subtleties of how that interacts with the re module. However, I've implemented a working version with a test case, and have a pretty clear idea of the extra test cases that we need to close some of the test coverage gaps that draft implementation has revealed.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com  Sun Nov 19 02:53:30 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 19 Nov 2017 09:53:30 +0200
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

19.11.17 04:17, Nick Coghlan wrote:
> 1. Breaking the ability to call Py_DecodeLocale() before calling Py_Initialize()
> is a compatibility break with the API implied by our own usage
> examples, and we'll need to revert the breakage for 3.7, and ensure at
> least one release's worth of DeprecationWarning before requiring
> either the use of an alternative API (where the caller controls the
> memory management), or else a new lower level pre-initialization API
> (i.e. making `PyRuntime_Initialize` a public API)

There is a way to control the memory manager. The caller should just define their own PyMem_RawMalloc(), PyMem_RawFree(), etc. It seems to me that the reasons for introducing these functions were:

1. Get around the implementation detail that malloc(0) could return NULL. PyMem_RawMalloc() should always return a unique address (unless there is an error).

2. Allow the caller to control the memory management by providing their own implementations.

Let's use the existing possibilities and not expand the API. I don't think the deprecation and breaking compatibility are needed here.
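To make Serhiy's second point concrete, here is a sketch of an embedder taking over the raw domain through the existing allocator API (the zero-size handling mirrors point 1; the names are illustrative):

```c
#include <Python.h>
#include <stdlib.h>

/* Custom raw allocators.  PyMem_RawMalloc(0) must return a unique,
   non-NULL pointer, hence the "size ? size : 1" adjustments. */
static void *my_raw_malloc(void *ctx, size_t size)
{ return malloc(size ? size : 1); }

static void *my_raw_calloc(void *ctx, size_t nelem, size_t elsize)
{ return calloc(nelem ? nelem : 1, elsize ? elsize : 1); }

static void *my_raw_realloc(void *ctx, void *ptr, size_t size)
{ return realloc(ptr, size ? size : 1); }

static void my_raw_free(void *ctx, void *ptr)
{ free(ptr); }

int main(void)
{
    PyMemAllocatorEx alloc = {
        NULL, my_raw_malloc, my_raw_calloc, my_raw_realloc, my_raw_free
    };
    /* Install the custom allocators before any other C-API call. */
    PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &alloc);
    /* ... Py_DecodeLocale(), Py_SetProgramName(), Py_Initialize() ... */
    return 0;
}
```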
From victor.stinner at gmail.com  Sun Nov 19 03:52:31 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 19 Nov 2017 09:52:31 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID:

Maybe we can find a compromise: revert the change on memory allocators. They are too special to require calling PyRuntime_Init() first.

Currently, you cannot call PyMem_SetAllocator() before PyRuntime_Init().

Victor

On 19 Nov 2017 08:55, "Serhiy Storchaka" wrote:

> 19.11.17 04:17, Nick Coghlan wrote:
>
>> 1. Breaking the ability to call Py_DecodeLocale() before calling Py_Initialize()
>> is a compatibility break with the API implied by our own usage
>> examples, and we'll need to revert the breakage for 3.7, and ensure at
>> least one release's worth of DeprecationWarning before requiring
>> either the use of an alternative API (where the caller controls the
>> memory management), or else a new lower level pre-initialization API
>> (i.e. making `PyRuntime_Initialize` a public API)
>
> There is a way to control the memory manager. The caller should just
> define their own PyMem_RawMalloc(), PyMem_RawFree(), etc. It seems to me
> that the reasons for introducing these functions were:
>
> 1. Get around the implementation detail that malloc(0) could return NULL.
> PyMem_RawMalloc() should always return a unique address (unless there is an error).
>
> 2. Allow the caller to control the memory management by providing their
> own implementations.
>
> Let's use the existing possibilities and not expand the API. I don't think the
> deprecation and breaking compatibility are needed here.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com
>

From storchaka at gmail.com  Sun Nov 19 05:26:40 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 19 Nov 2017 12:26:40 +0200
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

13.11.17 01:34, Nick Coghlan wrote:
> On 13 November 2017 at 03:10, Serhiy Storchaka wrote:
>> 12.11.17 11:24, Nick Coghlan wrote:
>>>
>>> The PEP also proposes repurposing the existing FutureWarning category
>>> to explicitly mean "backwards compatibility warnings that should be
>>> shown to users of Python applications" since:
>>>
>>> - we don't tend to use FutureWarning for its original nominal purpose
>>> (changes that will continue to run but will do something different)
>>
>> FutureWarning currently is used for its original nominal purpose in the re
>> and ElementTree modules.
>
> If the future warnings relate to regex and XML parsing, they'd still
> fall under the "for display to users" category, since those modules
> can't tell if the input data was application provided or part of an
> end user interface like a configuration file.

In the case of regular expressions or XPath the warnings could fall under the "for display to users" category, though in most cases they are emitted for hardcoded expressions. It is easy to fix these expressions, making them work unambiguously in past and future versions, and the author should do this. But FutureWarning is also raised in the __bool__ method, which will change its meaning in future (a similar change was made for the midnight time object).
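That __bool__ case looks roughly like this (a simplified sketch of the actual xml.etree.ElementTree code):

```python
import warnings

class Element:
    # Simplified from xml.etree.ElementTree: the truth value of an
    # element will change in a future version, so testing it warns.
    def __init__(self):
        self._children = []

    def __bool__(self):
        warnings.warn(
            "The behavior of this method will change in future versions. "
            "Use specific 'len(elem)' or 'elem is not None' test instead.",
            FutureWarning, stacklevel=2)
        return len(self._children) != 0
```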
It seems to me that most of the issues with FutureWarning on GitHub [1] are related to NumPy and pandas, which use FutureWarning for its original nominal purpose: warning about using programming interfaces that will change the behavior in future. This doesn't have any relation to end users unless the end user is the author of the written code.

[1] https://github.com/search?q=FutureWarning&type=Issues

From solipsis at pitrou.net  Sun Nov 19 05:59:04 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 19 Nov 2017 11:59:04 +0100
Subject: [Python-Dev] Make the stable API-ABI usable
References: Message-ID: <20171119115904.59de33f4@fsol>

On Sun, 19 Nov 2017 00:18:28 +0100 Victor Stinner wrote:
> On 18 Nov 2017 10:44, "Serhiy Storchaka" wrote:
>
> The simplest way to do this:
>
> #define PyTuple_GET_ITEM PyTuple_GetItem
>
> This will not add new names to the ABI. Such defines can be added in a separate
> header file included for compatibility.
>
>
> It is exactly what I am proposing :-)

But those do not have the same semantics. PyTuple_GetItem() checks its arguments and raises an error if you pass it something other than a tuple, or if the index is out of bounds. PyTuple_GET_ITEM(), however, will crash if you do so.

Regards

Antoine.

From storchaka at gmail.com  Sun Nov 19 06:50:07 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 19 Nov 2017 13:50:07 +0200
Subject: [Python-Dev] Make the stable API-ABI usable
In-Reply-To: <20171119115904.59de33f4@fsol> References: <20171119115904.59de33f4@fsol> Message-ID:

19.11.17 12:59, Antoine Pitrou wrote:
> But those do not have the same semantics. PyTuple_GetItem() checks its
> arguments and raises an error if you pass it something other than a
> tuple, or if the index is out of bounds. PyTuple_GET_ITEM(), however,
> will crash if you do so.

There are no guarantees that PyTuple_GET_ITEM() will crash. In all cases when PyTuple_GET_ITEM() is used for getting the reference to a tuple's item it can be replaced with PyTuple_GetItem(). But if PyTuple_GET_ITEM() is used for getting a reference to a C array of items it can't be replaced with PyTuple_GetItem(). And actually there is no replacement for this case in the limited API.

    PyObject **items = &PyTuple_GET_ITEM(tuple, 0);

From njs at pobox.com  Sun Nov 19 08:40:30 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 19 Nov 2017 05:40:30 -0800
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To: References: Message-ID:

On Sun, Nov 19, 2017 at 2:26 AM, Serhiy Storchaka wrote:
> It seems to me that most of the issues with FutureWarning on GitHub [1] are
> related to NumPy and pandas, which use FutureWarning for its original nominal
> purpose: warning about using programming interfaces that will change the
> behavior in future. This doesn't have any relation to end users unless the
> end user is the author of the written code.
>
> [1] https://github.com/search?q=FutureWarning&type=Issues

Eh, numpy does use FutureWarning for changes where the same code will transition from doing one thing to doing something else without passing through a state where it raises an error.
But that decision was based on FutureWarning being shown to users by default, not because it matches the nominal purpose :-). IIRC I proposed this policy for NumPy in the first place, and I still don't even know if it matches the original intent because the docs are so vague. "Will change behavior in the future" describes every case where you might consider using FutureWarning *or* DeprecationWarning, right?

We have been using DeprecationWarning for changes where code will transition from working -> raising an error, and that *is* based on the Official Recommendation to hide those by default whenever possible. We've been doing this for a few years now, and I'd say our experience so far has been... poor. I'm trying to figure out how to say this politely. Basically it doesn't work at all. What happens in practice is that we issue a DeprecationWarning for a year, mostly no-one notices, then we make the change in a 1.x.0 release, everyone's code breaks, we roll it back in 1.x.1, and then possibly repeat several times in 1.(x+1).0 and 1.(x+2).0 until enough people have updated their code that the screams die down. I'm pretty sure we'll be changing our policy at some point, possibly to always use FutureWarning for everything.

-n

--
Nathaniel J. Smith -- https://vorpus.org

From mark at hotpy.org  Sun Nov 19 13:56:08 2017
From: mark at hotpy.org (Mark Shannon)
Date: Sun, 19 Nov 2017 18:56:08 +0000
Subject: [Python-Dev] Comments on PEP 563 (Postponed Evaluation of Annotations)
Message-ID: <8b5cc595-adf0-c75e-6517-26c01445f573@hotpy.org>

Hi,

Overall I am strongly in favour of this PEP. It pretty much cures all the ongoing pain of using PEP 3107 annotations for type hints.

There is one thing I don't like however, and that is treating strings as if the quotes weren't there. While this seems like a superficial simplification to make transition easier, it introduces inconsistency and will ultimately make both implementing and using type hints harder.

Having the treatment of strings depend on their depth in the AST seems confusing and unnecessary:

    "List[int]" becomes 'List[int]'        # quotes removed

but

    List["int"] becomes 'List["int"]'      # quotes retained

Also,

    T = "My unparseable annotation"
    def f()->T: pass

would remain legal, but

    def f()->"My unparseable annotation"

would become illegal. The change in behaviour between the above two code snippets is already confusing enough without making one of them a SyntaxError.

Using annotations for purposes other than type hinting is legal and has been for quite a while. Also, PEP 484 type hints are not the only type system in the Python ecosystem. Cython has a long history of using static type hints. For tools other than MyPy, the inconsistent quoting is onerous and will require double-quoting to prevent a parse error. For example

    def foo()->"unsigned int": ...

will become illegal and require the cumbersome

    def foo()->'"unsigned int"': ...

Cheers,
Mark.

From mark at hotpy.org  Sun Nov 19 15:06:33 2017
From: mark at hotpy.org (Mark Shannon)
Date: Sun, 19 Nov 2017 20:06:33 +0000
Subject: [Python-Dev] Comments on PEP 560 (Core support for typing module and generic types)
Message-ID: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org>

Hi,

I am very concerned by this PEP.

By far and away the largest change in PEP 560 is the change to the behaviour of object.__getitem__. This is not mentioned in the PEP at all, but is explicit in the draft implementation.
The implementation could implement `type.__getitem__` instead of changing `object.__getitem__`, but that is still a major change to the language.

In fact, the addition of `__mro_entries__` makes `__class_getitem__` unnecessary.

The addition of `__mro_entries__` allows instances of classes that do not subclass `type` to act as classes in some circumstances. That means that any class can implement `__getitem__` to provide a generic type. For example, here is a minimal working implementation of `List`:

    class Generic:
        def __init__(self, concrete):
            self.concrete = concrete
        def __getitem__(self, index):
            return self.concrete
        def __mro_entries__(self, bases):
            return (self.concrete,)

    List = Generic(list)

    class MyList(List): pass          # Works perfectly
    class MyIntList(List[int]): pass  # Also works.

The name `__mro_entries__` suggests that this method is solely related to method resolution order, but it is really about providing an instance of `type` where one is expected. This is analogous to `__int__`, `__float__` and `__index__`, which provide an int, float and int respectively. This rather suggests (to me at least) the name `__type__` instead of `__mro_entries__`.

Also, why return a tuple of classes, not just a single class? The PEP should include the justification for this decision.

Should `isinstance` and `issubclass` call `__mro_entries__` before raising an error if the second argument is not a class? In other words, if `List` implements `__mro_entries__` to return `list`, then should `issubclass(x, List)` act like `issubclass(x, list)`? (IMO, it shouldn't.) The reasoning behind this decision should be made explicit in the PEP.

Cheers,
Mark.

From mark at hotpy.org  Sun Nov 19 15:24:00 2017
From: mark at hotpy.org (Mark Shannon)
Date: Sun, 19 Nov 2017 20:24:00 +0000
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
Message-ID: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org>

Hi,

Just one comment. Could the new behaviour of attribute lookup on a module be spelled out more explicitly please?

I'm guessing it is now something like:

`module.__getattribute__` is now equivalent to:

    def __getattribute__(mod, name):
        try:
            return object.__getattribute__(mod, name)
        except AttributeError:
            try:
                getter = mod.__dict__["__getattr__"]
            except KeyError:
                raise AttributeError(f"module has no attribute '{name}'")
            return getter(name)

Cheers,
Mark.

From storchaka at gmail.com  Sun Nov 19 15:41:52 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 19 Nov 2017 22:41:52 +0200
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> Message-ID:

19.11.17 22:24, Mark Shannon wrote:
> Just one comment. Could the new behaviour of attribute lookup on a
> module be spelled out more explicitly please?
>
> I'm guessing it is now something like:
>
> `module.__getattribute__` is now equivalent to:
>
>     def __getattribute__(mod, name):
>         try:
>             return object.__getattribute__(mod, name)
>         except AttributeError:
>             try:
>                 getter = mod.__dict__["__getattr__"]
>             except KeyError:
>                 raise AttributeError(f"module has no attribute '{name}'")
>             return getter(name)

I think it is better to describe it in terms of __getattr__.
    def ModuleType.__getattr__(mod, name):
        try:
            getter = mod.__dict__["__getattr__"]
        except KeyError:
            raise AttributeError(f"module has no attribute '{name}'")
        return getter(name)

The implementation of ModuleType.__getattribute__ will not be changed (it is inherited from the object type).

From mark at hotpy.org  Sun Nov 19 15:48:58 2017
From: mark at hotpy.org (Mark Shannon)
Date: Sun, 19 Nov 2017 20:48:58 +0000
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> Message-ID: <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org>

On 19/11/17 20:41, Serhiy Storchaka wrote:
> 19.11.17 22:24, Mark Shannon wrote:
>> Just one comment. Could the new behaviour of attribute lookup on a
>> module be spelled out more explicitly please?
>>
>> I'm guessing it is now something like:
>>
>> `module.__getattribute__` is now equivalent to:
>>
>>     def __getattribute__(mod, name):
>>         try:
>>             return object.__getattribute__(mod, name)
>>         except AttributeError:
>>             try:
>>                 getter = mod.__dict__["__getattr__"]
>>             except KeyError:
>>                 raise AttributeError(f"module has no attribute '{name}'")
>>             return getter(name)
>
> I think it is better to describe it in terms of __getattr__.
>
>     def ModuleType.__getattr__(mod, name):
>         try:
>             getter = mod.__dict__["__getattr__"]
>         except KeyError:
>             raise AttributeError(f"module has no attribute '{name}'")
>         return getter(name)
>
> The implementation of ModuleType.__getattribute__ will not be changed
> (it is inherited from the object type).

Not quite, ModuleType overrides object.__getattribute__ in order to provide a better error message. So with your suggestion, the change would be to *not* override object.__getattribute__ and provide the above ModuleType.__getattr__

Cheers,
Mark.

From k7hoven at gmail.com  Sun Nov 19 17:03:54 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Mon, 20 Nov 2017 00:03:54 +0200
Subject: [Python-Dev] __future__ imports and breaking code (was: PEP 563: Postponed Evaluation of Annotations)
Message-ID:

Previously, I expressed some concerns about PEP 563 regarding what should happen when a string is used as an annotation. Since my point here is more general, I'm starting yet another thread.

For a lot of existing type-annotated code, adding "from __future__ import annotations" [1] *doesn't break anything*. But that doesn't seem right. The whole point of __future__ imports is to break things. Maybe the __future__ import will not give 100% equivalent functionality to what will be in Python 4 by default, but anyway, it's Python 4 that should break as little as possible. This leaves the breaking business to the future import, if necessary.

If someone cares enough to add the future import that avoids needing string annotations for forward references, it shouldn't be such a big deal to get a warning if there's a string annotation left. But the person upgrading to Python 4 (or whatever they might be upgrading) will have a lot less motivation to figure out what went wrong.

Then again, code that works in both Python 3 and 4 could still have the future import. But that would defeat the purpose of Python 4 as a clean and high-performance dynamic language.

--Koos

[1] As defined in the PEP 563 draft: https://mail.python.org/pipermail/python-dev/2017-November/150062.html

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +
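A minimal sketch of what the future import in question changes, under the PEP 563 draft semantics:

```python
from __future__ import annotations  # PEP 563 opt-in

class Node:
    # A forward reference no longer needs quotes: the annotation is
    # not evaluated at definition time.
    def next_node(self) -> Node:
        ...

# The annotation is preserved as a string until something evaluates it.
print(Node.next_node.__annotations__)   # {'return': 'Node'}
```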
From levkivskyi at gmail.com  Sun Nov 19 17:36:51 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Sun, 19 Nov 2017 23:36:51 +0100
Subject: [Python-Dev] Comments on PEP 560 (Core support for typing module and generic types)
In-Reply-To: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org> References: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org> Message-ID:

On 19 November 2017 at 21:06, Mark Shannon wrote:

> By far and away the largest change in PEP 560 is the change to the
> behaviour of object.__getitem__. This is not mentioned in the PEP at all,
> but is explicit in the draft implementation.
> The implementation could implement `type.__getitem__` instead of changing
> `object.__getitem__`, but that is still a major change to the language.

Except that there is no such thing as object.__getitem__. Probably you mean PyObject_GetItem (which is just what is done by the BINARY_SUBSCR opcode). In fact, I initially implemented type.__getitem__, but I didn't like it for various reasons.

I don't think that any of the above are changes to the language. These are rather implementation details. The only unusual thing is that while dunders are searched on the class, __class_getitem__ is searched on the object (the class object in this case) itself. But this is clearly explained in the PEP.

> In fact, the addition of `__mro_entries__` makes `__class_getitem__`
> unnecessary.

But how would you implement this:

    class C(Generic[T]):
        ...

    C[int]  # This should work

> The name `__mro_entries__` suggests that this method is solely related
> to method resolution order, but it is really about providing an instance of
> `type` where one is expected. This is analogous to `__int__`, `__float__`
> and `__index__`, which provide an int, float and int respectively.
> This rather suggests (to me at least) the name `__type__` instead of
> `__mro_entries__`

This was already discussed for months, and in particular the name __type__ was not liked by ... you
https://github.com/python/typing/issues/432#issuecomment-304070379

So I would propose to stop bikeshedding this (also Guido seems to like the currently proposed name).

> Should `isinstance` and `issubclass` call `__mro_entries__` before raising
> an error if the second argument is not a class?
> In other words, if `List` implements `__mro_entries__` to return `list`,
> then should `issubclass(x, List)` act like `issubclass(x, list)`?
> (IMO, it shouldn't.) The reasoning behind this decision should be made
> explicit in the PEP.

I think this is orthogonal to the PEP. There are many situations where a class is expected, and IMO it is clear that all cases not mentioned in the PEP stay unchanged.

--
Ivan

From k7hoven at gmail.com  Sun Nov 19 17:38:56 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Mon, 20 Nov 2017 00:38:56 +0200
Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID:

On Mon, Nov 13, 2017 at 11:59 PM, Brett Cannon wrote:
[..]
> On Sun, Nov 12, 2017, 10:22 Koos Zevenhoven wrote:
>
>> There are two things I don't understand here:
>>
>> * What does it mean to preserve the string verbatim? No matter how I read
>> it, I can't tell if it's with quotes or without.
>>
>> Maybe I'm missing some context.
>
> I believe the string passes through unchanged (i.e. no quotes). Think of
> the PEP as simply turning all non-string annotations into string ones.

Ok, maybe that was just wishful thinking on my part ;-). More info in the other threads, for example:

https://mail.python.org/pipermail/python-dev/2017-November/150642.html
https://mail.python.org/pipermail/python-dev/2017-November/150637.html

-- Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +

From steve at pearwood.info  Sun Nov 19 19:57:05 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 20 Nov 2017 11:57:05 +1100
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> Message-ID: <20171120005705.GZ19802@ando.pearwood.info>

On Sun, Nov 19, 2017 at 08:24:00PM +0000, Mark Shannon wrote:
> Hi,
>
> Just one comment. Could the new behaviour of attribute lookup on a
> module be spelled out more explicitly please?
>
> I'm guessing it is now something like:
>
> `module.__getattribute__` is now equivalent to:
>
>     def __getattribute__(mod, name):
>         try:
>             return object.__getattribute__(mod, name)
>         except AttributeError:
>             try:
>                 getter = mod.__dict__["__getattr__"]

A minor point: this should(?) be written in terms of the public interface for accessing namespaces, namely:

    getter = vars(mod)["__getattr__"]

--
Steve

From guido at python.org  Sun Nov 19 20:02:50 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 19 Nov 2017 17:02:50 -0800
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org> References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org> Message-ID:

Serhiy's definition sounds recursive (defining __getattr__ to define the behavior of __getattr__) but Mark's suggestion makes his intention unclear since the error message is still the same. Also the word "now" is confusing (does it mean "currently, before the PEP" or "once this PEP is accepted"?)

It would be clearer to first state that Module.__getattribute__ is currently (before the PEP) essentially defined as

    class Module(object):
        def __getattribute__(self, name):
            try:
                return object.__getattribute__(self, name)
            except AttributeError:
                if hasattr(self, '__dict__'):
                    mod_name = self.__dict__.get('__name__')
                    if isinstance(mod_name, str):
                        raise AttributeError("module '%s' has no attribute '%s'" % (mod_name, name))
                raise AttributeError("module has no attribute '%s'" % name)

The PEP changes the contents of the except clause to:

                if hasattr(self, '__dict__'):
                    if '__getattr__' in self.__dict__:
                        getter = self.__dict__['__getattr__']
                        if not callable(getter):
                            raise TypeError("module __getattr__ must be callable")
                        return getter(name)
                    # Unchanged from here on
                    mod_name = self.__dict__.get('__name__')
                    if isinstance(mod_name, str):
                        raise AttributeError("module '%s' has no attribute '%s'" % (mod_name, name))
                raise AttributeError("module has no attribute '%s'" % name)

(However exception chaining makes the equivalency still not perfect. And we ignore threading. But how far do we need to go when specifying "equivalent code" to what every implementation should implement natively?)
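For completeness, a sketch of how a module author would actually use the hook the PEP proposes (the deprecated name here is hypothetical):

```python
# lib.py -- a sketch of a module using the PEP 562 hook
import warnings

def new_function():
    return 42

def __getattr__(name):
    # Called only when normal module attribute lookup fails.
    if name == "old_function":  # hypothetical deprecated alias
        warnings.warn("old_function() is deprecated, use new_function()",
                      DeprecationWarning, stacklevel=2)
        return new_function
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```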
On Sun, Nov 19, 2017 at 12:48 PM, Mark Shannon wrote:

> On 19/11/17 20:41, Serhiy Storchaka wrote:
>
>> 19.11.17 22:24, Mark Shannon wrote:
>>
>>> Just one comment. Could the new behaviour of attribute lookup on a
>>> module be spelled out more explicitly please?
>>>
>>> I'm guessing it is now something like:
>>>
>>> `module.__getattribute__` is now equivalent to:
>>>
>>>     def __getattribute__(mod, name):
>>>         try:
>>>             return object.__getattribute__(mod, name)
>>>         except AttributeError:
>>>             try:
>>>                 getter = mod.__dict__["__getattr__"]
>>>             except KeyError:
>>>                 raise AttributeError(f"module has no attribute '{name}'")
>>>             return getter(name)
>>
>> I think it is better to describe it in terms of __getattr__.
>>
>>     def ModuleType.__getattr__(mod, name):
>>         try:
>>             getter = mod.__dict__["__getattr__"]
>>         except KeyError:
>>             raise AttributeError(f"module has no attribute '{name}'")
>>         return getter(name)
>>
>> The implementation of ModuleType.__getattribute__ will not be changed (it
>> is inherited from the object type).
>
> Not quite, ModuleType overrides object.__getattribute__ in order to
> provide a better error message. So with your suggestion, the change would
> be to *not* override object.__getattribute__ and provide the above
> ModuleType.__getattr__
>
> Cheers,
> Mark.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)

From guido at python.org  Sun Nov 19 20:34:35 2017
From: guido at python.org (Guido van Rossum)
Date: Sun, 19 Nov 2017 17:34:35 -0800
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: <20171120005705.GZ19802@ando.pearwood.info> References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <20171120005705.GZ19802@ando.pearwood.info> Message-ID:

On Sun, Nov 19, 2017 at 4:57 PM, Steven D'Aprano wrote:

> On Sun, Nov 19, 2017 at 08:24:00PM +0000, Mark Shannon wrote:
> > Just one comment. Could the new behaviour of attribute lookup on a
> > module be spelled out more explicitly please?
> >
> > I'm guessing it is now something like:
> >
> > `module.__getattribute__` is now equivalent to:
> >
> >     def __getattribute__(mod, name):
> >         try:
> >             return object.__getattribute__(mod, name)
> >         except AttributeError:
> >             try:
> >                 getter = mod.__dict__["__getattr__"]
>
> A minor point: this should(?) be written in terms of the public
> interface for accessing namespaces, namely:
>
>     getter = vars(mod)["__getattr__"]

Should it? The PEP is not proposing anything for other namespaces. What difference do you envision this way of specifying it would make?

--
--Guido van Rossum (python.org/~guido)

From steve at pearwood.info  Sun Nov 19 21:34:09 2017
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 20 Nov 2017 13:34:09 +1100
Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__)
In-Reply-To: References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <20171120005705.GZ19802@ando.pearwood.info> Message-ID: <20171120023409.GB19802@ando.pearwood.info>

On Sun, Nov 19, 2017 at 05:34:35PM -0800, Guido van Rossum wrote:
> On Sun, Nov 19, 2017 at 4:57 PM, Steven D'Aprano wrote:
>
> > A minor point: this should(?)
be written in terms of the public > > interface for accessing namespaces, namely: > > > > getter = vars(mod)["__getattr__"] > > Should it? The PEP is not proposing anything for other namespaces. What > difference do you envision this way of specifying it would make? I don't know if it should -- that's why I included the question mark. But my idea is that __dict__ is the implementation and vars() is the interface to __dir__, and we should prefer using the interface rather than the implementation unless there's a good reason not to. (I'm not talking here about changing the actual name lookup code to go through vars(). I'm just talking about how we write the equivalent recipe.) It's not a big deal either way, __dict__ is already heavily used and vars() poorly known. Call it a matter of taste, if you like, but in my opinion the fewer times we directly reference dunders, the better. -- Steve From guido at python.org Sun Nov 19 23:45:31 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 19 Nov 2017 20:45:31 -0800 Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__) In-Reply-To: <20171120023409.GB19802@ando.pearwood.info> References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <20171120005705.GZ19802@ando.pearwood.info> <20171120023409.GB19802@ando.pearwood.info> Message-ID: Given that we're also allowing customization of __dir__ I wouldn't want to link this to __dir__. But maybe you meant to say that vars() is the public interface for __dict__. Even if it were, in the case of specifying this particular customization for this PEP, I strongly prefer to write it in terms of __dict__. On Sun, Nov 19, 2017 at 6:34 PM, Steven D'Aprano wrote: > On Sun, Nov 19, 2017 at 05:34:35PM -0800, Guido van Rossum wrote: > > On Sun, Nov 19, 2017 at 4:57 PM, Steven D'Aprano > > wrote: > > > > A minor point: this should(?) be written in terms of the public > > > interface for accessing namespaces, namely: > > > > > > getter = vars(mod)["__getattr__"] > > > > Should it? The PEP is not proposing anything for other namespaces. What > > difference do you envision this way of specifying it would make? > > I don't know if it should -- that's why I included the question mark. > > But my idea is that __dict__ is the implementation and vars() is the > interface to __dir__, and we should prefer using the interface rather > than the implementation unless there's a good reason not to. > > (I'm not talking here about changing the actual name lookup code to go > through vars(). I'm just talking about how we write the equivalent > recipe.) > > It's not a big deal either way, __dict__ is already heavily used and > vars() poorly known. Call it a matter of taste, if you like, but in my > opinion the fewer times we directly reference dunders, the better. > > > -- > Steve > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Nov 20 01:54:22 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 20 Nov 2017 16:54:22 +1000 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On 19 November 2017 at 18:52, Victor Stinner wrote: > Maybe we can find a compromise: revert the change on memory allocators.
They > are too special to require to call PyRuntime_Init(). > > Currently, you cannot call PyMem_SetAllocators() before PyRuntime_Init(). At least the raw allocators, anyway - that way, the developer facing documentation/comments can just say that the raw allocators can't have any prerequisites that aren't shared by regular malloc/calloc/realloc/free calls. If that's enough to get Py_DecodeLocale working again prior to _PyRuntime_Init(), then I'd suggest officially adding that to the "must work prior to Py_Initialize" list, otherwise we can re-examine it based on whatever's still broken after reverting the raw allocator changes. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From storchaka at gmail.com Mon Nov 20 03:33:03 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 20 Nov 2017 10:33:03 +0200 Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__) In-Reply-To: References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org> Message-ID: 20.11.17 03:02, Guido van Rossum пише: > Serhiy's definition sounds recursive (defining __getattr__ to define the > behavior of __getattr__) but Mark's suggestion makes his intention > unclear since the error message is still the same. It is recursive only when the '__dict__' attribute is not defined. I assumed that it is defined for simplicity. And if it isn't defined hasattr(self, '__dict__') will cause a recursion too. In any case the real C code handles this more carefully and effectively. From mark at hotpy.org Mon Nov 20 04:22:16 2017 From: mark at hotpy.org (Mark Shannon) Date: Mon, 20 Nov 2017 09:22:16 +0000 Subject: [Python-Dev] Comments on PEP 560 (Core support for typing module and generic types) In-Reply-To: References: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org> Message-ID: <56cbf831-9e75-9a9c-97c9-40ec77d4f506@hotpy.org> On 19/11/17 22:36, Ivan Levkivskyi wrote: > On 19 November 2017 at 21:06, Mark Shannon > wrote: > > By far and away the largest change in PEP 560 is the change to the > behaviour of object.__getitem__. This is not mentioned in the PEP at > all, but is explicit in the draft implementation. > The implementation could implement `type.__getitem__` instead of > changing `object.__getitem__`, but that is still a major change to > the language. > > > Except that there is no such thing as object.__getitem__. Probably you > mean PyObject_GetItem (which is just what is done by BINARY_SUBSCR opcode). Yes, I should have taken more time to look at the code. I thought you were implementing `object.__getitem__`. In general, Python implements its operators as a simple redirection to a special method, with the exception of binary operators which are necessarily more complex. f(...) -> type(f).__call__(f, ...) o.a -> type(o).__getattribute__(o, "a") o[i] -> type(o).__getitem__(o, i) Which is why I don't like the additional complexity you are adding to the dispatching. If we really must have `__class_getitem__` (and I don't think that we do) then implementing `type.__getitem__` is a much less intrusive way to do it. > In fact, I initially implemented type.__getitem__, but I didn't like it > for various reasons. Could you elaborate? > > I don't think that any of the above are changes to the language. These > are rather implementation details. The only unusual thing is that while > dunders are > searched on class, __class_getitem__ is searched on the object (class > object in this case) itself. But this is clearly explained in the PEP.
> > In fact, the addition of `__mro_entries__` makes `__class_getitem__` > unnecessary. > > > But how would you implement this: > > class C(Generic[T]): > ... > > C[int] # This should work The issue of type-hinting container classes is a tricky one. The definition is defining both the implementation class and the interface type. We want the implementation and interface to be distinct. However, we want to avoid needless repetition. In the example you gave, `C` is a class definition that is intended to be used as a generic container. In my mind the cleanest way to do this is with a class decorator. Something like: @Generic[T] class C: ... or @implements(Generic[T]) class C: ... C would then be a type not a class, as the decorator is free to return a non-class object. It allows the implementation and interface to be distinct: @implements(Sequence[T]) class MySeq(list): ... @implements(Set[Node]) class SmallNodeSet(list): ... # For small sets a list is more efficient than a set. but avoid repetition for the more common case: class IntStack(List[int]): ... Given the power and flexibility of the built-in data structures, defining custom containers is relatively rare. I'm not saying that it should not be considered, but a few minor hurdles are acceptable to keep the rest of the language (including more common uses of type-hints) clean. > > The name `__mro_entries__` suggests that this method is solely > related method resolution order, but it is really about providing an > instance of `type` where one is expected. This is analogous to > `__int__`, `__float__` and `__index__` which provide an int, float > and int respectively. > This rather suggests (to me at least) the name `__type__` instead of > `__mro_entries__` > > > This was already discussed during months, and in particular the name > __type__ was not liked by ... you Ha, you have a better memory than I :) I won't make any more naming suggestions. What I should have said is that the name should reflect what it does, not the initial reason for including it. > https://github.com/python/typing/issues/432#issuecomment-304070379 > So I would propose to stop bikesheding this (also Guido seems to like > the currently proposed name). > > Should `isinstance` and `issubclass` call `__mro_entries__` before > raising an error if the second argument is not a class? > In other words, if `List` implements `__mro_entries__` to return > `list` then should `issubclass(x, List)` act like `issubclass(x, list)`? > (IMO, it shouldn't) The reasoning behind this decision should be > made explicit in the PEP. > > > I think this is orthogonal to the PEP. There are many situations where a > class is expected, > and IMO it is clear that all that are not mentioned in the PEP stay > unchanged. Indeed, but you do mention issubclass in the PEP. I think a few extra words of explanation would be helpful. Cheers, Mark. From hrvoje.niksic at avl.com Mon Nov 20 08:31:24 2017 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Mon, 20 Nov 2017 14:31:24 +0100 Subject: [Python-Dev] Make the stable API-ABI usable In-Reply-To: References: <20171119115904.59de33f4@fsol> Message-ID: <2a2de460-a6d6-5939-e80e-77cfcfeae807@avl.com> On 11/19/2017 12:50 PM, Serhiy Storchaka wrote: > But if PyTuple_GET_ITEM() is used for getting a reference to a C array > of items it can't be replaced with PyTuple_GetItem(). And actually there > is no replacement for this case in the limited API. 
> > PyObject **items = &PyTuple_GET_ITEM(tuple, 0); That use case might be better covered with a new function, e.g. PyTuple_GetStorage, which the PyObject ** pointing to the first element of the internal array. This function would serve two purposes: * provide the performance benefits of PyTuple_GET_ITEM in tight loops, but without the drawback of exposing the PyTuple layout to the code that invokes the macro; * allow invocation of APIs that expect a pointer to contiguous storage, such as STL algorithms that expect random access iterators. Something similar is already available as PySequence_Fast_ITEMS, except that one is again a macro, and is tied to PySequence_FAST API, which may not be appropriate for the kind of performance-critical code where PyTuple_GET_ITEM tends to be used. (That kind of code is designed to deal specifically with lists or tuples and doesn't benefit from implicit conversion of arbitrary sequences to a temporary list; that conversion would only serve to mask bugs.) From victor.stinner at gmail.com Mon Nov 20 09:21:41 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 20 Nov 2017 15:21:41 +0100 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: To not lost track of the issue, I created this issue on the bpo: https://bugs.python.org/issue32086 Victor 2017-11-20 7:54 GMT+01:00 Nick Coghlan : > On 19 November 2017 at 18:52, Victor Stinner wrote: >> Maybe we can find a compromise: revert the change on memory allocators. They >> are too special to require to call PyRuntime_Init(). >> >> Currently, you cannot call PyMem_SetAllocators() before PyRuntime_Init(). > > At least the raw allocators, anyway - that way, the developer facing > documentation/comments can just say that the raw allocators can't have > any prerequisites that aren't shared by regular > malloc/calloc/realloc/free calls. > > If that's enough to get Py_DecodeLocale working again prior to > _PyRuntime_Init(), then I'd suggest officially adding that to the > "must work prior to Py_Initialize" list, otherwise we can re-examine > it based on whatever's still broken after reverting the raw allocator > changes. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Mon Nov 20 10:01:28 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 20 Nov 2017 16:01:28 +0100 Subject: [Python-Dev] Show DeprecationWarning in debug mode? In-Reply-To: References: Message-ID: 2017-11-18 18:13 GMT+01:00 Brett Cannon : > +1 from me as well. Ok, I created https://bugs.python.org/issue32088 and https://github.com/python/cpython/pull/4474 to implement the proposed change. Victor From ericsnowcurrently at gmail.com Mon Nov 20 10:31:32 2017 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 20 Nov 2017 08:31:32 -0700 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On Nov 18, 2017 19:20, "Nick Coghlan" wrote: OK, in that case I think the answer to Victor's question is: 1. Breaking calling Py_DecodeLocale() before calling Py_Initialize() is a compatibility break with the API implied by our own usage examples, and we'll need to revert the breakage for 3.7, +1 The break was certainly unintentional. :/ Fortunately, Py_DecodeLocale() should be the only "Process-wide parameter" needing repair. 
I suppose, PyMem_RawMalloc() and PyMem_RawFree() *could* be considered too, but my understanding is that they aren't really intended for direct use (especially pre-init). and ensure at least one release's worth of DeprecationWarning before requiring either the use of an alternative API (where the caller controls the memory management), or else a new lower level pre-initialization API (i.e. making `PyRuntime_Initialize` a public API) There shouldn't be a need to deprecate anything, right? We just need to restore the pre-init behavior of Py_DecodeLocale. 2. We should provide a consolidated list of these functions in the C API initialization docs +1 PyMem_Raw*() do not belong in that group, right? Again, my understanding is that they aren't intended for direct third-party use (are they even a part of the C-API?), and particularly pre-init. That Py_DecodeLocale() can use PyMem_RawMalloc() pre-init is an implementation detail. 3. We should add more test cases to _testembed.c that ensure they all work correctly prior to Py_Initialize (some of them are already tested there, but definitely not all of them) +1 -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Nov 20 10:43:24 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 20 Nov 2017 16:43:24 +0100 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: 2017-11-20 16:31 GMT+01:00 Eric Snow : > That Py_DecodeLocale() can use PyMem_RawMalloc() pre-init is an implementation detail. Py_DecodeLocale() uses PyMem_RawMalloc(), and so its result must be freed by PyMem_RawFree(). It's part of the documentation. I'm not sure that I understood correctly. Do you agree to move "PyMem" globals back to Objects/obmalloc.c? (to allow to call PyMem_RawMalloc() before Py_Initialize()) Victor From lukasz at langa.pl Mon Nov 20 12:49:20 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Mon, 20 Nov 2017 09:49:20 -0800 Subject: [Python-Dev] Show DeprecationWarning in debug mode? In-Reply-To: References: Message-ID: <08EC5552-5E02-414E-AFC6-CAA80B7DD459@langa.pl> Merged. Thanks! - Ł > On Nov 20, 2017, at 7:01 AM, Victor Stinner wrote: > > 2017-11-18 18:13 GMT+01:00 Brett Cannon : >> +1 from me as well. > > Ok, I created https://bugs.python.org/issue32088 and > https://github.com/python/cpython/pull/4474 to implement the proposed > change. > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From lukasz at langa.pl Mon Nov 20 12:58:02 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Mon, 20 Nov 2017 09:58:02 -0800 Subject: [Python-Dev] Comments on PEP 563 (Postponed Evaluation of Annotations) In-Reply-To: <8b5cc595-adf0-c75e-6517-26c01445f573@hotpy.org> References: <8b5cc595-adf0-c75e-6517-26c01445f573@hotpy.org> Message-ID: <660FD1E1-1098-4EDD-82C4-18B90E901709@langa.pl> I agree with you. The special handling of outermost strings vs. strings embedded inside annotations bugged me a lot.
Now you convinced me that this functionality should be moved to `get_type_hints()` and the __future__ import shouldn't try to special-case this one instance, while leaving others as is. I will be amending the PEP accordingly. - Ł > On Nov 19, 2017, at 10:56 AM, Mark Shannon wrote: > > Hi, > > Overall I am strongly in favour of this PEP. It pretty much cures all the ongoing pain of using PEP 3107 annotations for type hints. > > There is one thing I don't like however, and that is treating strings as if the quotes weren't there. > While this seems like a superficial simplification to make transition easier, it introduces inconsistency and will ultimately make both implementing and using type hints harder. > > Having the treatment of strings depend on their depth in the AST seems confusing and unnecessary: > "List[int]" becomes 'List[int]' # quotes removed > but > List["int"] becomes 'List["int"]' # quotes retained > > Also, > > T = "My unparseable annotation" > def f()->T: pass > > would remain legal, but > > def f()->"My unparseable annotation" > > would become illegal. > > The change in behaviour between the above two code snippets is already confusing enough without making one of them a SyntaxError. > > Using annotations for purposes other than type hinting is legal and has been for quite a while. > Also, PEP 484 type-hints are not the only type system in the Python ecosystem. Cython has a long history of using static type hints. > > For tools other than MyPy, the inconsistent quoting is onerous and will require double-quoting to prevent a parse error. > For example > > def foo()->"unsigned int": ... > > will become illegal and require the cumbersome > > def foo()->'"unsigned int"': ... > > Cheers, > Mark. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From lukasz at langa.pl Mon Nov 20 12:59:25 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Mon, 20 Nov 2017 09:59:25 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations In-Reply-To: References: <6EC569D7-360E-46F4-9A98-EAF8BE87A9F8@langa.pl> <20171102154516.GG9068@ando.pearwood.info> <9D794108-BA36-4FC3-B807-D05F62333F13@langa.pl> <2663678E-5474-4B74-92BA-ED1C6CD31D24@langa.pl> <4BD9E398-9CFA-4A7C-AC6C-56EFEECFFC9A@python.org> Message-ID: See my response to Mark's e-mail. I agree that special-casing outermost strings is not generic enough of a solution and will be removing this special handling from the PEP and the implementation. - Ł > On Nov 19, 2017, at 2:38 PM, Koos Zevenhoven wrote: > > On Mon, Nov 13, 2017 at 11:59 PM, Brett Cannon > wrote: > [..] > On Sun, Nov 12, 2017, 10:22 Koos Zevenhoven, > wrote: > > There's two things I don't understand here: > > * What does it mean to preserve the string verbatim? No matter how I read it, I can't tell if it's with quotes or without. > > Maybe I'm missing some context. > > I believe the string passes through unchanged (i.e. no quotes). Think of the PEP as simply turning all non-string annotations into string ones. > > > Ok, maybe that was just wishful thinking on my part ;-).
> > More info in the other threads, for example: > > https://mail.python.org/pipermail/python-dev/2017-November/150642.html > https://mail.python.org/pipermail/python-dev/2017-November/150637.html > > -- Koos > > > -- > + Koos Zevenhoven + http://twitter.com/k7hoven + > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From victor.stinner at gmail.com Mon Nov 20 13:05:13 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 20 Nov 2017 19:05:13 +0100 Subject: [Python-Dev] Show DeprecationWarning in debug mode? In-Reply-To: <08EC5552-5E02-414E-AFC6-CAA80B7DD459@langa.pl> References: <08EC5552-5E02-414E-AFC6-CAA80B7DD459@langa.pl> Message-ID: Oh, that was quick. Thanks! I counted Serhiy, Barry, Nick, you and me in favor of the change, and nobody against the code. So well, it's ok to merge it :-) Victor 2017-11-20 18:49 GMT+01:00 Lukasz Langa : > Merged. Thanks! ? ? ? > > - ? > >> On Nov 20, 2017, at 7:01 AM, Victor Stinner wrote: >> >> 2017-11-18 18:13 GMT+01:00 Brett Cannon : >>> +1 from me as well. >> >> Ok, I created https://bugs.python.org/issue32088 and >> https://github.com/python/cpython/pull/4474 to implement the proposed >> change. >> >> Victor >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/lukasz%40langa.pl > From k7hoven at gmail.com Mon Nov 20 13:58:09 2017 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Mon, 20 Nov 2017 20:58:09 +0200 Subject: [Python-Dev] Comments on PEP 563 (Postponed Evaluation of Annotations) In-Reply-To: <660FD1E1-1098-4EDD-82C4-18B90E901709@langa.pl> References: <8b5cc595-adf0-c75e-6517-26c01445f573@hotpy.org> <660FD1E1-1098-4EDD-82C4-18B90E901709@langa.pl> Message-ID: On Mon, Nov 20, 2017 at 7:58 PM, Lukasz Langa wrote: > I agree with you. The special handling of outermost strings vs. strings > embedded inside annotations bugged me a lot. Now you convinced me that this > functionality should be moved to `get_type_hints()` and the __future__ > import shouldn't try to special-case this one instance, while leaving > others as is. > > ? > That's better. I don't necessarily care if there will be a warning when a string is given as annotation, but if the idea is to simplify things for the future and get rid of strings to represent types, then this would be a good moment to gently "enforce" it. ??Koos -- + Koos Zevenhoven + http://twitter.com/k7hoven + -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Nov 20 14:21:02 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 20 Nov 2017 11:21:02 -0800 Subject: [Python-Dev] Comments on PEP 563 (Postponed Evaluation of Annotations) In-Reply-To: <660FD1E1-1098-4EDD-82C4-18B90E901709@langa.pl> References: <8b5cc595-adf0-c75e-6517-26c01445f573@hotpy.org> <660FD1E1-1098-4EDD-82C4-18B90E901709@langa.pl> Message-ID: OK, this is fine. 
It won't affect mypy (which will continue to treat string literals the same as before) but it will require an update to typing.py -- hope you are working on that with Ivan. On Mon, Nov 20, 2017 at 9:58 AM, Lukasz Langa wrote: > I agree with you. The special handling of outermost strings vs. strings > embedded inside annotations bugged me a lot. Now you convinced me that this > functionality should be moved to `get_type_hints()` and the __future__ > import shouldn't try to special-case this one instance, while leaving > others as is. > > I will be amending the PEP accordingly. > > - Ł > > > On Nov 19, 2017, at 10:56 AM, Mark Shannon wrote: > > > > Hi, > > > > Overall I am strongly in favour of this PEP. It pretty much cures all > the ongoing pain of using PEP 3107 annotations for type hints. > > > > There is one thing I don't like however, and that is treating strings as > if the quotes weren't there. > > While this seems like a superficial simplification to make transition > easier, it introduces inconsistency and will ultimately make both > implementing and using type hints harder. > > > > Having the treatment of strings depend on their depth in the AST seems > confusing and unnecessary: > > "List[int]" becomes 'List[int]' # quotes removed > > but > > List["int"] becomes 'List["int"]' # quotes retained > > > > Also, > > > > T = "My unparseable annotation" > > def f()->T: pass > > > > would remain legal, but > > > > def f()->"My unparseable annotation" > > > > would become illegal. > > > > The change in behaviour between the above two code snippets is already > confusing enough without making one of them a SyntaxError. > > > > Using annotations for purposes other than type hinting is legal and has > been for quite a while. > > Also, PEP 484 type-hints are not the only type system in the Python > ecosystem. Cython has a long history of using static type hints. > > > > For tools other than MyPy, the inconsistent quoting is onerous and will > require double-quoting to prevent a parse error. > > For example > > > > def foo()->"unsigned int": ... > > > > will become illegal and require the cumbersome > > > > def foo()->'"unsigned int"': ... > > > > Cheers, > > Mark. > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > lukasz%40langa.pl > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Nov 20 14:51:26 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 20 Nov 2017 11:51:26 -0800 Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__) In-Reply-To: References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org> Message-ID: Yeah, I don't think there's an action item here except *maybe* changes to the wording of the PEP. Ivan?
On Mon, Nov 20, 2017 at 12:33 AM, Serhiy Storchaka wrote: > 20.11.17 03:02, Guido van Rossum ????: > >> Serhiy's definition sounds recursive (defining __getattr__ to define the >> behavior of __getattr__) but Mark's suggestion makes his intention unclear >> since the error message is still the same. >> > > It is recursive only when the '__dict__' attribute is not defined. I > assumed that it is defined for simplicity. And if isn't defined > hasattr(self, '__dict__') will cause a recursion too. > > In any case the real C code handles this more carefully and effectively. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Mon Nov 20 15:14:17 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 20 Nov 2017 21:14:17 +0100 Subject: [Python-Dev] Comment on PEP 562 (Module __getattr__ and __dir__) In-Reply-To: References: <9fb118ea-5482-db35-8b1d-74a9d92b2ac7@hotpy.org> <1143bd6e-268c-46d3-1940-b3c11e819679@hotpy.org> Message-ID: On 20 November 2017 at 20:51, Guido van Rossum wrote: > Yeah, I don't think there's an action item here except *maybe* changes to > the wording of the PEP. Ivan? > Yes, I will make a small PR with a more detailed description of how __getattr__ works. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Mon Nov 20 15:32:41 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 20 Nov 2017 21:32:41 +0100 Subject: [Python-Dev] Comments on PEP 560 (Core support for typing module and generic types) In-Reply-To: <56cbf831-9e75-9a9c-97c9-40ec77d4f506@hotpy.org> References: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org> <56cbf831-9e75-9a9c-97c9-40ec77d4f506@hotpy.org> Message-ID: On 20 November 2017 at 10:22, Mark Shannon wrote: > On 19/11/17 22:36, Ivan Levkivskyi wrote: > >> Except that there is no such thing as object._getitem__. Probably you >> mean PyObject_GetItem (which is just what is done by BINARY_SUBSCR opcode). >> > In fact, I initially implemented type.__getitem__, but I didn't like it >> for various reasons. > > > Could you elaborate? > I didn't like that although type.__getitem__ would exist, it would almost always raise, which can be confusing. Also dir(type) is already long, I don't want to make it even longer. Maybe there were other reasons that I forgot. Anyway, I propose to stop here, since this is not about the PEP, this is about implementation. This is a topic of a separate discussion. > Given the power and flexibility of the built-in data structures, defining > custom containers is relatively rare. I'm not saying that it should not be > considered, but a few minor hurdles are acceptable to keep the rest of the language > (including more common uses of type-hints) clean. > This essentially means changing decisions already made in PEP 484 and not a topic of this PEP. Also, @Generic[T] class C: ... is currently a syntax error (only names and function calls are allowed in a decorator). Finally, it is too late to change how generics are declared, since it will break existing code. Should `isinstance` and `issubclass` call `__mro_entries__` before >> raising an error if the second argument is not a class? 
>> In other words, if `List` implements `__mro_entries__` to return >> `list` then should `issubclass(x, List)` act like `issubclass(x, >> list)`? >> (IMO, it shouldn't) The reasoning behind this decision should be >> made explicit in the PEP. >> >> >> I think this is orthogonal to the PEP. There are many situations where a >> class is expected, >> and IMO it is clear that all that are not mentioned in the PEP stay >> unchanged. >> > > Indeed, but you do mention issubclass in the PEP. I think a few extra > words of explanation would be helpful. > OK, I will add a comment to the PEP. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Mon Nov 20 16:35:58 2017 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 20 Nov 2017 14:35:58 -0700 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On Mon, Nov 20, 2017 at 8:43 AM, Victor Stinner wrote: > 2017-11-20 16:31 GMT+01:00 Eric Snow : >> That Py_DecodeLocale() can use PyMem_RawMalloc() pre-init is an implementation detail. > > Py_DecodeLocale() uses PyMem_RawMalloc(), and so its result must be > freed by PyMem_RawFree(). It's part the documentation. Ah, I'd missed that. Thanks for pointing it out. > > I'm not sure that I understood correctly. Do you agree to move "PyMem" > globals back to Objects/obmalloc.c? (to allow to call > PyMem_RawMalloc() before Py_Initialize()) I'm okay with that if we can't find another way. However, shouldn't we be able to statically initialize the raw allocator in _PyRuntime, much as we were doing before in obmalloc.c? I have a rough PR up: https://github.com/python/cpython/pull/4481 Also, I opened https://bugs.python.org/issue32096 for the regression. Thanks for bringing it up. -eric From victor.stinner at gmail.com Mon Nov 20 17:03:50 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 20 Nov 2017 23:03:50 +0100 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: 2017-11-20 22:35 GMT+01:00 Eric Snow : > I'm okay with that if we can't find another way. However, shouldn't > we be able to statically initialize the raw allocator in _PyRuntime, > much as we were doing before in obmalloc.c? I have a rough PR up: > > https://github.com/python/cpython/pull/4481 > > Also, I opened https://bugs.python.org/issue32096 for the regression. > Thanks for bringing it up. To statically initialize PyMemAllocatorEx fields, you need to export a lot of allocator functions. I would prefer to not do that. static void* _PyMem_DebugRawMalloc(void *ctx, size_t size); static void* _PyMem_DebugRawCalloc(void *ctx, size_t nelem, size_t elsize); static void* _PyMem_DebugRawRealloc(void *ctx, void *ptr, size_t size); static void _PyMem_DebugRawFree(void *ctx, void *ptr); static void* _PyMem_DebugMalloc(void *ctx, size_t size); static void* _PyMem_DebugCalloc(void *ctx, size_t nelem, size_t elsize); static void* _PyMem_DebugRealloc(void *ctx, void *ptr, size_t size); static void _PyMem_DebugFree(void *ctx, void *p); static void* _PyObject_Malloc(void *ctx, size_t size); static void* _PyObject_Calloc(void *ctx, size_t nelem, size_t elsize); static void _PyObject_Free(void *ctx, void *p); static void* _PyObject_Realloc(void *ctx, void *ptr, size_t size); The rules to choose the allocator to each domain are also complex depending if pymalloc is enabled, debug hooks are enabled by default, etc. 
The memory allocator is also linked to _PyMem_Debug which is not currently in Include/internals/ but Objects/obmalloc.c. I understand that moving global variables to _PyRuntime helps to clarify how these variables are initialized and then finalized, but memory allocators are a complex corner case. main(), Py_Main() and _PyRuntime_Initialize() now have to temporarily change the allocators to make sure that their initialization and finalization use the same allocator. I prefer to revert the change on memory allocators, and retry later to fix it, once other initialization issues are fixed ;-) Victor From guido at python.org Mon Nov 20 19:34:03 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 20 Nov 2017 16:34:03 -0800 Subject: [Python-Dev] Comments on PEP 560 (Core support for typing module and generic types) In-Reply-To: References: <99dc8c47-6dd7-11d5-288c-c0bedcd55f20@hotpy.org> <56cbf831-9e75-9a9c-97c9-40ec77d4f506@hotpy.org> Message-ID: Mark, with that comment now added to the PEP, are you satisfied? https://github.com/python/peps/pull/479 On Mon, Nov 20, 2017 at 12:32 PM, Ivan Levkivskyi wrote: > On 20 November 2017 at 10:22, Mark Shannon wrote: > >> On 19/11/17 22:36, Ivan Levkivskyi wrote: >> >>> Except that there is no such thing as object.__getitem__. Probably you >>> mean PyObject_GetItem (which is just what is done by BINARY_SUBSCR opcode). >>> >> In fact, I initially implemented type.__getitem__, but I didn't like it >>> for various reasons. >> >> >> Could you elaborate? >> > > I didn't like that although type.__getitem__ would exist, it would almost > always raise, which can be confusing. Also dir(type) is already long, > I don't want to make it even longer. Maybe there were other reasons that I > forgot. > > Anyway, I propose to stop here, since this is not about the PEP, this is > about implementation. > This is a topic of a separate discussion. > > >> Given the power and flexibility of the built-in data structures, defining >> custom containers is relatively rare. I'm not saying that it should not be >> considered, > > but a few minor hurdles are acceptable to keep the rest of the language >> (including more common uses of type-hints) clean. >> > > This essentially means changing decisions already made in PEP 484 and not > a topic of this PEP. > Also, > > @Generic[T] > class C: > ... > > is currently a syntax error (only names and function calls are allowed in > a decorator). > Finally, it is too late to change how generics are declared, since it will > break > existing code. > > Should `isinstance` and `issubclass` call `__mro_entries__` before >>> raising an error if the second argument is not a class? >>> In other words, if `List` implements `__mro_entries__` to return >>> `list` then should `issubclass(x, List)` act like `issubclass(x, >>> list)`? >>> (IMO, it shouldn't) The reasoning behind this decision should be >>> made explicit in the PEP. >>> >>> >>> I think this is orthogonal to the PEP. There are many situations where a >>> class is expected, >>> and IMO it is clear that all that are not mentioned in the PEP stay >>> unchanged. >>> >> >> Indeed, but you do mention issubclass in the PEP. I think a few extra >> words of explanation would be helpful. >> > > OK, I will add a comment to the PEP.
> > -- > Ivan > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Nov 20 21:24:44 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Nov 2017 12:24:44 +1000 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On 21 November 2017 at 01:31, Eric Snow wrote: > On Nov 18, 2017 19:20, "Nick Coghlan" wrote: > > > OK, in that case I think the answer to Victor's question is: > > 1. Breaking calling Py_DecodeLocale() before calling Py_Initialize() > is a compatibility break with the API implied by our own usage > examples, and we'll need to revert the breakage for 3.7, > > > +1 > > The break was certainly unintentional. :/ Fortunately, Py_DecodeLocale() > should be the only "Process-wide parameter" needing repair. I suppose, > PyMem_RawMalloc() and PyMem_RawFree() *could* be considered too, but my > understanding is that they aren't really intended for direct use (especially > pre-init). PyMem_RawFree will need to continue working pre-initialize as well, since it's the specified cleanup function for Py_DecodeLocale. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ericsnowcurrently at gmail.com Tue Nov 21 10:57:14 2017 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 21 Nov 2017 08:57:14 -0700 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On Mon, Nov 20, 2017 at 3:03 PM, Victor Stinner wrote: > To statically initialize PyMemAllocatorEx fields, you need to export a > lot of allocator functions. I would prefer to not do that. > > [snip] > > The rules to choose the allocator to each domain are also complex > depending if pymalloc is enabled, debug hooks are enabled by default, > etc. The memory allocator is also linked to _PyMem_Debug which is not > currently in Include/internals/ but Objects/obmalloc.c. I'm not suggesting supporting the full machinery. Rather, as my PR demonstrates, we can statically initialize the minimum needed to support pre-init use of PyMem_RawMalloc() and PyMem_RawFree(). The allocators will be fully initialized once the runtime is initialized (i.e. once Py_Initialize() is called), just as they are now. FWIW, I'm not sure that's the best approach. See my notes in https://bugs.python.org/issue32096. > > I understand that moving global variables to _PyRuntime helps to > clarify how these variables are initialized and then finalized, but > memory allocators are a complex corner case. Agreed. I spent a large portion of my time getting the allocators right when working on the original _PyRuntime patch. It's tricky code. -eric From lukasz at langa.pl Tue Nov 21 19:26:19 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Tue, 21 Nov 2017 16:26:19 -0800 Subject: [Python-Dev] PEP 563: Postponed Evaluation of Annotations (Draft 3) Message-ID: Based on the feedback I gathered in early November, I'm publishing the third draft for consideration on python-dev. I hope you like it!
A nicely formatted rendering is available here: https://www.python.org/dev/peps/pep-0563/ The full list of changes between this version and the previous draft can be found here: https://github.com/ambv/static-annotations/compare/python-dev1...python-dev2 - Ł PEP: 563 Title: Postponed Evaluation of Annotations Version: $Revision$ Last-Modified: $Date$ Author: Łukasz Langa Discussions-To: Python-Dev Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 8-Sep-2017 Python-Version: 3.7 Post-History: 1-Nov-2017, 21-Nov-2017 Resolution: Abstract ======== PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. PEP 484 introduced a standard meaning to annotations: type hints. PEP 526 defined variable annotations, explicitly tying them with the type hinting use case. This PEP proposes changing function annotations and variable annotations so that they are no longer evaluated at function definition time. Instead, they are preserved in ``__annotations__`` in string form. This change is going to be introduced gradually, starting with a new ``__future__`` import in Python 3.7. Rationale and Goals =================== PEP 3107 added support for arbitrary annotations on parts of a function definition. Just like default values, annotations are evaluated at function definition time. This creates a number of issues for the type hinting use case: * forward references: when a type hint contains names that have not been defined yet, that definition needs to be expressed as a string literal; * type hints are executed at module import time, which is not computationally free. Postponing the evaluation of annotations solves both problems. Non-goals --------- Just like in PEP 484 and PEP 526, it should be emphasized that **Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.** This PEP is meant to solve the problem of forward references in type annotations. There are still cases outside of annotations where forward references will require usage of string literals. Those are listed in a later section of this document. Annotations without forced evaluation enable opportunities to improve the syntax of type hints. This idea will require its own separate PEP and is not discussed further in this document. Non-typing usage of annotations ------------------------------- While annotations are still available for arbitrary use besides type checking, it is worth mentioning that the design of this PEP, as well as its precursors (PEP 484 and PEP 526), is predominantly motivated by the type hinting use case. In Python 3.8 PEP 484 will graduate from provisional status. Other enhancements to the Python programming language like PEP 544, PEP 557, or PEP 560, are already being built on this basis as they depend on type annotations and the ``typing`` module as defined by PEP 484. In fact, the reason PEP 484 is staying provisional in Python 3.7 is to enable rapid evolution for another release cycle that some of the aforementioned enhancements require. With this in mind, uses for annotations incompatible with the aforementioned PEPs should be considered deprecated. Implementation ============== In Python 4.0, function and variable annotations will no longer be evaluated at definition time. Instead, a string form will be preserved in the respective ``__annotations__`` dictionary.
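For example, assuming the ``__future__`` import described below is in effect, annotations are kept in their string form; the return type here deliberately names a class that is not defined yet::

    from __future__ import annotations

    def compute(value: int) -> DefinedLater:
        ...

    assert compute.__annotations__ == {
        'value': 'int', 'return': 'DefinedLater'}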
Static type checkers will see no difference in behavior, whereas tools using annotations at runtime will have to perform postponed evaluation. The string form is obtained from the AST during the compilation step, which means that the string form might not preserve the exact formatting of the source. Note: if an annotation was a string literal already, it will still be wrapped in a string. Annotations need to be syntactically valid Python expressions, also when passed as literal strings (i.e. ``compile(literal, '', 'eval')``). Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by ``typing.get_type_hints()``). Note that as per PEP 526, local variable annotations are not evaluated at all since they are not accessible outside of the function's closure. Enabling the future behavior in Python 3.7 ------------------------------------------ The functionality described above can be enabled starting from Python 3.7 using the following special import:: from __future__ import annotations A reference implementation of this functionality is available `on GitHub `_. Resolving Type Hints at Runtime =============================== To resolve an annotation at runtime from its string form to the result of the enclosed expression, user code needs to evaluate the string. For code that uses type hints, the ``typing.get_type_hints(obj, globalns=None, localns=None)`` function correctly evaluates expressions back from its string form. Note that all valid code currently using ``__annotations__`` should already be doing that since a type annotation can be expressed as a string literal. For code which uses annotations for other purposes, a regular ``eval(ann, globals, locals)`` call is enough to resolve the annotation. In both cases it's important to consider how globals and locals affect the postponed evaluation. An annotation is no longer evaluated at the time of definition and, more importantly, *in the same scope* where it was defined. Consequently, using local state in annotations is no longer possible in general. As for globals, the module where the annotation was defined is the correct context for postponed evaluation. The ``get_type_hints()`` function automatically resolves the correct value of ``globalns`` for functions and classes. It also automatically provides the correct ``localns`` for classes. When running ``eval()``, the value of globals can be gathered in the following way: * function objects hold a reference to their respective globals in an attribute called ``__globals__``; * classes hold the name of the module they were defined in, this can be used to retrieve the respective globals:: cls_globals = vars(sys.modules[SomeClass.__module__]) Note that this needs to be repeated for base classes to evaluate all ``__annotations__``. * modules should use their own ``__dict__``. The value of ``localns`` cannot be reliably retrieved for functions because in all likelihood the stack frame at the time of the call no longer exists. For classes, ``localns`` can be composed by chaining vars of the given class and its base classes (in the method resolution order). Since slots can only be filled after the class was defined, we don't need to consult them for this purpose. 
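As an illustrative sketch (the class below is invented for the example), the two approaches look as follows::

    from __future__ import annotations
    import sys
    import typing

    class Node:
        parent: typing.Optional[Node]  # stored as 'typing.Optional[Node]'

    # for type hints, the helper resolves globalns/localns automatically:
    hints = typing.get_type_hints(Node)  # {'parent': typing.Optional[Node]}

    # for other uses of annotations, eval() against the defining module:
    raw = Node.__annotations__['parent']
    resolved = eval(raw, vars(sys.modules[Node.__module__]))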
Runtime annotation resolution and class decorators -------------------------------------------------- Metaclasses and class decorators that need to resolve annotations for the current class will fail for annotations that use the name of the current class. Example:: def class_decorator(cls): annotations = get_type_hints(cls) # raises NameError on 'C' print(f'Annotations for {cls}: {annotations}') return cls @class_decorator class C: singleton: 'C' = None This was already true before this PEP. The class decorator acts on the class before it's assigned a name in the current definition scope. Runtime annotation resolution and ``TYPE_CHECKING`` --------------------------------------------------- Sometimes there's code that must be seen by a type checker but should not be executed. For such situations the ``typing`` module defines a constant, ``TYPE_CHECKING``, that is considered ``True`` during type checking but ``False`` at runtime. Example:: import typing if typing.TYPE_CHECKING: import expensive_mod def a_func(arg: expensive_mod.SomeClass) -> None: a_var: expensive_mod.SomeClass = arg ... This approach is also useful when handling import cycles. Trying to resolve annotations of ``a_func`` at runtime using ``typing.get_type_hints()`` will fail since the name ``expensive_mod`` is not defined (``TYPE_CHECKING`` variable being ``False`` at runtime). This was already true before this PEP. Backwards Compatibility ======================= This is a backwards incompatible change. Applications depending on arbitrary objects to be directly present in annotations will break if they are not using ``typing.get_type_hints()`` or ``eval()``. Annotations that depend on locals at the time of the function definition will not be resolvable later. Example:: def generate(): A = Optional[int] class C: field: A = 1 def method(self, arg: A) -> None: ... return C X = generate() Trying to resolve annotations of ``X`` later by using ``get_type_hints(X)`` will fail because ``A`` and its enclosing scope no longer exists. Python will make no attempt to disallow such annotations since they can often still be successfully statically analyzed, which is the predominant use case for annotations. Annotations using nested classes and their respective state are still valid. They can use local names or the fully qualified name. Example:: class C: field = 'c_field' def method(self) -> C.field: # this is OK ... def method(self) -> field: # this is OK ... def method(self) -> C.D: # this is OK ... def method(self) -> D: # this is OK ... class D: field2 = 'd_field' def method(self) -> C.D.field2: # this is OK ... def method(self) -> D.field2: # this is OK ... def method(self) -> field2: # this is OK ... def method(self) -> field: # this FAILS, class D doesn't ... # see C's attributes, This was # already true before this PEP. In the presence of an annotation that isn't a syntactically valid expression, SyntaxError is raised at compile time. However, since names aren't resolved at that time, no attempt is made to validate whether used names are correct or not. Deprecation policy ------------------ Starting with Python 3.7, a ``__future__`` import is required to use the described functionality. No warnings are raised. In Python 3.8 a ``PendingDeprecationWarning`` is raised by the compiler in the presence of type annotations in modules without the ``__future__`` import. Starting with Python 3.9 the warning becomes a ``DeprecationWarning``. In Python 4.0 this will become the default behavior. 
Use of annotations incompatible with this PEP is no longer supported. Forward References ================== Deliberately using a name before it was defined in the module is called a forward reference. For the purpose of this section, we'll call any name imported or defined within a ``if TYPE_CHECKING:`` block a forward reference, too. This PEP addresses the issue of forward references in *type annotations*. The use of string literals will no longer be required in this case. However, there are APIs in the ``typing`` module that use other syntactic constructs of the language, and those will still require working around forward references with string literals. The list includes: * type definitions:: T = TypeVar('T', bound='<type>') UserId = NewType('UserId', '<type>') Employee = NamedTuple('Employee', [('name', '<type>'), ('id', '<type>')]) * aliases:: Alias = Optional['<type>'] AnotherAlias = Union['<type>', '<type>'] YetAnotherAlias = '<type>' * casting:: cast('<type>', value) * base classes:: class C(Tuple['<type>', '<type>']): ... Depending on the specific case, some of the cases listed above might be worked around by placing the usage in a ``if TYPE_CHECKING:`` block. This will not work for any code that needs to be available at runtime, notably for base classes and casting. For named tuples, using the new class definition syntax introduced in Python 3.6 solves the issue. In general, fixing the issue for *all* forward references requires changing how module instantiation is performed in Python, from the current single-pass top-down model. This would be a major change in the language and is out of scope for this PEP. Rejected Ideas ============== Keeping the ability to use function local state when defining annotations ------------------------------------------------------------------------- With postponed evaluation, this would require keeping a reference to the frame in which an annotation got created. This could be achieved for example by storing all annotations as lambdas instead of strings. This would be prohibitively expensive for highly annotated code as the frames would keep all their objects alive. That includes predominantly objects that won't ever be accessed again. To be able to address class-level scope, the lambda approach would require a new kind of cell in the interpreter. This would proliferate the number of types that can appear in ``__annotations__``, as well as wouldn't be as introspectable as strings. Note that in the case of nested classes, the functionality to get the effective "globals" and "locals" at definition time is provided by ``typing.get_type_hints()``. If a function generates a class or a function with annotations that have to use local variables, it can populate the given generated object's ``__annotations__`` dictionary directly, without relying on the compiler. Disallowing local state usage for classes, too ---------------------------------------------- This PEP originally proposed limiting names within annotations to only allow names from the module-level scope, including for classes. The author argued this makes name resolution unambiguous, including in cases of conflicts between local names and module-level names. This idea was ultimately rejected in case of classes. Instead, ``typing.get_type_hints()`` got modified to populate the local namespace correctly if class-level annotations are needed. The reasons for rejecting the idea were that it goes against the intuition of how scoping works in Python, and would break enough existing type annotations to make the transition cumbersome.
Finally, local scope access is required for class decorators to be able to evaluate type annotations. This is because class decorators are applied before the class receives its name in the outer scope. Introducing a new dictionary for the string literal form instead ---------------------------------------------------------------- Yury Selivanov shared the following idea: 1. Add a new special attribute to functions: ``__annotations_text__``. 2. Make ``__annotations__`` a lazy dynamic mapping, evaluating expressions from the corresponding key in ``__annotations_text__`` just-in-time. This idea is supposed to solve the backwards compatibility issue, removing the need for a new ``__future__`` import. Sadly, this is not enough. Postponed evaluation changes which state the annotation has access to. While postponed evaluation fixes the forward reference problem, it also makes it impossible to access function-level locals anymore. This alone is a source of backwards incompatibility which justifies a deprecation period. A ``__future__`` import is an obvious and explicit indicator of opting in for the new functionality. It also makes it trivial for external tools to recognize the difference between a Python file using the old or the new approach. In the former case, that tool would recognize that local state access is allowed, whereas in the latter case it would recognize that forward references are allowed. Finally, just-in-time evaluation in ``__annotations__`` is an unnecessary step if ``get_type_hints()`` is used later. Dropping annotations with -O ---------------------------- There are two reasons this is not satisfying for the purpose of this PEP. First, this only addresses runtime cost, not forward references; those still cannot be safely used in source code. A library maintainer would never be able to use forward references since that would force the library users to use this new hypothetical -O switch. Second, this throws the baby out with the bath water. Now *no* runtime annotation use can be performed. PEP 557 is one example of a recent development where evaluating type annotations at runtime is useful. All that being said, a granular -O option to drop annotations is a possibility in the future, as it's conceptually compatible with existing -O behavior (dropping docstrings and assert statements). This PEP does not invalidate the idea. Pass string literals in annotations verbatim to ``__annotations__`` ------------------------------------------------------------------- This PEP originally suggested directly storing the contents of a string literal under its respective key in ``__annotations__``. This was meant to simplify support for runtime type checkers. Mark Shannon pointed out this idea was flawed since it wasn't handling situations where strings are only part of a type annotation. The inconsistency of it was always apparent but given that it doesn't fully prevent cases of double-wrapping strings anyway, it is not worth it. Make the name of the future import more verbose ----------------------------------------------- Instead of requiring the following import:: from __future__ import annotations the PEP could call the feature more explicitly, for example ``string_annotations``, ``stringify_annotations``, ``annotation_strings``, ``annotations_as_strings``, ``lazy_annotations``, ``static_annotations``, etc. The problem with those names is that they are very verbose. Each of them besides ``lazy_annotations`` would constitute the longest future feature name in Python.
They are long to type and harder to remember than the single-word form.
There is a precedent of a future import name that sounds overly generic
but in practice was obvious to users as to what it does::

    from __future__ import division


Prior discussion
================

In PEP 484
----------

The forward reference problem was discussed when PEP 484 was originally
drafted, leading to the following statement in the document:

    A compromise is possible where a ``__future__`` import could enable
    turning *all* annotations in a given module into string literals,
    as follows::

        from __future__ import annotations

        class ImSet:
            def add(self, a: ImSet) -> List[ImSet]: ...

        assert ImSet.add.__annotations__ == {
            'a': 'ImSet', 'return': 'List[ImSet]'
        }

    Such a ``__future__`` import statement may be proposed in a
    separate PEP.

python/typing#400
-----------------

The problem was discussed at length on the typing module's GitHub
project, under `Issue 400 <https://github.com/python/typing/issues/400>`_.
The problem statement there includes critique of generic types
requiring imports from ``typing``.  This tends to be confusing to
beginners:

Why this::

    from typing import List, Set
    def dir(o: object = ...) -> List[str]: ...
    def add_friends(friends: Set[Friend]) -> None: ...

But not this::

    def dir(o: object = ...) -> list[str]: ...
    def add_friends(friends: set[Friend]) -> None: ...

Why this::

    up_to_ten = list(range(10))
    friends = set()

But not this::

    from typing import List, Set
    up_to_ten = List[int](range(10))
    friends = Set[Friend]()

While typing usability is an interesting problem, it is out of scope of
this PEP.  Specifically, any extensions of the typing syntax
standardized in PEP 484 will require their own respective PEPs and
approval.

Issue 400 ultimately suggests postponing evaluation of annotations and
keeping them as strings in ``__annotations__``, just like this PEP
specifies.  This idea was received well.  Ivan Levkivskyi supported
using the ``__future__`` import and suggested unparsing the AST in
``compile.c``.  Jukka Lehtosalo pointed out that there are some cases
of forward references where types are used outside of annotations and
postponed evaluation will not help those.  For those cases using the
string literal notation would still be required.  Those cases are
discussed briefly in the "Forward References" section of this PEP.

The biggest controversy on the issue was Guido van Rossum's concern
that untokenizing annotation expressions back to their string form has
no precedent in the Python programming language and feels like a hacky
workaround.  He said:

    One thing that comes to mind is that it's a very random change to
    the language.  It might be useful to have a more compact way to
    indicate deferred execution of expressions (using less syntax than
    ``lambda:``).  But why would the use case of type annotations be so
    all-important to change the language to do it there first (rather
    than proposing a more general solution), given that there's already
    a solution for this particular use case that requires very minimal
    syntax?

Eventually, Ethan Smith and schollii voiced that feedback gathered
during PyCon US suggests that the state of forward references needs
fixing.  Guido van Rossum suggested coming back to the ``__future__``
idea, pointing out that to prevent abuse, it's important for the
annotations to be kept both syntactically valid and evaluating
correctly at runtime.
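To make this requirement concrete, the following minimal sketch (not
taken from the discussion itself) shows how the proposed ``__future__``
import keeps annotations both syntactically valid and evaluable at
runtime::

    from __future__ import annotations
    from typing import get_type_hints

    def greet(user: User) -> str:  # 'User' is defined only below
        return 'Hello, ' + user.name

    class User:
        name: str

    # Stored as strings, yet still resolvable later:
    assert greet.__annotations__ == {'user': 'User', 'return': 'str'}
    assert get_type_hints(greet) == {'user': User, 'return': str}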
First draft discussion on python-ideas
--------------------------------------

Discussion happened largely in two threads, `the original announcement
`_ and a follow-up called `PEP 563 and expensive backwards
compatibility `_.

The PEP received rather warm feedback (4 strongly in favor, 2 in favor
with concerns, 2 against).  The biggest voice of concern on the former
thread was Steven D'Aprano's review, stating that the problem
definition of the PEP doesn't justify breaking backwards compatibility.
In this response Steven seemed mostly concerned about Python no longer
supporting evaluation of annotations that depended on local
function/class state.

A few people voiced concerns that there are libraries using annotations
for non-typing purposes.  However, none of the named libraries would be
invalidated by this PEP.  They do require adapting to the new
requirement to call ``eval()`` on the annotation with the correct
``globals`` and ``locals`` set.  This detail about ``globals`` and
``locals`` having to be correct was picked up by a number of
commenters.  Nick Coghlan benchmarked turning annotations into lambdas
instead of strings; sadly, this proved to be much slower at runtime
than the current situation.

The latter thread was started by Jim J. Jewett who stressed that the
ability to properly evaluate annotations is an important requirement
and backwards compatibility in that regard is valuable.  After some
discussion he admitted that side effects in annotations are a code
smell and modal support to either perform or not perform evaluation is
a messy solution.  His biggest concern remained loss of functionality
stemming from the evaluation restrictions on global and local scope.

Nick Coghlan pointed out that some of those evaluation restrictions
from the PEP could be lifted by a clever implementation of an
evaluation helper, which could solve self-referencing classes even in
the form of a class decorator.  He suggested the PEP should provide
this helper function in the standard library.

Second draft discussion on python-dev
-------------------------------------

Discussion happened mainly in the `announcement thread `_, followed by
a brief discussion under `Mark Shannon's post `_.

Steven D'Aprano was concerned whether it's acceptable for typos to be
allowed in annotations after the change proposed by the PEP.  Brett
Cannon responded that type checkers and other static analyzers (like
linters or programming text editors) will catch this type of error.
Jukka Lehtosalo added that this situation is analogous to how names in
function bodies are not resolved until the function is called.

A major topic of discussion was Nick Coghlan's suggestion to store
annotations in "thunk form", in other words as a specialized lambda
which would be able to access class-level scope (and allow for scope
customization at call time).  He presented a possible design for it
(`indirect attribute cells `_).  This was later seen as equivalent to
"special forms" in Lisp.  Guido van Rossum expressed worry that this
sort of feature cannot be safely implemented in twelve weeks (i.e. in
time before the Python 3.7 beta freeze).

After a while it became clear that the point of division between
supporters of the string form vs. supporters of the thunk form is
actually about whether annotations should be perceived as a general
syntactic element vs. something tied to the type checking use case.

Finally, Guido van Rossum declared he's rejecting the thunk idea based
on the fact that it would require a new building block in the
interpreter.
This block would be exposed in annotations, multiplying possible types
of values stored in ``__annotations__`` (arbitrary objects, strings,
and now thunks).  Moreover, thunks aren't as introspectable as strings.
Most importantly, Guido van Rossum explicitly stated interest in
gradually restricting the use of annotations to static typing (with an
optional runtime component).

Nick Coghlan was won over by PEP 563, too, promptly beginning the
mandatory bike shedding session on the name of the ``__future__``
import.  Many debaters agreed that ``annotations`` seems like an overly
broad name for the feature.  Guido van Rossum briefly decided to call
it ``string_annotations`` but then changed his mind, arguing that
``division`` is a precedent of a broad name with a clear meaning.

The final improvement to the PEP suggested in the discussion by Mark
Shannon was the rejection of the temptation to pass string literals
through to ``__annotations__`` verbatim.

A side-thread of discussion started around the runtime penalty of
static typing, with topics like the import time of the ``typing``
module (which is comparable to ``re`` without dependencies, and three
times as heavy as ``re`` when counting dependencies).


Acknowledgements
================

This document could not be completed without valuable input,
encouragement and advice from Guido van Rossum, Jukka Lehtosalo, and
Ivan Levkivskyi.


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 874 bytes
Desc: Message signed with OpenPGP
URL: 

From thomas.mansencal at gmail.com  Tue Nov 21 23:08:47 2017
From: thomas.mansencal at gmail.com (Thomas Mansencal)
Date: Wed, 22 Nov 2017 04:08:47 +0000
Subject: [Python-Dev] Script bootstrapping executables on Windows
Message-ID: 

Hi,

This is a Windows-specific question. To give a bit of context: I'm
working in a studio where, depending on the project, we use different
Python interpreters installed in different locations, e.g. Python
2.7.13, Python 2.7.14, Python 3.6. We set PATH, PYTHONHOME and
PYTHONPATH accordingly, depending on the interpreter in use. Our Python
packages live atomically on the network and are added to the
environment on a per-project basis by extending PYTHONPATH. This is in
contrast to using a monolithic virtual environment built with
virtualenv or conda. Assuming it is compatible, a Python package might
be used with any of the 3 aforementioned interpreters, e.g. yapf (a
code formatter).

Now, on Windows, if you for example *pip install yapf*, a yapf.exe
bootstrapping executable and a yapf-script.py file are generated. The
bootstrapping executable seems to look for the yapf-script.py file and
launch it using the absolute hardcoded interpreter path from the
yapf-script.py shebang. Given the above, we run into issues if, for
example, yapf was deployed using Python 2.7.13 but the Python 2.7.14
interpreter is being used in the environment instead. We get a "failed
to create process." error in that case.

What we would like is to not be tied to the absolute interpreter path
but to have it defined with a variable, or to just use #!python.

I have tried searching for the above error in the CPython source code
and the installation directory, without luck.
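For illustration, a setuptools-generated yapf-script.py roughly follows
the shape below (the shebang path and entry-point details here are
hypothetical, not taken from an actual install):

    #!C:\Python27\python.exe
    # Sketch of a generated wrapper script: the adjacent yapf.exe
    # launcher reads the shebang above and spawns exactly that
    # interpreter with this script as its argument.
    import re
    import sys
    from pkg_resources import load_entry_point

    if __name__ == '__main__':
        sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
        sys.exit(load_entry_point('yapf', 'console_scripts', 'yapf')())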
I would like to know what module/package is responsible for generating
the bootstrapping executables, to understand how it works and see if we
can doctor it for our usage.

Bests,

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com  Wed Nov 22 04:38:32 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 22 Nov 2017 10:38:32 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: 
References: 
Message-ID: 

2017-11-21 16:57 GMT+01:00 Eric Snow :
>> I understand that moving global variables to _PyRuntime helps to
>> clarify how these variables are initialized and then finalized, but
>> memory allocators are a complex corner case.
>
> Agreed.  I spent a large portion of my time getting the allocators
> right when working on the original _PyRuntime patch.  It's tricky
> code.

Oh, I forgot to notify you: when I worked on Py_Main(), I got crashes
because PyMem_RawMalloc() wasn't usable before calling Py_Initialize().
This is what I call a regression, and that's why I started this thread
:-)

I fixed the issue by calling _PyRuntime_Initialize() as the very first
function in main().

I also had to add _PyMem_GetDefaultRawAllocator() to get a
deterministic memory allocator, rather than depending on the allocator
set by an application embedding Python; we must be sure that the same
allocator is used to initialize and finalize Python.

Victor

From solipsis at pitrou.net  Wed Nov 22 06:04:34 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 22 Nov 2017 12:04:34 +0100
Subject: [Python-Dev] Python initialization and embedded Python
References: 
Message-ID: <20171122120434.2fa59ea4@fsol>

On Wed, 22 Nov 2017 10:38:32 +0100
Victor Stinner  wrote:
>
> I fixed the issue by calling _PyRuntime_Initialize() as the very first
> function in main().
>
> I also had to add _PyMem_GetDefaultRawAllocator() to get a
> deterministic memory allocator, rather than depending on the allocator
> set an application embedding Python, we must be sure that the same
> allocator is used to initialize and finalize Python.

This is a bit worrying.  Do Python embedders have to go through the
same dance?  IMHO this really needs a simple solution documented
somewhere.  Also, hopefully when you do the wrong thing, you get a
clear error message to know how to fix your code?

Regards

Antoine.

From victor.stinner at gmail.com  Wed Nov 22 06:12:32 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 22 Nov 2017 12:12:32 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: <20171122120434.2fa59ea4@fsol>
References: <20171122120434.2fa59ea4@fsol>
Message-ID: 

2017-11-22 12:04 GMT+01:00 Antoine Pitrou :
> IMHO this really needs a simple solution documented somewhere.  Also,
> hopefully when you do the wrong thing, you get a clear error message to
> know how to fix your code?

Right now, calling PyMem_RawMalloc() before calling
_PyRuntime_Initialize() calls the function at address NULL, so you get
a segmentation fault.

Documenting the new requirements is part of the discussion; it's one
option for fixing this issue.
Victor

From solipsis at pitrou.net  Wed Nov 22 06:19:52 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 22 Nov 2017 12:19:52 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: 
References: <20171122120434.2fa59ea4@fsol>
Message-ID: <20171122121952.24b1409f@fsol>

On Wed, 22 Nov 2017 12:12:32 +0100
Victor Stinner  wrote:
> 2017-11-22 12:04 GMT+01:00 Antoine Pitrou :
> > IMHO this really needs a simple solution documented somewhere.  Also,
> > hopefully when you do the wrong thing, you get a clear error message to
> > know how to fix your code?
>
> Right now, calling PyMem_RawMalloc() before calling
> _PyRuntime_Initialize() calls the function at address NULL, so you get
> a segmentation fault.

Can we get something more readable?  For example:

FATAL ERROR: PyMem_RawMalloc(): malloc function is NULL, did you call
_PyRuntime_Initialize?

Regards

Antoine.

From storchaka at gmail.com  Wed Nov 22 08:03:09 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 22 Nov 2017 15:03:09 +0200
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
Message-ID: 

From
https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions.

    g = [(yield i) for i in range(3)]

Syntactically this looks like a list comprehension, and g should be a
list, right? But actually it is a generator. This code is equivalent to
the following code:

    def _make_list(it):
        result = []
        for i in it:
            result.append((yield i))
        return result
    g = _make_list(iter(range(3)))

Due to "yield" in the expression, _make_list() is not a function
returning a list, but a generator function returning a generator.

This change in semantics looks unintentional to me. It looks like it
leaks an implementation detail.

If a list comprehension were implemented not via creating and calling
an intermediate function, but via an inlined loop (like in Python 2),
this would be a syntax error if used outside of a function, or would
make the outer function a generator function.

    __result = []
    __i = None
    try:
        for __i in range(3):
            __result.append((yield __i))
        g = __result
    finally:
        del __result, __i

I don't see how the current behavior can be useful.

From levkivskyi at gmail.com  Wed Nov 22 08:25:16 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 14:25:16 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: 
Message-ID: 

Serhiy,

I think this is indeed a problem. For me the biggest surprise was that
`yield` inside a comprehension does not turn the surrounding function
into a generator; see also
https://stackoverflow.com/questions/29334054/why-am-i-getting-different-results-when-using-a-list-comprehension-with-coroutin

In fact there is a b.p.o. issue for this,
https://bugs.python.org/issue10544; it has been assigned to me since
July, but I was focused on other things recently.
My plan was to restore the Python 2 semantics while still avoiding the
leak of the comprehension variable to the enclosing scope (the initial
reason for introducing the auxiliary "_make_list" function, IIUC).
So that:

1) g = [(yield i) for i in range(3)] outside a function will be a
SyntaxError (yield outside a function)
2) g = [(yield i) for i in range(3)] inside a function will turn that
enclosing function into a generator.
3) accessing i after g = [(yield i) for i in range(3)] will give a
NameError: name 'i' is not defined

If you have time to work on this, then I will be glad if you take care
of this issue; you can re-assign it.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net  Wed Nov 22 08:38:58 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 22 Nov 2017 14:38:58 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
References: 
Message-ID: <20171122143858.17e51546@fsol>

On Wed, 22 Nov 2017 15:03:09 +0200
Serhiy Storchaka  wrote:
> From
> https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions.
>
> g = [(yield i) for i in range(3)]
>
> Syntactically this looks like a list comprehension, and g should be a
> list, right? But actually it is a generator. This code is equivalent to
> the following code:
>
> def _make_list(it):
>     result = []
>     for i in it:
>         result.append(yield i)
>     return result
> g = _make_list(iter(range(3)))
>
> Due to "yield" in the expression _make_list() is not a function
> returning a list, but a generator function returning a generator.
>
> This change in semantic looks unintentional to me. It looks like leaking
> an implementation detail.

Perhaps we can deprecate the use of "yield" in comprehensions and make
it a syntax error in a couple versions?

I don't see a reason for writing such code rather than the more
explicit variants.  It looks really obscure, regardless of the actual
semantics.

Regards

Antoine.

From storchaka at gmail.com  Wed Nov 22 08:53:41 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 22 Nov 2017 15:53:41 +0200
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: 
Message-ID: 

22.11.17 15:25, Ivan Levkivskyi wrote:
> I think this is indeed a problem.. For me the biggest surprise was that
> `yield` inside a comprehension does not turn a surrounding function into
> comprehension, see also
> https://stackoverflow.com/questions/29334054/why-am-i-getting-different-results-when-using-a-list-comprehension-with-coroutin
>
> In fact there is a b.p.o. issue for this
> https://bugs.python.org/issue10544, it is assigned to me since July, but
> I was focused on other things recently.
> My plan was to restore the Python 2 semantics while still avoiding the
> leak of comprehension variable to the enclosing scope (the initial
> reason of introducing auxiliary "_make_list" function IIUC).
> So that:
>
> 1) g = [(yield i) for i in range(3)] outside a function will be a
> SyntaxError (yield outside a function)
> 2) g = [(yield i) for i in range(3)] inside a function will turn that
> enclosing function into generator.
> 3) accessing i after g = [(yield i) for i in range(3)] will give a
> NameError: name 'i' is not defined
>
> If you have time to work on this, then I will be glad if you take care
> of this issue, you can re-assign it.

I have the same plan. I know how to implement this for comprehensions,
but the tricky question is what to do with generator expressions?
Ideally

    result = [expr for i in iterable]

and

    result = list(expr for i in iterable)

should have the same semantics. I.e. if expr is "(yield i)", this
should turn the enclosing function into a generator function, and fill
the list with the values passed to the generator's .send(). I have no
idea how to implement this.
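For concreteness, a minimal sketch of the current CPython 3.6 behavior
at module level (this illustration is not from the original message):

    >>> g = [(yield i) for i in range(3)]
    >>> type(g)   # looks like a list comprehension, returns a generator
    <class 'generator'>
    >>> list(g)   # the list built inside is discarded by list()
    [0, 1, 2]
    >>> list((yield i) for i in range(3))   # the genexp variant differs
    [0, None, 1, None, 2, None]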
From levkivskyi at gmail.com  Wed Nov 22 08:53:11 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 14:53:11 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: <20171122143858.17e51546@fsol>
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 14:38, Antoine Pitrou  wrote:

> On Wed, 22 Nov 2017 15:03:09 +0200
> Serhiy Storchaka  wrote:
> > From
> > https://stackoverflow.com/questions/45190729/
> differences-between-generator-comprehension-expressions.
> >
> > g = [(yield i) for i in range(3)]
> >
> > Syntactically this looks like a list comprehension, and g should be a
> > list, right? But actually it is a generator. This code is equivalent to
> > the following code:
> >
> > def _make_list(it):
> >     result = []
> >     for i in it:
> >         result.append(yield i)
> >     return result
> > g = _make_list(iter(range(3)))
> >
> > Due to "yield" in the expression _make_list() is not a function
> > returning a list, but a generator function returning a generator.
> >
> > This change in semantic looks unintentional to me. It looks like leaking
> > an implementation detail.
>
> Perhaps we can deprecate the use of "yield" in comprehensions and make
> it a syntax error in a couple versions?
>
> I don't see a reason for writing such code rather than the more
> explicit variants.  It looks really obscure, regardless of the actual
> semantics.
>

People actually try this (probably simply because they like
comprehensions); see the two Stackoverflow questions mentioned above,
plus there are two b.p.o. issues. So this will be a breaking change.
Second, recent PEP 530 allowed writing similar comprehensions with
`await`:

    async def process(funcs):
        result = [await fun() for fun in funcs]  # OK
        ...

Moreover, it has semantics very similar to what Serhiy proposed for
`yield` (i.e. equivalent to a for-loop without the name leaking into
the outer scope). Taking into account that the actual fix is not so
hard, I don't think it makes sense to have all the hassles of a
deprecation period.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
>> >> I don't see a reason for writing such code rather than the more >> explicit variants. It looks really obscure, regardless of the actual >> semantics. > > > People actually try this (probably simply because they like comprehensions) > see two mentioned Stackoverflow questions, plus there are two b.p.o. issues. > So this will be a breaking change. Second, recent PEP 530 allowed writing a > similar comprehensions with `await`: > > async def process(funcs): > result = [await fun() for fun in funcs] # OK > ... > > Moreover, it has the semantics very similar to the proposed by Serhiy for > `yield` (i.e. equivalent to for-loop without name leaking into outer scope). > Taking into account that the actual fix is not so hard, I don't think it > makes sense to have all the hassles of deprecation period. I agree with Antoine. This (yield in a comprehension) seems far too tricky, and I'd rather it just be rejected. If I saw it in a code review, I'd certainly insist it be rewritten as an explicit loop. Paul From levkivskyi at gmail.com Wed Nov 22 09:09:40 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Wed, 22 Nov 2017 15:09:40 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: Message-ID: On 22 November 2017 at 14:53, Serhiy Storchaka wrote: > 22.11.17 15:25, Ivan Levkivskyi ????: > >> I think this is indeed a problem.. For me the biggest surprise was that >> `yield` inside a comprehension does not turn a surrounding function into >> comprehension, see also >> https://stackoverflow.com/questions/29334054/why-am-i-gettin >> g-different-results-when-using-a-list-comprehension-with-coroutin >> >> In fact there is a b.p.o. issue for this https://bugs.python.org/issue1 >> 0544, it is assigned to me since July, but I was focused on other things >> recently. >> My plan was to restore the Python 2 semantics while still avoiding the >> leak of comprehension variable to the enclosing scope (the initial reason >> of introducing auxiliary "_make_list" function IIUC). >> So that: >> >> 1) g = [(yield i) for i in range(3)] outside a function will be a >> SyntaxError (yield outside a function) >> 2) g = [(yield i) for i in range(3)] inside a function will turn that >> enclosing function into generator. >> 3) accessing i after g = [(yield i) for i in range(3)] will give a >> NameError: name 'i' is not defined >> >> If you have time to work on this, then I will be glad if you take care of >> this issue, you can re-assign it. >> > > I have the same plan. I know how implement this for comprehensions, but > the tricky question is what to do with generator expressions? Ideally > > result = [expr for i in iterable] > > and > > result = list(expr for i in iterable) > > should have the same semantic. I.e. if expr is "(yield i)", this should > turn the enclosing function into a generator function, and fill the list > with values passed to the generator's .send(). I have no idea how implement > this. > > Yes, generator expressions are also currently problematic with `await`: >>> async def g(i): ... print(i) ... >>> async def f(): ... result = list(await g(i) for i in range(3)) ... print(result) ... >>> f().send(None) Traceback (most recent call last): File "", line 1, in File "", line 2, in f TypeError: 'async_generator' object is not iterable Maybe Yury can say something about this? -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From levkivskyi at gmail.com  Wed Nov 22 09:15:49 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 15:15:49 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 15:09, Paul Moore  wrote:

> On 22 November 2017 at 13:53, Ivan Levkivskyi 
> wrote:
> > On 22 November 2017 at 14:38, Antoine Pitrou 
> wrote:
> >>
> >> On Wed, 22 Nov 2017 15:03:09 +0200
> >> Serhiy Storchaka  wrote:
> >> > From
> >> >
> >> > https://stackoverflow.com/questions/45190729/
> differences-between-generator-comprehension-expressions.
> >> >
> >> > g = [(yield i) for i in range(3)]
> >> >
> >> > Syntactically this looks like a list comprehension, and g should be a
> >> > list, right? But actually it is a generator. This code is equivalent
> to
> >> > the following code:
> >> >
> >> > def _make_list(it):
> >> >     result = []
> >> >     for i in it:
> >> >         result.append(yield i)
> >> >     return result
> >> > g = _make_list(iter(range(3)))
> >> >
> >> > Due to "yield" in the expression _make_list() is not a function
> >> > returning a list, but a generator function returning a generator.
> >> >
> >> > This change in semantic looks unintentional to me. It looks like
> leaking
> >> > an implementation detail.
> >>
> >> Perhaps we can deprecate the use of "yield" in comprehensions and make
> >> it a syntax error in a couple versions?
> >>
> >> I don't see a reason for writing such code rather than the more
> >> explicit variants.  It looks really obscure, regardless of the actual
> >> semantics.
> >
> >
> > People actually try this (probably simply because they like
> comprehensions)
> > see two mentioned Stackoverflow questions, plus there are two b.p.o.
> issues.
> > So this will be a breaking change. Second, recent PEP 530 allowed
> writing a
> > similar comprehensions with `await`:
> >
> > async def process(funcs):
> >     result = [await fun() for fun in funcs]  # OK
> >     ...
> >
> > Moreover, it has the semantics very similar to the proposed by Serhiy
> for
> > `yield` (i.e. equivalent to for-loop without name leaking into outer
> scope).
> > Taking into account that the actual fix is not so hard, I don't think
> it
> > makes sense to have all the hassles of deprecation period.
>
> I agree with Antoine. This (yield in a comprehension) seems far too
> tricky, and I'd rather it just be rejected. If I saw it in a code
> review, I'd certainly insist it be rewritten as an explicit loop.
>

There are many things that I would reject in code review, but they are
still allowed in Python; this is one of the reasons why code reviews
exist. Also I am not sure how `yield` in a comprehension is more tricky
than `await` in a comprehension. Anyway, this looks more like a question
of taste.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From solipsis at pitrou.net  Wed Nov 22 09:46:43 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 22 Nov 2017 15:46:43 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: <20171122154643.7193d991@fsol>

On Wed, 22 Nov 2017 15:15:49 +0100
Ivan Levkivskyi  wrote:
> There are many things that I would reject in code review, but they are
> still allowed in Python,
> this is one of the reasons why code reviews exist.
Also I am not sure how > `yield` in a comprehension > is more tricky than `await` in a comprehension. I am not sure either, but do note that "yield" and "await" are two different things with different semantics, so allowing "await" while disallowing "yield" wouldn't strike me as inconsistent. The exact semantics of "yield" inside a comprehension is a common source of surprise or bewilderment, and the only actual use I've seen of it is to show it off as a "clever trick". So instead of fixing (and perhaps complicating) the implementation to try and make it do the supposedly right thing, I am proposing to simply disallow it so that we are done with the controversy :-) Regards Antoine. From p.f.moore at gmail.com Wed Nov 22 09:47:06 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 22 Nov 2017 14:47:06 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> Message-ID: On 22 November 2017 at 14:15, Ivan Levkivskyi wrote: > On 22 November 2017 at 15:09, Paul Moore wrote: >> >> On 22 November 2017 at 13:53, Ivan Levkivskyi >> wrote: >> > On 22 November 2017 at 14:38, Antoine Pitrou >> > wrote: >> >> >> >> On Wed, 22 Nov 2017 15:03:09 +0200 >> >> Serhiy Storchaka wrote: >> >> > From >> >> > >> >> > >> >> > https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions. >> >> > >> >> > g = [(yield i) for i in range(3)] >> >> > >> >> > Syntactically this looks like a list comprehension, and g should be a >> >> > list, right? But actually it is a generator. This code is equivalent >> >> > to >> >> > the following code: >> >> > >> >> > def _make_list(it): >> >> > result = [] >> >> > for i in it: >> >> > result.append(yield i) >> >> > return result >> >> > g = _make_list(iter(range(3))) >> >> > >> >> > Due to "yield" in the expression _make_list() is not a function >> >> > returning a list, but a generator function returning a generator. >> >> > >> >> > This change in semantic looks unintentional to me. It looks like >> >> > leaking >> >> > an implementation detail. >> >> >> >> Perhaps we can deprecate the use of "yield" in comprehensions and make >> >> it a syntax error in a couple versions? >> >> >> >> I don't see a reason for writing such code rather than the more >> >> explicit variants. It looks really obscure, regardless of the actual >> >> semantics. >> > >> > >> > People actually try this (probably simply because they like >> > comprehensions) >> > see two mentioned Stackoverflow questions, plus there are two b.p.o. >> > issues. >> > So this will be a breaking change. Second, recent PEP 530 allowed >> > writing a >> > similar comprehensions with `await`: >> > >> > async def process(funcs): >> > result = [await fun() for fun in funcs] # OK >> > ... >> > >> > Moreover, it has the semantics very similar to the proposed by Serhiy >> > for >> > `yield` (i.e. equivalent to for-loop without name leaking into outer >> > scope). >> > Taking into account that the actual fix is not so hard, I don't think it >> > makes sense to have all the hassles of deprecation period. >> >> I agree with Antoine. This (yield in a comprehension) seems far too >> tricky, and I'd rather it just be rejected. If I saw it in a code >> review, I'd certainly insist it be rewritten as an explicit loop. >> > > There are many things that I would reject in code review, but they are still > allowed in Python, > this is one of the reasons why code reviews exist. 
Also I am not sure how > `yield` in a comprehension > is more tricky than `await` in a comprehension. Anyway, this looks more like > a taste question. I generally don't understand "await" in any context, so I deferred judgement on that :-) Based on your comment that they are equally tricky, I'd suggest we prohibit them both ;-) Less facetiously, comprehensions are defined in the language reference in terms of a source translation to nested loops. That description isn't 100% precise, but nevertheless, if yield/async in a comprehension doesn't behave like that, I'd consider it a bug. So current behaviour (for both yield and await) is a bug, and your proposed semantics for yield is correct. Await in a comprehension should work similarly - the docs for await expressions just say "Can only be used inside a coroutine function", so based on that they are legal within a comprehension inside a coroutine function (if the semantics in that case isn't obvious, it should be clarified - the await expression docs are terse to the point that I can't tell myself). So I'm +1 on your suggestion. I remain concerned that yield/await expressions are adding a lot of complexity to the language, that isn't explained in the manuals in a way that's accessible to non-specialists. Maybe "ban (or reject at code review) all the complex stuff" isn't a reasonable approach, but nor is expecting everyone encountering async code to have read and understood all the async PEPs and docs. We need to consider maintenance programmers as well (who are often looking at code with only a relatively high-level understanding). Paul From levkivskyi at gmail.com Wed Nov 22 10:00:05 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Wed, 22 Nov 2017 16:00:05 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <20171122154643.7193d991@fsol> References: <20171122143858.17e51546@fsol> <20171122154643.7193d991@fsol> Message-ID: On 22 November 2017 at 15:46, Antoine Pitrou wrote: > On Wed, 22 Nov 2017 15:15:49 +0100 > Ivan Levkivskyi wrote: > > There are many things that I would reject in code review, but they are > > still allowed in Python, > > this is one of the reasons why code reviews exist. Also I am not sure how > > `yield` in a comprehension > > is more tricky than `await` in a comprehension. > > I am not sure either, but do note that "yield" and "await" are two > different things with different semantics, so allowing "await" while > disallowing "yield" wouldn't strike me as inconsistent. > > The exact semantics of "yield" inside a comprehension is a common > source of surprise or bewilderment, and the only actual use I've seen > of it is to show it off as a "clever trick". So instead of fixing (and > perhaps complicating) the implementation to try and make it do the > supposedly right thing, I am proposing to simply disallow it so that > we are done with the controversy :-) > > Actually, I am not sure there is really a controversy, I have not yet met a person who _expects_ `yield` in a comprehension to work as it works now, instead everyone thinks it is just equivalent to a for-loop. Anyway, I have some compromise idea, will send it in a separate e-mail. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From levkivskyi at gmail.com  Wed Nov 22 10:10:11 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 16:10:11 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 15:47, Paul Moore  wrote:

> I generally don't understand "await" in any context, so I deferred
> judgement on that :-) Based on your comment that they are equally
> tricky, I'd suggest we prohibit them both ;-)
>
> Less facetiously, comprehensions are defined in the language reference
> in terms of a source translation to nested loops. That description
> isn't 100% precise, but nevertheless, if yield/async in a
> comprehension doesn't behave like that, I'd consider it a bug. So
> current behaviour (for both yield and await) is a bug, and your
> proposed semantics for yield is correct.
>

I think there may be a small misunderstanding here. The situation is
different for comprehensions and generator expressions; let me summarize
the current state:

- yield in comprehensions works "wrong" (shorthand for: not according to
the docs/naive expectations, i.e. not equivalent to a for loop)
- await in comprehensions works "right"
- yield in generator expressions works "wrong"
- await in generator expressions works "wrong"

After some thinking, both `yield` and `await` look quite mind-bending in
_generator expressions_, so maybe the right compromise strategy is:

- fix yield in comprehensions
- await in comprehensions already works
- make both `yield` and `await` a SyntaxError in generator expressions.

What do you think?

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Wed Nov 22 10:53:27 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 15:53:27 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 15:10, Ivan Levkivskyi  wrote:
> I think there may be a small misunderstanding here. The situation is
> different for comprehensions and generator expressions,
> let me summarize the current state:
[...]
> What do you think?

Can you point me to where in the docs it explains the semantic
difference between a generator expression and a list comprehension?
Ignoring the trivial point that they return different values - think
of [a for a in expr] vs list(a for a in expr) if you like - these
should be semantically the same, I believe?

TBH, the docs aren't super-precise in this area. Before we start
debating the "correct" behaviour of edge cases, I think we need to
agree (and document) the semantics more precisely.

Paul

From yselivanov.ml at gmail.com  Wed Nov 22 10:56:02 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 22 Nov 2017 10:56:02 -0500
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi  wrote:
> On 22 November 2017 at 15:47, Paul Moore  wrote:
>>
>> I generally don't understand "await" in any context, so I deferred
>> judgement on that :-) Based on your comment that they are equally
>> tricky, I'd suggest we prohibit them both ;-)
>>
>> Less facetiously, comprehensions are defined in the language reference
>> in terms of a source translation to nested loops.
That description
>> isn't 100% precise, but nevertheless, if yield/async in a
>> comprehension doesn't behave like that, I'd consider it a bug. So
>> current behaviour (for both yield and await) is a bug, and your
>> proposed semantics for yield is correct.
>
> I think there may be a small misunderstanding here. The situation is
> different for comprehensions and generator expressions,
> let me summarize the current state:
>
> - yield in comprehensions works "wrong" (a shorthand for not according to
> the docs/naive expectations, i.e. not equivalent to for loop)
> - await in comprehensions works "right"
> - yield in generator expressions works "wrong"
> - await in generator expressions works "wrong"

"await" in generator expressions works *correctly*. I explained it here:
https://bugs.python.org/issue32113

To clarify a bit more:

  r = (V for I in X)

is equivalent to:

  def _():
      for I in X:
          yield V
  r = _()

For a synchronous generator expression:

  r = (f(i) for i in range(3))

is really:

  def _():
      for i in range(3):
          yield f(i)
  r = _()

For an asynchronous generator expression:

  r = (await f(i) for i in range(3))

is equivalent to:

  def _():
      for i in range(3):
          yield (await f(i))
  r = _()

I'll update PEP 530 to clarify these transformations better.

> After some thinking, both `yield` and `await` look quite mind bending in
> _generator expressions_, so maybe the right compromise strategy is:
>
> - fix yield in comprehensions
> - await in comprehensions already works
> - make both `yield` and `await` a SyntaxError in generator expressions.

I'm all for prohibiting using 'yield' expression in generator
expressions or comprehensions.  The semantics is way too hard to
understand for it to be of any value.

Making 'await' a SyntaxError is absolutely not an option.  Async
generator expressions are a shorthand syntax for defining asynchronous
generators (PEP 525), and it's already being used in the wild.

Yury

From levkivskyi at gmail.com  Wed Nov 22 11:03:10 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 17:03:10 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 16:53, Paul Moore  wrote:

> On 22 November 2017 at 15:10, Ivan Levkivskyi 
> wrote:
> > I think there may be a small misunderstanding here. The situation is
> > different for comprehensions and generator expressions,
> > let me summarize the current state:
> [...]
> > What do you think?
>
> Can you point me to where in the docs it explains the semantic
> difference between a generaor expression and a list comprehension?
> Ignoring the trivial point that they return different values - think
> of [a for a in expr] vs list(a for a in expr) if you like - these
> should be semantically the same, I believe?
>
> TBH, the docs aren't super-precise in this area. Before we start
> debating the "correct" behaviour of edge cases, I think we need to
> agree (and document) the semantics more precisely.
>
> Paul
>

Looking at
https://docs.python.org/3/reference/expressions.html?highlight=generator%20expression#generator-expressions
it is indeed not super clear what it _should_ mean. I agree this requires
documentation updates.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From levkivskyi at gmail.com  Wed Nov 22 11:08:14 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 17:08:14 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 16:56, Yury Selivanov  wrote:
> On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi 
> wrote:
> > On 22 November 2017 at 15:47, Paul Moore  wrote:
> [...]
> I'm all for prohibiting using 'yield' expression in generator
> expressions or comprehensions. The semantics is way to hard to
> understand and hence be of any value.
>
> Making 'await' a SyntaxError is absolutely not an option. Async
> generator expressions are a shorthand syntax for defining asynchronous
> generators (PEP 525), and it's already being used in the wild.

OK, makes sense, so it looks like we may have the following plan:

- fix `yield` in comprehensions
- update PEP 530 and docs re generator expressions vs comprehensions
- make `yield` in generator expressions a SyntaxError

If everyone agrees, then I propose to open a separate issue on b.p.o. to
coordinate the efforts.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Wed Nov 22 11:11:05 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 16:11:05 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 15:56, Yury Selivanov  wrote:
> "await" in generator expressions works *correct*. I explained it here:
> https://bugs.python.org/issue32113
>
> To clarify a bit more:
>
> r = (V for I in X)
>
> is equivalent to:
>
> def _():
>     for I in X:
>         yield V
> r = _()

The docs don't actually say that this equivalence is definitive.
There's a lot of vagueness - possibly because the equivalence wasn't
precise in 2.X (due to name leakage) and it wasn't updated to say that
comprehensions are now defined *precisely* in terms of this
equivalence.

But surely this means that:

1. await isn't allowed in comprehensions/generator expressions,
because the dummy function (_ in your expansion above) is not a
coroutine function.
2. yield expressions in a comprehension/generator will yield extra
values into the generated list (or the stream returned from the
generator expression). That seems wrong to me, at least in terms of
how I'd expect such an expression to behave.

So I think there's a problem with treating the equivalence as the
definition - it's informative, but not normative.

> Making 'await' a SyntaxError is absolutely not an option. Async
> generator expressions are a shorthand syntax for defining asynchronous
> generators (PEP 525), and it's already being used in the wild.

But by the logic you just described, await isn't (or shouldn't be)
allowed, surely?

Paul

From p.f.moore at gmail.com  Wed Nov 22 11:16:42 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 16:16:42 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

On 22 November 2017 at 16:08, Ivan Levkivskyi  wrote:
> On 22 November 2017 at 16:56, Yury Selivanov 
> wrote:
>>
>> On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi 
>> wrote:
>> > On 22 November 2017 at 15:47, Paul Moore  wrote:
>> [...]
>> I'm all for prohibiting using 'yield' expression in generator >> expressions or comprehensions. The semantics is way to hard to >> understand and hence be of any value. >> >> Making 'await' a SyntaxError is absolutely not an option. Async >> generator expressions are a shorthand syntax for defining asynchronous >> generators (PEP 525), and it's already being used in the wild. > > > OK, makes sense, so it looks like we may have the following plan: > > - fix `yield` in comprehensions I'm still not clear what "fix" would actually mean, but you propose clarifying the docs below, so I assume it means "according to whatever the updated docs say"... > - update PEP 530 and docs re generator expressions vs comprehensions Docs more importantly than PEP IMO. And are you implying that there's a difference between generator expressions and comprehensions? I thought both were intended to behave as if expanded to a function containing nested for loops? Nothing said in this thread so far (about semantics, as opposed to about current behaviour) implies there's a deliberate difference. > - make `yield` in generator expressions a SyntaxError That contradicts the suggestion that generator expressions are equivalent to the expanded function definition we saw before. > If everyone agrees, then I propose to open a separate issue on b.p.o. to > coordinate the efforts. I agree it needs clarifying. Not sure I agree with your proposed semantics, but I guess that can wait for when we have a concrete doc change proposal. Paul From p.f.moore at gmail.com Wed Nov 22 11:19:29 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 22 Nov 2017 16:19:29 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> Message-ID: On 22 November 2017 at 15:56, Yury Selivanov wrote: > For synchronous generator expression: > > r = (f(i) for i in range(3)) > > is really: > > def _(): > for i in range(3): > yield f(i) > r = _() > > For an asynchronous generator expression: > > r = (await f(i) for i in range(3)) > > is equivalent to: > > def _(): > for i in range(3): > yield (await f(i)) > r = _() Wait, I missed this on first reading. The note in the docs for generator expressions defining asynchronous generator expressions is *incredibly* easy to miss, and doesn't say anything about the semantics (the expansion you quote above) being different for the two cases. This definitely needs clarifying in the docs. Paul From solipsis at pitrou.net Wed Nov 22 11:24:04 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 22 Nov 2017 17:24:04 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> Message-ID: <20171122172404.75b05d56@fsol> On Wed, 22 Nov 2017 17:08:14 +0100 Ivan Levkivskyi wrote: > On 22 November 2017 at 16:56, Yury Selivanov > wrote: > > > On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi > > wrote: > > > On 22 November 2017 at 15:47, Paul Moore wrote: > > [...] > > I'm all for prohibiting using 'yield' expression in generator > > expressions or comprehensions. The semantics is way to hard to > > understand and hence be of any value. > > > > Making 'await' a SyntaxError is absolutely not an option. Async > > generator expressions are a shorthand syntax for defining asynchronous > > generators (PEP 525), and it's already being used in the wild. 
> > > > OK, makes sense, so it looks like we may have the following plan: > > - fix `yield` in comprehensions > - update PEP 530 and docs re generator expressions vs comprehensions > - make `yield` in generator expressions a SyntaxError Given a comprehension (e.g. list comprehension) is expected to work nominally as `constructor(generator expression)` (e.g. `list(generator expression)`), I think both generator expressions *and* comprehensions should raise a SyntaxError. (more exactly, they should first emit a SyntaxWarning and then raise a SyntaxError in a couple of versions) Regards Antoine. From levkivskyi at gmail.com Wed Nov 22 11:30:27 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Wed, 22 Nov 2017 17:30:27 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <20171122172404.75b05d56@fsol> References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> Message-ID: On 22 November 2017 at 17:24, Antoine Pitrou wrote: > On Wed, 22 Nov 2017 17:08:14 +0100 > Ivan Levkivskyi wrote: > > On 22 November 2017 at 16:56, Yury Selivanov > > wrote: > > > > > On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi < > levkivskyi at gmail.com> > > > wrote: > > > > On 22 November 2017 at 15:47, Paul Moore > wrote: > > > [...] > > > I'm all for prohibiting using 'yield' expression in generator > > > expressions or comprehensions. The semantics is way to hard to > > > understand and hence be of any value. > > > > > > Making 'await' a SyntaxError is absolutely not an option. Async > > > generator expressions are a shorthand syntax for defining asynchronous > > > generators (PEP 525), and it's already being used in the wild. > > > > > > > OK, makes sense, so it looks like we may have the following plan: > > > > - fix `yield` in comprehensions > > - update PEP 530 and docs re generator expressions vs comprehensions > > - make `yield` in generator expressions a SyntaxError > > Given a comprehension (e.g. list comprehension) is expected to work > nominally as `constructor(generator expression)` > As Yury just explained, these two are not equivalent if there is an `await` in the comprehension/generator expression. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Wed Nov 22 11:38:11 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Wed, 22 Nov 2017 17:38:11 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> Message-ID: On 22 November 2017 at 17:16, Paul Moore wrote: > On 22 November 2017 at 16:08, Ivan Levkivskyi > wrote: > > On 22 November 2017 at 16:56, Yury Selivanov > > wrote: > >> > >> On Wed, Nov 22, 2017 at 10:10 AM, Ivan Levkivskyi > > >> wrote: > >> > On 22 November 2017 at 15:47, Paul Moore wrote: > >> [...] > >> I'm all for prohibiting using 'yield' expression in generator > >> expressions or comprehensions. The semantics is way to hard to > >> understand and hence be of any value. > >> > >> Making 'await' a SyntaxError is absolutely not an option. Async > >> generator expressions are a shorthand syntax for defining asynchronous > >> generators (PEP 525), and it's already being used in the wild. 
> >
> > OK, makes sense, so it looks like we may have the following plan:
> >
> > - fix `yield` in comprehensions
>
> I'm still not clear what "fix" would actually mean, but you propose
> clarifying the docs below, so I assume it means "according to whatever
> the updated docs say"...
>

I mean the initial proposal: make comprehensions equivalent to a for-loop

> > - update PEP 530 and docs re generator expressions vs comprehensions
>
> Docs more importantly than PEP IMO. And are you implying that there's
> a difference between generator expressions and comprehensions? I
> thought both were intended to behave as if expanded to a function
> containing nested for loops? Nothing said in this thread so far (about
> semantics, as opposed to about current behaviour) implies there's a
> deliberate difference.
>

I think there may be a difference:

comprehension `g = [(yield i) for i in range(3)]` is defined as this code:

    __result = []
    __i = None
    try:
        for __i in range(3):
            __result.append((yield __i))
        g = __result
    finally:
        del __result, __i

while `g = list((yield i) for i in range(3))` is defined as this code:

    def __gen():
        for i in range(3):
            yield (yield i)
    g = list(__gen())

Although these two definitions are equivalent in simple cases (like
having `f(i)` instead of `yield i`), this is debatable. I think before
we move to other points, we need to agree on clear definitions of the
semantics of generator expressions and comprehensions.

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com  Wed Nov 22 11:43:28 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 16:43:28 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
 <20171122172404.75b05d56@fsol>
Message-ID: 

On 22 November 2017 at 16:30, Ivan Levkivskyi  wrote:
> On 22 November 2017 at 17:24, Antoine Pitrou  wrote:
>> Given a comprehension (e.g. list comprehension) is expected to work
>> nominally as `constructor(generator expression)`
>
> As Yury just explained, these two are not equivalent if there is an `await`
> in the comprehension/generator expression.

As Antoine said, people *expect* them to work the same. If they don't,
then that's a subtle change that came in as a result of the new async
functionality. It's a shame that this wasn't made clearer at the time
- one of the major issues I see with async is that it works "nearly,
but not quite" the way people expect, and we need to do more to help
people integrate async into their intuition. Just reiterating "that's
not right" isn't much help - we need to educate people in better
intuitions if the traditional ones are no longer accurate.

At the moment, I know I tend to treat Python semantics as I always
did, but with an implicit proviso, "unless async is involved, when I
can't assume any of my intuitions apply". That's *not* a good
situation to be in.

Paul

From levkivskyi at gmail.com  Wed Nov 22 11:43:35 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 17:43:35 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <20171122143858.17e51546@fsol>
Message-ID: 

Sorry, forgot some details in the second "definition":

    try:
        def __gen():
            for i in range(3):
                yield (yield i)
        g = list(__gen())
    finally:
        del __gen

--
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
From levkivskyi at gmail.com Wed Nov 22 11:47:38 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 17:47:38 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 17:43, Paul Moore wrote:
> On 22 November 2017 at 16:30, Ivan Levkivskyi wrote:
> > As Yury just explained, these two are not equivalent if there is an
> > `await` in the comprehension/generator expression.
>
> As Antoine said, people *expect* them to work the same.

The difference is that a generator expression can be used independently:
one can assign it to a variable, etc.; it is not necessary to wrap it in a
list().

Anyway, can you propose equivalent "defining" code for both? Otherwise it
is not clear what you are defending.

--
Ivan

From p.f.moore at gmail.com Wed Nov 22 11:50:40 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 16:50:40 +0000
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 16:38, Ivan Levkivskyi wrote:
> I think there may be a difference:
>
> the comprehension `g = [(yield i) for i in range(3)]` is defined as this code:
>
>     __result = []
>     __i = None
>     try:
>         for __i in range(3):
>             __result.append((yield __i))
>         g = __result
>     finally:
>         del __result, __i

Not in the docs, it isn't... The docs explicitly state that a new scope
is involved. People may *intuitively understand it* like this, but it's
not the definition. If it were the definition, I'd be fine, as people can
reason about it and know that the conclusions they come to (no matter how
unintuitive) are accurate. But if it's how to *understand* what a list
comprehension does, then different rules apply - corner cases may work
differently, but conversely the cases that work differently have to
clearly be corner cases, otherwise it's no use as a mental model.

> while `g = list((yield i) for i in range(3))` is defined as this code:
>
>     def __gen():
>         for i in range(3):
>             yield (yield i)
>     g = list(__gen())

Again, not in the docs.

> Although these two definitions are equivalent in simple cases (like
> having `f(i)` instead of `yield i`)

And if they don't imply that list(...) is the same as [...] then that
will come as a surprise to many people.

> But this is debatable, I think before we move to other points we need
> to agree on clear definitions of the semantics of generator expressions
> and comprehensions.

Absolutely. And I'd like those semantics to be expressed in a way that
doesn't need "except when await/async is involved" exceptions.
Paul

From yselivanov.ml at gmail.com Wed Nov 22 11:58:08 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 22 Nov 2017 11:58:08 -0500
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 11:11 AM, Paul Moore wrote:
[..]
> But by the logic you just described, await isn't (or shouldn't be)
> allowed, surely?

No, that's a stretch. People can understand and actually use
'yield (await foo())', but they usually can't figure out what
'yield (yield foo())' actually does:

    def foo():
        yield (yield 1)

Who on this mailing list can meaningfully use the 'foo()' generator?

    async def bar():
        yield (await foo())

'bar()' is perfectly usable in an 'async for' statement, and it's easy to
understand what it does if you spend an hour writing async/await code.

Yury
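To make the first case concrete, this is roughly how the 'foo()' generator
above behaves when driven by hand (an illustrative session):

    >>> gen = foo()
    >>> next(gen)       # runs up to the inner 'yield 1'
    1
    >>> gen.send('a')   # inner yield evaluates to 'a'; the outer yield yields it back
    'a'
    >>> gen.send('b')   # outer yield evaluates to 'b' and the generator finishes
    Traceback (most recent call last):
      ...
    StopIteration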
From levkivskyi at gmail.com Wed Nov 22 11:58:25 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 17:58:25 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 17:50, Paul Moore wrote:
> Not in the docs, it isn't...

Yes, since there is almost nothing there, this is what I _propose_ (or
actually Serhiy proposed it first).

> The docs explicitly state that a new scope is involved.

But the docs don't say it is a _function_ scope. The meaning of that
statement (as I understand it) is just that the loop variable doesn't
leak from the comprehension.

--
Ivan

From p.f.moore at gmail.com Wed Nov 22 12:15:01 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 22 Nov 2017 17:15:01 +0000
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 16:47, Ivan Levkivskyi wrote:
> The difference is that a generator expression can be used independently:
> one can assign it to a variable, etc.; it is not necessary to wrap it in
> a list().
> Anyway, can you propose equivalent "defining" code for both? Otherwise
> it is not clear what you are defending.

I can't propose a precise description of the semantics in the form
"(x for x in expr) is precisely equivalent to <...>" (for generator
expressions or list/dict/set comprehensions). I'm not even sure that
there *is* a definition in those terms - I don't see it as a given that
generator expressions are simply syntactic shorthand. It's not necessary
that such an expansion exists - there's no such expansion for most
expressions in Python. The equivalences I would suggest, you've already
stated in this thread to be inaccurate, so it's not clear to me what
options you've left open for me.

What I can say is that the semantics of generator expressions and
comprehensions should match the understanding that has been documented
and taught for many years now. Or at least, if the semantics change, then
the documentation, "what's new in Python x.y", and tutorials should be
updated to match.

In my view, the following understandings have been true since
comprehensions and generators were introduced, and should not be changed
without careful management of the change. (I accept that things *have*
changed, so we have a retroactive exercise we need to do to mitigate any
damage to people's understanding from the changes, but I'm trying to
start from what I believe is the baseline understanding that people have
had since pre-async, and don't yet understand how to modify.)

1. List comprehensions expand into nested for/if statements in the
   "obvious" way - with an empty list created to start and append used
   to add items to it.
1a. Variables introduced in the comprehension don't "leak" (see below).
2. Generator expressions expand into generators with the same "nested
   loop" behaviour, and a yield of the generated value.
3. List comprehensions are the same as list(the equivalent generator
   expression).

With regard to 1a, note that Python 2 *did* leak names. The change to
stop those leaks is arguably a minor, low-level detail (much like the
sort of things coming up in regard to yield expressions and async). But
it was considered significant enough to go through a long and careful
debate, and a non-trivial amount of publicity ensuring users were aware
of, and understood, the change. I see that as a good example of how the
semantics *can* change, but we should ensure we manage users'
understanding.

I'm not sure we're going to reach agreement on this. Let it stand that I
believe the above 3 points are key to how people understand
comprehensions/generator expressions, and I find the subtle discrepancies
that have been introduced by async difficult to understand and counter to
my intuition. I don't believe I'm alone in this. I can't tell you how to
explain this clearly to me - by definition, I don't know a good
explanation. I take it as self-evident that having a big chunk of Python
semantics that leaves people thinking "I'll avoid this because it's
confusing" is bad - maybe you (or others) see it as simply "advanced
concepts" that I (and people like me) can ignore, but while I usually do
just that, threads like this keep cropping up for which that approach
doesn't work...
Paul

From ethan at stoneleaf.us Wed Nov 22 12:32:09 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 22 Nov 2017 09:32:09 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15B499.1030103@stoneleaf.us>

On 11/22/2017 09:15 AM, Paul Moore wrote:

> 1. List comprehensions expand into nested for/if statements in the
>    "obvious" way - with an empty list created to start and append used
>    to add items to it.
> 1a. Variables introduced in the comprehension don't "leak" (see below).
> 2. Generator expressions expand into generators with the same "nested
>    loop" behaviour, and a yield of the generated value.
> 3. List comprehensions are the same as list(the equivalent generator
>    expression).

For what it's worth, that is how I understand it.

--
~Ethan~

From levkivskyi at gmail.com Wed Nov 22 12:36:20 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 18:36:20 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 18:15, Paul Moore wrote:
[...snip...]
> 1. List comprehensions expand into nested for/if statements in the
>    "obvious" way - with an empty list created to start and append used
>    to add items to it.
> 1a. Variables introduced in the comprehension don't "leak" (see below).
> 2. Generator expressions expand into generators with the same "nested
>    loop" behaviour, and a yield of the generated value.
> 3. List comprehensions are the same as list(the equivalent generator
>    expression).

Great, I agree with all three rules. But there is a problem: it is hard
to make these three rules consistent in some corner cases even _without
async_. For example, with the original problematic example, it is not
clear to me how to apply rule 2 so that it is consistent with rule 3:

    def fun_comp():
        return [(yield i) for i in range(3)]

    def fun_gen():
        return list((yield i) for i in range(3))

I think the solution may be to formulate the rules in terms of the
iterator protocol (__iter__ and __next__). I will try to think more about
this.

--
Ivan
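For reference, current CPython (3.6) treats these two functions quite
differently, because in both cases the yield belongs to the implicit
comprehension/generator-expression scope; an illustrative session:

    >>> fun_comp()   # the "list" comprehension evaluates to a generator
    <generator object fun_comp.<locals>.<listcomp> at 0x...>
    >>> fun_gen()    # list() drives the inner generator, sending None back
    [0, None, 1, None, 2, None]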
From ethan at stoneleaf.us Wed Nov 22 12:37:55 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 22 Nov 2017 09:37:55 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15B5F3.2080009@stoneleaf.us>

On 11/22/2017 05:03 AM, Serhiy Storchaka wrote:

> From https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions.
>
>     g = [(yield i) for i in range(3)]
>
> Syntactically this looks like a list comprehension, and g should be a
> list, right? But actually it is a generator. This code is equivalent to
> the following code:
>
>     def _make_list(it):
>         result = []
>         for i in it:
>             result.append((yield i))
>         return result
>     g = _make_list(iter(range(3)))
>
> Due to the "yield" in the expression, _make_list() is not a function
> returning a list, but a generator function returning a generator.

The [] syntax says g should be a list. Seems to me we could do either of:

1) raise if the returned object is not a list;
2) wrap a returned object in a list if it isn't one already.

In other words, (2) would make

    g = [(yield i) for i in range(3)]

and

    g = [((yield i) for i in range(3))]

be the same. I have no idea how either of those solutions would interact
with async/await.

--
~Ethan~

From guido at python.org Wed Nov 22 12:58:32 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 22 Nov 2017 09:58:32 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

Wow, 44 messages in 4 hours. That must be some kind of record.

If/when there's an action item, can someone summarize for me?

--
--Guido van Rossum (python.org/~guido)

From storchaka at gmail.com Wed Nov 22 13:38:52 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 22 Nov 2017 20:38:52 +0200
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22.11.17 17:10, Ivan Levkivskyi wrote:
> - fix yield in comprehensions

I'm working on this. I was going to inline the generating function a
long time ago for a small performance benefit, but the benefit looked
too small to justify complicating the compiler code. Now I have another
reason for doing this.

> - await in comprehensions already works

I'm surprised by this. But I don't understand await. If this works,
maybe it is possible to make `yield` work in generator expressions too.

> - make both `yield` and `await` a SyntaxError in generator expressions.

This looks like a temporary solution to me. Ideally we should make this
work too. But that will be harder.

From jelle.zijlstra at gmail.com Wed Nov 22 13:54:01 2017
From: jelle.zijlstra at gmail.com (Jelle Zijlstra)
Date: Wed, 22 Nov 2017 10:54:01 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

2017-11-22 9:58 GMT-08:00 Guido van Rossum:
> Wow, 44 messages in 4 hours. That must be some kind of record.
>
> If/when there's an action item, can someone summarize for me?

The main disagreement seems to be about what this code should do:

    g = [(yield i) for i in range(3)]

Currently, this makes `g` into a generator, not a list. Everybody seems
to agree this is nonintuitive and should be changed.
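Concretely, at a CPython 3.6 prompt the example compiles even at module
level (the yield lives inside the comprehension's implicit scope) and gives
(illustrative):

    >>> g = [(yield i) for i in range(3)]
    >>> g
    <generator object <listcomp> at 0x...>
    >>> next(g)
    0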
One proposal is to make it so `g` gets assigned a list, and the `yield`
happens in the enclosing scope (so the enclosing function would have to
be a generator). This was the way things worked in Python 2, I believe.

Another proposal is to make this code a syntax error, because it's
confusing either way. (For what it's worth, that would be my preference.)

There is related discussion about the semantics of list comprehensions
versus calling list() on a generator expression, and of async semantics,
but I don't think there's any clear point of action there.

From levkivskyi at gmail.com Wed Nov 22 14:01:52 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 20:01:52 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 18:15, Paul Moore wrote:
[...]
> 1. List comprehensions expand into nested for/if statements in the
>    "obvious" way - with an empty list created to start and append used
>    to add items to it.
> 1a. Variables introduced in the comprehension don't "leak" (see below).
> 2. Generator expressions expand into generators with the same "nested
>    loop" behaviour, and a yield of the generated value.
> 3. List comprehensions are the same as list(the equivalent generator
>    expression).

Paul, OK, I have thought about how to formulate these rules more
"precisely" so that they will all be consistent even if there is a
`yield` inside.

The key idea is that neither comprehensions nor generator expressions
should create a function scope surrounding the `expr` in

    (expr for ind in iterable)

and

    [expr for ind in iterable]

(This can still be some hidden implementation detail.)

So, as Serhiy proposed in one of his first posts, any comprehensions and
generator expressions with `yield` are not valid outside functions.
If such a comprehension or generator expression is inside a function,
then it makes it a generator, and all the `yield`s are yielded from that
generator. For example,

    def fun_gen():
        return list((yield i) for i in range(3))

should work as follows:

    g = fun_gen()

    g.send(None)
    0
    g.send(42)
    1
    g.send(43)
    2
    try:
        g.send(44)
    except StopIteration as e:
        assert e.value == [42, 43, 44]

And exactly the same with

    def fun_comp():
        return [(yield i) for i in range(3)]

I hope with this we can close the non-async part of the problem.
Currently this is not how it works, and it should be fixed. Do you agree?
The async part can then be considered separately.

--
Ivan

From guido at python.org Wed Nov 22 14:05:07 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 22 Nov 2017 11:05:07 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 10:54 AM, Jelle Zijlstra wrote:
> The main disagreement seems to be about what this code should do:
>
>     g = [(yield i) for i in range(3)]
>
> Currently, this makes `g` into a generator, not a list. Everybody seems
> to agree this is nonintuitive and should be changed.
>
> One proposal is to make it so `g` gets assigned a list, and the `yield`
> happens in the enclosing scope (so the enclosing function would have to
> be a generator). This was the way things worked in Python 2, I believe.
>
> Another proposal is to make this code a syntax error, because it's
> confusing either way. (For what it's worth, that would be my
> preference.)

Hm, yes, I don't think we should try to preserve the accidental meaning
it had in either Py2 or Py3. Let's make it illegal. (OTOH, await in the
same position must keep working since it's not broken and not unintuitive
either.) Possibly we should make it a (hard) warning in 3.7 and break in
3.8.

> There is related discussion about the semantics of list comprehensions
> versus calling list() on a generator expression, and of async
> semantics, but I don't think there's any clear point of action there.

OK.

--
--Guido van Rossum (python.org/~guido)

From levkivskyi at gmail.com Wed Nov 22 14:08:40 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 20:08:40 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 19:54, Jelle Zijlstra wrote:
> One proposal is to make it so `g` gets assigned a list, and the `yield`
> happens in the enclosing scope (so the enclosing function would have to
> be a generator). This was the way things worked in Python 2, I believe.
>
> Another proposal is to make this code a syntax error, because it's
> confusing either way. (For what it's worth, that would be my
> preference.)

Concerning these two options, it looks like me and Serhiy like the first
one, Paul is undecided (), and Antoine is in favor of option 2.

--
Ivan
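For reference, option 1 matches what Python 2.7 did, where the
comprehension shared the enclosing function's scope; a rough, untested
sketch of that old behaviour:

    # Python 2.7
    def f():
        g = [(yield i) for i in range(3)]  # the yields belong to f itself
        yield g

    gen = f()
    gen.next()       # -> 0
    gen.send('a')    # -> 1
    gen.send('b')    # -> 2
    gen.send('c')    # -> ['a', 'b', 'c']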
From guido at python.org Wed Nov 22 14:15:34 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 22 Nov 2017 11:15:34 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 11:08 AM, Ivan Levkivskyi wrote:
> On 22 November 2017 at 19:54, Jelle Zijlstra wrote:
> > One proposal is to make it so `g` gets assigned a list, and the
> > `yield` happens in the enclosing scope ... This was the way things
> > worked in Python 2, I believe.
> >
> > Another proposal is to make this code a syntax error, because it's
> > confusing either way. (For what it's worth, that would be my
> > preference.)
>
> Concerning these two options, it looks like me and Serhiy like the
> first one, Paul is undecided (), and Antoine is in favor of option 2.

While that may be the right thing to do, it's a silent change in
semantics, which I find pretty disturbing -- how would people debug such
a failure? That's why I think we should deprecate or hard break it for at
least one release.

--
--Guido van Rossum (python.org/~guido)

From yselivanov.ml at gmail.com Wed Nov 22 14:19:33 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 22 Nov 2017 14:19:33 -0500
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 2:08 PM, Ivan Levkivskyi wrote:
> Concerning these two options, it looks like me and Serhiy like the
> first one, Paul is undecided (), and Antoine is in favor of option 2.

FWIW I'm in favour of deprecating 'yield' expressions in comprehensions
and generator expressions in 3.7 and removing them in 3.8. async/await
support should stay as is.

Yury

From levkivskyi at gmail.com Wed Nov 22 14:20:36 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 20:20:36 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 20:15, Guido van Rossum wrote:
> On Wed, Nov 22, 2017 at 11:08 AM, Ivan Levkivskyi wrote:
> > Concerning these two options, it looks like me and Serhiy like the
> > first one, Paul is undecided (), and Antoine is in favor of option 2.
>
> While that may be the right thing to do, it's a silent change in
> semantics, which I find pretty disturbing -- how would people debug
> such a failure?

Some may call this just fixing a bug (at least in the two mentioned
Stack Overflow questions and in two b.p.o. issues the current behaviour
is considered a bug). But anyway, it is not me who decides :-)

--
Ivan

From levkivskyi at gmail.com Wed Nov 22 14:12:29 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 20:12:29 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 20:05, Guido van Rossum wrote:
> (OTOH, await in the same position must keep working since it's not
> broken and not unintuitive either.)

This is very questionable IMO. So do you think that [await x for y in z]
and list(await x for y in z) being not equivalent is intuitive?

--
Ivan

From guido at python.org Wed Nov 22 14:33:30 2017
From: guido at python.org (Guido van Rossum)
Date: Wed, 22 Nov 2017 11:33:30 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 11:12 AM, Ivan Levkivskyi wrote:
> This is very questionable IMO. So do you think that [await x for y in z]
> and list(await x for y in z) being not equivalent is intuitive?

I see, that's why this is such a long thread. :-(

But are they different? I can't find an example where they don't give the
same outcome.

--
--Guido van Rossum (python.org/~guido)

From levkivskyi at gmail.com Wed Nov 22 14:37:37 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 20:37:37 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 20:33, Guido van Rossum wrote:
> But are they different? I can't find an example where they don't give
> the same outcome.

I think this is a minimal example: https://bugs.python.org/issue32113
Also, Yury explains there why [await x for y in z] is different from
list(await x for y in z). Although I understand why it works this way,
TBH it is not very intuitive.

--
Ivan
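The gist of that issue fits in a few lines; a minimal sketch for Python
3.6 (the coroutine names here are made up for illustration):

    import asyncio

    async def one():
        return 1

    async def main():
        a = [await one() for _ in range(3)]      # works: a == [1, 1, 1]
        b = list(await one() for _ in range(3))  # TypeError: the genexp
                                                 # with await is an async
                                                 # generator, which list()
                                                 # cannot consume
        return a, b

    asyncio.get_event_loop().run_until_complete(main())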
From yselivanov.ml at gmail.com Wed Nov 22 15:07:45 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 22 Nov 2017 15:07:45 -0500
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 2:37 PM, Ivan Levkivskyi wrote:
> So do you think that [await x for y in z] and list(await x for y in z)

Comprehensions are declarative, and that's why [] and {} work with
async/await. When you're using parens () you *explicitly* tell the Python
compiler that you want a generator expression.

And the distinction between comprehensions and generator expressions
also exists for synchronous code:

    x = [a for a in range(10)]
    x[0]

and

    x = (a for a in range(10))
    x[0]  # TypeError

Is the above "intuitive" for all Python users? Probably not. Write it
once, get your TypeError, read the error message, and you understand
what's going on here.

Is the difference between "[await x for y in z]" and "list(await x for y
in z)" intuitive for all Python users? Again, probably not. But for those
who write async code it is.

I also don't recall seeing a lot of the `list(x for x in ...)` pattern.
Usually people just use list or dict comprehensions directly (and they
are faster, btw).

Yury

From mertz at gnosis.cx Wed Nov 22 15:29:42 2017
From: mertz at gnosis.cx (David Mertz)
Date: Wed, 22 Nov 2017 12:29:42 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

Inasmuch as I get to opine, I'm +1 on SyntaxError. There is no behavior
for that spelling that I would find intuitive or easy to explain to
students. And as far as I can tell, the ONLY time anything has ever been
spelled that way is in comments saying "look at this weird edge case
behavior in Python."

On Nov 22, 2017 10:57 AM, "Jelle Zijlstra" wrote:

> The main disagreement seems to be about what this code should do:
>
>     g = [(yield i) for i in range(3)]
>
> Currently, this makes `g` into a generator, not a list. Everybody seems
> to agree this is nonintuitive and should be changed.
>
> One proposal is to make it so `g` gets assigned a list, and the `yield`
> happens in the enclosing scope (so the enclosing function would have to
> be a generator). This was the way things worked in Python 2, I believe.
>
> Another proposal is to make this code a syntax error, because it's
> confusing either way. (For what it's worth, that would be my
> preference.)
> There is related discussion about the semantics of list comprehensions
> versus calling list() on a generator expression, and of async
> semantics, but I don't think there's any clear point of action there.

From srkunze at mail.de Wed Nov 22 15:48:09 2017
From: srkunze at mail.de (Sven R. Kunze)
Date: Wed, 22 Nov 2017 21:48:09 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

Isn't yield like a return? A return in a list/dict/set comprehension
makes no sense to me.

So, +1 on SyntaxError from me too.

Cheers.

On 22.11.2017 21:29, David Mertz wrote:
> Inasmuch as I get to opine, I'm +1 on SyntaxError. There is no behavior
> for that spelling that I would find intuitive or easy to explain to
> students. And as far as I can tell, the ONLY time anything has ever
> been spelled that way is in comments saying "look at this weird edge
> case behavior in Python."
>
> On Nov 22, 2017 10:57 AM, "Jelle Zijlstra" wrote:
>
> > The main disagreement seems to be about what this code should do:
> >
> >     g = [(yield i) for i in range(3)]
> >
> > Currently, this makes `g` into a generator, not a list. Everybody
> > seems to agree this is nonintuitive and should be changed.
> >
> > One proposal is to make it so `g` gets assigned a list, and the
> > `yield` happens in the enclosing scope (so the enclosing function
> > would have to be a generator). This was the way things worked in
> > Python 2, I believe.
> >
> > Another proposal is to make this code a syntax error, because it's
> > confusing either way. (For what it's worth, that would be my
> > preference.)
> >
> > There is related discussion about the semantics of list
> > comprehensions versus calling list() on a generator expression, and
> > of async semantics, but I don't think there's any clear point of
> > action there.
From ethan at stoneleaf.us Wed Nov 22 16:18:10 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 22 Nov 2017 13:18:10 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15E992.3080500@stoneleaf.us>

On 11/22/2017 11:01 AM, Ivan Levkivskyi wrote:

> I have thought about how to formulate these rules more "precisely" so
> that they will all be consistent even if there is a `yield` inside.
> The key idea is that neither comprehensions nor generator expressions
> should create a function scope surrounding the `expr` in
> (expr for ind in iterable) and [expr for ind in iterable].
> (This can still be some hidden implementation detail.)
>
> So, as Serhiy proposed in one of his first posts, any comprehensions
> and generator expressions with `yield` are not valid outside functions.
> If such a comprehension or generator expression is inside a function,
> then it makes it a generator, and all the `yield`s are yielded from
> that generator, for example:

Whether it's inside or outside a function should be irrelevant -- a
comprehension / generator expression should have no influence on the type
of the resulting function (and at least synchronous comprehensions /
generator expressions should be able to live outside of a function).

>     def fun_gen():
>         return list((yield i) for i in range(3))

The return says it's returning a list, so that's what it should be
returning.

> should work as follows:
>
>     g = fun_gen()
>
>     g.send(None)
>     0
>     g.send(42)
>     1
>     g.send(43)
>     2
>     try:
>         g.send(44)
>     except StopIteration as e:
>         assert e.value == [42, 43, 44]
>
> And exactly the same with
>
>     def fun_comp():
>         return [(yield i) for i in range(3)]
>
> I hope with this we can close the non-async part of the problem.
> Currently this is not how it works, and it should be fixed. Do you
> agree?

No.

NB: If we go with making yield inside a comprehension / generator
expression a SyntaxError then this subthread can die.

--
~Ethan~

From levkivskyi at gmail.com Wed Nov 22 16:25:55 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 22:25:55 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To: <5A15E992.3080500@stoneleaf.us>

On 22 November 2017 at 22:19, Ethan Furman wrote:

> Whether it's inside or outside a function should be irrelevant -- a
> comprehension / generator expression should have no influence on the
> type of the resulting function (and at least synchronous comprehensions
> / generator expressions should be able to live outside of a function).
>
> > def fun_gen():
> >     return list((yield i) for i in range(3))
>
> The return says it's returning a list, so that's what it should be
> returning.

    def f():
        yield 1
        return list()

Here return also says it should return a list, so this is not an
argument.

--
Ivan

From ethan at stoneleaf.us Wed Nov 22 16:32:38 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 22 Nov 2017 13:32:38 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15ECF6.5040905@stoneleaf.us>

On 11/22/2017 01:25 PM, Ivan Levkivskyi wrote:

> On 22 November 2017 at 22:19, Ethan Furman wrote:
> > The return says it's returning a list, so that's what it should be
> > returning.
>
>     def f():
>         yield 1
>         return list()
>
> Here return also says it should return a list, so this is not an
> argument.

Right, the argument is that calling the `list` constructor should return
a list -- not a database proxy, not a web page, and not a generator.

--
~Ethan~

From levkivskyi at gmail.com Wed Nov 22 16:40:20 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 22 Nov 2017 22:40:20 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To: <5A15ECF6.5040905@stoneleaf.us>

On 22 November 2017 at 22:33, Ethan Furman wrote:

> On 11/22/2017 01:25 PM, Ivan Levkivskyi wrote:
> > Here return also says it should return a list, so this is not an
> > argument.
> Right, the argument is that calling the `list` constructor should
> return a list -- not a database proxy, not a web page, and not a
> generator.

Then you didn't read my example carefully, since the whole point is that
the list constructor there returns a list.

From greg.ewing at canterbury.ac.nz Wed Nov 22 17:14:41 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 23 Nov 2017 11:14:41 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15F6D1.3040207@canterbury.ac.nz>

Ivan Levkivskyi wrote:

> while `g = list((yield i) for i in range(3))` is defined as this code:
>
>     def __gen():
>         for i in range(3):
>             yield (yield i)
>     g = list(__gen())

Since this is almost certainly not what was intended, I think that
'yield' inside a generator expression should simply be disallowed.

The problem with making it work intuitively is that there's no way to
distinguish between the implicit yields producing values for the
generator comprehension and the explicit ones that the programmer is
expecting to be passed through and yielded from the enclosing generator
function. Fixing that would require adding another whole layer of
complexity similar to what Yury did to support async generators, and I
can't see that being worth the effort.

--
Greg

From thomas.mansencal at gmail.com Wed Nov 22 17:18:15 2017
From: thomas.mansencal at gmail.com (Thomas Mansencal)
Date: Wed, 22 Nov 2017 22:18:15 +0000
Subject: [Python-Dev] Script bootstrapping executables on Windows

Hi,

I hope that what follows will be useful for other people: after stepping
through code for a few hours this morning, I ended up finding the
location of the bootstrapping executable. It (they) actually ships with
setuptools, e.g. C:\Python27\Lib\site-packages\setuptools\cli-64.exe, and
gets copied and renamed with the script name. The source code is here:
https://github.com/pypa/setuptools/blob/master/launcher.c , and I was
able to find the error mentioned in the OP:
https://github.com/pypa/setuptools/blob/master/launcher.c#L209

Cheers,

Thomas

On Wed, Nov 22, 2017 at 5:08 PM Thomas Mansencal wrote:

> Hi,
>
> This is a Windows specific question. To give a bit of context, I'm
> working in a studio where, depending on the project, we use different
> Python interpreters installed in different locations, e.g. Python
> 2.7.13, Python 2.7.14, Python 3.6. We set PATH, PYTHONHOME and
> PYTHONPATH accordingly depending on the interpreter in use.
>
> Our Python packages live atomically on the network and are added to the
> environment on a per-project basis by extending PYTHONPATH. This is in
> contrast to using a monolithic virtual environment built with
> virtualenv or conda. Assuming it is compatible, a Python package might
> be used with any of the 3 aforementioned interpreters, e.g. yapf (a
> code formatter).
>
> Now, on Windows, if you for example *pip install yapf*, a yapf.exe
> bootstrapping executable and a yapf-script.py file are generated. The
> bootstrapping executable seems to look for the yapf-script.py file and
> launch it using the absolute hardcoded interpreter path from the
> yapf-script.py shebang.
>
> Given the above, we run into issues if for example yapf was deployed
> using Python 2.7.13 but the Python 2.7.14 interpreter is being used in
> the environment instead. We get a "failed to create process."
> error in that case.
>
> What we would like to do is to not be tied to the absolute interpreter
> path, but have it defined with a variable or just use #!python. I have
> tried to search for the above error in the CPython source code and the
> installation directory without luck. I would like to know what
> module/package is responsible for generating the bootstrapping
> executables, to understand how it works and see if we can doctor it for
> our usage.
>
> Bests,
>
> Thomas

From greg.ewing at canterbury.ac.nz Wed Nov 22 17:32:38 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 23 Nov 2017 11:32:38 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A15FB06.3000400@canterbury.ac.nz>

Paul Moore wrote:

> At the moment, I know I tend to treat Python semantics as I always
> did, but with an implicit proviso, "unless async is involved, when I
> can't assume any of my intuitions apply". That's *not* a good
> situation to be in.

It's disappointing that PEP 3152 was rejected, because I think it would
have helped people like you by providing a mental model for async stuff
that's much easier to reason about.

--
Greg

From levkivskyi at gmail.com Wed Nov 22 18:46:49 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 00:46:49 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On 22 November 2017 at 21:07, Yury Selivanov wrote:
[...]
> Is the difference between "[await x for y in z]" and "list(await x for
> y in z)" intuitive for all Python users? Again, probably not. But for
> those who write async code it is.

Just found another example of "intuitive" behaviour:

    >>> async def f():
    ...     for i in range(3):
    ...         yield i
    ...
    >>> async def g():
    ...     return [(yield i) async for i in f()]
    ...
    >>> g().send(None)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 2, in g
    TypeError: object async_generator can't be used in 'await' expression

Of course it is obvious for anyone who writes async code, but anyway an
interesting example.

--
Ivan

From yselivanov.ml at gmail.com Wed Nov 22 19:00:36 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 22 Nov 2017 19:00:36 -0500
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression

On Wed, Nov 22, 2017 at 6:46 PM, Ivan Levkivskyi wrote:
[..]
> Just found another example of "intuitive" behaviour: [...]
> Of course it is obvious for anyone who writes async code, but anyway an
> interesting example.

I wouldn't say that it's obvious to anyone... I think this thread started
as a discussion of the use of 'yield' expressions in comprehensions, and
the outcome of the discussion is that everyone thinks we should deprecate
that syntax in 3.7 and remove it in 3.8. Let's start with that? :)

Yury

From ethan at stoneleaf.us Wed Nov 22 19:30:39 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 22 Nov 2017 16:30:39 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
Message-ID: <5A1616AF.4040609@stoneleaf.us>

On 11/22/2017 01:40 PM, Ivan Levkivskyi wrote:

> On 22 November 2017 at 22:33, Ethan Furman wrote:
> > Right, the argument is that calling the `list` constructor should
> > return a list -- not a database proxy, not a web page, and not a
> > generator.
>
> Then you didn't read my example carefully, since the whole point is
> that the list constructor there returns a list.

This example?

    try:
        g.send(44)
    except StopIteration as e:
        assert e.value == [42, 43, 44]

--
~Ethan~

From victor.stinner at gmail.com Wed Nov 22 19:32:56 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 23 Nov 2017 01:32:56 +0100
Subject: [Python-Dev] PEP 559 - built-in noop()

Aha, contextlib.nullcontext() was just added, cool!

https://github.com/python/cpython/commit/0784a2e5b174d2dbf7b144d480559e650c5cf64c
https://bugs.python.org/issue10049

Victor
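nullcontext() makes the "optional context manager" pattern
straightforward; a small usage sketch (the dump() helper here is made up
for illustration):

    import sys
    from contextlib import nullcontext  # new in Python 3.7 (bpo-10049)

    def dump(path=None):
        # A real file must be closed on exit; sys.stdout must not be,
        # so wrap it in a context manager that does nothing.
        cm = open(path, "w") if path else nullcontext(sys.stdout)
        with cm as f:
            f.write("hello\n")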
2017-09-09 21:54 GMT+02:00 Victor Stinner:
> I always wanted this feature (no kidding).
>
> Would it be possible to add support for the context manager?
>
>     with noop(): ...
>
> Maybe noop can be an instance of:
>
>     class Noop:
>         def __enter__(self, *args, **kw): return self
>         def __exit__(self, *args): pass
>         def __call__(self, *args, **kw): return self
>
> Victor
>
> On 9 Sep 2017 at 11:48 AM, "Barry Warsaw" wrote:
> > I couldn't resist one more PEP from the Core sprint. I won't reveal
> > where or how this one came to me.
> >
> > -Barry
> >
> > PEP: 559
> > Title: Built-in noop()
> > Author: Barry Warsaw
> > Status: Draft
> > Type: Standards Track
> > Content-Type: text/x-rst
> > Created: 2017-09-08
> > Python-Version: 3.7
> > Post-History: 2017-09-09
> >
> > Abstract
> > ========
> >
> > This PEP proposes adding a new built-in function called ``noop()``
> > which does nothing but return ``None``.
> >
> > Rationale
> > =========
> >
> > It is trivial to implement a no-op function in Python. It's so easy
> > in fact that many people do it many times over and over again. It
> > would be useful in many cases to have a common built-in function that
> > does nothing.
> >
> > One use case would be for PEP 553, where you could set the breakpoint
> > environment variable to the following in order to effectively disable
> > it::
> >
> >     $ setenv PYTHONBREAKPOINT=noop
> >
> > Implementation
> > ==============
> >
> > The Python equivalent of the ``noop()`` function is exactly::
> >
> >     def noop(*args, **kws):
> >         return None
> >
> > The C built-in implementation is available as a pull request.
> >
> > Rejected alternatives
> > =====================
> >
> > ``noop()`` returns something
> > ----------------------------
> >
> > YAGNI.
> >
> > This is rejected because it complicates the semantics. For example,
> > if you always return both ``*args`` and ``**kws``, what do you return
> > when none of those are given? Returning a tuple of ``((), {})`` is
> > kind of ugly, but provides consistency. But you might also want to
> > just return ``None`` since that's also conceptually what the function
> > was passed.
> >
> > Or, what if you pass in exactly one positional argument, e.g.
> > ``noop(7)``. Do you return ``7`` or ``((7,), {})``? And so on.
> >
> > The author claims that you won't ever need the return value of
> > ``noop()`` so it will always return ``None``.
> >
> > Coghlan's Dialogs (edited for formatting):
> >
> >     My counterargument to this would be ``map(noop, iterable)``,
> >     ``sorted(iterable, key=noop)``, etc. (``filter``, ``max``, and
> >     ``min`` all accept callables that accept a single argument, as do
> >     many of the itertools operations).
> >
> >     Making ``noop()`` a useful default function in those cases just
> >     needs the definition to be::
> >
> >         def noop(*args, **kwds):
> >             return args[0] if args else None
> >
> >     The counterargument to the counterargument is that using ``None``
> >     as the default in all these cases is going to be faster, since it
> >     lets the algorithm skip the callback entirely, rather than
> >     calling it and having it do nothing useful.
> >
> > Copyright
> > =========
> >
> > This document has been placed in the public domain.

From ncoghlan at gmail.com Wed Nov 22 20:24:54 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 23 Nov 2017 11:24:54 +1000
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: <20171122120434.2fa59ea4@fsol>

On 22 November 2017 at 21:12, Victor Stinner wrote:
> 2017-11-22 12:04 GMT+01:00 Antoine Pitrou:
> > IMHO this really needs a simple solution documented somewhere.
Also, > > hopefully when you do the wrong thing, you get a clear error message to > > know how to fix your code? > > Right now, calling PyMem_RawMalloc() before calling > _PyRuntime_Initialize() calls the function at address NULL, so you get > a segmentation fault. > > Documenting the new requirements is part of the discussion, it's one > option how to fix this issue. > My own recommendation is that we add Eric's new test case to the embedding test suite and just make sure it works: wchar_t *program = Py_DecodeLocale("spam", NULL); Py_SetProgramName(program); Py_Initialize(); Py_Finalize(); PyMem_RawFree(program); It does place some additional constraints on us in terms of handling static initialization of the allocator state, and ensuring we revert back to that state in Py_Finalize, but I think it's the only way we're going to be able to reliably replace all calls to malloc & free with PyMem_RawMalloc and PyMem_RawFree without causing weird problems. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Wed Nov 22 23:36:30 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 23 Nov 2017 17:36:30 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> Message-ID: <5A16504E.6010200@canterbury.ac.nz> Paul Moore wrote: > 3. List comprehensions are the same as list(the equivalent generator > expression). I don't think that's ever been quite true -- there have always been odd cases such as what happens if you raise StopIteration in list(generator_expression). To my mind, these equivalences have never been intended as exact descriptions of the semantics, but just a way of quickly getting across the general idea. Further words are needed to pin down all the fine details. -- Greg From greg.ewing at canterbury.ac.nz Wed Nov 22 23:44:58 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 23 Nov 2017 17:44:58 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> Message-ID: <5A16524A.6060702@canterbury.ac.nz> Ivan Levkivskyi wrote: > The key idea is that neither comprehensions nor generator expressions > should create a function scope surrounding the `expr` I don't see how you can avoid an implicit function scope in the case of a generator expression, though. And I can't see how to make yield in a generator expression do anything sensible. Consider this: def g(): return ((yield i) for i in range(10)) Presumably the yield should turn g into a generator, but... then what? My brain is hurting trying to figure out what it should do. -- Greg From ncoghlan at gmail.com Wed Nov 22 23:54:49 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 23 Nov 2017 14:54:49 +1000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A16504E.6010200@canterbury.ac.nz> References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> <5A16504E.6010200@canterbury.ac.nz> Message-ID: On 23 November 2017 at 14:36, Greg Ewing wrote: > Paul Moore wrote: > >> 3. List comprehensions are the same as list(the equivalent generator >> expression). 
>>
>
> I don't think that's ever been quite true -- there have
> always been odd cases such as what happens if you
> raise StopIteration in list(generator_expression).
>
> To my mind, these equivalences have never been intended
> as exact descriptions of the semantics, but just a way
> of quickly getting across the general idea. Further
> words are needed to pin down all the fine details.

Getting the name resolution to be identical was definitely one of my goals
when working on the Python 3 comprehension scoping changes.

The fact that implicit scopes and yield expressions interact strangely was
just a pre-existing oddity from when PEP 342 was first implemented (and one
we were able to avoid for async/await by retaining the same "await is only
permitted in async comprehensions" constraint that exists for explicit
scopes).

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rosuav at gmail.com Thu Nov 23 00:23:25 2017
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 23 Nov 2017 16:23:25 +1100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <5A16504E.6010200@canterbury.ac.nz>
References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> <5A16504E.6010200@canterbury.ac.nz>
Message-ID:

On Thu, Nov 23, 2017 at 3:36 PM, Greg Ewing wrote:
> Paul Moore wrote:
>>
>> 3. List comprehensions are the same as list(the equivalent generator
>> expression).
>
> I don't think that's ever been quite true -- there have
> always been odd cases such as what happens if you
> raise StopIteration in list(generator_expression).

You mean if the genexp leaks one? That's basically an error either way -
the genexp will raise RuntimeError, but it's still an exception.

>>> from __future__ import generator_stop
>>> def boom(): raise StopIteration
...
>>> [x if x < 3 else boom() for x in range(5)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "<stdin>", line 1, in boom
StopIteration
>>> list(x if x < 3 else boom() for x in range(5))
Traceback (most recent call last):
  File "<stdin>", line 1, in <genexpr>
  File "<stdin>", line 1, in boom
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: generator raised StopIteration
>>>

So that's _one_ difference removed (mostly).

ChrisA

From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Nov 23 00:33:58 2017
From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull)
Date: Thu, 23 Nov 2017 14:33:58 +0900
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol>
Message-ID: <23062.24006.433865.333680@turnbull.sk.tsukuba.ac.jp>

Paul Moore writes:

> 1. List comprehensions expand into nested for/if statements in the
> "obvious" way - with an empty list created to start and append used to
> add items to it.
> 1a. Variables introduced in the comprehension don't "leak" (see below).
> 2. Generator expressions expand into generators with the same "nested
> loop" behaviour, and a yield of the generated value.
> 3. List comprehensions are the same as list(the equivalent generator
> expression).

I'm a little late to this discussion, but I don't see how 3 can be true
if 1 and 2 are.
Because I believe my model of generator expressions is pretty simple, I'll
present it here:

- the generator expression in 2 and 3 implicitly creates a generator
  function containing the usual loop (the point being that it will
  "capture" all of the yields, implicit *and* explicit, in the generator
  expression),
- then implicitly invokes it,
- passing the (iterable) generator object returned to the containing
  expression. "Look Ma, no yields (or generator functions) left!"

However, my model of comprehensions is exactly a for loop that appends to
an empty list repeatedly, but doesn't leak iteration variables. So a yield
in a comprehension turns the function *containing* the comprehension
(which may or may not exist) into a generator function. In other words, I
don't agree with Ethan that a yield inside a list comprehension should not
affect the generator-ness of the containing function. What would this mean
under that condition:

    [f(x) for x in (yield iterable)]

then? IOW,

    x = ((yield i) for i in iterable)

IS valid syntax at top level, while

    x = [(yield i) for i in iterable]

IS NOT valid syntax at top level given those semantics. The latter "works"
in Python 3.6:

>>> for i in [(yield i) for i in (1, 2, 3)]:
...     i
...
1
2
3

though I think it should be a syntax error, and "bare" yield does not
"work":

>>> i = 1
>>> yield i
  File "<stdin>", line 1
SyntaxError: 'yield' outside function
>>> (yield i)
  File "<stdin>", line 1
SyntaxError: 'yield' outside function

FWIW, that's the way I'd want it, and the way I've always understood
comprehensions and generator expressions. I think this is consistent with
Yury's, Serhiy's, and Ivan's claims, but I admit I'm not sure.

Of course the compiler need not create a generator function and invoke it,
but the resulting bytecode should be the same as if it did. This means the
semantics of [FOR-EXPRESSION] and [(FOR-EXPRESSION)] should differ in the
same way.

Steve

From ncoghlan at gmail.com Thu Nov 23 01:00:58 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 23 Nov 2017 16:00:58 +1000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <23062.24006.433865.333680@turnbull.sk.tsukuba.ac.jp>
References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> <23062.24006.433865.333680@turnbull.sk.tsukuba.ac.jp>
Message-ID:

On 23 November 2017 at 15:33, Stephen J. Turnbull
<turnbull.stephen.fw at u.tsukuba.ac.jp> wrote:
> However, my model of comprehensions is exactly a for loop that appends
> to an empty list repeatedly, but doesn't leak iteration variables.

Not since Python 3.0. Instead, they create a nested function, the same way
generator expressions do (which is why the name resolution semantics are
now identical between the two cases).

The differences in structure between the four cases (genexp, list/set/dict
comprehensions) then relate mainly to what the innermost loop does:

    result.append(expr)  # list comp
    result.add(expr)     # set comp
    result[k] = v        # dict comp
    yield expr           # genexp

Thus, when the expression itself is a yield expression, you get:

    result.append(yield expr)  # list comp
    result.add(yield expr)     # set comp
    result[k] = (yield v)      # dict comp
    yield (yield expr)         # genexp

Cheers,
Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From turnbull.stephen.fw at u.tsukuba.ac.jp Thu Nov 23 01:12:46 2017
From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J.
Turnbull) Date: Thu, 23 Nov 2017 15:12:46 +0900 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A16524A.6060702@canterbury.ac.nz> References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz> Message-ID: <23062.26334.690111.704146@turnbull.sk.tsukuba.ac.jp> Greg Ewing writes: > Consider this: > > def g(): > return ((yield i) for i in range(10)) > > Presumably the yield should turn g into a generator, but... > then what? My brain is hurting trying to figure out what > it should do. I don't understand why you presume that. The generator expression doesn't do that anywhere else. My model is that implicitly the generator expression is creating a function that becomes a generator factory, which is implicitly called to return the iterable generator object, which contains the yield. Because the call takes place implicitly = at compile time, all the containing function "sees" is an iterable (which happens to be a generator object). "Look Ma, no yields left!" And then g returns the generator object. What am I missing? In other words, g above is equivalent to def g(): def _g(): for i in range(10): # the outer yield is the usual implicit yield from the # expansion of the generator expression, and the inner # yield is explicit in your code. yield (yield i) return _g() (modulo some issues of leaking identifiers). I have not figured out why either your g or my g does what it does, but they do the same thing. From tseaver at palladion.com Thu Nov 23 02:23:39 2017 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 23 Nov 2017 02:23:39 -0500 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> Message-ID: On 11/22/2017 07:00 PM, Yury Selivanov wrote: > I wouldn't say that it's obvious to anyone... > > I think this thread has started to discuss the use of 'yield' > expression in comprehensions, and the outcome of the discussion is > that everyone thinks that we should deprecate that syntax in 3.7, > remove in 3.8. Let's start with that? :) You guys are no fun: such a change would remove at least one evil "bwah-ha-ha" cackle from David Beazley's next PyCon talk. :) Tres. -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From levkivskyi at gmail.com Thu Nov 23 02:30:42 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 08:30:42 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> Message-ID: On 23 November 2017 at 01:00, Yury Selivanov wrote: > On Wed, Nov 22, 2017 at 6:46 PM, Ivan Levkivskyi > wrote: > [..] > > Just found another example of intuitive behaviour: > > > >>>> async def f(): > > ... for i in range(3): > > ... yield i > > ... > >>>> async def g(): > > ... return [(yield i) async for i in f()] > > ... > >>>> g().send(None) > > Traceback (most recent call last): > > File "", line 1, in > > File "", line 2, in g > > TypeError: object async_generator can't be used in 'await' expression > > > > of course it is obvious for anyone who writes async code, but anyway an > > interesting example. > > I wouldn't say that it's obvious to anyone... 
>
> I think this thread has started to discuss the use of 'yield'
> expression in comprehensions, and the outcome of the discussion is
> that everyone thinks that we should deprecate that syntax in 3.7,
> remove in 3.8. Let's start with that? :)

I am not sure everyone agrees with this. My main obstacle is the
following: consider the motivation for the `await` part of PEP 530, which
in my understanding is roughly like this:

"People sometimes want to refactor for-loops containing `await` into a
comprehension but that doesn't work (particularly because of the hidden
function scope) - lets fix this"

I don't see how this compares to:

"People sometimes want to refactor for-loops containing `yield` into a
comprehension but that doesn't work (particularly because of the hidden
function scope) - lets make it a SyntaxError"

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From levkivskyi at gmail.com Thu Nov 23 02:38:07 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 08:38:07 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <5A16524A.6060702@canterbury.ac.nz>
References: <20171122143858.17e51546@fsol> <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz>
Message-ID:

On 23 November 2017 at 05:44, Greg Ewing wrote:
> Ivan Levkivskyi wrote:
>
>> The key idea is that neither comprehensions nor generator expressions
>> should create a function scope surrounding the `expr`
>
> I don't see how you can avoid an implicit function scope in
> the case of a generator expression, though. And I can't see
> how to make yield in a generator expression do anything
> sensible.
>
> Consider this:
>
>     def g():
>         return ((yield i) for i in range(10))
>
> Presumably the yield should turn g into a generator, but...
> then what? My brain is hurting trying to figure out what
> it should do.

I think this code should be just equivalent to this code:

    def g():
        temp = [(yield i) for i in range(10)]
        return (v for v in temp)

Semantics of the comprehension should be clear here (just an
equivalent for-loop without name leaking)

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From srkunze at mail.de Thu Nov 23 02:51:44 2017
From: srkunze at mail.de (Sven R. Kunze)
Date: Thu, 23 Nov 2017 08:51:44 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz>
Message-ID: <29d709f4-8f60-d9c5-8c2b-b17e1abfcb70@mail.de>

On 23.11.2017 08:38, Ivan Levkivskyi wrote:
> I think this code should be just equivalent to this code:
>
>     def g():
>         temp = [(yield i) for i in range(10)]
>         return (v for v in temp)
>
> Semantics of the comprehension should be clear here (just an
> equivalent for-loop without name leaking)

Excuse me if I disagree here. If I saw this in real-world code, I could
not imagine what would happen here.

A "yield" within a comprehension is like a "return" in a comprehension.
It makes no sense at all. A "yield" combined with a "return with value"
is also rarely seen.

Comprehensions build new objects; they are not for control flow, IMO.

Cheers,
Sven
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From greg.ewing at canterbury.ac.nz Thu Nov 23 03:11:37 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 23 Nov 2017 21:11:37 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> Message-ID: <5A1682B9.6010908@canterbury.ac.nz> Ivan Levkivskyi wrote: > "People sometimes want to refactor for-loops containing `yield` into a > comprehension but that doesn't work (particularly because of the hidden > function scope) - lets make it a SyntaxError" Personally I'd be fine with removing the implicit function scope from comprehensions and allowing yield in them, since the semantics of that are clear. But I don't see a way to do anything equivalent with generator expressions. Since the current effect of yield in a generator expression is pretty useless, it seems best just to disallow it. That means a list comprehension won't be equivalent to list(generator_expression) in all cases, but I don't think there's any great need for it to be. -- Greg From greg.ewing at canterbury.ac.nz Thu Nov 23 03:15:59 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 23 Nov 2017 21:15:59 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz> Message-ID: <5A1683BF.4030105@canterbury.ac.nz> Ivan Levkivskyi wrote: > On 23 November 2017 at 05:44, Greg Ewing > wrote: > > def g(): > return ((yield i) for i in range(10)) > > > I think this code should be just equivalent to this code > > def g(): > temp = [(yield i) for i in range(10)] > return (v for v in temp) But then you get a non-lazy iterable, which defeats the purpose of using a generator expression -- you might as well have used a comprehension to begin with. -- Greg From greg.ewing at canterbury.ac.nz Thu Nov 23 03:17:39 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 23 Nov 2017 21:17:39 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> Message-ID: <5A168423.505@canterbury.ac.nz> Ivan Levkivskyi wrote: > "People sometimes want to refactor for-loops containing `yield` into a > comprehension By the way, do we have any real-life examples of people wanting to do this? It might help us decide what the semantics should be. -- Greg From levkivskyi at gmail.com Thu Nov 23 03:19:54 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 09:19:54 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A1683BF.4030105@canterbury.ac.nz> References: <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz> <5A1683BF.4030105@canterbury.ac.nz> Message-ID: On 23 November 2017 at 09:15, Greg Ewing wrote: > Ivan Levkivskyi wrote: > >> On 23 November 2017 at 05:44, Greg Ewing > > wrote: >> >> def g(): >> return ((yield i) for i in range(10)) >> >> >> I think this code should be just equivalent to this code >> >> def g(): >> temp = [(yield i) for i in range(10)] >> return (v for v in temp) >> > > But then you get a non-lazy iterable, which defeats the > purpose of using a generator expression -- you might as > well have used a comprehension to begin with. > > This could be just a semantic equivalence (mental model), not how it should be internally implemented. 
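To make the mental model concrete, here is a rough, untested sketch of it
written with explicit loops, so that it runs on today's Python (the name
`g_model` and the driver code below are mine, purely for illustration):

    def g_model():
        # Stands in for:  temp = [(yield i) for i in range(10)]
        # under the proposed semantics, where the yields belong to
        # g_model itself rather than to a hidden inner function.
        temp = []
        for i in range(10):
            temp.append((yield i))
        # Stands in for:  return (v for v in temp)
        return (v for v in temp)

    # Driving it: the caller sends values back in; the final generator
    # arrives as StopIteration.value (the PEP 380 return mechanism).
    gen = g_model()
    value = gen.send(None)            # first yielded value: 0
    try:
        while True:
            value = gen.send(value * 10)
    except StopIteration as e:
        result = list(e.value)        # the returned generator, as a list
        assert result == [i * 10 for i in range(10)]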
-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From levkivskyi at gmail.com Thu Nov 23 03:21:58 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 09:21:58 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <5A1682B9.6010908@canterbury.ac.nz>
References: <5A15B5F3.2080009@stoneleaf.us> <5A1682B9.6010908@canterbury.ac.nz>
Message-ID:

On 23 November 2017 at 09:11, Greg Ewing wrote:
> Ivan Levkivskyi wrote:
>
>> "People sometimes want to refactor for-loops containing `yield` into a
>> comprehension but that doesn't work (particularly because of the hidden
>> function scope) - lets make it a SyntaxError"
>
> Personally I'd be fine with removing the implicit function
> scope from comprehensions and allowing yield in them, since
> the semantics of that are clear.
>
> But I don't see a way to do anything equivalent with
> generator expressions. Since the current effect of
> yield in a generator expression is pretty useless,
> it seems best just to disallow it.
>
> That means a list comprehension won't be equivalent
> to list(generator_expression) in all cases, but I
> don't think there's any great need for it to be.

I am also fine with this. Generator expressions are indeed less clear.
Also, different people have different mental models of them (especially
re implicit scope and equivalence to comprehensions). On the contrary,
the vast majority agrees that comprehensions are just for-loops without
leaking variables.

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From levkivskyi at gmail.com Thu Nov 23 03:26:26 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 09:26:26 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <5A168423.505@canterbury.ac.nz>
References: <5A15B5F3.2080009@stoneleaf.us> <5A168423.505@canterbury.ac.nz>
Message-ID:

On 23 November 2017 at 09:17, Greg Ewing wrote:
> Ivan Levkivskyi wrote:
>
>> "People sometimes want to refactor for-loops containing `yield` into a
>> comprehension
>
> By the way, do we have any real-life examples of people wanting to
> do this? It might help us decide what the semantics should be.

Yes, there are two SO questions in the first two posts here, and there are
also some b.p.o. issues. It looks like in all cases people expect:

    def f():
        return [(yield i) for i in range(3)]

to be roughly equivalent to:

    def f():
        res = []
        for i in range(3):
            r = yield i
            res.append(r)
        return res

See Serhiy's original post for a more detailed proposed semantic
equivalence.

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From steve at holdenweb.com Thu Nov 23 04:14:56 2017
From: steve at holdenweb.com (Steve Holden)
Date: Thu, 23 Nov 2017 09:14:56 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <073d78ee-4d0d-a651-66c6-e4d3b4cf0a07@mail.de>
References: <5A15B5F3.2080009@stoneleaf.us> <073d78ee-4d0d-a651-66c6-e4d3b4cf0a07@mail.de>
Message-ID:

On Wed, Nov 22, 2017 at 8:48 PM, Sven R. Kunze wrote:

> Isn't yield like a return?

Enough like it to make a good case, I'd say.

> A return in a list/dict/set comprehension makes no sense to me.

Nor me, nor the vast majority of instances. But nowadays yield is more of
a synchronisation point. If something is valid syntax we should presumably
like to have defined semantics.
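To make that concrete, here is the kind of snippet at issue (an untested
sketch of the behaviour reported earlier in this thread for CPython 3.6;
the function name is mine, purely for illustration):

    def f():
        # Reads like "build a list", but the hidden comprehension scope
        # captures the yield, so on current CPython calling f() reportedly
        # returns a generator object rather than the expected list.
        return [(yield x) for x in range(3)]

    print(type(f()))  # reportedly <class 'generator'>, not <class 'list'>

Valid syntax, yes, but hardly well-defined semantics.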
> So, +1 on SyntaxError from me too.

I'd tend to agree. This would give more time to discuss the intended
semantics: giving it meaning later might be a more cautious approach that
would allow decisions to be made in the light of further experience.

I would urge developers, in their improvements to the language to support
asynchronous programming, to bear in mind that this is (currently) a
minority use case. Why the rush to set complex semantics in stone?

regards
Steve

> Cheers.
>
> On 22.11.2017 21:29, David Mertz wrote:
>
> Inasmuch as I get to opine, I'm +1 on SyntaxError. There is no behavior
> for that spelling that I would find intuitive or easy to explain to
> students. And as far as I can tell, the ONLY time anything has ever been
> spelled that way is in comments saying "look at this weird edge case
> behavior in Python."
>
> On Nov 22, 2017 10:57 AM, "Jelle Zijlstra" wrote:
>
> 2017-11-22 9:58 GMT-08:00 Guido van Rossum:
>
>> Wow, 44 messages in 4 hours. That must be some kind of record.
>>
>> If/when there's an action item, can someone summarize for me?
>
> The main disagreement seems to be about what this code should do:
>
>     g = [(yield i) for i in range(3)]
>
> Currently, this makes `g` into a generator, not a list. Everybody seems to
> agree this is nonintuitive and should be changed.
>
> One proposal is to make it so `g` gets assigned a list, and the `yield`
> happens in the enclosing scope (so the enclosing function would have to be
> a generator). This was the way things worked in Python 2, I believe.
>
> Another proposal is to make this code a syntax error, because it's
> confusing either way. (For what it's worth, that would be my preference.)
>
> There is related discussion about the semantics of list comprehensions
> versus calling list() on a generator expression, and of async semantics,
> but I don't think there's any clear point of action there.
>
>> --
>> --Guido van Rossum (python.org/~guido)
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle.zijlstra%40gmail.com
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve%40holdenweb.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mal at egenix.com Thu Nov 23 04:37:59 2017
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 23 Nov 2017 10:37:59 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To: References: Message-ID: <2410aa36-bbc3-2f8d-6be7-7d3844cc4b09@egenix.com>

On 18.11.2017 01:01, Victor Stinner wrote:
> Hi,
>
> The CPython internals evolved during Python 3.7 cycle. I would like to
> know if we broke the C API or not.
> > Nick Coghlan and Eric Snow are working on cleaning up the Python > initialization with the "on going" PEP 432: > https://www.python.org/dev/peps/pep-0432/ > > Many global variables used by the "Python runtime" were move to a new > single "_PyRuntime" variable (big structure made of sub-structures). > See Include/internal/pystate.h. > > A side effect of moving variables from random files into header files > is that it's not more possible to fully initialize _PyRuntime at > "compilation time". For example, previously, it was possible to refer > to local C function (functions declared with "static", so only visible > in the current file). Now a new "initialization function" is required > to must be called. > > In short, it means that using the "Python runtime" before it's > initialized by _PyRuntime_Initialize() is now likely to crash. For > example, calling PyMem_RawMalloc(), before calling > _PyRuntime_Initialize(), now calls the function NULL: dereference a > NULL pointer, and so immediately crash with a segmentation fault. To prevent a complete crash, would it be possible to initialize the struct entries to a generic function (or set of such functions with the right signatures), which then issue a message to stderr hinting to the missing call to _PyRuntime_Initialize() before terminating ? > I'm writing this email to ask if this change is an issue or not to > embedded Python and the Python C API. Is it still possible to call > "all" functions of the C API before calling Py_Initialize()? > > I was bitten by the bug while reworking the Py_Main() function to > split it into subfunctions and cleanup the code to handle the command > line arguments and environment variables. I fixed the issue in main() > by calling _PyRuntime_Initialize() as soon as possible: it's now the > first instruction of main() :-) (See Programs/python.c) > > To give a more concrete example: Py_DecodeLocale() is the recommanded > function to decode bytes from the operating system, but this function > calls PyMem_RawMalloc() which does crash before > _PyRuntime_Initialize() is called. Is Py_DecodeLocale() used to > initialize Python? > > For example, "void Py_SetProgramName(wchar_t *);" expects a text > string, whereas main() gives argv as bytes. Calling > Py_SetProgramName() from argv requires to decode bytes... So use > Py_DecodeLocale()... > > Should we do something in Py_DecodeLocale()? Maybe crash if > _PyRuntime_Initialize() wasn't called yet? > > Maybe, the minimum change is to expose _PyRuntime_Initialize() in the > public C API? > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/mal%40egenix.com > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Nov 23 2017) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From chris.jerdonek at gmail.com Thu Nov 23 04:42:26 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Thu, 23 Nov 2017 01:42:26 -0800 Subject: [Python-Dev] PEP 559 - built-in noop() In-Reply-To: References: Message-ID: On Wed, Nov 22, 2017 at 4:32 PM, Victor Stinner wrote: > Aha, contextlib.nullcontext() was just added, cool! > So is this equivalent to-- @contextmanager def yielding(x): yield x I thought we were against adding one-line functions? --Chris > > https://github.com/python/cpython/commit/0784a2e5b174d2dbf7b144d480559e > 650c5cf64c > https://bugs.python.org/issue10049 > > Victor > > 2017-09-09 21:54 GMT+02:00 Victor Stinner : > > I always wanted this feature (no kidding). > > > > Would it be possible to add support for the context manager? > > > > with noop(): ... > > > > Maybe noop can be an instance of: > > > > class Noop: > > def __enter__(self, *args, **kw): return self > > def __exit__(self, *args): pass > > def __call__(self, *args, **kw): return self > > > > Victor > > > > Le 9 sept. 2017 11:48 AM, "Barry Warsaw" a ?crit : > >> > >> I couldn?t resist one more PEP from the Core sprint. I won?t reveal > where > >> or how this one came to me. > >> > >> -Barry > >> > >> PEP: 559 > >> Title: Built-in noop() > >> Author: Barry Warsaw > >> Status: Draft > >> Type: Standards Track > >> Content-Type: text/x-rst > >> Created: 2017-09-08 > >> Python-Version: 3.7 > >> Post-History: 2017-09-09 > >> > >> > >> Abstract > >> ======== > >> > >> This PEP proposes adding a new built-in function called ``noop()`` which > >> does > >> nothing but return ``None``. > >> > >> > >> Rationale > >> ========= > >> > >> It is trivial to implement a no-op function in Python. It's so easy in > >> fact > >> that many people do it many times over and over again. It would be > useful > >> in > >> many cases to have a common built-in function that does nothing. > >> > >> One use case would be for PEP 553, where you could set the breakpoint > >> environment variable to the following in order to effectively disable > it:: > >> > >> $ setenv PYTHONBREAKPOINT=noop > >> > >> > >> Implementation > >> ============== > >> > >> The Python equivalent of the ``noop()`` function is exactly:: > >> > >> def noop(*args, **kws): > >> return None > >> > >> The C built-in implementation is available as a pull request. > >> > >> > >> Rejected alternatives > >> ===================== > >> > >> ``noop()`` returns something > >> ---------------------------- > >> > >> YAGNI. > >> > >> This is rejected because it complicates the semantics. For example, if > >> you > >> always return both ``*args`` and ``**kws``, what do you return when none > >> of > >> those are given? Returning a tuple of ``((), {})`` is kind of ugly, but > >> provides consistency. But you might also want to just return ``None`` > >> since > >> that's also conceptually what the function was passed. > >> > >> Or, what if you pass in exactly one positional argument, e.g. > ``noop(7)``. > >> Do > >> you return ``7`` or ``((7,), {})``? And so on. > >> > >> The author claims that you won't ever need the return value of > ``noop()`` > >> so > >> it will always return ``None``. > >> > >> Coghlin's Dialogs (edited for formatting): > >> > >> My counterargument to this would be ``map(noop, iterable)``, > >> ``sorted(iterable, key=noop)``, etc. 
(``filter``, ``max``, and > >> ``min`` all accept callables that accept a single argument, as do > >> many of the itertools operations). > >> > >> Making ``noop()`` a useful default function in those cases just > >> needs the definition to be:: > >> > >> def noop(*args, **kwds): > >> return args[0] if args else None > >> > >> The counterargument to the counterargument is that using ``None`` > >> as the default in all these cases is going to be faster, since it > >> lets the algorithm skip the callback entirely, rather than calling > >> it and having it do nothing useful. > >> > >> > >> Copyright > >> ========= > >> > >> This document has been placed in the public domain. > >> > >> > >> .. > >> Local Variables: > >> mode: indented-text > >> indent-tabs-mode: nil > >> sentence-end-double-space: t > >> fill-column: 70 > >> coding: utf-8 > >> End: > >> > >> > >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> https://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: > >> https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > >> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > chris.jerdonek%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Nov 23 04:50:27 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 09:50:27 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> <073d78ee-4d0d-a651-66c6-e4d3b4cf0a07@mail.de> Message-ID: On 23 November 2017 at 09:14, Steve Holden wrote: > I would urge developers, in their improvements to the language to support > asynchronous programming, to bear in mind that this is (currently) a > minority use case. Why the rush to set complex semantics in stone? +1 Also, given that languages like C# have similar async/await functionality, I'd be interested to know how they address questions like this. If they have a parallel, we should probably follow it. If they don't that would be further indication that no-one has much experience of the "best answers" yet, and caution is indicated. BTW, I'm assuming that the end goal is for async to be a natural and fully-integrated part of the language, no more "minority use" than generators, or context managers. Assuming that's the case, I think that keeping a very careful eye on how intuitive async feels to non-specialists is crucial (so thanks to Ivan and Yury for taking the time to respond to this discussion). Paul From solipsis at pitrou.net Thu Nov 23 05:16:25 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 23 Nov 2017 11:16:25 +0100 Subject: [Python-Dev] Python initialization and embedded Python References: <2410aa36-bbc3-2f8d-6be7-7d3844cc4b09@egenix.com> Message-ID: <20171123111625.1111b47a@fsol> On Thu, 23 Nov 2017 10:37:59 +0100 "M.-A. Lemburg" wrote: > On 18.11.2017 01:01, Victor Stinner wrote: > > Hi, > > > > The CPython internals evolved during Python 3.7 cycle. I would like to > > know if we broke the C API or not. 
> > > > Nick Coghlan and Eric Snow are working on cleaning up the Python > > initialization with the "on going" PEP 432: > > https://www.python.org/dev/peps/pep-0432/ > > > > Many global variables used by the "Python runtime" were move to a new > > single "_PyRuntime" variable (big structure made of sub-structures). > > See Include/internal/pystate.h. > > > > A side effect of moving variables from random files into header files > > is that it's not more possible to fully initialize _PyRuntime at > > "compilation time". For example, previously, it was possible to refer > > to local C function (functions declared with "static", so only visible > > in the current file). Now a new "initialization function" is required > > to must be called. > > > > In short, it means that using the "Python runtime" before it's > > initialized by _PyRuntime_Initialize() is now likely to crash. For > > example, calling PyMem_RawMalloc(), before calling > > _PyRuntime_Initialize(), now calls the function NULL: dereference a > > NULL pointer, and so immediately crash with a segmentation fault. > > To prevent a complete crash, would it be possible to initialize > the struct entries to a generic function (or set of such functions > with the right signatures), which then issue a message to stderr > hinting to the missing call to _PyRuntime_Initialize() > before terminating ? +1. This sounds like a good idea. Regards Antoine. From ncoghlan at gmail.com Thu Nov 23 05:35:29 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 23 Nov 2017 20:35:29 +1000 Subject: [Python-Dev] PEP 559 - built-in noop() In-Reply-To: References: Message-ID: On 23 November 2017 at 19:42, Chris Jerdonek wrote: > On Wed, Nov 22, 2017 at 4:32 PM, Victor Stinner > wrote: > >> Aha, contextlib.nullcontext() was just added, cool! >> > > So is this equivalent to-- > > @contextmanager > def yielding(x): > yield x > > I thought we were against adding one-line functions? > There's a lot of runtime complexity hiding behind that "@contextmanager" line, so I'm open to `contextlib` additions that make it possible for sufficiently common patterns to avoid it. (The explicit class based nullcontext() implementation is 7 lines, the same as contextlib.closing()) After 7+ years, I'm happy that this one comes up often enough to be worth a more obvious standard library level answer than we'd previously offered. https://bugs.python.org/issue10049#msg281556 captures the point where I really started changing my mind. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 23 05:55:42 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 23 Nov 2017 20:55:42 +1000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A1682B9.6010908@canterbury.ac.nz> References: <5A15B5F3.2080009@stoneleaf.us> <5A1682B9.6010908@canterbury.ac.nz> Message-ID: On 23 November 2017 at 18:11, Greg Ewing wrote: > Ivan Levkivskyi wrote: > >> "People sometimes want to refactor for-loops containing `yield` into a >> comprehension but that doesn't work (particularly because of the hidden >> function scope) - lets make it a SyntaxError" >> > > Personally I'd be fine with removing the implicit function > scope from comprehensions and allowing yield in them, since > the semantics of that are clear. 
> People keep saying this, but seriously, those semantics aren't clear at all once you actually start trying to implement it. Yes, they're obvious in simple cases, but it isn't the simple cases that are the problem. Instead, things start getting hard once you're dealing with: - unpacking to multiple variable names - nested loops in the comprehension - lexical closures inside the comprehension (e.g. lambda expressions, comprehensions inside comprehensions) Hence the approach we ended up going with for https://bugs.python.org/issue1660500, which was to use a real function scope that already handled all of those potential problems in a well defined way. Technically we *could* define new answers to all of those situations, but then we're stuck explaining to everyone what those new behaviours actually are, and I think that will actually be harder than the status quo, where we only have to explain why these implicit scopes act much the same way that "lambda: await expr" and "lambda: yield expr" do. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Thu Nov 23 06:38:31 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 12:38:31 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> <5A1682B9.6010908@canterbury.ac.nz> Message-ID: On 23 November 2017 at 11:55, Nick Coghlan wrote: > On 23 November 2017 at 18:11, Greg Ewing > wrote: > >> Ivan Levkivskyi wrote: >> >>> "People sometimes want to refactor for-loops containing `yield` into a >>> comprehension but that doesn't work (particularly because of the hidden >>> function scope) - lets make it a SyntaxError" >>> >> >> Personally I'd be fine with removing the implicit function >> scope from comprehensions and allowing yield in them, since >> the semantics of that are clear. >> > > People keep saying this, but seriously, those semantics aren't clear at > all once you actually start trying to implement it. > > If Serhiy will implement his idea (emitting for-loop bytecode inside a try-finally), then I see no problems accepting it as a fix. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Thu Nov 23 06:39:46 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 12:39:46 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> <5A1682B9.6010908@canterbury.ac.nz> Message-ID: On 23 November 2017 at 12:38, Ivan Levkivskyi wrote: > On 23 November 2017 at 11:55, Nick Coghlan wrote: > >> On 23 November 2017 at 18:11, Greg Ewing >> wrote: >> >>> Ivan Levkivskyi wrote: >>> >>>> "People sometimes want to refactor for-loops containing `yield` into a >>>> comprehension but that doesn't work (particularly because of the hidden >>>> function scope) - lets make it a SyntaxError" >>>> >>> >>> Personally I'd be fine with removing the implicit function >>> scope from comprehensions and allowing yield in them, since >>> the semantics of that are clear. >>> >> >> People keep saying this, but seriously, those semantics aren't clear at >> all once you actually start trying to implement it. >> >> > If Serhiy will implement his idea (emitting for-loop bytecode inside a > try-finally), then I see no problems accepting it as a fix. 
> Also I think it makes sense to keep discussion in one place, i.e. either
> here xor at https://bugs.python.org/issue10544

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From levkivskyi at gmail.com Thu Nov 23 06:44:20 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 12:44:20 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A15B5F3.2080009@stoneleaf.us> <073d78ee-4d0d-a651-66c6-e4d3b4cf0a07@mail.de>
Message-ID:

On 23 November 2017 at 10:50, Paul Moore wrote:
> On 23 November 2017 at 09:14, Steve Holden wrote:
> > I would urge developers, in their improvements to the language to support
> > asynchronous programming, to bear in mind that this is (currently) a
> > minority use case. Why the rush to set complex semantics in stone?
>
> +1
>
> Also, given that languages like C# have similar async/await
> functionality, I'd be interested to know how they address questions
> like this. If they have a parallel, we should probably follow it. If
> they don't that would be further indication that no-one has much
> experience of the "best answers" yet, and caution is indicated.

Keeping this open for an indefinite time is also not a good option. Note
that the issue about `yield` in comprehensions, https://bugs.python.org/issue10544,
is 7 (seven) years old. I am not saying that we should fix it _right now_,
but I think it makes sense to spend some time and finally resolve it.

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From solipsis at pitrou.net Thu Nov 23 06:49:28 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 23 Nov 2017 12:49:28 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
References: <5A1682B9.6010908@canterbury.ac.nz>
Message-ID: <20171123124928.304986e9@fsol>

On Thu, 23 Nov 2017 12:39:46 +0100
Ivan Levkivskyi wrote:
>
> Also I think it makes sense to keep discussion in one place, i.e. either
> here xor at https://bugs.python.org/issue10544

The bug tracker can be used for implementation discussions, but general
language design decisions (such as whether to allow or not a certain
construct) should take place on python-dev.

I'm still in favour of deprecating and then disallowing. Nobody seems
to have presented a real-world use case that is made significantly
easier by trying to "fix" the current behaviour (as opposed to
spelling the loop explicitly). I do asynchronous programming using
"yield" every day in my job (because of compatibility requirements
with Python 2) and I've never once had the need to write a "yield"
inside a comprehension or generator expression.

Regards
Antoine.

From solipsis at pitrou.net Thu Nov 23 06:53:26 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 23 Nov 2017 12:53:26 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
References: <5A15B5F3.2080009@stoneleaf.us> <073d78ee-4d0d-a651-66c6-e4d3b4cf0a07@mail.de>
Message-ID: <20171123125326.176f54b2@fsol>

On Thu, 23 Nov 2017 09:50:27 +0000
Paul Moore wrote:
> On 23 November 2017 at 09:14, Steve Holden wrote:
> > I would urge developers, in their improvements to the language to support
> > asynchronous programming, to bear in mind that this is (currently) a
> > minority use case. Why the rush to set complex semantics in stone?
>
> +1
>
> Also, given that languages like C# have similar async/await
> functionality, I'd be interested to know how they address questions
> like this. If they have a parallel, we should probably follow it. If
> they don't that would be further indication that no-one has much
> experience of the "best answers" yet, and caution is indicated.

This discussion isn't about async/await or asynchronous programming.
It's about "yield" (which used to be the standard for asynchronous
programming before async/await, but isn't anymore). The fact that
"await" is now the standard further weakens the case for "yield" inside
comprehensions and generator expressions.

As someone who does asynchronous programming daily using "yield" (because
of compatibility requirements with Python 2), I don't think I've even
tried to use "yield" in a comprehension or generator expression. The use
case doesn't seem to exist.

Regards
Antoine.

From levkivskyi at gmail.com Thu Nov 23 07:01:08 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 13:01:08 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <20171123124928.304986e9@fsol>
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23 November 2017 at 12:49, Antoine Pitrou wrote:
> On Thu, 23 Nov 2017 12:39:46 +0100
> Ivan Levkivskyi wrote:
> >
> > Also I think it makes sense to keep discussion in one place, i.e. either
> > here xor at https://bugs.python.org/issue10544
>
> The bug tracker can be used for implementation discussions, but general
> language design decisions (such as whether to allow or not a certain
> construct) should take place on python-dev.
>
> I'm still in favour of deprecating and then disallowing. Nobody seems
> to have presented a real-world use case that is made significantly
> easier by trying to "fix" the current behaviour (as opposed to
> spelling the loop explicitly). I do asynchronous programming using
> "yield" every day in my job (because of compatibility requirements
> with Python 2) and I've never once had the need to write a "yield"
> inside a comprehension or generator expression.

"I don't use it, therefore it is not needed" is a great argument, thanks.
Let's just forget about the two SO questions and the dozens of people who
up-voted them. Do you use async comprehensions? If not, then we don't need
them either.

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From antoine at python.org Thu Nov 23 07:12:06 2017
From: antoine at python.org (Antoine Pitrou)
Date: Thu, 23 Nov 2017 13:12:06 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23/11/2017 at 13:01, Ivan Levkivskyi wrote:
>
> "I don't use it, therefore it is not needed" is a great argument, thanks.

This is just a data point. Some people seem to think that the construct
is useful for asynchronous programming. In my experience it isn't. YMMV,
etc.

> Let's just forget about the two SO questions and the dozens of people who
> up-voted them.

Just because someone asks a question doesn't mean they have a pressing
use case. People are curious and will try constructs just for the sake
of it.

Today I looked up the Wikipedia page for Gregorian chant. Did I have a
use case for it? No, I was just curious.

Regards
Antoine.
From storchaka at gmail.com Thu Nov 23 07:17:32 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 23 Nov 2017 14:17:32 +0200
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: <20171123124928.304986e9@fsol>
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23.11.17 13:49, Antoine Pitrou wrote:
> I'm still in favour of deprecating and then disallowing.

We could disallow it without deprecation. The current behavior definitely
is wrong, nobody should depend on it. It should be either fixed or
disallowed.

> Nobody seems
> to have presented a real-world use case that is made significantly
> easier by trying to "fix" the current behaviour (as opposed to
> spelling the loop explicitly). I do asynchronous programming using
> "yield" every day in my job (because of compatibility requirements
> with Python 2) and I've never once had the need to write a "yield"
> inside a comprehension or generator expression.

I used the "yield" statement, but I never used the "yield" expression.
And I can't find examples. Could you please present a real-world use
case for the "yield" (not "yield from") expression?

From levkivskyi at gmail.com Thu Nov 23 07:28:40 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 13:28:40 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23 November 2017 at 13:11, Paul Moore wrote:
> On 23 November 2017 at 12:01, Ivan Levkivskyi wrote:
>
> > "I don't use it, therefore it is not needed" is a great argument, thanks.
> > Let's just forget about the two SO questions and the dozens of people who
> > up-voted them. Do you use async comprehensions? If not, then we don't
> > need them either.
>
> For those of us trying to keep up with the discussion who don't have
> time to chase the various references, and in the interest of keeping
> the discussion in one place, can you summarise the real-world use
> cases from the SO questions here? (I assume they are real world cases,
> and not just theoretical questions)

OK, here are the links:

https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions
https://stackoverflow.com/questions/29334054/why-am-i-getting-different-results-when-using-a-list-comprehension-with-coroutin
https://bugs.python.org/issue10544
https://bugs.python.org/issue3267

In all four cases the pattern is the same: people were trying to refactor
something like this:

    def f():
        res = []
        for x in y:
            r = yield x
            res.append(r)
        return res

into something like this:

    def f():
        return [(yield x) for x in y]

My understanding is that none of the cases is _pressing_, since they all
start with a for-loop, but following this logic comprehensions themselves
are not needed. Nevertheless, people use them because they like them. The
problem in all four cases is that they got a hard-to-debug bug, since
calling `f()` returns a generator, just not the one they would expect.

-- Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Thu Nov 23 07:30:02 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 23 Nov 2017 13:30:02 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: <20171123133002.353321ce@fsol> On Thu, 23 Nov 2017 14:17:32 +0200 Serhiy Storchaka wrote: > > I used the "yield" statement, but I never used the "yield" expressions. > And I can't found examples. Could you please present a real-world use > case for the "yield" (not "yield from") expression? Of course I can. "yield" expressions are important for writing Python 2-compatible asynchronous code while avoiding callback hell: See e.g. http://www.tornadoweb.org/en/stable/gen.html or https://jdb.github.io/concurrent/smartpython.html There are tons of real-world code written using this scheme (as opposed to almost no real-world code, even Python 2-only, using "yield" in comprehensions or generation expressions). Regards Antoine. From p.f.moore at gmail.com Thu Nov 23 07:11:36 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 12:11:36 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 12:01, Ivan Levkivskyi wrote: > "I don't use it, therefore it is not needed" is a great argument, thanks. > Lets just forget about two SO questions and dozens people who up-voted it. > Do you use async comprehensions? If not, then we don't need them either. For those of us trying to keep up with the discussion who don't have time to chase the various references, and in the interest of keeping the discussion in one place, can you summarise the real-world use cases from the SO questions here? (I assume they are real world cases, and not just theoretical questions) Thanks, Paul From levkivskyi at gmail.com Thu Nov 23 07:42:59 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 13:42:59 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <20171123133002.353321ce@fsol> References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> <20171123133002.353321ce@fsol> Message-ID: On 23 November 2017 at 13:30, Antoine Pitrou wrote: > On Thu, 23 Nov 2017 14:17:32 +0200 > Serhiy Storchaka wrote: > > > > I used the "yield" statement, but I never used the "yield" expressions. > > And I can't found examples. Could you please present a real-world use > > case for the "yield" (not "yield from") expression? > > Of course I can. "yield" expressions are important for writing > Python 2-compatible asynchronous code while avoiding callback hell: > > See e.g. http://www.tornadoweb.org/en/stable/gen.html > > Great, so I open this page and see this code: results = [] for future in list_of_futures: results.append(yield future) Interesting, why don't they use a comprehension for this and instead need to invent a whole `tornado.gen.multi` function? -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Thu Nov 23 07:45:44 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 12:45:44 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 12:28, Ivan Levkivskyi wrote: > On 23 November 2017 at 13:11, Paul Moore wrote: >> >> On 23 November 2017 at 12:01, Ivan Levkivskyi >> wrote: >> >> > "I don't use it, therefore it is not needed" is a great argument, >> > thanks. >> > Lets just forget about two SO questions and dozens people who up-voted >> > it. >> > Do you use async comprehensions? If not, then we don't need them either. >> >> For those of us trying to keep up with the discussion who don't have >> time to chase the various references, and in the interest of keeping >> the discussion in one place, can you summarise the real-world use >> cases from the SO questions here? (I assume they are real world cases, >> and not just theoretical questions) > > > OK, here are the links: > > https://stackoverflow.com/questions/45190729/differences-between-generator-comprehension-expressions > https://stackoverflow.com/questions/29334054/why-am-i-getting-different-results-when-using-a-list-comprehension-with-coroutin > https://bugs.python.org/issue10544 > https://bugs.python.org/issue3267 > > In all four cases pattern is the same, people were trying to refactor > something like this: > > def f(): > res = [] > for x in y: > r = yield x > res.append(r) > return res > > into something like this: > > def f(): > return [(yield x) for x in y] > > My understanding is that none of the case is _pressing_, since they all > start with a for-loop, but > following this logic comprehensions themselves are not needed. Nevertheless > people use them because they like it. > The problem in all four cases is that they got hard to debug problem, since > calling `f()` returns a generator, > just not the one they would expect. OK, thanks. I can see why someone would want to do this. However, it seems to me that the problem (a hard to debug error) could be solved by disallowing yield in comprehensions and generator expressions (giving an *easy* to debug error). I don't think the above is a compelling argument that we have to support the one-line form. If there was a non-trivial body of actual user code that uses the loop form, which would be substantially improved by being able to use comprehensions, that would be different. To put it another way, the example you gave is still artificial. The second link is a real use case, but as you say seems to be more a question about "why did this not work as I expected" which could be solved with a SyntaxError saying "yield expression not allowed in comprehensions". Paul From antoine at python.org Thu Nov 23 07:48:43 2017 From: antoine at python.org (Antoine Pitrou) Date: Thu, 23 Nov 2017 13:48:43 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> <20171123133002.353321ce@fsol> Message-ID: Le 23/11/2017 ? 13:42, Ivan Levkivskyi a ?crit?: > > Great, so I open this page and see this code: > > results = [] > for future in list_of_futures: > ? ? results.append(yield future) > > Interesting, why don't they use a comprehension for this and instead > need to invent a whole `tornado.gen.multi` function? 
1) because it schedules the yielded coroutines in parallel (the "for"
loop isn't strictly equivalent, as AFAIU it would schedule the
coroutines serially)
2) because it accepts an optional argument to quiet some exceptions

Regards

Antoine.

From p.f.moore at gmail.com  Thu Nov 23 07:50:35 2017
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 23 Nov 2017 12:50:35 +0000
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> <20171123133002.353321ce@fsol>
Message-ID:

On 23 November 2017 at 12:42, Ivan Levkivskyi wrote:
>> See e.g. http://www.tornadoweb.org/en/stable/gen.html
>>
>
> Great, so I open this page and see this code:
>
>     results = []
>     for future in list_of_futures:
>         results.append(yield future)
>
> Interesting, why don't they use a comprehension for this and instead need to
> invent a whole `tornado.gen.multi` function?

Because yield expressions in comprehensions are difficult to
understand, and the loop form is easy to understand? :-)

(Certainly I didn't find the explanation in that page confusing, I
don't know if I'd have found a comprehension form confusing, but I
suspect I might have...)

Paul

From storchaka at gmail.com  Thu Nov 23 07:54:27 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 23 Nov 2017 14:54:27 +0200
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To: <20171123133002.353321ce@fsol>
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> <20171123133002.353321ce@fsol>
Message-ID:

23.11.17 14:30, Antoine Pitrou wrote:
> On Thu, 23 Nov 2017 14:17:32 +0200
> Serhiy Storchaka wrote:
>>
>> I have used the "yield" statement, but I have never used "yield" expressions,
>> and I can't find examples. Could you please present a real-world use
>> case for the "yield" (not "yield from") expression?
>
> Of course I can. "yield" expressions are important for writing
> Python 2-compatible asynchronous code while avoiding callback hell:
>
> See e.g. http://www.tornadoweb.org/en/stable/gen.html
> or https://jdb.github.io/concurrent/smartpython.html
>
> There are tons of real-world code written using this scheme (as opposed
> to almost no real-world code, even Python 2-only, using "yield" in
> comprehensions or generator expressions).

Thank you. The tornado examples contain the following equivalent code for `results = yield multi(list_of_futures)`:

    results = []
    for future in list_of_futures:
        results.append(yield future)

Couldn't this be written as `results = [(yield future) for future in list_of_futures]`?

From levkivskyi at gmail.com  Thu Nov 23 08:04:21 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Thu, 23 Nov 2017 14:04:21 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23 November 2017 at 13:45, Paul Moore wrote:
> On 23 November 2017 at 12:28, Ivan Levkivskyi wrote:
> > On 23 November 2017 at 13:11, Paul Moore wrote:
> >>
> >> On 23 November 2017 at 12:01, Ivan Levkivskyi wrote:
> >>
> >> > "I don't use it, therefore it is not needed" is a great argument, thanks.
> >> > Lets just forget about two SO questions and dozens people who up-voted it.
> >> > Do you use async comprehensions? If not, then we don't need them either.
> >> > >> For those of us trying to keep up with the discussion who don't have > >> time to chase the various references, and in the interest of keeping > >> the discussion in one place, can you summarise the real-world use > >> cases from the SO questions here? (I assume they are real world cases, > >> and not just theoretical questions) > [...] > > > > My understanding is that none of the case is _pressing_, since they all > > start with a for-loop, but > > following this logic comprehensions themselves are not needed. > Nevertheless > > people use them because they like it. > > The problem in all four cases is that they got hard to debug problem, > since > > calling `f()` returns a generator, > > just not the one they would expect. > > OK, thanks. I can see why someone would want to do this. However, it > seems to me that the problem (a hard to debug error) could be solved > by disallowing yield in comprehensions and generator expressions > (giving an *easy* to debug error). I don't think the above is a > compelling argument that we have to support the one-line form. If > there was a non-trivial body of actual user code that uses the loop > form, which would be substantially improved by being able to use > comprehensions, that would be different. To put it another way, the > example you gave is still artificial. The second link is a real use > case, but as you say seems to be more a question about "why did this > not work as I expected" which could be solved with a SyntaxError > saying "yield expression not allowed in comprehensions". > The level of "artificialness" is quite subjective, this is rather matter of taste (see the tornado example). Let us forget for a moment about other problems and focus on this one: list comprehension is currently not equivalent to a for-loop. There are two options: - Fix this, i.e. make comprehension equivalent to a for-loop even in edge cases (Serhiy seems ready to do this) - Prohibit all cases when they are not equivalent I still prefer option one. But I see your point, option two is also an acceptable fix. Note that there were not so many situations when some code became SyntaxError later. I don't see why this particular case qualifies for such a radical measure as an exception to syntactic rules, instead of just fixing it (sorry Nick :-) -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Nov 23 08:04:22 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 23 Nov 2017 14:04:22 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> <20171123133002.353321ce@fsol> Message-ID: <20171123140422.7b0ec4ae@fsol> On Thu, 23 Nov 2017 14:54:27 +0200 Serhiy Storchaka wrote: > 23.11.17 14:30, Antoine Pitrou ????: > > On Thu, 23 Nov 2017 14:17:32 +0200 > > Serhiy Storchaka wrote: > >> > >> I used the "yield" statement, but I never used the "yield" expressions. > >> And I can't found examples. Could you please present a real-world use > >> case for the "yield" (not "yield from") expression? > > > > Of course I can. "yield" expressions are important for writing > > Python 2-compatible asynchronous code while avoiding callback hell: > > > > See e.g. 
http://www.tornadoweb.org/en/stable/gen.html > > or https://jdb.github.io/concurrent/smartpython.html > > > > There are tons of real-world code written using this scheme (as opposed > > to almost no real-world code, even Python 2-only, using "yield" in > > comprehensions or generation expressions). > > Thank you. The tornado examples contain the following equivalence code > for `results = yield multi(list_of_futures)`: > > results = [] > for future in list_of_futures: > results.append(yield future) > > Couldn't this by written as `results = [(yield future) for future in > list_of_futures]`? See my answer to Ivan above. The code isn't actually equivalent :-) But, yes, this construct *could* be useful if you wanted to schedule futures serially (as opposed to in parallel). However, since it doesn't work on Python 3.x, and the main reason to use "yield" coroutines (even with Tornado) instead of "async/await" is for compatibility, solving the "yield in a comprehension" problem in 3.7 wouldn't make things any better IMO. Regards Antoine. From p.f.moore at gmail.com Thu Nov 23 09:21:32 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 14:21:32 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 13:04, Ivan Levkivskyi wrote: > Let us forget for a moment about other problems and focus on this one: list > comprehension is currently not equivalent to a for-loop. > There are two options: > - Fix this, i.e. make comprehension equivalent to a for-loop even in edge > cases (Serhiy seems ready to do this) > - Prohibit all cases when they are not equivalent > > I still prefer option one. But I see your point, option two is also an > acceptable fix. > Note that there were not so many situations when some code became > SyntaxError later. > I don't see why this particular case qualifies for such a radical measure as > an exception to syntactic rules, > instead of just fixing it (sorry Nick :-) My main concern is that comprehension is not equivalent to a for loop for a specific reason - the scope issue. Has anyone looked back at the original discussions to confirm *why* a function was used? My recollection: >>> i = 1 >>> a = [i for i in (1,2,3)] >>> print(i) 1 Serihy's approach (and your described expansion) would have print(i) return NameError. So - do we actually have a proposal to avoid the implied function that *doesn't* break this example? I'm pretty sure this was a real-life issue at the time we switched to the current implementation. Paul From levkivskyi at gmail.com Thu Nov 23 09:24:27 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 15:24:27 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 15:21, Paul Moore wrote: > On 23 November 2017 at 13:04, Ivan Levkivskyi > wrote: > > Let us forget for a moment about other problems and focus on this one: > list > > comprehension is currently not equivalent to a for-loop. > > There are two options: > > - Fix this, i.e. make comprehension equivalent to a for-loop even in edge > > cases (Serhiy seems ready to do this) > > - Prohibit all cases when they are not equivalent > > > > I still prefer option one. But I see your point, option two is also an > > acceptable fix. 
> > Note that there were not so many situations when some code became > > SyntaxError later. > > I don't see why this particular case qualifies for such a radical > measure as > > an exception to syntactic rules, > > instead of just fixing it (sorry Nick :-) > > My main concern is that comprehension is not equivalent to a for loop > for a specific reason - the scope issue. Has anyone looked back at the > original discussions to confirm *why* a function was used? > > My recollection: > > >>> i = 1 > >>> a = [i for i in (1,2,3)] > >>> print(i) > 1 > > Serihy's approach (and your described expansion) would have print(i) > return NameError. > Absolutely no, it will still print 1. The internal implementation will use unique ids internally (see https://bugs.python.org/issue10544 for details). -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Thu Nov 23 09:25:15 2017 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 24 Nov 2017 01:25:15 +1100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Fri, Nov 24, 2017 at 1:21 AM, Paul Moore wrote: > On 23 November 2017 at 13:04, Ivan Levkivskyi wrote: >> Let us forget for a moment about other problems and focus on this one: list >> comprehension is currently not equivalent to a for-loop. >> There are two options: >> - Fix this, i.e. make comprehension equivalent to a for-loop even in edge >> cases (Serhiy seems ready to do this) >> - Prohibit all cases when they are not equivalent >> >> I still prefer option one. But I see your point, option two is also an >> acceptable fix. >> Note that there were not so many situations when some code became >> SyntaxError later. >> I don't see why this particular case qualifies for such a radical measure as >> an exception to syntactic rules, >> instead of just fixing it (sorry Nick :-) > > My main concern is that comprehension is not equivalent to a for loop > for a specific reason - the scope issue. Has anyone looked back at the > original discussions to confirm *why* a function was used? > > My recollection: > >>>> i = 1 >>>> a = [i for i in (1,2,3)] >>>> print(i) > 1 > > Serihy's approach (and your described expansion) would have print(i) > return NameError. > > So - do we actually have a proposal to avoid the implied function that > *doesn't* break this example? I'm pretty sure this was a real-life > issue at the time we switched to the current implementation. A while back I had a POC patch that made "with EXPR as NAME:" create a new subscope with NAME in it, such that the variable actually disappeared at the end of the 'with' block. Should I try to track that down and adapt the technique to comprehensions? The subscope shadows names exactly the way a nested function does, but it's all within the same function. ChrisA From p.f.moore at gmail.com Thu Nov 23 09:30:54 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 14:30:54 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 14:24, Ivan Levkivskyi wrote: >> My main concern is that comprehension is not equivalent to a for loop >> for a specific reason - the scope issue. Has anyone looked back at the >> original discussions to confirm *why* a function was used? 
>> >> My recollection: >> >> >>> i = 1 >> >>> a = [i for i in (1,2,3)] >> >>> print(i) >> 1 >> >> Serihy's approach (and your described expansion) would have print(i) >> return NameError. > > > Absolutely no, it will still print 1. The internal implementation will use > unique ids internally (see https://bugs.python.org/issue10544 for details). > Ok, cool. My main point still applies though - has anyone confirmed why a function scope was considered necessary at the time of the original implementation, but it's apparently not now? I'm pretty sure it was a deliberate choice, not an accident. Paul From storchaka at gmail.com Thu Nov 23 09:45:03 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 23 Nov 2017 16:45:03 +0200 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: 23.11.17 16:30, Paul Moore ????: > Ok, cool. My main point still applies though - has anyone confirmed > why a function scope was considered necessary at the time of the > original implementation, but it's apparently not now? I'm pretty sure > it was a deliberate choice, not an accident. The implementation with an intermediate one-time function is just simpler. The one of purposes of Python 3 was simplifying the implementation, even at the cost of some performance penalty. I'm pretty sure the corner case with "yield" was just missed. From levkivskyi at gmail.com Thu Nov 23 09:48:49 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Thu, 23 Nov 2017 15:48:49 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 23 November 2017 at 15:30, Paul Moore wrote: > On 23 November 2017 at 14:24, Ivan Levkivskyi > wrote: > >> My main concern is that comprehension is not equivalent to a for loop > >> for a specific reason - the scope issue. Has anyone looked back at the > >> original discussions to confirm *why* a function was used? > >> > >> My recollection: > >> > >> >>> i = 1 > >> >>> a = [i for i in (1,2,3)] > >> >>> print(i) > >> 1 > >> > >> Serihy's approach (and your described expansion) would have print(i) > >> return NameError. > > > > > > Absolutely no, it will still print 1. The internal implementation will > use > > unique ids internally (see https://bugs.python.org/issue10544 for > details). > > > > Ok, cool. My main point still applies though - has anyone confirmed > why a function scope was considered necessary at the time of the > original implementation, but it's apparently not now? I'm pretty sure > it was a deliberate choice, not an accident. >From what Nick explained on b.p.o. I understand that this is closer to the "accident" definition. Also the original issue https://bugs.python.org/issue1660500 doesn't have any discussion of the implementation _strategy_. So I tried to dig the mailing list, in the latest Guido's message I have found https://mail.python.org/pipermail/python-3000/2006-December/005218.html he still likes the idea of unique hidden ids (like Serhiy proposes now) and no function scopes. After that there is Nick's message https://mail.python.org/pipermail/python-3000/2006-December/005229.html where he says that he still likes pseudo-scopes more. Then I lost the track of discussion. It may well be Nick's intentional decision (and it has its merits) but I am not sure it was a conscious consensus. 
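To make the unique-hidden-ids idea concrete, Paul's example would expand to something roughly like this (just a sketch: readable placeholder names are used here, while the real implementation would use unspellable internal names, so nothing new would be visible to user code):

    i = 1

    # a = [i for i in (1, 2, 3)] would be compiled roughly as:
    _hidden_iter = iter((1, 2, 3))
    _hidden_result = []
    for _hidden_i in _hidden_iter:
        _hidden_result.append(_hidden_i)
    a = _hidden_result
    del _hidden_iter, _hidden_i, _hidden_result

    print(i)  # still prints 1: the loop variable was renamed, not leaked

No function object and no extra frame are involved, which is the whole point of the inlining approach.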
Nick could probably add more. Also I propose to wait and see when Serhiy will show us his complete implementation. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Nov 23 10:37:39 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 23 Nov 2017 07:37:39 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <29d709f4-8f60-d9c5-8c2b-b17e1abfcb70@mail.de> References: <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz> <29d709f4-8f60-d9c5-8c2b-b17e1abfcb70@mail.de> Message-ID: <5A16EB43.90002@stoneleaf.us> On 11/22/2017 11:51 PM, Sven R. Kunze wrote: > A "yield" within a comprehension is like a "return" in a comprehension. It makes no sense at all. > Also a "yield" and a "return with value" is also rarely seen. > > Comprehensions build new objects, they are not for control flow, IMO. +1 -- ~Ethan~ From barry at python.org Thu Nov 23 10:49:28 2017 From: barry at python.org (Barry Warsaw) Date: Thu, 23 Nov 2017 10:49:28 -0500 Subject: [Python-Dev] PEP 559 - built-in noop() In-Reply-To: References: Message-ID: On Nov 22, 2017, at 19:32, Victor Stinner wrote: > > Aha, contextlib.nullcontext() was just added, cool! So, if I rewrite PEP 559 in terms of decorators it won?t get rejected? from functools import wraps def noop(func): @wraps(func) def wrapper(*args, **kws): return None return wrapper @noop def do_something_important(x, y, z): return blah_blah_blah(x, y, z) print(do_something_important(1, 2, z=3)) Cheers? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From p.f.moore at gmail.com Thu Nov 23 10:50:19 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Nov 2017 15:50:19 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A16EB43.90002@stoneleaf.us> References: <20171122172404.75b05d56@fsol> <5A16524A.6060702@canterbury.ac.nz> <29d709f4-8f60-d9c5-8c2b-b17e1abfcb70@mail.de> <5A16EB43.90002@stoneleaf.us> Message-ID: On 23 November 2017 at 15:37, Ethan Furman wrote: > On 11/22/2017 11:51 PM, Sven R. Kunze wrote: > >> A "yield" within a comprehension is like a "return" in a comprehension. It >> makes no sense at all. >> Also a "yield" and a "return with value" is also rarely seen. >> >> Comprehensions build new objects, they are not for control flow, IMO. > > > +1 That isn't a principle I've seen described explicitly before, but I agree it makes sense to me. So +1 here as well. And yes, I know this might seem to contradict my position that comprehensions translate to loops, but remember that the translation is a mental model, not an exact definition (to my way of thinking). Paul From ethan at stoneleaf.us Thu Nov 23 11:05:59 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 23 Nov 2017 08:05:59 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: <5A16F1E7.2020407@stoneleaf.us> On 11/23/2017 04:01 AM, Ivan Levkivskyi wrote: > Lets just forget about two SO questions and dozens people who up-voted it. Questions/answers are routinely up-voted because they are well-written and/or informative, not just because somebody had a need for it or a use for the answer. 
The SO question linked in the OP wasn't even about using the yield/yield-from in real code, just about timings and differences in byte code and whether or not they "worked" (no real-world use-case). Hardly a clarion call for a fix. -- ~Ethan~ From guido at python.org Thu Nov 23 11:08:03 2017 From: guido at python.org (Guido van Rossum) Date: Thu, 23 Nov 2017 08:08:03 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: This thread is still going over the speed limit. Don't commit anything without my explicit approval. I know one thing for sure. The choice to make all comprehensions functions was quite intentional (even though alternatives were also discussed) and the extra scope is now part of the language definition. It can't be removed as a "bug fix". One thing that can be done without a PEP is making yield inside a comprehension a syntax error, since I don't think *that* was considered when the function scopes were introduced. A problem with dropping the "function-ness" of the comprehension by renaming the variables (as Ivan/Serhiy's plan seems to be?) would be what does it look like in the debugger -- can I still step through the loop and print the values of expressions? Or do I have to know the "renamed" names? And my mind boggles when considering a generator expression containing yield that is returned from a function. I tried this and cannot say I expected the outcome: def f(): return ((yield i) for i in range(3)) print(list(f())) In both Python 2 and Python 3 this prints [0, None, 1, None, 2, None] Even if there's a totally logical explanation for that, I still don't like it, and I think yield in a comprehension should be banned. From this it follows that we should also simply ban yield from comprehensions. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Nov 23 11:17:11 2017 From: brett at python.org (Brett Cannon) Date: Thu, 23 Nov 2017 16:17:11 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: I've now ended up in Guido's boat of needing a summary since I think this thread has grown to cover whether yield should be allowed in comprehensions, something about await in comprehensions, and now about leaking the loop variable (or some implementation detail). IOW there seems to be 3 separate discussions going on in a single thread. Any chance we can get some clear threads going for each independent topic to make it easier to discuss? I do have an opinion on all three topics (if I understand the topics accurately ?), but I don't want to contribute to the confusion of the discussion by mentioning them here. On Thu, Nov 23, 2017, 06:50 Ivan Levkivskyi, wrote: > On 23 November 2017 at 15:30, Paul Moore wrote: > >> On 23 November 2017 at 14:24, Ivan Levkivskyi >> wrote: >> >> My main concern is that comprehension is not equivalent to a for loop >> >> for a specific reason - the scope issue. Has anyone looked back at the >> >> original discussions to confirm *why* a function was used? 
>> >> My recollection:
>> >>
>> >> >>> i = 1
>> >> >>> a = [i for i in (1,2,3)]
>> >> >>> print(i)
>> >> 1
>> >>
>> >> Serhiy's approach (and your described expansion) would have print(i)
>> >> return NameError.
>> >
>> > Absolutely no, it will still print 1. The internal implementation will
>> > use unique ids internally (see https://bugs.python.org/issue10544 for
>> > details).
>>
>> Ok, cool. My main point still applies though - has anyone confirmed
>> why a function scope was considered necessary at the time of the
>> original implementation, but it's apparently not now? I'm pretty sure
>> it was a deliberate choice, not an accident.
>
> From what Nick explained on b.p.o. I understand that this is closer to the
> "accident" definition.
> Also the original issue https://bugs.python.org/issue1660500 doesn't have
> any discussion of the implementation _strategy_.
> So I tried to dig the mailing list, in the latest Guido's message I have
> found
> https://mail.python.org/pipermail/python-3000/2006-December/005218.html
> he still likes the idea of unique hidden ids (like Serhiy proposes now)
> and no function scopes. After that there is Nick's message
> https://mail.python.org/pipermail/python-3000/2006-December/005229.html
> where he says that he still likes pseudo-scopes more.
> Then I lost the track of discussion.
>
> It may well be Nick's intentional decision (and it has its merits) but I
> am not sure it was a conscious consensus.
> Nick could probably add more. Also I propose to wait and see when Serhiy
> will show us his complete implementation.
>
> --
> Ivan
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From storchaka at gmail.com  Thu Nov 23 12:06:33 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 23 Nov 2017 19:06:33 +0200
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID:

23.11.17 18:08, Guido van Rossum wrote:
> This thread is still going over the speed limit. Don't commit anything
> without my explicit approval.

I'm not going to write a single line of code while the decision about this issue is not made. This is not an easy issue.

> A problem with dropping the "function-ness" of the comprehension by
> renaming the variables (as Ivan/Serhiy's plan seems to be?) would be
> what does it look like in the debugger -- can I still step through the
> loop and print the values of expressions? Or do I have to know the
> "renamed" names?

Does the debugger support stepping inside a comprehension? Isn't the whole comprehension a single logical line? If this is supported, the debugger should see a local variable with a strange name ".0". This is a leaked implementation detail. I wouldn't be surprised if other implementation details leaked into the debugger too. The debugger can unmangle the inner variable names and hide local variables in the outer scope.

> And my mind boggles when considering a generator expression containing
> yield that is returned from a function. I tried this and cannot say I
> expected the outcome:
>
>   def f():
>       return ((yield i) for i in range(3))
>   print(list(f()))
>
> In both Python 2 and Python 3 this prints
>
>   [0, None, 1, None, 2, None]
>
> Even if there's a totally logical explanation for that, I still don't
> like it, and I think yield in a comprehension should be banned. From
> this it follows that we should also simply ban yield from comprehensions.

This behavior doesn't look correct to me and Ivan. Ivan explained that this function should be roughly equivalent to

    def f():
        t = [(yield i) for i in range(3)]
        return (x for x in t)

which should be equivalent to

    def f():
        t = []
        for i in range(3):
            t.append((yield i))
        return (x for x in t)

and list(f()) should be [0, 1, 2].

But while I know how to fix yield in comprehensions (just inline the code with some additions), I have no idea how to fix yield in generators. Ivan and I agreed that a SyntaxError in this case is better than the current behavior. Maybe we will find out how to implement the expected behavior.

From guido at python.org  Thu Nov 23 12:20:22 2017
From: guido at python.org (Guido van Rossum)
Date: Thu, 23 Nov 2017 09:20:22 -0800
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID:

On Thu, Nov 23, 2017 at 9:06 AM, Serhiy Storchaka wrote:

> 23.11.17 18:08, Guido van Rossum wrote:
>
>> This thread is still going over the speed limit. Don't commit anything
>> without my explicit approval.
>
> I'm not going to write a single line of code while the decision about this
> issue is not made. This is not an easy issue.

OK, I think Ivan at some point said he was waiting to see how your fix would look.

>> A problem with dropping the "function-ness" of the comprehension by
>> renaming the variables (as Ivan/Serhiy's plan seems to be?) would be what
>> does it look like in the debugger -- can I still step through the loop and
>> print the values of expressions? Or do I have to know the "renamed" names?
>
> Does the debugger support stepping inside a comprehension? Isn't the
> whole comprehension a single logical line? If this is supported, the
> debugger should see a local variable with a strange name ".0". This is a
> leaked implementation detail. I wouldn't be surprised if other
> implementation details leaked into the debugger too. The debugger can
> unmangle the inner variable names and hide local variables in the outer
> scope.

The debugger does stop at each iteration. It does see a local named ".0" (which of course cannot be printed directly, only using vars()). I suppose there currently is no way for the debugger to map the variable names to what they are named in the source, right? (If you think this is a useful thing to add, please open a separate issue.)

>> And my mind boggles when considering a generator expression containing
>> yield that is returned from a function. I tried this and cannot say I
>> expected the outcome:
>>
>>   def f():
>>       return ((yield i) for i in range(3))
>>   print(list(f()))
>>
>> In both Python 2 and Python 3 this prints
>>
>>   [0, None, 1, None, 2, None]
>>
>> Even if there's a totally logical explanation for that, I still don't
>> like it, and I think yield in a comprehension should be banned. From this
>> it follows that we should also simply ban yield from comprehensions.
>
> This behavior doesn't look correct to me and Ivan. Ivan explained that
> this function should be roughly equivalent to
>
>     def f():
>         t = [(yield i) for i in range(3)]
>         return (x for x in t)
>
> which should be equivalent to
>
>     def f():
>         t = []
>         for i in range(3):
>             t.append((yield i))
>         return (x for x in t)
>
> and list(f()) should be [0, 1, 2].
>
> But while I know how to fix yield in comprehensions (just inline the code
> with some additions), I have no idea how to fix yield in generators. Ivan
> and I agreed that a SyntaxError in this case is better than the current
> behavior. Maybe we will find out how to implement the expected behavior.

I doubt that anyone who actually wrote that had a valid expectation for it -- most likely they were coding by random modification or it was the result of a lack of understanding of generator expressions in general. I think the syntax error is the best option here.

I also think we should all stop posting for 24 hours as tempers have gotten heated. Happy Thanksgiving!

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From greg.ewing at canterbury.ac.nz  Thu Nov 23 16:56:58 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 24 Nov 2017 10:56:58 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: <5A17442A.6030002@canterbury.ac.nz>

Paul Moore wrote:
> has anyone confirmed
> why a function scope was considered necessary at the time of the
> original implementation, but it's apparently not now?

At the time I got the impression that nobody wanted to spend the time necessary to design and implement a subscope mechanism. What's changed is that we now have someone offering to do that.

--
Greg

From greg.ewing at canterbury.ac.nz  Thu Nov 23 17:20:31 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 24 Nov 2017 11:20:31 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID: <5A1749AF.6010500@canterbury.ac.nz>

Serhiy Storchaka wrote:
> Ivan explained that
> this function should be roughly equivalent to
>
>     def f():
>         t = [(yield i) for i in range(3)]
>         return (x for x in t)

This seems useless to me. It turns a lazy iterator into an eager one, which is a gross violation of the author's intent in using a generator expression.

--
Greg

From greg.ewing at canterbury.ac.nz  Thu Nov 23 17:26:39 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 24 Nov 2017 11:26:39 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID: <5A174B1F.3060905@canterbury.ac.nz>

Guido van Rossum wrote:
> The debugger does stop at each iteration. It does see a local named ".0"
> I suppose there currently is no way for the debugger to map the variable
> names to what they are named in the source, right?

If the hidden local were named "a.0" where "a" is the original name, mapping it back would be easy. It would also be easily understood even if it weren't mapped back.
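(As it stands, the hidden argument is already easy to observe from Python itself. A quick check -- CPython-specific, since both sys._getframe() and the ".0" name are implementation details:

    import sys

    # Copy the comprehension frame's locals from inside the comprehension:
    snapshot = [sys._getframe().f_locals.copy() for x in "ab"]
    print(snapshot[0])  # {'.0': <str_iterator object at 0x...>, 'x': 'a'}

The iterator arrives under the unspellable name ".0", and the loop variable "x" lives in the comprehension's own frame rather than the caller's.)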
--
Greg

From greg.ewing at canterbury.ac.nz  Thu Nov 23 17:29:51 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 24 Nov 2017 11:29:51 +1300
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID: <5A174BDF.7080007@canterbury.ac.nz>

Guido van Rossum wrote:
> the extra scope is now part of the language definition.
> It can't be removed as a "bug fix".

Does anyone actually rely on the scope-ness of comprehensions in any way other than the fact that it prevents local variable leakage? If not, probably nobody would notice if it were changed back.

--
Greg

From victor.stinner at gmail.com  Thu Nov 23 18:19:32 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 24 Nov 2017 00:19:32 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To:
References:
Message-ID:

Hi,

We are close to the 3.7a3 release and the bug is not fixed yet. I propose to revert the changes on memory allocators right now, and take time to design a proper fix which will respect all constraints:

https://github.com/python/cpython/pull/4532

Today, someone came to me on IRC to complain that calling Py_DecodeLocale() now crashes on Python 3.7. He is doing tests to embed Python on Android. Later he asked me about PyImport_AppendInittab(), but I don't know this function. He told me that it crashes in PyMem_Realloc()... But PyImport_AppendInittab() must be called before Py_Initialize()...

It confirms that Python is embedded and that the C API is used before Py_Initialize(). We don't know yet exactly how the C API is used, which functions are called before Py_Initialize(). Moreover, the PEP 432 implementation is still incomplete, and calling _PyRuntime_Initialize() is just not possible, since it's a private API which is not exported...

Victor

2017-11-18 1:01 GMT+01:00 Victor Stinner :
> Hi,
>
> The CPython internals evolved during the Python 3.7 cycle. I would like to
> know if we broke the C API or not.
>
> Nick Coghlan and Eric Snow are working on cleaning up the Python
> initialization with the ongoing PEP 432:
> https://www.python.org/dev/peps/pep-0432/
>
> Many global variables used by the "Python runtime" were moved to a new
> single "_PyRuntime" variable (a big structure made of sub-structures).
> See Include/internal/pystate.h.
>
> A side effect of moving variables from random files into header files
> is that it's no longer possible to fully initialize _PyRuntime at
> "compilation time". For example, previously, it was possible to refer
> to local C functions (functions declared with "static", so only visible
> in the current file). Now a new "initialization function" must be called.
>
> In short, it means that using the "Python runtime" before it's
> initialized by _PyRuntime_Initialize() is now likely to crash. For
> example, calling PyMem_RawMalloc() before calling
> _PyRuntime_Initialize() now calls a NULL function pointer: it
> dereferences a NULL pointer, and so immediately crashes with a
> segmentation fault.
>
> I'm writing this email to ask if this change is an issue or not for
> embedded Python and the Python C API. Is it still possible to call
> "all" functions of the C API before calling Py_Initialize()?
>
> I was bitten by the bug while reworking the Py_Main() function to
> split it into subfunctions and clean up the code to handle the command
> line arguments and environment variables. I fixed the issue in main()
> by calling _PyRuntime_Initialize() as soon as possible: it's now the
> first instruction of main() :-) (See Programs/python.c)
>
> To give a more concrete example: Py_DecodeLocale() is the recommended
> function to decode bytes from the operating system, but this function
> calls PyMem_RawMalloc(), which crashes before
> _PyRuntime_Initialize() is called. Is Py_DecodeLocale() used to
> initialize Python?
>
> For example, "void Py_SetProgramName(wchar_t *);" expects a text
> string, whereas main() gives argv as bytes. Calling
> Py_SetProgramName() from argv requires decoding bytes... So use
> Py_DecodeLocale()...
>
> Should we do something in Py_DecodeLocale()? Maybe crash if
> _PyRuntime_Initialize() wasn't called yet?
>
> Maybe the minimum change is to expose _PyRuntime_Initialize() in the
> public C API?
>
> Victor

From ncoghlan at gmail.com  Thu Nov 23 19:50:56 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 24 Nov 2017 10:50:56 +1000
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID:

On 23 November 2017 at 23:04, Ivan Levkivskyi wrote:
> I don't see why this particular case qualifies for such a radical measure
> as an exception to syntactic rules,
> instead of just fixing it (sorry Nick :-)

I've posted in more detail about this to the issue tracker, but the argument here is: because making it behave differently from the way it does now, while still hiding the loop iteration variable, potentially requires even more radical revisions to the lexical scoping rules :)

If somebody can come up with a clever trick to allow yield inside a comprehension to jump levels in a relatively intuitive way, that would actually be genuinely cool, but the lexical scoping rules mean it's trickier than it sounds.

Now that I frame the question that way, though, I'm also remembering that we didn't have "yield from" yet when I wrote the current comprehension implementation, and given that, it may be as simple as having an explicit yield expression in a comprehension imply delegation to a subgenerator.

If we went down that path, then a list comprehension like the following:

    results = [(yield future) for future in list_of_futures]

might be compiled as being equivalent to:

    def __listcomp_generator(iterable):
        result = []
        for future in iterable:
            result.append((yield future))
        return result

    results = yield from __listcomp_generator(list_of_futures)

The only difference between the current comprehension code and this idea is "an explicit yield expression in a comprehension implies the use of 'yield from' when calling the nested function".

For generator expressions, the adjustment would need to be slightly different: for those, we'd either need to prohibit yield expressions, or else say that if there's an explicit yield expression present anywhere, then we drop the otherwise implied yield expression. If we went down the latter path, then:

    gen = ((yield future) for future in list_of_futures)

and:

    gen = (future for future in list_of_futures)

would be two different ways of writing the same thing.
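To see what those proposed list comprehension semantics would mean in practice, here's the hand-desugared version driven manually (a sketch of the *proposed* behaviour, not what current CPython does; the helper name is illustrative only):

    def _listcomp_gen(iterable):
        result = []
        for item in iterable:
            result.append((yield item))
        return result

    def f(xs):
        # stands in for: results = [(yield x) for x in xs]
        results = yield from _listcomp_gen(xs)
        return results

    gen = f([10, 20, 30])
    print(next(gen))        # -> 10
    print(gen.send('a'))    # -> 20
    print(gen.send('b'))    # -> 30
    try:
        gen.send('c')
    except StopIteration as exc:
        print(exc.value)    # -> ['a', 'b', 'c']

That is, f() would yield each item to *its* caller, and the values sent back in are what end up in the list.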
The pay-off for allowing it would be that you could write things like: gen = (f(yield future) for future in list_of_futures) as a shorthand equivalent to: def gen(list_of_futures=list_of_futures): for future in list_of_futures: f(yield future) (Right now, you instead get "yield f(yield future)" as the innermost statement, which probably isn't what you wanted) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 23 20:17:43 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 24 Nov 2017 11:17:43 +1000 Subject: [Python-Dev] PEP 559 - built-in noop() In-Reply-To: References: Message-ID: On 24 November 2017 at 01:49, Barry Warsaw wrote: > On Nov 22, 2017, at 19:32, Victor Stinner > wrote: > > > > Aha, contextlib.nullcontext() was just added, cool! > > So, if I rewrite PEP 559 in terms of decorators it won?t get rejected? > The conceptual delta between knowing how to call "noop()" and how to write "def noop(): pass" is just a *teensy* bit smaller than that between knowing how to use: with nullcontext(value) as var: ... and how to write: @contextlib.contextmanager def nullcontext(enter_result): yield enter_result or: class nullcontext(object): def __init__(self, enter_result): self.enter_result = enter_result def __enter__(self): return self.enter_result def __exit__(self, *args): pass So the deciding line for me was "Should people need to know how to write their own context managers in order to have access to a null context manager?", and I eventually decided the right answer was "No", since the context management protocol is actually reasonably tricky conceptually, and even the simplest version still requires knowing how to use decorators and generators (as well as knowing which specific decorator to use). The conceptual step between calling and writing functions is much smaller, and defining your own re-usable functions is a more fundamental Python skill than defining your own context managers. And while I assume you were mostly joking, the idea of a "@functools.stub_call(result=None)" decorator to temporarily replace an otherwise expensive function call could be a genuinely interesting primitive. `unittest.mock` and other testing libraries already have a bunch of tools along those lines, so the principle at work there would be pattern extraction based on things people already do for themselves. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 23 20:31:14 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 24 Nov 2017 11:31:14 +1000 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On 24 November 2017 at 09:19, Victor Stinner wrote: > Hi, > > We are close to the 3.7a3 release and the bug is not fixed yet. I > propose to revert the changes on memory allocators right now, and take > time to design a proper fix which will respect all constraints. > > https://github.com/python/cpython/pull/4532 > > Today, someone came to me on IRC to complain that calling > Py_DecodeLocale() does now crash on Python 3.7. He is doing tests to > embed Python on Android. Later he asks me about > PyImport_AppendInittab(), but I don't know this function. He told me > that it does crash in PyMem_Realloc()... But PyImport_AppendInittab() > must be called before Py_Initialize()... 
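In practice that second item probably means a dedicated embedding binary plus a thin Python-level wrapper; a rough sketch of the shape (the "_testembed" helper and its "pre_init_apis" mode are hypothetical names here):

    import subprocess
    import unittest

    class PreInitAPITests(unittest.TestCase):
        def test_pre_init_apis(self):
            # Run a small embedding executable that calls the documented
            # pre-Py_Initialize() APIs (Py_DecodeLocale(), Py_SetProgramName(),
            # PyImport_AppendInittab(), ...) and prints a marker on success.
            proc = subprocess.run(
                ["./_testembed", "pre_init_apis"],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=60,
            )
            self.assertEqual(proc.returncode, 0, proc.stderr)
            self.assertIn(b"pre-init APIs OK", proc.stdout)

    if __name__ == "__main__":
        unittest.main()

The pre-initialization calls can't be exercised from inside an already-running interpreter, so a subprocess is unavoidable; everything else is detail.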
> > It confirms that Python is embedded and that the C API is used before > Py_Initialize(). > > We don't know yet exactly how the the C API is used, which functions > are called before Py_Initialize(). We do note some of them explicitly at https://docs.python.org/3/c-api/init.html (search for "before Py"). What we've been missing is a test case that ensures https://docs.python.org/3/extending/embedding.html#very-high-level-embedding actually works reliably (hence how we managed to break it by way of the internal state management refactoring). Once that core regression has been fixed, we can review the docs and the test suite and come up with: - a consolidated list of *all* the APIs that can safely be called before Py_Initialize - one or more new or updated test cases to ensure that any not yet tested pre-initialization APIs actually work as intended > Moreover, PEP 432 implementation is > still incomplete, and calling _PyRuntime_Initialize() is just not > possible, since it's a private API which is not exported... > Even after we reach the point of exposing the more fine-grained initialisation API (which I'm now thinking we may be able to do for 3.8 given Eric & Victor's work on it for 3.7), we're still going to have to ensure the existing configuration API keeps working as expected. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From v+python at g.nevcal.com Thu Nov 23 21:21:33 2017 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 23 Nov 2017 18:21:33 -0800 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On 11/23/2017 5:31 PM, Nick Coghlan wrote: > - a consolidated list of *all* the APIs that can safely be called > before Py_Initialize So it is interesting to know that list, of course, but the ones that are to be supported and documented might be a smaller list. Or might not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 23 23:01:15 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 24 Nov 2017 14:01:15 +1000 Subject: [Python-Dev] Python initialization and embedded Python In-Reply-To: References: Message-ID: On 24 November 2017 at 12:21, Glenn Linderman wrote: > On 11/23/2017 5:31 PM, Nick Coghlan wrote: > > - a consolidated list of *all* the APIs that can safely be called before > Py_Initialize > > So it is interesting to know that list, of course, but the ones that are > to be supported and documented might be a smaller list. Or might not. > Ah, sorry - "safely" was a bit ambiguous there. By "safely" I meant "CPython has a regression test that ensures that particular API will keep working before Py_Initialize(), regardless of any changes we may make to the way we handle interpreter initialization". We've long had a lot of other APIs that happen to work well enough for CPython itself to get away with using them during the startup process, but the official position on those is "Don't count on these APIs working prior to Py_Initialize() in the general case - we only get away with it because we can adjust the exact order in which we do things in order to account for any other changes that break it". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From storchaka at gmail.com  Fri Nov 24 01:43:01 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 24 Nov 2017 08:43:01 +0200
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To:
References:
Message-ID:

24.11.17 04:21, Glenn Linderman wrote:
> On 11/23/2017 5:31 PM, Nick Coghlan wrote:
>> - a consolidated list of *all* the APIs that can safely be called
>> before Py_Initialize
> So it is interesting to know that list, of course, but the ones that are
> to be supported and documented might be a smaller list. Or might not.

This is a small list, 11 functions.

From hrvoje.niksic at avl.com  Fri Nov 24 04:58:19 2017
From: hrvoje.niksic at avl.com (Hrvoje Niksic)
Date: Fri, 24 Nov 2017 10:58:19 +0100
Subject: [Python-Dev] Tricky way of creating a generator via a comprehension expression
In-Reply-To:
References: <20171123124928.304986e9@fsol>
Message-ID: <306cf370-51f9-30bc-ed74-92a6f4033a59@avl.com>

Guido van Rossum writes:
> And my mind boggles when considering a generator expression
> containing yield that is returned from a function. I tried this
> and cannot say I expected the outcome:
>
>   def f():
>       return ((yield i) for i in range(3))
>   print(list(f()))
>
> In both Python 2 and Python 3 this prints
>
>   [0, None, 1, None, 2, None]
>
> Even if there's a totally logical explanation for that, I still
> don't like it, and I think yield in a comprehension should be
> banned. From this it follows that we should also simply ban
> yield from comprehensions.

Serhiy Storchaka writes:
> This behavior doesn't look correct to me and Ivan.

The behavior is surprising, but it seems quite consistent with how generator expressions are defined in the language. A generator expression is defined by the language reference as "compact generator notation in parentheses", which yields (sic!) a "new generator object". I take that to mean that a generator expression is equivalent to defining and calling a generator function. f() can be transformed to:

    def f():
        def _gen():
            for i in range(3):
                ret = yield i
                yield ret
        return _gen()

The transformed version shows that there are *two* yields per iteration (one explicitly written and one inserted by the transformation), which is the reason why 6 values are produced. The None values come from the list constructor calling __next__() on the generator, which (as per the documentation) sends None into the generator. This None value is yielded after the "i" is yielded, which is why the Nones follow the numbers.

Hrvoje

From victor.stinner at gmail.com  Fri Nov 24 08:23:47 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 24 Nov 2017 14:23:47 +0100
Subject: [Python-Dev] Python initialization and embedded Python
In-Reply-To:
References:
Message-ID:

I proposed a PR to explicitly list the functions that are safe to call before Py_Initialize():

https://bugs.python.org/issue32124
https://github.com/python/cpython/pull/4540

I found more than 11 functions... I also found variables ;-)

Victor

2017-11-24 5:01 GMT+01:00 Nick Coghlan :
> On 24 November 2017 at 12:21, Glenn Linderman wrote:
>>
>> On 11/23/2017 5:31 PM, Nick Coghlan wrote:
>>
>> - a consolidated list of *all* the APIs that can safely be called before
>> Py_Initialize
>>
>> So it is interesting to know that list, of course, but the ones that are
>> to be supported and documented might be a smaller list. Or might not.
>
> Ah, sorry - "safely" was a bit ambiguous there.
By "safely" I meant "CPython > has a regression test that ensures that particular API will keep working > before Py_Initialize(), regardless of any changes we may make to the way we > handle interpreter initialization". > > We've long had a lot of other APIs that happen to work well enough for > CPython itself to get away with using them during the startup process, but > the official position on those is "Don't count on these APIs working prior > to Py_Initialize() in the general case - we only get away with it because we > can adjust the exact order in which we do things in order to account for any > other changes that break it". > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From status at bugs.python.org Fri Nov 24 12:09:49 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 24 Nov 2017 18:09:49 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20171124170949.0882156A47@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-11-17 - 2017-11-24) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 6267 ( +6) closed 37610 (+57) total 43877 (+63) Open issues with patches: 2416 Issues opened (45) ================== #32067: Deprecate accepting unrecognized braces in regular expressions https://bugs.python.org/issue32067 opened by serhiy.storchaka #32068: textpad from curses package isn't handling backspace key https://bugs.python.org/issue32068 opened by bacher09 #32070: Clarify the behavior of the staticmethod builtin https://bugs.python.org/issue32070 opened by haikuginger #32071: Add py.test-like "-k" test selection to unittest https://bugs.python.org/issue32071 opened by jonash #32072: Issues with binary plists https://bugs.python.org/issue32072 opened by serhiy.storchaka #32073: Add copy_directory_metadata parameter to shutil.copytree https://bugs.python.org/issue32073 opened by desbma #32075: Expose ZipImporter Type Object in the include header files. https://bugs.python.org/issue32075 opened by Decorater #32076: Expose LockFile on Windows https://bugs.python.org/issue32076 opened by pitrou #32077: Documentation: Some Unicode object functions don't indicate wh https://bugs.python.org/issue32077 opened by Mathew M. #32079: version install 3.6.3 hangs in test_socket https://bugs.python.org/issue32079 opened by Ra??l Alvarez #32080: Error Installing Python 3.6.3 on ubuntu 16.04 https://bugs.python.org/issue32080 opened by sachin #32081: ipaddress should support fast IP lookups https://bugs.python.org/issue32081 opened by Attila Nagy #32082: atexit module: allow getting/setting list of handlers directly https://bugs.python.org/issue32082 opened by erik.bray #32083: sqlite3 Cursor.description can't return column types https://bugs.python.org/issue32083 opened by kyoshidajp #32084: [Security] http.server can be abused to redirect to (almost) a https://bugs.python.org/issue32084 opened by vstinner #32085: [Security] A New Era of SSRF - Exploiting URL Parser in Trendi https://bugs.python.org/issue32085 opened by vstinner #32087: deprecated-removed directive generates overlapping msgids in . 
https://bugs.python.org/issue32087 opened by cocoatomo #32089: In developer mode (-X dev), ResourceWarning is only emited onc https://bugs.python.org/issue32089 opened by vstinner #32090: test_put() of test_multiprocessing queue tests has a race cond https://bugs.python.org/issue32090 opened by vstinner #32091: test_s_option() of test_site.HelperFunctionsTests failed on x8 https://bugs.python.org/issue32091 opened by vstinner #32092: mock.patch with autospec does not consume self / cls argument https://bugs.python.org/issue32092 opened by cbelu #32093: macOS: implement time.thread_time() using thread_info() https://bugs.python.org/issue32093 opened by vstinner #32096: Py_DecodeLocale() fails if used before the runtime is initiali https://bugs.python.org/issue32096 opened by eric.snow #32097: doctest does not consider \r\n a https://bugs.python.org/issue32097 opened by X-Istence #32098: Hardcoded value in Lib/test/test_os.py:L1324:URandomTests.get_ https://bugs.python.org/issue32098 opened by hackan #32101: Add PYTHONDEVMODE=1 to enable the developer mode https://bugs.python.org/issue32101 opened by vstinner #32102: Add "capture_output=True" option to subprocess.run https://bugs.python.org/issue32102 opened by ncoghlan #32104: add method throw() to asyncio.Task https://bugs.python.org/issue32104 opened by Oleg K2 #32107: test_uuid uses the incorrect bitmask https://bugs.python.org/issue32107 opened by barry #32108: configparser bug: section is emptied if you assign a section t https://bugs.python.org/issue32108 opened by simonltwick #32110: Make codecs.StreamReader.read() more compatible with read() of https://bugs.python.org/issue32110 opened by serhiy.storchaka #32112: Should uuid.UUID() accept another UUID() instance? https://bugs.python.org/issue32112 opened by mjpieters #32113: Strange behavior with await in a generator expression https://bugs.python.org/issue32113 opened by levkivskyi #32114: The get_event_loop change in bpo28613 did not update the docum https://bugs.python.org/issue32114 opened by r.david.murray #32115: Ignored SIGCHLD causes asyncio.Process.wait to hang forever https://bugs.python.org/issue32115 opened by rogpeppe #32116: CSV import and export simplified https://bugs.python.org/issue32116 opened by paullongnet #32117: Tuple unpacking in return and yield statements https://bugs.python.org/issue32117 opened by dacut #32118: Docs: add note about sequence comparisons containing non-order https://bugs.python.org/issue32118 opened by Dubslow #32119: test_notify_all() of test_multiprocessing_forkserver failed on https://bugs.python.org/issue32119 opened by vstinner #32121: tracemalloc.Traceback.format() should have an option to revers https://bugs.python.org/issue32121 opened by pitrou #32122: Improve -x option documentation https://bugs.python.org/issue32122 opened by serhiy.storchaka #32123: Make the API of argparse.HelpFormatter public https://bugs.python.org/issue32123 opened by Bernhard10 #32124: Document functions safe to be called before Py_Initialize() https://bugs.python.org/issue32124 opened by vstinner #32125: Remove global configuration variable Py_UseClassExceptionsFlag https://bugs.python.org/issue32125 opened by vstinner #32126: [asyncio] test failure when the platform lacks a functional s https://bugs.python.org/issue32126 opened by xdegaye Most recent 15 issues with no replies (15) ========================================== #32125: Remove global configuration variable Py_UseClassExceptionsFlag https://bugs.python.org/issue32125 #32123: Make the API of 
argparse.HelpFormatter public https://bugs.python.org/issue32123 #32122: Improve -x option documentation https://bugs.python.org/issue32122 #32115: Ignored SIGCHLD causes asyncio.Process.wait to hang forever https://bugs.python.org/issue32115 #32114: The get_event_loop change in bpo28613 did not update the docum https://bugs.python.org/issue32114 #32098: Hardcoded value in Lib/test/test_os.py:L1324:URandomTests.get_ https://bugs.python.org/issue32098 #32090: test_put() of test_multiprocessing queue tests has a race cond https://bugs.python.org/issue32090 #32087: deprecated-removed directive generates overlapping msgids in . https://bugs.python.org/issue32087 #32085: [Security] A New Era of SSRF - Exploiting URL Parser in Trendi https://bugs.python.org/issue32085 #32082: atexit module: allow getting/setting list of handlers directly https://bugs.python.org/issue32082 #32081: ipaddress should support fast IP lookups https://bugs.python.org/issue32081 #32080: Error Installing Python 3.6.3 on ubuntu 16.04 https://bugs.python.org/issue32080 #32079: version install 3.6.3 hangs in test_socket https://bugs.python.org/issue32079 #32076: Expose LockFile on Windows https://bugs.python.org/issue32076 #32073: Add copy_directory_metadata parameter to shutil.copytree https://bugs.python.org/issue32073 Most recent 15 issues waiting for review (15) ============================================= #32124: Document functions safe to be called before Py_Initialize() https://bugs.python.org/issue32124 #32121: tracemalloc.Traceback.format() should have an option to revers https://bugs.python.org/issue32121 #32118: Docs: add note about sequence comparisons containing non-order https://bugs.python.org/issue32118 #32117: Tuple unpacking in return and yield statements https://bugs.python.org/issue32117 #32114: The get_event_loop change in bpo28613 did not update the docum https://bugs.python.org/issue32114 #32110: Make codecs.StreamReader.read() more compatible with read() of https://bugs.python.org/issue32110 #32107: test_uuid uses the incorrect bitmask https://bugs.python.org/issue32107 #32101: Add PYTHONDEVMODE=1 to enable the developer mode https://bugs.python.org/issue32101 #32096: Py_DecodeLocale() fails if used before the runtime is initiali https://bugs.python.org/issue32096 #32092: mock.patch with autospec does not consume self / cls argument https://bugs.python.org/issue32092 #32089: In developer mode (-X dev), ResourceWarning is only emited onc https://bugs.python.org/issue32089 #32087: deprecated-removed directive generates overlapping msgids in . https://bugs.python.org/issue32087 #32077: Documentation: Some Unicode object functions don't indicate wh https://bugs.python.org/issue32077 #32075: Expose ZipImporter Type Object in the include header files. 
https://bugs.python.org/issue32075 #32073: Add copy_directory_metadata parameter to shutil.copytree https://bugs.python.org/issue32073 Top 10 most discussed issues (10) ================================= #10544: yield expression inside generator expression does nothing https://bugs.python.org/issue10544 26 msgs #32096: Py_DecodeLocale() fails if used before the runtime is initiali https://bugs.python.org/issue32096 25 msgs #27535: Ignored ResourceWarning warnings leak memory in warnings regis https://bugs.python.org/issue27535 18 msgs #32089: In developer mode (-X dev), ResourceWarning is only emited onc https://bugs.python.org/issue32089 10 msgs #32030: PEP 432: Rewrite Py_Main() https://bugs.python.org/issue32030 9 msgs #32071: Add py.test-like "-k" test selection to unittest https://bugs.python.org/issue32071 9 msgs #32107: test_uuid uses the incorrect bitmask https://bugs.python.org/issue32107 9 msgs #30811: A venv created and activated from within a virtualenv uses the https://bugs.python.org/issue30811 8 msgs #32075: Expose ZipImporter Type Object in the include header files. https://bugs.python.org/issue32075 6 msgs #32121: tracemalloc.Traceback.format() should have an option to revers https://bugs.python.org/issue32121 6 msgs Issues closed (54) ================== #1102: Add support for _msi.Record.GetString() and _msi.Record.GetInt https://bugs.python.org/issue1102 closed by berker.peksag #12239: msilib VT_EMPTY SummaryInformation properties raise an error ( https://bugs.python.org/issue12239 closed by berker.peksag #12276: 3.x ignores sys.tracebacklimit=0 https://bugs.python.org/issue12276 closed by serhiy.storchaka #12382: [msilib] Obscure exception message when trying to open a non-e https://bugs.python.org/issue12382 closed by berker.peksag #19610: Give clear error messages for invalid types used for setup.py https://bugs.python.org/issue19610 closed by berker.peksag #23719: PEP 475: port test_eintr to Windows https://bugs.python.org/issue23719 closed by vstinner #26606: logging.baseConfig is missing the encoding parameter https://bugs.python.org/issue26606 closed by vinay.sajip #27068: Add a detach() method to subprocess.Popen https://bugs.python.org/issue27068 closed by vstinner #28684: [asyncio] bind() on a unix socket raises PermissionError on An https://bugs.python.org/issue28684 closed by xdegaye #29040: building Android with android-ndk-r14 https://bugs.python.org/issue29040 closed by xdegaye #29184: skip tests of test_socketserver when bind() raises PermissionE https://bugs.python.org/issue29184 closed by xdegaye #29185: test_distutils fails on Android API level 24 https://bugs.python.org/issue29185 closed by xdegaye #29556: Remove unused #include https://bugs.python.org/issue29556 closed by Chi Hsuan Yen #30456: 2to3 docs: example of fix for duplicates in second argument of https://bugs.python.org/issue30456 closed by berker.peksag #30904: Python 3 logging HTTPHandler sends duplicate Host header https://bugs.python.org/issue30904 closed by vinay.sajip #30987: Support for ISO-TP protocol in SocketCAN https://bugs.python.org/issue30987 closed by berker.peksag #30989: Sort only when needed in TimedRotatingFileHandler's getFilesTo https://bugs.python.org/issue30989 closed by vinay.sajip #31324: support._match_test() used by test.bisect is very inefficient https://bugs.python.org/issue31324 closed by vstinner #31325: req_rate is a namedtuple type rather than instance https://bugs.python.org/issue31325 closed by rhettinger #31520: ResourceWarning: unclosed w 
https://bugs.python.org/issue31520 closed by vstinner #31543: Optimize wrapper descriptors using FASTCALL https://bugs.python.org/issue31543 closed by vstinner #31626: Writing in freed memory in _PyMem_DebugRawRealloc() after shri https://bugs.python.org/issue31626 closed by vstinner #31672: string.Template should use re.ASCII flag https://bugs.python.org/issue31672 closed by barry #31701: faulthandler dumps 'Windows fatal exception: code 0xe06d7363' https://bugs.python.org/issue31701 closed by vstinner #31805: support._is_gui_available() hangs on x86-64 Sierra 3.6/3.x bui https://bugs.python.org/issue31805 closed by vstinner #31897: Unexpected exceptions in plistlib.loads https://bugs.python.org/issue31897 closed by serhiy.storchaka #31911: Use malloc_usable_size() in pymalloc for realloc https://bugs.python.org/issue31911 closed by vstinner #31943: Add asyncio.Handle.cancelled() method https://bugs.python.org/issue31943 closed by asvetlov #32022: Python crashes with mutually recursive code https://bugs.python.org/issue32022 closed by vstinner #32025: Add time.thread_time() https://bugs.python.org/issue32025 closed by vstinner #32031: Do not use the canonical path in pydoc test_mixed_case_module_ https://bugs.python.org/issue32031 closed by xdegaye #32042: Option for comparing values instead of reprs in doctest https://bugs.python.org/issue32042 closed by rhettinger #32043: Add a new -X dev option: "developer mode" https://bugs.python.org/issue32043 closed by vstinner #32047: asyncio: enable debug mode when -X dev is used https://bugs.python.org/issue32047 closed by vstinner #32050: Fix -x option documentation https://bugs.python.org/issue32050 closed by vstinner #32060: Should an ABC fail if no abstract methods are defined? https://bugs.python.org/issue32060 closed by rhettinger #32064: python msilib view.fetch is not returning none https://bugs.python.org/issue32064 closed by berker.peksag #32065: range_iterator doesn't have length, leads to surprised result https://bugs.python.org/issue32065 closed by r.david.murray #32066: asyncio: Support pathlib.Path in create_unix_connection; sock https://bugs.python.org/issue32066 closed by yselivanov #32069: Drop loop._make_legacy_ssl_transport https://bugs.python.org/issue32069 closed by asvetlov #32074: Might be a wrong implementation https://bugs.python.org/issue32074 closed by serhiy.storchaka #32078: string result of str(bytes()) in Python3 https://bugs.python.org/issue32078 closed by vstinner #32086: C API: Clarify which C functions are safe to be called before https://bugs.python.org/issue32086 closed by vstinner #32088: Display DeprecationWarning, PendingDeprecationWarning and Impo https://bugs.python.org/issue32088 closed by lukasz.langa #32094: _args_from_interpreter_flags() doesn't keep -X options https://bugs.python.org/issue32094 closed by vstinner #32095: AIX: ModuleNotFoundError: No module named '_ctypes' - make ins https://bugs.python.org/issue32095 closed by zach.ware #32099: Use range in itertools roundrobin recipe https://bugs.python.org/issue32099 closed by rhettinger #32100: IDLE: PathBrowser isn't working https://bugs.python.org/issue32100 closed by terry.reedy #32103: Inconsistent text at TypeError in concatenation https://bugs.python.org/issue32103 closed by serhiy.storchaka #32105: Please add asyncio.BaseEventLoop.connect_accepted_socket Versi https://bugs.python.org/issue32105 closed by yselivanov #32106: Move sysmodule.c to Modules? 
https://bugs.python.org/issue32106 closed by serhiy.storchaka
#32109: Separated square brackets will generate a tuple instead of a l
https://bugs.python.org/issue32109 closed by berker.peksag
#32111: ObjectListView crashes on python3.5
https://bugs.python.org/issue32111 closed by serhiy.storchaka
#32120: python 3.6.0 is not importing sqlite3
https://bugs.python.org/issue32120 closed by berker.peksag

From levkivskyi at gmail.com Fri Nov 24 13:20:25 2017
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Fri, 24 Nov 2017 19:20:25 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

OK, so my 24 hours are over :-)

On 24 November 2017 at 01:50, Nick Coghlan wrote:
> On 23 November 2017 at 23:04, Ivan Levkivskyi wrote:
>
>> I don't see why this particular case qualifies for such a radical measure
>> as an exception to syntactic rules,
>> instead of just fixing it (sorry Nick :-)
>>
>
> I've posted in more detail about this to the issue tracker, but the
> argument here is: because making it behave differently from the way it does
> now while still hiding the loop iteration variable potentially requires
> even more radical revisions to the lexical scoping rules :)
> If somebody can come up with a clever trick to allow yield inside a
> comprehension to jump levels in a relatively intuitive way, that would
> actually be genuinely cool, but the lexical scoping rules mean it's
> trickier than it sounds.

"potentially" is the key word here. The plan is to avoid "more radical revisions".

> Now that I frame the question that way, though, I'm also remembering that
> we didn't have "yield from" yet when I wrote the current comprehension
> implementation, and given that, it may be as simple as having an explicit
> yield expression in a comprehension imply delegation to a subgenerator.

My understanding is that this is exactly how async comprehensions are currently implemented and why they work as one would naively expect, i.e. `await` is "bound" to the surrounding async def, not to the implicit scope async def. So an async comprehension is just equivalent to a for-loop.

However, although the "implicit yield from" solution is simpler than the one proposed by Serhiy, I still like the latter one more. The former has its strange cases, for example the one I mentioned before in this thread:

>>> async def f():
...     for i in range(3):
...         yield i
...
>>> async def g():
...     return [(yield i) async for i in f()]
...
>>> g().send(None)
Traceback (most recent call last):
  File "", line 1, in
  File "", line 2, in g
TypeError: object async_generator can't be used in 'await' expression

My understanding is that the strange error is exactly because of the implicit `yield from`. With Serhiy's approach this would work.

Another minor bonus of Serhiy's idea is a performance gain: we will not push an execution frame.

-- Ivan

-------------- next part --------------
An HTML attachment was scrubbed...
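A minimal sketch (an assumed example, not code from the thread) of the well-behaved case described above, where the await inside a comprehension binds to the enclosing coroutine, so the comprehension behaves exactly like the equivalent plain loop:

    import asyncio

    async def double(i):
        await asyncio.sleep(0)  # stand-in for real asynchronous work
        return i * 2

    async def with_comprehension():
        # the await binds to with_comprehension(), not to the implicit scope
        return [await double(i) for i in range(3)]

    async def with_loop():
        result = []
        for i in range(3):
            result.append(await double(i))
        return result

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(with_comprehension()))  # [0, 2, 4]
    print(loop.run_until_complete(with_loop()))           # [0, 2, 4]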
URL: From guido at python.org Fri Nov 24 19:22:17 2017 From: guido at python.org (Guido van Rossum) Date: Fri, 24 Nov 2017 16:22:17 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: The more I hear about this topic, the more I think that `await`, `yield` and `yield from` should all be banned from occurring in all comprehensions and generator expressions. That's not much different from disallowing `return` or `break`. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Fri Nov 24 20:04:38 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Sat, 25 Nov 2017 02:04:38 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 01:22, Guido van Rossum wrote: > The more I hear about this topic, the more I think that `await`, `yield` > and `yield from` should all be banned from occurring in all comprehensions > and generator expressions. That's not much different from disallowing > `return` or `break`. > > IIUC this would essentially mean rejecting PEP 530. What do you think about banning `await`, `yield` and `yield from` only from generator expressions? Comprehensions can be fixed (just make them equivalent to for-loops without leaks as discussed). -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Nov 24 20:06:33 2017 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 24 Nov 2017 17:06:33 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote: > The more I hear about this topic, the more I think that `await`, `yield` and > `yield from` should all be banned from occurring in all comprehensions and > generator expressions. That's not much different from disallowing `return` > or `break`. I would say that banning `yield` and `yield from` is like banning `return` and `break`, but banning `await` is like banning function calls. There's no reason for most users to even know that `await` is related to generators, so a rule disallowing it inside comprehensions is just confusing. AFAICT 99% of the confusion around async/await is because people think of them as being related to generators, when from the user point of view it's not true at all and `await` is just a funny function-call syntax. Also, at the language level, there's a key difference between these cases. A comprehension has implicit `yield`s in it, and then mixing in explicit `yield`s as well obviously leads to confusion. But when you use an `await` in a comprehension, that turns it into an async generator expression (thanks to PEP 530), and in an async generator, `yield` and `await` use two separate, unrelated channels. So there's no confusion or problem with having `await` inside a comprehension. -n -- Nathaniel J. 
Smith -- https://vorpus.org

From yselivanov.ml at gmail.com Fri Nov 24 20:53:05 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 24 Nov 2017 20:53:05 -0500
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On Fri, Nov 24, 2017 at 7:22 PM, Guido van Rossum wrote:
> The more I hear about this topic, the more I think that `await`, `yield` and
> `yield from` should all be banned from occurring in all comprehensions and
> generator expressions. That's not much different from disallowing `return`
> or `break`.

IMO disallowing await in comprehensions would be a huge mistake. I've personally seen a lot of code that uses the syntax. Moreover, I haven't seen any complaints about await expressions in comprehensions in this thread or *anywhere else*.

In this thread I only see complaints about the 'yield' expression, and that's easy to understand -- yield in comprehensions is just unusable. 'await' is fundamentally different, because it's essentially a calling convention.

While "provisional" status for a PEP ultimately gives us power to remove every trace of it in the next release, I think that decisions like that have to be based on some commonly reported problems and complaints.

My 2c.
Yury

From ethan at stoneleaf.us Fri Nov 24 21:09:24 2017
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 24 Nov 2017 18:09:24 -0800
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: <5A18D0D4.8020702@stoneleaf.us>

On 11/24/2017 04:22 PM, Guido van Rossum wrote:
> The more I hear about this topic, the more I think that `await`, `yield` and `yield from` should all be banned from
> occurring in all comprehensions and generator expressions. That's not much different from disallowing `return` or `break`.

For me, the deciding factor would be the effect upon:

1) the containing function (if any); and
2) the result of the genexp/comprehension

I think of generator expressions / comprehensions as self-contained units of code that will have no (side-)effects upon the containing function and/or surrounding code (and a containing function is not necessary), and that each will return the type expressed by the syntax. In other words:

    [l for l in ...]       -> list
    {s for s in ...}       -> set
    {k:v for k, v in ...}  -> dict
    (g for g in ...)       -> genexp

Since 'yield' and 'yield from' invalidate those, I'm in favor of declaring them syntax errors. If 'await' does not invalidate the above constraints then I'm fine with allowing it.

--
~Ethan~

From chris.jerdonek at gmail.com Fri Nov 24 21:21:29 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Fri, 24 Nov 2017 18:21:29 -0800
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On Fri, Nov 24, 2017 at 5:06 PM, Nathaniel Smith wrote:
> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote:
>> The more I hear about this topic, the more I think that `await`, `yield` and
>> `yield from` should all be banned from occurring in all comprehensions and
>> generator expressions. That's not much different from disallowing `return`
>> or `break`.
> I would say that banning `yield` and `yield from` is like banning
> `return` and `break`, but banning `await` is like banning function
> calls.

I agree. I was going to make the point earlier in the thread that using "await" can mostly just be thought of as a delayed function call, but it didn't seem at risk of getting banned until Guido's comment so I didn't say anything (and there were too many comments anyways).

I think it's in a different category for that reason. It's much easier to reason about than, say, "yield" and "yield from".

--Chris

From ncoghlan at gmail.com Fri Nov 24 21:25:55 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 25 Nov 2017 12:25:55 +1000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On 25 November 2017 at 11:04, Ivan Levkivskyi wrote:
> On 25 November 2017 at 01:22, Guido van Rossum wrote:
>
>> The more I hear about this topic, the more I think that `await`, `yield`
>> and `yield from` should all be banned from occurring in all comprehensions
>> and generator expressions. That's not much different from disallowing
>> `return` or `break`.
>
> IIUC this would essentially mean rejecting PEP 530.
> What do you think about banning `await`, `yield` and `yield from` only
> from generator expressions?
> Comprehensions can be fixed (just make them equivalent to for-loops
> without leaks as discussed).

I'm leaning towards this as well - the immediate evaluation of comprehensions means that injecting an automatic "yield from" will do the right thing due to this component of PEP 380: https://www.python.org/dev/peps/pep-0380/#the-refactoring-principle

Restating how folks intuitively think Py3 comprehensions containing "yield" and "await" should work ("__var" indicates hidden variables only accessible to the interpreter):

    result = [(yield x) for x in iterable]

    # Readers expect this to work much like
    __listcomp1 = []
    for __x1 in iterable:
        __listcomp1.append(yield __x1)
    result = __listcomp1

PEP 380's refactoring principle then means the above is required to be equivalent to:

    def __listcomp1(__outermost_iterable):
        __result = []
        for x in __outermost_iterable:
            __result.append(yield x)
        return __result

    result = yield from __listcomp1(iterable)

Which means that the only missing piece in the current implementation is inserting the implicit "yield from" into the function containing the comprehension. No need for new lexical scoping conventions, no need for syntactical special cases, just teaching this part of the compiler to delegate to a subgenerator correctly.

The compiler should just need a small local adjustment to check whether the implicit nested scope is a generator scope or not (and any Python source compiler, including CPython's, already needs to track that information in order to ensure it emits a generator function instead of a regular synchronous function).

From a language evolution perspective, such a change is relatively straightforward to explain on the basis of "Comprehensions became implicitly nested scopes in 3.0, but we didn't implement PEP 380's 'yield from' construct until Python 3.3, and it then took us until 3.7 to realise we could combine them to make yield expressions inside comprehensions behave more intuitively".

For "await", I'm pretty sure the current semantics are actually OK, but I confess I haven't been in a position to have to explain them to anyone either.
I honestly don't see a good way to salvage yield or yield from inside generator expressions though. If we dropped the implicit "yield" when an explicit one is present, that would break the symmetry between "[(yield x) for x in iterable]" and "list((yield x) for x in iterable)", while inserting an implicit "yield from" (as I'm now suggesting we do for comprehensions) wouldn't work at all (since a genexp returns a generator *function*, not a generator-iterator object).

I'm also pretty sure it will be difficult for the compiler to tell the difference between explicit and implicit yield expressions in the subfunction (as there's nothing else anywhere in the language that needs to make that distinction).

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From python at mrabarnett.plus.com Fri Nov 24 22:03:02 2017
From: python at mrabarnett.plus.com (MRAB)
Date: Sat, 25 Nov 2017 03:03:02 +0000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <20171123124928.304986e9@fsol>
Message-ID: 

On 2017-11-25 02:21, Chris Jerdonek wrote:
> On Fri, Nov 24, 2017 at 5:06 PM, Nathaniel Smith wrote:
>> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote:
>>> The more I hear about this topic, the more I think that `await`, `yield` and
>>> `yield from` should all be banned from occurring in all comprehensions and
>>> generator expressions. That's not much different from disallowing `return`
>>> or `break`.
>>
>> I would say that banning `yield` and `yield from` is like banning
>> `return` and `break`, but banning `await` is like banning function
>> calls.
>
> I agree. I was going to make the point earlier in the thread that
> using "await" can mostly just be thought of as a delayed function
> call, but it didn't seem at risk of getting banned until Guido's
> comment so I didn't say anything (and there were too many comments
> anyways).
>
> I think it's in a different category for that reason. It's much easier
> to reason about than, say, "yield" and "yield from".
>
+1

Seeing "await" there didn't/doesn't confuse me; seeing "yield" or "yield from" does.

From guido at python.org Fri Nov 24 22:30:56 2017
From: guido at python.org (Guido van Rossum)
Date: Fri, 24 Nov 2017 19:30:56 -0800
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote:
> The more I hear about this topic, the more I think that `await`, `yield`
> and `yield from` should all be banned from occurring in all comprehensions
> and generator expressions. That's not much different from disallowing
> `return` or `break`.

From the responses it seems that I tried to simplify things too far. Let's say that `await` in comprehensions is fine, as long as that comprehension is contained in an `async def`. While we *could* save `yield [from]` in comprehensions, I still see it as mostly a source of confusion, and the fact that the presence of `yield [from]` *implicitly* makes the surrounding `def` a generator makes things worse. It just requires too many mental contortions to figure out what it does.

I still propose to rule out all of the above from generator expressions, because those can escape from the surrounding scope.
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Nov 25 00:04:40 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Nov 2017 15:04:40 +1000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 13:30, Guido van Rossum wrote: > > On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote: >> >> The more I hear about this topic, the more I think that `await`, `yield` and `yield from` should all be banned from occurring in all comprehensions and generator expressions. That's not much different from disallowing `return` or `break`. > > > From the responses it seems that I tried to simplify things too far. Let's say that `await` in comprehensions is fine, as long as that comprehension is contained in an `async def`. While we *could* save `yield [from]` in comprehensions, I still see it as mostly a source of confusion, and the fact that the presence of `yield [from]` *implicitly* makes the surrounding `def` a generator makes things worse. It just requires too many mental contortions to figure out what it does. While I'm OK with ruling the interaction of all of these subexpressions with generator expression too confusingly weird to be supportable, in https://bugs.python.org/issue10544?#msg306940 I came up with an example for the existing comprehension behaviour that I'm hesitant to break without a clear replacement: code that wraps the *current* comprehension behaviour in an explicit "yield from" expression. That is, code like: def example(): comp1 = yield from [(yield x) for x in ('1st', '2nd')] comp2 = yield from [(yield x) for x in ('3rd', '4th')] return comp1, comp2 Creates a generator that returns a 2-tuple containing two 2-item lists, but the exact contents of those lists depend on the values you send into the generator while it is running. The basic principle behind the construct is that a comprehension containing a yield expression essentially defines an inline subgenerator, so you then have to yield from that subgenerator in order to produce the desired list. If the implicit "yield from" idea seems too magical, then the other direction we could go is to make the immediate "yield from" mandatory, such that leaving it out produces a SyntaxWarning in 3.7, and then a SyntaxError in 3.8+. (Something like "'yield from' clause missing before subgenerator comprehension") While we don't generally like bolting two pieces of syntax together like that, I think it's defensible in this case based on the fact that allowing "[expr for var in iterable]" to ever return anything other than a list (and similarly for the other comprehension types) is genuinely confusing. The nicest aspect of the "an explicit and immediate 'yield from' is required when using subgenerator comprehensions" approach is that the above syntax already works today, and has actually worked since 3.3 - we'd just never put the pieces together properly to establish this as a potential pattern for comprehension use in coroutines. Cheers, Nick. 
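For concreteness, here is how the two-list example() above behaves when driven by hand -- an illustrative session (sent values chosen arbitrarily), relying only on the already-working Python 3.3+ behaviour described above:

    # Driving example() manually; the sent values become the list contents
    g = example()
    print(next(g))       # '1st'
    print(g.send('a'))   # '2nd'
    print(g.send('b'))   # '3rd'
    print(g.send('c'))   # '4th'
    try:
        g.send('d')
    except StopIteration as exc:
        print(exc.value)  # (['a', 'b'], ['c', 'd'])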
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Sat Nov 25 00:27:30 2017 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 24 Nov 2017 21:27:30 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Fri, Nov 24, 2017 at 9:04 PM, Nick Coghlan wrote: > def example(): > comp1 = yield from [(yield x) for x in ('1st', '2nd')] > comp2 = yield from [(yield x) for x in ('3rd', '4th')] > return comp1, comp2 Isn't this a really confusing way of writing def example(): return [(yield '1st'), (yield '2nd')], [(yield '3rd'), (yield '4th')] ? -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sat Nov 25 00:33:50 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Nov 2017 15:33:50 +1000 Subject: [Python-Dev] PEP 565: show DeprecationWarning in __main__ (round 2) Message-ID: This is a new version of the proposal to show DeprecationWarning in __main__. The proposal itself hasn't changed (it's still recommending a new entry in the default filter list), but there have been several updates to the PEP text based on further development work and comments in the initial thread: - there's now a linked issue and reference implementation - it turns out we don't currently support the definition of module based filters at startup time, so I've explicitly noted the relevant enhancement that turned out to be necessary (allowing plain-string-or-compiled-regex in stored filter definitions where we currently only allow compiled regexes) - I've noted the intended changes to the warnings-related documentation - I've noted a couple of other relevant changes that Victor already implemented for 3.7 - I've noted that the motivation for the change in 2.7 & 3.1 covered all Python applications, not just developer tools (developer tools just provide a particularly compelling example of why "revert to the Python 2.6 behaviour" isn't a good answer) Cheers, Nick. ================= PEP: 565 Title: Show DeprecationWarning in __main__ Author: Nick Coghlan Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 12-Nov-2017 Python-Version: 3.7 Post-History: 12-Nov-2017, 25-Nov-2017 Abstract ======== In Python 2.7 and Python 3.2, the default warning filters were updated to hide DeprecationWarning by default, such that deprecation warnings in development tools that were themselves written in Python (e.g. linters, static analysers, test runners, code generators), as well as any other applications that merely happened to be written in Python, wouldn't be visible to their users unless those users explicitly opted in to seeing them. However, this change has had the unfortunate side effect of making DeprecationWarning markedly less effective at its primary intended purpose: providing advance notice of breaking changes in APIs (whether in CPython, the standard library, or in third party libraries) to users of those APIs. To improve this situation, this PEP proposes a single adjustment to the default warnings filter: displaying deprecation warnings attributed to the main module by default. This change will mean that code entered at the interactive prompt and code in single file scripts will revert to reporting these warnings by default, while they will continue to be silenced by default for packaged code distributed as part of an importable module. 
The PEP also proposes a number of small adjustments to the reference interpreter and standard library documentation to help make the warnings subsystem more approachable for new Python developers.

Specification
=============

The current set of default warnings filters consists of::

    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning

The default ``unittest`` test runner then uses ``warnings.catch_warnings()`` with ``warnings.simplefilter('default')`` to override the default filters while running test cases.

The change proposed in this PEP is to update the default warning filter list to be::

    default::DeprecationWarning:__main__
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning

This means that in cases where the nominal location of the warning (as determined by the ``stacklevel`` parameter to ``warnings.warn``) is in the ``__main__`` module, the first occurrence of each DeprecationWarning will once again be reported.

This change will lead to DeprecationWarning being displayed by default for:

* code executed directly at the interactive prompt
* code executed directly as part of a single-file script

While continuing to be hidden by default for:

* code imported from another module in a ``zipapp`` archive's ``__main__.py`` file
* code imported from another module in an executable package's ``__main__`` submodule
* code imported from an executable script wrapper generated at installation time based on a ``console_scripts`` or ``gui_scripts`` entry point definition

As a result, API deprecation warnings encountered by development tools written in Python should continue to be hidden by default for users of those tools.

While not its originally intended purpose, the standard library documentation will also be updated to explicitly recommend the use of ``FutureWarning`` (rather than ``DeprecationWarning``) for backwards compatibility warnings that are intended to be seen by *users* of an application.

This will give the following three distinct categories of backwards compatibility warning, with three different intended audiences:

* ``PendingDeprecationWarning``: reported by default only in test runners that override the default set of warning filters. The intended audience is Python developers that take an active interest in ensuring the future compatibility of their software (e.g. professional Python application developers with specific support obligations).
* ``DeprecationWarning``: reported by default for code that runs directly in the ``__main__`` module (as such code is considered relatively unlikely to have a dedicated test suite), but relies on test suite based reporting for code in other modules. The intended audience is Python developers that are at risk of upgrades to their dependencies (including upgrades to Python itself) breaking their software (e.g. developers using Python to script environments where someone else is in control of the timing of dependency upgrades).
* ``FutureWarning``: always reported by default. The intended audience is users of applications written in Python, rather than other Python developers (e.g. warning about use of a deprecated setting in a configuration file format).
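Returning to the proposed filter entry itself, its intended effect can be approximated in current releases with an explicit filter registration (an illustrative sketch only, not the reference implementation)::

    # Rough runtime equivalent of "default::DeprecationWarning:__main__"
    import warnings
    warnings.filterwarnings("default", category=DeprecationWarning,
                            module="__main__")

    def old_api():
        warnings.warn("old_api() is deprecated", DeprecationWarning,
                      stacklevel=2)

    old_api()  # attributed to __main__, so the first occurrence is shown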
Given its presence in the standard library since Python 2.3, ``FutureWarning`` would then also have a secondary use case for libraries and frameworks that support multiple Python versions: as a more reliably visible alternative to ``DeprecationWarning`` in Python 2.7 and versions of Python 3.x prior to 3.7.

Documentation Updates
=====================

The current reference documentation for the warnings system is relatively short on specific *examples* of possible settings for the ``-W`` command line option or the ``PYTHONWARNINGS`` environment variable that achieve particular end results.

The following improvements are proposed as part of the implementation of this PEP:

* Explicitly list the following entries under the description of the ``PYTHONWARNINGS`` environment variable::

    PYTHONWARNINGS=error   # Convert to exceptions
    PYTHONWARNINGS=always  # Warn every time
    PYTHONWARNINGS=default # Warn once per call location
    PYTHONWARNINGS=module  # Warn once per calling module
    PYTHONWARNINGS=once    # Warn once per Python process
    PYTHONWARNINGS=ignore  # Never warn

* Explicitly list the corresponding short options (``-We``, ``-Wa``, ``-Wd``, ``-Wm``, ``-Wo``, ``-Wi``) for each of the warning actions listed under the ``-W`` command line switch documentation
* Explicitly list the default filter set in the ``warnings`` module documentation, using the ``action::category`` and ``action::category:module`` notation
* Explicitly list the following snippet in the ``warnings.simplefilter`` documentation as a recommended approach to turning off all warnings by default in a Python application while still allowing them to be turned back on via ``PYTHONWARNINGS`` or the ``-W`` command line switch::

    if not sys.warnoptions:
        warnings.simplefilter("ignore")

None of these are *new* (they already work in all still supported Python versions), but they're not especially obvious given the current structure of the related documentation.

Reference Implementation
========================

A reference implementation is available in the PR [4_] linked from the related tracker issue for this PEP [5_].

As a side-effect of implementing this PEP, the internal warnings filter list will start allowing the use of plain strings as part of filter definitions (in addition to the existing use of compiled regular expressions). When present, the plain strings will be compared for exact matches only. This approach allows the new default filter to be added during interpreter startup without requiring early access to the ``re`` module.

Motivation
==========

As discussed in [1_] and mentioned in [2_], Python 2.7 and Python 3.2 changed the default handling of ``DeprecationWarning`` such that:

* the warning was hidden by default during normal code execution
* the ``unittest`` test runner was updated to re-enable it when running tests

The intent was to avoid cases of tooling output like the following::

    $ devtool mycode/
    /usr/lib/python3.6/site-packages/devtool/cli.py:1: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
      async = True
    ... actual tool output ...

Even when `devtool` is a tool specifically for Python programmers, this is not a particularly useful warning, as it will be shown on every invocation, even though the main helpful step an end user can take is to report a bug to the developers of ``devtool``.
The warning is even less helpful for general purpose developer tools that are used across more languages than just Python, and almost entirely \*un\*helpful for applications that simply happen to be written in Python, and aren't necessarily intended for a developer audience at all.

However, this change proved to have unintended consequences for the following audiences:

* anyone using a test runner other than the default one built into ``unittest`` (the request for third party test runners to change their default warnings filters was never made explicitly, so many of them still rely on the interpreter defaults that are designed to suit deployed applications)
* anyone using the default ``unittest`` test runner to test their Python code in a subprocess (since even ``unittest`` only adjusts the warnings settings in the current process)
* anyone writing Python code at the interactive prompt or as part of a directly executed script that didn't have a Python level test suite at all

In these cases, ``DeprecationWarning`` ended up becoming almost entirely equivalent to ``PendingDeprecationWarning``: it was simply never seen at all.

Limitations on PEP Scope
========================

This PEP exists specifically to explain both the proposed addition to the default warnings filter for 3.7, *and* to more clearly articulate the rationale for the original change to the handling of DeprecationWarning back in Python 2.7 and 3.2.

This PEP does not solve all known problems with the current approach to handling deprecation warnings. Most notably:

* the default ``unittest`` test runner does not currently report deprecation warnings emitted at module import time, as the warnings filter override is only put in place during test execution, not during test discovery and loading.
* the default ``unittest`` test runner does not currently report deprecation warnings in subprocesses, as the warnings filter override is applied directly to the loaded ``warnings`` module, not to the ``PYTHONWARNINGS`` environment variable.
* the standard library doesn't provide a straightforward way to opt in to seeing all warnings emitted *by* a particular dependency prior to upgrading it (the third-party ``warn`` module [3_] does provide this, but enabling it involves monkeypatching the standard library's ``warnings`` module).
* re-enabling deprecation warnings by default in __main__ doesn't help in handling cases where software has been factored out into support modules, but those modules still have little or no automated test coverage. Near term, the best currently available answer is to run such applications with ``PYTHONWARNINGS=default::DeprecationWarning`` or ``python -W default::DeprecationWarning`` and pay attention to their ``stderr`` output. Longer term, this is really a question for researchers working on static analysis of Python code: how to reliably find usage of deprecated APIs, and how to infer that an API or parameter is deprecated based on ``warnings.warn`` calls, without actually running either the code providing the API or the code accessing it.

While these are real problems with the status quo, they're excluded from consideration in this PEP because they're going to require more complex solutions than a single additional entry in the default warnings filter, and resolving them at least potentially won't require going through the PEP process.
For anyone interested in pursuing them further, the first two would be ``unittest`` module enhancement requests, the third would be a ``warnings`` module enhancement request, while the last would only require a PEP if inferring API deprecations from their contents was deemed to be an intractable code analysis problem, and an explicit function and parameter marker syntax in annotations was proposed instead.

The CPython reference implementation will also include the following related changes in 3.7:

* a new ``-X dev`` command line option that combines several developer centric settings (including ``-Wd``) into one command line flag: https://bugs.python.org/issue32043
* changing the behaviour in debug builds to show more of the warnings that are off by default in regular interpreter builds: https://bugs.python.org/issue32088

References
==========

.. [1] stdlib-sig thread proposing the original default filter change
   (https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html)
.. [2] Python 2.7 notification of the default warnings filter change
   (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings)
.. [3] Emitting warnings based on the location of the warning itself
   (https://pypi.org/project/warn/)
.. [4] GitHub PR for PEP 565 implementation
   (https://github.com/python/cpython/pull/4458)
.. [5] Tracker issue for PEP 565 implementation
   (https://bugs.python.org/issue31975)
.. [6] python-dev discussion thread for this PEP
   (https://mail.python.org/pipermail/python-dev/2017-November/150477.html)

Copyright
=========

This document has been placed in the public domain.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com Sat Nov 25 00:39:33 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 25 Nov 2017 15:39:33 +1000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On 25 November 2017 at 15:27, Nathaniel Smith wrote:
> On Fri, Nov 24, 2017 at 9:04 PM, Nick Coghlan wrote:
>> def example():
>>     comp1 = yield from [(yield x) for x in ('1st', '2nd')]
>>     comp2 = yield from [(yield x) for x in ('3rd', '4th')]
>>     return comp1, comp2
>
> Isn't this a really confusing way of writing
>
> def example():
>     return [(yield '1st'), (yield '2nd')], [(yield '3rd'), (yield '4th')]

A real use case wouldn't be iterating over hardcoded tuples in the comprehensions, it would be something more like:

    def example(iterable1, iterable2):
        comp1 = yield from [(yield x) for x in iterable1]
        comp2 = yield from [(yield x) for x in iterable2]
        return comp1, comp2

Defining an interesting for loop isn't the point of the example though - it's just to show that if you're inside a generator, you can already make a subgenerator comprehension do something sensible by sticking "yield from" in front of it (and have actually been able to do so since 3.3, when "yield from" was first introduced).

Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Sat Nov 25 01:18:42 2017 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 24 Nov 2017 22:18:42 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Fri, Nov 24, 2017 at 9:39 PM, Nick Coghlan wrote: > On 25 November 2017 at 15:27, Nathaniel Smith wrote: >> On Fri, Nov 24, 2017 at 9:04 PM, Nick Coghlan wrote: >>> def example(): >>> comp1 = yield from [(yield x) for x in ('1st', '2nd')] >>> comp2 = yield from [(yield x) for x in ('3rd', '4th')] >>> return comp1, comp2 >> >> Isn't this a really confusing way of writing >> >> def example(): >> return [(yield '1st'), (yield '2nd')], [(yield '3rd'), (yield '4th')] > > A real use case Do you have a real use case? This seems incredibly niche... > wouldn't be iterating over hardcoded tuples in the > comprehensions, it would be something more like: > > def example(iterable1, iterable2): > comp1 = yield from [(yield x) for x in iterable1] > comp2 = yield from [(yield x) for x in iterable2] > return comp1, comp2 I submit that this would still be easier to understand if written out like: def map_iterable_to_yield_values(iterable): "Yield the values in iterable, then return a list of the values sent back." result = [] for obj in iterable: result.append(yield obj) return result def example(iterable1, iterable2): values1 = yield from map_iterable_to_yield_values(iterable1) values2 = yield from map_iterable_to_yield_values(iterable2) return values1, values2 -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sat Nov 25 02:10:12 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Nov 2017 17:10:12 +1000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 16:18, Nathaniel Smith wrote: > On Fri, Nov 24, 2017 at 9:39 PM, Nick Coghlan wrote: >> On 25 November 2017 at 15:27, Nathaniel Smith wrote: >>> On Fri, Nov 24, 2017 at 9:04 PM, Nick Coghlan wrote: >>>> def example(): >>>> comp1 = yield from [(yield x) for x in ('1st', '2nd')] >>>> comp2 = yield from [(yield x) for x in ('3rd', '4th')] >>>> return comp1, comp2 >>> >>> Isn't this a really confusing way of writing >>> >>> def example(): >>> return [(yield '1st'), (yield '2nd')], [(yield '3rd'), (yield '4th')] >> >> A real use case > > Do you have a real use case? This seems incredibly niche... That's not how backwards compatibility works - we were suggesting getting rid of this syntax, because there was no current way to make it do anything sensible. It turns out there is a way to make it behave reasonably - you just need to stick "yield from" in front of it, and it goes back to being equivalent to the corresponding for loop (the same as the synchronous version). >> wouldn't be iterating over hardcoded tuples in the >> comprehensions, it would be something more like: >> >> def example(iterable1, iterable2): >> comp1 = yield from [(yield x) for x in iterable1] >> comp2 = yield from [(yield x) for x in iterable2] >> return comp1, comp2 > > I submit that this would still be easier to understand if written out like: > > def map_iterable_to_yield_values(iterable): > "Yield the values in iterable, then return a list of the values sent back." 
    result = []
    for obj in iterable:
        result.append(yield obj)
    return result

def example(iterable1, iterable2):
    values1 = yield from map_iterable_to_yield_values(iterable1)
    values2 = yield from map_iterable_to_yield_values(iterable2)
    return values1, values2

-n

-- Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Sat Nov 25 02:10:12 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 25 Nov 2017 17:10:12 +1000
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol>
Message-ID: 

On 25 November 2017 at 16:18, Nathaniel Smith wrote:
> Do you have a real use case? This seems incredibly niche...

That's not how backwards compatibility works - we were suggesting getting rid of this syntax, because there was no current way to make it do anything sensible.

It turns out there is a way to make it behave reasonably - you just need to stick "yield from" in front of it, and it goes back to being equivalent to the corresponding for loop (the same as the synchronous version).

> I submit that this would still be easier to understand if written out like:
>
> def map_iterable_to_yield_values(iterable):
>     "Yield the values in iterable, then return a list of the values sent back."
>     result = []
>     for obj in iterable:
>         result.append(yield obj)
>     return result
>
> def example(iterable1, iterable2):
>     values1 = yield from map_iterable_to_yield_values(iterable1)
>     values2 = yield from map_iterable_to_yield_values(iterable2)
>     return values1, values2

The same can be said for comprehensions in general. Composing them with coroutines certainly doesn't make either easier to understand, but I don't think replacing the comprehension with its full imperative form is particularly helpful in aiding that understanding.

Cheers, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From solipsis at pitrou.net Sat Nov 25 02:11:51 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 25 Nov 2017 08:11:51 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
References: <20171123124928.304986e9@fsol>
Message-ID: <20171125081151.64d4a9fb@fsol>

On Sat, 25 Nov 2017 03:03:02 +0000 MRAB wrote:
> On 2017-11-25 02:21, Chris Jerdonek wrote:
> > On Fri, Nov 24, 2017 at 5:06 PM, Nathaniel Smith wrote:
> >> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum wrote:
> >>> The more I hear about this topic, the more I think that `await`, `yield` and
> >>> `yield from` should all be banned from occurring in all comprehensions and
> >>> generator expressions. That's not much different from disallowing `return`
> >>> or `break`.
> >>
> >> I would say that banning `yield` and `yield from` is like banning
> >> `return` and `break`, but banning `await` is like banning function
> >> calls.
> >
> > I agree. I was going to make the point earlier in the thread that
> > using "await" can mostly just be thought of as a delayed function
> > call, but it didn't seem at risk of getting banned until Guido's
> > comment so I didn't say anything (and there were too many comments
> > anyways).
> >
> > I think it's in a different category for that reason. It's much easier
> > to reason about than, say, "yield" and "yield from".
> >
> +1
>
> Seeing "await" there didn't/doesn't confuse me; seeing "yield" or "yield
> from" does.

+1 as well.

Regards
Antoine.

From solipsis at pitrou.net Sat Nov 25 02:24:31 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 25 Nov 2017 08:24:31 +0100
Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression
References: Message-ID: <20171125082431.18f42e91@fsol>

On Sat, 25 Nov 2017 17:10:12 +1000 Nick Coghlan wrote:
> On 25 November 2017 at 16:18, Nathaniel Smith wrote:
> > On Fri, Nov 24, 2017 at 9:39 PM, Nick Coghlan wrote:
> >> On 25 November 2017 at 15:27, Nathaniel Smith wrote:
> >>> On Fri, Nov 24, 2017 at 9:04 PM, Nick Coghlan wrote:
> >>>> def example():
> >>>>     comp1 = yield from [(yield x) for x in ('1st', '2nd')]
> >>>>     comp2 = yield from [(yield x) for x in ('3rd', '4th')]
> >>>>     return comp1, comp2
> >>>
> >>> Isn't this a really confusing way of writing
> >>>
> >>> def example():
> >>>     return [(yield '1st'), (yield '2nd')], [(yield '3rd'), (yield '4th')]
> >>
> >> A real use case
> >
> > Do you have a real use case? This seems incredibly niche...
>
> That's not how backwards compatibility works - we were suggesting
> getting rid of this syntax, because there was no current way to make
> it do anything sensible.
>
> It turns out there is a way to make it behave reasonably - you just
> need to stick "yield from" in front of it, and it goes back to being
> equivalent to the corresponding for loop (the same as the synchronous
> version).
I don't know if it behaves reasonably, but it's entirely unreadable to me. I think there's a case for discouraging unreadable constructs. Regards Antoine. From storchaka at gmail.com Sat Nov 25 03:16:34 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 25 Nov 2017 10:16:34 +0200 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: 24.11.17 02:50, Nick Coghlan wrote: > If we went down that path, then a list comprehension like the following: > > results = [(yield future) for future in list_of_futures] > > might be compiled as being equivalent to: > > def __listcomp_generator(iterable): > result = [] > for future in iterable: > result.append((yield future)) > return result > > results = yield from __listcomp_generator(list_of_futures) > > The only difference between the current comprehension code and this idea > is "an explicit yield expression in a comprehension implies the use of > 'yield from' when calling the nested function". Oh, nice! This is a much simpler solution! And it solves all related problems. I like it. This has an overhead in comparison with inlining the code, but the latter can be considered just as an optimization. We can apply it when we prove the need for optimizing this construction. For now it is enough if it just works. The fact that two independent mental models lead to the same result is an argument for their correctness. I'm not so sure about "yield" in generators, this will need further thoughts and experiments. "yield" can be used not only in the item expression, but in conditions and inner iterables. From storchaka at gmail.com Sat Nov 25 03:18:19 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 25 Nov 2017 10:18:19 +0200 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A1749AF.6010500@canterbury.ac.nz> References: <20171123124928.304986e9@fsol> <5A1749AF.6010500@canterbury.ac.nz> Message-ID: 24.11.17 00:20, Greg Ewing wrote: > Serhiy Storchaka wrote: >> Ivan explained that this function should be rough equivalent to >> >> def f(): >> t = [(yield i) for i in range(3)] >> return (x for x in t) > > This seems useless to me. It turns a lazy iterator > into an eager one, which is a gross violation of the > author's intent in using a generator expression. This is a *rough* equivalent. There are differences in details. From levkivskyi at gmail.com Sat Nov 25 09:55:28 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Sat, 25 Nov 2017 15:55:28 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 04:30, Guido van Rossum wrote: > On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum > wrote: > >> The more I hear about this topic, the more I think that `await`, `yield` >> and `yield from` should all be banned from occurring in all comprehensions >> and generator expressions. That's not much different from disallowing >> `return` or `break`. >> > > From the responses it seems that I tried to simplify things too far. Let's > say that `await` in comprehensions is fine, as long as that comprehension > is contained in an `async def`.
While we *could* save `yield [from]` in > comprehensions, I still see it as mostly a source of confusion, and the > fact that the presence of `yield [from]` *implicitly* makes the surrounding > `def` a generator makes things worse. It just requires too many mental > contortions to figure out what it does. > There were some arguments that `await` is like a function call, while `yield` is like `return`. TBH, I don't really like these arguments since to me they are too vague. Continuing this logic one can say that `return` is just a fancy function call (calling continuation with the result). To me there is one clear distinction: `return` and `break` are statements, while `yield`, `yield from`, and `await` are expressions. Continuing the topic of the ban, what exactly should be banned? For example will this still be valid? def pack_two(): return [(yield), (yield)] # Just a list display I don't see how this is controversial. It is clear that `pack_two` is a generator. If this is going to be prohibited, then one may be surprised by lack of referential transparency, since this will be valid: def pack_two(): first = (yield) second = (yield) return [first, second] If the first example will be allowed, then one will be surprised why it can't be rewritten as def pack_two(): return [(yield) for _ in range(2)] I have found several other examples where it is not clear whether they should be prohibited with `yield` or not. I still propose to rule out all of the above from generator expressions, > because those can escape from the surrounding scope. > Here I agree. Also note that the above problem does not apply to generator expressions since (x, x) and (x for _ in range(2)) are two very different expressions. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dacut at kanga.org Sat Nov 25 01:55:15 2017 From: dacut at kanga.org (David Cuthbert) Date: Sat, 25 Nov 2017 06:55:15 +0000 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements Message-ID: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> First time contributing back -- if I should be filing a PEP or something like that for this, please let me know. Coming from https://bugs.python.org/issue32117, unparenthesized tuple unpacking is allowed in assignments: rest = (4, 5, 6) a = 1, 2, 3, *rest but not in yield or return statements (these result in SyntaxErrors): return 1, 2, 3, *rest yield 1, 2, 3, *rest The unpacking in assignments was enabled by a pre-3.2 commit that I haven't yet been able to track back to a discussion, but I suspect this asymmetry is unintentional. Here's the original commit: https://github.com/python/cpython/commit/4905e80c3d2f6abb613d212f0313d1dfe09475dc I've submitted a patch (CLA is signed and submitted, not yet processed), and Serhiy said that since it changes the grammar I should have it reviewed here and have signoff by the BDFL. While I haven't had a need for this myself, it was brought up by a user on StackOverflow (https://stackoverflow.com/questions/47272460/python-tuple-unpacking-in-return-statement/47326859). Thanks!
Dave From p.f.moore at gmail.com Sat Nov 25 10:47:14 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 25 Nov 2017 15:47:14 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 14:55, Ivan Levkivskyi wrote: > Continuing the topic of the ban, what exactly should be banned? For example > will this still be valid? > > def pack_two(): > return [(yield), (yield)] # Just a list display > > I don't see how this is controversial. It is clear that `pack_two` is a > generator. It's not clear to me... Seriously, I wouldn't know what this would do. Presumably it needs someone calling send() to generate results (because that's how yield expressions work) but beyond that, I don't really understand what it does.
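(For reference, a minimal sketch of that send() protocol, driving the plain non-comprehension version of pack_two -- this much at least is ordinary, well-defined generator behaviour:)

def pack_two():
    first = (yield)
    second = (yield)
    return [first, second]

g = pack_two()
next(g)            # advance to the first yield
g.send(1)          # 1 becomes `first`; run to the second yield
try:
    g.send(2)      # 2 becomes `second`; the generator returns
except StopIteration as e:
    print(e.value)   # [1, 2]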
Maybe an example that wasn't artificial would be more obvious, I'm not sure. > If this is going to be prohibited, then one may be surprised by lack of > referential transparency, since this will be valid: > > def pack_two(): > first = (yield) > second = (yield) > return [first, second] The fact that you can't inline first and second doesn't bother me. I can't fully explain why, but it doesn't. (Technically, I *can* explain why, but you'd disagree with my explanation :-)) > If the first example will be allowed, then one will be surprised why it > can't be rewritten as > > def pack_two(): > return [(yield) for _ in range(2)] > > I have found several other examples where it is not clear whether they > should be prohibited with `yield` or not. So far, none of your examples have demonstrated anything that Guido's suggestion to ban yield would make confusing *to me*. Maybe this demonstrates nothing more than how inconsistent and shallow my understanding of yield expressions is. That's fine, I can live with that. I can't give you any assurances that my level of understanding is common among non-experts, but I will say that in my view, Guido's proposal feels sensible and intuitive. And after all, if the use of yield expressions becomes significantly more common, and the general level of familiarity with the concept increases, it's easy enough to relax the restriction later, in line with the average user's level of comfort. Paul From solipsis at pitrou.net Sat Nov 25 11:02:52 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 25 Nov 2017 17:02:52 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression References: <20171123124928.304986e9@fsol> Message-ID: <20171125170252.6536e81a@fsol> At this point, the fact that several Python core developers fail to understand the pieces of code presented as examples should be a hint that the syntax here is far from desirable... Regards Antoine. On Sat, 25 Nov 2017 15:47:14 +0000 Paul Moore wrote: > On 25 November 2017 at 14:55, Ivan Levkivskyi wrote: > > Continuing the topic of the ban, what exactly should be banned? For example > > will this still be valid? > > > > def pack_two(): > > return [(yield), (yield)] # Just a list display > > > > I don't see how this is controversial. It is clear that `pack_two` is a > > generator. > > It's not clear to me... > > Seriously, I wouldn't know what this would do. Presumably it needs > someone calling send() to generate results (because that's how yield > expressions work) but beyond that, I don't really understand what it > does.
Another big difference is that the use of `yield [from]` affects the surrounding function, making it a generator. Continuing the topic of the ban, what exactly should be banned? For example > will this still be valid? > > def pack_two(): > return [(yield), (yield)] # Just a list display > It's not a comprehension so it's still valid. > I don't see how this is controversial. It is clear that `pack_two` is a > generator. > If this is going to be prohibited, then one may be surprised by lack of > referential transparency, since this will be valid: > > def pack_two(): > first = (yield) > second = (yield) > return [first, second] > > If the first example will be allowed, then one will be surprised why it > can't be rewritten as > > def pack_two(): > return [(yield) for _ in range(2)] > And yet Nick's example shows that that is not equivalent! def example(): comp1 = yield from [(yield x) for x in ('1st', '2nd')] comp2 = yield from [(yield x) for x in ('3rd', '4th')] return comp1, comp2 In this example each thing that looks syntactically like a list comprehension becomes actually a generator expression at at runtime! And so does your example, so instead of a list of two items, it returns a generator that will produce two values when iterated over. That's not referential transparency to me, it feels more like a bug in the code generator. I want to ban this because apparently nobody besides Nick knows about this behavior (I certainly didn't, and from the above it seems you don't either). > I have found several other examples where it is not clear whether they > should be prohibited with `yield` or not. > Such as? > I still propose to rule out all of the above from generator expressions, >> because those can escape from the surrounding scope. >> > > Here I agree. Also note that the above problem does not apply to generator > expressions since (x, x) and (x for _ in range(2)) are > two very different expressions. > PS. A more radical proposal (not for 3.7) would be to deprecate yield as an expression. It once was only a statement, but PEP 342 introduced yield as an expression in order to do coroutines. We now have `async def` and `await` as a superior coroutine mechanism. But we must continue to support yield expressions because there is a lot of Python 2/3 compatible code that depends on it. (E.g. Tornado.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Sat Nov 25 11:07:43 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Sat, 25 Nov 2017 17:07:43 +0100 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 25 November 2017 at 16:57, Guido van Rossum wrote: > On Sat, Nov 25, 2017 at 6:55 AM, Ivan Levkivskyi > wrote: > >> On 25 November 2017 at 04:30, Guido van Rossum wrote: >> >>> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum >>> wrote: >>> >>>> The more I hear about this topic, the more I think that `await`, >>>> `yield` and `yield from` should all be banned from occurring in all >>>> comprehensions and generator expressions. That's not much different from >>>> disallowing `return` or `break`. >>>> >>> >>> From the responses it seems that I tried to simplify things too far. >>> Let's say that `await` in comprehensions is fine, as long as that >>> comprehension is contained in an `async def`. 
While we *could* save `yield >>> [from]` in comprehensions, I still see it as mostly a source of confusion, >>> and the fact that the presence of `yield [from]` *implicitly* makes the >>> surrounding `def` a generator makes things worse. It just requires too many >>> mental contortions to figure out what it does. >>> >> >> [...] >> If the first example will be allowed, then one will be surprised why it >> can't be rewritten as >> >> def pack_two(): >> return [(yield) for _ in range(2)] >> > > And yet Nick's example shows that that is not equivalent! > > [...] > > In this example each thing that looks syntactically like a list > comprehension becomes actually a generator expression at at runtime! And so > does your example, so instead of a list of two items, it returns a > generator that will produce two values when iterated over. > > That's not referential transparency to me, it feels more like a bug in the > code generator. > > I want to ban this because apparently nobody besides Nick knows about this > behavior (I certainly didn't, and from the above it seems you don't either). > This whole thread started as a proposal to fix this bug and to make the two forms equivalent, so I don't know what you are talking about. Also as there appeared arguments of authority (thanks Antoine) its time to stop this discussion for me. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Nov 25 11:59:49 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 25 Nov 2017 08:59:49 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 8:07 AM, Ivan Levkivskyi wrote: > On 25 November 2017 at 16:57, Guido van Rossum wrote: > >> On Sat, Nov 25, 2017 at 6:55 AM, Ivan Levkivskyi >> wrote: >> >>> On 25 November 2017 at 04:30, Guido van Rossum wrote: >>> >>>> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum >>>> wrote: >>>> >>>>> The more I hear about this topic, the more I think that `await`, >>>>> `yield` and `yield from` should all be banned from occurring in all >>>>> comprehensions and generator expressions. That's not much different from >>>>> disallowing `return` or `break`. >>>>> >>>> >>>> From the responses it seems that I tried to simplify things too far. >>>> Let's say that `await` in comprehensions is fine, as long as that >>>> comprehension is contained in an `async def`. While we *could* save `yield >>>> [from]` in comprehensions, I still see it as mostly a source of confusion, >>>> and the fact that the presence of `yield [from]` *implicitly* makes the >>>> surrounding `def` a generator makes things worse. It just requires too many >>>> mental contortions to figure out what it does. >>>> >>> >>> [...] >>> If the first example will be allowed, then one will be surprised why it >>> can't be rewritten as >>> >>> def pack_two(): >>> return [(yield) for _ in range(2)] >>> >> >> And yet Nick's example shows that that is not equivalent! >> >> [...] >> >> In this example each thing that looks syntactically like a list >> comprehension becomes actually a generator expression at at runtime! And so >> does your example, so instead of a list of two items, it returns a >> generator that will produce two values when iterated over. >> >> That's not referential transparency to me, it feels more like a bug in >> the code generator. 
>> >> I want to ban this because apparently nobody besides Nick knows about >> this behavior (I certainly didn't, and from the above it seems you don't >> either). >> > > This whole thread started as a proposal to fix this bug and to make the > two forms equivalent, so I don't know what you are talking about. > I see. I misread your equivalence example as "this how it works" -- you meant it as "this is how I propose we fix it". The fix is not unreasonable but I still would like to retreat to the territory where `yield [from]` in comprehensions (and generator expressions) is deemed invalid. > Also as there appeared arguments of authority (thanks Antoine) its time to > stop this discussion for me. > I'd be happy to stop with the conclusion that we're going to rip out some confusing syntax rather than trying to generate code for it -- IMO we've proved to ourselves that this stuff is too complicated to be useful. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Nov 25 12:15:40 2017 From: brett at python.org (Brett Cannon) Date: Sat, 25 Nov 2017 17:15:40 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Fri, Nov 24, 2017, 19:32 Guido van Rossum, wrote: > On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum > wrote: > >> The more I hear about this topic, the more I think that `await`, `yield` >> and `yield from` should all be banned from occurring in all comprehensions >> and generator expressions. That's not much different from disallowing >> `return` or `break`. >> > > From the responses it seems that I tried to simplify things too far. Let's > say that `await` in comprehensions is fine, as long as that comprehension > is contained in an `async def`. While we *could* save `yield [from]` in > comprehensions, I still see it as mostly a source of confusion, and the > fact that the presence of `yield [from]` *implicitly* makes the surrounding > `def` a generator makes things worse. It just requires too many mental > contortions to figure out what it does. > > I still propose to rule out all of the above from generator expressions, > because those can escape from the surrounding scope. > +1 from me. -Brett > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Sat Nov 25 12:21:08 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sat, 25 Nov 2017 17:21:08 +0000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: So we are keeping asynchronous generator expressions as long as they are defined in an 'async def' coroutine? 
Yury On Sat, Nov 25, 2017 at 12:17 PM Brett Cannon wrote: > > > On Fri, Nov 24, 2017, 19:32 Guido van Rossum, wrote: > >> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum >> wrote: >> >>> The more I hear about this topic, the more I think that `await`, `yield` >>> and `yield from` should all be banned from occurring in all comprehensions >>> and generator expressions. That's not much different from disallowing >>> `return` or `break`. >>> >> >> From the responses it seems that I tried to simplify things too far. >> Let's say that `await` in comprehensions is fine, as long as that >> comprehension is contained in an `async def`. While we *could* save `yield >> [from]` in comprehensions, I still see it as mostly a source of confusion, >> and the fact that the presence of `yield [from]` *implicitly* makes the >> surrounding `def` a generator makes things worse. It just requires too many >> mental contortions to figure out what it does. >> >> I still propose to rule out all of the above from generator expressions, >> because those can escape from the surrounding scope. >> > > +1 from me. > > -Brett > > >> -- >> --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Nov 25 15:27:57 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 25 Nov 2017 12:27:57 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: > > On Sat, Nov 25, 2017 at 12:17 PM Brett Cannon wrote: > On Fri, Nov 24, 2017, 19:32 Guido van Rossum, wrote: >> >>> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum >>> wrote: >>> >>>> The more I hear about this topic, the more I think that `await`, >>>> `yield` and `yield from` should all be banned from occurring in all >>>> comprehensions and generator expressions. That's not much different from >>>> disallowing `return` or `break`. >>>> >>> >>> From the responses it seems that I tried to simplify things too far. >>> Let's say that `await` in comprehensions is fine, as long as that >>> comprehension is contained in an `async def`. While we *could* save `yield >>> [from]` in comprehensions, I still see it as mostly a source of confusion, >>> and the fact that the presence of `yield [from]` *implicitly* makes the >>> surrounding `def` a generator makes things worse. It just requires too many >>> mental contortions to figure out what it does. >>> >>> I still propose to rule out all of the above from generator expressions, >>> because those can escape from the surrounding scope. >>> >> >> +1 from me >> > On Sat, Nov 25, 2017 at 9:21 AM, Yury Selivanov wrote: > So we are keeping asynchronous generator expressions as long as they are > defined in an 'async def' coroutine? > I would be happy to declare that `await` is out of scope for this thread. It seems that it is always well-defined and sensible what it does in comprehensions and in genexprs. 
(Although I can't help noticing that PEP 530 does not appear to propose `await` in generator expressions -- it proposes `async for` in comprehensions and in genexprs, and `await` in comprehensions only -- but they appear to be accepted nevertheless.) So we're back to the original issue, which is that `yield` inside a comprehension accidentally makes it become a generator rather than a list, set or dict. I believe that this can be fixed. But I don't believe we should fix it. I believe we should ban `yield` from comprehensions and from genexprs. We don't need it, and it's confused most everyone. And the ban should extend to `yield from` in those same contexts. I think we have a hope for consensus on this. (I also think that if we had invented `await` earlier we wouldn't have gone down the path of `yield` expressions -- but historically it appears we wouldn't have invented `await` at all if we hadn't first tried `yield` and then `yield from` to build coroutines, so I don't think this is so bad after all. :-) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sat Nov 25 16:05:39 2017 From: mertz at gnosis.cx (David Mertz) Date: Sat, 25 Nov 2017 13:05:39 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: FWIW, on a side point. I use 'yield' and 'yield from' ALL THE TIME in real code. Probably 80% of those would be fine with yield statements, but a significant fraction use `gen.send()`. On the other hand, I have yet to use 'await', or 'async' outside of pedagogical contexts. There are a whole lot of generators, including ones utilizing state injection, that are useful without the scaffolding of an event loop, in synchronous code. Of course, I never use them in comprehensions or generator expressions. And even after reading every post in this thread, the behavior (either existing or desired by some) such constructs have is murky and difficult for me to reason about. I strongly support deprecation or even just immediate SyntaxError in 3.7. On Nov 25, 2017 12:38 PM, "Guido van Rossum" wrote: > On Sat, Nov 25, 2017 at 12:17 PM Brett Cannon wrote: >> > On Fri, Nov 24, 2017, 19:32 Guido van Rossum, wrote: >>> >>>> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum >>>> wrote: >>>> >>>>> The more I hear about this topic, the more I think that `await`, >>>>> `yield` and `yield from` should all be banned from occurring in all >>>>> comprehensions and generator expressions. That's not much different from >>>>> disallowing `return` or `break`. >>>>> >>>> >>>> From the responses it seems that I tried to simplify things too far. >>>> Let's say that `await` in comprehensions is fine, as long as that >>>> comprehension is contained in an `async def`. While we *could* save `yield >>>> [from]` in comprehensions, I still see it as mostly a source of confusion, >>>> and the fact that the presence of `yield [from]` *implicitly* makes the >>>> surrounding `def` a generator makes things worse. It just requires too many >>>> mental contortions to figure out what it does. >>>> >>>> I still propose to rule out all of the above from generator >>>> expressions, because those can escape from the surrounding scope.
>>>> >>> >>> +1 from me >>> >> > On Sat, Nov 25, 2017 at 9:21 AM, Yury Selivanov > wrote: > >> So we are keeping asynchronous generator expressions as long as they are >> defined in an 'async def' coroutine? >> > > I would be happy to declare that `await` is out of scope for this thread. > It seems that it is always well-defined and sensible what it does in > comprehensions and in genexprs. (Although I can't help noticing that PEP > 530 does not appear to propose `await` in generator expressions -- it > proposes `async for` in comprehensions and in genexprs, and `await` in > comprehensions only -- but they appear to be accepted nevertheless.) > > So we're back to the original issue, which is that `yield` inside a > comprehension accidentally makes it become a generator rather than a list, > set or dict. I believe that this can be fixed. But I don't believe we > should fix it. I believe we should ban `yield` from comprehensions and from > genexprs. We don't need it, and it's confused most everyone. And the ban > should extend to `yield from` in those same contexts. I think we have a > hope for consensus on this. > > (I also think that if we had invented `await` earlier we wouldn't have > gone down the path of `yield` expressions -- but historically it appears we > wouldn't have invented `await` at all if we hadn't first tried `yield` and > then `yield from` to build coroutines, so I don't think this is so bad after > all. :-) > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mertz%40gnosis.cx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sat Nov 25 16:06:20 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 25 Nov 2017 16:06:20 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes Message-ID: The updated version should show up at https://www.python.org/dev/peps/pep-0557/ shortly. The major changes from the previous version are: - Add InitVar to specify initialize-only fields. - Renamed __dataclass_post_init__() to __post_init__(). - Rename cmp to compare. - Added eq, separate from compare, so you can test unorderable items for equality. - Fleshed out asdict() and astuple(). - Changed replace() to just call __init__(), and dropped the complex post-create logic. The only open issues I know of are: - Should object comparison require an exact match on the type? https://github.com/ericvsmith/dataclasses/issues/51 - Should the replace() function be renamed to something else? https://github.com/ericvsmith/dataclasses/issues/77 Most of the items that were previously discussed on python-dev were discussed in detail at https://github.com/ericvsmith/dataclasses. Before rehashing an old discussion, please check there first. Also at https://github.com/ericvsmith/dataclasses is an implementation, with tests, that should work with 3.6 and 3.7. The only action item for the code is to clean up the implementation of InitVar, but that's waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do some work. Feedback is welcomed! Eric.
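(To make the list of changes above concrete, a minimal usage sketch against the draft PEP 557 API -- the class and field names here are invented for illustration only:)

from dataclasses import dataclass, field, InitVar

@dataclass
class InventoryItem:
    name: str
    unit_price: float
    quantity: int = 0
    total: float = field(init=False, default=0.0)
    discount: InitVar[float] = 0.0   # init-only: passed to __init__, not stored as a field

    def __post_init__(self, discount):
        # runs after the generated __init__ and receives the init-only values
        self.total = self.unit_price * self.quantity * (1 - discount)

item = InventoryItem('widget', unit_price=2.0, quantity=10, discount=0.5)
# item.total == 10.0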
From yselivanov.ml at gmail.com Sat Nov 25 17:23:32 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sat, 25 Nov 2017 17:23:32 -0500 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 3:27 PM, Guido van Rossum wrote: > On Sat, Nov 25, 2017 at 9:21 AM, Yury Selivanov > wrote: >> >> So we are keeping asynchronous generator expressions as long as they are >> defined in an 'async def' coroutine? > > > I would be happy to declare that `await` is out of scope for this thread. It > seems that it is always well-defined and sensible what it does in > comprehensions and in genexprs. (Although I can't help noticing that PEP 530 > does not appear to propose `await` in generator expressions -- it proposes > `async for` in comprehensions and in genexprs, and `await` in comprehensions > only -- but they appear to be accepted nevertheless.) Great! As for PEP 530, after reading this discussion I realized how many things in it are underspecified. I'll be working on the PEP 550 successor next week and will also try to update PEP 530 to make it clearer. > So we're back to the original issue, which is that `yield` inside a > comprehension accidentally makes it become a generator rather than a list, > set or dict. I believe that this can be fixed. But I don't believe we should > fix it. I believe we should ban `yield` from comprehensions and from > genexprs. +1 from me. Yury From tjreedy at udel.edu Sat Nov 25 18:22:41 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 25 Nov 2017 18:22:41 -0500 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements In-Reply-To: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> Message-ID: On 11/25/2017 1:55 AM, David Cuthbert wrote: > First time contributing back -- if I should be filing a PEP or something like that for this, please let me know. I don't think a PEP is needed. > Coming from https://bugs.python.org/issue32117, unparenthesized tuple unpacking is allowed in assignments: > > rest = (4, 5, 6) > a = 1, 2, 3, *rest Because except for (), it is ',', not '()' that makes a tuple a tuple. > but not in yield or return statements (these result in SyntaxErrors): > > return 1, 2, 3, *rest > yield 1, 2, 3, *rest To be crystal clear, a parenthesized tuple with unpacking *is* valid. return (1, 2, 3, *rest) yield (1, 2, 3, *rest) So is an un-parenthesized tuple without unpacking. Since return and yield are often the first half of a cross-namespace assignment, requiring the () is a bit surprising. Perhaps someone else has a good reason for the difference. Otherwise, +1 on the change. > The unpacking in assignments was enabled by a pre-3.2 commit that I haven't yet been able to track back to a discussion, but I suspect this asymmetry is unintentional. Here's the original commit: > https://github.com/python/cpython/commit/4905e80c3d2f6abb613d212f0313d1dfe09475dc > > I've submitted a patch (CLA is signed and submitted, not yet processed), and Serhiy said that since it changes the grammar I should have it reviewed here and have signoff by the BDFL. > While I haven't had a need for this myself, it was brought up by a user on StackOverflow (https://stackoverflow.com/questions/47272460/python-tuple-unpacking-in-return-statement/47326859). > > Thanks!
> Dave > > -- Terry Jan Reedy From greg.ewing at canterbury.ac.nz Sat Nov 25 18:24:08 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 26 Nov 2017 12:24:08 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171123124928.304986e9@fsol> Message-ID: <5A19FB98.8050903@canterbury.ac.nz> Nick Coghlan wrote: > def example(): > comp1 = yield from [(yield x) for x in ('1st', '2nd')] > comp2 = yield from [(yield x) for x in ('3rd', '4th')] > return comp1, comp2 > If the implicit "yield from" idea seems too magical, then the other > direction we could go is to make the immediate "yield from" mandatory, Whether it's "too magical" or not depends on your stance with regard to the implicit function scope of a comprehension. There seem to be two irreconcilable schools of thought on that: (1) It's an implementation detail that happens to be used to stop the loop variable from leaking. Most of the time you can ignore it and use the nested-loop-expansion mental model. Most people seem to think this way in practice. (2) It's an important part of the semantics of comprehensions that needs to be taken into account at all times. This seems to be Guido's position. If you're in school (1), then the current behaviour of yield in a comprehension is the thing that's weird and magical. Moreover, it's bad magic, because it does something you almost certainly don't want. Adding an implicit yield-from would make things *less* magical and more understandable. However, it will break things for people in school (2), who are aware of the arcane details and know enough to put the yield-from in themselves when needed. Just because it breaks their code doesn't necessarily mean they will find it more magical, though. Generators are already sufficiently magical that they're probably smart enough to figure out what's going on. Personally I'm somewhat uncomfortable with having a rule that "yield" in a comprehension creates a subgenerator, because the syntactic clues that a given "yield" is in a comprehension are fairly subtle. For the same reason, I'm uncomfortable with the nested function scope of a comprehension being an official part of the semantics. All other function scopes are introduced by a very clear piece of syntax, but with comprehensions you have to notice a combination of things that don't have anything to do with function scopes by themselves. So I think I've just talked myself into the opinion that anything that would allow you to tell whether comprehensions have an implicit function scope or not should be banned. -- Greg From gvanrossum at gmail.com Sat Nov 25 18:26:45 2017 From: gvanrossum at gmail.com (Guido van Rossum) Date: Sat, 25 Nov 2017 15:26:45 -0800 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements In-Reply-To: References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> Message-ID: I think the proposal is reasonable and won't require a PEP. On Nov 25, 2017 3:25 PM, "Terry Reedy" wrote: > On 11/25/2017 1:55 AM, David Cuthbert wrote: > >> First time contributing back -- if I should be filing a PEP or something >> like that for this, please let me know. >> > > I don't think a PEP is needed. > > Coming from https://bugs.python.org/issue32117, unparenthesized tuple >> unpacking is allowed in assignments: >> >> rest = (4, 5, 6) >> a = 1, 2, 3, *rest >> > > Because except for (), it is ',', not '()' that makes a tuple a tuple. 
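(A two-line illustration of that point:)

t = 1,        # the trailing comma makes this a one-element tuple: (1,)
n = (1)       # parentheses alone do not: this is just the int 1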
> but not in yield or return statements (these result in SyntaxErrors): >> >> return 1, 2, 3, *rest >> yield 1, 2, 3, *rest >> > > To be crystal clear, a parenthesized tuple with unpacking *is* valid. > > return (1, 2, 3, *rest) > yield (1, 2, 3, *rest) > > So is an un-parenthesized tuple without unpacking. > > Since return and yield are often the first half of a cross-namespace > assignment, requiring the () is a bit surprising. Perhaps someone else has > a good reason for the difference. Otherwise, +1 on the change. > > The unpacking in assignments was enabled by a pre-3.2 commit that I >> haven't yet been able to track back to a discussion, but I suspect this >> asymmetry is unintentional. Here's the original commit: >> https://github.com/python/cpython/commit/4905e80c3d2f6abb613 >> d212f0313d1dfe09475dc >> >> I've submitted a patch (CLA is signed and submitted, not yet processed), >> and Serhiy said that since it changes the grammar I should have it reviewed >> here and have signoff by the BDFL. >> > > > > > While I haven't had a need for this myself, it was brought up by a user on >> StackOverflow (https://stackoverflow.com/questions/47272460/python-tuple- >> unpacking-in-return-statement/47326859). >> >> Thanks! >> Dave >> >> >> > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Nov 25 18:37:12 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 25 Nov 2017 15:37:12 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 1:05 PM, David Mertz wrote: > FWIW, on a side point. I use 'yield' and 'yield from' ALL THE TIME in real > code. Probably 80% of those would be fine with yield statements, but a > significant fraction use `gen.send()`. > > On the other hand, I have yet to use 'await', or 'async' outside of > pedagogical contexts. There are a whole lot of generators, including ones > utilizing state injection, that are useful without the scaffolding of an > event loop, in synchronous code. > Maybe you didn't realize async/await don't need an event loop? Driving an async/await-based coroutine is just as simple as driving a yield-from-based one (`await` does exactly the same thing as `yield from`). But I won't argue with the usefulness of `yield [from]` in expressions. That is not the topic of this thread. > Of course, I never use them in comprehensions or generator expressions. > And even after reading every post in this thread, the behavior (either > existing or desired by some) such constructs have is murky and difficult > for me to reason about. I strongly support deprecation or even just > immediate SyntaxError in 3.7. > Maybe the rest of the discussion should be about deprecation vs. SyntaxError in Python 3.7. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From greg.ewing at canterbury.ac.nz Sat Nov 25 18:45:56 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 26 Nov 2017 12:45:56 +1300 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <20171123124928.304986e9@fsol> <5A1749AF.6010500@canterbury.ac.nz> Message-ID: <5A1A00B4.205@canterbury.ac.nz> Serhiy Storchaka wrote: > Ivan explained that this function should be rough equivalent to > > def f(): > t = [(yield i) for i in range(3)] > return (x for x in t) > > This is a *rough* equivalent. There are differences in details. The details would seem to be overwhelmingly important, though. I take it you're saying the semantics should be "like the above except that the returned iterator is lazy". But that seems impossible, because f() can't return anything until it finishes having all its values sent to it. -- Greg From guido at python.org Sat Nov 25 18:45:44 2017 From: guido at python.org (Guido van Rossum) Date: Sat, 25 Nov 2017 15:45:44 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: <5A19FB98.8050903@canterbury.ac.nz> References: <20171123124928.304986e9@fsol> <5A19FB98.8050903@canterbury.ac.nz> Message-ID: On Sat, Nov 25, 2017 at 3:24 PM, Greg Ewing wrote: > Nick Coghlan wrote: > > def example(): >> comp1 = yield from [(yield x) for x in ('1st', '2nd')] >> comp2 = yield from [(yield x) for x in ('3rd', '4th')] >> return comp1, comp2 >> > > If the implicit "yield from" idea seems too magical, then the other >> direction we could go is to make the immediate "yield from" mandatory, >> > > Whether it's "too magical" or not depends on your stance with > regard to the implicit function scope of a comprehension. There > seem to be two irreconcilable schools of thought on that: > > (1) It's an implementation detail that happens to be used to > stop the loop variable from leaking. Most of the time you can > ignore it and use the nested-loop-expansion mental model. > Most people seem to think this way in practice. > > (2) It's an important part of the semantics of comprehensions > that needs to be taken into account at all times. This seems > to be Guido's position. > > If you're in school (1), then the current behaviour of yield > in a comprehension is the thing that's weird and magical. > Moreover, it's bad magic, because it does something you > almost certainly don't want. Adding an implicit yield-from > would make things *less* magical and more understandable. > > However, it will break things for people in school (2), who > are aware of the arcane details and know enough to put the > yield-from in themselves when needed. > > Just because it breaks their code doesn't necessarily mean > they will find it more magical, though. Generators are > already sufficiently magical that they're probably smart > enough to figure out what's going on. > > Personally I'm somewhat uncomfortable with having a rule > that "yield" in a comprehension creates a subgenerator, > because the syntactic clues that a given "yield" is in a > comprehension are fairly subtle. > > For the same reason, I'm uncomfortable with the nested > function scope of a comprehension being an official part > of the semantics. All other function scopes are introduced > by a very clear piece of syntax, but with comprehensions > you have to notice a combination of things that don't > have anything to do with function scopes by themselves. 
> > So I think I've just talked myself into the opinion that > anything that would allow you to tell whether comprehensions > have an implicit function scope or not should be banned. > That's not an unreasonable position, and one that would leave a door open to a general design for inner scopes that are not implemented using functions or code objects (something to ponder for 3.8 perhaps?). In terms of the current thread the only thing it would disallow seems to be the current behavior of yield in comprehensions (and perhaps genexprs?). -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Sat Nov 25 19:20:34 2017 From: mertz at gnosis.cx (David Mertz) Date: Sat, 25 Nov 2017 16:20:34 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum wrote: > Maybe you didn't realize async/await don't need an event loop? Driving an > async/await-based coroutine is just as simple as driving a yield-from-based > one (`await` does exactly the same thing as `yield from`). > I realize I *can*, but it seems far from straightforward. I guess this is really a python-list question or something, but what is the async/await spelling of something toy like: In [1]: def fib(): ...: a, b = 1, 1 ...: while True: ...: yield a ...: a, b = b, a+b ...: In [2]: from itertools import takewhile In [3]: list(takewhile(lambda x: x<200, fib())) Out[3]: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144] > Maybe the rest of the discussion should be about deprecation vs. > SyntaxError in Python 3.7. > I vote SyntaxError, of course. :-) -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sat Nov 25 19:44:00 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 25 Nov 2017 19:44:00 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: <9bc6b87b-eed3-a387-2dd4-3b0043abab31@trueblade.com> One more change: - Per-field metadata, for use by third parties. Also, thanks to Guido and Ivan for all of their feedback on the various issues that got the PEP to this point. Eric. On 11/25/2017 4:06 PM, Eric V. Smith wrote: > The updated version should show up at > https://www.python.org/dev/peps/pep-0557/ shortly. > > The major changes from the previous version are: > > - Add InitVar to specify initialize-only fields. > - Renamed __dataclass_post_init__() to __post_init__(). > - Rename cmp to compare. > - Added eq, separate from compare, so you can test > unorderable items for equality. > - Fleshed out asdict() and astuple(). > - Changed replace() to just call __init__(), and dropped > the complex post-create logic. > > The only open issues I know of are: > - Should object comparison require an exact match on the type? > https://github.com/ericvsmith/dataclasses/issues/51 > - Should the replace() function be renamed to something else? > 
https://github.com/ericvsmith/dataclasses/issues/77 > > Most of the items that were previously discussed on python-dev were > discussed in detail at https://github.com/ericvsmith/dataclasses. Before > rehashing an old discussion, please check there first. > > Also at https://github.com/ericvsmith/dataclasses is an implementation, > with tests, that should work with 3.6 and 3.7. The only action item for > the code is to clean up the implementation of InitVar, but that's > waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do > some work. > > Feedback is welcomed! > > Eric. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > From ncoghlan at gmail.com Sat Nov 25 19:57:48 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 26 Nov 2017 10:57:48 +1000 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On 26 November 2017 at 02:59, Guido van Rossum wrote: > > I'd be happy to stop with the conclusion that we're going to rip out some > confusing syntax rather than trying to generate code for it -- IMO we've > proved to ourselves that this stuff is too complicated to be useful. I'll also note that even if we go through a period of deprecating and then prohibiting the syntax entirely, we'll still have the option of bringing support for "yield" in comprehensions back later with deliberately designed semantics (as happened for "await" in https://www.python.org/dev/peps/pep-0530/#await-in-comprehensions), as opposed to the accident-of-implementation semantics they have now. It may also turn out that as more asynchronous code is able to switch to being 3.6+ only, allowing "await" and prohibiting "yield" will prove to be sufficient for all practical purposes (as even the "yield from" based spelling is Python-3-only, so it's only code that still has to support 3.3, 3.4, 3.5, without needing to support 2.7, that could use "yield from + yield" comprehensions, but wouldn't have the option of just switching to async+await instead). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Nov 25 23:55:26 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 26 Nov 2017 14:55:26 +1000 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements In-Reply-To: References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> Message-ID: On 26 November 2017 at 09:22, Terry Reedy wrote: > Since return and yield are often the first half of a cross-namespace > assignment, requiring the () is a bit surprising. Perhaps someone else has > a good reason for the difference. These kinds of discrepancies tend to arise because there are a few different grammar nodes for "comma separated sequence of expressions", which makes it possible to miss some when enhancing the tuple syntax. Refactoring the grammar to eliminate the duplication isn't especially easy, and we don't change the syntax all that often, so it makes sense to treat cases like this one as bugs in the implementation of the original syntax change (except that the "don't change the Grammar in maintenance releases" guideline means they still need to be handled as new features when it comes to fixing them). Cheers, Nick. P.S. 
That said, I do wonder if it might be feasible to write a "Grammar consistency check" test that ensured the known duplicate nodes at least have consistent definitions, such that missing one in a syntax update will cause an automated test failure. Unfortunately, the nodes typically haven't been combined because they have some *intentional* differences in exactly what they allow, so I also suspect that this is easier said than done. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From carl at oddbird.net Sun Nov 26 00:23:38 2017 From: carl at oddbird.net (Carl Meyer) Date: Sat, 25 Nov 2017 21:23:38 -0800 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: <7f2e23a2-5de9-bfed-aa7f-5e8d27c23997@oddbird.net> Hi Eric, Really excited about this PEP, thanks for working on it. A couple minor questions: > If compare is True, then eq is ignored, and __eq__ and __ne__ will be automatically generated. IMO it's generally preferable to make nonsensical parameter combinations an immediate error, rather than silently ignore one of them. Is there a strong reason for letting nonsense pass silently here? (I reviewed the previous thread; there was a lot of discussion about enums/flags vs two boolean params, but I didn't see explicit discussion of this issue; the only passing references I noticed said the invalid combo should be "disallowed", e.g. Guido in [1], which to me implies "an error.") > isdataclass(instance): Returns True if instance is an instance of a Data Class, otherwise returns False. Something smells wrong with the naming here. If I have @dataclass class Person: name: str I think it would be considered obvious and undeniable (in English prose, anyway) that Person is a dataclass. So it seems wrong to have `isdataclass(Person)` return `False`. Is there a reason not to let it handle either a class or an instance (looks like it would actually simplify the implementation)? Carl [1] https://mail.python.org/pipermail/python-dev/2017-September/149505.html On 11/25/2017 01:06 PM, Eric V. Smith wrote: > The updated version should show up at > https://www.python.org/dev/peps/pep-0557/ shortly. > > The major changes from the previous version are: > > - Add InitVar to specify initialize-only fields. > - Renamed __dataclass_post_init__() to __post_init__(). > - Rename cmp to compare. > - Added eq, separate from compare, so you can test > unorderable items for equality. > - Fleshed out asdict() and astuple(). > - Changed replace() to just call __init__(), and dropped > the complex post-create logic. > > The only open issues I know of are: > - Should object comparison require an exact match on the type? > https://github.com/ericvsmith/dataclasses/issues/51 > - Should the replace() function be renamed to something else? > https://github.com/ericvsmith/dataclasses/issues/77 > > Most of the items that were previously discussed on python-dev were > discussed in detail at https://github.com/ericvsmith/dataclasses. Before > rehashing an old discussion, please check there first. > > Also at https://github.com/ericvsmith/dataclasses is an implementation, > with tests, that should work with 3.6 and 3.7. The only action item for > the code is to clean up the implementation of InitVar, but that's > waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do > some work. > > Feedback is welcomed! > > Eric.
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/carl%40oddbird.net -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ethan at stoneleaf.us Sun Nov 26 02:13:09 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Sat, 25 Nov 2017 23:13:09 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: Message-ID: <5A1A6985.8040804@stoneleaf.us> On 11/25/2017 04:20 PM, David Mertz wrote: > On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum wrote: >> Maybe you didn't realize async/await don't need an event loop? Driving an async/await-based coroutine is just as >> simple as driving a yield-from-based one (`await` does exactly the same thing as `yield from`). > > I realize I *can*, but it seems far from straightforward. I guess this is really a python-list question or something, > but what is the async/await spelling of something toy like: > > In [1]: def fib(): > ...: a, b = 1, 1 > ...: while True: > ...: yield a > ...: a, b = b, a+b > ...: > > In [2]: from itertools import takewhile > > In [3]: list(takewhile(lambda x: x<200, fib())) > Out[3]: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144] >> Maybe the rest of the discussion should be about deprecation vs. SyntaxError in Python 3.7. > > I vote SyntaxError, of course. :-) Given the recent thread about the difficulty of noticing DeprecationWarnings, I also vote SyntaxError. On the other hand, if we have a change in 3.7 about the visibility of DeprecationWarnings, this would make an excellent test case. -- ~Ethan~ From eric at trueblade.com Sun Nov 26 03:35:02 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 26 Nov 2017 03:35:02 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <7f2e23a2-5de9-bfed-aa7f-5e8d27c23997@oddbird.net> References: <7f2e23a2-5de9-bfed-aa7f-5e8d27c23997@oddbird.net> Message-ID: On 11/26/2017 12:23 AM, Carl Meyer wrote: > Hi Eric, > > Really excited about this PEP, thanks for working on it. Thanks, Carl. > A couple minor > questions: > >> If compare is True, then eq is ignored, and __eq__ and __ne__ will be > automatically generated. > > IMO it's generally preferable to make nonsensical parameter combinations > an immediate error, rather than silently ignore one of them. Is there a > strong reason for letting nonsense pass silently here? > > (I reviewed the previous thread; there was a lot of discussion about > enums/flags vs two boolean params, but I didn't see explicit discussion > of this issue; the only passing references I noticed said the invalid > combo should be "disallowed", e.g. Guido in [1], which to me implies "an > error.") I think you're right here. I'll change it to a ValueError. >> isdataclass(instance): Returns True if instance is an instance of a > Data Class, otherwise returns False. > > Something smells wrong with the naming here. If I have > > @dataclass > class Person: > name: str > > I think it would be considered obvious and undeniable (in English prose, > anyway) that Person is a dataclass. So it seems wrong to have > `isdataclass(Person)` return `False`. Is there a reason not to let it > handle either a class or an instance (looks like it would actually > simplify the implementation)? 
I think of this as really "isdataclassinstance". Let's see what others think. There are places in dataclasses.py where I need to know both ways: for example fields() works with either a class or instance, but asdict() just an instance. For what it's worth, the equivalent attrs API, attr.has(), returns True for both an instance and a class. And the recommended solution for a namedtuple (check for existence of _fields) would also work for an instance and a class. And I suppose it's easy enough for the caller to further disallow a class, if that's what they want. Eric. > > Carl > > > [1] https://mail.python.org/pipermail/python-dev/2017-September/149505.html > > On 11/25/2017 01:06 PM, Eric V. Smith wrote: >> The updated version should show up at >> https://www.python.org/dev/peps/pep-0557/ shortly. >> >> The major changes from the previous version are: >> >> - Add InitVar to specify initialize-only fields. >> - Renamed __dataclass_post_init__() to __post_init(). >> - Rename cmp to compare. >> - Added eq, separate from compare, so you can test >> ? unorderable items for equality. >> - Flushed out asdict() and astuple(). >> - Changed replace() to just call __init__(), and dropped >> ? the complex post-create logic. >> >> The only open issues I know of are: >> - Should object comparison require an exact match on the type? >> ? https://github.com/ericvsmith/dataclasses/issues/51 >> - Should the replace() function be renamed to something else? >> ? https://github.com/ericvsmith/dataclasses/issues/77 >> >> Most of the items that were previously discussed on python-dev were >> discussed in detail at https://github.com/ericvsmith/dataclasses. Before >> rehashing an old discussion, please check there first. >> >> Also at https://github.com/ericvsmith/dataclasses is an implementation, >> with tests, that should work with 3.6 and 3.7. The only action item for >> the code is to clean up the implementation of InitVar, but that's >> waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do >> some work. >> >> Feedback is welcomed! >> >> Eric. >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/carl%40oddbird.net > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > From eric at trueblade.com Sun Nov 26 03:48:01 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 26 Nov 2017 03:48:01 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <7f2e23a2-5de9-bfed-aa7f-5e8d27c23997@oddbird.net> Message-ID: <636ac1ea-731b-8606-e974-c55878d976b5@trueblade.com> On 11/26/2017 3:35 AM, Eric V. Smith wrote: > On 11/26/2017 12:23 AM, Carl Meyer wrote: >> A couple minor >> questions: >> >>> If compare is True, then eq is ignored, and __eq__ and __ne__ will be >> automatically generated. >> >> IMO it's generally preferable to make nonsensical parameter combinations >> an immediate error, rather than silently ignore one of them. Is there a >> strong reason for letting nonsense pass silently here? 
>> >> (I reviewed the previous thread; there was a lot of discussion about >> enums/flags vs two boolean params, but I didn't see explicit discussion >> of this issue; the only passing references I noticed said the invalid >> combo should be "disallowed", e.g. Guido in [1], which to me implies "an >> error.") > > I think you're right here. I'll change it to a ValueError. While creating an issue for this (https://github.com/ericvsmith/dataclasses/issues/88), it occurs to me that the class-level parameter really should be "order" or "orderable", not "compare". It made more sense when it was called "cmp", but "compare" now seems wrong. Because "eq" says "can I compare two instances", and what's currently called "compare" is "can I order two instances". Nick had a similar suggestion before the PEP was written (https://mail.python.org/pipermail/python-ideas/2017-May/045740.html). The field-level parameter should stay "compare", because it's used for both __gt__ and friends, as well as __eq__ and __ne__. It's saying "is this field used in all of the comparison methods". Eric. From eric at trueblade.com Sun Nov 26 10:14:24 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 26 Nov 2017 10:14:24 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <636ac1ea-731b-8606-e974-c55878d976b5@trueblade.com> References: <7f2e23a2-5de9-bfed-aa7f-5e8d27c23997@oddbird.net> <636ac1ea-731b-8606-e974-c55878d976b5@trueblade.com> Message-ID: On 11/26/2017 3:48 AM, Eric V. Smith wrote: > While creating an issue for this > (https://github.com/ericvsmith/dataclasses/issues/88), it occurs to me > that the class-level parameter really should be "order" or "orderable", > not "compare". It made more sense when it was called "cmp", but > "compare" now seems wrong. > > Because "eq" says "can I compare two instances", and what's currently > called "compare" is "can I order two instances". Nick had a similar > suggestion before the PEP was written > (https://mail.python.org/pipermail/python-ideas/2017-May/045740.html). > > The field-level parameter should stay "compare", because it's used for > both __gt__ and friends, as well as __eq__ and __ne__. It's saying "is > this field used in all of the comparison methods". I created https://github.com/ericvsmith/dataclasses/issues/90 for this. I think I'll leave 'eq' alone, and change 'compare' to 'order', for the class-level parameter name. Eric. From guido at python.org Sun Nov 26 13:03:54 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 26 Nov 2017 10:03:54 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 4:57 PM, Nick Coghlan wrote: > On 26 November 2017 at 02:59, Guido van Rossum wrote: > > > > I'd be happy to stop with the conclusion that we're going to rip out some > > confusing syntax rather than trying to generate code for it -- IMO we've > > proved to ourselves that this stuff is too complicated to be useful. > > I'll also note that even if we go through a period of deprecating and > then prohibiting the syntax entirely, we'll still have the option of > bringing support for "yield" in comprehensions back later with > deliberately designed semantics (as happened for "await" in > https://www.python.org/dev/peps/pep-0530/#await-in-comprehensions), as > opposed to the accident-of-implementation semantics they have now. 
> > It may also turn out that as more asynchronous code is able to switch > to being 3.6+ only, allowing "await" and prohibiting "yield" will > prove to be sufficient for all practical purposes (as even the "yield > from" based spelling is Python-3-only, so it's only code that still > has to support 3.3, 3.4, 3.5, without needing to support 2.7, that > could use "yield from + yield" comprehensions, but wouldn't have the > option of just switching to async+await instead). > And I'll also note that some style guides recommend using comprehensions only for simple cases. Example: https://google.github.io/styleguide/pyguide.html#List_Comprehensions -- expand by clicking on the arrow; the "Con" comment is "Complicated list comprehensions or generator expressions can be hard to read." -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Nov 26 13:04:23 2017 From: brett at python.org (Brett Cannon) Date: Sun, 26 Nov 2017 18:04:23 +0000 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: On Sat, Nov 25, 2017, 14:00 Eric V. Smith, wrote: > The updated version should show up at > https://www.python.org/dev/peps/pep-0557/ shortly. > > The major changes from the previous version are: > > - Add InitVar to specify initialize-only fields. > - Renamed __dataclass_post_init__() to __post_init(). > - Rename cmp to compare. > - Added eq, separate from compare, so you can test > unorderable items for equality. > - Flushed out asdict() and astuple(). > - Changed replace() to just call __init__(), and dropped > the complex post-create logic. > It looks great and I'm excited to get to start using this PEP! > The only open issues I know of are: > - Should object comparison require an exact match on the type? > https://github.com/ericvsmith/dataclasses/issues/51 I say don't require the type comparison for duck typing purposes. -Brett > - Should the replace() function be renamed to something else? > https://github.com/ericvsmith/dataclasses/issues/77 > > Most of the items that were previously discussed on python-dev were > discussed in detail at https://github.com/ericvsmith/dataclasses. Before > rehashing an old discussion, please check there first. > > Also at https://github.com/ericvsmith/dataclasses is an implementation, > with tests, that should work with 3.6 and 3.7. The only action item for > the code is to clean up the implementation of InitVar, but that's > waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do > some work. > > Feedback is welcomed! > > Eric. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sun Nov 26 15:22:56 2017 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 26 Nov 2017 15:22:56 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: On 11/26/2017 1:04 PM, Brett Cannon wrote: > The only open issues I know of are: > - Should object comparison require an exact match on the type? > https://github.com/ericvsmith/dataclasses/issues/51 > > > I say don't require the type comparison for duck typing purposes. 
The problem with that is that you end up with cases like this, which I don't think we want: @dataclass class Point: x: int y: int @dataclass class Point3d: x: int y: int z: int assert Point(1, 2) == Point3d(1, 2, 3) Eric. From njs at pobox.com Sun Nov 26 15:29:30 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 26 Nov 2017 12:29:30 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum wrote: > On Sat, Nov 25, 2017 at 1:05 PM, David Mertz wrote: >> >> FWIW, on a side point. I use 'yield' and 'yield from' ALL THE TIME in real >> code. Probably 80% of those would be fine with yield statements, but a >> significant fraction use `gen.send()`. >> >> On the other hand, I have yet once to use 'await', or 'async' outside of >> pedagogical contexts. There are a whole lot of generators, including ones >> utilizing state injection, that are useful without the scaffolding of an >> event loop, in synchronous code. > > > Maybe you didn't realize async/await don't need an event loop? Driving an > async/await-based coroutine is just as simple as driving a yield-from-based > one (`await` does exactly the same thing as `yield from`). Technically anything you can write with yield/yield from could also be written using async/await and vice-versa, but I think it's actually nice to have both in the language. The distinction I'd make is that yield/yield from is what you should use for ad hoc coroutines where the person writing the code that has 'yield from's in it is expected to understand the details of the coroutine runner, while async/await is what you should use when the coroutine running is handled by a library like asyncio, and the person writing code with 'await's in it is expected to treat coroutine stuff as an opaque implementation detail. (NB I'm using "coroutine" in the CS sense here, where generators and async functions are both "coroutines".) I think of this as being sort of half-way between a style guideline and a technical guideline. It's like the guideline that lists should be homogenously-typed and variable length, while tuples are heterogenously-typed and fixed length: there's nothing in the language that outright *enforces* this, but it's a helpful convention *and* things tend to work better if you go along with it. Here are some technical issues you'll run into if you try to use async/await for ad hoc coroutines: - If you don't iterate an async function, you get a "coroutine never awaited" warning. This may or may not be what you want. - async/await has associated thread-global state like sys.set_coroutine_wrapper and sys.set_asyncgen_hooks. Generally async libraries assume that they own these, and arbitrarily weird things may happen if you have multiple async/await coroutine runners in same thread with no coordination between them. - In async/await, it's not obvious how to write leaf functions: 'await' is equivalent to 'yield from', but there's no equivalent to 'yield'. You have to jump through some hoops by writing a class with a custom __await__ method or using @types.coroutine. Of course it's doable, and it's no big deal if you're writing a proper async library, but it's awkward for quick ad hoc usage. 
For a concrete example of 'ad hoc coroutines' where I think 'yield from' is appropriate, here's wsproto's old 'yield from'-based incremental websocket protocol parser: https://github.com/python-hyper/wsproto/blob/4b7db502cc0568ab2354798552148dadd563a4e3/wsproto/frame_protocol.py#L142 The flow here is: received_frames is the public API: it gives you an iterator over all completed frames. When it stops you're expected to add more data to the buffer and then call it again. Internally, received_frames acts as a coroutine runner for parse_more_gen, which is the main parser that calls various helper methods to parse different parts of the websocket frame. These calls eventually bottom out in _consume_exactly or _consume_at_most, which use 'yield' to "block" until enough data is available in the internal buffer. Basically this is the classic trick of using coroutines to write an incremental state machine parser as ordinary-looking code where the state is encoded in local variables on the stack. Using coroutines here isn't just a cute trick; I'm pretty confident that there is absolutely no other way to write a readable incremental websocket parser in Python. This is the 3rd rewrite of wsproto's parser, and I think I've read the code for all the other Python libraries that do this too. The websocket framing format is branchy enough that trying to write out the state machine explicitly will absolutely tie you in knots. (Of course we then rewrote wsproto's parser a 4th time for py2 compatibility; the current version's not *terrible* but the 'yield from' version was simpler and more maintainable.) For wsproto's use case, I think using 'await' would be noticeably worse than 'yield from'. It'd make the code more opaque to readers (people know generators but no-one shows up already knowing what @types.coroutine does), the "coroutine never awaited" warnings would be obnoxious (it's totally fine to instantiate a parser and then throw it away without using it!), and the global state issues would make us very nervous (wsproto is absolutely designed to be used alongside a library like asyncio or trio). But that's fine; 'yield from' exists and is perfect for this application. Basically this is a very long way of saying that actually the status quo is pretty good, at least with regard to yield from vs. async/await :-). -n -- Nathaniel J. Smith -- https://vorpus.org From guido at python.org Sun Nov 26 18:51:16 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 26 Nov 2017 15:51:16 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sun, Nov 26, 2017 at 12:29 PM, Nathaniel Smith wrote: > On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum > wrote: > > On Sat, Nov 25, 2017 at 1:05 PM, David Mertz wrote: > >> > >> FWIW, on a side point. I use 'yield' and 'yield from' ALL THE TIME in > real > >> code. Probably 80% of those would be fine with yield statements, but a > >> significant fraction use `gen.send()`. > >> > >> On the other hand, I have yet once to use 'await', or 'async' outside of > >> pedagogical contexts. There are a whole lot of generators, including > ones > >> utilizing state injection, that are useful without the scaffolding of an > >> event loop, in synchronous code. > > > > > > Maybe you didn't realize async/await don't need an event loop? 
Driving an
> > async/await-based coroutine is just as simple as driving a
> > yield-from-based one (`await` does exactly the same thing as
> > `yield from`).
>
> Technically anything you can write with yield/yield from could also be
> written using async/await and vice-versa, but I think it's actually
> nice to have both in the language.

Perhaps. You seem somewhat biased towards the devil you know, but you also
bring up some good points.

> The distinction I'd make is that yield/yield from is what you should
> use for ad hoc coroutines where the person writing the code that has
> 'yield from's in it is expected to understand the details of the
> coroutine runner, while async/await is what you should use when the
> coroutine running is handled by a library like asyncio, and the person
> writing code with 'await's in it is expected to treat coroutine stuff
> as an opaque implementation detail. (NB I'm using "coroutine" in the
> CS sense here, where generators and async functions are both
> "coroutines".)
>
> I think of this as being sort of half-way between a style guideline
> and a technical guideline. It's like the guideline that lists should
> be homogenously-typed and variable length, while tuples are
> heterogenously-typed and fixed length: there's nothing in the language
> that outright *enforces* this, but it's a helpful convention *and*
> things tend to work better if you go along with it.

Hm. That would disappoint me. We carefully tried to design async/await to
*not* require an event loop. (I'll get to the global state below.)

> Here are some technical issues you'll run into if you try to use
> async/await for ad hoc coroutines:
>
> - If you don't iterate an async function, you get a "coroutine never
> awaited" warning. This may or may not be what you want.

It should indicate a bug, and the equivalent bug is silent when you're
using yield-from, so I see this as a positive. If you find yourself
designing an API where abandoning an async function is a valid action you
should probably think twice.

> - async/await has associated thread-global state like
> sys.set_coroutine_wrapper and sys.set_asyncgen_hooks. Generally async
> libraries assume that they own these, and arbitrarily weird things may
> happen if you have multiple async/await coroutine runners in same
> thread with no coordination between them.

The existence of these is indeed a bit unfortunate for this use case. I'm
CC'ing Yury to ask him if he can think of a different way to deal with the
problems that these are supposed to solve. For each, the reason they exist
is itself an edge case -- debugging for the former, finalization for the
latter. A better solution for these problems may also be important for
situations where multiple event loops exist (in the same thread, e.g.
running alternately). Maybe a context manager could be used to manage this
state better?

> - In async/await, it's not obvious how to write leaf functions:
> 'await' is equivalent to 'yield from', but there's no equivalent to
> 'yield'. You have to jump through some hoops by writing a class with a
> custom __await__ method or using @types.coroutine. Of course it's
> doable, and it's no big deal if you're writing a proper async library,
> but it's awkward for quick ad hoc usage.

Ah, yes, you need the equivalent of a Future. Maybe we should have a
simple one in the stdlib that's not tied to asyncio.
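[Editor's sketch, not part of the original thread: one way to drive an
async/await coroutine with nothing but send(), and no event loop anywhere.
The names emit, fib and drive are invented for illustration; only
types.coroutine is a real stdlib decorator for generator-based leaf
awaitables.]

    import types

    @types.coroutine
    def emit(value):
        # Leaf awaitable: suspend the calling coroutine, hand `value` to
        # whoever is driving it, and return whatever gets sent back in.
        return (yield value)

    async def fib():
        a, b = 1, 2
        while True:
            await emit(a)
            a, b = b, a + b

    def drive(coro, limit):
        # A bare-bones coroutine runner: no event loop, just send().
        results = []
        try:
            value = coro.send(None)
            while value < limit:
                results.append(value)
                value = coro.send(None)
        finally:
            coro.close()
        return results

    print(drive(fib(), 200))   # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]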
> For a concrete example of 'ad hoc coroutines' where I think 'yield > from' is appropriate, here's wsproto's old 'yield from'-based > incremental websocket protocol parser: > > https://github.com/python-hyper/wsproto/blob/ > 4b7db502cc0568ab2354798552148dadd563a4e3/wsproto/frame_protocol.py#L142 > Ah yes, these kinds of parsers are interesting use cases for coroutines. There are many more potential use cases than protocol parsers -- e.g. the Python REPL could really use one if you want to replicate it completely in pure Python. > The flow here is: received_frames is the public API: it gives you an > iterator over all completed frames. When it stops you're expected to > add more data to the buffer and then call it again. Internally, > received_frames acts as a coroutine runner for parse_more_gen, which > is the main parser that calls various helper methods to parse > different parts of the websocket frame. These calls eventually bottom > out in _consume_exactly or _consume_at_most, which use 'yield' to > "block" until enough data is available in the internal buffer. > Basically this is the classic trick of using coroutines to write an > incremental state machine parser as ordinary-looking code where the > state is encoded in local variables on the stack. > > Using coroutines here isn't just a cute trick; I'm pretty confident > that there is absolutely no other way to write a readable incremental > websocket parser in Python. This is the 3rd rewrite of wsproto's > parser, and I think I've read the code for all the other Python > libraries that do this too. The websocket framing format is branchy > enough that trying to write out the state machine explicitly will > absolutely tie you in knots. (Of course we then rewrote wsproto's > parser a 4th time for py2 compatibility; the current version's not > *terrible* but the 'yield from' version was simpler and more > maintainable.) > No argument here. > For wsproto's use case, I think using 'await' would be noticeably > worse than 'yield from'. It'd make the code more opaque to readers > (people know generators but no-one shows up already knowing what > @types.coroutine does), That's "the devil you know" though. I expect few people have a solid understanding of how a bare "yield" works when there's also a "yield from". > the "coroutine never awaited" warnings would > be obnoxious (it's totally fine to instantiate a parser and then throw > it away without using it!), and the global state issues would make us > very nervous (wsproto is absolutely designed to be used alongside a > library like asyncio or trio). But that's fine; 'yield from' exists > and is perfect for this application. > > Basically this is a very long way of saying that actually the status > quo is pretty good, at least with regard to yield from vs. async/await > :-). > Yeah. I'm buying maybe 75% of it. But async/await is still young... -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Sun Nov 26 20:01:00 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sun, 26 Nov 2017 20:01:00 -0500 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: On Sun, Nov 26, 2017 at 6:51 PM, Guido van Rossum wrote: > On Sun, Nov 26, 2017 at 12:29 PM, Nathaniel Smith wrote: [..] 
>> - async/await has associated thread-global state like
>> sys.set_coroutine_wrapper and sys.set_asyncgen_hooks. Generally async
>> libraries assume that they own these, and arbitrarily weird things may
>> happen if you have multiple async/await coroutine runners in same
>> thread with no coordination between them.
>
> The existence of these is indeed a bit unfortunate for this use case. I'm
> CC'ing Yury to ask him if he can think of a different way to deal with the
> problems that these are supposed to solve. For each, the reason they exist
> is itself an edge case -- debugging for the former, finalization for the
> latter. A better solution for these problems may also be important for
> situations where multiple event loops exist (in the same thread, e.g.
> running alternately). Maybe a context manager could be used to manage this
> state better?

Yeah, both of them are for solving edge cases:

- sys.set_coroutine_wrapper() is a debug API: asyncio uses it to slightly
enhance the warning about non-awaited coroutines by showing where they
were *created*. Capturing a traceback when we create every coroutine is
expensive, hence we only do that in asyncio debug mode.

- sys.set_asyncgen_hooks() is more important, we use it to let event loops
finalize partially iterated and then abandoned asynchronous generators.

The rule of thumb is to get the previous coro-wrapper/asyncgen-hook, set
your own (both are thread specific), and after you're done, restore the
saved old versions.

This should work fine in all use cases (even a trio event loop nested in
an asyncio event loop). But nested coroutine loops are a sign of bad
design; the nested loop will completely block the execution of the outer
event loop, so situations like that should be avoided. Not to mention that
debugging becomes much harder when you have complex nested coroutine
runners.

If someone wants to run some async/await code without an event loop for
educational purposes or to implement some pattern, they likely don't even
need to use any of the above APIs.

For me the status quo is OK: asyncio/twisted/curio/trio use these APIs
internally, and for them it shouldn't be a problem to use a couple of
low-level functions from sys.

That said I have a couple of ideas:

- We can simplify these low-level APIs by combining
set_coroutine_wrapper() and set_asyncgen_hooks() into one function --
set_async_hooks(coroutine_wrapper=, firstiter=, etc). It could return a
context manager, so the following would be possible:

    with sys.set_async_hooks(...):
        # run async code or an event loop

- Another option is to design an API that would let you stack your
coro-wrapper/asyncgen-hooks, so that many coroutine runners can control
coroutines/generators simultaneously. This sounds very complex to me
though, and I haven't seen any compelling real-world use case that would
require this kind of design.

sys.set_coroutine_wrapper() is marked as an experimental debug API;
sys.set_asyncgen_hooks() is still provisional. So we can in theory
change/improve both of them.
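[Editor's sketch of the save/set/restore dance above packaged as a context
manager, using only today's APIs: the combined sys.set_async_hooks() that
Yury proposes does not exist, and async_hooks below is an invented name.
Only the get/set calls it makes are real.]

    import sys
    from contextlib import contextmanager

    @contextmanager
    def async_hooks(wrapper=None, firstiter=None, finalizer=None):
        # Save whatever the current coroutine runner has installed...
        old_wrapper = sys.get_coroutine_wrapper()
        old_hooks = sys.get_asyncgen_hooks()
        sys.set_coroutine_wrapper(wrapper)
        sys.set_asyncgen_hooks(firstiter=firstiter, finalizer=finalizer)
        try:
            yield
        finally:
            # ...and restore it afterwards, per the rule of thumb above.
            sys.set_coroutine_wrapper(old_wrapper)
            sys.set_asyncgen_hooks(firstiter=old_hooks.firstiter,
                                   finalizer=old_hooks.finalizer)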
Yury

From ncoghlan at gmail.com  Sun Nov 26 21:01:52 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 27 Nov 2017 12:01:52 +1000
Subject: [Python-Dev] Tricky way of of creating a generator via a
 comprehension expression
In-Reply-To: 
References: <5A1682B9.6010908@canterbury.ac.nz>
 <20171123124928.304986e9@fsol>
Message-ID: 

On 27 November 2017 at 06:29, Nathaniel Smith wrote:
> - In async/await, it's not obvious how to write leaf functions:
> 'await' is equivalent to 'yield from', but there's no equivalent to
> 'yield'. You have to jump through some hoops by writing a class with a
> custom __await__ method or using @types.coroutine. Of course it's
> doable, and it's no big deal if you're writing a proper async library,
> but it's awkward for quick ad hoc usage.
>
> For a concrete example of 'ad hoc coroutines' where I think 'yield
> from' is appropriate, here's wsproto's old 'yield from'-based
> incremental websocket protocol parser:
>
> https://github.com/python-hyper/wsproto/blob/4b7db502cc0568ab2354798552148dadd563a4e3/wsproto/frame_protocol.py#L142

sys.set_coroutine_wrapper itself is another case where you genuinely
*can't* rely on async/await in the wrapper implementation. The current
example shown at
https://docs.python.org/3/library/sys.html#sys.set_coroutine_wrapper
is of a case that will *fail*, since it would otherwise result in infinite
recursion when the coroutine wrapper attempts to call the coroutine
wrapper.

Cheers,
Nick.

P.S. Making that point reminded me that I still haven't got around to
updating those docs to also include examples of how to do it *right*:
https://bugs.python.org/issue30578

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From mertz at gnosis.cx  Sun Nov 26 22:20:56 2017
From: mertz at gnosis.cx (David Mertz)
Date: Sun, 26 Nov 2017 19:20:56 -0800
Subject: [Python-Dev] Using async/await in place of yield expression
Message-ID: 

Changing subject line because this is way off to the side. Guido and
Nathaniel point out that you can do everything yield expressions do with
async/await *without* an explicit event loop. While I know that is true,
it feels like the best case is adding fairly considerable ugliness to the
code in the process.

> On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum wrote:
> > Maybe you didn't realize async/await don't need an event loop? Driving
> > an async/await-based coroutine is just as simple as driving a
> > yield-from-based one (`await` does exactly the same thing as
> > `yield from`).

> On Sun, Nov 26, 2017 at 12:29 PM, Nathaniel Smith wrote:
> Technically anything you can write with yield/yield from could also be
> written using async/await and vice-versa, but I think it's actually
> nice to have both in the language.

Here is some code which is definitely "toy", but follows a pattern pretty
similar to things I really code using yield expressions:

In [1]: from itertools import takewhile

In [2]: def injectable_fib(a=1, b=2):
   ...:     while True:
   ...:         new = yield a
   ...:         if new is not None:
   ...:             a, b = new
   ...:         a, b = b, a+b
   ...:

In [3]: f = injectable_fib()

In [4]: list(takewhile(lambda x: x<200, f))
Out[4]: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

In [5]: f.send((100,200))
Out[5]: 200

In [6]: list(takewhile(lambda x: x<1000, f))
Out[6]: [300, 500, 800]

Imagining that 'yield' vanished from the language tomorrow, and I wanted
to write the same thing with async/await, I think the best I can come up
with is... actually, I just don't know how to do it without any `yield`.
I can get as far as a slightly flawed:

In [9]: async def atakewhile(pred, coro):
   ...:     l = []
   ...:     async for x in coro:
   ...:         if pred(x):
   ...:             return l
   ...:         l.append(x)

But I just have no idea what would go in the body of

async def afib_injectable():

(that is, if I'm prohibited a `yield` in there)

-- 
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rosuav at gmail.com  Sun Nov 26 22:43:55 2017
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 27 Nov 2017 14:43:55 +1100
Subject: [Python-Dev] Using async/await in place of yield expression
In-Reply-To: 
References: 
Message-ID: 

On Mon, Nov 27, 2017 at 2:20 PM, David Mertz wrote:
> Changing subject line because this is way off to the side. Guido and
> Nathaniel point out that you can do everything yield expressions do with
> async/await *without* an explicit event loop. While I know that is true,
> it feels like the best case is adding fairly considerable ugliness to the
> code in the process.
>
>> On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum wrote:
>> > Maybe you didn't realize async/await don't need an event loop? Driving
>> > an async/await-based coroutine is just as simple as driving a
>> > yield-from-based one (`await` does exactly the same thing as
>> > `yield from`).
>
>> On Sun, Nov 26, 2017 at 12:29 PM, Nathaniel Smith wrote:
>> Technically anything you can write with yield/yield from could also be
>> written using async/await and vice-versa, but I think it's actually
>> nice to have both in the language.
>
> Here is some code which is definitely "toy", but follows a pattern pretty
> similar to things I really code using yield expressions:
>
> In [1]: from itertools import takewhile
> In [2]: def injectable_fib(a=1, b=2):
>    ...:     while True:
>    ...:         new = yield a
>    ...:         if new is not None:
>    ...:             a, b = new
>    ...:         a, b = b, a+b
>    ...:
> In [3]: f = injectable_fib()
> In [4]: list(takewhile(lambda x: x<200, f))
> Out[4]: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
> In [5]: f.send((100,200))
> Out[5]: 200
> In [6]: list(takewhile(lambda x: x<1000, f))
> Out[6]: [300, 500, 800]
>
> Imagining that 'yield' vanished from the language tomorrow, and I wanted
> to write the same thing with async/await, I think the best I can come up
> with is... actually, I just don't know how to do it without any `yield`.
>
> I can get as far as a slightly flawed:
>
> In [9]: async def atakewhile(pred, coro):
>    ...:     l = []
>    ...:     async for x in coro:
>    ...:         if pred(x):
>    ...:             return l
>    ...:         l.append(x)
>
> But I just have no idea what would go in the body of
>
> async def afib_injectable():
>
> (that is, if I'm prohibited a `yield` in there)
>

Honestly, this is one of Python's biggest problems when it comes to async
functions. I don't know the answer to that question, and I don't know
where in the docs I'd go looking for it. In JavaScript, async functions
are built on top of promises, so you can just say "well, you return a
promise, tada". But in Python, this isn't well documented. Snooping the
source code for asyncio.sleep() shows that it uses @coroutine and yield,
and I have no idea what magic @coroutine does, nor how you'd use it
without yield.
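[Editor's sketch: roughly what @types.coroutine amounts to. It flags a
generator function's code object (CO_ITERABLE_COROUTINE) so that `await`
will accept its generators. The names suspend and demo are invented, and
the driving below happens by hand, outside any event loop.]

    import types

    @types.coroutine
    def suspend(value):
        # Generator-based leaf: yield `value` out to the coroutine runner
        # (asyncio would expect a Future here), then return whatever the
        # runner sends back in.
        return (yield value)

    async def demo():
        reply = await suspend('ping')
        return reply

    c = demo()
    print(c.send(None))        # 'ping' -- the value yielded by suspend()
    try:
        c.send('pong')
    except StopIteration as exc:
        print(exc.value)       # 'pong' -- demo()'s return value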
ChrisA

From caleb.hattingh at gmail.com  Sun Nov 26 23:23:12 2017
From: caleb.hattingh at gmail.com (Caleb Hattingh)
Date: Mon, 27 Nov 2017 14:23:12 +1000
Subject: [Python-Dev] Using async/await in place of yield expression
In-Reply-To: 
References: 
Message-ID: 

On 27 November 2017 at 13:20, David Mertz wrote:
>
> Imagining that 'yield' vanished from the language tomorrow, and I wanted
> to write the same thing with async/await, I think the best I can come up
> with is... actually, I just don't know how to do it without any `yield`.
>

I recently had to look into these things in quite some detail. When using
`yield` for *iteration* specifically, you cannot use async/await to
replace it. I find it easiest to think about all this in the context of
the ABC table:
https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes

The `Generator` ABC also implements the `Iterator` protocol, so this
allows "normal" iteration to work, i.e. for-loop, while-loop, and
comprehensions and so on.

In contrast, the `Coroutine` ABC implements only send(), throw() and
close(). This means that if you want to iterate a coroutine, *something*
must drive send(), and Python's iteration syntax features don't do that.
async/await is only useful when an event loop drives coroutines using the
Coroutine ABC protocol methods.

Note that AsyncIterable and AsyncIterator don't help because objects
implementing these protocols may only legally appear inside a coroutine,
i.e. an `async def` coroutine function, which you still cannot drive via
the Iterator protocol (e.g., from a for-loop).

The Coroutine ABC simply doesn't implement the Iterator protocol, so it
seems it cannot be a replacement for generators. It is however true that
`async/await` completely replaces `yield from` for *coroutines*, but both
of those require a loop of some kind.

I'd be very grateful if anyone can point out if my understanding of the
above is incorrect. Private email is fine if you prefer not to post to
the list.

rgds
Caleb
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Sun Nov 26 23:07:02 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 27 Nov 2017 14:07:02 +1000
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To: 
References: 
Message-ID: 

On 27 November 2017 at 06:22, Eric V. Smith wrote:
> On 11/26/2017 1:04 PM, Brett Cannon wrote:
>>
>> The only open issues I know of are:
>> - Should object comparison require an exact match on the type?
>> https://github.com/ericvsmith/dataclasses/issues/51
>>
>> I say don't require the type comparison for duck typing purposes.
>
> The problem with that is that you end up with cases like this, which I
> don't think we want:
>
> @dataclass
> class Point:
>     x: int
>     y: int
>
> @dataclass
> class Point3d:
>     x: int
>     y: int
>     z: int
>
> assert Point(1, 2) == Point3d(1, 2, 3)

Perhaps the check could be:

    (type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all
    (individual fields match)

That way the exact type check would be an optimisation to speed up the
common case, while the formal semantic constraint would be that the field
definitions have to match (including their names and order).

Cheers,
Nick.
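[Editor's sketch of the check Nick describes, written against the fields()
and astuple() helpers from the PEP 557 reference implementation;
relaxed_eq is an invented name, not something the PEP actually generates.]

    from dataclasses import fields, astuple

    def relaxed_eq(lhs, rhs):
        # Fast path: exact type match, as in the current PEP draft.
        if type(lhs) is type(rhs):
            return astuple(lhs) == astuple(rhs)
        # Fallback: field definitions must match, names and order included.
        if [f.name for f in fields(lhs)] == [f.name for f in fields(rhs)]:
            return astuple(lhs) == astuple(rhs)
        return NotImplemented

    # Point(1, 2) == Point3d(1, 2, 3) stays False (the field lists differ),
    # but two classes with identical field names would compare by value.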
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Sun Nov 26 23:53:44 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sun, 26 Nov 2017 23:53:44 -0500 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: On Sun, Nov 26, 2017 at 11:23 PM, Caleb Hattingh wrote: [..] > I'd be very grateful if anyone can point out if my understanding of the > above is incorrect. Private email is fine if you prefer not to post to the > list. It is correct. While 'yield from coro()', where 'coro()' is an 'async def' coroutine would make sense in some contexts, it would require coroutines to implement the iteration protocol. That would mean that you could write 'for x in coro()', which is meaningless for coroutines in all contexts. Therefore, coroutines do not implement the iterator protocol. Yury From greg.ewing at canterbury.ac.nz Mon Nov 27 00:04:45 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 27 Nov 2017 18:04:45 +1300 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: <5A1B9CED.60708@canterbury.ac.nz> Nick Coghlan wrote: > Perhaps the check could be: > > (type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all > (individual fields match) I think the types should *always* have to match, or at least one should be a subclass of the other. Consider: @dataclass class Point3d: x: float y: float z: float @dataclass class Vector3d: x: float y: float z: float Points and vectors are different things, and they should never compare equal, even if they have the same field names and values. -- Greg From caleb.hattingh at gmail.com Mon Nov 27 00:33:51 2017 From: caleb.hattingh at gmail.com (Caleb Hattingh) Date: Mon, 27 Nov 2017 15:33:51 +1000 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: On 27 November 2017 at 14:53, Yury Selivanov wrote: > It is correct. While 'yield from coro()', where 'coro()' is an 'async > def' coroutine would make sense in some contexts, it would require > coroutines to implement the iteration protocol. That would mean that > you could write 'for x in coro()', which is meaningless for coroutines > in all contexts. Therefore, coroutines do not implement the iterator > protocol. The two worlds (iterating vs awaiting) collide in an interesting way when one plays with custom Awaitables. >From your PEP, an awaitable is either a coroutine, or an object implementing __await__, *and* that __await__ returns an iterator. The PEP only says that __await__ must return an iterator, but it turns out that it's also required that that iterator should not return any intermediate values. This requirement is only enforced in the event loop, not in the `await` call itself. I was surprised by that: >>> class A: ... def __await__(self): ... for i in range(3): ... yield i # <--- breaking the rules, returning a value ... return 123 >>> async def cf(): ... x = await A() ... return x >>> c = cf() >>> c.send(None) 0 >>> c.send(None) 1 >>> c.send(None) 2 >>> c.send(None) Traceback (most recent call last): File "", line 1, in StopIteration: 123 123 So we drive the coroutine manually using send(), and we see that intermediate calls return the illegally-yielded values. 
I broke the rules because my __await__ iterator is returning values (via `yield i`) on each iteration, and that isn't allowed because the event loop wouldn't know what to do with these intermediate values; it only knows that "awaiting" is finished when a value is returned via StopIteration. However, you only find out that it isn't allowed if you use the loop to run the coroutine function: >>> import asyncio >>> loop = asyncio.get_event_loop() >>> loop.run_until_complete(f()) Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python3.6/asyncio/base_events.py", line 467, in run_until_complete return future.result() File "", line 2, in f File "", line 4, in __await__ RuntimeError: Task got bad yield: 0 Task got bad yield: 0 I found this quite confusing when I first came across it, before I understood how asyncio/async/await was put together. The __await__ method implementation must return an iterator that specifically doesn't return any intermediate values. This should probably be explained in the docs. I'm happy to help with any documentation improvements if help is desired. rgds Caleb -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Nov 27 01:04:08 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 27 Nov 2017 16:04:08 +1000 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <5A1B9CED.60708@canterbury.ac.nz> References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: On 27 November 2017 at 15:04, Greg Ewing wrote: > Nick Coghlan wrote: >> >> Perhaps the check could be: >> >> (type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all >> (individual fields match) > > > I think the types should *always* have to match, or at least > one should be a subclass of the other. Consider: > > @dataclass > class Point3d: > x: float > y: float > z: float > > @dataclass > class Vector3d: > x: float > y: float > z: float > > Points and vectors are different things, and they should never > compare equal, even if they have the same field names and values. And I guess if folks actually want more permissive structure-based matching, that's one of the features that collections.namedtuple offers that data classes don't. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Mon Nov 27 03:41:55 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 27 Nov 2017 00:41:55 -0800 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: On Sun, Nov 26, 2017 at 9:33 PM, Caleb Hattingh wrote: > The PEP only says that __await__ must return an iterator, but it turns out > that it's also required that that iterator > should not return any intermediate values. I think you're confused :-). When the iterator yields an intermediate value, it does two things: (1) it suspends the current call stack and returns control to the coroutine runner (i.e. the event loop) (2) it sends some arbitrary value back to the coroutine runner The whole point of `await` is that it can do (1) -- this is what lets you switch between executing different tasks, so they can pretend to execute in parallel. However, you do need to make sure that your __await__ and your coroutine runner are on the same page with respect to (2) -- if you send a value that the coroutine runner isn't expecting, it'll get confused. 
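[Editor's sketch of such a convention: a toy runner, loosely curio-style,
whose protocol is ('sleep', seconds) request tuples sent back by the leaf
awaitable. All names are invented; this is not asyncio's or curio's actual
machinery.]

    import time
    import types

    @types.coroutine
    def sleep(seconds):
        # Leaf awaitable: send a request tuple back to the runner.
        yield ('sleep', seconds)

    async def main():
        await sleep(0.1)
        return 42

    def run(coro):
        # The runner's side of the convention: interpret each request.
        try:
            request = coro.send(None)
            while True:
                kind, arg = request
                if kind != 'sleep':
                    raise RuntimeError('bad yield: %r' % (request,))
                time.sleep(arg)    # a real loop would schedule, not block
                request = coro.send(None)
        except StopIteration as exc:
            return exc.value

    print(run(main()))   # 42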
Generally async libraries control both the coroutine runner and the __await__ method, so they get to invent whatever arbitrary convention they want. In asyncio, the convention is that the values you send back must be Future objects, and the coroutine runner interprets this as a request to wait for the Future to be resolved, and then resume the current call stack. In curio, the convention is that you send back a special tuple describing some operation you want the event loop to perform [1], and then it resumes your call stack once that operation has finished. And Trio barely uses this channel at all. (It does transfer a bit of information that way for convenience/speed, but the main work of setting up the task to be resumed at the appropriate time happens through other mechanisms.) What you observed is that the asyncio coroutine runner gets cranky if you send it an integer when it was expecting a Future. Since most libraries assume that they control both __await__ and the coroutine runner, they don't tend to give great error messages here (though trio does [2] ;-)). I think this is also why the asyncio docs don't talk about this. I guess in asyncio's case it is technically a semi-public API because you need to know how it works if you're the author of a library like tornado or twisted that wants to integrate with asyncio. But most people aren't the authors of tornado or twisted, and the ones who are already know how this works, so the lack of docs isn't a huge deal in practice... -n [1] https://github.com/dabeaz/curio/blob/bd0e2cb7741278d1d9288780127dc0807b1aa5b1/curio/traps.py#L48-L156 [2] https://github.com/python-trio/trio/blob/2b8e297e544088b98ff758d37c7ad84f74c3f2f5/trio/_core/_run.py#L1521-L1530 -- Nathaniel J. Smith -- https://vorpus.org From solipsis at pitrou.net Mon Nov 27 04:12:54 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 27 Nov 2017 10:12:54 +0100 Subject: [Python-Dev] Using async/await in place of yield expression References: Message-ID: <20171127101254.67cde716@fsol> On Mon, 27 Nov 2017 00:41:55 -0800 Nathaniel Smith wrote: > > Since most libraries assume that they control both __await__ and the > coroutine runner, they don't tend to give great error messages here > (though trio does [2] ;-)). I think this is also why the asyncio docs > don't talk about this. I guess in asyncio's case it is technically a > semi-public API because you need to know how it works if you're the > author of a library like tornado or twisted that wants to integrate > with asyncio. But most people aren't the authors of tornado or > twisted, and the ones who are already know how this works, so the lack > of docs isn't a huge deal in practice... This does seem to mean that it can be difficult to provide a __await__ method that works with different coroutine runners, though. For example, Tornado Futures implement __await__ for compatibility with the asyncio event loop. But what if Tornado wants to make its Future class compatible with an event loop that requires a different __await__ convention? Regards Antoine. 
From pmiscml at gmail.com Mon Nov 27 04:57:11 2017 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Mon, 27 Nov 2017 11:57:11 +0200 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: <20171127115711.240e5fe0@x230> Hello, On Mon, 27 Nov 2017 15:33:51 +1000 Caleb Hattingh wrote: [] > The PEP only says that __await__ must return an iterator, but it > turns out that it's also required that that iterator > should not return any intermediate values. This requirement is only > enforced in the event loop, not > in the `await` call itself. I was surprised by that: [] > So we drive the coroutine manually using send(), and we see that > intermediate calls return the illegally-yielded values. I broke the You apparently mix up the language and a particular asynchronous scheduling library (even if that library ships with the language). There're gazillion of async scheduling libraries for Python, and at least some of them welcome use of "yield" in coroutines (even if old-style). Moreover, you can always use yield in your own generators, over which you iterate yourself, all running in the coroutine async scheduler. When you do this, you will need to check type of values you get from iteration - if those are "yours", you consume them, if they're not yours, you re-yield them for higher levels to consume (ultimately, for the scheduler itself). -- Best regards, Paul mailto:pmiscml at gmail.com From eric at trueblade.com Mon Nov 27 05:56:29 2017 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 27 Nov 2017 05:56:29 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: On 11/27/2017 1:04 AM, Nick Coghlan wrote: > On 27 November 2017 at 15:04, Greg Ewing wrote: >> Nick Coghlan wrote: >>> >>> Perhaps the check could be: >>> >>> (type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all >>> (individual fields match) >> >> >> I think the types should *always* have to match, or at least >> one should be a subclass of the other. Consider: >> >> @dataclass >> class Point3d: >> x: float >> y: float >> z: float >> >> @dataclass >> class Vector3d: >> x: float >> y: float >> z: float >> >> Points and vectors are different things, and they should never >> compare equal, even if they have the same field names and values. > > And I guess if folks actually want more permissive structure-based > matching, that's one of the features that collections.namedtuple > offers that data classes don't. And in this case you could also do: astuple(point) == astuple(vector) Eric. From caleb.hattingh at gmail.com Mon Nov 27 06:08:51 2017 From: caleb.hattingh at gmail.com (Caleb Hattingh) Date: Mon, 27 Nov 2017 21:08:51 +1000 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: On 27 November 2017 at 18:41, Nathaniel Smith wrote: > > In asyncio, the convention is that the values you send back must be > Future objects, Thanks this is useful. I didn't pick this up from the various PEPs or documentation. I guess I need to go through the src :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From srittau at rittau.biz Mon Nov 27 06:01:52 2017 From: srittau at rittau.biz (Sebastian Rittau) Date: Mon, 27 Nov 2017 12:01:52 +0100 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: On 25.11.2017 22:06, Eric V. 
Smith wrote:
> The updated version should show up at
> https://www.python.org/dev/peps/pep-0557/ shortly.

This PEP looks very promising and will make my life quite a bit easier,
since we are using a pattern involving data classes. Currently, we write
the constructor by hand.

> The major changes from the previous version are:
>
> - Add InitVar to specify initialize-only fields.

This is the only feature that does not sit right with me. It looks very
obscure and "hacky". From what I understand, we are supposed to use the
field syntax to define constructor arguments. I'd argue that the name
"initialize-only fields" is a misnomer, which only hides the fact that
this has nothing to do with fields at all. Couldn't dataclasses just pass
*args and **kwargs to __post_init__()? Type checkers need to be
special-cased for InitVar anyway, couldn't they instead be special cased
to look at __post_init__ argument types?

 - Sebastian

From eric at trueblade.com  Mon Nov 27 07:23:48 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Mon, 27 Nov 2017 07:23:48 -0500
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To: 
References: 
Message-ID: <8ed8433f-be48-fbd3-cffa-f525a4efe91f@trueblade.com>

On 11/27/2017 6:01 AM, Sebastian Rittau wrote:
> On 25.11.2017 22:06, Eric V. Smith wrote:
>> The major changes from the previous version are:
>>
>> - Add InitVar to specify initialize-only fields.
>
> This is the only feature that does not sit right with me. It looks very
> obscure and "hacky". From what I understand, we are supposed to use the
> field syntax to define constructor arguments. I'd argue that the name
> "initialize-only fields" is a misnomer, which only hides the fact that
> this has nothing to do with fields at all. Couldn't dataclasses just
> pass *args and **kwargs to __post_init__()? Type checkers need to be
> special-cased for InitVar anyway, couldn't they instead be special cased
> to look at __post_init__ argument types?

First off, I expect this feature to be used extremely rarely. I'm tempted
to remove it, since it's infrequently needed and it could be added later.
And as the PEP points out, you can get most of the way with an alternate
classmethod constructor.

I had something like your suggestion half coded up, except I inspected
the args to __post_init__() and added them to __init__, avoiding the
API-unfriendly *args and **kwargs. So in:

@dataclass
class C:
    x: int
    y: int
    def __post_init__(self, database: DatabaseType):
        pass

Then the __init__ signature became:

def __init__(self, x:int, y:int, database:DatabaseType):

In the end, that seems like a lot of magic (but what about this isn't?),
it required the inspect module to be imported, and I thought it made more
sense for all of the init params to be near each other:

@dataclass
class C:
    x: int
    y: int
    database: InitVar[DatabaseType]
    def __post_init__(self, database):
        pass

No matter what we do here, static type checkers are going to have to be
aware of either the InitVars or the hoisting of params from __post_init__
to __init__.

One other thing about InitVar: it lets you control where the init-only
parameter goes in the __init__ call. This is especially important with
default values:

@dataclass
class C:
    x: int
    database: InitVar[DatabaseType]
    y: int = 0
    def __post_init__(self, database):
        pass

In this case, if I were hoisting params from __post_init__ to __init__,
the __init__ call would be:

def __init__(self, x, y=0, database)

Which is an error.
I guess you could say the init-only parameters would go first in the __init__ definition, but then you have the same problem if any of them have default values.

Eric.

From srittau at rittau.biz Mon Nov 27 07:26:48 2017
From: srittau at rittau.biz (Sebastian Rittau)
Date: Mon, 27 Nov 2017 13:26:48 +0100
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References:
Message-ID:

On 27.11.2017 12:01, Sebastian Rittau wrote:
>
>> The major changes from the previous version are:
>>
>> - Add InitVar to specify initialize-only fields.
>
> This is the only feature that does not sit right with me. It looks
> very obscure and "hacky". From what I understand, we are supposed to
> use the field syntax to define constructor arguments. I'd argue that
> the name "initialize-only fields" is a misnomer, which only hides the
> fact that this has nothing to do with fields at all. Couldn't
> dataclasses just pass *args and **kwargs to __post_init__()? Type
> checkers need to be special-cased for InitVar anyway; couldn't they
> instead be special-cased to look at __post_init__ argument types?

I am sorry for the double post, but I thought a bit more about why this does not sit right with me:

* As written above, InitVars look like fields, but aren't.
* InitVar goes against the established way to pass through arguments, *args and **kwargs. While type checking those is an unsolved problem, from what I understand, I don't think we should introduce a second way just for dataclasses.
* InitVars look like a way to satisfy the type checker without providing any benefit to the programmer. Even when I'm not interested in type checking, I have to declare init vars.
* InitVars force me to repeat myself. I have the InitVar declaration and then I have to repeat myself in the signature of __post_init__(). This has all the usual problems of repeated code.

I hope I did not misunderstand the purpose of InitVar.

- Sebastian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From srittau at rittau.biz Mon Nov 27 07:31:26 2017
From: srittau at rittau.biz (Sebastian Rittau)
Date: Mon, 27 Nov 2017 13:31:26 +0100
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To: <8ed8433f-be48-fbd3-cffa-f525a4efe91f@trueblade.com>
References: <8ed8433f-be48-fbd3-cffa-f525a4efe91f@trueblade.com>
Message-ID: <36ca9302-85e5-2a6a-b7b4-12c06e36d924@rittau.biz>

On 27.11.2017 13:23, Eric V. Smith wrote:
> I had something like your suggestion half coded up, except I inspected
> the args to __post_init__() and added them to __init__, avoiding the
> API-unfriendly *args and **kwargs.

I understand your concerns with *args and **kwargs. I think we need to find a solution for that eventually.

> One other thing about InitVar: it lets you control where the init-only
> parameter goes in the __init__ call. This is especially important with
> default values:

This is indeed a nice property. I was thinking about that myself and how to best handle it. One use case that could occur in our codebase is passing in a "context" argument. By convention, this is always the first argument to the constructor, so it would be nice if this would also work for dataclasses.

- Sebastian

From eric at trueblade.com Mon Nov 27 07:42:07 2017
From: eric at trueblade.com (Eric V.
Smith) Date: Mon, 27 Nov 2017 07:42:07 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <36ca9302-85e5-2a6a-b7b4-12c06e36d924@rittau.biz> References: <8ed8433f-be48-fbd3-cffa-f525a4efe91f@trueblade.com> <36ca9302-85e5-2a6a-b7b4-12c06e36d924@rittau.biz> Message-ID: On 11/27/2017 7:31 AM, Sebastian Rittau wrote: > On 27.11.2017 13:23, Eric V. Smith wrote: >> I had something like your suggestion half coded up, except I inspected >> the args to __post_init__() and added them to __init__, avoiding the >> API-unfriendly *args and **kwargs. > I understand your concerns with *args and **kwargs. I think we need to > find a solution for that eventually. > >> One other thing about InitVar: it lets you control where the init-only >> parameter goes in the __init__ call. This is especially important with >> default values: > > This is indeed a nice property. I was thinking about that myself and how > to best handle it. One use case that could occur in out codebase is > passing in a "context" argument. By convention, this is always the first > argument to the constructor, so it would be nice if this would also work > for dataclasses. And that's the one thing that you can't do with an alternate classmethod constructor, and is the reason I added InitVar: you can't force a non-field parameter such as a context (or in my example, a database) to be always present when instances are constructed. And also consider the "replace()" module method. InitVars must also be supplied there, whereas with a classmethod constructor, they wouldn't be. This is for the case where a context or database is needed to construct the instance, but isn't stored as a field on the instance. Again, not super-common, but it does happen. My point here is not that InitVar is better than __post_init__ parameter hoisting for this specific need, but that both of them provide something that classmethod constructors do not. I'll add some wording on this to the PEP. Eric. From eric at trueblade.com Mon Nov 27 08:02:11 2017 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 27 Nov 2017 08:02:11 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: Message-ID: On 11/27/2017 7:26 AM, Sebastian Rittau wrote: > On 27.11.2017 12:01, Sebastian Rittau wrote: >> >>> The major changes from the previous version are: >>> >>> - Add InitVar to specify initialize-only fields. >> >> This is the only feature that does not sit right with me. It looks >> very obscure and "hacky". From what I understand, we are supposed to >> use the field syntax to define constructor arguments. I'd argue that >> the name "initialize-only fields" is a misnomer, which only hides the >> fact that this has nothing to do with fields at all. Couldn't >> dataclassses just pass *args and **kwargs to __post_init__()? Type >> checkers need to be special-cases for InitVar anyway, couldn't they >> instead be special cased to look at __post_init__ argument types? > I am sorry for the double post, but I thought a bit more about why this > does not right with me: > > * As written above, InitVars look like fields, but aren't. Same as with ClassVars, which is where the inspiration came from. > * InitVar goes against the established way to pass through arguments, > *args and **kwargs. While type checking those is an unsolved > problem, from what I understand, I don't think we should introduce a > second way just for dataclasses. 
> * InitVars look like a way to satisfy the type checker without > providing any benefit to the programmer. Even when I'm not > interested in type checking, I have to declare init vars. Same as with ClassVars, if you're using them. And that's not just a dataclasses thing, although dataclasses is the first place I know of where it would change the code semantics. > * InitVars force me to repeat myself. I have the InitVar declaration > and then I have the repeat myself in the signature of > __post_init__(). This has all the usual problems of repeated code. There was some discussion about this starting at https://github.com/ericvsmith/dataclasses/issues/17#issuecomment-345529717, in particular a few messages down where we discussed what would be repeated, and what mypy would be able to deduce. You won't need to repeat the type declaration. > I hope I did not misunderstood the purpose of InitVar. I think you understand it perfectly well, especially with the "context" discussion. Thanks for bringing it up. Eric. From greg.ewing at canterbury.ac.nz Mon Nov 27 08:14:28 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 28 Nov 2017 02:14:28 +1300 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: <5A1C0FB4.2040007@canterbury.ac.nz> Chris Angelico wrote: > I'm not sure there's any distinction between a "point" and a "vector > from the origin to a point". They transform differently. For example, translation affects a point, but makes no difference to a vector. There are two ways of dealing with that. One is to use vectors to represent both and have two different operations, "transform point" and "transform vector". The other is to represent them using different types and have one operation that does different things depending on the type. The advantage of the latter is that you can't accidentally apply the wrong operation, e.g. transform_point on something that's actually a vector. (There's actually a third way -- use homogeneous coordinates and represent points as (x, y, z, 1) and vectors as (x, y, z, 0). But that's really a variation on the "different types" idea.) -- Greg From eric at trueblade.com Mon Nov 27 11:00:27 2017 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 27 Nov 2017 11:00:27 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: On 11/27/17 10:51 AM, Guido van Rossum wrote: > Following up on this subthread (inline below). > > On Mon, Nov 27, 2017 at 2:56 AM, Eric V. Smith > wrote: > > On 11/27/2017 1:04 AM, Nick Coghlan wrote: > > On 27 November 2017 at 15:04, Greg Ewing > > wrote: > > Nick Coghlan wrote: > > > Perhaps the check could be: > > (type(lhs) == type(rhs) or fields(lhs) == > fields(rhs)) and all > (individual fields match) > > > > I think the types should *always* have to match, or at least > one should be a subclass of the other. Consider: > > @dataclass > class Point3d: > x: float > y: float > z: float > > @dataclass > class Vector3d: > x: float > y: float > z: float > > Points and vectors are different things, and they should never > compare equal, even if they have the same field names and > values. > > > And I guess if folks actually want more permissive structure-based > matching, that's one of the features that collections.namedtuple > offers that data classes don't. 
> > And in this case you could also do:
> astuple(point) == astuple(vector)
>
> Didn't we at one point have something like
>
> isinstance(other, self.__class__) and fields(other) == fields(self) and
> <individual fields match>
>
> (plus some optimization if the types are identical)?
>
> That feels ideal, because it means you can subclass Point just to add
> some methods and it will stay comparable, but if you add fields it will
> always be unequal.

I don't think we had that before, but it sounds right to me. I think it could be:

isinstance(other, self.__class__) and len(fields(other)) == len(fields(self)) and <individual fields match>

Since by definition if you're a subclass you'll start with all of the same fields. So if the len's match, you won't have added any new fields. That should be sufficiently cheap.

Then the optimized version would be:

(self.__class__ is other.__class__) or (isinstance(other, self.__class__) and len(fields(other)) == len(fields(self))) and <individual fields match>

I'd probably further optimize len(fields(obj)), but that's the general idea.

Eric.

From guido at python.org Mon Nov 27 10:51:58 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 27 Nov 2017 07:51:58 -0800
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References: <5A1B9CED.60708@canterbury.ac.nz>
Message-ID:

Following up on this subthread (inline below).

On Mon, Nov 27, 2017 at 2:56 AM, Eric V. Smith wrote:

> On 11/27/2017 1:04 AM, Nick Coghlan wrote:
>
>> On 27 November 2017 at 15:04, Greg Ewing
>> wrote:
>>
>>> Nick Coghlan wrote:
>>>
>>>>
>>>> Perhaps the check could be:
>>>>
>>>> (type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all
>>>> (individual fields match)
>>>>
>>>
>>>
>>> I think the types should *always* have to match, or at least
>>> one should be a subclass of the other. Consider:
>>>
>>> @dataclass
>>> class Point3d:
>>> x: float
>>> y: float
>>> z: float
>>>
>>> @dataclass
>>> class Vector3d:
>>> x: float
>>> y: float
>>> z: float
>>>
>>> Points and vectors are different things, and they should never
>>> compare equal, even if they have the same field names and values.
>>>
>>
>> And I guess if folks actually want more permissive structure-based
>> matching, that's one of the features that collections.namedtuple
>> offers that data classes don't.
>>
>
> And in this case you could also do:
> astuple(point) == astuple(vector)
>

Didn't we at one point have something like

isinstance(other, self.__class__) and fields(other) == fields(self) and <individual fields match>

(plus some optimization if the types are identical)?

That feels ideal, because it means you can subclass Point just to add some methods and it will stay comparable, but if you add fields it will always be unequal.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Mon Nov 27 11:15:42 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 27 Nov 2017 08:15:42 -0800
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References: <5A1B9CED.60708@canterbury.ac.nz>
Message-ID:

Sounds good.

On Nov 27, 2017 8:00 AM, "Eric V. Smith" wrote:

> On 11/27/17 10:51 AM, Guido van Rossum wrote:
>
>> Following up on this subthread (inline below).
>>
>> On Mon, Nov 27, 2017 at 2:56 AM, Eric V.
Smith > > wrote:
>>
>> On 11/27/2017 1:04 AM, Nick Coghlan wrote:
>>
>> On 27 November 2017 at 15:04, Greg Ewing
>> wrote:
>>
>> Nick Coghlan wrote:
>>
>>
>> Perhaps the check could be:
>>
>> (type(lhs) == type(rhs) or fields(lhs) ==
>> fields(rhs)) and all
>> (individual fields match)
>>
>>
>>
>> I think the types should *always* have to match, or at least
>> one should be a subclass of the other. Consider:
>>
>> @dataclass
>> class Point3d:
>> x: float
>> y: float
>> z: float
>>
>> @dataclass
>> class Vector3d:
>> x: float
>> y: float
>> z: float
>>
>> Points and vectors are different things, and they should never
>> compare equal, even if they have the same field names and
>> values.
>>
>>
>> And I guess if folks actually want more permissive structure-based
>> matching, that's one of the features that collections.namedtuple
>> offers that data classes don't.
>>
>>
>> And in this case you could also do:
>> astuple(point) == astuple(vector)
>>
>>
>> Didn't we at one point have something like
>>
>> isinstance(other, self.__class__) and fields(other) == fields(self) and
>> <individual fields match>
>>
>> (plus some optimization if the types are identical)?
>>
>> That feels ideal, because it means you can subclass Point just to add
>> some methods and it will stay comparable, but if you add fields it will
>> always be unequal.
>>
>
> I don't think we had that before, but it sounds right to me. I think it
> could be:
>
> isinstance(other, self.__class__) and len(fields(other)) ==
> len(fields(self)) and <individual fields match>
>
> Since by definition if you're a subclass you'll start with all of the same
> fields. So if the len's match, you won't have added any new fields. That
> should be sufficiently cheap.
>
> Then the optimized version would be:
>
> (self.__class__ is other.__class__) or (isinstance(other, self.__class__)
> and len(fields(other)) == len(fields(self))) and <individual fields
> match>
>
> I'd probably further optimize len(fields(obj)), but that's the general
> idea.
>
> Eric.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From k7hoven at gmail.com Mon Nov 27 11:35:38 2017
From: k7hoven at gmail.com (Koos Zevenhoven)
Date: Mon, 27 Nov 2017 18:35:38 +0200
Subject: [Python-Dev] generator vs iterator etc. (was: How assignment should work with generators?)
Message-ID:

On Mon, Nov 27, 2017 at 3:55 PM, Steven D'Aprano wrote:

> On Mon, Nov 27, 2017 at 12:17:31PM +0300, Kirill Balunov wrote:
>
> > 2. Should this work only for generators or for any iterators?
>
> I don't understand why you are even considering singling out *only*
> generators. A generator is a particular implementation of an iterator. I
> can write:
>
> def gen():
> yield 1; yield 2; yield 3
>
> it = gen()
>
> or I can write:
>
> it = iter([1, 2, 3])
>
> and the behaviour of `it` should be identical.

I can see where this is coming from. The thing is that "iterator" and "generator" are mostly synonymous, except two things:

(1) Generators are iterators that are produced by a generator function
(2) Generator functions are sometimes referred to as just "generators"

The concept of "generator" thus overlaps with both "iterator" and "generator function". Then there's also "iterator" and "iterable", which are two different things:

(3) If `obj` is an *iterable*, then `it = iter(obj)` is an *iterator* (over the contents of `obj`)
(4) Iterators yield values, for example on explicit calls to next(it).
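A minimal sketch of distinctions (1)-(4) above, with arbitrary names:

def gen():           # a generator function ...
    yield 1
    yield 2

g = gen()            # ... which produces a generator (itself an iterator)
lst = [1, 2]         # a list is an iterable, but not an iterator
it = iter(lst)       # iter() gives an iterator over the iterable's contents
assert next(g) == next(it) == 1   # iterators yield values on next() calls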
Personally I have leaned towards keeping a clear distinction between "generator function" and "generator", which leads to the situation that "generator" and "iterator" are mostly synonymous for me. Sometimes, for convenience, I use the term "generator" to refer to "iterators" more generally. This further seems to have a minor benefit that "generators" and "iterables" are less easily confused with each other than "iterators" and "iterables".

I thought about this issue some time ago for the `views` package, which has a separation between sequences (seq) and other iterables (gen):

https://github.com/k7hoven/views

The functionality provided by `views.gen` is not that interesting -- it's essentially a subset of itertools functionality, but with an API that parallels `views.seq`, which works with sequences (iterable, sliceable, chainable, etc.). I used the name `gen`, because iterator/iterable variants of the functionality can be implemented with generator functions (although also with other kinds of iterators/iterables). Calling the thing `iter` would have conflicted with the builtin `iter`.

HOWEVER, this naming can be confusing for those that lean more towards using "generator" to also mean "generator function", and for those that are comfortable with the term "iterator" despite its resemblance to "iterable". Now I'm seriously considering renaming `views.gen` to `views.iter` when I have time. After all, there's already `views.range`, which "conflicts" with the builtin range.

Anyway, the point is that the naming is suboptimal.

SOLUTION: Maybe (a) all iterators should be called iterators, or (b) all iterators should be called generators, regardless of whether they are somehow a result of a generator function having been called in the past. (I'm not going into the distinction between things that can receive values via `send` or any other possible distinctions between different types of iterators and iterables.)

-- Koos

(discussion originated from python-ideas, but cross-posted to python-dev in case there's more interest there)

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Mon Nov 27 12:40:06 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 27 Nov 2017 09:40:06 -0800
Subject: [Python-Dev] Using async/await in place of yield expression
In-Reply-To: <20171127101254.67cde716@fsol>
References: <20171127101254.67cde716@fsol>
Message-ID:

On Mon, Nov 27, 2017 at 1:12 AM, Antoine Pitrou wrote:

> On Mon, 27 Nov 2017 00:41:55 -0800
> Nathaniel Smith wrote:
> >
> > Since most libraries assume that they control both __await__ and the
> > coroutine runner, they don't tend to give great error messages here
> > (though trio does [2] ;-)). I think this is also why the asyncio docs
> > don't talk about this. I guess in asyncio's case it is technically a
> > semi-public API because you need to know how it works if you're the
> > author of a library like tornado or twisted that wants to integrate
> > with asyncio. But most people aren't the authors of tornado or
> > twisted, and the ones who are already know how this works, so the lack
> > of docs isn't a huge deal in practice...
>
> This does seem to mean that it can be difficult to provide a __await__
> method that works with different coroutine runners, though. For
> example, Tornado Futures implement __await__ for compatibility with the
> asyncio event loop.
But what if Tornado wants to make its Future class > compatible with an event loop that requires a different __await__ > convention? > Someone would have to write a PEP proposing a standard interoperability API for event loops. There is already such a PEP (PEP 3156, which standardized the asyncio event loop, including the interop API) but curio and trio intentionally set out to invent their own conventions. At least asyncio has an API that allows overriding the factory for Futures, so if someone comes up with a Future that is interoperable between asyncio and curio, for example, it might be possible. But likely curio would have to be modified somewhat too. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Nov 27 12:53:10 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 27 Nov 2017 09:53:10 -0800 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: On Sun, Nov 26, 2017 at 7:43 PM, Chris Angelico wrote: > Honestly, this is one of Python's biggest problems when it comes to > async functions. I don't know the answer to that question, and I don't > know where in the docs I'd go looking for it. In JavaScript, async > functions are built on top of promises, so you can just say "well, you > return a promise, tada". But in Python, this isn't well documented. > Snooping the source code for asyncio.sleep() shows that it uses > @coroutine and yield, and I have no idea what magic @coroutine does, > nor how you'd use it without yield. > The source for sleep() isn't very helpful -- e.g. @coroutine is mostly a backwards compatibility thing. The heart of it is that it creates a Future and schedules a callback at a later time to complete that Future and then awaits it -- this gives control back to the scheduler and when the callback has made the Future complete, the coroutine will (eventually) be resumed. (JS Promises are equivalent to Futures, but the event loop in JS is more built in so things feel more natural there.) What we need here is not just documentation of how it works, but a good tutorial showing a pattern for writing ad-hoc event loops using a simple Future class. A good example would be some kind of parser (similar to Nathaniel's websockets example). I wish I had the time to write this example -- I have some interest in parsers that work this way (in fact I think most parsers can and probably should be written this way). But I've got a huge list of things to do already... :-( -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Nov 27 12:56:42 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 27 Nov 2017 09:56:42 -0800 Subject: [Python-Dev] Tricky way of of creating a generator via a comprehension expression In-Reply-To: References: <5A1682B9.6010908@canterbury.ac.nz> <20171123124928.304986e9@fsol> Message-ID: I need to cut this debate short (too much to do already) but I'd like to press that I wish async/await to be available for general tinkering (like writing elegant parsers), not just for full fledged event loops. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From larry at hastings.org Mon Nov 27 12:05:57 2017
From: larry at hastings.org (Larry Hastings)
Date: Mon, 27 Nov 2017 09:05:57 -0800
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
Message-ID: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>

First, a thirty-second refresher, so we're all using the same terminology:

A *parameter* is a declared input variable to a function.
An *argument* is a value passed into a function. (*Arguments* are stored in *parameters.*)

So in the example "def foo(clonk): pass; foo(3)", clonk is a parameter, and 3 is an argument. ++

Keyword-only arguments were conceived of as being unordered. They're stored in a dictionary--by convention called **kwargs--and dictionaries didn't preserve order. But knowing the order of arguments is occasionally very useful. PEP 468 proposed that Python preserve the order of keyword-only arguments in kwargs. This became easy with the order-preserving dictionaries added to Python 3.6. I don't recall the order of events, but in the end PEP 468 was accepted, and as of 3.6 Python guarantees order in **kwargs.

But that's arguments. What about parameters?

Although this isn't as directly impactful, the order of keyword-only parameters *is* visible to the programmer. The best way to see a function's parameters is with inspect.signature, although there's also the deprecated inspect.getfullargspec; in CPython you can also directly examine fn.__code__.co_varnames. Two of these methods present their data in a way that preserves order for all parameters, including keyword-only parameters--and the third one is deprecated.

Python must (and does) guarantee the order of positional and positional-or-keyword parameters, because it uses position to map arguments to parameters when the function is called. But conceptually this isn't necessary for keyword-only parameters because their position is irrelevant. I only see one place in the language & library that addresses the ordering of keyword-only parameters, by way of omission. The PEP for inspect.signature (PEP 362) says that when comparing two signatures for equality, their positional and positional-or-keyword parameters must be in the same order. It makes a point of *not* requiring that the two functions' keyword-only parameters be in the same order.

For every currently supported version of Python 3, inspect.signature and fn.__code__.co_varnames preserve the order of keyword-only parameters. This isn't surprising; it's basically the same code path implementing those as the two types of positional-relevant parameters, so the most straightforward implementation would naturally preserve their order. It's just not guaranteed.

I'd like inspect.signature to guarantee that the order of keyword-only parameters always matches the order they were declared in. Technically this isn't a language feature, it's a library feature. But making this guarantee would require that CPython internally cooperate, so it's kind of a language feature too.

Does this sound reasonable? Would it need a PEP? I'm hoping for "yes" and "no", respectively.

Three final notes:

* Yes, I do have a use case. I'm using inspect.signature metadata to mechanically map arguments from an external domain (command-line arguments) to a Python function. Relying on the declaration order of keyword-only parameters would elegantly solve one small problem.
* I asked Armin Rigo about PyPy's support for Python 3.
He said it should already maintain the order of keyword-only parameters, and if I ever catch it not maintaining them in order I should file a bug. I assert that making this guarantee would be nearly zero effort for any Python implementation--I bet they all already behave this way, all they need is a test case and some documentation.
* One can extend this concept to functools.partial and inspect.Signature.bind: should its transformations of keyword-only parameters also maintain order in a consistent way? I suspect the answer there is much the same--there's an obvious way it should behave, it almost certainly already behaves that way, but it doesn't guarantee it. I don't think I need this for my use case.

//arry/

++ Yes, that means "Argument Clinic" should really have been called "Parameter Clinic". But the "Parameter Clinic" sketch is nowhere near as funny.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robertc at robertcollins.net Mon Nov 27 15:19:39 2017
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 28 Nov 2017 09:19:39 +1300
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
Message-ID:

Plus 1 from me. I'm not 100% sure the signature / inspect backport does this, but as you say, it should be trivial to do, to whatever extent the python version we're hosted on does it.

Rob

On 28 Nov. 2017 07:14, "Larry Hastings" wrote:

>
>
> First, a thirty-second refresher, so we're all using the same terminology:
>
> A *parameter* is a declared input variable to a function.
> An *argument* is a value passed into a function. (*Arguments* are stored
> in *parameters.*)
>
> So in the example "def foo(clonk): pass; foo(3)", clonk is a parameter,
> and 3 is an argument. ++
>
>
> Keyword-only arguments were conceived of as being unordered. They're
> stored in a dictionary--by convention called **kwargs--and dictionaries
> didn't preserve order. But knowing the order of arguments is occasionally
> very useful. PEP 468 proposed that Python preserve the order of
> keyword-only arguments in kwargs. This became easy with the
> order-preserving dictionaries added to Python 3.6. I don't recall the
> order of events, but in the end PEP 468 was accepted, and as of 3.6 Python
> guarantees order in **kwargs.
>
> But that's arguments. What about parameters?
>
> Although this isn't as directly impactful, the order of keyword-only
> parameters *is* visible to the programmer. The best way to see a
> function's parameters is with inspect.signature, although there's also the
> deprecated inspect.getfullargspec; in CPython you can also directly examine
> fn.__code__.co_varnames. Two of these methods present their data in a way
> that preserves order for all parameters, including keyword-only
> parameters--and the third one is deprecated.
>
> Python must (and does) guarantee the order of positional and
> positional-or-keyword parameters, because it uses position to map arguments
> to parameters when the function is called. But conceptually this isn't
> necessary for keyword-only parameters because their position is
> irrelevant. I only see one place in the language & library that addresses
> the ordering of keyword-only parameters, by way of omission.
The PEP for
> inspect.signature (PEP 362) says that when comparing two signatures for
> equality, their positional and positional-or-keyword parameters must be in
> the same order. It makes a point of *not* requiring that the two
> functions' keyword-only parameters be in the same order.
>
> For every currently supported version of Python 3, inspect.signature and
> fn.__code__.co_varnames preserve the order of keyword-only parameters.
> This isn't surprising; it's basically the same code path implementing those
> as the two types of positional-relevant parameters, so the most
> straightforward implementation would naturally preserve their order. It's
> just not guaranteed.
>
> I'd like inspect.signature to guarantee that the order of keyword-only
> parameters always matches the order they were declared in. Technically
> this isn't a language feature, it's a library feature. But making this
> guarantee would require that CPython internally cooperate, so it's kind of
> a language feature too.
>
> Does this sound reasonable? Would it need a PEP? I'm hoping for "yes"
> and "no", respectively.
>
>
> Three final notes:
>
> - Yes, I do have a use case. I'm using inspect.signature metadata to
> mechanically map arguments from an external domain (command-line arguments)
> to a Python function. Relying on the declaration order of keyword-only
> parameters would elegantly solve one small problem.
> - I asked Armin Rigo about PyPy's support for Python 3. He said it
> should already maintain the order of keyword-only parameters, and if I ever
> catch it not maintaining them in order I should file a bug. I assert that
> making this guarantee would be nearly zero effort for any Python
> implementation--I bet they all already behave this way, all they need is a
> test case and some documentation.
> - One can extend this concept to functools.partial and
> inspect.Signature.bind: should its transformations of keyword-only
> parameters also maintain order in a consistent way? I suspect the answer
> there is much the same--there's an obvious way it should behave, it almost
> certainly already behaves that way, but it doesn't guarantee it. I don't
> think I need this for my use case.
>
>
>
> */arry*
>
> ++ Yes, that means "Argument Clinic" should really have been called
> "Parameter Clinic". But the "Parameter Clinic" sketch is nowhere near as
> funny.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> robertc%40robertcollins.net
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From larry at hastings.org Mon Nov 27 16:38:11 2017
From: larry at hastings.org (Larry Hastings)
Date: Mon, 27 Nov 2017 13:38:11 -0800
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To:
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
Message-ID: <47b35436-2295-4fb2-cf60-7e8c772d7964@hastings.org>

On 11/27/2017 12:19 PM, Robert Collins wrote:
> Plus 1 from me. I'm not 100% sure the signature / inspect backport
> does this, but as you say, it should be trivial to do, to whatever
> extent the python version we're hosted on does it.

I'm not sure exactly what you mean when you say "signature / inspect backport". If you mean backporting inspect.signature to Python 2, this topic is irrelevant, as Python 2 doesn't have keyword-only parameters.
//arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Mon Nov 27 16:58:56 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 28 Nov 2017 10:58:56 +1300 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: References: Message-ID: <5A1C8AA0.4040600@canterbury.ac.nz> Guido van Rossum wrote: > The source for sleep() isn't very helpful -- e.g. @coroutine is mostly a > backwards compatibility thing. So how are you supposed to write that *without* using @coroutine? -- Greg From guido at python.org Mon Nov 27 17:05:07 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 27 Nov 2017 14:05:07 -0800 Subject: [Python-Dev] Using async/await in place of yield expression In-Reply-To: <5A1C8AA0.4040600@canterbury.ac.nz> References: <5A1C8AA0.4040600@canterbury.ac.nz> Message-ID: On Mon, Nov 27, 2017 at 1:58 PM, Greg Ewing wrote: > Guido van Rossum wrote: > >> The source for sleep() isn't very helpful -- e.g. @coroutine is mostly a >> backwards compatibility thing. >> > > So how are you supposed to write that *without* using @coroutine? > A simplified version using async def/await: async def sleep(delay): f = Future() get_event_loop().call_later(delay, f.set_result) await f -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Mon Nov 27 17:13:32 2017 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 27 Nov 2017 22:13:32 +0000 Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters? In-Reply-To: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org> References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org> Message-ID: On Mon, Nov 27, 2017 at 10:13 AM Larry Hastings wrote: > > > First, a thirty-second refresher, so we're all using the same terminology: > > A *parameter* is a declared input variable to a function. > An *argument* is a value passed into a function. (*Arguments* are stored > in *parameters.*) > > So in the example "def foo(clonk): pass; foo(3)", clonk is a parameter, > and 3 is an argument. ++ > > > Keyword-only arguments were conceived of as being unordered. They're > stored in a dictionary--by convention called **kwargs--and dictionaries > didn't preserve order. But knowing the order of arguments is occasionally > very useful. PEP 468 proposed that Python preserve the order of > keyword-only arguments in kwargs. This became easy with the > order-preserving dictionaries added to Python 3.6. I don't recall the > order of events, but in the end PEP 468 was accepted, and as of 3.6 Python > guarantees order in **kwargs. > > But that's arguments. What about parameters? > > Although this isn't as directly impactful, the order of keyword-only > parameters *is* visible to the programmer. The best way to see a > function's parameters is with inspect.signature, although there's also the > deprecated inspect.getfullargspec; in CPython you can also directly examine > fn.__code__.co_varnames. Two of these methods present their data in a way > that preserves order for all parameters, including keyword-only > parameters--and the third one is deprecated. > > Python must (and does) guarantee the order of positional and > positional-or-keyword parameters, because it uses position to map arguments > to parameters when the function is called. But conceptually this isn't > necessary for keyword-only parameters because their position is > irrelevant. 
I only see one place in the language & library that addresses
> the ordering of keyword-only parameters, by way of omission. The PEP for
> inspect.signature (PEP 362) says that when comparing two signatures for
> equality, their positional and positional-or-keyword parameters must be in
> the same order. It makes a point of *not* requiring that the two
> functions' keyword-only parameters be in the same order.
>
> For every currently supported version of Python 3, inspect.signature and
> fn.__code__.co_varnames preserve the order of keyword-only parameters.
> This isn't surprising; it's basically the same code path implementing those
> as the two types of positional-relevant parameters, so the most
> straightforward implementation would naturally preserve their order. It's
> just not guaranteed.
>
> I'd like inspect.signature to guarantee that the order of keyword-only
> parameters always matches the order they were declared in. Technically
> this isn't a language feature, it's a library feature. But making this
> guarantee would require that CPython internally cooperate, so it's kind of
> a language feature too.
>
> Does this sound reasonable? Would it need a PEP? I'm hoping for "yes"
> and "no", respectively.
>

Seems reasonable to me. I'm in the "yes" and "no" respectively "just do it" camp on this if you want to see it happen. The groundwork was already laid for this by using the order-preserving dict in 3.6. Having the inspect module behave in a similar manner follows naturally from that.

-gps

>
>
> Three final notes:
>
> - Yes, I do have a use case. I'm using inspect.signature metadata to
> mechanically map arguments from an external domain (command-line arguments)
> to a Python function. Relying on the declaration order of keyword-only
> parameters would elegantly solve one small problem.
> - I asked Armin Rigo about PyPy's support for Python 3. He said it
> should already maintain the order of keyword-only parameters, and if I ever
> catch it not maintaining them in order I should file a bug. I assert that
> making this guarantee would be nearly zero effort for any Python
> implementation--I bet they all already behave this way, all they need is a
> test case and some documentation.
> - One can extend this concept to functools.partial and
> inspect.Signature.bind: should its transformations of keyword-only
> parameters also maintain order in a consistent way? I suspect the answer
> there is much the same--there's an obvious way it should behave, it almost
> certainly already behaves that way, but it doesn't guarantee it. I don't
> think I need this for my use case.
>
>
>
> */arry*
>
> ++ Yes, that means "Argument Clinic" should really have been called
> "Parameter Clinic". But the "Parameter Clinic" sketch is nowhere near as
> funny.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Mon Nov 27 18:52:24 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 27 Nov 2017 15:52:24 -0800
Subject: [Python-Dev] PEP 565: show DeprecationWarning in __main__ (round 2)
In-Reply-To:
References:
Message-ID:

I am basically in agreement with this now. Some remarks:

- I would recommend adding a note to the abstract about the recommendation for test runners to also enable these warnings by default.
- In some sense, simple scripts that are distributed informally (e.g. as email attachments or via shared drives) are the most likely victims of unwanted warnings, and originally I wasn't happy with this. But such scripts are also the most likely victims of other sloppiness on their authors' part, like not specifying the needed Python version or dependencies, not checking command line arguments or input data carefully, and so on. And I now think that warnings just come with the territory. - Would be nice to know whether IPython/Jupyter is happy with this. - The sentence "As a result, API deprecation warnings encountered by development tools written in Python should continue to be hidden by default for users of those tools" is missing a final period; I also think that the argument here is stronger if "development" is left out. (Maybe development tools could be called out in a "for example" clause.) - I can't quite put my finger on it, but reading the three bullets of distinct categories of warnings something seems slightly off, perhaps due to independent editing of various phrases. Perhaps the three bullets could be rewritten for better correspondence between the various properties and audiences? And what should test runners do for each? - Also, is SyntaxWarning worth adding to the list? - The thing about FutureWarning being present since 2.3 feels odd -- if your library cares about supporting 2.7 and higher, should it use FutureWarning or DeprecationWarning? - "re-enabling deprecation warnings by default in __main__ doesn't help in handling cases where software has been factored out into support modules, but those modules still have little or no automated test coverage." This and all bullets in the same list should have an initial capital letter and trailing period. This sentence in particular also reads odd: the "but" seems to apply to everything that comes before, but actually is meant to apply only to "cases where ...". Maybe rephrasing this can help the sentence flow better. Most of these (the question about IPython/Jupyter approval excepted) are simple editing comments, so I expect this PEP will be able to move forward soon. Thanks for your patience, Nick! --Guido On Fri, Nov 24, 2017 at 9:33 PM, Nick Coghlan wrote: > This is a new version of the proposal to show DeprecationWarning in > __main__. > > The proposal itself hasn't changed (it's still recommending a new > entry in the default filter list), but there have been several updates > to the PEP text based on further development work and comments in the > initial thread: > > - there's now a linked issue and reference implementation > - it turns out we don't currently support the definition of module > based filters at startup time, so I've explicitly noted the relevant > enhancement that turned out to be necessary (allowing > plain-string-or-compiled-regex in stored filter definitions where we > currently only allow compiled regexes) > - I've noted the intended changes to the warnings-related documentation > - I've noted a couple of other relevant changes that Victor already > implemented for 3.7 > - I've noted that the motivation for the change in 2.7 & 3.1 covered > all Python applications, not just developer tools (developer tools > just provide a particularly compelling example of why "revert to the > Python 2.6 behaviour" isn't a good answer) > > Cheers, > Nick. 
> > ================= > PEP: 565 > Title: Show DeprecationWarning in __main__ > Author: Nick Coghlan > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 12-Nov-2017 > Python-Version: 3.7 > Post-History: 12-Nov-2017, 25-Nov-2017 > > > Abstract > ======== > > In Python 2.7 and Python 3.2, the default warning filters were updated to > hide > DeprecationWarning by default, such that deprecation warnings in > development > tools that were themselves written in Python (e.g. linters, static > analysers, > test runners, code generators), as well as any other applications that > merely > happened to be written in Python, wouldn't be visible to their users unless > those users explicitly opted in to seeing them. > > However, this change has had the unfortunate side effect of making > DeprecationWarning markedly less effective at its primary intended purpose: > providing advance notice of breaking changes in APIs (whether in CPython, > the > standard library, or in third party libraries) to users of those APIs. > > To improve this situation, this PEP proposes a single adjustment to the > default warnings filter: displaying deprecation warnings attributed to the > main > module by default. > > This change will mean that code entered at the interactive prompt and code > in > single file scripts will revert to reporting these warnings by default, > while > they will continue to be silenced by default for packaged code distributed > as > part of an importable module. > > The PEP also proposes a number of small adjustments to the reference > interpreter and standard library documentation to help make the warnings > subsystem more approachable for new Python developers. > > > Specification > ============= > > The current set of default warnings filters consists of:: > > ignore::DeprecationWarning > ignore::PendingDeprecationWarning > ignore::ImportWarning > ignore::BytesWarning > ignore::ResourceWarning > > The default ``unittest`` test runner then uses > ``warnings.catch_warnings()`` > ``warnings.simplefilter('default')`` to override the default filters while > running test cases. > > The change proposed in this PEP is to update the default warning filter > list > to be:: > > default::DeprecationWarning:__main__ > ignore::DeprecationWarning > ignore::PendingDeprecationWarning > ignore::ImportWarning > ignore::BytesWarning > ignore::ResourceWarning > > This means that in cases where the nominal location of the warning (as > determined by the ``stacklevel`` parameter to ``warnings.warn``) is in the > ``__main__`` module, the first occurrence of each DeprecationWarning will > once > again be reported. 
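(As an illustrative sketch of the proposed defaults -- the module names here are hypothetical: under the filter list above, a deprecation warning attributed to __main__ is shown, while the identical call in an imported module stays silent:)

# script.py, run as "python script.py": attributed to __main__, reported once
import warnings
warnings.warn("old API", DeprecationWarning)

# helper.py, pulled in via "import helper": still ignored by default
import warnings
warnings.warn("old API", DeprecationWarning)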
>
> This change will lead to DeprecationWarning being displayed by default for:
>
> * code executed directly at the interactive prompt
> * code executed directly as part of a single-file script
>
> While continuing to be hidden by default for:
>
> * code imported from another module in a ``zipapp`` archive's ``__main__.py``
> file
> * code imported from another module in an executable package's ``__main__``
> submodule
> * code imported from an executable script wrapper generated at
> installation time
> based on a ``console_scripts`` or ``gui_scripts`` entry point definition
>
> As a result, API deprecation warnings encountered by development tools written
> in Python should continue to be hidden by default for users of those tools
>
> While not its originally intended purpose, the standard library documentation
> will also be updated to explicitly recommend the use of
> ``FutureWarning`` (rather
> than ``DeprecationWarning``) for backwards compatibility warnings that are
> intended to be seen by *users* of an application.
>
> This will give the following three distinct categories of backwards
> compatibility warning, with three different intended audiences:
>
> * ``PendingDeprecationWarning``: reported by default only in test runners that
> override the default set of warning filters. The intended audience is Python
> developers that take an active interest in ensuring the future compatibility
> of their software (e.g. professional Python application developers with
> specific support obligations).
> * ``DeprecationWarning``: reported by default for code that runs directly in
> the ``__main__`` module (as such code is considered relatively unlikely to
> have a dedicated test suite), but relies on test suite based reporting for
> code in other modules. The intended audience is Python developers that are at
> risk of upgrades to their dependencies (including upgrades to Python itself)
> breaking their software (e.g. developers using Python to script environments
> where someone else is in control of the timing of dependency upgrades).
> * ``FutureWarning``: always reported by default. The intended audience is users
> of applications written in Python, rather than other Python developers
> (e.g. warning about use of a deprecated setting in a configuration file
> format).
>
> Given its presence in the standard library since Python 2.3, ``FutureWarning``
> would then also have a secondary use case for libraries and frameworks that
> support multiple Python versions: as a more reliably visible alternative to
> ``DeprecationWarning`` in Python 2.7 and versions of Python 3.x prior to
> 3.7.
>
>
> Documentation Updates
> =====================
>
> The current reference documentation for the warnings system is relatively short
> on specific *examples* of possible settings for the ``-W`` command line option
> or the ``PYTHONWARNINGS`` environment variable that achieve particular end
> results.
> > The following improvements are proposed as part of the implementation of > this > PEP: > > * Explicitly list the following entries under the description of the > ``PYTHONWARNINGS`` environment variable:: > > PYTHONWARNINGS=error # Convert to exceptions > PYTHONWARNINGS=always # Warn every time > PYTHONWARNINGS=default # Warn once per call location > PYTHONWARNINGS=module # Warn once per calling module > PYTHONWARNINGS=once # Warn once per Python process > PYTHONWARNINGS=ignore # Never warn > > * Explicitly list the corresponding short options > (``-We``, ``-Wa``, ``-Wd``, ``-Wm``,``-Wo``, ``-Wi``) for each of the > warning actions listed under the ``-W`` command line switch documentation > > * Explicitly list the default filter set in the ``warnings`` module > documentation, using the ``action::category`` and > ``action::category:module`` > notation > > * Explicitly list the following snippet in the ``warnings.simplefilter`` > documentation as a recommended approach to turning off all warnings by > default in a Python application while still allowing them to be turned > back on via ``PYTHONWARNINGS`` or the ``-W`` command line switch:: > > if not sys.warnoptions: > warnings.simplefilter("ignore") > > None of these are *new* (they already work in all still supported Python > versions), but they're not especially obvious given the current structure > of the related documentation. > > > Reference Implementation > ======================== > > A reference implementation is available in the PR [4_] linked from the > related tracker issue for this PEP [5_]. > > As a side-effect of implementing this PEP, the internal warnings filter > list > will start allowing the use of plain strings as part of filter definitions > (in > addition to the existing use of compiled regular expressions). When > present, > the plain strings will be compared for exact matches only. This approach > allows > the new default filter to be added during interpreter startup without > requiring > early access to the ``re`` module. > > > Motivation > ========== > > As discussed in [1_] and mentioned in [2_], Python 2.7 and Python 3.2 > changed > the default handling of ``DeprecationWarning`` such that: > > * the warning was hidden by default during normal code execution > * the ``unittest`` test runner was updated to re-enable it when running > tests > > The intent was to avoid cases of tooling output like the following:: > > $ devtool mycode/ > /usr/lib/python3.6/site-packages/devtool/cli.py:1: > DeprecationWarning: 'async' and 'await' will become reserved keywords > in Python 3.7 > async = True > ... actual tool output ... > > Even when `devtool` is a tool specifically for Python programmers, this is > not > a particularly useful warning, as it will be shown on every invocation, > even > though the main helpful step an end user can take is to report a bug to the > developers of ``devtool``. > > The warning is even less helpful for general purpose developer tools that > are > used across more languages than just Python, and almost entirely > \*un\*helpful > for applications that simply happen to be written in Python, and aren't > necessarily intended for a developer audience at all. 
> > However, this change proved to have unintended consequences for the > following > audiences: > > * anyone using a test runner other than the default one built into > ``unittest`` > (the request for third party test runners to change their default > warnings > filters was never made explicitly, so many of them still rely on the > interpreter defaults that are designed to suit deployed applications) > * anyone using the default ``unittest`` test runner to test their Python > code > in a subprocess (since even ``unittest`` only adjusts the warnings > settings > in the current process) > * anyone writing Python code at the interactive prompt or as part of a > directly > executed script that didn't have a Python level test suite at all > > In these cases, ``DeprecationWarning`` ended up become almost entirely > equivalent to ``PendingDeprecationWarning``: it was simply never seen at > all. > > > Limitations on PEP Scope > ======================== > > This PEP exists specifically to explain both the proposed addition to the > default warnings filter for 3.7, *and* to more clearly articulate the > rationale > for the original change to the handling of DeprecationWarning back in > Python 2.7 > and 3.2. > > This PEP does not solve all known problems with the current approach to > handling > deprecation warnings. Most notably: > > * the default ``unittest`` test runner does not currently report > deprecation > warnings emitted at module import time, as the warnings filter > override is only > put in place during test execution, not during test discovery and > loading. > * the default ``unittest`` test runner does not currently report > deprecation > warnings in subprocesses, as the warnings filter override is applied > directly > to the loaded ``warnings`` module, not to the ``PYTHONWARNINGS`` > environment > variable. > * the standard library doesn't provide a straightforward way to opt-in to > seeing > all warnings emitted *by* a particular dependency prior to upgrading it > (the third-party ``warn`` module [3_] does provide this, but enabling it > involves monkeypatching the standard library's ``warnings`` module). > * re-enabling deprecation warnings by default in __main__ doesn't help in > handling cases where software has been factored out into support > modules, but > those modules still have little or no automated test coverage. Near > term, the > best currently available answer is to run such applications with > ``PYTHONWARNINGS=default::DeprecationWarning`` or > ``python -W default::DeprecationWarning`` and pay attention to their > ``stderr`` output. Longer term, this is really a question for researchers > working on static analysis of Python code: how to reliably find usage of > deprecated APIs, and how to infer that an API or parameter is deprecated > based on ``warnings.warn`` calls, without actually running either the > code > providing the API or the code accessing it > > While these are real problems with the status quo, they're excluded from > consideration in this PEP because they're going to require more complex > solutions than a single additional entry in the default warnings filter, > and resolving them at least potentially won't require going through the PEP > process. 
>
> For anyone interested in pursuing them further, the first two would be
> ``unittest`` module enhancement requests, the third would be a ``warnings``
> module enhancement request, while the last would only require a PEP if
> inferring API deprecations from their contents was deemed to be an intractable
> code analysis problem, and an explicit function and parameter marker syntax in
> annotations was proposed instead.
>
> The CPython reference implementation will also include the following related
> changes in 3.7:
>
> * a new ``-X dev`` command line option that combines several developer centric
> settings (including ``-Wd``) into one command line flag:
> https://bugs.python.org/issue32043
> * changing the behaviour in debug builds to show more of the warnings that are
> off by default in regular interpreter builds:
> https://bugs.python.org/issue32088
>
>
> References
> ==========
>
> .. [1] stdlib-sig thread proposing the original default filter change
> (https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html)
>
> .. [2] Python 2.7 notification of the default warnings filter change
> (https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings)
>
> .. [3] Emitting warnings based on the location of the warning itself
> (https://pypi.org/project/warn/)
>
> .. [4] GitHub PR for PEP 565 implementation
> (https://github.com/python/cpython/pull/4458)
>
> .. [5] Tracker issue for PEP 565 implementation
> (https://bugs.python.org/issue31975)
>
> .. [6] python-dev discussion thread for this PEP
> (https://mail.python.org/pipermail/python-dev/2017-November/150477.html)
>
>
> Copyright
> =========
>
> This document has been placed in the public domain.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org
>

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yselivanov.ml at gmail.com Mon Nov 27 18:58:01 2017
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 27 Nov 2017 18:58:01 -0500
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
Message-ID:

On Mon, Nov 27, 2017 at 12:05 PM, Larry Hastings wrote:
[..]
> The PEP for
> inspect.signature (PEP 362) says that when comparing two signatures for
> equality, their positional and positional-or-keyword parameters must be in
> the same order. It makes a point of *not* requiring that the two functions'
> keyword-only parameters be in the same order.

Yes, and I believe Signature.__eq__ should stay that way.

>
> For every currently supported version of Python 3, inspect.signature and
> fn.__code__.co_varnames preserve the order of keyword-only parameters. This
> isn't surprising; it's basically the same code path implementing those as
> the two types of positional-relevant parameters, so the most straightforward
> implementation would naturally preserve their order. It's just not
> guaranteed.
>
> I'd like inspect.signature to guarantee that the order of keyword-only
> parameters always matches the order they were declared in. Technically this
> isn't a language feature, it's a library feature.
> But making this guarantee would require that CPython internally
> cooperate, so it's kind of a language feature too.

We can update the documentation and say that we preserve the order in
simple cases:

    import inspect

    def foo(*, a=1, b=2): pass

    s = inspect.signature(foo)
    assert list(s.parameters.keys()) == ['a', 'b']

We can't say anything about the order if someone passes a partial
object, or sets custom Signature objects to func.__signature__.

Yury

From ericsnowcurrently at gmail.com  Mon Nov 27 19:27:20 2017
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Mon, 27 Nov 2017 17:27:20 -0700
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
Message-ID:

On Mon, Nov 27, 2017 at 10:05 AM, Larry Hastings wrote:
> I'd like inspect.signature to guarantee that the order of keyword-only
> parameters always matches the order they were declared in. Technically
> this isn't a language feature, it's a library feature. But making this
> guarantee would require that CPython internally cooperate, so it's kind
> of a language feature too.
>
> Does this sound reasonable? Would it need a PEP? I'm hoping for "yes"
> and "no", respectively.

+1

There is definitely significant information in source code text that
gets thrown away in some cases, which I'm generally in favor of
preserving (see PEP 468 and PEP 520). The use case here is unclear to
me, but the desired guarantee is effectively the status quo and it is a
minor guarantee as well, so I don't see the harm. Furthermore, I don't
see a need for a PEP given the small scale and impact.

-eric

From nad at python.org  Mon Nov 27 22:40:20 2017
From: nad at python.org (Ned Deily)
Date: Mon, 27 Nov 2017 22:40:20 -0500
Subject: [Python-Dev] 3.7.0a3 cutoff extended a week to 12-04
Message-ID: <1E7661D9-B165-4A1A-8471-57F7E2415786@python.org>

We are extending the cutoff for the next 3.7 alpha preview (3.7.0a3) by
a week, moving it from today to 12-04 12:00 UTC. The main reason is a
selfish one: I have been traveling and mainly offline for the last few
weeks and I am still catching up with activity. Since we are getting
close to the feature code cutoff for the 3.7 cycle, it would be better
to get things in sooner rather than later. Following alpha 3, we will
have one more alpha preview, 3.7.0a4 on 2018-01-08, prior to the feature
code cutoff with 3.7.0b1 on 2018-01-29.

Note that 12-04 is also the scheduled date for the release candidate of
the next 3.6.x maintenance release, 3.6.4rc1. So I hope you can take
advantage of the extra days for both release cycles. Thanks again for
all your efforts!

--Ned

--
  Ned Deily
  nad at python.org -- []

From larry at hastings.org  Tue Nov 28 00:42:28 2017
From: larry at hastings.org (Larry Hastings)
Date: Mon, 27 Nov 2017 21:42:28 -0800
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To:
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
Message-ID: <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org>

On 11/27/2017 03:58 PM, Yury Selivanov wrote:
> We can't say anything about the order if someone passes a partial
> object

Sure we could. We could ensure that functools.partial behaves in a sane
way, then document and guarantee that behavior.

> or sets custom Signature objects to func.__signature__.

Consenting Adults rule applies here. Obviously we should honor the
signature they set.
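(To make the behavior under discussion concrete -- a minimal sketch; the ordering shown is what current CPython happens to do, not yet a documented guarantee:)

    import inspect
    from functools import partial

    def f(a, b, *, kw1=1, kw2=2):
        pass

    # Keyword-only parameters currently come back in declaration order.
    print(list(inspect.signature(f).parameters))  # ['a', 'b', 'kw1', 'kw2']

    # The same currently holds for a functools.partial wrapper.
    g = partial(f, 0)                             # binds a
    print(list(inspect.signature(g).parameters))  # ['b', 'kw1', 'kw2']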
//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tjreedy at udel.edu  Tue Nov 28 00:56:48 2017
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 28 Nov 2017 00:56:48 -0500
Subject: [Python-Dev] Using async/await in place of yield expression
In-Reply-To:
References: <5A1C8AA0.4040600@canterbury.ac.nz>
Message-ID:

On 11/27/2017 5:05 PM, Guido van Rossum wrote:
> On Mon, Nov 27, 2017 at 1:58 PM, Greg Ewing <...> wrote:
>> Guido van Rossum wrote:
>>> The source for sleep() isn't very helpful -- e.g. @coroutine is
>>> mostly a backwards compatibility thing.
>>
>> So how are you supposed to write that *without* using @coroutine?
>
> A simplified version using async def/await:
> ---
> async def sleep(delay):
>     f = Future()

This must be asyncio.Future, as (by experiment) a
concurrent.futures.Future cannot be awaited. (The asyncio.Future doc
could add this as a difference.) Future needs an argument for the
keyword-only loop parameter; as I remember, the default None gets
replaced by the default asyncio loop.

>     get_event_loop().call_later(delay, f.set_result)

A result is needed, such as None or delay, to pass to f.set_result.

>     await f

I gather that 1. the value of the expression is the result set on the
future, which would normally be needed, though not here; 2. the purpose
of 'await f' here is simply to block exit from the coroutine, without
blocking the loop, so that users of 'await sleep(n)' will actually
pause (without blocking other code).

Since a coroutine must be awaited, and not just called, and await can
only be used in a coroutine, there seems to be a problem of where to
start. The asyncio answer, in the PEP, is to wrap a coroutine call in a
Task, which is, as I remember, done by the loop run methods.

Based on the notes above, and adding some prints, I got this to run:
---
import asyncio
import time

loop = asyncio.get_event_loop()

async def sleep(delay):
    f = asyncio.Future(loop=loop)
    loop.call_later(delay, f.set_result, delay)
    print('start')
    start = time.perf_counter()
    d = await f
    stop = time.perf_counter()
    print(f'requested sleep = {d}; actual = {stop-start:f}')

loop.run_until_complete(sleep(1))
---
This produces:
---
start
requested sleep = 1; actual = .9...   [usually < 1]
---

Now, the question I've had since async and await were introduced, is
how to drive async statements with tkinter. With the help of the
working example above, I make a start.
---
from asyncio import Future, Task
import tkinter as tk
import time

class ATk(tk.Tk):
    "Enable tkinter program to use async def, etc, and await sleep."
    def __init__(self):
        super().__init__()

    def task(self, coro):
        "Connect async def coroutine to tk loop."
        Task(coro, loop=self)

    def get_debug(self):
        "Internal method required by Future."
        print('debug')
        return False

    def call_soon(self, callback, *args):
        "Internal method required by Task and Future."
        # TaskStep/Wakeup/MethWrapper has no .__name__ attribute.
        # Tk.after requires callbacks to have one (bug, I think).
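        # A plain nested function sidesteps that: an ordinary def always
        # has a __name__, so Tk.after accepts the wrap2 wrapper below.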
        print('soon', callback, *args, hasattr(callback, '__name__'))
        def wrap2():
            callback(*args)
        return self.after(0, wrap2)

root = ATk()

async def sleep(delay):
    f = Future(loop=root)
    def cb():
        print('cb called')
        f.set_result(delay)
    root.after(int(delay*1000), cb)
    print('start')
    start = time.perf_counter()
    d = await f
    stop = time.perf_counter()
    print(f'requested sleep = {d}; actual = {stop-start:f}')

root.task(sleep(1))
root.mainloop()
---
Output:
---
debug
soon <...> False
debug
start
cb called
soon <...> False
requested sleep = 1; actual = 1.01...   [always about 1.01]
---

Replacing the last two lines with

    async def myloop(seconds):
        while True:
            print(f'*** {seconds} ***')
            await sleep(seconds)

    root.task(myloop(1.2))
    root.task(myloop(.77))
    root.mainloop()

prints interleaved 1.2 and .77 lines. I will next work on animating tk
widgets in the loops.

--
Terry Jan Reedy

From eric at trueblade.com  Tue Nov 28 02:41:54 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Tue, 28 Nov 2017 02:41:54 -0500
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References: <5A1B9CED.60708@canterbury.ac.nz>
Message-ID:

On 11/27/2017 10:51 AM, Guido van Rossum wrote:
> Following up on this subthread (inline below).
> Didn't we at one point have something like
>
>     isinstance(other, self.__class__) and fields(other) == fields(self)
>     and <...>
>
> (plus some optimization if the types are identical)?
>
> That feels ideal, because it means you can subclass Point just to add
> some methods and it will stay comparable, but if you add fields it will
> always be unequal.

One thing this doesn't let you do is compare instances of two different
subclasses of a base type:

    @dataclass
    class B:
        i: int

    @dataclass
    class C1(B): pass

    @dataclass
    class C2(B): pass

You can't compare C1(0) and C2(0), because neither one is an instance
of the other's type. The test to get this case to work would be
expensive: find the common ancestor, and then make sure no fields have
been added since then. And I haven't thought through multiple
inheritance.

I suggest we don't try to support this case.

Eric.

From storchaka at gmail.com  Tue Nov 28 02:54:07 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 28 Nov 2017 09:54:07 +0200
Subject: [Python-Dev] PEP 565: show DeprecationWarning in __main__ (round 2)
In-Reply-To:
References:
Message-ID:

25.11.17 07:33, Nick Coghlan wrote:
> * ``FutureWarning``: always reported by default. The intended audience
>   is users of applications written in Python, rather than other Python
>   developers (e.g. warning about use of a deprecated setting in a
>   configuration file format).
>
> Given its presence in the standard library since Python 2.3,
> ``FutureWarning`` would then also have a secondary use case for
> libraries and frameworks that support multiple Python versions: as a
> more reliably visible alternative to ``DeprecationWarning`` in Python
> 2.7 and versions of Python 3.x prior to 3.7.

I think it is worth saying more explicitly that the primary purpose of
FutureWarning (warning about future behavior changes that will not be
errors) is kept. It just gains a secondary purpose: a replacement for
DeprecationWarning when you want to be sure that the warning is visible
to end users.

I think that showing DeprecationWarning in __main__ is just a first
step. In the future we can extend the scope of showing
DeprecationWarning.
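(A sketch of the division of labor Serhiy describes -- the function and option names below are made up purely for illustration:)

    import warnings

    def old_api():
        # Aimed at other developers: under PEP 565 this is shown by
        # default only when the calling code runs in __main__ (or under
        # a suitably configured test runner).
        warnings.warn("old_api() is deprecated; use new_api() instead",
                      DeprecationWarning, stacklevel=2)

    def load_config(cfg):
        # Aimed at end users of an application: always shown by default.
        if "legacy_option" in cfg:
            warnings.warn("'legacy_option' is deprecated and will be "
                          "ignored in a future release",
                          FutureWarning, stacklevel=2)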
From ncoghlan at gmail.com Tue Nov 28 06:00:47 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Nov 2017 21:00:47 +1000 Subject: [Python-Dev] PEP 565: show DeprecationWarning in __main__ (round 2) In-Reply-To: References: Message-ID: On 28 November 2017 at 09:52, Guido van Rossum wrote: > I am basically in agreement with this now. Some remarks: I've pushed an update which should address most of these, as well as Serhiy's comment about the existing FutureWarning use case: https://github.com/python/peps/commit/aaa64f53d0434724384b056a3c195d63a5cc3761 > - I would recommend adding a note to the abstract about the recommendation > for test runners to also enable these warnings by default. Done. > - Would be nice to know whether IPython/Jupyter is happy with this. They implemented a solution along these lines some time ago, so I've added a new subsection with advice for folks writing interactive shells, and quoted their warnings.filterwarnings call as an example of how to do it (with a link to the relevant line in their repo). > - The sentence "As a result, API deprecation warnings encountered by > development tools written in Python should continue to be hidden by default > for users of those tools" is missing a final period; I also think that the > argument here is stronger if "development" is left out. (Maybe development > tools could be called out in a "for example" clause.) I ended up rewording that paragraph completely (partly prompted by your comment about the impact on single file scripts). > - I can't quite put my finger on it, but reading the three bullets of > distinct categories of warnings something seems slightly off, perhaps due to > independent editing of various phrases. Perhaps the three bullets could be > rewritten for better correspondence between the various properties and > audiences? And what should test runners do for each? I think I was trying to do too much in that list of categories, so I moved everything related to test runners out to a new dedicated section. That means the list of categories can focus solely on the actual defaults, while the new test runner section describes how we expect test runners to override that. I also noticed something that seemed worth mentioning in relation to BytesWarning, which is that "-Wd" works as well as it does because the interpreter doesn't even *try* to emit those warnings if you don't pass "-b" or "-bb" on the command line. The warnings filter only handles the "Should it be a warning or an exception?" part. > - Also, is SyntaxWarning worth adding to the list? I *haven't* added this, since our only current syntax warning that I can see is the one for "assert ('always', 'true')", and we've been more inclined to go down the DeprecationWarning->SyntaxError path in recent years than we have to settle for a persistent syntax warning. > - The thing about FutureWarning being present since 2.3 feels odd -- if your > library cares about supporting 2.7 and higher, should it use FutureWarning > or DeprecationWarning? I reworded this paragraph to make it more prescriptive and say "Use DeprecationWarning on 3.7+, use FutureWarning on earlier versions if you don't think the way they handle DeprecationWarning is noisy enough") > - "re-enabling deprecation warnings by default in __main__ doesn't help in > handling cases where software has been factored out into support modules, > but > those modules still have little or no automated test coverage." 
> This and all bullets in the same list should have an initial capital > letter and trailing period. Fixed. > This sentence in particular also reads odd: the > "but" seems to apply to everything that comes before, but actually is meant > to apply only to "cases where ...". Maybe rephrasing this can help the > sentence flow better. I missed this comment initially. Follow-up commit to reword that sentence: https://github.com/python/peps/commit/47ea35f0510dab2b01e18ff437f95c6b1b75f2e6 Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Nov 28 06:32:30 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Nov 2017 21:32:30 +1000 Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters? In-Reply-To: <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org> References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org> <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org> Message-ID: On 28 November 2017 at 15:42, Larry Hastings wrote: > On 11/27/2017 03:58 PM, Yury Selivanov wrote: >> We can't say anything about the order if someone passes a partial >> object > > Sure we could. We could ensure that functools.partial behaves in a sane > way, then document and guarantee that behavior. Right, I think the main implication here would be that we need to ensure that any signature manipulation operations *we* provide are order preserving. Fortunately for Larry, we kinda cheat on that front: all the logic for dealing with this problem is in the inspect module itself, which knows about all the different ways we manipulate signatures in the standard library. That means that if we want to declare that the inspect module will be order preserving, we can, and it shouldn't require changes to anything else. Cheers, Nick. P.S. Note that inspect.getfullargspec() was actually undeprecated a while back - enough folks that didn't need access to the function annotations were reimplementing it for themselves "because the standard library API is deprecated" that the most logical course of action was to just declare it as being supported again. I don't think that changes the argument here though - it just means guaranteed order preservation in that API will only happen if we declare dicts to be insertion ordered in general. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Nov 28 07:02:25 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Nov 2017 22:02:25 +1000 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: On 28 November 2017 at 17:41, Eric V. Smith wrote: > One thing this doesn't let you do is compare instances of two different > subclasses of a base type: > > @dataclass > class B: > i: int > > @dataclass > class C1(B): pass > > @dataclass > class C2(B): pass > > You can't compare C1(0) and C2(0), because neither one is an instance of the > other's type. The test to get this case to work would be expensive: find the > common ancestor, and then make sure no fields have been added since then. > And I haven't thought through multiple inheritance. > > I suggest we don't try to support this case. 
That gets you onto problematic ground as far as transitivity is
concerned, since you'd end up with the following:

    >>> b = B(0); c1 = C1(0); c2 = C2(0)
    >>> c1 == b
    True
    >>> b == c2
    True
    >>> c1 == c2
    False

However, I think you can fix this by injecting the first base in the
MRO that defines a data field as a "__field_layout__" class attribute,
and then have the comparison methods check for "other.__field_layout__
is self.__field_layout__", rather than checking the runtime class
directly.

So in the above example, you would have:

    >>> B.__field_layout__ is B
    True
    >>> C1.__field_layout__ is B
    True
    >>> C2.__field_layout__ is B
    True

It would then be up to the dataclass decorator to set
`__field_layout__` correctly, using the following rules:

1. Use the just-defined class if the class defines any fields.
2. Use the just-defined class if it inherits from multiple base classes
   that define fields and don't already share an MRO.
3. Use a base class if that's either the only base class that defines
   fields, or if all other base classes that define fields are already
   in the MRO of that base class.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From brett at python.org  Tue Nov 28 12:13:50 2017
From: brett at python.org (Brett Cannon)
Date: Tue, 28 Nov 2017 17:13:50 +0000
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To:
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org>
 <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org>
Message-ID:

On Tue, 28 Nov 2017 at 03:33 Nick Coghlan wrote:

> On 28 November 2017 at 15:42, Larry Hastings wrote:
> > On 11/27/2017 03:58 PM, Yury Selivanov wrote:
> >> We can't say anything about the order if someone passes a partial
> >> object
> >
> > Sure we could. We could ensure that functools.partial behaves in a sane
> > way, then document and guarantee that behavior.
>
> Right, I think the main implication here would be that we need to
> ensure that any signature manipulation operations *we* provide are
> order preserving.
>
> Fortunately for Larry, we kinda cheat on that front: all the logic for
> dealing with this problem is in the inspect module itself, which knows
> about all the different ways we manipulate signatures in the standard
> library. That means that if we want to declare that the inspect module
> will be order preserving, we can, and it shouldn't require changes to
> anything else.
>
> Cheers,
> Nick.
>
> P.S. Note that inspect.getfullargspec() was actually undeprecated a
> while back - enough folks that didn't need access to the function
> annotations were reimplementing it for themselves "because the
> standard library API is deprecated" that the most logical course of
> action was to just declare it as being supported again. I don't think
> that changes the argument here though - it just means guaranteed order
> preservation in that API will only happen if we declare dicts to be
> insertion ordered in general.

OT for this thread, but is there an issue number tracking the
un-deprecating? Basically I want to make sure there is an issue tracking
deprecating it again when we stop worrying about any Python 2/3 support.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From steve at holdenweb.com  Tue Nov 28 13:25:01 2017
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 28 Nov 2017 18:25:01 +0000
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To: References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org> <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org> Message-ID: I was going to suggest the final DeprecationWarning should be raised unconditionally (subject to whatever silencing rules are agreed) by the final 2.7 release to recommend migration to Python 3. "This is an ex-language ... it has ceased to be ... it is no more ... it has returned to the great heap whence it came ..." Steve Holden On Tue, Nov 28, 2017 at 5:13 PM, Brett Cannon wrote: > > > On Tue, 28 Nov 2017 at 03:33 Nick Coghlan wrote: > >> On 28 November 2017 at 15:42, Larry Hastings wrote: >> > On 11/27/2017 03:58 PM, Yury Selivanov wrote: >> >> We can't say anything about the order if someone passes a partial >> >> object >> > >> > Sure we could. We could ensure that functools.partial behaves in a sane >> > way, then document and guarantee that behavior. >> >> Right, I think the main implication here would be that we need to >> ensure that any signature manipulation operations *we* provide are >> order preserving. >> >> Fortunately for Larry, we kinda cheat on that front: all the logic for >> dealing with this problem is in the inspect module itself, which knows >> about all the different ways we manipulate signatures in the standard >> library. That means that if we want to declare that the inspect module >> will be order preserving, we can, and it shouldn't require changes to >> anything else. >> >> Cheers, >> Nick. >> >> P.S. Note that inspect.getfullargspec() was actually undeprecated a >> while back - enough folks that didn't need access to the function >> annotations were reimplementing it for themselves "because the >> standard library API is deprecated" that the most logical course of >> action was to just declare it as being supported again. I don't think >> that changes the argument here though - it just means guaranteed order >> preservation in that API will only happen if we declare dicts to be >> insertion ordered in general. >> > > OT for this thread, but is there an issue number tracking the > un-deprecating? Basically I want to make sure there is an issue tracking > deprecating it again when we stop worrying about any Python 2/3 support. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > steve%40holdenweb.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Nov 28 13:31:39 2017 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 28 Nov 2017 13:31:39 -0500 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: References: <5A1B9CED.60708@canterbury.ac.nz> Message-ID: <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> On 11/28/17 7:02 AM, Nick Coghlan wrote: > On 28 November 2017 at 17:41, Eric V. Smith wrote: >> One thing this doesn't let you do is compare instances of two different >> subclasses of a base type: >> >> @dataclass >> class B: >> i: int >> >> @dataclass >> class C1(B): pass >> >> @dataclass >> class C2(B): pass >> >> You can't compare C1(0) and C2(0), because neither one is an instance of the >> other's type. The test to get this case to work would be expensive: find the >> common ancestor, and then make sure no fields have been added since then. >> And I haven't thought through multiple inheritance. >> >> I suggest we don't try to support this case. 
> > That gets you onto problematic ground as far as transitivity is > concerned, since you'd end up with the following: Excellent point, thanks for raising it. > >>> b = B(0); c1 = C1(0); c2 = C2(0) > >>> c1 == b > True > >>> b == c2 > True > >>> c1 == c2 > False > > However, I think you can fix this by injecting the first base in the > MRO that defines a data field as a "__field_layout__" class attribute, > and then have the comparison methods check for "other.__field_layout__ > is self.__field_layout__", rather than checking the runtime class > directly. > > So in the above example, you would have: > > >>> B.__field_layout__ is B > True > >>> C1.__field_layout__ is B > True > >>> C2.__field_layout__ is B > True > > It would then be up to the dataclass decorator to set > `__field_layout__` correctly, using the follow rules: > > 1. Use the just-defined class if the class defines any fields > 2. Use the just-defined class if it inherits from multiple base > classes that define fields and don't already share an MRO > 3. Use a base class if that's either the only base class that defines > fields, or if all other base classes that define fields are already in > the MRO of that base class That seems like a lot of complication for a feature that will be rarely used. I'll give it some thought, especially the MI logic. I think what you're laying out is an optimization for "do the classes have identical fields, inherited through a common base class or classes", right? Eric. From guido at python.org Tue Nov 28 13:57:56 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 10:57:56 -0800 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> Message-ID: I would also be happy with a retreat, where we define `__eq__` to insist that the classes are the same, and people can override this to their hearts' content. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Nov 28 14:05:58 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 11:05:58 -0800 Subject: [Python-Dev] PEP 565: show DeprecationWarning in __main__ (round 2) In-Reply-To: References: Message-ID: This is awesome. If there isn't more feedback in the next few days expect an approval early next week. (Ping me if you don't hear from me, I'm juggling way too many small tasks so I'm likely to forget some.) On Tue, Nov 28, 2017 at 3:00 AM, Nick Coghlan wrote: > On 28 November 2017 at 09:52, Guido van Rossum wrote: > > I am basically in agreement with this now. Some remarks: > > I've pushed an update which should address most of these, as well as > Serhiy's comment about the existing FutureWarning use case: > https://github.com/python/peps/commit/aaa64f53d0434724384b056a3c195d > 63a5cc3761 > > > - I would recommend adding a note to the abstract about the > recommendation > > for test runners to also enable these warnings by default. > > Done. > > > - Would be nice to know whether IPython/Jupyter is happy with this. > > They implemented a solution along these lines some time ago, so I've > added a new subsection with advice for folks writing interactive > shells, and quoted their warnings.filterwarnings call as an example of > how to do it (with a link to the relevant line in their repo). 
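(For reference, the kind of filter call being described looks roughly like the following -- a sketch of the approach, not IPython's exact code:)

    import warnings

    # Show DeprecationWarning for code executed at the interactive
    # prompt (which runs in the __main__ namespace), while leaving
    # imported libraries quiet by default.
    warnings.filterwarnings("default", category=DeprecationWarning,
                            module="__main__")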
> > > - The sentence "As a result, API deprecation warnings encountered by > > development tools written in Python should continue to be hidden by > default > > for users of those tools" is missing a final period; I also think that > the > > argument here is stronger if "development" is left out. (Maybe > development > > tools could be called out in a "for example" clause.) > > I ended up rewording that paragraph completely (partly prompted by > your comment about the impact on single file scripts). > > > - I can't quite put my finger on it, but reading the three bullets of > > distinct categories of warnings something seems slightly off, perhaps > due to > > independent editing of various phrases. Perhaps the three bullets could > be > > rewritten for better correspondence between the various properties and > > audiences? And what should test runners do for each? > > I think I was trying to do too much in that list of categories, so I > moved everything related to test runners out to a new dedicated > section. > > That means the list of categories can focus solely on the actual > defaults, while the new test runner section describes how we expect > test runners to override that. > > I also noticed something that seemed worth mentioning in relation to > BytesWarning, which is that "-Wd" works as well as it does because the > interpreter doesn't even *try* to emit those warnings if you don't > pass "-b" or "-bb" on the command line. The warnings filter only > handles the "Should it be a warning or an exception?" part. > > > - Also, is SyntaxWarning worth adding to the list? > > I *haven't* added this, since our only current syntax warning that I > can see is the one for "assert ('always', 'true')", and we've been > more inclined to go down the DeprecationWarning->SyntaxError path in > recent years than we have to settle for a persistent syntax warning. > > > - The thing about FutureWarning being present since 2.3 feels odd -- if > your > > library cares about supporting 2.7 and higher, should it use > FutureWarning > > or DeprecationWarning? > > I reworded this paragraph to make it more prescriptive and say "Use > DeprecationWarning on 3.7+, use FutureWarning on earlier versions if > you don't think the way they handle DeprecationWarning is noisy > enough") > > > - "re-enabling deprecation warnings by default in __main__ doesn't help > in > > handling cases where software has been factored out into support > modules, > > but > > those modules still have little or no automated test coverage." > > This and all bullets in the same list should have an initial capital > > letter and trailing period. > > Fixed. > > > This sentence in particular also reads odd: the > > "but" seems to apply to everything that comes before, but actually is > meant > > to apply only to "cases where ...". Maybe rephrasing this can help the > > sentence flow better. > > I missed this comment initially. Follow-up commit to reword that > sentence: https://github.com/python/peps/commit/ > 47ea35f0510dab2b01e18ff437f95c6b1b75f2e6 > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From python-dev at mgmiller.net  Tue Nov 28 14:51:14 2017
From: python-dev at mgmiller.net (Mike Miller)
Date: Tue, 28 Nov 2017 11:51:14 -0800
Subject: [Python-Dev] iso8601 parsing
In-Reply-To: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net>
References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net>
Message-ID:

This may have gotten bogged down again. Could we get the output of
datetime.isoformat() parsed at a minimum? Perfection is not required.

Looks like there is a patch or two and test cases on the bug.

-Mike

> Could anyone put this five year-old bug about parsing iso8601 format
> date-times on the front burner?
>
>     http://bugs.python.org/issue15873
>
> In the comments there's a lot of hand-wringing about different
> variations that bogged it down, but right now I only need it to handle
> the output of datetime.isoformat():
>
>     >>> dt.isoformat()
>     '2017-10-20T08:20:08.986166+00:00'
>
> Perhaps if we could get that minimum first step in, it could be
> iterated on and made more lenient in the future.

From storchaka at gmail.com  Tue Nov 28 15:04:49 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 28 Nov 2017 22:04:49 +0200
Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns
Message-ID:

The two largest problems in the re module are splitting on zero-width
patterns and complete and correct support of the Unicode standard. These
problems are solved in regex. regex has many other features, but they
are less important.

I want to describe the problem of splitting on zero-width patterns. It
was already discussed on Python-Dev 13 years ago [3] and maybe later.
See also issues: [4], [5], [6], [7], [8].

In short, it doesn't work. Splitting on the pattern r'\b' doesn't split
the text at boundaries of words, and splitting on the pattern
r'\s+|(?<=-)' will split the text on whitespace, but will not split
words with hyphens as expected.

In Python 3.4 and earlier:

>>> re.split(r'\b', 'Self-Defence Class')
['Self-Defence Class']
>>> re.split(r'\s+|(?<=-)', 'Self-Defence Class')
['Self-Defence', 'Class']
>>> re.split(r'\s*', 'Self-Defence Class')
['Self-Defence', 'Class']

Note that splitting on r'\s*' (0 or more whitespaces) actually split on
r'\s+' (1 or more whitespaces): splitting on patterns that can match
only the empty string (like r'\b' or r'(?<=-)') never worked, while
splitting on patterns that merely can match the empty string silently
ignored all empty matches.

Starting with Python 3.5, splitting on a pattern that can match only
the empty string raises a ValueError (this never worked), and splitting
on a pattern that can match the empty string, but not only the empty
string, emits a FutureWarning. This gave developers time to replace
their r'\s*' patterns with r'\s+', as they should be.

Now I have created a final patch [9] that makes re.split() split on
zero-width patterns.

>>> re.split(r'\b', 'Self-Defence Class')
['', 'Self', '-', 'Defence', ' ', 'Class', '']
>>> re.split(r'\s+|(?<=-)', 'Self-Defence Class')
['Self-', 'Defence', 'Class']
>>> re.split(r'\s*', 'Self-Defence Class')
['', 'S', 'e', 'l', 'f', '-', 'D', 'e', 'f', 'e', 'n', 'c', 'e', 'C',
'l', 'a', 's', 's', '']

In the latter case the result differs too much from the previous
result, and is likely not what the author wanted to get. But users had
two Python releases for fixing their code, and FutureWarning is not
silent by default.

Because these patterns produced errors or warnings in the recent two
releases, we don't need an additional parameter for compatibility.

But the problem was not just with re.split().
Other functions also did not work well with patterns that can match the
empty string.

>>> re.findall(r'^|\w+', 'Self-Defence Class')
['', 'elf', 'Defence', 'Class']
>>> list(re.finditer(r'^|\w+', 'Self-Defence Class'))
[<_sre.SRE_Match object; span=(0, 0), match=''>, <_sre.SRE_Match object;
span=(1, 4), match='elf'>, <_sre.SRE_Match object; span=(5, 12),
match='Defence'>, <_sre.SRE_Match object; span=(13, 18), match='Class'>]
>>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class')
'<>S<elf>-<Defence> <Class>'

After matching the empty string, the following character is skipped and
is not included in the next match. My patch fixes these functions too.

>>> re.findall(r'^|\w+', 'Self-Defence Class')
['', 'Self', 'Defence', 'Class']
>>> list(re.finditer(r'^|\w+', 'Self-Defence Class'))
[<_sre.SRE_Match object; span=(0, 0), match=''>, <_sre.SRE_Match object;
span=(0, 4), match='Self'>, <_sre.SRE_Match object; span=(5, 12),
match='Defence'>, <_sre.SRE_Match object; span=(13, 18), match='Class'>]
>>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class')
'<><Self>-<Defence> <Class>'

I think this change doesn't need preliminary warnings, because it
changes the behavior of more rarely used patterns. No re tests have
been broken; I needed to add new tests for detecting the behavior
change.

But there is one spoonful of tar in a barrel of honey. I didn't expect
this, but this change has broken a pattern used with re.sub() in the
doctest module: r'(?m)^\s*?$'.
[1] https://docs.python.org/3/library/re.html [2] https://pypi.python.org/pypi/regex/ [3] https://mail.python.org/pipermail/python-dev/2004-August/047272.html [4] https://bugs.python.org/issue852532 [5] https://bugs.python.org/issue988761 [6] https://bugs.python.org/issue1647489 [7] https://bugs.python.org/issue3262 [8] https://bugs.python.org/issue25054 [9] https://github.com/python/cpython/pull/4471 From paul at ganssle.io Tue Nov 28 15:07:45 2017 From: paul at ganssle.io (Paul G) Date: Tue, 28 Nov 2017 20:07:45 +0000 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: I think the latest version can now strptime offsets of the form ?HH:MM with %z, so there's no longer anything blocking you from parsing from all isoformat() outputs with strptime, provided you know which one you need. I think a from_isoformat() like method that *only* supports isoformat outputs is a fine idea, though, with a fairly obvious interface. I'd be happy to take a crack at it (I've been working on the somewhat harder problem of an iso8601 compliant parser for dateutil, so this is fresh in my mind). On November 28, 2017 2:51:14 PM EST, Mike Miller wrote: >This may have gotten bogged down again. Could we get the output of >datetime.isoformat() parsed at a minimum? Perfection is not required. > >Looks like there is a patch or two and test cases on the bug. > >-Mike > > >> Could anyone put this five year-old bug about parsing iso8601 format >date-times >> on the front burner? >> >> ??? http://bugs.python.org/issue15873 >> >> In the comments there's a lot of hand-wringing about different >variations that >> bogged it down, but right now I only need it to handle the output of >> datetime.isoformat(): >> >> ??? >>> dt.isoformat() >> ??? '2017-10-20T08:20:08.986166+00:00' >> >> Perhaps if we could get that minimum first step in, it could be >iterated on and >> made more lenient in the future. >> >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >https://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: >https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukasz at langa.pl Tue Nov 28 15:10:54 2017 From: lukasz at langa.pl (Lukasz Langa) Date: Tue, 28 Nov 2017 12:10:54 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? Message-ID: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> Hi Mark, it looks like the PEP is dormant for over two years now. I had multiple people ask me over the past few days about it though, so I wanted to ask if this is moving forward. I also cc python-dev to see if anybody here is strongly in favor or against this inclusion. If the idea itself is uncontroversial, I could likely find somebody interested in implementing it. If not for 3.7 then at least for 3.8. - ? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From guido at python.org Tue Nov 28 15:15:31 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 12:15:31 -0800 Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns In-Reply-To: References: Message-ID: I trust your instincts and powers of analysis here. Maybe MRAB has some useful feedback on the tar in the honey? 
On Tue, Nov 28, 2017 at 12:04 PM, Serhiy Storchaka wrote: > The two largest problems in the re module are splitting on zero-width > patterns and complete and correct support of the Unicode standard. These > problems are solved in regex. regex has many other features, but they are > less important. > > I want to tell the problem of splitting on zero-width patterns. It already > was discussed on Python-Dev 13 years ago [3] and maybe later. See also > issues: [4], [5], [6], [7], [8]. > > In short it doesn't work. Splitting on the pattern r'\b' doesn't split the > text at boundaries of words, and splitting on the pattern r'\s+|(?<=-)' > will split the text on whitespaces, but will not split words with hypens as > expected. > > In Python 3.4 and earlier: > > >>> re.split(r'\b', 'Self-Defence Class') > ['Self-Defence Class'] > >>> re.split(r'\s+|(?<=-)', 'Self-Defence Class') > ['Self-Defence', 'Class'] > >>> re.split(r'\s*', 'Self-Defence Class') > ['Self-Defence', 'Class'] > > Note that splitting on r'\s*' (0 or more whitespaces) actually split on > r'\s+' (1 or more whitespaces). Splitting on patterns that only can match > the empty string (like r'\b' or r'(?<=-)') never worked, while splitting > > Starting since Python 3.5 splitting on a pattern that only can match the > empty string raises a ValueError (this never worked), and splitting a > pattern that can match the empty string but not only emits a FutureWarning. > This taken developers a time for replacing their patterns r'\s*' to r'\s+' > as they should be. > > Now I have created a final patch [9] that makes re.split() splitting on > zero-width patterns. > > >>> re.split(r'\b', 'Self-Defence Class') > ['', 'Self', '-', 'Defence', ' ', 'Class', ''] > >>> re.split(r'\s+|(?<=-)', 'Self-Defence Class') > ['Self-', 'Defence', 'Class'] > >>> re.split(r'\s*', 'Self-Defence Class') > ['', 'S', 'e', 'l', 'f', '-', 'D', 'e', 'f', 'e', 'n', 'c', 'e', 'C', 'l', > 'a', 's', 's', ''] > > The latter case the result is differ too much from the previous result, > and this likely not what the author wanted to get. But users had two Python > releases for fixing their code. FutureWarning is not silent by default. > > Because these patterns produced errors or warnings in the recent two > releases, we don't need an additional parameter for compatibility. > > But the problem was not just with re.split(). Other functions also worked > not good with patterns that can match the empty string. > > >>> re.findall(r'^|\w+', 'Self-Defence Class') > ['', 'elf', 'Defence', 'Class'] > >>> list(re.finditer(r'^|\w+', 'Self-Defence Class')) > [, match='elf'>, , object; span=(13, 18), match='Class'>] > >>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class') > '<>S- ' > > After matching the empty string the following character will be skipped > and will be not included in the next match. My patch fixes these functions > too. > > >>> re.findall(r'^|\w+', 'Self-Defence Class') > ['', 'Self', 'Defence', 'Class'] > >>> list(re.finditer(r'^|\w+', 'Self-Defence Class')) > [, match='Self'>, , object; span=(13, 18), match='Class'>] > >>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class') > '<>- ' > > I think this change don't need preliminary warnings, because it change the > behavior of more rarely used patterns. No re tests have been broken. I was > needed to add new tests for detecting the behavior change. > > But there is one spoonful of tar in a barrel of honey. I didn't expect > this, but this change have broken a pattern used with re.sub() in the > doctest module: r'(?m)^\s*?$'. 
This was fixed by replacing it with > r'(?m)^[^\S\n]+?$'). I hope that such cases are pretty rare and think this > is an avoidable breakage. > > The new behavior of re.split() matches the behavior of regex.split() with > the VERSION1 flag, the new behavior of re.findall() and re.finditer() > matches the behavior of corresponding functions in the regex module > (independently from the version flag). But the new behavior of re.sub() > doesn't match exactly the behavior of regex.sub() with any version flag. It > differs from the old behavior as you can see in the example above, but is > closer to it that to regex.sub() with VERSION1. This allowed to avoid > braking existing tests for re.sub(). > > >>> regex.sub(r'(\W+|(?<=-))', r':', 'Self-Defence Class') > > > 'Self:Defence:Class' > > > >>> regex.sub(r'(?V1)(\W+|(?<=-))', r':', 'Self-Defence Class') > > > 'Self::Defence:Class' > >>> re.sub(r'(\W+|(?<=-))', r':', 'Self-Defence Class') > 'Self:Defence:Class' > > As re.split() it never matches the empty string adjacent to the previous > match. re.findall() and re.finditer() only don't match the empty string > adjacent to the previous empty string match. In the regex module > regex.sub() is mutually consistent with regex.findall() and > regex.finditer() (with the VERSION1 flag), but regex.split() is not > consistent with them. In the re module re.split() and re.sub() will be > mutually consistent, as well as re.findall() and re.finditer(). This is > more backward compatible. And I don't know reasons for preferring the > behavior of re.findall() and re.finditer() over the behavior of re.split() > in this corner case. > > Would be nice to get this change in 3.7.0a3 for wider testing. Please make > a review of the patch [9] or tell your thoughts about this change. > > [1] https://docs.python.org/3/library/re.html > [2] https://pypi.python.org/pypi/regex/ > [3] https://mail.python.org/pipermail/python-dev/2004-August/047272.html > [4] https://bugs.python.org/issue852532 > [5] https://bugs.python.org/issue988761 > [6] https://bugs.python.org/issue1647489 > [7] https://bugs.python.org/issue3262 > [8] https://bugs.python.org/issue25054 > [9] https://github.com/python/cpython/pull/4471 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Nov 28 15:17:57 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 12:17:57 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> Message-ID: Please, not for 3.7. I think it will be very difficult to get consensus about this, and I personally feel something like +/- zero about it -- sometimes I think it makes sense, sometimes I think we're jumping the shark here. On Tue, Nov 28, 2017 at 12:10 PM, Lukasz Langa wrote: > Hi Mark, > it looks like the PEP is dormant for over two years now. I had multiple > people ask me over the past few days about it though, so I wanted to ask if > this is moving forward. > > I also cc python-dev to see if anybody here is strongly in favor or > against this inclusion. 
> If the idea itself is uncontroversial, I could likely find somebody
> interested in implementing it. If not for 3.7 then at least for 3.8.
>
> - Ł
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From raymond.hettinger at gmail.com  Tue Nov 28 15:31:06 2017
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Tue, 28 Nov 2017 12:31:06 -0800
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
Message-ID: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>

> I also cc python-dev to see if anybody here is strongly in favor or
> against this inclusion.

Put me down for a strong -1. The proposal would occasionally save a few
keystrokes but comes at the expense of giving Python a more Perlish look
and a more arcane feel.

One of the things I like about Python is that I can walk non-programmers
through the code and explain what it does. The examples in PEP 505 look
like a step in the wrong direction. They don't "look like Python" and
make me feel like I have to decrypt the code to figure out what it does.

    timeout ?? local_timeout ?? global_timeout
    'foo' in (None ?? ['foo', 'bar'])
    requested_quantity ?? default_quantity * price
    name?.strip()[4:].upper()
    user?.first_name.upper()

Raymond

From barry at python.org  Tue Nov 28 15:38:18 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 28 Nov 2017 15:38:18 -0500
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
 <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
Message-ID: <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org>

On Nov 28, 2017, at 15:31, Raymond Hettinger wrote:

> Put me down for a strong -1. The proposal would occasionally save a
> few keystokes but comes at the expense of giving Python a more Perlish
> look and a more arcane feel.

I am also -1.

> One of the things I like about Python is that I can walk
> non-programmers through the code and explain what it does. The
> examples in PEP 505 look like a step in the wrong direction. They
> don't "look like Python" and make me feel like I have to decrypt the
> code to figure-out what it does.

I had occasion to speak with someone very involved in Rust development.
They have a process roughly similar to our PEPs. One of the things he
told me, which I found very interesting and have been mulling over for
our PEPs, is that they require a section in their specifications
discussing how any new feature will be taught, both to new Rust
programmers and experienced ones. I love the emphasis on teachability.
Sometimes I really miss that when considering some of the PEPs and the
features they introduce (look how hard it is to teach asynchronous
programming).

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mariatta.wijaya at gmail.com Tue Nov 28 15:42:26 2017 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Tue, 28 Nov 2017 12:42:26 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: -1 from me too. Mariatta Wijaya On Tue, Nov 28, 2017 at 12:38 PM, Barry Warsaw wrote: > On Nov 28, 2017, at 15:31, Raymond Hettinger > wrote: > > > Put me down for a strong -1. The proposal would occasionally save a > few keystokes but comes at the expense of giving Python a more Perlish look > and a more arcane feel. > > I am also -1. > > > One of the things I like about Python is that I can walk non-programmers > through the code and explain what it does. The examples in PEP 505 look > like a step in the wrong direction. They don't "look like Python" and make > me feel like I have to decrypt the code to figure-out what it does. > > I had occasional to speak with someone very involved in Rust development. > They have a process roughly similar to our PEPs. One of the things he told > me, which I found very interesting and have been mulling over for PEPs is, > they require a section in their specification discussion how any new > feature will be taught, both to new Rust programmers and experienced ones. > I love the emphasis on teachability. Sometimes I really miss that when > considering some of the PEPs and the features they introduce (look how hard > it is to teach asynchronous programming). > > Cheers, > -Barry > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariatta.wijaya%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Tue Nov 28 15:42:58 2017 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 28 Nov 2017 20:42:58 +0000 Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns In-Reply-To: References: Message-ID: <460f810a-995c-c3df-6fc6-714b112f856d@mrabarnett.plus.com> On 2017-11-28 20:04, Serhiy Storchaka wrote: > The two largest problems in the re module are splitting on zero-width > patterns and complete and correct support of the Unicode standard. These > problems are solved in regex. regex has many other features, but they > are less important. > > I want to tell the problem of splitting on zero-width patterns. It > already was discussed on Python-Dev 13 years ago [3] and maybe later. > See also issues: [4], [5], [6], [7], [8]. > > In short it doesn't work. Splitting on the pattern r'\b' doesn't split > the text at boundaries of words, and splitting on the pattern > r'\s+|(?<=-)' will split the text on whitespaces, but will not split > words with hypens as expected. > > In Python 3.4 and earlier: > > >>> re.split(r'\b', 'Self-Defence Class') > ['Self-Defence Class'] > >>> re.split(r'\s+|(?<=-)', 'Self-Defence Class') > ['Self-Defence', 'Class'] > >>> re.split(r'\s*', 'Self-Defence Class') > ['Self-Defence', 'Class'] > > Note that splitting on r'\s*' (0 or more whitespaces) actually split on > r'\s+' (1 or more whitespaces). 
Splitting on patterns that only can > match the empty string (like r'\b' or r'(?<=-)') never worked, while > splitting > > Starting since Python 3.5 splitting on a pattern that only can match the > empty string raises a ValueError (this never worked), and splitting a > pattern that can match the empty string but not only emits a > FutureWarning. This taken developers a time for replacing their patterns > r'\s*' to r'\s+' as they should be. > > Now I have created a final patch [9] that makes re.split() splitting on > zero-width patterns. > > >>> re.split(r'\b', 'Self-Defence Class') > ['', 'Self', '-', 'Defence', ' ', 'Class', ''] > >>> re.split(r'\s+|(?<=-)', 'Self-Defence Class') > ['Self-', 'Defence', 'Class'] > >>> re.split(r'\s*', 'Self-Defence Class') > ['', 'S', 'e', 'l', 'f', '-', 'D', 'e', 'f', 'e', 'n', 'c', 'e', 'C', > 'l', 'a', 's', 's', ''] > > The latter case the result is differ too much from the previous result, > and this likely not what the author wanted to get. But users had two > Python releases for fixing their code. FutureWarning is not silent by > default. > > Because these patterns produced errors or warnings in the recent two > releases, we don't need an additional parameter for compatibility. > > But the problem was not just with re.split(). Other functions also > worked not good with patterns that can match the empty string. > > >>> re.findall(r'^|\w+', 'Self-Defence Class') > ['', 'elf', 'Defence', 'Class'] > >>> list(re.finditer(r'^|\w+', 'Self-Defence Class')) > [, 4), match='elf'>, , > ] > >>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class') > '<>S- ' > > After matching the empty string the following character will be skipped > and will be not included in the next match. My patch fixes these > functions too. > > >>> re.findall(r'^|\w+', 'Self-Defence Class') > ['', 'Self', 'Defence', 'Class'] > >>> list(re.finditer(r'^|\w+', 'Self-Defence Class')) > [, 4), match='Self'>, , > ] > >>> re.sub(r'(^|\w+)', r'<\1>', 'Self-Defence Class') > '<>- ' > > I think this change don't need preliminary warnings, because it change > the behavior of more rarely used patterns. No re tests have been broken. > I was needed to add new tests for detecting the behavior change. > > But there is one spoonful of tar in a barrel of honey. I didn't expect > this, but this change have broken a pattern used with re.sub() in the > doctest module: r'(?m)^\s*?$'. This was fixed by replacing it with > r'(?m)^[^\S\n]+?$'). I hope that such cases are pretty rare and think > this is an avoidable breakage. > > The new behavior of re.split() matches the behavior of regex.split() > with the VERSION1 flag, the new behavior of re.findall() and > re.finditer() matches the behavior of corresponding functions in the > regex module (independently from the version flag). But the new behavior > of re.sub() doesn't match exactly the behavior of regex.sub() with any > version flag. It differs from the old behavior as you can see in the > example above, but is closer to it that to regex.sub() with VERSION1. > This allowed to avoid braking existing tests for re.sub(). > > >>> regex.sub(r'(\W+|(?<=-))', r':', 'Self-Defence Class') > > > > 'Self:Defence:Class' > > > > >>> regex.sub(r'(?V1)(\W+|(?<=-))', r':', 'Self-Defence Class') > > > > 'Self::Defence:Class' > >>> re.sub(r'(\W+|(?<=-))', r':', 'Self-Defence Class') > 'Self:Defence:Class' > > As re.split() it never matches the empty string adjacent to the previous > match. 
> re.findall() and re.finditer() only don't match the empty string
> adjacent to the previous empty string match. In the regex module
> regex.sub() is mutually consistent with regex.findall() and
> regex.finditer() (with the VERSION1 flag), but regex.split() is not
> consistent with them. In the re module re.split() and re.sub() will be
> mutually consistent, as well as re.findall() and re.finditer(). This is
> more backward compatible. And I don't know of any reason for preferring
> the behavior of re.findall() and re.finditer() over the behavior of
> re.split() in this corner case.
>
FTR, you could make an argument for either behaviour. For regex, I went
with what Perl does.

> It would be nice to get this change into 3.7.0a3 for wider testing. Please
> review the patch [9] or share your thoughts about this change.
>
> [1] https://docs.python.org/3/library/re.html
> [2] https://pypi.python.org/pypi/regex/
> [3] https://mail.python.org/pipermail/python-dev/2004-August/047272.html
> [4] https://bugs.python.org/issue852532
> [5] https://bugs.python.org/issue988761
> [6] https://bugs.python.org/issue1647489
> [7] https://bugs.python.org/issue3262
> [8] https://bugs.python.org/issue25054
> [9] https://github.com/python/cpython/pull/4471

From skip.montanaro at gmail.com  Tue Nov 28 15:45:41 2017
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Tue, 28 Nov 2017 14:45:41 -0600
Subject: [Python-Dev] iso8601 parsing
In-Reply-To:
References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net>
Message-ID:

> I think the latest version can now strptime offsets of the form ±HH:MM with
> %z, so there's no longer anything blocking you from parsing from all
> isoformat() outputs with strptime, provided you know which one you need.

Or just punt and install arrow:

>>> import arrow
>>> arrow.get('2017-10-20T08:20:08.986166+00:00')
<Arrow [2017-10-20T08:20:08.986166+00:00]>
>>> arrow.get('2017-10-20T08:20:08.986166+00:00').datetime
datetime.datetime(2017, 10, 20, 8, 20, 8, 986166, tzinfo=tzoffset(None, 0))

Skip

From christian at python.org  Tue Nov 28 15:48:01 2017
From: christian at python.org (Christian Heimes)
Date: Tue, 28 Nov 2017 21:48:01 +0100
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
Message-ID:

On 2017-11-28 21:31, Raymond Hettinger wrote:
>
>> I also cc python-dev to see if anybody here is strongly in favor or against this inclusion.
>
> Put me down for a strong -1. The proposal would occasionally save a few
> keystrokes but comes at the expense of giving Python a more Perlish look
> and a more arcane feel.
>
> One of the things I like about Python is that I can walk non-programmers
> through the code and explain what it does. The examples in PEP 505 look
> like a step in the wrong direction. They don't "look like Python" and
> make me feel like I have to decrypt the code to figure out what it does.
>
>     timeout ?? local_timeout ?? global_timeout
>     'foo' in (None ?? ['foo', 'bar'])
>     requested_quantity ?? default_quantity * price
>     name?.strip()[4:].upper()
>     user?.first_name.upper()

Your examples have convinced me, -1 from me.
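(As a side-by-side aid for this subthread: a rough sketch of how two of the
quoted examples would be spelled in current Python. This only illustrates
the semantics described in PEP 505, not the PEP's normative expansion, and
the variable names are taken from Raymond's examples:)

    # timeout ?? local_timeout ?? global_timeout
    timeout if timeout is not None else (
        local_timeout if local_timeout is not None else global_timeout)

    # name?.strip()[4:].upper()
    # '?.' short-circuits the whole trailer chain when name is None
    name.strip()[4:].upper() if name is not None else None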
From paul at ganssle.io Tue Nov 28 15:52:15 2017 From: paul at ganssle.io (Paul G) Date: Tue, 28 Nov 2017 20:52:15 +0000 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: IIRC, arrow usually calls dateutil to parse dates anyway, and there are many other, lighter dependencies that will parse an ISO 8601 date quickly into a datetime.datetime object. I still think it's reasonable for the .isoformat() operation to have an inverse operation in the standard library. On November 28, 2017 3:45:41 PM EST, Skip Montanaro wrote: >> I think the latest version can now strptime offsets of the form >?HH:MM with >> %z, so there's no longer anything blocking you from parsing from all >> isoformat() outputs with strptime, provided you know which one you >need. > >Or just punt and install arrow: > >>>> import arrow >>>> arrow.get('2017-10-20T08:20:08.986166+00:00') > >>>> arrow.get('2017-10-20T08:20:08.986166+00:00').datetime >datetime.datetime(2017, 10, 20, 8, 20, 8, 986166, tzinfo=tzoffset(None, >0)) > >Skip -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Tue Nov 28 15:57:02 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 28 Nov 2017 12:57:02 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> Message-ID: <5A1DCD9E.6080208@stoneleaf.us> On 11/28/2017 12:31 PM, Raymond Hettinger wrote: > timeout ?? local_timeout ?? global_timeout > 'foo' in (None ?? ['foo', 'bar']) > requested_quantity ?? default_quantity * price > name?.strip()[4:].upper() > user?.first_name.upper() Cryptic, indeed. -1 from me. -- ~Ethan~ From p.f.moore at gmail.com Tue Nov 28 15:59:53 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 28 Nov 2017 20:59:53 +0000 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: On 28 November 2017 at 20:38, Barry Warsaw wrote: > On Nov 28, 2017, at 15:31, Raymond Hettinger wrote: > >> Put me down for a strong -1. The proposal would occasionally save a few keystokes but comes at the expense of giving Python a more Perlish look and a more arcane feel. > > I am also -1. -1 from me, too. >> One of the things I like about Python is that I can walk non-programmers through the code and explain what it does. The examples in PEP 505 look like a step in the wrong direction. They don't "look like Python" and make me feel like I have to decrypt the code to figure-out what it does. > > I had occasional to speak with someone very involved in Rust development. They have a process roughly similar to our PEPs. One of the things he told me, which I found very interesting and have been mulling over for PEPs is, they require a section in their specification discussion how any new feature will be taught, both to new Rust programmers and experienced ones. I love the emphasis on teachability. Sometimes I really miss that when considering some of the PEPs and the features they introduce (look how hard it is to teach asynchronous programming). That's a really nice idea. 
I'd like to see Python adopt something similar (even just as a guideline
on how to write a PEP).

Paul

From eric at trueblade.com  Tue Nov 28 15:56:28 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Tue, 28 Nov 2017 15:56:28 -0500
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com>
Message-ID: <3ece5357-e02b-875f-e7d1-a4f62f310d30@trueblade.com>

On 11/28/2017 1:57 PM, Guido van Rossum wrote:
> I would also be happy with a retreat, where we define `__eq__` to insist
> that the classes are the same, and people can override this to their
> hearts' content.

I agree. And I guess we could always add it later, if there's a huge
demand and someone writes a decent implementation. There would be a
slight backwards incompatibility, though. Frankly, I think it would
never be needed.

One question remains: do we do what attrs does: for `__eq__` and `__ne__`
use an exact type match, and for the 4 ordered comparison operators use
an isinstance check? On the comparison operators, they also ignore
attributes defined on any derived class [0]. As I said, I couldn't find
an attrs issue that discusses their choice. I'll ask Hynek over on the
dataclasses github issue.

Currently the dataclasses code on master uses an exact type match for
all 6 methods.

Eric.

[0] That is, they do the following (using dataclasses terms):

Given:

@dataclass
class B:
    i: int
    j: int

@dataclass
class C(B):
    k: int

Then B.__eq__ is:

def __eq__(self, other):
    if other.__class__ is self.__class__:
        return (other.i, other.j) == (self.i, self.j)
    return NotImplemented

And B.__lt__ is:

def __lt__(self, other):
    if isinstance(other, self.__class__):
        return (other.i, other.j) < (self.i, self.j)
    return NotImplemented

So if you do:
b = B(1, 2)
c = C(1, 2, 3)

Then `B(1, 2) < C(1, 2, 3)` ignores `c.k`.

From barry at python.org  Tue Nov 28 16:03:07 2017
From: barry at python.org (Barry Warsaw)
Date: Tue, 28 Nov 2017 16:03:07 -0500
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To:
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org>
Message-ID: <54547929-A933-4747-A659-D56BF20DC459@python.org>

> On Nov 28, 2017, at 15:59, Paul Moore wrote:
>
> On 28 November 2017 at 20:38, Barry Warsaw wrote:
>> I had occasion to speak with someone very involved in Rust
>> development. They have a process roughly similar to our PEPs. One of
>> the things he told me, which I found very interesting and have been
>> mulling over for PEPs is, they require a section in their
>> specification discussing how any new feature will be taught, both to
>> new Rust programmers and experienced ones. I love the emphasis on
>> teachability. Sometimes I really miss that when considering some of
>> the PEPs and the features they introduce (look how hard it is to
>> teach asynchronous programming).
>
> That's a really nice idea. I'd like to see Python adopt something
> similar (even just as a guideline on how to write a PEP).

https://github.com/python/peps/issues/485

-Barry
From skip.montanaro at gmail.com  Tue Nov 28 16:09:14 2017
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Tue, 28 Nov 2017 15:09:14 -0600
Subject: [Python-Dev] iso8601 parsing
In-Reply-To:
References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net>
Message-ID:

It's got its own little parsing language, different than the usual
strftime/strptime format scheme, more like what you might see in Excel.
I never worried too much about the speed of dateutil.parser.parse()
unless I was calling it in an inner loop, but arrow.get() seems to be a
fair bit faster than dateutil.parser.parse. This makes sense, as the
latter tries to figure out what you've given it (you never give it a
format string), while in the absence of a format string, arrow.get
assumes you have an ISO-8601 date/time, with only a few small variations
allowed.

Skip

On Tue, Nov 28, 2017 at 2:52 PM, Paul G wrote:
> IIRC, arrow usually calls dateutil to parse dates anyway, and there are
> many other, lighter dependencies that will parse an ISO 8601 date quickly
> into a datetime.datetime object.
>
> I still think it's reasonable for the .isoformat() operation to have an
> inverse operation in the standard library.
>
> On November 28, 2017 3:45:41 PM EST, Skip Montanaro wrote:
>>> I think the latest version can now strptime offsets of the form ±HH:MM
>>> with %z, so there's no longer anything blocking you from parsing from
>>> all isoformat() outputs with strptime, provided you know which one you
>>> need.
>>
>> Or just punt and install arrow:
>>
>>>>> import arrow
>>>>> arrow.get('2017-10-20T08:20:08.986166+00:00')
>> <Arrow [2017-10-20T08:20:08.986166+00:00]>
>>>>> arrow.get('2017-10-20T08:20:08.986166+00:00').datetime
>> datetime.datetime(2017, 10, 20, 8, 20, 8, 986166, tzinfo=tzoffset(None, 0))
>>
>> Skip
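(To make the "%z" point from this subthread concrete: with the issue31800
changes on the 3.7 development branch, strptime accepts the offset format
that datetime.isoformat() emits, so the round trip needs no third-party
help as long as you know the exact format. A minimal sketch; the session
below is illustrative rather than copied from a real interpreter run:)

    >>> from datetime import datetime
    >>> s = '2017-10-20T08:20:08.986166+00:00'
    >>> datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')
    datetime.datetime(2017, 10, 20, 8, 20, 8, 986166, tzinfo=datetime.timezone.utc)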
From guido at python.org  Tue Nov 28 16:14:35 2017
From: guido at python.org (Guido van Rossum)
Date: Tue, 28 Nov 2017 13:14:35 -0800
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To: <3ece5357-e02b-875f-e7d1-a4f62f310d30@trueblade.com>
References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> <3ece5357-e02b-875f-e7d1-a4f62f310d30@trueblade.com>
Message-ID:

On Tue, Nov 28, 2017 at 12:56 PM, Eric V. Smith wrote:
> On 11/28/2017 1:57 PM, Guido van Rossum wrote:
>> I would also be happy with a retreat, where we define `__eq__` to insist
>> that the classes are the same, and people can override this to their
>> hearts' content.
>
> I agree. And I guess we could always add it later, if there's a huge
> demand and someone writes a decent implementation. There would be a
> slight backwards incompatibility, though. Frankly, I think it would
> never be needed.
>
> One question remains: do we do what attrs does: for `__eq__` and `__ne__`
> use an exact type match, and for the 4 ordered comparison operators use
> an isinstance check? On the comparison operators, they also ignore
> attributes defined on any derived class [0]. As I said, I couldn't find
> an attrs issue that discusses their choice. I'll ask Hynek over on the
> dataclasses github issue.
>
> Currently the dataclasses code on master uses an exact type match for
> all 6 methods.
>
> Eric.
>
> [0] That is, they do the following (using dataclasses terms):
>
> Given:
>
> @dataclass
> class B:
>     i: int
>     j: int
>
> @dataclass
> class C(B):
>     k: int
>
> Then B.__eq__ is:
>
> def __eq__(self, other):
>     if other.__class__ is self.__class__:
>         return (other.i, other.j) == (self.i, self.j)
>     return NotImplemented
>
> And B.__lt__ is:
>
> def __lt__(self, other):
>     if isinstance(other, self.__class__):
>         return (other.i, other.j) < (self.i, self.j)
>     return NotImplemented
>
> So if you do:
> b = B(1, 2)
> c = C(1, 2, 3)
>
> Then `B(1, 2) < C(1, 2, 3)` ignores `c.k`.

Hm. Maybe for the ordering comparisons we could defer to the class with
the longest list of fields, as long as there's a subtype relationship?
That way b<c and c>b would be equivalent, and both would use C.__gt__.
Which had better not reject this on the basis that other is not an
instance of a subclass of C.

IIRC there's already something in the interpreter that tries the most
derived class first for binary operators -- that may force our hand here.

--
--Guido van Rossum (python.org/~guido)

From mariocj89 at gmail.com  Tue Nov 28 15:05:55 2017
From: mariocj89 at gmail.com (Mario Corchero)
Date: Tue, 28 Nov 2017 20:05:55 +0000
Subject: [Python-Dev] iso8601 parsing
In-Reply-To:
References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net>
Message-ID:

The basics should already be possible with issue31800; that said, the
issue you reference is about getting a single function to parse it
(without having to pass the whole format), which would be neat. I believe
Paul Ganssle is planning on adding it to dateutil as well:
https://github.com/dateutil/dateutil/pull/489/files

On 28 November 2017 at 19:51, Mike Miller wrote:
> This may have gotten bogged down again. Could we get the output of
> datetime.isoformat() parsed at a minimum? Perfection is not required.
>
> Looks like there is a patch or two and test cases on the bug.
>
> -Mike
>
>> Could anyone put this five year-old bug about parsing iso8601 format
>> date-times on the front burner?
>>
>> http://bugs.python.org/issue15873
>>
>> In the comments there's a lot of hand-wringing about different variations
>> that bogged it down, but right now I only need it to handle the output of
>> datetime.isoformat():
>>
>>     >>> dt.isoformat()
>>     '2017-10-20T08:20:08.986166+00:00'
>>
>> Perhaps if we could get that minimum first step in, it could be iterated
>> on and made more lenient in the future.

From mehaase at gmail.com  Tue Nov 28 16:15:08 2017
From: mehaase at gmail.com (Mark Haase)
Date: Tue, 28 Nov 2017 16:15:08 -0500
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
Message-ID: <04BE07C8-798D-425A-BCB2-CDEBFF9B722D@gmail.com>

Hi Lukasz, I don't have plans on editing or promoting the PEP any
further, unless there is renewed interest or somebody proposes a more
Pythonic syntax.

-- Mark E.
Haase > On Nov 28, 2017, at 3:31 PM, Raymond Hettinger wrote: > > >> I also cc python-dev to see if anybody here is strongly in favor or against this inclusion. > > Put me down for a strong -1. The proposal would occasionally save a few keystokes but comes at the expense of giving Python a more Perlish look and a more arcane feel. > > One of the things I like about Python is that I can walk non-programmers through the code and explain what it does. The examples in PEP 505 look like a step in the wrong direction. They don't "look like Python" and make me feel like I have to decrypt the code to figure-out what it does. > > timeout ?? local_timeout ?? global_timeout > 'foo' in (None ?? ['foo', 'bar']) > requested_quantity ?? default_quantity * price > name?.strip()[4:].upper() > user?.first_name.upper() > > > Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Tue Nov 28 17:23:04 2017 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 28 Nov 2017 22:23:04 +0000 Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns In-Reply-To: References: Message-ID: On 2017-11-28 20:04, Serhiy Storchaka wrote: > The two largest problems in the re module are splitting on zero-width > patterns and complete and correct support of the Unicode standard. These > problems are solved in regex. regex has many other features, but they > are less important. > > I want to tell the problem of splitting on zero-width patterns. It > already was discussed on Python-Dev 13 years ago [3] and maybe later. > See also issues: [4], [5], [6], [7], [8]. > [snip] After some thought, I've decided that if this happens in the re module in Python 3.7, then, for the sake of compatibility (and because the edge cases are debatable anyway), I'll have the regex module do the same when used on Python 3.7. From guido at python.org Tue Nov 28 17:27:22 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 14:27:22 -0800 Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns In-Reply-To: References: Message-ID: On Tue, Nov 28, 2017 at 2:23 PM, MRAB wrote: > On 2017-11-28 20:04, Serhiy Storchaka wrote: > >> The two largest problems in the re module are splitting on zero-width >> patterns and complete and correct support of the Unicode standard. These >> problems are solved in regex. regex has many other features, but they >> are less important. >> >> I want to tell the problem of splitting on zero-width patterns. It >> already was discussed on Python-Dev 13 years ago [3] and maybe later. >> See also issues: [4], [5], [6], [7], [8]. >> >> [snip] > After some thought, I've decided that if this happens in the re module in > Python 3.7, then, for the sake of compatibility (and because the edge cases > are debatable anyway), I'll have the regex module do the same when used on > Python 3.7. > Maybe it should also be selectable with a version flag? -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From python at mrabarnett.plus.com Tue Nov 28 18:39:09 2017 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 28 Nov 2017 23:39:09 +0000 Subject: [Python-Dev] Regular expressions: splitting on zero-width patterns In-Reply-To: References: Message-ID: <7f819929-68aa-358c-def6-bb971e44965c@mrabarnett.plus.com> On 2017-11-28 22:27, Guido van Rossum wrote: > On Tue, Nov 28, 2017 at 2:23 PM, MRAB > wrote: > > On 2017-11-28 20:04, Serhiy Storchaka wrote: > > The two largest problems in the re module are splitting on > zero-width > patterns and complete and correct support of the Unicode > standard. These > problems are solved in regex. regex has many other features, > but they > are less important. > > I want to tell the problem of splitting on zero-width patterns. It > already was discussed on Python-Dev 13 years ago [3] and maybe > later. > See also issues: [4], [5], [6], [7], [8]. > > [snip] > After some thought, I've decided that if this happens in the re > module in Python 3.7, then, for the sake of compatibility (and > because the edge cases are debatable anyway), I'll have the regex > module do the same when used on Python 3.7. > > > Maybe it should also be selectable with a version flag? > Well, when anyone who uses re updates to Python 3.7, they'll be faced with the change anyway. From guido at python.org Tue Nov 28 18:54:58 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Nov 2017 15:54:58 -0800 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: On Sun, Nov 19, 2017 at 5:40 AM, Nathaniel Smith wrote: > Eh, numpy does use FutureWarning for changes where the same code will > transition from doing one thing to doing something else without > passing through a state where it raises an error. But that decision > was based on FutureWarning being shown to users by default, not > because it matches the nominal purpose :-). IIRC I proposed this > policy for NumPy in the first place, and I still don't even know if it > matches the original intent because the docs are so vague. "Will > change behavior in the future" describes every case where you might > consider using FutureWarning *or* DeprecationWarning, right? > > We have been using DeprecationWarning for changes where code will > transition from working -> raising an error, and that *is* based on > the Official Recommendation to hide those by default whenever > possible. We've been doing this for a few years now, and I'd say our > experience so far has been... poor. I'm trying to figure out how to > say this politely. Basically it doesn't work at all. What happens in > practice is that we issue a DeprecationWarning for a year, mostly > no-one notices, then we make the change in a 1.x.0 release, everyone's > code breaks, we roll it back in 1.x.1, and then possibly repeat > several times in 1.(x+1).0 and 1.(x+2).0 until enough people have > updated their code that the screams die down. I'm pretty sure we'll be > changing our policy at some point, possibly to always use > FutureWarning for everything. Can one of you check that the latest version of PEP 565 gets this right? -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Nov 28 20:31:36 2017 From: eric at trueblade.com (Eric V. 
Smith)
Date: Tue, 28 Nov 2017 20:31:36 -0500
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To:
References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> <3ece5357-e02b-875f-e7d1-a4f62f310d30@trueblade.com>
Message-ID: <4ceb621a-a769-e061-975c-543c6beeb825@trueblade.com>

On 11/28/2017 4:14 PM, Guido van Rossum wrote:
> Hm. Maybe for the ordering comparisons we could defer to the class with
> the longest list of fields, as long as there's a subtype relationship?
> That way b<c and c>b would be equivalent, and both would use C.__gt__.
> Which had better not reject this on the basis that other is not an
> instance of a subclass of C.
>
> IIRC there's already something in the interpreter that tries the most
> derived class first for binary operators -- that may force our hand here.

I'm leaning toward doing the same thing attrs does. They have much more
experience with this.

This is my last open specification issue. I think I'm ready for a
pronouncement on the PEP once I do one final editing pass, in hopes of
getting this in 3.7.0a3 by next weekend. If the comparisons need to
change, I'm okay with doing that in the beta.

Eric.

From ncoghlan at gmail.com  Tue Nov 28 23:55:37 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 29 Nov 2017 14:55:37 +1000
Subject: [Python-Dev] Can Python guarantee the order of keyword-only parameters?
In-Reply-To:
References: <90a367bb-c1c5-1bc1-f5f6-a537332290ea@hastings.org> <653d7304-8ad0-bcb9-82f5-bee9985910ea@hastings.org>
Message-ID:

On 29 November 2017 at 03:13, Brett Cannon wrote:
> On Tue, 28 Nov 2017 at 03:33 Nick Coghlan wrote:
>> P.S. Note that inspect.getfullargspec() was actually undeprecated a
>> while back - enough folks that didn't need access to the function
>> annotations were reimplementing it for themselves "because the
>> standard library API is deprecated" that the most logical course of
>> action was to just declare it as being supported again. I don't think
>> that changes the argument here though - it just means guaranteed order
>> preservation in that API will only happen if we declare dicts to be
>> insertion ordered in general.
>
> OT for this thread, but is there an issue number tracking the
> un-deprecating? Basically I want to make sure there is an issue tracking
> deprecating it again when we stop worrying about any Python 2/3 support.

The issue is https://bugs.python.org/issue27172, but the undeprecation
isn't a Python 2/3 issue, it's a "tuples, lists and dicts are really
handy representations of things, and Python developers often prefer them
to more structured objects" issue.

The modern inspect.getfullargspec implementation is a relatively thin
wrapper around inspect.signature, and the only lossy part of the output
transformation is that you can't tell the difference between
positional-only and positional-or-keyword parameters.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ncoghlan at gmail.com  Wed Nov 29 01:11:31 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 29 Nov 2017 16:11:31 +1000
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <04BE07C8-798D-425A-BCB2-CDEBFF9B722D@gmail.com> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <04BE07C8-798D-425A-BCB2-CDEBFF9B722D@gmail.com> Message-ID: On 29 November 2017 at 07:15, Mark Haase wrote: > Hi Lukasz, I don?t have plans on editing or promoting the PEP any further, > unless there is renewed interest or somebody proposes a more Pythonic > syntax. We should probably set the state of both this and the related circuit breaking protocol PEPs to Deferred so folks know neither of us is actually working on them. While I still think there's merit to the idea of making it easier to write concise data cleanup pipelines in Python, I find the argument "The currently proposed spelling doesn't even come close to reading like executable pseudo code" to be a compelling one. I think a big part of the problem is that these kinds of data cleanup operations don't even have a good *vocabulary* around them yet, so nobody actually knows how to write them as pseudo code in the first place - we only know how to express them in particular programming languages. Trying to come up with pseudo code for the example cases Raymond mentioned: timeout if defined, else local_timeout if defined, else global_timeout price * (requested_quantity if defined, else default_quantity) name if not defined, else name.strip()[4:].upper() user if not defined, else name.first_name.upper() And that's not actually that different to their current spellings in Python: timeout if timeout is not None else local_timeout if local_timeout is not None, else global_timeout price * (requested_quantity if requested_quantity is not None else default_quantity) name if name is None else name.strip()[4:].upper() user if user is None else name.first_name.upper() One key aspect that Python does miss relative to the pseudocode versions is that we don't actually have a term for "expr is not None". "def" is used specifically for functions, so "defined" isn't reference right. References to "None" are bound like anything else, so "bound" isn't right. "exists" probably comes closest (hence the title of the withdrawn PEP 531). That said, if we did decide to allow "def" in a conditional expression to mean "defined" in the "lhs is not None" sense, it would look like: timeout if def else local_timeout if def else global_timeout price * (requested_quantity if def else default_quantity) name if not def else name.strip()[4:].upper() user if not def else user.first_name.upper() Cheers, Nick. P.S. Compared to this, our last symbolic feature addition (matrix multiplication), was a relatively straightforward transcription of "?" to "@", just as regular multiplication is a transcription of "?" to "*". -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Nov 29 01:19:50 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Nov 2017 16:19:50 +1000 Subject: [Python-Dev] Second post: PEP 557, Data Classes In-Reply-To: <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> Message-ID: On 29 November 2017 at 04:31, Eric V. 
Smith wrote:
> On 11/28/17 7:02 AM, Nick Coghlan wrote:
>> So in the above example, you would have:
>>
>>     >>> B.__field_layout__ is B
>>     True
>>     >>> C1.__field_layout__ is B
>>     True
>>     >>> C2.__field_layout__ is B
>>     True
>>
>> It would then be up to the dataclass decorator to set
>> `__field_layout__` correctly, using the following rules:
>>
>> 1. Use the just-defined class if the class defines any fields
>> 2. Use the just-defined class if it inherits from multiple base
>> classes that define fields and don't already share an MRO
>> 3. Use a base class if that's either the only base class that defines
>> fields, or if all other base classes that define fields are already in
>> the MRO of that base class
>
> That seems like a lot of complication for a feature that will be rarely
> used. I'll give it some thought, especially the MI logic.
>
> I think what you're laying out is an optimization for "do the classes have
> identical fields, inherited through a common base class or classes", right?

It's a combination of that and "How do I get my own class to compare
equal with a dataclass instance?". However, having the dataclass methods
return NotImplemented for mismatched types should be enough to enable
interoperability, since it will leave the question up to the other type.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com  Wed Nov 29 03:06:32 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 29 Nov 2017 10:06:32 +0200
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
Message-ID:

28.11.17 22:31, Raymond Hettinger wrote:
>> I also cc python-dev to see if anybody here is strongly in favor or against this inclusion.
>
> Put me down for a strong -1. The proposal would occasionally save a few
> keystrokes but comes at the expense of giving Python a more Perlish look
> and a more arcane feel.
>
> One of the things I like about Python is that I can walk non-programmers
> through the code and explain what it does. The examples in PEP 505 look
> like a step in the wrong direction. They don't "look like Python" and
> make me feel like I have to decrypt the code to figure out what it does.
>
>     timeout ?? local_timeout ?? global_timeout
>     'foo' in (None ?? ['foo', 'bar'])
>     requested_quantity ?? default_quantity * price
>     name?.strip()[4:].upper()
>     user?.first_name.upper()

New syntax often looks unusual. "lambda" and "yield" also didn't "look
like Python". But there are other considerations.

1. Languages that have the ?? and ?. operators usually have a single
special (or at least one obvious) value that serves to signal
"there-is-no-a-value". The None in Python is not so special. It can be
used as a common object, and other ways can be used for denoting an
"absent" value. The need for None-specific operators in Python is lower.

2. Python already has other ways for handling "absent" values: the
short-circuit "or" and "and" operators which return one of their
arguments, getattr() with a default value, the hack with default class
attributes, exception handling. The need for the proposed operators in
Python is lower.

3. These languages usually have borrowed the syntax of the ternary
operator from C: "... ? ... : ...". Thus the syntax with "?" looks more
natural in them. But in Python it looks less natural.

I'm -1 for accepting this syntax in 3.7. In the future all of this can
change.
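(To make point 2 above concrete, here is a small sketch of the existing
idioms it lists; the variable and attribute names are made up for
illustration:)

    # short-circuiting "or", safe only when falsy values can't occur:
    timeout = local_timeout or global_timeout

    # getattr() with a default value:
    first_name = getattr(user, 'first_name', None)

    # exception handling:
    try:
        name = user.first_name.upper()
    except AttributeError:
        name = None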
From eric at trueblade.com  Wed Nov 29 03:35:07 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Wed, 29 Nov 2017 03:35:07 -0500
Subject: [Python-Dev] Second post: PEP 557, Data Classes
In-Reply-To: <4ceb621a-a769-e061-975c-543c6beeb825@trueblade.com>
References: <5A1B9CED.60708@canterbury.ac.nz> <614072db-3475-15d7-9e05-8cdb9125d84a@trueblade.com> <3ece5357-e02b-875f-e7d1-a4f62f310d30@trueblade.com> <4ceb621a-a769-e061-975c-543c6beeb825@trueblade.com>
Message-ID:

On 11/28/2017 8:31 PM, Eric V. Smith wrote:
> On 11/28/2017 4:14 PM, Guido van Rossum wrote:
>> Hm. Maybe for the ordering comparisons we could defer to the class
>> with the longest list of fields, as long as there's a subtype
>> relationship? That way b<c and c>b would be equivalent, and both would
>> use C.__gt__. Which had better not reject this on the basis that other
>> is not an instance of a subclass of C.
>>
>> IIRC there's already something in the interpreter that tries the most
>> derived class first for binary operators -- that may force our hand here.
>
> I'm leaning toward doing the same thing attrs does. They have much more
> experience with this.

Except that given Hynek's response in
https://github.com/ericvsmith/dataclasses/issues/51#issuecomment-347769322,
I'm just going to leave it as-is, with a strict type requirement for all
6 methods.

Eric.

From ncoghlan at gmail.com  Wed Nov 29 04:58:45 2017
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 29 Nov 2017 19:58:45 +1000
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To:
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
Message-ID:

On 29 November 2017 at 06:17, Guido van Rossum wrote:
> Please, not for 3.7. I think it will be very difficult to get consensus
> about this, and I personally feel something like +/- zero about it --
> sometimes I think it makes sense, sometimes I think we're jumping the shark
> here.

I've marked all 3 of the related PEPs as Deferred until 3.8 at the earliest:
https://github.com/python/peps/commit/181cc79af925e06a068733a1419b1760ac1a2d6f

PEP 505: None-aware operators
PEP 532: A circuit breaking protocol and binary operators
PEP 535: Rich comparison chaining

I don't see any urgency to resolve any of them - the None-aware operators
do make certain kinds of code (commonly found in JSON processing) easier
to read and write, but such code is still fairly readable and writable
today (it's just a bit verbose and boilerplate heavy), and the other two
PEPs arise specifically from seeking to provide a common conceptual
underpinning for the semantics of both the proposed None-aware operations
and the existing short-circuiting logical operators.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From donald at stufft.io  Wed Nov 29 08:20:23 2017
From: donald at stufft.io (Donald Stufft)
Date: Wed, 29 Nov 2017 08:20:23 -0500
Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__
In-Reply-To:
References:
Message-ID: <2FB5776C-4F24-4B4B-94E6-19133A9572D1@stufft.io>

> On Nov 19, 2017, at 8:40 AM, Nathaniel Smith wrote:
>
> We have been using DeprecationWarning for changes where code will
> transition from working -> raising an error, and that *is* based on
> the Official Recommendation to hide those by default whenever
> possible. We've been doing this for a few years now, and I'd say our
> experience so far has been... poor. I'm trying to figure out how to
> say this politely. Basically it doesn't work at all.
What happens in > practice is that we issue a DeprecationWarning for a year, mostly > no-one notices, then we make the change in a 1.x.0 release, everyone's > code breaks, we roll it back in 1.x.1, and then possibly repeat > several times in 1.(x+1).0 and 1.(x+2).0 until enough people have > updated their code that the screams die down. I'm pretty sure we'll be > changing our policy at some point, possibly to always use > FutureWarning for everything. In pip we ended up working around the not-displaying-by-default so that we got the old behavior back again because hiding warnings doesn?t work great in my experience. -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Wed Nov 29 12:03:09 2017 From: random832 at fastmail.com (Random832) Date: Wed, 29 Nov 2017 12:03:09 -0500 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> Message-ID: <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com> On Tue, Nov 28, 2017, at 15:31, Raymond Hettinger wrote: > > > I also cc python-dev to see if anybody here is strongly in favor or against this inclusion. > > Put me down for a strong -1. The proposal would occasionally save a few > keystokes but comes at the expense of giving Python a more Perlish look > and a more arcane feel. > > One of the things I like about Python is that I can walk non-programmers > through the code and explain what it does. The examples in PEP 505 look > like a step in the wrong direction. They don't "look like Python" and > make me feel like I have to decrypt the code to figure-out what it does. > > timeout ?? local_timeout ?? global_timeout > 'foo' in (None ?? ['foo', 'bar']) > requested_quantity ?? default_quantity * price > name?.strip()[4:].upper() > user?.first_name.upper() Since we're looking at different syntax for the ?? operator, I have a suggestion for the ?. operator - and related ?[] and ?() that appeared in some of the proposals. How about this approach? Something like (or None: ...) as a syntax block in which any operation [lexically within the expression, not within e.g. called functions, so it's different from simply catching AttributeError etc, even if that could be limited to only catching when the operand is None] on None that is not valid for None will yield None instead. This isn't *entirely* equivalent, but offers finer control. v = name?.strip()[4:].upper() under the old proposal would be more or less equivalent to: v = name.strip()[4:].upper() if name is not None else None Whereas, you could get the same result with: (or None: name.strip()[4:].upper()) Though that would technically be equivalent to these steps: v = name.strip if name is not None else None v = v() if v """"" v = v[4:] """"""" v = v.upper """"""" v = v() """"""" The compiler could optimize this case since it knows none of the operations are valid on None. This has the advantage of being explicit about what scope the modified rules apply to, rather than simply implicitly being "to the end of the chain of dot/bracket/call operators" It could also be extended to apply, without any additional syntax, to binary operators (result is None if either operand is None) (or None: a + b), for example, could return None if either a or b is none. [I think I proposed this before with the syntax ?(...), the (or None: ...) 
is just an idea to make it look more like Python.] From mertz at gnosis.cx Wed Nov 29 12:40:21 2017 From: mertz at gnosis.cx (David Mertz) Date: Wed, 29 Nov 2017 09:40:21 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com> Message-ID: I like much of the thinking in Random's approach. But I still think None isn't quite special enough to warrant it's own syntax. However, his '(or None: name.strip()[4:].upper())' makes me realize that what is being asked in all the '?(', '?.', '?[' syntax ideas is a kind of ternary expression. Except the ternary isn't based on whether a predicate holds, but rather on whether an exception occurs (AttributeError, KeyError, TypeError). And the fallback in the ternary is always None rather than being general. I think we could generalize this to get something both more Pythonic and more flexible. E.g.: val = name.strip()[4:].upper() except None This would just be catching all errors, which is perhaps too broad. But it *would* allow a fallback other than None: val = name.strip()[4:].upper() except -1 I think some syntax could be possible to only "catch" some exceptions and let others propagate. Maybe: val = name.strip()[4:].upper() except (AttributeError, KeyError): -1 I don't really like throwing a colon in an expression though. Perhaps some other word or symbol could work instead. How does this read: val = name.strip()[4:].upper() except -1 in (AttributeError, KeyError) Where the 'in' clause at the end would be optional, and default to 'Exception'. I'll note that what this idea DOES NOT get us is: val = timeout ?? local_timeout ?? global_timeout Those values that are "possibly None" don't raise exceptions, so they wouldn't apply to this syntax. Yours, David... On Wed, Nov 29, 2017 at 9:03 AM, Random832 wrote: > On Tue, Nov 28, 2017, at 15:31, Raymond Hettinger wrote: > > > > > I also cc python-dev to see if anybody here is strongly in favor or > against this inclusion. > > > > Put me down for a strong -1. The proposal would occasionally save a few > > keystokes but comes at the expense of giving Python a more Perlish look > > and a more arcane feel. > > > > One of the things I like about Python is that I can walk non-programmers > > through the code and explain what it does. The examples in PEP 505 look > > like a step in the wrong direction. They don't "look like Python" and > > make me feel like I have to decrypt the code to figure-out what it does. > > > > timeout ?? local_timeout ?? global_timeout > > 'foo' in (None ?? ['foo', 'bar']) > > requested_quantity ?? default_quantity * price > > name?.strip()[4:].upper() > > user?.first_name.upper() > > Since we're looking at different syntax for the ?? operator, I have a > suggestion for the ?. operator - and related ?[] and ?() that appeared > in some of the proposals. How about this approach? > > Something like (or None: ...) as a syntax block in which any operation > [lexically within the expression, not within e.g. called functions, so > it's different from simply catching AttributeError etc, even if that > could be limited to only catching when the operand is None] on None that > is not valid for None will yield None instead. > > This isn't *entirely* equivalent, but offers finer control. 
> > v = name?.strip()[4:].upper() under the old proposal would be more or > less equivalent to: > > v = name.strip()[4:].upper() if name is not None else None > > Whereas, you could get the same result with: > (or None: name.strip()[4:].upper()) > > Though that would technically be equivalent to these steps: > v = name.strip if name is not None else None > v = v() if v """"" > v = v[4:] """"""" > v = v.upper """"""" > v = v() """"""" > > The compiler could optimize this case since it knows none of the > operations are valid on None. This has the advantage of being explicit > about what scope the modified rules apply to, rather than simply > implicitly being "to the end of the chain of dot/bracket/call operators" > > It could also be extended to apply, without any additional syntax, to > binary operators (result is None if either operand is None) (or None: a > + b), for example, could return None if either a or b is none. > > [I think I proposed this before with the syntax ?(...), the (or None: > ...) is just an idea to make it look more like Python.] > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mertz%40gnosis.cx > -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Wed Nov 29 12:55:07 2017 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 29 Nov 2017 12:55:07 -0500 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com> Message-ID: > On Nov 29, 2017, at 12:40 PM, David Mertz wrote: > > I like much of the thinking in Random's approach. But I still think None isn't quite special enough to warrant it's own syntax. > > However, his '(or None: name.strip()[4:].upper())' makes me realize that what is being asked in all the '?(', '?.', '?[' syntax ideas is a kind of ternary expression. Except the ternary isn't based on whether a predicate holds, but rather on whether an exception occurs (AttributeError, KeyError, TypeError). And the fallback in the ternary is always None rather than being general. > > I think we could generalize this to get something both more Pythonic and more flexible. E.g.: > > val = name.strip()[4:].upper() except None > > This would just be catching all errors, which is perhaps too broad. But it *would* allow a fallback other than None: > > val = name.strip()[4:].upper() except -1 > > I think some syntax could be possible to only "catch" some exceptions and let others propagate. Maybe: > > val = name.strip()[4:].upper() except (AttributeError, KeyError): -1 > > I don't really like throwing a colon in an expression though. Perhaps some other word or symbol could work instead. How does this read: > > val = name.strip()[4:].upper() except -1 in (AttributeError, KeyError) > > Where the 'in' clause at the end would be optional, and default to 'Exception'. > > I'll note that what this idea DOES NOT get us is: > > val = timeout ?? local_timeout ?? 
global_timeout > > Those values that are "possibly None" don't raise exceptions, so they wouldn't apply to this syntax. See the rejected PEP 463 for Exception catching expressions. Eric. > > Yours, David... > > >> On Wed, Nov 29, 2017 at 9:03 AM, Random832 wrote: >> On Tue, Nov 28, 2017, at 15:31, Raymond Hettinger wrote: >> > >> > > I also cc python-dev to see if anybody here is strongly in favor or against this inclusion. >> > >> > Put me down for a strong -1. The proposal would occasionally save a few >> > keystokes but comes at the expense of giving Python a more Perlish look >> > and a more arcane feel. >> > >> > One of the things I like about Python is that I can walk non-programmers >> > through the code and explain what it does. The examples in PEP 505 look >> > like a step in the wrong direction. They don't "look like Python" and >> > make me feel like I have to decrypt the code to figure-out what it does. >> > >> > timeout ?? local_timeout ?? global_timeout >> > 'foo' in (None ?? ['foo', 'bar']) >> > requested_quantity ?? default_quantity * price >> > name?.strip()[4:].upper() >> > user?.first_name.upper() >> >> Since we're looking at different syntax for the ?? operator, I have a >> suggestion for the ?. operator - and related ?[] and ?() that appeared >> in some of the proposals. How about this approach? >> >> Something like (or None: ...) as a syntax block in which any operation >> [lexically within the expression, not within e.g. called functions, so >> it's different from simply catching AttributeError etc, even if that >> could be limited to only catching when the operand is None] on None that >> is not valid for None will yield None instead. >> >> This isn't *entirely* equivalent, but offers finer control. >> >> v = name?.strip()[4:].upper() under the old proposal would be more or >> less equivalent to: >> >> v = name.strip()[4:].upper() if name is not None else None >> >> Whereas, you could get the same result with: >> (or None: name.strip()[4:].upper()) >> >> Though that would technically be equivalent to these steps: >> v = name.strip if name is not None else None >> v = v() if v """"" >> v = v[4:] """"""" >> v = v.upper """"""" >> v = v() """"""" >> >> The compiler could optimize this case since it knows none of the >> operations are valid on None. This has the advantage of being explicit >> about what scope the modified rules apply to, rather than simply >> implicitly being "to the end of the chain of dot/bracket/call operators" >> >> It could also be extended to apply, without any additional syntax, to >> binary operators (result is None if either operand is None) (or None: a >> + b), for example, could return None if either a or b is none. >> >> [I think I proposed this before with the syntax ?(...), the (or None: >> ...) is just an idea to make it look more like Python.] >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx > > > > -- > Keeping medicines from the bloodstreams of the sick; food > from the bellies of the hungry; books from the hands of the > uneducated; technology from the underdeveloped; and putting > advocates of freedom in prisons. Intellectual property is > to the 21st century what the slave trade was to the 16th. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Nov 29 13:02:58 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 29 Nov 2017 13:02:58 -0500 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com> Message-ID: <718169B1-C1AE-4FCA-92E7-D4E246096612@python.org> On Nov 29, 2017, at 12:40, David Mertz wrote: > I think some syntax could be possible to only "catch" some exceptions and let others propagate. Maybe: > > val = name.strip()[4:].upper() except (AttributeError, KeyError): -1 > > I don't really like throwing a colon in an expression though. Perhaps some other word or symbol could work instead. How does this read: > > val = name.strip()[4:].upper() except -1 in (AttributeError, KeyError) I don?t know whether I like any of this but I think a more natural spelling would be: val = name.strip()[4:].upper() except (AttributeError, KeyError) as -1 which could devolve into: val = name.strip()[4:].upper() except KeyError as -1 or: val = name.strip()[4:].upper() except KeyError # Implicit `as None` I would *not* add any spelling for an explicit bare-except equivalent. You would have to write: val = name.strip()[4:].upper() except Exception as -1 Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From python-dev at mgmiller.net Wed Nov 29 13:05:00 2017 From: python-dev at mgmiller.net (Mike Miller) Date: Wed, 29 Nov 2017 10:05:00 -0800 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: Hi, This thread isn't about the numerous third-party solutions that are a pip command away. But rather getting a minimal return parser to handle datetime.isoformat() into the std library. It's been needed for years, and hopefully someone can squeeze it into 3.7 before its too late. (It takes years for a new version to trickle down to Linux dists.) -Mike On 2017-11-28 13:09, Skip Montanaro wrote: > It's got its own little parsing language, different than the usual From jsbueno at python.org.br Wed Nov 29 13:14:13 2017 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Wed, 29 Nov 2017 16:14:13 -0200 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: On 28 November 2017 at 18:38, Barry Warsaw wrote: > On Nov 28, 2017, at 15:31, Raymond Hettinger wrote: > >> Put me down for a strong -1. The proposal would occasionally save a few keystokes but comes at the expense of giving Python a more Perlish look and a more arcane feel. > > I am also -1. > >> One of the things I like about Python is that I can walk non-programmers through the code and explain what it does. 
The examples in PEP 505 look like a step in the wrong direction. They
don't "look like Python" and make me feel like I have to decrypt the code
to figure out what it does.
>
> I had occasion to speak with someone very involved in Rust development.
> They have a process roughly similar to our PEPs. One of the things he
> told me, which I found very interesting and have been mulling over for
> PEPs is, they require a section in their specification discussing how
> any new feature will be taught, both to new Rust programmers and
> experienced ones. I love the emphasis on teachability. Sometimes I
> really miss that when considering some of the PEPs and the features they
> introduce (look how hard it is to teach asynchronous programming).

Oh well, I would be +1 on patching PEP 1 for that.

>
> Cheers,
> -Barry
>

From storchaka at gmail.com  Wed Nov 29 13:26:01 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 29 Nov 2017 20:26:01 +0200
Subject: [Python-Dev] Removing files from the repository
Message-ID:

After removing files from the repository they disappear from the source
tree, and it is even hard to notice this if you don't use them regularly.
It is hard to track the history of a removed file even if you know its
exact path. If you know it only approximately, this is harder.

I think that any file removal from the repository should go through some
PEP-like process: declaring the intention with the rationale, taking
feedback, discussing, and finally documenting the removal. Perhaps it is
worth tracking all removals in a special file, so if you later find that
the removed file could be useful, you could restore it instead of
recreating its functionality from scratch in the case where you don't
even know that a similar file existed.

From rymg19 at gmail.com  Wed Nov 29 13:47:32 2017
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Wed, 29 Nov 2017 12:47:32 -0600
Subject: [Python-Dev] Removing files from the repository
In-Reply-To:
References:
Message-ID:

Doesn't Git make this rather easy, though?

e.g. you can find all deleted files with:

git log --diff-filter=D --summary

and find a specific file with (showing glob patterns):

git log --all --full-history -- **/thefile.*

and then show it:

git show <SHA> -- <path>

or restore it:

git checkout <SHA>^ -- <path>

https://stackoverflow.com/questions/7203515/git-how-to-search-for-a-deleted-file-in-the-project-commit-history

On Wednesday, November 29, 2017 12:26:01 PM CST, Serhiy Storchaka wrote:
> After removing files from the repository they disappear from
> the source tree, and it is even hard to notice this if you don't
> use them regularly. It is hard to track the history of a removed
> file even if you know its exact path. If you know it only
> approximately, this is harder.
>
> I think that any file removal from the repository should go through
> some PEP-like process: declaring the intention with the
> rationale, taking feedback, discussing, and finally
> documenting the removal. Perhaps it is worth tracking all
> removals in a special file, so if you later find that the
> removed file could be useful, you could restore it instead of
> recreating its functionality from scratch in the case where you
> don't even know that a similar file existed.
>

--
Ryan
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else https://refi64.com/ From guido at python.org Wed Nov 29 14:00:13 2017 From: guido at python.org (Guido van Rossum) Date: Wed, 29 Nov 2017 11:00:13 -0800 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: That sounds a bit excessive. Is there a recent incident that inspired this proposal? On Wed, Nov 29, 2017 at 10:26 AM, Serhiy Storchaka wrote: > After removing files from the repository they disappear from the source > tree, and it is even hard to notice this if you don't use it regularly. It > is hard to track the history of the removed file even if you know it exact > path. If you know it only approximate this is harder. > > I think that any file removals from the repository should pass some > PEP-like process. Declaring the intention with the rationale, taking a > feedback, discussing, and finally documenting the removal. Perhaps it is > worth to track all removals in a special file, so if later you will find > that the removed file can be useful you could restore it instead of > recreating its functionality from zero in the case if you even don't know > that similar file existed. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Nov 29 14:03:34 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 29 Nov 2017 14:03:34 -0500 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: On Wed, Nov 29, 2017 at 1:47 PM, Ryan Gonzalez wrote: > Doesn't Git make this rather easy, though? +1. PEP-like process for removing/renaming files is too much, in my opinion. Yury From njs at pobox.com Wed Nov 29 14:28:52 2017 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 29 Nov 2017 11:28:52 -0800 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: On Nov 28, 2017 3:55 PM, "Guido van Rossum" wrote: On Sun, Nov 19, 2017 at 5:40 AM, Nathaniel Smith wrote: > Eh, numpy does use FutureWarning for changes where the same code will > transition from doing one thing to doing something else without > passing through a state where it raises an error. But that decision > was based on FutureWarning being shown to users by default, not > because it matches the nominal purpose :-). IIRC I proposed this > policy for NumPy in the first place, and I still don't even know if it > matches the original intent because the docs are so vague. "Will > change behavior in the future" describes every case where you might > consider using FutureWarning *or* DeprecationWarning, right? > > We have been using DeprecationWarning for changes where code will > transition from working -> raising an error, and that *is* based on > the Official Recommendation to hide those by default whenever > possible. We've been doing this for a few years now, and I'd say our > experience so far has been... poor. I'm trying to figure out how to > say this politely. Basically it doesn't work at all. 
What happens in > practice is that we issue a DeprecationWarning for a year, mostly > no-one notices, then we make the change in a 1.x.0 release, everyone's > code breaks, we roll it back in 1.x.1, and then possibly repeat > several times in 1.(x+1).0 and 1.(x+2).0 until enough people have > updated their code that the screams die down. I'm pretty sure we'll be > changing our policy at some point, possibly to always use > FutureWarning for everything. Can one of you check that the latest version of PEP 565 gets this right? If you're asking about the proposed new language about FutureWarnings, it seems fine to me. If you're asking about the PEP as a whole, it seems fine but I don't think it will make much difference in our case. IPython has been showing deprecation warnings in __main__ for a few years now, and it's nice enough. Getting warnings for scripts seems nice too. But we aren't rolling back changes because they broke someone's one-off script -- I'm sure it happens but we don't tend to hear about it. We're responding to things like major downstream dependencies that nonetheless totally missed all the warnings. The part that might help there is evangelising popular test runners like pytest to change their defaults. To me that's the most interesting change to come out of this. But it's hard to predict in advance how effective it will be. tl;dr: I don't think PEP 565 solves all my problems, but I don't have any objections to what it does do. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Wed Nov 29 17:22:29 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 30 Nov 2017 00:22:29 +0200 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: 29.11.17 20:47, Ryan Gonzalez wrote: > Doesn't Git make this rather easy, though? > > e.g. you can find all deleted files with: > > git log --diff-filter=D --summary > > and find a specific file with (showing glob patterns): > > git log --all --full-history -- **/thefile.* > > and then show it: > > git show <commit> -- <path> > > or restore it: > > git checkout <commit>^ -- <path> > > https://stackoverflow.com/questions/7203515/git-how-to-search-for-a-deleted-file-in-the-project-commit-history Thank you Ryan. I didn't know this. But the first command produces a lot of noise. It includes reverted changes that added new files (they could be reapplied later), files in Misc/NEWS.d/next/ which were merged into Misc/NEWS, and the full contents of deleted directories. If the list of deleted files were maintained manually, it would contain only the roots of deleted directories, and wouldn't contain files which were never released. From victor.stinner at gmail.com Wed Nov 29 17:25:38 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 29 Nov 2017 23:25:38 +0100 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: Hi, Serhiy opened this thread after I removed tools for CVS and Subversion from the master branch: two scripts and a svnmap.txt file. I removed Misc/svnmap.txt, a mapping of Subversion commits to Mercurial commits. The change was approved by 3 core devs, but then I was asked to restore (only) the svnmap.txt and so I reverted it. See the issue and the pull request for the full story: https://bugs.python.org/issue32159 https://github.com/python/cpython/pull/4615 I misunderstood the purpose of the file. I understood that it was used by the removed scripts, whereas it was kept for historical purposes, to find Subversion commits like r12345.
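(As an aside, resolving such a revision against the file only takes a few lines of Python. A rough sketch -- it assumes, and this is purely an assumption about the layout, that each line of Misc/svnmap.txt holds a whitespace-separated "rNNNNN <changeset>" pair:)

    def hg_commit_for_svn(rev, path="Misc/svnmap.txt"):
        # rev is a Subversion revision label such as "r12345".
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2 and parts[0] == rev:
                    return parts[1]  # the matching Mercurial changeset id
        return None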
The mapping maps to Mercurial commits, whereas the CPython repository was converted to Git in the meantime. In the last 3 years, whenever I needed access to an old r12345-style commit, it was always in a message on the bug tracker, which is able to give me a working link to the actual change. Victor Le 29 nov. 2017 8:03 PM, "Guido van Rossum" a écrit : > That sounds a bit excessive. Is there a recent incident that inspired this > proposal? > > On Wed, Nov 29, 2017 at 10:26 AM, Serhiy Storchaka > wrote: > >> After removing files from the repository they disappear from the source >> tree, and it is even hard to notice this if you don't use it regularly. It >> is hard to track the history of the removed file even if you know it exact >> path. If you know it only approximate this is harder. >> >> I think that any file removals from the repository should pass some >> PEP-like process. Declaring the intention with the rationale, taking a >> feedback, discussing, and finally documenting the removal. Perhaps it is >> worth to track all removals in a special file, so if later you will find >> that the removed file can be useful you could restore it instead of >> recreating its functionality from zero in the case if you even don't know >> that similar file existed. >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40p >> ython.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Wed Nov 29 17:28:55 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 30 Nov 2017 00:28:55 +0200 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: 29.11.17 21:00, Guido van Rossum wrote: > That sounds a bit excessive. Is there a recent incident that inspired > this proposal? The concrete inspiration is issue32159 [1]. I am still not sure that removing these scripts is needed. But there were other cases in which I was not sure about the rationale of removing files. https://bugs.python.org/issue32159 From encukou at gmail.com Wed Nov 29 17:40:06 2017 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 29 Nov 2017 23:40:06 +0100 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: <60787989-9299-ab3b-2cdd-4ca14267757f@gmail.com> On 11/29/2017 07:26 PM, Serhiy Storchaka wrote: > [...] Perhaps it is > worth to track all removals in a special file, so if later you will find > that the removed file can be useful you could restore it instead of > recreating its functionality from zero in the case if you even don't > know that similar file existed. All removals are tracked by Git, necessarily. It's the command to show them that's not obvious (unless you're Finnish): git log --oneline --diff-filter=D --summary -- :^/Misc/NEWS.d/ From guido at python.org Wed Nov 29 18:18:32 2017 From: guido at python.org (Guido van Rossum) Date: Wed, 29 Nov 2017 15:18:32 -0800 Subject: [Python-Dev] Removing files from the repository In-Reply-To: References: Message-ID: Hm.
For the file used for lookup, I see the point of keeping it. But in general, I don't see the point of keeping files we no longer need -- that's what VCS systems are for! On Wed, Nov 29, 2017 at 2:28 PM, Serhiy Storchaka wrote: > 29.11.17 21:00, Guido van Rossum ????: > >> That sounds a bit excessive. Is there a recent incident that inspired >> this proposal? >> > > The concrete inspiration is issue32159 [1]. I am still not sure that > removing these scripts is needed. But there were other cases in which I was > not sure about the rationale of removing files. > > https://bugs.python.org/issue32159 > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Wed Nov 29 18:26:12 2017 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 29 Nov 2017 18:26:12 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes Message-ID: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> I've posted a new version of PEP 557, it should soon be available at https://www.python.org/dev/peps/pep-0557/. The only significant changes since the last version are: - changing the "compare" parameter to be "order", since that more accurately reflects what it does. - Having the combination of "eq=False" and "order=True" raise an exception instead of silently changing eq to True. There were no other issues raised with the previous version of the PEP. So with that, I think it's ready for a pronouncement. Eric. From gvanrossum at gmail.com Wed Nov 29 18:44:19 2017 From: gvanrossum at gmail.com (Guido van Rossum) Date: Wed, 29 Nov 2017 15:44:19 -0800 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: I plan to accept this on Monday if no new objections are raised. On Nov 29, 2017 3:28 PM, "Eric V. Smith" wrote: > I've posted a new version of PEP 557, it should soon be available at > https://www.python.org/dev/peps/pep-0557/. > > The only significant changes since the last version are: > > - changing the "compare" parameter to be "order", since that more > accurately reflects what it does. > - Having the combination of "eq=False" and "order=True" raise an exception > instead of silently changing eq to True. > > There were no other issues raised with the previous version of the PEP. > > So with that, I think it's ready for a pronouncement. > > Eric. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Nov 29 18:42:06 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 29 Nov 2017 15:42:06 -0800 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: On Wed, Nov 29, 2017 at 10:05 AM, Mike Miller wrote: > This thread isn't about the numerous third-party solutions that are a pip > command away. 
But rather getting a minimal round-trip parser to handle > datetime.isoformat() output into the std library. > > It's been needed for years, and hopefully someone can squeeze it into 3.7 > before it's too late. (It takes years for a new version to trickle down to > Linux dists.) > > -Mike > > > On 2017-11-28 13:09, Skip Montanaro wrote: > >> It's got its own little parsing language, different than the usual >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris. > barker%40noaa.gov > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at oddbird.net Wed Nov 29 18:51:16 2017 From: carl at oddbird.net (Carl Meyer) Date: Wed, 29 Nov 2017 15:51:16 -0800 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: On 11/29/2017 03:26 PM, Eric V. Smith wrote: > I've posted a new version of PEP 557, it should soon be available at > https://www.python.org/dev/peps/pep-0557/. > > The only significant changes since the last version are: > > - changing the "compare" parameter to be "order", since that more > accurately reflects what it does. > - Having the combination of "eq=False" and "order=True" raise an > exception instead of silently changing eq to True. > > There were no other issues raised with the previous version of the PEP. Not quite; I also raised the issue of isdataclass(ADataClass) returning False. I still think that's likely to be a cause of bug reports if left as-is. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From victor.stinner at gmail.com Wed Nov 29 18:56:37 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 30 Nov 2017 00:56:37 +0100 Subject: [Python-Dev] PEP 565: Show DeprecationWarning in __main__ In-Reply-To: References: Message-ID: 2017-11-12 10:24 GMT+01:00 Nick Coghlan : > I've written a short(ish) PEP for the proposal to change the default > warnings filters to show DeprecationWarning in __main__: > https://www.python.org/dev/peps/pep-0565/ I understand the rationale of the PEP, but I dislike the proposed implementation. End users will start to get warnings that they don't understand and cannot fix, so these warnings would just be annoying. For scripts written to only be run once and then deleted, again, these warnings are just annoying since the script is going to be deleted anyway. It's like the annoying ResourceWarning (in debug mode) when I write open(filename).read() in the REPL. I know that it's bad, but the code will only be run once and lost when I quit the REPL, so who cares?
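(To make the nag concrete, here is roughly the pattern that triggers it -- a minimal sketch; the exact warning text varies between versions, and "scratch.txt" is just a stand-in file the script creates itself:)

    import gc
    import warnings

    warnings.simplefilter("always", ResourceWarning)

    with open("scratch.txt", "w") as f:   # create the stand-in file first
        f.write("hello\n")

    def sloppy_read(path):
        # The file object is never closed explicitly; CPython emits a
        # ResourceWarning when the unclosed file object is finalized.
        return open(path).read()

    sloppy_read("scratch.txt")
    gc.collect()  # make finalization, and thus the warning, deterministic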
(not me, stop bothering me with good programming practices, I do know them, let me write crappy code!) On the REPL case, I have no strong opinion. For developers who want to see warnings, they are still not shown for applications run via an entry point, nor for any code base larger than a single file (i.e. warnings raised outside the __main__ module). -- In practice, I'm running tests with python -Wd (to see ResourceWarning in my case), and I'm happy with that :-) If tests pass with -Wd, you're good. This is why I implemented a new "development mode", python3 -X dev, in Python 3.7, which "shows all warnings except BytesWarning". (This mode is already mentioned in Nick's PEP.) https://docs.python.org/dev/using/cmdline.html#id5 My advice is to always run your tests with -X dev. If they pass with -X dev, you are good. I'm not even sure that developers should use -X dev to test their code manually. I see it as a "cheap linter". I don't want to run a linter each time I run Python. The "-X dev" option is smarter than -Wd: it adds the default filter *at the end*, to respect the -b and -bb options for BytesWarning. Release build:

$ ./python -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('ignore', None, <class 'DeprecationWarning'>, None, 0),
 ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0),
 ('ignore', None, <class 'ImportWarning'>, None, 0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('ignore', None, <class 'ResourceWarning'>, None, 0)]

-Wd on the release build adds a filter at the start:

$ ./release.python -Wd -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('default', None, <class 'Warning'>, None, 0),   <~~~ HERE
 ('ignore', None, <class 'DeprecationWarning'>, None, 0),
 ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0),
 ('ignore', None, <class 'ImportWarning'>, None, 0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('ignore', None, <class 'ResourceWarning'>, None, 0)]

Debug build:

$ ./python -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('ignore', None, <class 'BytesWarning'>, None, 0),
 ('default', None, <class 'ResourceWarning'>, None, 0)]

-X dev adds a default filter *at the end*:

$ ./python -X dev -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('ignore', None, <class 'BytesWarning'>, None, 0),
 ('default', None, <class 'ResourceWarning'>, None, 0),
 ('default', None, <class 'Warning'>, None, 0)]   <~~~ HERE

Note: you can combine -X dev with -W if you want ;-) (It works as expected.) Victor From alexander.belopolsky at gmail.com Wed Nov 29 19:06:58 2017 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 29 Nov 2017 19:06:58 -0500 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: On Wed, Nov 29, 2017 at 6:42 PM, Chris Barker wrote: > > indeed what is the holdup? I don't recall anyone saying it was a bad idea > in the last discussion. > > Do we just need an implementation? > > Is the one in the Bug Report not up to snuff? If not, then what's wrong > with it? This is just not that hard a problem to solve. > See my comment from over a year ago: < https://bugs.python.org/issue15873#msg273609>. The proposed patch did not have a C implementation, but we can use the same approach as with strptime and call Python code from C. If users start complaining about performance, we can speed it up in later releases. Also the new method needs to be documented. Overall, it does not seem to require more than an hour of work from a motivated developer, but the people who contributed to the issue in the past seem to have lost their interest. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Wed Nov 29 19:08:47 2017 From: eric at trueblade.com (Eric V.
Smith) Date: Wed, 29 Nov 2017 19:08:47 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: On 11/29/2017 6:51 PM, Carl Meyer wrote: > On 11/29/2017 03:26 PM, Eric V. Smith wrote: >> I've posted a new version of PEP 557, it should soon be available at >> https://www.python.org/dev/peps/pep-0557/. >> >> The only significant changes since the last version are: >> >> - changing the "compare" parameter to be "order", since that more >> accurately reflects what it does. >> - Having the combination of "eq=False" and "order=True" raise an >> exception instead of silently changing eq to True. >> >> There were no other issues raised with the previous version of the PEP. > > Not quite; I also raised the issue of isdataclass(ADataClass) returning > False. I still think that's likely to be a cause of bug reports if left > as-is. Oops, sorry about that! I think you're probably right. attr.has(), which is the equivalent attrs API, returns True for both classes and instances (although interestingly, the code only talks about it working on classes). https://github.com/ericvsmith/dataclasses/issues/99 Eric. From mariocj89 at gmail.com Wed Nov 29 19:18:28 2017 From: mariocj89 at gmail.com (Mario Corchero) Date: Thu, 30 Nov 2017 00:18:28 +0000 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: There were discussions about having it be a function, making the constructor of datetime accept a string (this was strongly rejected), having a static function in datetime, etc... and there was no real agreement. If the agreement is that we want a function to be able to parse it, I am sure Paul G will be kind enough to do it (he told me not long ago he was thinking of sending a PR for it). If he is busy I am happy to chip in time this weekend. All I wanted when I sent https://bugs.python.org/issue31800 was actually to be able to parse isoformat datetimes ^^. On Thu, 30 Nov 2017 at 00:09, Alexander Belopolsky < alexander.belopolsky at gmail.com> wrote: > On Wed, Nov 29, 2017 at 6:42 PM, Chris Barker > wrote: > >> >> indeed what is the holdup? I don't recall anyone saying it was a bad idea >> in the last discussion. >> >> Do we just need an implementation? >> >> Is the one in the Bug Report not up to snuff? If not, then what's wrong >> with it? This is just not that hard a problem to solve. >> > > > See my comment from over a year ago: < > https://bugs.python.org/issue15873#msg273609>. The proposed patch did > not have a C implementation, but we can use the same approach as with > strptime and call Python code from C. If users start complaining > about performance, we can speed it up in later releases. Also the new > method needs to be documented. Overall, it does not seem to require more > than an hour of work from a motivated developer, but the people who > contributed to the issue in the past seem to have lost their interest. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/mariocj89%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From paul at ganssle.io Wed Nov 29 19:19:04 2017 From: paul at ganssle.io (Paul G) Date: Thu, 30 Nov 2017 00:19:04 +0000 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: <4AC956F5-9FD1-40BC-972D-619DC91AC600@ganssle.io> I can write at least a pure Python implementation in the next few days, if not a full C implementation. Shouldn't be too hard since I've got a few different Cython implementations sitting around anyway. On November 29, 2017 7:06:58 PM EST, Alexander Belopolsky wrote: >On Wed, Nov 29, 2017 at 6:42 PM, Chris Barker >wrote: > >> >> indeed what is the holdup? I don't recall anyone saying it was a bad >idea >> in the last discussion. >> >> Do we just need an implementation? >> >> Is the one in the Bug Report not up to snuff? If not, then what's >wrong >> with it? This is just not that hard a problem to solve. >> > > >See my comment from over a year ago: < >https://bugs.python.org/issue15873#msg273609>. The proposed patch did >not >have a C implementation, but we can use the same approach as with >strptime >and call Python code from C. If users will start complaining about >performance, we can speed it up in later releases. Also the new method >needs to be documented. Overall, it does not seem to require more than >an >hour of work from a motivated developer, but the people who contributed >to >the issue in the past seem to have lost their interest. -------------- next part -------------- An HTML attachment was scrubbed... URL: From python-dev at mgmiller.net Wed Nov 29 19:36:07 2017 From: python-dev at mgmiller.net (Mike Miller) Date: Wed, 29 Nov 2017 16:36:07 -0800 Subject: [Python-Dev] iso8601 parsing In-Reply-To: <4AC956F5-9FD1-40BC-972D-619DC91AC600@ganssle.io> References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> <4AC956F5-9FD1-40BC-972D-619DC91AC600@ganssle.io> Message-ID: <84fd7261-d39a-67c2-67de-e49f02d7f436@mgmiller.net> Yeah! I'm available for writing docs and testing it if needed. Performance is not a big concern in this first version, unless you've already written most of it. If it is a concern for others then the third-party modules will still be available as well. -Mike On 2017-11-29 16:19, Paul G wrote: > I can write at least a pure Python implementation in the next few days, if not a > full C implementation. Shouldn't be too hard since I've got a few different > Cython implementations sitting around anyway. > From guido at python.org Wed Nov 29 20:02:21 2017 From: guido at python.org (Guido van Rossum) Date: Wed, 29 Nov 2017 17:02:21 -0800 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: > On 11/29/2017 03:26 PM, Eric V. Smith wrote: > > I've posted a new version of PEP 557, it should soon be available at > > https://www.python.org/dev/peps/pep-0557/. > > > > The only significant changes since the last version are: > > > > - changing the "compare" parameter to be "order", since that more > > accurately reflects what it does. > > - Having the combination of "eq=False" and "order=True" raise an > > exception instead of silently changing eq to True. > > > > There were no other issues raised with the previous version of the PEP. > > Not quite; I also raised the issue of isdataclass(ADataClass) returning > False. I still think that's likely to be a cause of bug reports if left > as-is. 
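(For readers following along, here is the behavior in question, sketched against the dataclasses package on PyPI -- an illustration of the draft semantics, not the PEP's actual implementation; it leans on the __dataclass_fields__ attribute the decorator adds:)

    import dataclasses

    @dataclasses.dataclass
    class Point:
        x: int
        y: int

    def isdataclass(obj):
        # Instance-only check, mirroring the draft behavior under discussion.
        return hasattr(type(obj), "__dataclass_fields__")

    print(isdataclass(Point(1, 2)))  # True
    print(isdataclass(Point))        # False -- the surprise Carl is pointing at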
> I tried to look up the discussion but didn't find much except that you flagged this as an issue. To repeat, your concern is that isdataclass() applies to *instances*, not classes, which is how Eric has designed it, but you worry that either through the name or just because people don't read the docs it will be confusing. What do you suppose we do? I think making it work for classes as well as for instances would cause another category of bugs (confusion between cases where a class is needed vs. an instance abound in other situations -- we don't want to add to that). Maybe it should raise TypeError when passed a class (unless its metaclass is a dataclass)? Maybe it should be renamed to isdataclassinstance()? That's a mouthful, but I don't know how common the need to call this is, and people who call it a lot can define their own shorter alias. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Wed Nov 29 20:17:44 2017 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 29 Nov 2017 20:17:44 -0500 Subject: [Python-Dev] iso8601 parsing In-Reply-To: References: <01e69881-3710-87c8-f47a-dfc427ec65b5@mgmiller.net> Message-ID: On Wed, Nov 29, 2017 at 7:18 PM, Mario Corchero wrote: > There were discussions about having it a function, making the constructor > of datetime accept a string(this was strongly rejected), having a static > function in datetime, etc... and there was no real agreement. > Guido has written several times that a named constructor is the way forward. The name "fromisoformat" was more or less agreed upon as well. In fact, Mathieu Dupuy's patch was 95% there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Thu Nov 30 03:27:45 2017 From: eric at trueblade.com (Eric V. Smith) Date: Thu, 30 Nov 2017 03:27:45 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: On 11/29/2017 6:26 PM, Eric V. Smith wrote: > I've posted a new version of PEP 557, it should soon be available at > https://www.python.org/dev/peps/pep-0557/. Also, I've posted version 0.2 to PyPI as dataclasses, so you can play with it on 3.6 and 3.7. Eric. From solipsis at pitrou.net Thu Nov 30 04:16:58 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 30 Nov 2017 10:16:58 +0100 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: <20171130101658.7f7e5807@fsol> isdataclass() testing for instance-ship does sound like a bug magnet to me. If isdataclassinstance() is too long (that's understandable), how about isdatainstance()? Regards Antoine. On Wed, 29 Nov 2017 17:02:21 -0800 Guido van Rossum wrote: > On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: > > > On 11/29/2017 03:26 PM, Eric V. Smith wrote: > > > I've posted a new version of PEP 557, it should soon be available at > > > https://www.python.org/dev/peps/pep-0557/. > > > > > > The only significant changes since the last version are: > > > > > > - changing the "compare" parameter to be "order", since that more > > > accurately reflects what it does. > > > - Having the combination of "eq=False" and "order=True" raise an > > > exception instead of silently changing eq to True. 
> > > > > > There were no other issues raised with the previous version of the PEP. > > > > Not quite; I also raised the issue of isdataclass(ADataClass) returning > > False. I still think that's likely to be a cause of bug reports if left > > as-is. > > > > I tried to look up the discussion but didn't find much except that you > flagged this as an issue. To repeat, your concern is that isdataclass() > applies to *instances*, not classes, which is how Eric has designed it, but > you worry that either through the name or just because people don't read > the docs it will be confusing. What do you suppose we do? I think making it > work for classes as well as for instances would cause another category of > bugs (confusion between cases where a class is needed vs. an instance > abound in other situations -- we don't want to add to that). Maybe it > should raise TypeError when passed a class (unless its metaclass is a > dataclass)? Maybe it should be renamed to isdataclassinstance()? That's a > mouthful, but I don't know how common the need to call this is, and people > who call it a lot can define their own shorter alias. > From agriff at tin.it Thu Nov 30 05:48:56 2017 From: agriff at tin.it (Andrea Griffini) Date: Thu, 30 Nov 2017 11:48:56 +0100 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: Not really related but the PEP says that arguments in Python are evaluated before the function (as a reason to reject the idea of None-aware function call) but this is not the case: >>> import dis >>> dis.dis(lambda : f()(g())) 1 0 LOAD_GLOBAL 0 (f) 3 CALL_FUNCTION 0 6 LOAD_GLOBAL 1 (g) 9 CALL_FUNCTION 0 12 CALL_FUNCTION 1 15 RETURN_VALUE Andrea On Wed, Nov 29, 2017 at 7:14 PM, Joao S. O. Bueno wrote: > On 28 November 2017 at 18:38, Barry Warsaw wrote: > > On Nov 28, 2017, at 15:31, Raymond Hettinger < > raymond.hettinger at gmail.com> wrote: > > > >> Put me down for a strong -1. The proposal would occasionally save a > few keystokes but comes at the expense of giving Python a more Perlish look > and a more arcane feel. > > > > I am also -1. > > > >> One of the things I like about Python is that I can walk > non-programmers through the code and explain what it does. The examples in > PEP 505 look like a step in the wrong direction. They don't "look like > Python" and make me feel like I have to decrypt the code to figure-out what > it does. > > > > I had occasional to speak with someone very involved in Rust > development. They have a process roughly similar to our PEPs. One of the > things he told me, which I found very interesting and have been mulling > over for PEPs is, they require a section in their specification discussion > how any new feature will be taught, both to new Rust programmers and > experienced ones. I love the emphasis on teachability. Sometimes I really > miss that when considering some of the PEPs and the features they introduce > (look how hard it is to teach asynchronous programming). > > Oh well, > I would be +1 on patching PEP 1 for that. 
> > > > > > Cheers, > > -Barry > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > agriff%40tin.it > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Thu Nov 30 06:30:44 2017 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 30 Nov 2017 06:30:44 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: <20171130101658.7f7e5807@fsol> References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> <20171130101658.7f7e5807@fsol> Message-ID: Could these be things in types? types.ClassType types.InstanceType types.DataClass types.DataClassInstanceType (?) I sent a PR with typo fixes and ``.. code:: python`` directives so that syntax highlighting works (at least on GitHub). https://github.com/python/peps/blob/master/pep-0557.rst https://github.com/python/peps/pull/488 Additional notes: - "DataClass" instead of "Data Class" would be easier to search for. s/DataClass/Data Class/g? - It's probably worth mentioning how hash works when frozen=True also here: https://github.com/python/peps/blob/master/pep-0557.rst#frozen-instances - The `hash` explanation could be a two column table for easier readability What a great feature. - Runtime data validation from annotations (like PyContracts,) would be cool - __slots__ are worth the time On Thursday, November 30, 2017, Antoine Pitrou wrote: > > isdataclass() testing for instance-ship does sound like a bug magnet to > me. > > If isdataclassinstance() is too long (that's understandable), how about > isdatainstance()? > > Regards > > Antoine. > > > On Wed, 29 Nov 2017 17:02:21 -0800 > Guido van Rossum > wrote: > > On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer > wrote: > > > > > On 11/29/2017 03:26 PM, Eric V. Smith wrote: > > > > I've posted a new version of PEP 557, it should soon be available at > > > > https://www.python.org/dev/peps/pep-0557/. > > > > > > > > The only significant changes since the last version are: > > > > > > > > - changing the "compare" parameter to be "order", since that more > > > > accurately reflects what it does. > > > > - Having the combination of "eq=False" and "order=True" raise an > > > > exception instead of silently changing eq to True. > > > > > > > > There were no other issues raised with the previous version of the > PEP. > > > > > > Not quite; I also raised the issue of isdataclass(ADataClass) returning > > > False. I still think that's likely to be a cause of bug reports if left > > > as-is. > > > > > > > I tried to look up the discussion but didn't find much except that you > > flagged this as an issue. To repeat, your concern is that isdataclass() > > applies to *instances*, not classes, which is how Eric has designed it, > but > > you worry that either through the name or just because people don't read > > the docs it will be confusing. What do you suppose we do? I think making > it > > work for classes as well as for instances would cause another category of > > bugs (confusion between cases where a class is needed vs. an instance > > abound in other situations -- we don't want to add to that). Maybe it > > should raise TypeError when passed a class (unless its metaclass is a > > dataclass)? Maybe it should be renamed to isdataclassinstance()? 
That's a > > mouthful, but I don't know how common the need to call this is, and > people > > who call it a lot can define their own shorter alias. > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Thu Nov 30 06:43:10 2017 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 30 Nov 2017 06:43:10 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> <20171130101658.7f7e5807@fsol> Message-ID: Sorry, this one shouldn't be "an"; " a" was correct:: ``repr``: If true (the default), an ``__repr__`` method will be generated. Note that __repr__ can be dangerous with user-supplied input because of terminal control character injection (e.g. broken syntax highlighting, \r, character set, LEDs,) On Thursday, November 30, 2017, Wes Turner wrote: > Could these be things in types? > > types.ClassType > types.InstanceType > > types.DataClass > types.DataClassInstanceType (?) > > I sent a PR with typo fixes and ``.. code:: python`` directives so that > syntax highlighting works (at least on GitHub). > > https://github.com/python/peps/blob/master/pep-0557.rst > > https://github.com/python/peps/pull/488 > > Additional notes: > > - "DataClass" instead of "Data Class" would be easier to search for. > s/DataClass/Data Class/g? > - It's probably worth mentioning how hash works when frozen=True also here: > https://github.com/python/peps/blob/master/pep-0557.rst#frozen-instances > - The `hash` explanation could be a two column table for easier readability > > What a great feature. > > - Runtime data validation from annotations (like PyContracts,) would be > cool > - __slots__ are worth the time > > On Thursday, November 30, 2017, Antoine Pitrou > wrote: > >> >> isdataclass() testing for instance-ship does sound like a bug magnet to >> me. >> >> If isdataclassinstance() is too long (that's understandable), how about >> isdatainstance()? >> >> Regards >> >> Antoine. >> >> >> On Wed, 29 Nov 2017 17:02:21 -0800 >> Guido van Rossum wrote: >> > On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: >> > >> > > On 11/29/2017 03:26 PM, Eric V. Smith wrote: >> > > > I've posted a new version of PEP 557, it should soon be available at >> > > > https://www.python.org/dev/peps/pep-0557/. >> > > > >> > > > The only significant changes since the last version are: >> > > > >> > > > - changing the "compare" parameter to be "order", since that more >> > > > accurately reflects what it does. >> > > > - Having the combination of "eq=False" and "order=True" raise an >> > > > exception instead of silently changing eq to True. >> > > > >> > > > There were no other issues raised with the previous version of the >> PEP. >> > > >> > > Not quite; I also raised the issue of isdataclass(ADataClass) >> returning >> > > False. I still think that's likely to be a cause of bug reports if >> left >> > > as-is. >> > > >> > >> > I tried to look up the discussion but didn't find much except that you >> > flagged this as an issue. 
To repeat, your concern is that isdataclass() >> > applies to *instances*, not classes, which is how Eric has designed it, >> but >> > you worry that either through the name or just because people don't read >> > the docs it will be confusing. What do you suppose we do? I think >> making it >> > work for classes as well as for instances would cause another category >> of >> > bugs (confusion between cases where a class is needed vs. an instance >> > abound in other situations -- we don't want to add to that). Maybe it >> > should raise TypeError when passed a class (unless its metaclass is a >> > dataclass)? Maybe it should be renamed to isdataclassinstance()? That's >> a >> > mouthful, but I don't know how common the need to call this is, and >> people >> > who call it a lot can define their own shorter alias. >> > >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes. >> turner%40gmail.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Thu Nov 30 06:56:52 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 30 Nov 2017 13:56:52 +0200 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> Message-ID: 30.11.17 03:02, Guido van Rossum ????: > I tried to look up the discussion but didn't find much except that you > flagged this as an issue. To repeat, your concern is that isdataclass() > applies to *instances*, not classes, which is how Eric has designed it, > but you worry that either through the name or just because people don't > read the docs it will be confusing. What do you suppose we do? I think > making it work for classes as well as for instances would cause another > category of bugs (confusion between cases where a class is needed vs. an > instance abound in other situations -- we don't want to add to that). > Maybe it should raise TypeError when passed a class (unless its > metaclass is a dataclass)? Maybe it should be renamed to > isdataclassinstance()? That's a mouthful, but I don't know how common > the need to call this is, and people who call it a lot can define their > own shorter alias. There is isdatadescriptor() which is not too shorter. From solipsis at pitrou.net Thu Nov 30 06:59:03 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 30 Nov 2017 12:59:03 +0100 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> <20171130101658.7f7e5807@fsol> Message-ID: <20171130125903.26fc1054@fsol> Or, simply, is_dataclass_instance(), which is even longer, but far more readable thanks to explicit word boundaries :-) On Thu, 30 Nov 2017 10:16:58 +0100 Antoine Pitrou wrote: > isdataclass() testing for instance-ship does sound like a bug magnet to > me. > > If isdataclassinstance() is too long (that's understandable), how about > isdatainstance()? > > Regards > > Antoine. > > > On Wed, 29 Nov 2017 17:02:21 -0800 > Guido van Rossum wrote: > > On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: > > > > > On 11/29/2017 03:26 PM, Eric V. Smith wrote: > > > > I've posted a new version of PEP 557, it should soon be available at > > > > https://www.python.org/dev/peps/pep-0557/. 
> > > > > > > > The only significant changes since the last version are: > > > > > > > > - changing the "compare" parameter to be "order", since that more > > > > accurately reflects what it does. > > > > - Having the combination of "eq=False" and "order=True" raise an > > > > exception instead of silently changing eq to True. > > > > > > > > There were no other issues raised with the previous version of the PEP. > > > > > > Not quite; I also raised the issue of isdataclass(ADataClass) returning > > > False. I still think that's likely to be a cause of bug reports if left > > > as-is. > > > > > > > I tried to look up the discussion but didn't find much except that you > > flagged this as an issue. To repeat, your concern is that isdataclass() > > applies to *instances*, not classes, which is how Eric has designed it, but > > you worry that either through the name or just because people don't read > > the docs it will be confusing. What do you suppose we do? I think making it > > work for classes as well as for instances would cause another category of > > bugs (confusion between cases where a class is needed vs. an instance > > abound in other situations -- we don't want to add to that). Maybe it > > should raise TypeError when passed a class (unless its metaclass is a > > dataclass)? Maybe it should be renamed to isdataclassinstance()? That's a > > mouthful, but I don't know how common the need to call this is, and people > > who call it a lot can define their own shorter alias. > > > > > From eric at trueblade.com Thu Nov 30 07:59:54 2017 From: eric at trueblade.com (Eric V. Smith) Date: Thu, 30 Nov 2017 07:59:54 -0500 Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes In-Reply-To: <20171130125903.26fc1054@fsol> References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com> <20171130101658.7f7e5807@fsol> <20171130125903.26fc1054@fsol> Message-ID: <13c3f7dc-ec8d-14ea-af2c-a8156e1e1936@trueblade.com> On 11/30/2017 6:59 AM, Antoine Pitrou wrote: > > Or, simply, is_dataclass_instance(), which is even longer, but far more > readable thanks to explicit word boundaries :-) That actually doesn't bother me. I think this API will be used rarely, if ever. Or more realistically, it should be used rarely: what actually happens will no doubt surprise me. So I'm okay with is_dataclass_instance() and is_dataclass_class(). But then I'm also okay with dropping the API entirely. namedtuple has lived for years without it, although Raymond's advice there is that if you really want to know, look for _fields. See https://bugs.python.org/issue7796#msg99869 and the following discussion. Eric. > > On Thu, 30 Nov 2017 10:16:58 +0100 > Antoine Pitrou wrote: >> isdataclass() testing for instance-ship does sound like a bug magnet to >> me. >> >> If isdataclassinstance() is too long (that's understandable), how about >> isdatainstance()? >> >> Regards >> >> Antoine. >> >> >> On Wed, 29 Nov 2017 17:02:21 -0800 >> Guido van Rossum wrote: >>> On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: >>> >>>> On 11/29/2017 03:26 PM, Eric V. Smith wrote: >>>>> I've posted a new version of PEP 557, it should soon be available at >>>>> https://www.python.org/dev/peps/pep-0557/. >>>>> >>>>> The only significant changes since the last version are: >>>>> >>>>> - changing the "compare" parameter to be "order", since that more >>>>> accurately reflects what it does. >>>>> - Having the combination of "eq=False" and "order=True" raise an >>>>> exception instead of silently changing eq to True.
>>>>> >>>>> There were no other issues raised with the previous version of the PEP. >>>> >>>> Not quite; I also raised the issue of isdataclass(ADataClass) returning >>>> False. I still think that's likely to be a cause of bug reports if left >>>> as-is. >>>> >>> >>> I tried to look up the discussion but didn't find much except that you >>> flagged this as an issue. To repeat, your concern is that isdataclass() >>> applies to *instances*, not classes, which is how Eric has designed it, but >>> you worry that either through the name or just because people don't read >>> the docs it will be confusing. What do you suppose we do? I think making it >>> work for classes as well as for instances would cause another category of >>> bugs (confusion between cases where a class is needed vs. an instance >>> abound in other situations -- we don't want to add to that). Maybe it >>> should raise TypeError when passed a class (unless its metaclass is a >>> dataclass)? Maybe it should be renamed to isdataclassinstance()? That's a >>> mouthful, but I don't know how common the need to call this is, and people >>> who call it a lot can define their own shorter alias. >>> >> >> >> > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > From wes.turner at gmail.com Thu Nov 30 08:00:10 2017 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 30 Nov 2017 08:00:10 -0500 Subject: [Python-Dev] PEPs: ``.. code:: python`` or ``::`` (syntax highlighting) Message-ID: In ReStructuredText, this gets syntax highlighted because of the code directive [1][2][3]:

    .. code:: python

       import this

       def func(*args, **kwargs):
           pass

This also gets syntax highlighted as python [3]:

    .. code:: python

       import this

       def func(*args, **kwargs):
           pass

This does not::

    import this

    def func(*args, **kwargs):
        pass

Syntax highlighting in Docutils 0.9+ is powered by Pygments. If Pygments is not installed, or there is a syntax error, syntax highlighting is absent. GitHub does show Pygments syntax highlighting in .. code:: blocks for .rst and .restructuredtext documents [4].

1. Does the python.org PEP view support .. code:: blocks? [5]
2. Syntax highlighting is an advantage for writers, editors, and readers.
3. Should PEPs use .. code:: blocks to provide this advantage?

[1] http://docutils.sourceforge.net/docs/ref/rst/directives.html#code [2] http://www.sphinx-doc.org/en/stable/markup/code.html [3] http://www.sphinx-doc.org/en/stable/config.html#confval-highlight_language [4] https://github.com/python/peps/blob/master/pep-0557.rst [5] https://www.python.org/dev/peps/pep-0557/ https://www.python.org/dev/peps/pep-0458/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dacut at kanga.org Thu Nov 30 02:28:24 2017 From: dacut at kanga.org (David Cuthbert) Date: Thu, 30 Nov 2017 07:28:24 +0000 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements In-Reply-To: References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> Message-ID: Henk-Jaap noted that the grammar section of the language ref for yield and return should also be updated from expression_list to starred_list with this change. As noted elsewhere, this isn't in-sync with the Grammar file (intentionally, if I understand correctly).
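(For anyone who wants to reproduce the underlying asymmetry before digging into the grammar, a quick runnable check -- on interpreters without the grammar change, the compile() call raises SyntaxError, while the parenthesized form is accepted everywhere:)

    rest = [2, 3]
    print((1, *rest))  # parenthesized unpacking: accepted everywhere

    try:
        compile("def f(rest): return 1, *rest", "<test>", "exec")
        print("unparenthesized starred return: accepted")
    except SyntaxError as e:
        print("unparenthesized starred return rejected:", e)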
I took a look, and I believe that every instance of expression_list (which doesn't allow the unparenthesized tuple unpacking) should be changed to starred_list. Which might really mean that starred_list should have never existed, and the changes should have been put into expression_list in the first place (though I understand the desire to be conservative with syntax changes). Here are the places where expression_list is still allowed (after fixing return and yield):

subscription ::= primary "[" expression_list "]"
augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)
for_stmt ::= "for" target_list "in" expression_list ":" suite ["else" ":" suite]

In other words, the following all produce SyntaxErrors today (and enclosing them in parentheses avoids this):

a[1, *rest]
a += 1, *rest   # and other augops: -= *= /= etc.
for i in 1, *rest:

My hunch is these cases should also be fixed to be consistent. While I can't see myself using something like "a += 1, *rest" in the immediate future, it seems weird to be inconsistent in these cases (and reinforces the oft-mistaken assumption, from Terry's earlier reply, that tuples are defined by parentheses instead of commas). Any reason I shouldn't dig in and fix this while I'm here? Dave On 11/25/17, 9:03 PM, Nick Coghlan wrote: On 26 November 2017 at 09:22, Terry Reedy wrote: > Since return and yield are often the first half of a cross-namespace > assignment, requiring the () is a bit surprising. Perhaps someone else has > a good reason for the difference. These kinds of discrepancies tend to arise because there are a few different grammar nodes for "comma separated sequence of expressions", which makes it possible to miss some when enhancing the tuple syntax. Refactoring the grammar to eliminate the duplication isn't especially easy, and we don't change the syntax all that often, so it makes sense to treat cases like this one as bugs in the implementation of the original syntax change (except that the "don't change the Grammar in maintenance releases" guideline means they still need to be handled as new features when it comes to fixing them). Cheers, Nick. P.S. That said, I do wonder if it might be feasible to write a "Grammar consistency check" test that ensured the known duplicate nodes at least have consistent definitions, such that missing one in a syntax update will cause an automated test failure. Unfortunately, the nodes typically haven't been combined because they have some *intentional* differences in exactly what they allow, so I also suspect that this is easier said than done. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Thu Nov 30 10:42:50 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 30 Nov 2017 16:42:50 +0100 Subject: [Python-Dev] Py_DECREF(m) on PyInit_xxx() failure? Message-ID: Hi, CPython has many C extensions with non-trivial PyInit_xxx() functions which have to handle failures. A few modules use "Py_DECREF(m); return NULL;", but most functions only do "return NULL;". Is it a reference leak or not? Example from Modules/posixmodule.c:

v = convertenviron();
Py_XINCREF(v);
if (v == NULL || PyModule_AddObject(m, "environ", v) != 0)
    return NULL;
Py_DECREF(v);

Does this code leak a reference on m? ... Oh, and maybe also leaks a reference on v? Victor From storchaka at gmail.com Thu Nov 30 11:00:32 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 30 Nov 2017 18:00:32 +0200 Subject: [Python-Dev] Py_DECREF(m) on PyInit_xxx() failure?
In-Reply-To: References: Message-ID: 30.11.17 17:42, Victor Stinner wrote: > CPython has many C extensions with non-trivial PyInit_xxx() functions > which have to handle failures. A few modules use "Py_DECREF(m); return > NULL;", but most functions only do "return NULL;". Is it a reference > leak or not? > > Example from Modules/posixmodule.c: > > v = convertenviron(); > Py_XINCREF(v); > if (v == NULL || PyModule_AddObject(m, "environ", v) != 0) > return NULL; > Py_DECREF(v); > > Does this code leak a reference on m? ... Oh, and maybe also leaks a > reference on v? https://mail.python.org/pipermail/python-dev/2016-April/144359.html https://bugs.python.org/issue26871 From ericfahlgren at gmail.com Thu Nov 30 11:08:43 2017 From: ericfahlgren at gmail.com (Eric Fahlgren) Date: Thu, 30 Nov 2017 08:08:43 -0800 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: On Thu, Nov 30, 2017 at 2:48 AM, Andrea Griffini wrote: > Not really related but the PEP says that arguments in Python are evaluated > before the function (as a reason to reject the idea of None-aware function > call) but this is not the case: > I think you're missing something here, since it seems clear to me that indeed the arguments are evaluated prior to the function call. Maybe unrolling it would help? This is equivalent to the body of your lambda, and you can see that the argument is evaluated prior to the call which receives it.

>>> func = f()
>>> arg = g()
>>> func(arg)

>>> import dis
> >>> dis.dis(lambda : f()(g()))
> 1   0 LOAD_GLOBAL   0 (f)
>     3 CALL_FUNCTION 0

Call 'f()' with all of its arguments evaluated prior to the call (there are none, that's the '0' on the CALL_FUNCTION operator).

>     6 LOAD_GLOBAL   1 (g)
>     9 CALL_FUNCTION 0

Next, evaluate the arguments for the next function call. Call 'g()' with all of its arguments evaluated.

>    12 CALL_FUNCTION 1

Call the function that 'f()' returned with its argument ('g()') evaluated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Thu Nov 30 11:31:56 2017 From: random832 at fastmail.com (Random832) Date: Thu, 30 Nov 2017 11:31:56 -0500 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> Message-ID: <1512059516.1025120.1189594304.120D267A@webmail.messagingengine.com> On Thu, Nov 30, 2017, at 11:08, Eric Fahlgren wrote: > On Thu, Nov 30, 2017 at 2:48 AM, Andrea Griffini wrote: > > > Not really related but the PEP says that arguments in Python are evaluated > > before the function (as a reason to reject the idea of None-aware function > > call) but this is not the case: > > > > I think you're missing something here, since it seems clear to me that > indeed the arguments are evaluated prior to the function call. Maybe > unrolling it would help? This is equivalent to the body of your lambda, > and you can see that the argument is evaluated prior to the call which > receives it. Of course they're evaluated prior to the function *call*, but the PEP says they're evaluated prior to the function *itself* [i.e. arg = g(); func = f(); func(arg)].
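(A tiny runtime check of that ordering, for anyone following along -- a minimal sketch; the print statements show that f() runs before its argument expression g() is evaluated:)

    def f():
        print("f evaluated")
        def call(arg):
            print("outer call with", arg)
        return call

    def g():
        print("g evaluated")
        return "g's result"

    f()(g())
    # Prints "f evaluated", then "g evaluated", then "outer call with g's result".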
From brett at python.org Thu Nov 30 13:27:58 2017
From: brett at python.org (Brett Cannon)
Date: Thu, 30 Nov 2017 18:27:58 +0000
Subject: [Python-Dev] Allow tuple unpacking in return and yield statements
In-Reply-To:
References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org>
Message-ID:

On Thu, 30 Nov 2017 at 05:01 David Cuthbert wrote:

> Henk-Jaap noted that the grammar section of the language ref for yield and
> return should also be updated from expression_list to starred_list with
> this change. As noted elsewhere, this isn't in-sync with the Grammar file
> (intentionally, if I understand correctly).
>
> I took a look, and I believe that every instance of expression_list (which
> doesn't allow the unparenthesized tuple unpacking) should be changed to
> starred_list. [...]
>
> Here are the places where expression_list is still allowed (after fixing
> return and yield):
>
>     subscription              ::= primary "[" expression_list "]"
>     augmented_assignment_stmt ::= augtarget augop (expression_list |
>                                   yield_expression)
>     for_stmt                  ::= "for" target_list "in" expression_list ":"
>                                   suite ["else" ":" suite]
>
> In other words, the following all produce SyntaxErrors today (and
> enclosing them in parentheses avoids this):
>     a[1, *rest]
>     a += 1, *rest   # and other augops: -= *= /= etc.
>     for i in 1, *rest:
>
> My hunch is these cases should also be fixed to be consistent. [...]
>
> Any reason I shouldn't dig in and fix this while I'm here?

It's really a question of ramifications. Do we want every place where
parenthesized tuples are required to allow for the non-paren version? If
there was a way to get an exhaustive list of examples showing what would
change in those instances then we could make a judgement call as to whether
this change is desired.

-Brett

> Dave
>
> On 11/25/17, 9:03 PM, Nick Coghlan wrote:
> [...]

From brett at python.org Thu Nov 30 13:30:23 2017
From: brett at python.org (Brett Cannon)
Date: Thu, 30 Nov 2017 18:30:23 +0000
Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes
In-Reply-To: <13c3f7dc-ec8d-14ea-af2c-a8156e1e1936@trueblade.com>
References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com>
 <20171130101658.7f7e5807@fsol> <20171130125903.26fc1054@fsol>
 <13c3f7dc-ec8d-14ea-af2c-a8156e1e1936@trueblade.com>
Message-ID:

On Thu, 30 Nov 2017 at 05:00 Eric V. Smith wrote:

> On 11/30/2017 6:59 AM, Antoine Pitrou wrote:
> >
> > Or, simply, is_dataclass_instance(), which is even longer, but far more
> > readable thanks to explicit word boundaries :-)
>
> That actually doesn't bother me. I think this API will be used rarely,
> if ever. Or more realistically, it should be used rarely: what actually
> happens will no doubt surprise me.
>
> So I'm okay with is_dataclass_instance() and is_dataclass_class().
>
> But then I'm also okay with dropping the API entirely. namedtuple has
> lived for years without it, although Raymond's advice there is that if
> you really want to know, look for _fields. See
> https://bugs.python.org/issue7796#msg99869 and the following discussion.

My question was going to be whether this is even necessary. :) Perhaps we
just drop it for now and add it in if we find there's a public need for it?

-Brett

> Eric.
>
> [...]

From solipsis at pitrou.net Thu Nov 30 13:33:38 2017
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 30 Nov 2017 19:33:38 +0100
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
Message-ID: <20171130193338.66ba1945@fsol>

On Tue, 28 Nov 2017 12:10:54 -0800
Lukasz Langa wrote:
> Hi Mark,
> it looks like the PEP is dormant for over two years now. I had multiple
> people ask me over the past few days about it though, so I wanted to ask
> if this is moving forward.

I am -1 on this PEP. I also think we don't need any additional syntax for
the feature, regardless of how it's spelt exactly (or whether it's spelled
rather than spelt).

Regards

Antoine.

From tseaver at palladion.com Thu Nov 30 13:45:07 2017
From: tseaver at palladion.com (Tres Seaver)
Date: Thu, 30 Nov 2017 13:45:07 -0500
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To: <718169B1-C1AE-4FCA-92E7-D4E246096612@python.org>
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
 <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
 <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com>
 <718169B1-C1AE-4FCA-92E7-D4E246096612@python.org>
Message-ID:

On 11/29/2017 01:02 PM, Barry Warsaw wrote:
> I don't know whether I like any of this but I think a more
> natural spelling would be:
>
>     val = name.strip()[4:].upper() except (AttributeError, KeyError) as -1
>
> which could devolve into:
>
>     val = name.strip()[4:].upper() except KeyError as -1
>
> or:
>
>     val = name.strip()[4:].upper() except KeyError  # Implicit `as None`

Of all the proposed spellings for the idea, this one feels most "normal"
to me, too (I'm -0 on the idea as a whole).

> I would *not* add any spelling for an explicit bare-except equivalent.
> You would have to write:
>
>     val = name.strip()[4:].upper() except Exception as -1

Wouldn't that really need to be this instead, for a true 'except:'
equivalence:

    val = name.strip()[4:].upper() except BaseException as -1

Tres.

--
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com

From rosuav at gmail.com Thu Nov 30 14:45:33 2017
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 1 Dec 2017 06:45:33 +1100
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To:
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
 <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
 <1511974989.903268.1188352080.36BF6ADC@webmail.messagingengine.com>
 <718169B1-C1AE-4FCA-92E7-D4E246096612@python.org>
Message-ID:

On Fri, Dec 1, 2017 at 5:45 AM, Tres Seaver wrote:
>> I would *not* add any spelling for an explicit bare-except equivalent.
>> You would have to write:
>>
>>     val = name.strip()[4:].upper() except Exception as -1
>
> Wouldn't that really need to be this instead, for a true 'except:'
> equivalence:
>
>     val = name.strip()[4:].upper() except BaseException as -1

Read the rejected PEP 463 for all the details and arguments. All this has
been gone into.

ChrisA

From carl at oddbird.net Thu Nov 30 15:35:10 2017
From: carl at oddbird.net (Carl Meyer)
Date: Thu, 30 Nov 2017 12:35:10 -0800
Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes
In-Reply-To:
References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com>
Message-ID: <5d98fa5b-fc53-b4bb-119e-57e569403ec2@oddbird.net>

On 11/29/2017 05:02 PM, Guido van Rossum wrote:
> I tried to look up the discussion but didn't find much except that you
> flagged this as an issue. To repeat, your concern is that isdataclass()
> applies to *instances*, not classes, which is how Eric has designed it,
> but you worry that either through the name or just because people don't
> read the docs it will be confusing. What do you suppose we do? I think
> making it work for classes as well as for instances would cause another
> category of bugs (confusion between cases where a class is needed vs. an
> instance abound in other situations -- we don't want to add to that).
> Maybe it should raise TypeError when passed a class (unless its
> metaclass is a dataclass)? Maybe it should be renamed to
> isdataclassinstance()? That's a mouthful, but I don't know how common
> the need to call this is, and people who call it a lot can define their
> own shorter alias.

Yeah, I didn't propose a specific fix because I think there are several
options (all mentioned in this thread already), and I don't really have
strong feelings about them:

1) Keep the existing function and name, let it handle either classes or
instances. (I agree that this is probably not the best option available,
though IMO it's still marginally better than the status quo.)

2) Punt the problem by removing the function; don't add it to the public
API at all until we have demonstrated demand.

3) Rename it to "is_dataclass_instance" (and maybe also keep a separate
"is_dataclass" for testing classes directly). (Then there's also the
choice about raising TypeError vs just returning False if a function is
given the wrong type; I think TypeError is better.)

Carl
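To make option (3) concrete, here is a minimal sketch — not the API as
specified in the PEP. It assumes the decorator marks generated classes with
a `__dataclass_fields__` attribute, as the PEP 557 reference implementation
does, and it raises TypeError for a class, matching Carl's preference above.

    def is_dataclass_instance(obj):
        # Sketch only. A dataclass *instance* finds __dataclass_fields__
        # on its class; for the class object itself, type(obj) is `type`,
        # which carries no such attribute.
        if isinstance(obj, type):
            raise TypeError("is_dataclass_instance() expects an instance, "
                            "not a class")
        return hasattr(type(obj), "__dataclass_fields__")

With this, is_dataclass_instance(ADataClass()) is True,
is_dataclass_instance(42) is False, and is_dataclass_instance(ADataClass)
raises TypeError.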
From lukasz at langa.pl Thu Nov 30 16:09:47 2017
From: lukasz at langa.pl (Lukasz Langa)
Date: Thu, 30 Nov 2017 13:09:47 -0800
Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes
In-Reply-To: <5d98fa5b-fc53-b4bb-119e-57e569403ec2@oddbird.net>
References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com>
 <5d98fa5b-fc53-b4bb-119e-57e569403ec2@oddbird.net>
Message-ID: <46546FCB-06BC-422A-A8DF-7DA01D1FAA2E@langa.pl>

+1 to (3), I like the TypeError idea, too.

I don't care much about naming... but if I were to bikeshed this, I'd go
for:

    isdataclass (like issubclass)
    isdatainstance (like isinstance)

- Ł

> On Nov 30, 2017, at 12:35 PM, Carl Meyer wrote:
>
> Yeah, I didn't propose a specific fix because I think there are several
> options (all mentioned in this thread already), and I don't really have
> strong feelings about them:
> [...]

From greg.ewing at canterbury.ac.nz Thu Nov 30 17:26:18 2017
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 01 Dec 2017 11:26:18 +1300
Subject: [Python-Dev] What's the status of PEP 505: None-aware operators?
In-Reply-To:
References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl>
 <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com>
 <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org>
Message-ID: <5A20858A.3070807@canterbury.ac.nz>

Eric Fahlgren wrote:
> I think you're missing something here, since it seems clear to me that
> indeed the arguments are evaluated prior to the function call.

I think the OP may be confusing "evaluating the function" with
"calling the function".

If the function being called is determined by some computation,
that computation may be performed before its arguments are
evaluated (and is probably required to be, by the "left to
right" rule). But the arguments will always be evaluated
before the actual call happens.

--
Greg

From eric at trueblade.com Thu Nov 30 19:22:49 2017
From: eric at trueblade.com (Eric V. Smith)
Date: Thu, 30 Nov 2017 19:22:49 -0500
Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes
In-Reply-To:
References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com>
 <20171130101658.7f7e5807@fsol> <20171130125903.26fc1054@fsol>
 <13c3f7dc-ec8d-14ea-af2c-a8156e1e1936@trueblade.com>
Message-ID: <25d8d06d-ac8b-865e-6e58-b16c1e91e61e@trueblade.com>

On 11/30/2017 1:30 PM, Brett Cannon wrote:
> On Thu, 30 Nov 2017 at 05:00 Eric V. Smith wrote:
>
>     [...]
>
>     But then I'm also okay with dropping the API entirely. namedtuple has
>     lived for years without it, although Raymond's advice there is that
>     if you really want to know, look for _fields. See
>     https://bugs.python.org/issue7796#msg99869 and the following
>     discussion.
>
> My question was going to be whether this is even necessary. :) Perhaps
> we just drop it for now and add it in if we find there's a public need
> for it?

That's what I'm leaning toward. I've been trying to figure out what
attr.has() or hasattr(obj, '_fields') are actually used for. The attrs
version is hard to search for, and while I see the question about
namedtuples asked fairly often on SO, I haven't seen an actual use case.

It's easy enough for someone to write their own isdataclass(), admittedly
using an undocumented feature. So I'm thinking let's drop it and then
gauge the demand for it, if any.

Eric.

From wes.turner at gmail.com Thu Nov 30 19:31:16 2017
From: wes.turner at gmail.com (Wes Turner)
Date: Thu, 30 Nov 2017 19:31:16 -0500
Subject: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes
In-Reply-To:
References: <516a9e5b-e89e-0dd8-64c5-8a712b58f1ca@trueblade.com>
 <20171130101658.7f7e5807@fsol>
Message-ID:

On Thursday, November 30, 2017, Wes Turner wrote:

> Could these be things in types?
>
> types.ClassType
> types.InstanceType
>
> types.DataClass
> types.DataClassInstanceType (?)

types.DataClass(types.ClassType)
types.DataClassInstanceType(types.InstanceType)

Would be logical?
> > https://github.com/python/peps/blob/master/pep-0557.rst > > https://github.com/python/peps/pull/488 > > Additional notes: > > - "DataClass" instead of "Data Class" would be easier to search for. > s/DataClass/Data Class/g? > - It's probably worth mentioning how hash works when frozen=True also here: > https://github.com/python/peps/blob/master/pep-0557.rst#frozen-instances > - The `hash` explanation could be a two column table for easier readability > > What a great feature. > > - Runtime data validation from annotations (like PyContracts,) would be > cool > - __slots__ are worth the time > > On Thursday, November 30, 2017, Antoine Pitrou > wrote: > >> >> isdataclass() testing for instance-ship does sound like a bug magnet to >> me. >> >> If isdataclassinstance() is too long (that's understandable), how about >> isdatainstance()? >> >> Regards >> >> Antoine. >> >> >> On Wed, 29 Nov 2017 17:02:21 -0800 >> Guido van Rossum wrote: >> > On Wed, Nov 29, 2017 at 3:51 PM, Carl Meyer wrote: >> > >> > > On 11/29/2017 03:26 PM, Eric V. Smith wrote: >> > > > I've posted a new version of PEP 557, it should soon be available at >> > > > https://www.python.org/dev/peps/pep-0557/. >> > > > >> > > > The only significant changes since the last version are: >> > > > >> > > > - changing the "compare" parameter to be "order", since that more >> > > > accurately reflects what it does. >> > > > - Having the combination of "eq=False" and "order=True" raise an >> > > > exception instead of silently changing eq to True. >> > > > >> > > > There were no other issues raised with the previous version of the >> PEP. >> > > >> > > Not quite; I also raised the issue of isdataclass(ADataClass) >> returning >> > > False. I still think that's likely to be a cause of bug reports if >> left >> > > as-is. >> > > >> > >> > I tried to look up the discussion but didn't find much except that you >> > flagged this as an issue. To repeat, your concern is that isdataclass() >> > applies to *instances*, not classes, which is how Eric has designed it, >> but >> > you worry that either through the name or just because people don't read >> > the docs it will be confusing. What do you suppose we do? I think >> making it >> > work for classes as well as for instances would cause another category >> of >> > bugs (confusion between cases where a class is needed vs. an instance >> > abound in other situations -- we don't want to add to that). Maybe it >> > should raise TypeError when passed a class (unless its metaclass is a >> > dataclass)? Maybe it should be renamed to isdataclassinstance()? That's >> a >> > mouthful, but I don't know how common the need to call this is, and >> people >> > who call it a lot can define their own shorter alias. >> > >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes. >> turner%40gmail.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Nov 30 22:50:57 2017 From: guido at python.org (Guido van Rossum) Date: Thu, 30 Nov 2017 19:50:57 -0800 Subject: [Python-Dev] Allow tuple unpacking in return and yield statements In-Reply-To: References: <3F73239C-7B09-46A7-AC39-5C39324E0823@kanga.org> Message-ID: I'd be wary of going too far here. 
The parser (which uses a different representation for the grammar) may not support all of these -- in particular I would guess that the syntax for subscriptions is actually more complicated than shown, because of slices. Do you allow a[x:y, *rest]? On Thu, Nov 30, 2017 at 10:27 AM, Brett Cannon wrote: > > > On Thu, 30 Nov 2017 at 05:01 David Cuthbert wrote: > >> Henk-Jaap noted that the grammar section of the language ref for yield >> and return should also be updated from expression_list to starred_list with >> this change. As noted elsewhere, this isn't in-sync with the Grammar file >> (intentionally, if I understand correctly). >> >> I took a look, and I believe that every instance of expression_list >> (which doesn't allow the unparenthesized tuple unpacking) should be changed >> to starred_list. Which might really mean that starred_list should have >> never existed, and the changes should have been put into expression_list in >> the first place (though I understand the desire to be conservative with >> syntax changes). >> >> Here are the places where expression_list is still allowed (after fixing >> return and yield): >> >> subscription ::= primary "[" expression_list "]" >> augmented_assignment_stmt ::= augtarget augop (expression_list | >> yield_expression) >> for_stmt ::= "for" target_list "in" expression_list ":" suite >> ["else" ":" suite] >> >> In other words, the following all produce SyntaxErrors today (and >> enclosing them in parentheses avoids this): >> a[1, *rest] >> a += 1, *rest # and other augops: -= *= /= etc. >> for i in 1, *rest: >> >> My hunch is these cases should also be fixed to be consistent. While I >> can't see myself using something like "a += 1, *rest" in the immediate >> future, it seems weird to be inconsistent in these cases (and reinforces >> the oft-mistaken assumption, from Terry's earlier reply, that tuples are >> defined by parentheses instead of commas). >> >> Any reason I shouldn't dig in and fix this while I'm here? >> > > It's really a question of ramifications. Do we want every place where > parentheses tuples are required to allow for the non-paren version? If > there was a way to get an exhaustive list of examples showing what would > change in those instances then we could make a judgement call as to whether > this change is desired. > > -Brett > > >> >> Dave >> >> >> On 11/25/17, 9:03 PM, Nick Coghlan wrote: >> >> On 26 November 2017 at 09:22, Terry Reedy wrote: >> > Since return and yield are often the first half of a cross-namespace >> > assignment, requiring the () is a bit surprising. Perhaps someone >> else has >> > a good reason for the difference. >> >> These kinds of discrepancies tend to arise because there are a few >> different grammar nodes for "comma separated sequence of expressions", >> which makes it possible to miss some when enhancing the tuple syntax. >> >> Refactoring the grammar to eliminate the duplication isn't especially >> easy, and we don't change the syntax all that often, so it makes >> sense to treat cases like this one as bugs in the implementation of >> the original syntax change (except that the "don't change the Grammar >> in maintenance releases" guideline means they still need to be handled >> as new features when it comes to fixing them). >> >> Cheers, >> Nick. >> >> P.S. 
That said, I do wonder if it might be feasible to write a >> "Grammar consistency check" test that ensured the known duplicate >> nodes at least have consistent definitions, such that missing one in a >> syntax update will cause an automated test failure. Unfortunately, the >> nodes typically haven't been combined because they have some >> *intentional* differences in exactly what they allow, so I also >> suspect that this is easier said than done. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> brett%40python.org >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Thu Nov 30 23:54:39 2017 From: random832 at fastmail.com (Random832) Date: Thu, 30 Nov 2017 23:54:39 -0500 Subject: [Python-Dev] What's the status of PEP 505: None-aware operators? In-Reply-To: <5A20858A.3070807@canterbury.ac.nz> References: <28D91255-56A9-4CC2-B45D-F83ECD715544@langa.pl> <50C74ECC-D462-4FAC-8A9C-A12BC939ADEB@gmail.com> <5D409985-1D45-4957-9A27-B41C6311BA8B@python.org> <5A20858A.3070807@canterbury.ac.nz> Message-ID: <1512104079.2959729.1190270136.08EB0A92@webmail.messagingengine.com> On Thu, Nov 30, 2017, at 17:26, Greg Ewing wrote: > Eric Fahlgren wrote: > > ?I think you're missing something here, since it seems clear to me that > > indeed the arguments are evaluated prior to the function call.? > > I think the OP may be confusing "evaluating the function" with > "calling the function". > > If the function being called is determined by some computation, > that computation may be performed before its arguments are > evaluated (and is probably required to be, by the "left to > right" rule). But the arguments will always be evaluated > before the actual call happens. Right, but if the function is evaluated before either of those things happen, then it can indeed short-circuit the argument evaluation. The OP isn't confusing anything; it's Eric who is confused. The quoted paragraph of the PEP clearly and unambiguously claims that the sequence is "arguments -> function -> call", meaning that something happens after the "function" stage [i.e. a None check] cannot short-circuit the "arguments" stage. But in fact the sequence is "function -> arguments -> call".
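Returning to the tuple-unpacking grammar thread: one way to get at least a
partial answer to Brett's exhaustive-list question and Guido's slice
question is to probe the compiler directly rather than reason from the
Grammar file. A rough, self-contained sketch — the exact results depend on
the interpreter version, and in particular on whether the return/yield
change under discussion has landed:

    # Probe which unparenthesized star-unpacking spots the running
    # interpreter accepts; compile() raises SyntaxError for the rest.
    tests = [
        "def f(): return 1, *rest",
        "def g(): yield 1, *rest",
        "a[1, *rest]",
        "a[x:y, *rest]",
        "a += 1, *rest",
        "for i in 1, *rest: pass",
    ]
    for src in tests:
        try:
            compile(src, "<probe>", "exec")
            print("accepted:", src)
        except SyntaxError as e:
            print("rejected:", src, "--", e.msg)

On a 3.6-era interpreter all six are rejected. Wrapping the tuple in
parentheses makes every non-slice case compile, while the slice form has
no parenthesized equivalent at all — which is Guido's point.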