From steve at pearwood.info Mon Jan 1 02:29:25 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 1 Jan 2018 18:29:25 +1100 Subject: [Python-Dev] [ssl] The weird case of IDNA In-Reply-To: References: <081550d6-c884-d9b5-e5e9-8c62d48d787e@python.org> <20171230112837.25247c63@fsol> <23113.1105.231843.272117@turnbull.sk.tsukuba.ac.jp> <20180101013956.GW4215@ando.pearwood.info> Message-ID: <20180101072925.GZ4215@ando.pearwood.info> On Sun, Dec 31, 2017 at 05:51:47PM -0800, Nathaniel Smith wrote: > On Sun, Dec 31, 2017 at 5:39 PM, Steven D'Aprano wrote: > > On Sun, Dec 31, 2017 at 09:07:01AM -0800, Nathaniel Smith wrote: > > > >> This is another reason why we ought to let users do their own IDNA handling > >> if they want... > > > > I expect that letting users do their own IDNA handling will correspond > > to not doing any IDNA handling at all. > > You did see the words "if they want", right? Yes. It's the people who don't know that they ought to handle IDNA that concern me. They would "want to" if they knew they ought to, but they don't because they never even thought of non-ASCII URLs, and consequently they write libraries or applications open to IDNA security issues. > I'm not talking about > removing the stdlib's default IDNA handling, I'm talking about fixing > the cases where the stdlib goes out of its way to prevent users from > overriding its IDNA handling. That wasn't clear to me. I completely agree that the stdlib preventing people from overriding the IDNA handling is a bad thing that ought to be fixed, and that users should be able to opt out of it (presumably if they know enough to do that, they know enough to avoid IDNA vulnerabilities). I thought you meant it ought to be opt-in. Sorry for misunderstanding you, but your wording suggested to me that you meant that the stdlib shouldn't do IDNA handling at all unless the user did it themselves (perhaps by calling an IDNA library in the stdlib). I see now that's not what you meant.
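For background, the IDNA 2003 vs 2008 gap under discussion is visible with the stdlib alone: the built-in "idna" codec implements IDNA 2003, whose nameprep step folds "ß" to "ss", while an IDNA 2008 encoder (for example the third-party "idna" package) keeps "ß" as a distinct label. A minimal stdlib-only illustration:

```python
# The stdlib codec is IDNA 2003: nameprep maps "ß" -> "ss", so the label
# collapses to plain ASCII. An IDNA 2008 encoder would keep "ß" and emit
# the ACE form "xn--strae-oqa" instead -- the two standards disagree.
print("bücher".encode("idna"))   # b'xn--bcher-kva'
print("straße".encode("idna"))   # b'strasse'
```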
-- Steve From chris.barker at noaa.gov Mon Jan 1 20:03:38 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 1 Jan 2018 17:03:38 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On Sat, Dec 30, 2017 at 7:27 AM, Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Just use the simple rule that a new > __repr__ is generated unless provided in the dataclass. > are we only talking about __repr__ here ??? I interpreted Guido's proposal as being about all methods -- we _may_ want something special for __repr__, but I hope not. But +1 for Guido's proposal, not only because it's easy to explain, but because it more naturally follows the usual inheritance logic: The decorator's entire point is to auto-generate boilerplate code for you. Once it's done that it shouldn't, in the end, behave any differently than if you hand wrote that code. If you hand wrote the methods that the decorator creates for you, they would override any base class versions. So that's what it should do. And the fact that you can optionally tell it not to in some particular case keeps full flexibility. -CHB > I grant that there may be many reasons why one would be deriving > dataclasses from dataclasses Will you get the "right" __repr__ now if you derive a dataclass from a dataclass? That would be a nice feature. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
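For what it's worth, the rule Chris argues for above is how the dataclasses module eventually shipped in 3.7: a method written in the class body is left in place, everything missing is generated as if hand-written, and a dataclass derived from a dataclass picks up the parent's fields. A quick sketch against the shipped module (not the draft under discussion):

```python
from dataclasses import dataclass

@dataclass
class Plain:
    x: int

@dataclass
class Custom:
    x: int

    def __repr__(self):                  # hand-written, so not overridden
        return f"<Custom x={self.x}>"

@dataclass
class Child(Plain):                      # dataclass derived from a dataclass
    y: int

assert repr(Plain(1)).endswith("Plain(x=1)")
assert repr(Custom(1)) == "<Custom x=1>"
assert repr(Child(1, 2)).endswith("Child(x=1, y=2)")  # parent fields included
```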
URL: From ethan at ethanhs.me Mon Jan 1 22:50:00 2018 From: ethan at ethanhs.me (Ethan Smith) Date: Mon, 1 Jan 2018 19:50:00 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On Mon, Jan 1, 2018 at 5:03 PM, Chris Barker wrote: > On Sat, Dec 30, 2017 at 7:27 AM, Stephen J. Turnbull < > turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > >> Just use the simple rule that a new >> __repr__ is generated unless provided in the dataclass. >> > > are we only talking about __repr__ here ??? > > I interpretted Guido's proposal as being about all methods -- we _may_ > want something special for __repr__, but I hope not. > > But +1 for Guido's proposal, not only because it's easy to explain, but > because it more naturally follows the usual inheritance logic: > > The decorator's entire point is to auto-generate boilerplate code for you. > Once it's done that it shouldn't, in the end, behave any differently than > if you hand wrote that code. If you hand wrote the methods that the > decorator creates for you, they would override any base class versions. So > that's what it should do. > > And the fact that you can optionally tell it not to in some particular > case keeps full flexibility. > > -CHB > I interpreted this to be for all methods as well, which makes sense. Special casing just __repr__ doesn't make sense to me, but I will wait for Guido to clarify. > > > I grant that there may be many reasons why one would be deriving > >> dataclasses from dataclasses > > > Will you get the "right" __repr__ now if you derive a datacalss from a > dataclass? That would be a nice feature. > The __repr__ will be generated by the child dataclass unless the user overrides it. So I believe this is the "right" __repr__. ~>Ethan Smith > > -CHB > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > ethan%40ethanhs.me > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Jan 1 23:44:21 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 1 Jan 2018 20:44:21 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On Mon, Jan 1, 2018 at 7:50 PM, Ethan Smith wrote: > > Will you get the "right" __repr__ now if you derive a datacalss from a >> dataclass? That would be a nice feature. >> > > > The __repr__ will be generated by the child dataclass unless the user > overrides it. So I believe this is the "right" __repr__. > what I was wondering is if the child will know about all the fields in the parent -- so it could make a full __repr__. -CHB > ~>Ethan Smith > >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE >> >> (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ethan% >> 40ethanhs.me >> >> > -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Jan 2 00:01:15 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 1 Jan 2018 22:01:15 -0700 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On Mon, Jan 1, 2018 at 8:50 PM, Ethan Smith wrote: > > > On Mon, Jan 1, 2018 at 5:03 PM, Chris Barker > wrote: > >> On Sat, Dec 30, 2017 at 7:27 AM, Stephen J. Turnbull < >> turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: >> >>> Just use the simple rule that a new >>> __repr__ is generated unless provided in the dataclass. >>> >> >> are we only talking about __repr__ here ??? >> >> I interpreted Guido's proposal as being about all methods -- we _may_ >> want something special for __repr__, but I hope not. >> > [...] >> > > I interpreted this to be for all methods as well, which makes sense. > Special casing just __repr__ doesn't make sense to me, but I will wait for > Guido to clarify. > Indeed, I just wrote __repr__ for simplicity. This should apply to all special methods. (Though there may be some complications for __eq__/__ne__ and for the ordering operators.) On Mon, Jan 1, 2018 at 9:44 PM, Chris Barker wrote: > On Mon, Jan 1, 2018 at 7:50 PM, Ethan Smith wrote: > >> >> Will you get the "right" __repr__ now if you derive a dataclass from a >>> dataclass? That would be a nice feature. >>> >> >> >> > The __repr__ will be generated by the child dataclass unless the user >> overrides it. So I believe this is the "right" __repr__. 
>> > > what I was wondering is if the child will know about all the fields in the > parent -- so it could make a full __repr__. > Yes, there's a class variable (__dataclass_fields__) that identifies the parent fields. The PEP doesn't mention this or the fact that special methods (like __repr__ and __init__) can tell whether a base class is a dataclass. It probably should though. (@Eric) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Tue Jan 2 06:37:36 2018 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 02 Jan 2018 12:37:36 +0100 Subject: [Python-Dev] [ssl] The weird case of IDNA In-Reply-To: References: <081550d6-c884-d9b5-e5e9-8c62d48d787e@python.org> <20171230112837.25247c63@fsol> <23113.1105.231843.272117@turnbull.sk.tsukuba.ac.jp> Message-ID: > On 31 Dec 2017, at 18:07, Nathaniel Smith wrote: > > On Dec 31, 2017 7:37 AM, "Stephen J. Turnbull" > wrote: > Nathaniel Smith writes: > > > Issue 1: Python's built-in IDNA implementation is wrong (implements > > IDNA 2003, not IDNA 2008). > > Is "wrong" the right word here? I'll grant you that 2008 is *better*, > but typically in practice versions coexist for years. I.e., is there no > backward compatibility issue with registries that specified IDNA 2003? > > Well, yeah, I was simplifying, but at the least we can say that always and only using IDNA 2003 certainly isn't right :-). I think in most cases the preferred way to deal with these kinds of issues is not to carry around an IDNA 2003 implementation, but instead to use an IDNA 2008 implementation with the "transitional compatibility" flag enabled in the UTS46 preprocessor? But this is rapidly exceeding my knowledge. > > This is another reason why we ought to let users do their own IDNA handling if they want... Do you know what the major browsers do w.r.t. IDNA support?
If those unconditionally use IDNA 2008 it should be fairly safe to move to that in Python as well, because that would mean we're less likely to run into backward compatibility issues. Ronald -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Jan 2 12:57:36 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Jan 2018 10:57:36 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Oh, the "Specification" section in the PEP is too brief on several of these subjects. It doesn't really specify what var.get() does if the value is not set, nor does it even mention var.get() except in the code examples for var.reset(). It's also subtle that ctx[var] returns the default (if there is one). I suppose it will raise if there isn't one -- resulting in the somewhat surprising behavior where `var in ctx` may be true but `ctx[var]` may raise. And what does it raise? (All these questions are answered by the code, but they should be clearly stated in the PEP.) I would really like to invite more people to review this PEP! I expect I'll be accepting it in the next two weeks, but it needs to go through more rigorous review. On Thu, Dec 28, 2017 at 4:48 PM, Victor Stinner wrote: > Le 28 déc. 2017 11:20 AM, "Nathaniel Smith" a écrit : > > On Thu, Dec 28, 2017 at 1:51 AM, Victor Stinner > wrote: > > var = ContextVar('var', default=42) > > > > and: > > > > var = ContextVar('var') > > var.set(42) > > > > behave the same, no? > > No, they're different. The second sets the value in the current > context. The first sets the value in all contexts that currently > exist, and all empty contexts created in the future. > > > Oh, that's important information. In this case, "default" is the best > name. > > The PEP could be more explicit about the effect on all contexts. Proposed documentation: > > "The optional *default* parameter is the default value in all contexts.
If the > variable is not set in the current context, it is returned by > context[var_name] and by var.get(), when get() is called without the > default parameter." > > Victor > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Jan 2 15:35:37 2018 From: barry at python.org (Barry Warsaw) Date: Tue, 2 Jan 2018 15:35:37 -0500 Subject: [Python-Dev] Unique loader per module Message-ID: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> We have what I think is one last design question for importlib.resources. https://gitlab.com/python-devs/importlib_resources/issues/49 The problem is that the ResourceReader ABC omits the package from the function signatures, so that on a compatible loader, you only need to specify the resource you are interested in. This is fine for file loaders because every package will have a unique loader instance associated with it, so it will know which package the requested resource is homed to. But AFAICT, there is no specification or requirement in Python that every module/package have a unique loader instance. In fact, it's not unreasonable given some of the text in PEP 302 to think that loader instances can be shared. The PEP says "In many cases the finder and loader can be one and the same object: finder.find_module() would just return self" and you aren't typically going to have a unique finder per module, so that would imply a shared loader per finder.
We even have an existence proof in the zip importer:

>>> import test.test_importlib.zipdata02
>>> import sys, os
>>> sys.path.insert(0, os.path.join(os.path.dirname(test.test_importlib.zipdata02.__file__), 'ziptestdata.zip'))
>>> import ziptestdata.two
>>> import ziptestdata.one
>>> ziptestdata.one.__spec__.loader == ziptestdata.two.__spec__.loader
True

The issue above proposes two solutions. The first is to change the ABC so that it includes the package argument in the ABC method signatures. That way, a shared loader will know which package the requested resource is relative to. Brett doesn't like this, for several reasons (quoting):

1. redundant API in all cases where the loader is unique to the module
2. the memory savings of sharing a loader is small
3. it's implementation complexity/overhead for an optimization case.

The second solution, and the one Brett prefers, is to reimplement zip importer to not use a shared loader. This may not be that difficult, if for example we were to use a delegate loader wrapping a shared loader. The bigger problem IMHO is two-fold:

1. It would be backward incompatible. If there's any code out there expecting a shared loader in zipimport, it would break
2. More problematic is that we'd have to impose an additional requirement on loaders - that they always be unique per module, contradicting the advice in PEP 302

The reason for this is third party finder/loaders. Sure, we can fix zipimport but any third party finder/loaders could have the same problem, and they'd be within their rights given the current specification. We'd have to prohibit that, or at least say that any third party finder/loaders that shared their loader can't implement ResourceReader (which would be the practical effect anyway). I think that would be a shame. So while I agree with Brett that it's uglier, and that once we decide we're essentially locked into the API, I don't see a whole lot of options. Thoughts, feedback, suggestions are welcome.
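The "delegate loader wrapping a shared loader" idea mentioned above could look roughly like this (a hypothetical sketch, not the actual zipimport code; the class and attribute names are invented for illustration): a thin per-module proxy gives every module spec a distinct loader object while delegating all real work to the shared loader.

```python
class DelegateLoader:
    def __init__(self, shared_loader, fullname):
        self._shared = shared_loader
        self._fullname = fullname        # the one module this instance serves

    def __getattr__(self, name):
        # exec_module, get_data, get_resource_reader, ... all forwarded.
        return getattr(self._shared, name)

class SharedLoader:                      # stand-in for e.g. a zip loader
    def get_data(self, path):
        return b"data for " + path.encode()

shared = SharedLoader()
one = DelegateLoader(shared, "pkg.one")
two = DelegateLoader(shared, "pkg.two")
assert one is not two                    # unique loader object per module
assert one.get_data("x") == b"data for x"  # behaviour still delegated
```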
-Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From nas-python at arctrix.com Tue Jan 2 15:31:49 2018 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 2 Jan 2018 14:31:49 -0600 Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause Message-ID: <20180102203149.vkepsh5da5pon67q@python.ca> Serhiy Storchaka wrote: > Currently 'break' and 'return' are never used inside 'finally' > clause in the stdlib. See the _recv_bytes() function: Lib/multiprocessing/connection.py: 316 > I would want to see a third-party code that uses them. These are the only ones I found so far: ./gevent/src/gevent/libev/corecffi.py: 147 ./gevent/src/gevent/threadpool.py: 226 I have an AST walker script that finds them. Regards, Neil From victor.stinner at gmail.com Tue Jan 2 18:34:29 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 00:34:29 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: > I would really like to invite more people to review this PEP! I expect I'll be accepting it in the next two weeks, but it needs to go through more rigorous review. I read again the PEP and I am still very confused by Context.run(). The PEP states multiple times that a context is immutable: * "read-only mapping" * inherit from Mapping, not from MutableMapping But run() does modify the context (or please correct me if I completely misunderstood the PEP! I had to read it 3 times to check if run() mutates or not the context). It would help if the ctx.run() example in the PEP would not only test var.get() but also test ctx.get(var). Or maybe show that the variable value is kept in a second function call, but the variable is "restored" between run() calls. 
The PEP tries hard to hide "context data", which is the only read only thing in the whole PEP, whereas it's a key concept to understand the implementation. I understood that: * _ContextData is immutable * ContextVar.set() creates a new _ContextData and sets it in the current Python thread state * When the called function completes, Context.run() sets its context data to the new context data from the Python thread state: so run() does modify the "immutable" context The distinction between the internal/hidden *immutable* context data and public/visible "mutable" (from my point of view) context is unclear to me in the PEP. The concept of "current context" is not defined in the PEP. In practice, there is no "current context", there is only a "current context data" in the current Python thread. There is no need for a concrete context instance to store variables' values. It's also hard to understand that in the PEP. Why can't Context inherit from MutableMapping? (Allow ctx.set(var, value) and ctx[var] = value.) Is it just to keep the API small: changes should only be made using var.set()? Or maybe Context.run() should really be immutable and return the result of the called function *and* a new context? But I dislike such a theoretical API, since it would be complex to return the new context if the called function raises an exception. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Jan 2 18:45:59 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 00:45:59 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Le 2 janv. 2018 18:57, "Guido van Rossum" a écrit : Oh, the "Specification" section in the PEP is too brief on several of these subjects. It doesn't really specify what var.get() does if the value is not set, nor does it even mention var.get() except in the code examples for var.reset().
It's also subtle that ctx[var] returns the default (if there is one). I suppose it will raise if there isn't one -- resulting in the somewhat surprising behavior where `var in ctx` may be true but `ctx[var]` may raise. And what does it raise? (All these questions are answered by the code, but they should be clearly stated in the PEP.) A variable either has a default value or it does not. Would it make sense to expose the default value as a public read-only attribute (it would be equal to _NO_DEFAULT or Token.MISSING if there is no default) and/or add an is_set() method? is_set() returns true if the variable has a default value or if it was set in the "current context". Currently, a custom sentinel is needed to check if var.get(), ctx.get(var) and ctx[var] would raise an exception or not. Example:

my_sentinel = object()
is_set = (var.get(default=my_sentinel) is not my_sentinel)
# no exception if is_set is true

ContextVar.get() is non-obvious because the variable has an optional default, get() has an optional default parameter, and the variable can be set or not in the current context. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Jan 2 18:51:41 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 00:51:41 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Why does ContextVar.reset(token) do nothing at the second call with the same token? What is the purpose of Token._used? I guess that there is a use case to justify this behaviour. reset() should have a result: true if the variable was restored to its previous state, false if reset() did nothing because the token was already used. And/Or Token should have a read-only "used" property. Victor -------------- next part -------------- An HTML attachment was scrubbed...
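The get() lookup order Victor calls non-obvious is, in the contextvars module that finally shipped: a value set in the current context first, then the call-site default, then the constructor default, and finally LookupError. A short sketch:

```python
from contextvars import ContextVar

plain = ContextVar("plain")              # no constructor default
dflt = ContextVar("dflt", default=42)    # constructor default

try:                                     # unset, no default anywhere
    plain.get()
except LookupError:
    missing = True

assert plain.get("call-site") == "call-site"  # call-site default used
assert dflt.get() == 42                       # constructor default used
assert dflt.get("call-site") == "call-site"   # call-site beats constructor
dflt.set(7)
assert dflt.get("call-site") == 7             # a set value beats everything
```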
URL: From victor.stinner at gmail.com Tue Jan 2 18:55:11 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 00:55:11 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: PEP: "int PyContext_Enter(PyContext *) and int PyContext_Exit(PyContext *) allow to set and restore the context for the current OS thread." What is the difference between Enter and Exit? Why not have a single Py_SetContext() function? Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Jan 2 19:03:00 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 01:03:00 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: What is the behaviour of ContextVar.reset(token) if the token was created from a different variable? Raise an exception?

token = var1.set("value")
var2.reset(token)

The PEP states that Token.var only exists for debug or introspection. Victor Le 3 janv. 2018 00:51, "Victor Stinner" a écrit : Why does ContextVar.reset(token) do nothing at the second call with the same token? What is the purpose of Token._used? I guess that there is a use case to justify this behaviour. reset() should have a result: true if the variable was restored to its previous state, false if reset() did nothing because the token was already used. And/Or Token should have a read-only "used" property. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Jan 2 19:30:18 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 01:30:18 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Hum, it seems like the specification (API) part of the PEP is polluted by its implementation.
The PEP just requires a few minor changes to better describe the behaviour/API instead of insisting on the read-only internal thing, which is specific to the proposed implementation -- just one arbitrary implementation (designed for best performance). IMHO the PEP shouldn't state that a context is read only. From my point of view, it's mutable and it's the mapping holding variable values. There is a current context which holds the current values. Context.run() temporarily swaps the current context with another context. The fact that there is no concrete context instance by default doesn't really matter in terms of API. Victor Le 3 janv. 2018 00:34, "Victor Stinner" a écrit : > > I would really like to invite more people to review this PEP! I expect > I'll be accepting it in the next two weeks, but it needs to go through more > rigorous review. > > I read again the PEP and I am still very confused by Context.run(). > > The PEP states multiple times that a context is immutable: > > * "read-only mapping" > * inherit from Mapping, not from MutableMapping > > But run() does modify the context (or please correct me if I completely misunderstood > the PEP! I had to read it 3 times to check if run() mutates or not the > context). > > It would help if the ctx.run() example in the PEP would not only test > var.get() but also test ctx.get(var). Or maybe show that the variable value > is kept in a second function call, but the variable is "restored" between > run() calls. > > The PEP tries hard to hide "context data", which is the only read only > thing in the whole PEP, whereas it's a key concept to understand the > implementation.
> > I understood that: > > * _ContextData is immutable > * ContextVar.set() creates a new _ContextData and sets it in the current > Python thread state > * When the called function completes, Context.run() sets its context data > to the new context data from the Python thread state: so run() does modify > the "immutable" context > > > The distinction between the internal/hidden *immutable* context data and > public/visible "mutable" (from my point of view) context is unclear to me > in the PEP. > > The concept of "current context" is not defined in the PEP. In practice, > there is no "current context", there is only a "current context data" in > the current Python thread. There is no need for a concrete context instance > to store variables' values. It's also hard to understand that in > the PEP. > > > Why can't Context inherit from MutableMapping? (Allow ctx.set(var, > value) and ctx[var] = value.) Is it just to keep the API small: changes > should only be made using var.set()? > > Or maybe Context.run() should really be immutable and return the result of > the called function *and* a new context? But I dislike such a theoretical API, > since it would be complex to return the new context if the called function > raises an exception. > > Victor > -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Jan 2 21:11:39 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Wed, 3 Jan 2018 11:11:39 +0900 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp> Chris Barker writes: > are we only talking about __repr__ here ???
I am, because I haven't thought about the other methods, except to note I find it hard to imagine a use case *for me* that would require any of them. That sorta disqualifies me from comment. ;-) I assumed others were talking about all of the dataclass autogenerated methods, though. > And the fact that you can optionally tell it not to in some particular case > keeps full flexibility. AFAICS there is no question about that, just about *how* you indicate that you do or don't want autobogotification. > Will you get the "right" __repr__ now if you derive a datacalss from a > dataclass? That would be a nice feature. I hadn't thought about that: I wouldn't call it "nice", I'd say it's a sine-qua-back-to-the-drawing-board. From yselivanov.ml at gmail.com Wed Jan 3 00:05:11 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 03 Jan 2018 05:05:11 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Wed, Jan 3, 2018 at 2:36 AM Victor Stinner wrote: > > I would really like to invite more people to review this PEP! I expect > I'll be accepting it in the next two weeks, but it needs to go through more > rigorous review. > > I read again the PEP and I am still very confused by Context.run(). > > The PEP states multiple times that a context is immutable: > > * "read-only mapping" > * inherit from Mapping, not from MutableMapping > > But run() does modify the context (or please correct me if I completely misunderstood > the PEP! I had to read it 3 times to check if run() mutates or not the > context). > > It would help if the ctx.run() example in the PEP would not only test > var.get() but also test ctx.get(var). Or maybe show that the variable value > is kept in a second function call, but the variable is "restored" between > run() calls. > > The PEP tries hard to hide "context data", which is the only read only > thing in the whole PEP, whereas it's a key concept to understand the > implementation. 
> > I understood that: > > * _ContextData is immutable > * ContextVar.set() creates a new _ContextData and sets it in the current > Python thread state > * When the called function completes, Context.run() sets its context data > to the new context data from the Python thread state: so run() does modify > the "immutable" context > tuples in Python are immutable, but you can have a tuple with a dict as its single element. The tuple is immutable, the dict is mutable. At the C level we have APIs that can mutate a tuple though. Now, tuple is not a direct analogy to Context, but there are some parallels. Context is a container like tuple, with some additional APIs on top. > > The distinction between the internal/hidden *immutable* context data and > public/visible "mutable" (from my point of view) context is unclear to me > in the PEP. > > The concept of "current context" is not defined in the PEP. In practice, > there is no "current context", there is only a "current context data" in > the current Python thread. There is no need for a concrete context instance > to store variables' values. It's also hard to understand that in > the PEP. > > > Why can't Context inherit from MutableMapping? (Allow ctx.set(var, > value) and ctx[var] = value.) Is it just to keep the API small: changes > should only be made using var.set()? > Because that would be confusing to end users.

ctx = copy_context()
ctx[var] = something

What did we just do? Did we modify the 'var' in the code that is currently executing? No, you still need to call Context.run to see the new value for var. Another problem is that MutableMapping defines a __delitem__ method, which I don't want the Context to implement. Deleting variables like that is incompatible with PEP 550, where it's ambiguous (due to the stacked nature of contexts). Now we don't want PEP 550 in 3.7, but I want to keep the door open for its design, in case we want context to work with generators.
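The distinction being debated can be shown concretely with the API that shipped: from the outside a Context is a read-only Mapping (no __setitem__ or __delitem__), but run() executes code *inside* the snapshot, and var.set() calls made there are captured by that Context object while the outer context stays untouched:

```python
from contextvars import ContextVar, copy_context

var = ContextVar("var", default=0)
ctx = copy_context()          # snapshot of the current context

def work():
    var.set(42)               # modifies the context we are running in
    return var.get()

assert ctx.run(work) == 42
assert ctx[var] == 42         # the snapshot captured the change
assert var.get() == 0         # the outer context is untouched
```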
> Or maybe Context.run() should really be immutable and return the result of > the called function *and* a new context? But I dislike such a theoretical API, > since it would be complex to return the new context if the called function > raises an exception. > It can't return a new context because the callable you're running can raise an exception, in which case you'd lose the modifications made prior to the error. PS: I'm on vacation and don't always have an internet connection. Yury > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Jan 3 00:06:43 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 03 Jan 2018 05:06:43 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Wed, Jan 3, 2018 at 3:04 AM Victor Stinner wrote: > What is the behaviour of ContextVar.reset(token) if the token was created > from a different variable? Raise an exception? > > token = var1.set("value") > var2.reset(token) > > The PEP states that Token.var only exists for debug or introspection. > It will raise an error. I'll specify this in the PEP. Yury > Victor > > > Le 3 janv. 2018 00:51, "Victor Stinner" a > écrit : > > Why does ContextVar.reset(token) do nothing at the second call with the same > token? What is the purpose of Token._used? I guess that there is a use > case to justify this behaviour. > > reset() should have a result: true if the variable was restored to its > previous state, false if reset() did nothing because the token was already > used. And/Or Token should have a read-only "used" property.
> > Victor > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Jan 3 00:13:33 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 03 Jan 2018 05:13:33 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: I don't want to expose a SetContext operation because of, again, potential incompatibility with PEP 550, where generators expect to fully control push/pop context operation. Second, Context.run is 100% enough for *any* async framework to add support for PEP 567. And because the PEP is focused just on async, I think that we don't need anything more than 'run'. Third, I have a suspicion that we focus too much on actual Context and Context.run. These APIs are meant for asyncio/twisted/trio/etc maintainers, not for an average Python user. An average person will likely not interact with any of the PEP 567 machinery directly, even when using PEP 567-enabled libraries like numpy/decimal. Yury On Wed, Jan 3, 2018 at 2:56 AM Victor Stinner wrote: > PEP: > "int PyContext_Enter(PyContext *) and int PyContext_Exit(PyContext *) > allow to set and restore the context for the current OS thread." > > What is the difference between Enter and Exit? Why not have a single > Py_SetContext() function? > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed...
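For reference, the kind of isolation Context.run gives a framework can be sketched with the contextvars API as proposed (requires Python 3.7+):

```python
import contextvars

var = contextvars.ContextVar("var", default="base")

def task_step():
    # A framework runs each step of a logical task inside the
    # task's own Context, so set() stays isolated per task.
    var.set("task-local")
    return var.get()

ctx = contextvars.copy_context()
assert ctx.run(task_step) == "task-local"
assert var.get() == "base"       # the caller's context is untouched
assert ctx[var] == "task-local"  # the change was captured in ctx
```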
URL: From guido at python.org Wed Jan 3 00:34:04 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Jan 2018 22:34:04 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 5:30 PM, Victor Stinner wrote: > Hum, it seems like the specification (API) part of the PEP is polluted by > its implementation. The PEP just require a few minor changes to better > describe the behaviour/API instead of insisting on the read only internal > thing which is specific to the proposed implementation which is just one > arbitrary implemention (designed for best performances). > Yeah, we need some more words. I still hope someone proposes some, but if threats fail I will have to try to find time myself. > IMHO the PEP shouldn't state that a context is read only. From my point of > view, it's mutable and it's the mapping holding variable values. There is a > current context which holds the current values. Context.run() switchs > temporarely the current context with another context. The fact that there > is no concrete context instance by default doesn't really matter in term of > API. > I think the issue here is a bit different than Yury's response suggests -- it's more like how a variable containing an immutable value (e.g. a string) can be modified, e.g. x = 'a' x += 'b' In our case the *variable* is the current thread state (in particular the slot therein that holds the context -- this slot can be modified by the C API). The *value* is the Context object. It is a collections.Mapping (or typing.Mapping) which does not have mutating methods. (The mutable type is called MutableMapping.) The *reason* for doing it this way is that Yury doesn't want Context to implement __delitem__, since it would complicate the specification of chained lookups by a future PEP, and chained lookups look to be the best option to extend the Context machinery for generators. 
We're not doing that in Python 3.7 (PEP 550 v2 and later did this but it was too complex) but we might want to do in 3.8 or 3.9 once we are comfortable with PEP 567. (Or not -- it's not clear to me that generators bite decimal users the way tasks do. Coroutines always run on behalf of a task so they're not a problem.) --Guido > Victor > > Le 3 janv. 2018 00:34, "Victor Stinner" a > ?crit : > >> > I would really like to invite more people to review this PEP! I expect >> I'll be accepting it in the next two weeks, but it needs to go through more >> rigorous review. >> >> I read again the PEP and I am still very confused by Context.run(). >> >> The PEP states multiple times that a context is immutable: >> >> * "read-only mapping" >> * inherit from Mapping, not from MutableMapping >> >> But run() does modify the context (or please correct me if I completely misunderstood >> the PEP! I had to read it 3 times to check if run() mutates or not the >> context). >> >> It would help if the ctx.run() example in the PEP would not only test >> var.get() but also test ctx.get(var). Or maybe show that the variable value >> is kept in a second function call, but the variable is "restored" between >> run() calls. >> >> The PEP tries hard to hide "context data", which is the only read only >> thing in the whole PEP, whereas it's a key concept to understand the >> implementation. >> >> I understood that: >> >> * _ContextData is immutable >> * ContextVar.set() creates a new _ContextData and sets it in the current >> Python thread state >> * When the called function completes, Context.run() sets its context data >> to the new context data from the Python thread state: so run() does modify >> the "immutable" context >> >> >> The distinction between the internal/hiden *immutable* context data and >> public/visible "mutable" (from my point of view) context is unclear to me >> in the PEP. >> >> The concept of "current context" is not defined in the PEP. 
In practice, >> there is no "current context", there is only a "current context data" in >> the current Python thread. There is no need for a concrete context instance >> to store variable variables values. It's also hard to understand that in >> the PEP. >> >> >> Why Context could not inherit from MutableMapping? (Allow ctx.set(var, >> value) and ctx [var] = value.) Is it just to keep the API small: changes >> should only be made using var.set()? >> >> Or maybe Context.run() should really be immutable and return the result >> of the called function *and* a new context? But I dislike such theorical API, >> since it would be complex to return the new context if the called function >> raises an exception. >> >> Victor >> > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jan 3 00:38:07 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Jan 2018 22:38:07 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 4:45 PM, Victor Stinner wrote: > Le 2 janv. 2018 18:57, "Guido van Rossum" a ?crit : > > Oh, the "Specification" section in the PEP is too brief on several of > these subjects. It doesn't really specify what var.get() does if the value > is not set, nor does it even mention var.get() except in the code > examples for var.reset(). It's also subtle that ctx[var] returns the > default (if there is one). I suppose it will raise if there isn't one -- > resulting in the somewhat surprising behavior where `var in ctx` may be > true but `ctx[var]` may raise. And what does it raise? (All these questions > are answered by the code, but they should be clearly stated in the PEP.) > > > A variable has or has no default value. Would it make sense to expose the > default value as a public read-only attribute (it would be equal to > _NO_DEFAULT or Token.MISSING if there is no default) and/or add a is_set() > method? 
is_set() returns true if the variable has a default value or if it > was set in the "current context". > > Currently, a custom sentinel is needed to check if var.get(), ctx.get(var) > and ctx[var] would raise an exception or not. Example: > > my_sentinel = object() > is_set = (var.get(default=my_sentinel) is not my_sentinel) > # no exception if is_set is true > > ContextVar.get() is non-obvious because the variable has an optional > default, get() has an optional default parameter, and the variable can be > set or not in the current context. > But is there a common use case? For var.get() I'd rather just pass the default or catch the exception if the flow is different. Using ctx[var] is rare (mostly for printing contexts, and perhaps for explaining var.get()). -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jan 3 00:42:17 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Jan 2018 22:42:17 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 4:51 PM, Victor Stinner wrote: > Why does ContextVar.reset(token) do nothing at the second call with the same > token? What is the purpose of Token._used? I guess that there is a use > case to justify this behaviour. > > reset() should have a result: true if the variable was restored to its > previous state, false if reset() did nothing because the token was already > used. And/Or Token should have a read-only "used" property. > That depends again on the use case. The only real purpose for reset() is to be able to write a context manager that sets and restores a context variable (like in `with decimal.localcontext()`). Handling double resets is about as useful as specifying what happens if __exit__ is called twice. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
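The set-and-restore pattern Guido describes can be sketched as a small context manager (hypothetical variable name, using the proposed contextvars API, Python 3.7+):

```python
import contextvars
from contextlib import contextmanager

precision = contextvars.ContextVar("precision", default=28)

@contextmanager
def local_precision(value):
    # set() returns a Token; reset() restores the previous state,
    # removing the variable from the context if it was not set before.
    token = precision.set(value)
    try:
        yield
    finally:
        precision.reset(token)

assert precision.get() == 28
with local_precision(60):
    assert precision.get() == 60
assert precision.get() == 28
```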
URL: From guido at python.org Wed Jan 3 00:46:57 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Jan 2018 22:46:57 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Do you see any average Python users on this thread? We are trying to pick the PEP apart from the POV of having to use the complicated parts of the API in a framework. Victor's questions are reasonable. On Tue, Jan 2, 2018 at 10:13 PM, Yury Selivanov wrote: > I don't want to expose a SetContext operation because of, again, potential > incompatibility with PEP 550, where generators expect to fully control > push/pop context operation. > > Second, Context.run is 100% enough for *any* async framework to add > support for PEP 567. And because the PEP is focused just on async, I think > that we don't need anything more than 'run'. > > Third, I have a suspicion that we focus too much on actual Context and > Context.run. These APIs are meant for asyncio/twisted/trio/etc maintainers, > not for an average Python user. An average person will likely not interact > with any of the PEP 567 machinery directly, wven when using PEP 567-enabled > libraries like numpy/decimal. > > Yury > > On Wed, Jan 3, 2018 at 2:56 AM Victor Stinner > wrote: > >> PEP: >> "int PyContext_Enter(PyContext *) and int PyContext_Exit(PyContext *) >> allow to set and restore the context for the current OS thread." >> >> What is the difference between Enter and Exit? Why not having a single >> Py_SetContext() function? >> >> Victor >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> yselivanov.ml%40gmail.com >> > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From victor.stinner at gmail.com Wed Jan 3 04:26:28 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 10:26:28 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Le 3 janv. 2018 06:05, "Yury Selivanov" a écrit : tuples in Python are immutable, but you can have a tuple with a dict as its single element. The tuple is immutable, the dict is mutable. At the C level we have APIs that can mutate a tuple though. Now, tuple is not a direct analogy to Context, but there are some parallels. Context is a container like tuple, with some additional APIs on top. Sorry, I don't think that it's a good analogy. Context.run() is a public method accessible in Python which allows modifying the context. A tuple doesn't have such a method. While it's technically possible to modify a tuple or a str at the C level, it's a bad practice leading to complex bugs when it's not done carefully: see https://bugs.python.org/issue30156 where the property_descr_get() optimization was fixed twice but still has a bug. I proposed a PR to remove the hack. Why could Context not inherit from MutableMapping? (Allow ctx.set(var, > value) and ctx[var] = value.) Is it just to keep the API small: changes > should only be made using var.set()? > Because that would be confusing to end users. ctx = copy_context() ctx[var] = something What did we just do? Did we modify the 'var' in the code that is currently executing? No, you still need to call Context.run to see the new value for var. IMHO it's easy to understand that modifying a *copy* of the current context doesn't impact the current context. It's one of the first things to learn when learning Python: a = [1, 2] b = a.copy() b.append(3) assert a == [1, 2] assert b == [1, 2, 3] Another problem is that MutableMapping defines a __delitem__ method, which I don't want the Context to implement.
I prefer to assign a variable to None, since "del var" looks like a C++ destructor, whereas it's more complex than a direct call to the destructor. But it's annoying to have to call a function with Context.run() whereas a context is just a mutable mapping. It seems overkill to me to have to call run() to modify a context variable: run() temporarily changes the context and requires using the indirect ContextVar API, while I know that ContextVar.set() modifies the context. Except for the del corner case, I don't see any technical reason to prevent direct modification of a context. contextvars isn't new, it extends what we already have: the decimal context. And the decimal quick-start documentation shows how to modify a context and then set it as the current context: >>> myothercontext = Context(prec=60, rounding=ROUND_HALF_DOWN) >>> setcontext(myothercontext) >>> Decimal(1) / Decimal(7) Decimal('0.142857142857142857142857142857142857142857142857142857142857') https://docs.python.org/dev/library/decimal.html Well, technically it doesn't modify a context. An example closer to contextvars would be: >>> mycontext = getcontext().copy() >>> mycontext.prec = 60 >>> setcontext(mycontext) >>> Decimal(1) / Decimal(7) Decimal('0.142857142857142857142857142857142857142857142857142857142857') Note: "getcontext().prec = 6" does modify the decimal context directly, and it's the *first* example in the doc. But here contextvars is different since there is no API to get the current context. The lack of an API to directly access the current contextvars context is the main difference with the decimal context, and I'm fine with that. It's easy to see a parallel since a decimal context can be copied using Context.copy(), it also has multiple (builtin) "variables"; it's just that the API is different (decimal context variables are modified as attributes), and it's possible to set a context using decimal.setcontext(). Victor -------------- next part -------------- An HTML attachment was scrubbed...
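Victor's second decimal example, made runnable end to end (saving and restoring the caller's context, which the interactive snippet above does not do):

```python
from decimal import Decimal, getcontext, setcontext

saved = getcontext()          # remember the caller's decimal context
mycontext = saved.copy()      # copy it, as in the example above
mycontext.prec = 60
setcontext(mycontext)         # install the modified copy

result = Decimal(1) / Decimal(7)
# '0.' followed by 60 significant digits of 1/7
assert str(result).startswith("0.142857")
assert len(str(result)) == 62

setcontext(saved)             # restore the caller's context
```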
URL: From yselivanov at gmail.com Wed Jan 3 04:48:45 2018 From: yselivanov at gmail.com (Yury Selivanov) Date: Wed, 3 Jan 2018 12:48:45 +0300 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: > On Jan 3, 2018, at 12:26 PM, Victor Stinner wrote: > > Le 3 janv. 2018 06:05, "Yury Selivanov" a ?crit : > tuples in Python are immutable, but you can have a tuple with a dict as its single element. The tuple is immutable, the dict is mutable. > > At the C level we have APIs that can mutate a tuple though. > > Now, tuple is not a direct analogy to Context, but there are some parallels. Context is a container like tuple, with some additional APIs on top. > > Sorry, I don't think that it's a good analogy. Context.run() is a public method accessible in Python which allows to modify the context. A tuple doesn't have such method. > > While it's technically possible to modify a tuple or a str at C level, it's a bad practice leading to complex bugs when it's not done carefully: see https://bugs.python.org/issue30156 property_descr_get() optimization was fixed twice but still has a bug. I proposed a PR to remove the hack. > >> Why Context could not inherit from MutableMapping? (Allow ctx.set(var, value) and ctx [var] = value.) Is it just to keep the API small: changes should only be made using var.set()? > > Because that would be confusing to end users. > > ctx = copy_context() > ctx[var] = something > > What did we just do? Did we modify the 'var' in the code that is currently executing? No, you still need to call Context.run to see the new value for var. > > IMHO it's easy to understand that modifying a *copy* of the current context doesn't impact the current context. It's one the first thing to learn when learning Python: > > a = [1, 2] > b = a.copy() > b.append(3) > assert a == [1, 2] > assert b == [1, 2, 3] > > Another problem is that MutableMapping defines a __delitem__ method, which i don't want the Context to implement. 
> > I wouldn't be shocked if "del ctx [var]" would raise an exception. > > I almost never use del anyway. I prefer to assign a variable to None, since "del var" looks like C++ destructor whereas it's more complex than a direct call to the destructor. > > But it's annoying to have to call a function with Context.run() whereas context is just a mutable mapping. It seems overkill to me to have to call run() to modify a context variable: Do you have any use case for modifying a variable inside some context? numpy, decimal, or some sort of tracing for http requests or async frameworks like asyncio do not need that. > run() changes temporarely the context and requires to use the indirect ContextVar API, while I know that ContextVar.set() modifies the context. > > Except of del corner case, I don't see any technical reason to prevent direct modification of a context. > > contextvars isn't new, it extends what we already have: decimal context. And decimal quick start documentation shows how to modify a context and then set it as the current context: > I think you are confusing context in decimal and pep 567. Decimal context is a mutable object. We use threading.local to store it. With pep 567 you will use a context variable behind the scenes to store it. I think it's incorrect to compare decimal contexts to pep567 in any way. Yury > >>> myothercontext = Context(prec=60, rounding=ROUND_HALF_DOWN) > >>> setcontext(myothercontext) > >>> Decimal(1) / Decimal(7) > Decimal('0.142857142857142857142857142857142857142857142857142857142857') > > https://docs.python.org/dev/library/decimal.html > > Well, technically it doesn't modify a context. 
An example closer to contextvars would be: > > >>> mycontext = getcontext().copy() > >>> mycontext.prec = 60 > >>> setcontext(mycontext) > >>> Decimal(1) / Decimal(7) > Decimal('0.142857142857142857142857142857142857142857142857142857142857') > > Note: "getcontext().prec = 6" does modify the decimal context directly, and it's the *first* example in the doc. But here contextvars is different since there is no API to get the current context. The lack of an API to directly access the current contextvars context is the main difference with the decimal context, and I'm fine with that. > > It's easy to see a parallel since a decimal context can be copied using Context.copy(), it also has multiple (builtin) "variables"; it's just that the API is different (decimal context variables are modified as attributes), and it's possible to set a context using decimal.setcontext(). > > Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Jan 3 04:49:11 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 10:49:11 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Le 3 janv. 2018 06:34, "Guido van Rossum" a écrit : I think the issue here is a bit different than Yury's response suggests -- it's more like how a variable containing an immutable value (e.g. a string) can be modified, e.g. x = 'a' x += 'b' In our case the *variable* is the current thread state (in particular the slot therein that holds the context -- this slot can be modified by the C API). The *value* is the Context object. It is a collections.Mapping (or typing.Mapping) which does not have mutating methods. (The mutable type is called MutableMapping.) I can see a parallel with a Python namespace, like the globals and locals arguments of exec(): ns = globals().copy() # ctx = copy_context() exec("x = 'a'", ns, ns) # ctx.run(...) ns['x'] += 'b' # Context ???
print(ns ['x']) # print(ctx[x]) The *reason* for doing it this way is that Yury doesn't want Context to implement __delitem__, since it would complicate the specification of chained lookups by a future PEP, and chained lookups look to be the best option to extend the Context machinery for generators. Again, why not just raise an exception on "del ctx[var]"? Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Jan 3 05:04:46 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 3 Jan 2018 11:04:46 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Le 3 janv. 2018 06:38, "Guido van Rossum" a ?crit : But is there a common use case? For var.get() I'd rather just pass the default or catch the exception if the flow is different. Using ctx[var] is rare (mostly for printing contexts, and perhaps for explaining var.get()). I don't think that it would be a common use case. Maybe we don't need is_set(), I'm fine with catching an exception. But for introspection at least, it would help to expose the default as a read-only attribute, no? Another example of a mapping with default value: https://docs.python.org/dev/library/collections.html#collections.defaultdict And defaultdict has a default_factory attribute. The difference here is that default_factory is mandatory. ContextVar would be simpler if the default would be mandatory as well :-) Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov at gmail.com Wed Jan 3 05:47:35 2018 From: yselivanov at gmail.com (Yury Selivanov) Date: Wed, 3 Jan 2018 13:47:35 +0300 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: I think we can expose the default property. If it's not set we can return MISSING. Yury Sent from my iPhone > On Jan 3, 2018, at 1:04 PM, Victor Stinner wrote: > > Le 3 janv. 
2018 06:38, "Guido van Rossum" a ?crit : > But is there a common use case? For var.get() I'd rather just pass the default or catch the exception if the flow is different. Using ctx[var] is rare (mostly for printing contexts, and perhaps for explaining var.get()). > > I don't think that it would be a common use case. Maybe we don't need is_set(), I'm fine with catching an exception. > > But for introspection at least, it would help to expose the default as a read-only attribute, no? > > Another example of a mapping with default value: > > https://docs.python.org/dev/library/collections.html#collections.defaultdict > > And defaultdict has a default_factory attribute. The difference here is that default_factory is mandatory. ContextVar would be simpler if the default would be mandatory as well :-) > > Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jan 3 06:34:08 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 3 Jan 2018 11:34:08 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On 28 December 2017 at 06:08, Yury Selivanov wrote: > This is a second version of PEP 567. Overall, I like the proposal. It's relatively straightforward to follow, and makes sense. One thing I *don't* see in the PEP is an example of how code using thread-local storage should be modified to use context variables. My impression is that it's simply a matter of replacing the TLS calls with equivalent ContextVar calls, but an example might be helpful. Some detail points below. > Rationale > ========= > > Thread-local variables are insufficient for asynchronous tasks that > execute concurrently in the same OS thread. Any context manager that > saves and restores a context value using ``threading.local()`` will > have its context values bleed to other code unexpectedly when used > in async/await code. 
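The "bleed" described in this Rationale can be demonstrated in a few lines (hypothetical handler; requires Python 3.7+ for asyncio.run):

```python
import asyncio
import threading

tls = threading.local()

async def handler(name, value):
    tls.value = value        # per-"task" state kept in thread-local storage
    await asyncio.sleep(0)   # yield to the loop; another task runs meanwhile
    return name, tls.value   # may observe the other task's value

async def main():
    return dict(await asyncio.gather(handler("a", 1), handler("b", 2)))

# Both tasks run on one OS thread, so the last set() wins for everyone:
observed = asyncio.run(main())
assert observed == {"a": 2, "b": 2}
```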
I understand how this could happen, having followed the discussions here, but a (simple) example of the issue might be useful. > A few examples where having a working context local storage for > asynchronous code is desirable: > > * Context managers like ``decimal`` contexts and ``numpy.errstate``. > > * Request-related data, such as security tokens and request > data in web applications, language context for ``gettext``, etc. > > * Profiling, tracing, and logging in large code bases. > > > Introduction > ============ > > The PEP proposes a new mechanism for managing context variables. > The key classes involved in this mechanism are ``contextvars.Context`` > and ``contextvars.ContextVar``. The PEP also proposes some policies > for using the mechanism around asynchronous tasks. > > The proposed mechanism for accessing context variables uses the > ``ContextVar`` class. A module (such as ``decimal``) that wishes to > store a context variable should: > > * declare a module-global variable holding a ``ContextVar`` to > serve as a key; > > * access the current value via the ``get()`` method on the > key variable; > > * modify the current value via the ``set()`` method on the > key variable. > > The notion of "current value" deserves special consideration: > different asynchronous tasks that exist and execute concurrently > may have different values for the same key. This idea is well-known > from thread-local storage but in this case the locality of the value is > not necessarily bound to a thread. Instead, there is the notion of the > "current ``Context``" which is stored in thread-local storage, and > is accessed via ``contextvars.copy_context()`` function. Accessed by copying it? That seems weird to me. I'd expect either that you'd be able to access the current Context directly, *or* that you'd say that the current Context is not directly accessible by the user, but that a copy can be obtained using copy_context. 
But given that the Context is immutable, why the need to copy it? Also, the references to threads in the above are confusing. It says that this is a well-known concept in terms of thread-local storage, but this case is different. It then goes on to say that the current Context is stored in thread-local storage, which gives me the impression that the new idea *is* related to thread-local storage... I think that the fact that a Context is held in thread-local storage is an implementation detail. Assuming I'm right, don't bother mentioning it - simply say that there's a notion of a current Context and leave it at that. > Manipulation of the current ``Context`` is the responsibility of the > task framework, e.g. asyncio. > > A ``Context`` is conceptually a read-only mapping, implemented using > an immutable dictionary. The ``ContextVar.get()`` method does a > lookup in the current ``Context`` with ``self`` as a key, raising a > ``LookupError`` or returning a default value specified in > the constructor. > > The ``ContextVar.set(value)`` method clones the current ``Context``, > assigns the ``value`` to it with ``self`` as a key, and sets the > new ``Context`` as the new current ``Context``. > On first reading, this confused me because I didn't spot that you're saying a *Context* is read-only, but a *ContextVar* has get and set methods. Maybe reword this to say that a Context is a read-only mapping from ContextVars to values. A ContextVar has a get method that looks up its value in the current Context, and a set method that replaces the current Context with a new one that associates the specified value with this ContextVar. (The current version feels confusing to me because it goes into too much detail on how the implementation does this, rather than sticking to the high-level specification) > Specification > ============= > > A new standard library module ``contextvars`` is added with the > following APIs: > > 1.
``copy_context() -> Context`` function is used to get a copy of > the current ``Context`` object for the current OS thread. > > 2. ``ContextVar`` class to declare and access context variables. > > 3. ``Context`` class encapsulates context state. Every OS thread > stores a reference to its current ``Context`` instance. > It is not possible to control that reference manually. > Instead, the ``Context.run(callable, *args, **kwargs)`` method is > used to run Python code in another context. Context.run() came a bit out of nowhere here. Maybe the part from "It is not possible..." should be in the introduction above? Something like the following, covering this and copy_context: The current Context cannot be accessed directly by user code. If the framework wants to run some code in a different Context, the Context.run(callable, *args, **kwargs) method is used to do that. To construct a new context for this purpose, the current context can be copied via the copy_context function, and manipulated prior to the call to run(). > > contextvars.ContextVar > ---------------------- > > The ``ContextVar`` class has the following constructor signature: ``ContextVar(name, *, default=_NO_DEFAULT)``. The ``name`` parameter > is used only for introspection and debug purposes, and is exposed > as a read-only ``ContextVar.name`` attribute. The ``default`` > parameter is optional. Example:: > > # Declare a context variable 'var' with the default value 42. > var = ContextVar('var', default=42) > > (The ``_NO_DEFAULT`` is an internal sentinel object used to > detect if the default value was provided.) My first thought was that default was the context variable's initial value. But if that's what it is, why not call it that? If the default has another effect as well as being the initial value, maybe clarify here what that is? > ``ContextVar.get()`` returns a value for context variable from the > current ``Context``:: > > # Get the value of `var`.
> var.get()
>
> ``ContextVar.set(value) -> Token`` is used to set a new value for
> the context variable in the current ``Context``::
>
> # Set the variable 'var' to 1 in the current context.
> var.set(1)
>
> ``ContextVar.reset(token)`` is used to reset the variable in the
> current context to the value it had before the ``set()`` operation
> that created the ``token``::
>
> assert var.get(None) is None

get doesn't take an argument. Typo?

> token = var.set(1)
> try:
> ...
> finally:
> var.reset(token)
>
> assert var.get(None) is None

same typo?

> ``ContextVar.reset()`` method is idempotent and can be called
> multiple times on the same Token object: second and later calls
> will be no-ops.
>
>
> contextvars.Token
> -----------------
>
> ``contextvars.Token`` is an opaque object that should be used to
> restore the ``ContextVar`` to its previous value, or remove it from
> the context if the variable was not set before. It can be created
> only by calling ``ContextVar.set()``.
>
> For debug and introspection purposes it has:
>
> * a read-only attribute ``Token.var`` pointing to the variable
> that created the token;
>
> * a read-only attribute ``Token.old_value`` set to the value the
> variable had before the ``set()`` call, or to ``Token.MISSING``
> if the variable wasn't set before.
>
> Having the ``ContextVar.set()`` method returning a ``Token`` object
> and the ``ContextVar.reset(token)`` method, allows context variables
> to be removed from the context if they were not in it before the
> ``set()`` call.
>
>
> contextvars.Context
> -------------------
>
> ``Context`` object is a mapping of context variables to values.
>
> ``Context()`` creates an empty context.
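For readers following along: the quoted ``set()``/``reset()``/Token behaviour can be exercised against the ``contextvars`` module as it eventually shipped in Python 3.7. The assertions below are mine, added to show the Token mechanics; note that the released ``reset()`` ended up stricter than the PEP text above, and a Token can only be used once.

```python
from contextvars import ContextVar, Token

var = ContextVar('var')

# No value set and no constructor default: get() would raise
# LookupError, but get(None) returns the per-call default instead.
before = var.get(None)

token = var.set(1)
during = var.get()

# The token records which variable it came from and the prior state.
assert token.var is var
assert token.old_value is Token.MISSING  # 'var' was not set before

var.reset(token)

# reset() removed 'var' from the context entirely, since it was not
# set before the set() call that created the token.
after = var.get(None)

assert (before, during, after) == (None, 1, None)
```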
To get a copy of the current
> ``Context`` for the current OS thread, use the
> ``contextvars.copy_context()`` method::
>
> ctx = contextvars.copy_context()
>
> To run Python code in some ``Context``, use ``Context.run()``
> method::
>
> ctx.run(function)
>
> Any changes to any context variables that ``function`` causes will
> be contained in the ``ctx`` context::
>
> var = ContextVar('var')
> var.set('spam')
>
> def function():
> assert var.get() == 'spam'
>
> var.set('ham')
> assert var.get() == 'ham'
>
> ctx = copy_context()
>
> # Any changes that 'function' makes to 'var' will stay
> # isolated in the 'ctx'.
> ctx.run(function)
>
> assert var.get() == 'spam'
>
> Any changes to the context will be contained in the ``Context``
> object on which ``run()`` is called on.
>
> ``Context.run()`` is used to control in which context asyncio
> callbacks and Tasks are executed. It can also be used to run some
> code in a different thread in the context of the current thread::
>
> executor = ThreadPoolExecutor()
> current_context = contextvars.copy_context()
>
> executor.submit(
> lambda: current_context.run(some_function))
>
> ``Context`` objects implement the ``collections.abc.Mapping`` ABC.
> This can be used to introspect context objects::
>
> ctx = contextvars.copy_context()
>
> # Print all context variables and their values in 'ctx':
> print(ctx.items())
>
> # Print the value of 'some_variable' in context 'ctx':
> print(ctx[some_variable])
>
>
> asyncio
> -------

[...]

>
> C API
> -----
> [...]

I haven't commented on these as they aren't my area of expertise.

> Implementation
> ==============
>
> This section explains high-level implementation details in
> pseudo-code. Some optimizations are omitted to keep this section
> short and clear.

Again, I'm ignoring this as I don't really have an interest in how the facility is implemented.
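The quoted isolation example runs as-is against the released module; combined with the Mapping interface described just above, it can be checked like this (assertions mine):

```python
from contextvars import ContextVar, copy_context

var = ContextVar('var')
var.set('spam')

def function():
    assert var.get() == 'spam'  # the copy starts from the outer state
    var.set('ham')
    assert var.get() == 'ham'

ctx = copy_context()

# Any changes that 'function' makes to 'var' stay isolated in 'ctx'.
ctx.run(function)

assert var.get() == 'spam'  # the outer context is untouched

# Context implements collections.abc.Mapping, so 'ctx' can be
# introspected after the fact:
assert var in ctx
assert ctx[var] == 'ham'
```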
>
> Implementation Notes
> ====================
>
> * The internal immutable dictionary for ``Context`` is implemented
> using Hash Array Mapped Tries (HAMT). They allow for O(log N)
> ``set`` operation, and for O(1) ``copy_context()`` function, where
> *N* is the number of items in the dictionary. For a detailed
> analysis of HAMT performance please refer to :pep:`550` [1]_.

Would it be worth exposing this data structure elsewhere, in case other uses for it exist?

> * ``ContextVar.get()`` has an internal cache for the most recent
> value, which allows to bypass a hash lookup. This is similar
> to the optimization the ``decimal`` module implements to
> retrieve its context from ``PyThreadState_GetDict()``.
> See :pep:`550` which explains the implementation of the cache
> in a great detail.
>

Should the cache (or at least the performance guarantees it implies) be part of the spec? Do we care if other implementations fail to implement a cache?

From olegs at traiana.com Wed Jan 3 05:24:04 2018
From: olegs at traiana.com (Oleg Sivokon)
Date: Wed, 3 Jan 2018 10:24:04 +0000
Subject: [Python-Dev] Possible bug in base64.decode: linebreaks are not ignored
Message-ID: 

Hello,

I've tried reading various RFCs around Base64 encoding, but I couldn't make the ends meet. Yet there is an inconsistency between base64.decodebytes() and base64.decode() in how they handle linebreaks that were used to collate the encoded text.
Below is an example of what I'm talking about:

>>> import base64
>>> foo = base64.encodebytes(b'123456789')
>>> foo
b'MTIzNDU2Nzg5\n'
>>> foo = b'MTIzND\n' + b'U2Nzg5\n'
>>> foo
b'MTIzND\nU2Nzg5\n'
>>> base64.decodebytes(foo)
b'123456789'
>>> from io import BytesIO
>>> bytes_in = BytesIO(foo)
>>> bytes_out = BytesIO()
>>> bytes_in.seek(0)
0
>>> base64.decode(bytes_in, bytes_out)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/somewhere/lib/python3.6/base64.py", line 512, in decode
    s = binascii.a2b_base64(line)
binascii.Error: Incorrect padding
>>> bytes_in = BytesIO(base64.encodebytes(b'123456789'))
>>> bytes_in.seek(0)
0
>>> base64.decode(bytes_in, bytes_out)
>>> bytes_out.getvalue()
b'123456789'

Obviously, I'd expect decodebytes() and decode() both to either accept or to reject the same input.

Thanks.

Oleg

PS. I couldn't register to the bug-tracker (never received an email confirmation, not even in a spam folder), this is why I'm sending it here.
From nas-python at arctrix.com Wed Jan 3 02:53:42 2018
From: nas-python at arctrix.com (Neil Schemenauer)
Date: Wed, 3 Jan 2018 01:53:42 -0600
Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause
In-Reply-To: <20180102203149.vkepsh5da5pon67q@python.ca>
References: <20180102203149.vkepsh5da5pon67q@python.ca>
Message-ID: <20180103075342.io4cfnfv6jwpknud@python.ca>

Generally I think programming language implementers don't get to decide how the language works. You just have to implement it as specified, inconvenient as that might be. However, from a language design perspective, I think there is a good argument that this is a corner of the language we should consider changing.

First, I analyzed over one million lines of Python code with my AST walker and only found this construct being used in four different places. It seems to be extremely rare.

Second, the existence of a pylint warning for it suggests that it is confusing. I did a little more searching using the pylint warning and found these pages:

https://stackoverflow.com/questions/35505624/break-statement-in-finally-block-swallows-exception
http://thegreyblog.blogspot.ca/2011/02/do-not-return-in-finally-block-return.html

So, given the above and that the implementation (both compiler and bytecode evaluator) is pretty complicated, I vote that we should disallow it.
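Neil's checker itself isn't shown in the thread; a minimal sketch of such an AST walker (the function name and output format are mine, not his) might look like:

```python
import ast

def finally_jumps(source):
    """Flag return/break/continue statements that appear lexically
    inside a 'finally' block -- a rough approximation of the checker
    Neil describes."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Try) and node.finalbody:
            for stmt in node.finalbody:
                for sub in ast.walk(stmt):
                    # Note: this naive walk also flags jumps that belong
                    # to a loop or function nested *inside* the finally
                    # block; a real checker would filter those out.
                    if isinstance(sub, (ast.Return, ast.Break, ast.Continue)):
                        found.append((type(sub).__name__.lower(), sub.lineno))
    return found

src = (
    "def f():\n"
    "    try:\n"
    "        return 1\n"
    "    finally:\n"
    "        return 2\n"
)
assert finally_jumps(src) == [('return', 5)]
```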
Regards,

Neil

From levkivskyi at gmail.com Wed Jan 3 12:31:54 2018
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Wed, 3 Jan 2018 18:31:54 +0100
Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses
In-Reply-To: <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp>
References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

I like Guido's proposal, i.e.

    if '__repr__' not in cls.__dict__:
        ... # generate the method

etc. I didn't find an issue to track this. Maybe we should open one?

--
Ivan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com Wed Jan 3 12:37:38 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 3 Jan 2018 18:37:38 +0100
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

Victor:
> ContextVar would be simpler if the default would be mandatory as well :-)

If we modify ContextVar to use None as the default value, the 'default' parameter of ContextVar.get() becomes useless, and the behaviour of ContextVar.get() and context[var] becomes obvious. Token.MISSING could also be removed.

Since it's not possible to delete a variable, would it be crazy to always initialize variables with a value, None by default?

Decimal context is given as an example of contextvars user. Decimal already has a default context. I don't know numpy.errstate: would it make sense to initialize it to None?

If someone really needs the crazy case of an "uninitialized" variable, a custom "UNINITIALIZED = object()" can be used, no?

With these proposed changes, there is no more need to worry about whether a variable is set or not.

It's unclear to me if context.items() contains variables which weren't explicitly set, and so are set to their default value. Same question for the "var in context" test.
If variables always have a value, I would expect that "var in context" is always true, but I don't understand if it's technically possible to implement it or even if it makes sense :-) Maybe my whole proposed change doesn't make sense?

Victor

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com Wed Jan 3 13:17:27 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 3 Jan 2018 19:17:27 +0100
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

> Do you have any use case for modifying a variable inside some context?
> numpy, decimal, or some sort of tracing for http requests or async frameworks like asyncio do not need that.

Maybe I misunderstood how contextvars is supposed to be used. So let me give you an example.

I understand that decimal.py will declare its context variable like this:

---
contextvar = contextvars.ContextVar('decimal', default=Context(...))
---

Later, if I would like to run an asyncio callback with a different decimal context, I would like to write:

---
cb_context = contextvars.copy_context()
decimal_context = cb_context[decimal.contextvar].copy()
decimal_context.prec = 100
cb_context[decimal.contextvar] = decimal_context  # <--- HERE
loop.call_soon(func, context=cb_context)
---

The overall code would behave as:

---
with localcontext() as ctx:
    ctx.prec = 100
    loop.call_soon(func)
---

I don't know if the two code snippets have exactly the same behaviour. I don't want to modify func() to run it with a different decimal context. So I would prefer not to have to call decimal.contextvar.set() or decimal.setcontext() in func(). But I would need contextvars.Context[var] = value to support such a use case.

Decimal contexts are mutable, so modifying the decimal context object directly would impact all contexts, which isn't my intent here.

Victor

-------------- next part --------------
An HTML attachment was scrubbed...
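For what it's worth, with the immutable-Context API as proposed (and as it later shipped in Python 3.7), a close equivalent of Victor's snippet is possible without ``ctx[var] = value``: mutate the copy from the inside by running ``var.set`` in it. A sketch -- the variable name and precision values here are illustrative, not from the PEP:

```python
from contextvars import ContextVar, copy_context
import decimal

# A context variable holding a decimal context, as Victor sketches.
decimal_var = ContextVar('decimal', default=decimal.Context(prec=28))

def func():
    # The callback reads the decimal context from the *current*
    # contextvars context; it does not need to be modified.
    return decimal_var.get().prec

cb_context = copy_context()

# Instead of cb_context[decimal_var] = ..., mutate the copy from
# the inside: run() makes cb_context current, so set() updates it.
cb_context.run(decimal_var.set, decimal.Context(prec=100))

assert cb_context.run(func) == 100   # the callback sees prec=100
assert decimal_var.get().prec == 28  # the outer context is untouched
```

In asyncio terms, ``cb_context`` could then be passed as the ``context=`` argument of ``call_soon``, as in Victor's example.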
URL: 

From eric at trueblade.com Wed Jan 3 13:17:56 2018
From: eric at trueblade.com (Eric V. Smith)
Date: Wed, 3 Jan 2018 13:17:56 -0500
Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses
In-Reply-To: 
References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp>
Message-ID: 

I'll open an issue after I have time to read this thread and comment on it.

-- Eric.

> On Jan 3, 2018, at 12:31 PM, Ivan Levkivskyi wrote:
>
> I like Guido's proposal, i.e.
>
> if '__repr__' not in cls.__dict__:
> ... # generate the method
>
> etc. I didn't find an issue to track this. Maybe we should open one?
>
> --
> Ivan
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com

From guido at python.org Wed Jan 3 14:16:10 2018
From: guido at python.org (Guido van Rossum)
Date: Wed, 3 Jan 2018 12:16:10 -0700
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

I think decimal is a bit of a red herring, because the entire decimal context is stored as a single thread-local variable (_local.__decimal_context__ in _pydecimal.py, where _local is a global threading.local instance; using "___DECIMAL_CTX__" as a key in the C-level dict of thread-locals in the C version).

Nevertheless there is the issue of whether it's better to make contextvars.Context a MutableMapping or to make it an (immutable) Mapping.

From the POV of explaining the implementation, a MutableMapping is simpler. E.g. ContextVar.set() just does `get_context()[self] = value`, and ContextVar.get() is `return get_context()[self]` (some details left out relating to defaults).
Token and reset() are still useful because they make it simpler to write a context manager that restores the previous value, regardless of whether a value was set or not. Context.run() makes a copy of the current context, sets that as the current context, runs the function, and then restores the previous context (the one that it copied). With a truly immutable Context offering only the (immutable) Mapping interface (plus an internal API that returns a new Context that has a different value for one key), ContextVar.set() is a bit more complicated because it has to use set_context() (actually an internal thing that updates the current context in the thread state) and similar for ContextVar.reset(token). (An alternative design is possible where a Context is an immutable-looking wrapper around a plain dict, with private APIs that mutate that dict, but apart from having different invariants about the identities of Context objects it works out about the same from a user's POV.) Anyway, the differences between these are user-visible so we can't make this an implementation detail: We have to choose. Should Context be a MutableMapping or not? Yury strongly favors an immutable Context, and that's what his reference implementation has (https://github.com/python/cpython/pull/5027). His reasoning is that in the future we *might* want to support automatic context management for generators by default (like described in his original PEP 550), and then it's essential to use the immutable version so that "copying" the context when a generator is created or resumed is super fast (and in particular O(1)). I am personally not sure that we'll ever need it but at the same time I'm also not confident that we won't, so I give Yury the benefit of the doubt here -- after all he has spent an enormous amount of time thinking this design through so I value his intuition. 
In addition I agree that for most users the basic interface will be ContextVar, not Context, and the needs of framework authors are easily met by Context.run(). So I think that Yury's desire for an immutable Context will not harm anyone, and hence I support the current design of the PEP. (Though I want some of the details to be written up clearer -- everyone seems to agree with that. :-)

Maybe I should clarify again what run() does. Here's how I think of it in pseudo code:

    def run(self, func, *args, **kwds):
        old = _get_current_context()
        new = old.copy()
        _set_current_context(new)
        try:
            return func(*args, **kwds)
        finally:
            _set_current_context(old)

If you look carefully at the version in the PEP you'll see that it's the same thing, but the PEP inlines the implementations of _get_current_context() and _set_current_context() (which I made up -- these won't be real APIs) through manipulation of the current thread state.

I hope this clarifies everything. --Guido

On Wed, Jan 3, 2018 at 2:26 AM, Victor Stinner wrote:

> On 3 Jan 2018 at 06:05, "Yury Selivanov" wrote:
>
> tuples in Python are immutable, but you can have a tuple with a dict as its single element. The tuple is immutable, the dict is mutable.
>
> At the C level we have APIs that can mutate a tuple though.
>
> Now, tuple is not a direct analogy to Context, but there are some parallels. Context is a container like tuple, with some additional APIs on top.
>
>
> Sorry, I don't think that it's a good analogy. Context.run() is a public method accessible in Python which allows to modify the context. A tuple doesn't have such a method.
>
> While it's technically possible to modify a tuple or a str at C level, it's a bad practice leading to complex bugs when it's not done carefully: see https://bugs.python.org/issue30156 property_descr_get() optimization was fixed twice but still has a bug. I proposed a PR to remove the hack.
>
> Why Context could not inherit from MutableMapping?
(Allow ctx.set(var,
>> value) and ctx[var] = value.) Is it just to keep the API small: changes
>> should only be made using var.set()?
>
> Because that would be confusing to end users.
>
> ctx = copy_context()
> ctx[var] = something
>
> What did we just do? Did we modify the 'var' in the code that is currently executing? No, you still need to call Context.run to see the new value for var.
>
> IMHO it's easy to understand that modifying a *copy* of the current context doesn't impact the current context. It's one of the first things to learn when learning Python:
>
> a = [1, 2]
> b = a.copy()
> b.append(3)
> assert a == [1, 2]
> assert b == [1, 2, 3]
>
> Another problem is that MutableMapping defines a __delitem__ method, which I don't want the Context to implement.
>
> I wouldn't be shocked if "del ctx[var]" would raise an exception.
>
> I almost never use del anyway. I prefer to assign a variable to None, since "del var" looks like a C++ destructor, whereas it's more complex than a direct call to the destructor.
>
> But it's annoying to have to call a function with Context.run() whereas the context is just a mutable mapping. It seems overkill to me to have to call run() to modify a context variable: run() temporarily changes the context and requires using the indirect ContextVar API, while I know that ContextVar.set() modifies the context.
>
> Except for the del corner case, I don't see any technical reason to prevent direct modification of a context.
>
> contextvars isn't new, it extends what we already have: decimal context. And the decimal quick start documentation shows how to modify a context and then set it as the current context:
>
> >>> myothercontext = Context(prec=60, rounding=ROUND_HALF_DOWN)
> >>> setcontext(myothercontext)
> >>> Decimal(1) / Decimal(7)
> Decimal('0.142857142857142857142857142857142857142857142857142857142857')
>
> https://docs.python.org/dev/library/decimal.html
>
> Well, technically it doesn't modify a context.
An example closer to
> contextvars would be:
>
> >>> mycontext = getcontext().copy()
> >>> mycontext.prec = 60
> >>> setcontext(mycontext)
> >>> Decimal(1) / Decimal(7)
> Decimal('0.142857142857142857142857142857142857142857142857142857142857')
>
> Note: "getcontext().prec = 6" does modify the decimal context directly, and it's the *first* example in the doc. But here contextvars is different since there is no API to get the current context. The lack of an API to access the current contextvars context directly is the main difference from the decimal context, and I'm fine with that.
>
> It's easy to see a parallel since the decimal context can be copied using Context.copy(), it also has multiple (builtin) "variables", it's just that the API is different (decimal context variables are modified as attributes), and it's possible to set a context using decimal.setcontext().
>
> Victor

-- 
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From storchaka at gmail.com Wed Jan 3 14:54:15 2018
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 3 Jan 2018 21:54:15 +0200
Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause
In-Reply-To: <20180102203149.vkepsh5da5pon67q@python.ca>
References: <20180102203149.vkepsh5da5pon67q@python.ca>
Message-ID: 

02.01.18 22:31, Neil Schemenauer wrote:
> Serhiy Storchaka wrote:
>> Currently 'break' and 'return' are never used inside 'finally'
>> clause in the stdlib.
>
> See the _recv_bytes() function:
>
> Lib/multiprocessing/connection.py: 316

Thank you Neil! I missed this case because I ran only fast tests, without enabling network tests.

>> I would want to see a third-party code that uses them.
>
> These are the only ones I found so far:
>
> ../gevent/src/gevent/libev/corecffi.py: 147

I haven't found 'finally' clauses in https://github.com/gevent/gevent/blob/master/src/gevent/libev/corecffi.py. Perhaps this code was changed in recent versions.
In any case we now know that this combination does occur (but very rarely) in the wild.

From v+python at g.nevcal.com Wed Jan 3 14:32:11 2018
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 3 Jan 2018 11:32:11 -0800
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

On 1/3/2018 11:16 AM, Guido van Rossum wrote:
> Maybe I should clarify again what run() does. Here's how I think of it
> in pseudo code:
>
> def run(self, func, *args, **kwds):
>     old = _get_current_context()
>     new = old.copy()
>     _set_current_context(new)
>     try:
>         return func(*args, **kwds)
>     finally:
>         _set_current_context(old)
>

I find it interesting that self isn't used in the above pseudo-code. I thought that Context.run() would run the function in the "context" of self, not in the context of a copy of the "current context".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nas-python at arctrix.com Wed Jan 3 16:30:45 2018
From: nas-python at arctrix.com (Neil Schemenauer)
Date: Wed, 3 Jan 2018 15:30:45 -0600
Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause
In-Reply-To: References: <20180102203149.vkepsh5da5pon67q@python.ca>
Message-ID: <20180103213045.eq7mwhzjcv3refui@python.ca>

On 2018-01-03, Serhiy Storchaka wrote:
> I haven't found 'finally' clauses in
> https://github.com/gevent/gevent/blob/master/src/gevent/libev/corecffi.py.
> Perhaps this code was changed in recent versions.

Yes, what I was looking at was git revision bcf4f65e. I reran my AST checker and found this:

./src/gevent/_ffi/loop.py: 181: return inside finally

> In any case we now know that this combination does occur (but
> very rarely) in the wild.

Looks like it. If we do want to seriously consider changing the grammar, I will download more packages from PyPI and check them.

BTW, ./src/gevent/threadpool.py doesn't compile with 3.7 because it uses 'async' as a variable name.
So either they didn't notice the deprecation warnings or they didn't care to update their code.

Regards,

Neil

From guido at python.org Wed Jan 3 17:01:49 2018
From: guido at python.org (Guido van Rossum)
Date: Wed, 3 Jan 2018 15:01:49 -0700
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 3, 2018 at 12:32 PM, Glenn Linderman wrote:
> On 1/3/2018 11:16 AM, Guido van Rossum wrote:
>
> Maybe I should clarify again what run() does. Here's how I think of it in
> pseudo code:
>
> def run(self, func, *args, **kwds):
>     old = _get_current_context()
>     new = old.copy()
>     _set_current_context(new)
>     try:
>         return func(*args, **kwds)
>     finally:
>         _set_current_context(old)
>
> I find it interesting that self isn't used in the above pseudo-code. I
> thought that Context.run() would run the function in the "context" of self,
> not in the context of a copy of "current context".

Heh, you're right, I forgot about that. It should be more like this:

    def run(self, func, *args, **kwds):
        old = _get_current_context()
        _set_current_context(self)  # <--- changed line
        try:
            return func(*args, **kwds)
        finally:
            _set_current_context(old)

This version, like the PEP, assumes that the Context object is truly immutable (not just in name) and that you should call it like this:

    contextvars.copy_context().run(func, )

The PEP definitely needs to be clearer about this -- right now one has to skip back and forth between the specification and the implementation (and sometimes the introduction) to figure out how things really work. Help is wanted!

-- 
--Guido van Rossum (python.org/~guido)

-------------- next part --------------
An HTML attachment was scrubbed...
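A consequence of this corrected picture, and of the implementation as it was eventually released, is that a Context object accumulates changes across successive ``run()`` calls -- which is how asyncio can reuse one context per task. A small demonstration (assertions mine):

```python
from contextvars import ContextVar, copy_context

counter = ContextVar('counter', default=0)

def increment():
    counter.set(counter.get() + 1)
    return counter.get()

ctx = copy_context()

# Each run() call picks up the changes made by the previous one.
assert ctx.run(increment) == 1
assert ctx.run(increment) == 2
assert ctx[counter] == 2

# The thread's own context never saw any of it.
assert counter.get() == 0
```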
URL: From guido at python.org Wed Jan 3 17:15:26 2018 From: guido at python.org (Guido van Rossum) Date: Wed, 3 Jan 2018 15:15:26 -0700 Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause In-Reply-To: <20180103213045.eq7mwhzjcv3refui@python.ca> References: <20180102203149.vkepsh5da5pon67q@python.ca> <20180103213045.eq7mwhzjcv3refui@python.ca> Message-ID: I'm sorry, I don't think more research can convince me either way. I want all three of return/break/continue to work inside finally clauses, despite there being few use cases. On Wed, Jan 3, 2018 at 2:30 PM, Neil Schemenauer wrote: > On 2018-01-03, Serhiy Storchaka wrote: > > I haven't found 'finally' clauses in > > https://github.com/gevent/gevent/blob/master/src/gevent/ > libev/corecffi.py. > > Perhaps this code was changed in recent versions. > > Yes, I was looking at was git revision bcf4f65e. I reran my AST > checker and found this: > > ./src/gevent/_ffi/loop.py: 181: return inside finally > > > In any case we now know that this combination is occurred (but > > very rarely) in the wild. > > Looks like it. If we do want to seriously consider changing the > grammar, I will download more packages of PyPI and check them. > > BTW, ./src/gevent/threadpool.py doesn't compile with 3.7 because it > uses 'async' as a variable name. So either they didn't notice the > deprecation warnings or they didn't care to update their code. > > Regards, > > Neil > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
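The pages Neil linked earlier describe the behaviour at issue here: a ``return`` or ``break`` in a ``finally`` clause silently discards an in-flight return value or exception. A quick demonstration of the semantics being kept:

```python
def f():
    try:
        return 1
    finally:
        return 2  # overrides the pending return value

assert f() == 2

def g():
    for i in range(3):
        try:
            raise ValueError('lost')
        finally:
            break  # swallows the in-flight exception
    return 'survived'

assert g() == 'survived'  # the ValueError never propagates
```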
URL: 

From victor.stinner at gmail.com Wed Jan 3 18:25:56 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 00:25:56 +0100
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: 
References: 
Message-ID: 

2018-01-03 23:01 GMT+01:00 Guido van Rossum :
> Heh, you're right, I forgot about that. It should be more like this:
>
> def run(self, func, *args, **kwds):
>     old = _get_current_context()
>     _set_current_context(self)  # <--- changed line
>     try:
>         return func(*args, **kwds)
>     finally:
>         _set_current_context(old)
>
> This version, like the PEP, assumes that the Context object is truly
> immutable (not just in name) and that you should call it like this:
>
> contextvars.copy_context().run(func, )

I don't see how asyncio would use Context.run() to keep the state (variables values) between callbacks and tasks, if run() is "stateless": forgets everything at exit.

I asked if it would be possible to modify run() to return a new context object with the new state, but Yury confirmed that it's not doable:

Yury:
> [Context.run()] can't return a new context because the callable you're running can raise an exception. In which case you'd lose modifications prior to the error.

Guido:
> Yury strongly favors an immutable Context, and that's what his reference implementation has (https://github.com/python/cpython/pull/5027). His reasoning is that in the future we *might* want to support automatic context management for generators by default (like described in his original PEP 550), and then it's essential to use the immutable version so that "copying" the context when a generator is created or resumed is super fast (and in particular O(1)).

To get acceptable performance, PEP 550 and 567 require O(1) cost when copying a context, since the API requires copying contexts frequently (in asyncio, each task has its own private context, and creating a task copies the current context). Yury proposed to use "Hash Array Mapped Tries (HAMT)" to get O(1) copy.
Each ContextVar.set() change creates a *new* HAMT. Extract of the PEP 567:

---
def set(self, value):
    ts : PyThreadState = PyThreadState_Get()
    data : _ContextData = ts.context_data

    try:
        old_value = data.get(self)
    except KeyError:
        old_value = Token.MISSING

    ts.context_data = data.set(self, value)
    return Token(self, old_value)
---

The link between ContextVar, Context and HAMT (called "context data" in the PEP 567) is non-obvious:

* ContextVar.set() creates a new HAMT from PyThreadState_Get().context_data and writes the new one into PyThreadState_Get().context_data -- technically, it works on thread-local storage (TLS)

* Context.run() doesn't set the "current context": in practice, it sets its internal "context data" as the current context data, and then saves the *new* context data in its own context data

PEP 567:

---
def run(self, callable, *args, **kwargs):
    ts : PyThreadState = PyThreadState_Get()
    saved_data : _ContextData = ts.context_data

    try:
        ts.context_data = self._data
        return callable(*args, **kwargs)
    finally:
        self._data = ts.context_data
        ts.context_data = saved_data
---

The main key of the PEP 567 implementation is that there is no "current context" in practice. There is only a private *current* context data.
Victor From guido at python.org Wed Jan 3 18:43:31 2018 From: guido at python.org (Guido van Rossum) Date: Wed, 3 Jan 2018 16:43:31 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Oh, dang, I forgot about this. ContextVar.set() modifies the current Context in-place using a private API. In the PEP, asyncio copies the Context once and then calls run() repeatedly (for each _step call). So run() isn't stateless, it just saves and restores the notion of the current context (in the thread state). I don't have time right now to respond in more detail, sorry for shedding darkness. :-( Hopefully I'll have time Friday. On Wed, Jan 3, 2018 at 4:25 PM, Victor Stinner wrote: > 2018-01-03 23:01 GMT+01:00 Guido van Rossum : > > Heh, you're right, I forgot about that. It should be more like this: > > > > def run(self, func, *args, **kwds): > > old = _get_current_context() > > _set_current_context(self) # <--- changed line > > try: > > return func(*args, **kwds) > > finally: > > _set_current_context(old) > > > > This version, like the PEP, assumes that the Context object is truly > > immutable (not just in name) and that you should call it like this: > > > > contextvars.copy_context().run(func, ) > > I don't see how asyncio would use Context.run() to keep the state > (variables values) between callbacks and tasks, if run() is > "stateless": forgets everything at exit. > > I asked if it would be possible to modify run() to return a new > context object with the new state, but Yury confirmed that it's not > doable: > > Yury: > > [Context.run()] can't return a new context because the callable you're > running can raise an exception. In which case you'd lose modifications > prior to the error. > > Guido: > > Yury strongly favors an immutable Context, and that's what his reference > implementation has (https://github.com/python/cpython/pull/5027). 
> > His reasoning is that in the future we *might* want to support automatic context management for generators by default (like described in his original PEP 550), and then it's essential to use the immutable version so that "copying" the context when a generator is created or resumed is super fast (and in particular O(1)).
>
> To get acceptable performance, PEP 550 and 567 require O(1) cost when copying a context, since the API requires copying contexts frequently (in asyncio, each task has its own private context; creating a task copies the current context). Yury proposed to use "Hash Array Mapped Tries (HAMT)" to get O(1) copy.
>
> Each ContextVar.set() change creates a *new* HAMT. Extract of the PEP 567:
> ---
> def set(self, value):
>     ts : PyThreadState = PyThreadState_Get()
>     data : _ContextData = ts.context_data
>
>     try:
>         old_value = data.get(self)
>     except KeyError:
>         old_value = Token.MISSING
>
>     ts.context_data = data.set(self, value)
>     return Token(self, old_value)
> ---
>
> The link between ContextVar, Context and HAMT (called "context data" in the PEP 567) is non-obvious:
>
> * ContextVar.set() creates a new HAMT from PyThreadState_Get().context_data and writes the new one into PyThreadState_Get().context_data -- technically, it works on thread-local storage (TLS)
> * Context.run() doesn't set the "current context": in practice, it sets its internal "context data" as the current context data, and then saves the *new* context data in its own context data
>
> PEP 567:
> ---
> def run(self, callable, *args, **kwargs):
>     ts : PyThreadState = PyThreadState_Get()
>     saved_data : _ContextData = ts.context_data
>
>     try:
>         ts.context_data = self._data
>         return callable(*args, **kwargs)
>     finally:
>         self._data = ts.context_data
>         ts.context_data = saved_data
> ---
>
> The key point of the PEP 567 implementation is that there is no "current context" in practice. There is only a private *current* context data.
>
> Not having get_current_context() allows the trick of context data handled by a TLS. Otherwise, I'm not sure that it would be possible to synchronize a Context object with a TLS variable.
>
> From the user point of view, Context.run() does modify the context. After the call, variable values have changed. A second run() call gives you the updated context.
>
> I don't think that a mutable context would have an impact on performance, since copying "context data" will still have a cost of O(1). IMHO it's just a matter of taste for the API.
>
> Or maybe I missed something.
>
> Victor

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From victor.stinner at gmail.com  Wed Jan  3 18:44:10 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 00:44:10 +0100
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID:

Ok, I finally got access to a computer and I was able to test the PEP 567 implementation: see my code snippet below.

The behaviour is more tricky than what I expected. While running context.run(), the context object is out of sync with the "current context". It's only synchronized again at run() exit. So ContextVar.set() doesn't immediately modify the "current context" object (set by Context.run()).

Ok, and now something completely different! What if Context loses its whole mapping API and becomes a "black box" object with only a run() method and no attributes? It can be documented as an object mapping context variables to their values, but without giving access to these values directly. It would avoid having to explain the weird run() behaviour (context out of sync). It would avoid having to decide if it's a mutable or immutable mapping. It would avoid having to explain the internal O(1) copy using HAMT.
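A minimal sketch of the "black box" object described above might look like the following. This is a hypothetical API for illustration, not the PEP's: there is no mapping protocol at all, and the only entry point is run().

```python
_current_data = {}   # stand-in for the thread-local "current context data"


class Context:
    """Opaque mapping of context variables to values: no __getitem__,
    no items(), no mutation API -- the only operation is run()."""

    def __init__(self, data=None):
        self.__data = dict(data or {})   # name-mangled: deliberately hidden

    def run(self, func, *args, **kwargs):
        global _current_data
        saved, _current_data = _current_data, dict(self.__data)
        try:
            return func(*args, **kwargs)
        finally:
            self.__data = _current_data   # fold changes back in at exit
            _current_data = saved


class ContextVar:
    def __init__(self, name, default=None):
        self.name, self.default = name, default

    def set(self, value):
        _current_data[self] = value

    def get(self):
        return _current_data.get(self, self.default)
```

With this shape the "out of sync" question cannot even be asked, because the only way to observe a value stored in a Context is to run code inside it: `ctx.run(v.get)`.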
Code testing Context.run():
---
import contextvars

name = contextvars.ContextVar('name', default='name')

def assertNotIn(var, context):
    try:
        context[var]
    except LookupError:
        pass
    else:
        raise AssertionError("name is set in context")


def func1():
    name.set('yury')
    assert name.get() == 'yury'
    assertNotIn(name, context)

def func2():
    assert name.get() == 'yury'
    assert context[name] == 'yury'

context = contextvars.copy_context()

assert name.get() == 'name'
assertNotIn(name, context)

context.run(func1)

assert name.get() == 'name'
assert context[name] == 'yury'

context.run(func2)

assert name.get() == 'name'
assert context[name] == 'yury'
---

Victor

From guido at python.org  Wed Jan  3 18:49:31 2018
From: guido at python.org (Guido van Rossum)
Date: Wed, 3 Jan 2018 16:49:31 -0700
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID:

That was actually Yury's original design, but I insisted on a Mapping so you can introspect it. (There needs to be some way to introspect a Context, for debugging.) Again, more later. :-(

On Wed, Jan 3, 2018 at 4:44 PM, Victor Stinner wrote:
> Ok, I finally got access to a computer and I was able to test the PEP 567 implementation: see my code snippet below.
>
> The behaviour is more tricky than what I expected. While running context.run(), the context object is out of sync with the "current context". It's only synchronized again at run() exit. So ContextVar.set() doesn't immediately modify the "current context" object (set by Context.run()).
>
> Ok, and now something completely different! What if Context loses its whole mapping API and becomes a "black box" object with only a run() method and no attributes? It can be documented as an object mapping context variables to their values, but without giving access to these values directly. It would avoid having to explain the weird run() behaviour (context out of sync). It would avoid having to decide if it's a mutable or immutable mapping.
> It would avoid having to explain the internal O(1) copy using HAMT.
>
> Code testing Context.run():
> ---
> import contextvars
>
> name = contextvars.ContextVar('name', default='name')
>
> def assertNotIn(var, context):
>     try:
>         context[var]
>     except LookupError:
>         pass
>     else:
>         raise AssertionError("name is set in context")
>
>
> def func1():
>     name.set('yury')
>     assert name.get() == 'yury'
>     assertNotIn(name, context)
>
> def func2():
>     assert name.get() == 'yury'
>     assert context[name] == 'yury'
>
> context = contextvars.copy_context()
>
> assert name.get() == 'name'
> assertNotIn(name, context)
>
> context.run(func1)
>
> assert name.get() == 'name'
> assert context[name] == 'yury'
>
> context.run(func2)
>
> assert name.get() == 'name'
> assert context[name] == 'yury'
> ---
>
> Victor

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From greg at krypto.org  Wed Jan  3 18:37:23 2018
From: greg at krypto.org (Gregory P. Smith)
Date: Wed, 03 Jan 2018 23:37:23 +0000
Subject: [Python-Dev] Possible bug in base64.decode: linebreaks are not ignored
In-Reply-To:
References:
Message-ID:

I opened https://bugs.python.org/issue32491 so this wouldn't be lost, under the assumption that you'll eventually resolve your bugtracker account situation. :)

I don't know what the correct behavior *should* be, but I agree that it seems odd for decode to behave differently from decodebytes.

-gps

On Wed, Jan 3, 2018 at 8:00 AM Oleg Sivokon wrote:
> Hello,
>
> I've tried reading various RFCs around Base64 encoding, but I couldn't make ends meet. Yet there is an inconsistency between base64.decodebytes() and base64.decode() in how they handle linebreaks that were used to collate the encoded text.
> Below is an example of what I'm talking about:
>
> >>> import base64
> >>> foo = base64.encodebytes(b'123456789')
> >>> foo
> b'MTIzNDU2Nzg5\n'
> >>> foo = b'MTIzND\n' + b'U2Nzg5\n'
> >>> foo
> b'MTIzND\nU2Nzg5\n'
> >>> base64.decodebytes(foo)
> b'123456789'
> >>> from io import BytesIO
> >>> bytes_in = BytesIO(foo)
> >>> bytes_out = BytesIO()
> >>> bytes_in.seek(0)
> 0
> >>> base64.decode(bytes_in, bytes_out)
> Traceback (most recent call last):
>   File "", line 1, in
>   File "/somewhere/lib/python3.6/base64.py", line 512, in decode
>     s = binascii.a2b_base64(line)
> binascii.Error: Incorrect padding
> >>> bytes_in = BytesIO(base64.encodebytes(b'123456789'))
> >>> bytes_in.seek(0)
> 0
> >>> base64.decode(bytes_in, bytes_out)
> >>> bytes_out.getvalue()
> b'123456789'
>
> Obviously, I'd expect decodebytes() and decode() both to either accept or to reject the same input.
>
> Thanks.
>
> Oleg
>
> PS. I couldn't register to the bug-tracker (never received an email confirmation, not even in a spam folder), which is why I'm sending it here.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/greg%40krypto.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From victor.stinner at gmail.com  Wed Jan  3 19:09:42 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 01:09:42 +0100
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID:

2018-01-04 0:44 GMT+01:00 Victor Stinner :
> The behaviour is more tricky than what I expected. While running context.run(), the context object is out of sync with the "current context". It's only synchronized again at run() exit. So ContextVar.set() doesn't immediately modify the "current context" object (set by Context.run()).

A better description of Context.run() behaviour would be: "Creates a copy of the context and sets it as the current context. Once the function completes, updates the context from the copy." This description explains why run() doesn't immediately update the context object.
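With the contextvars module as it later shipped in Python 3.7, the two-step behaviour described in this message is directly observable. A small demo (not from the thread):

```python
import contextvars

v = contextvars.ContextVar('v', default='outer')
ctx = contextvars.copy_context()   # step 1: copy the current context

def func():
    v.set('inner')
    return v.get()

assert ctx.run(func) == 'inner'
assert v.get() == 'outer'   # the caller's context was never touched
assert ctx[v] == 'inner'    # step 2: the copy was updated when run() returned
```

A second `ctx.run()` call would resume from the updated values, which matches "a second run() call gives you the updated context" from earlier in the thread.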
Victor

From nas-python at arctrix.com  Wed Jan  3 20:05:27 2018
From: nas-python at arctrix.com (Neil Schemenauer)
Date: Wed, 3 Jan 2018 19:05:27 -0600
Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause
In-Reply-To:
References: <20180102203149.vkepsh5da5pon67q@python.ca> <20180103213045.eq7mwhzjcv3refui@python.ca>
Message-ID: <20180104010527.fpfdxplya52xcewg@python.ca>

On 2018-01-03, Guido van Rossum wrote:
> I'm sorry, I don't think more research can convince me either way. I want all three of return/break/continue to work inside finally clauses, despite there being few use cases.

That's fine. The history of 'continue' inside 'finally' is interesting. The restriction dates back to at least when Jeremy committed the AST-based compiler (I have fond memories of hacking on it with Armin Rigo and others at a Python core sprint). Going further back, I looked at 1.5.2 and there is this comment in compile.c:

    TO DO:
    ...
    XXX Allow 'continue' inside try-finally

So if we allow 'continue' we will be knocking off a nearly 20-year-old todo item. ;-)

For giggles, I unpacked a Python 0.9.1 tarball. The source code is all under 'src' in that version. There doesn't seem to be a restriction on 'continue', but only because the grammar doesn't include it! Without doing more research, I think the restriction could be as old as the 'continue' keyword.

BTW, the bytecode structure for try/except shown in the compile.c comments is very similar to what is currently generated. It is quite remarkable how well your initial design and implementation have stood the test of time. Thank you for making it open source.
Regards,

  Neil

From njs at pobox.com  Wed Jan  3 20:35:56 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 3 Jan 2018 17:35:56 -0800
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID:

On Wed, Jan 3, 2018 at 3:44 PM, Victor Stinner wrote:
> Ok, I finally got access to a computer and I was able to test the PEP 567 implementation: see my code snippet below.
>
> The behaviour is more tricky than what I expected. While running context.run(), the context object is out of sync with the "current context". It's only synchronized again at run() exit. So ContextVar.set() doesn't immediately modify the "current context" object (set by Context.run()).

To me this sounds like an oversight (= bug), not intended behavior. At the conceptual level, I think what we want is:

- Context is a mutable object representing a mapping
- BUT it doesn't allow mutation through the MutableMapping interface; instead, the only way to mutate it is by calling Context.run and then ContextVar.set(). Funneling all 'set' operations through a single place makes it easier to do clever caching tricks, and it lets us avoid dealing with operations that we don't want here (like 'del') just because they happen to be in the MutableMapping interface.
- OTOH we do implement the (read-only) Mapping interface because there's no harm in it and it's probably useful for debuggers.

(Note that I didn't say anything about HAMTs here, because that's an orthogonal implementation detail. It would make perfect sense to have Context be an opaque wrapper around a regular dict; it would just give different performance trade-offs.)

-n

--
Nathaniel J.
Smith -- https://vorpus.org

From victor.stinner at gmail.com  Wed Jan  3 20:42:50 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 02:42:50 +0100
Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context
Message-ID:

Hi,

It seems like many people, including myself, are confused by the lack of a concrete current context in the PEP 567 (contextvars). But it isn't difficult to implement the current context (I implemented it, see below). It might have a *minor* impact on performance, but the Context mapping API is supposed to only be used for introspection, according to Yury.

With contextvars.get_context(), it becomes possible to introspect the current context, rather than a copy, and it becomes even more obvious that a context is mutable ;-)

vstinner at apu$ ./python
>>> import contextvars
>>> ctx = contextvars.get_context()
>>> name = contextvars.ContextVar('name', default='victor')
>>> print(list(ctx.items()))
[]
>>> name.set('yury')
>>> print(list(ctx.items()))
[(, 'yury')]

With my changes, the running context remains up to date in Context.run().
Example:
---
import contextvars

name = contextvars.ContextVar('name', default='victor')

def func():
    name.set('yury')
    print(f"context[name]: {context.get(name)}")
    print(f"name in context: {name in context}")

context = contextvars.copy_context()
context.run(func)
---

Output:
---
context[name]: yury
name in context: True
---

Compare it to the output without my changes:
---
context[name]: None
name in context: False
---

If we have contextvars.get_context(), maybe contextvars.copy_context() can be removed and a new Context.copy() method added instead:

    new_context = contextvars.get_context().copy()

***

I implemented contextvars.get_context() which returns the current context:
https://github.com/vstinner/cpython/commit/1e5ee71c15e2b1387c888d6eca2b08ef14595130
from https://github.com/vstinner/cpython/commits/current_context

I added a context thread-local storage (TLS):

    class PyThreadState:
        context: Context  # can be None
        context_data: _ContextData

I modified Context mapping API to use the context variables from the current thread state if it's the current thread state. PyContext_Enter() now not only sets PyThreadState.context_data, but also sets PyThreadState.context to itself.
Pseudo-code for Context.get():
---
class Context(collections.abc.Mapping):
    def __init__(self):
        self._data = _ContextData()

    def _get_data(self):
        ts : PyThreadState = PyThreadState_Get()
        if ts.context is self:
            return ts.context_data
        else:
            return self._data

    def get(self, var):
        # FIXME: implement default
        data = self._get_data()
        return data.get(var)
---

And Context.run() is modified to also set the context TLS:
---
def run(self, callable, *args, **kwargs):
    ts : PyThreadState = PyThreadState_Get()
    saved_context : Optional[Context] = ts.context  # NEW
    saved_data : _ContextData = ts.context_data

    try:
        ts.context = self  # NEW
        ts.context_data = self._data
        return callable(*args, **kwargs)
    finally:
        self._data = ts.context_data
        ts.context = saved_context  # NEW
        ts.context_data = saved_data
---

Victor

From victor.stinner at gmail.com  Wed Jan  3 21:12:10 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 03:12:10 +0100
Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context
In-Reply-To:
References:
Message-ID:

Victor:
> I modified Context mapping API to use the context variables from the current thread state if it's the current thread state.

Oops, I mean: "if it's the current context".

Nathaniel:
"""
- BUT it doesn't allow mutation through the MutableMapping interface; instead, the only way to mutate it is by calling Context.run and then ContextVar.set(). Funneling all 'set' operations through a single place makes it easier to do clever caching tricks, and it lets us avoid dealing with operations that we don't want here (like 'del') just because they happen to be in the MutableMapping interface.
"""

If a context knows that it's the current context, Context.set() can delegate the change to ContextVar.set(), since the Context accesses the thread-local storage directly in this case (with my suggested changes), and so the optimization is kept. If the context is not the current context, the cache doesn't have to be invalidated.
Moreover, PyContext_Enter() and PyContext_Exit() already increase ts->contextvars_stack_ver, which indirectly invalidates cached values.

Nathaniel: Would you be ok with implementing the MutableMapping API if the optimization is kept? "del context[var]" would raise a TypeError('Context' object doesn't support item deletion) exception, as it does currently.

Victor

From njs at pobox.com  Wed Jan  3 21:15:32 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 3 Jan 2018 18:15:32 -0800
Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context
In-Reply-To:
References:
Message-ID:

On Wed, Jan 3, 2018 at 5:42 PM, Victor Stinner wrote:
> Hi,
>
> It seems like many people, including myself, are confused by the lack of a concrete current context in the PEP 567 (contextvars). But it isn't difficult to implement the current context (I implemented it, see below).

The problem with such an API is that it doesn't work (or at the very least creates a lot of complications) in a potential future PEP 550 world, where the "current context" becomes something like a ChainMap-of-Contexts instead of just the last Context that had run() called on it. This isn't a big problem for contextvars.get_context(), which returns a snapshot of the current context -- in a PEP 550 world it would return a snapshot of the current "effective" (flattened) context.

Maybe it would help a little to rename get_context() to something like snapshot_context()?

-n

--
Nathaniel J. Smith -- https://vorpus.org

From greg.ewing at canterbury.ac.nz  Wed Jan  3 19:26:12 2018
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 04 Jan 2018 13:26:12 +1300
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID: <5A4D74A4.4040903@canterbury.ac.nz>

Guido van Rossum wrote:
> There needs to be some way to introspect a Context, for debugging.

It could have a read-only introspection interface without having to be a full Mapping.
--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jan  3 19:17:08 2018
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 04 Jan 2018 13:17:08 +1300
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID: <5A4D7284.9000706@canterbury.ac.nz>

Guido van Rossum wrote:
> contextvars.copy_context().run(func, )

If contexts are immutable, why is there something called copy_context?

--
Greg

From waultah at gmail.com  Wed Jan  3 20:25:41 2018
From: waultah at gmail.com (Dmitry Kazakov)
Date: Thu, 04 Jan 2018 01:25:41 +0000
Subject: [Python-Dev] Discussion about the proposed ignore_modules argument for traceback functions
Message-ID:

Hello! I'd like to draw some attention to the feature(s) proposed in issue 31299: https://bugs.python.org/issue31299

It's a dependency of another issue, it still needs discussion, and it hasn't received any comments from committers since last September. Personally, I think that the general traceback-cleaning facility proposed by Antoine should be accompanied by a similar non-destructive feature in the traceback module. If it's decided that the latter makes sense to implement, I'm willing to revive (and update) the PR I closed earlier, in time for the approaching Python 3.7 feature code cutoff.

Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From victor.stinner at gmail.com  Wed Jan  3 21:45:46 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 4 Jan 2018 03:45:46 +0100
Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context
In-Reply-To:
References:
Message-ID:

2018-01-04 3:15 GMT+01:00 Nathaniel Smith :
> The problem with such an API is that it doesn't work (or at the very least creates a lot of complications) in a potential future PEP 550 world, (...)

Hum, it's annoying that many design choices of the PEP 567 are motivated by the hypothetical acceptance of the PEP 550, whereas that PEP is not scheduled for Python 3.7.
What if the PEP 550 is rejected because it's too complicated? We might have a nicer PEP 567 API without the big shadow of the PEP 550.

Is it possible to run a generator in a context explicitly using the PEP 567 Context.run() API?

--

I never looked seriously at the PEP 550, because I'm scared by its length and its complexity :-) While I easily see use cases for asyncio web frameworks with the PEP 567, it's harder for me to see the use cases of the PEP 550. I never had to keep a context in a generator. I guess that developers who have to do that have learnt to work around this Python limitation, maybe by manually passing the context to the generator and reapplying it at each step?

Victor

From njs at pobox.com  Thu Jan  4 02:52:34 2018
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 3 Jan 2018 23:52:34 -0800
Subject: [Python-Dev] Discussion about the proposed ignore_modules argument for traceback functions
In-Reply-To:
References:
Message-ID:

On Jan 3, 2018 18:38, "Dmitry Kazakov" wrote:
> Hello! I'd like to draw some attention to the feature(s) proposed in issue 31299: https://bugs.python.org/issue31299
>
> It's a dependency of another issue, it still needs discussion, and it hasn't received any comments from committers since last September. Personally, I think that the general traceback-cleaning facility proposed by Antoine should be accompanied by a similar non-destructive feature in the traceback module. If it's decided that the latter makes sense to implement, I'm willing to revive (and update) the PR I closed earlier, in time for the approaching Python 3.7 feature code cutoff.

Regarding the ability to mutate a traceback to add/remove frames, there's a PR here, which I think is enough to do what Antoine was talking about: https://github.com/python/cpython/pull/4793 (It's been sitting for 24 days, maybe someone could review it?)
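One non-destructive shape for frame hiding, sketched with a pytest-style `__tracebackhide__` local-variable marker. This is a hypothetical convention for illustration, not the API under discussion in the issue:

```python
def visible_frames(exc):
    """Walk a traceback, skipping frames whose locals set __tracebackhide__."""
    names = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        if not frame.f_locals.get("__tracebackhide__", False):
            names.append(frame.f_code.co_name)
        tb = tb.tb_next
    return names

def helper():
    __tracebackhide__ = True   # marker: hide this frame from tracebacks
    raise ValueError("boom")

def caller():
    helper()

try:
    caller()
except ValueError as exc:
    print(visible_frames(exc))   # 'helper' does not appear in the list
```

The filtering is non-destructive: the real traceback object is untouched, only the formatted view changes.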
If you want a general way to mark certain frames in tracebacks as being hidden, then I think the big question is what the actual API for marking them should look like. Being able to mark a whole module as never showing up in tracebacks is pretty crude -- should it be per-function? What would it look like, a decorator? That's probably difficult to implement without first implementing bpo-12857 (which would be a good idea anyway!). If an invisible function calls another function that's marked as neither visible nor invisible, should the default be that it's visible, or that it inherits its parent's visibility?

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com  Thu Jan  4 05:25:57 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 4 Jan 2018 10:25:57 +0000
Subject: [Python-Dev] PEP 567 v2
In-Reply-To: <5A4D7284.9000706@canterbury.ac.nz>
References: <5A4D7284.9000706@canterbury.ac.nz>
Message-ID:

On 4 January 2018 at 00:17, Greg Ewing wrote:
> Guido van Rossum wrote:
>> contextvars.copy_context().run(func, )
>
> If contexts are immutable, why is there something called copy_context?

Agreed. This was something that bothered me, too. I mentioned it in my review, but that seemed to get lost in the other comments in this thread...

I get the impression that the logic is that the context is immutable, but the ContextVars that it contains aren't, and the copy is deep (at least 1 level deep), so you copy and then change the value of a ContextVar. But rereading that sentence, it sounds confused even to me, so it's either not right or the implementation falls foul of "If the implementation is hard to explain, it's a bad idea."
:-)
Paul

From p.f.moore at gmail.com  Thu Jan  4 06:11:33 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 4 Jan 2018 11:11:33 +0000
Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context
In-Reply-To:
References:
Message-ID:

On 4 January 2018 at 02:15, Nathaniel Smith wrote:
> On Wed, Jan 3, 2018 at 5:42 PM, Victor Stinner wrote:
>> Hi,
>>
>> It seems like many people, including myself, are confused by the lack of a concrete current context in the PEP 567 (contextvars). But it isn't difficult to implement the current context (I implemented it, see below).
>
> The problem with such an API is that it doesn't work (or at the very least creates a lot of complications) in a potential future PEP 550 world, where the "current context" becomes something like a ChainMap-of-Contexts instead of just the last Context that had run() called on it. This isn't a big problem for contextvars.get_context(), which returns a snapshot of the current context -- in a PEP 550 world it would return a snapshot of the current "effective" (flattened) context.

But PEP 567 is specifically a restricted version of PEP 550 that doesn't try to solve the case of generators and async generators - precisely because it was proving impossible to gain consensus on how to handle those cases. We can't expect PEP 567 to satisfy an unstated requirement that "it must be possible to extend it to provide full PEP 550 functionality later". Is there a reason within the stated design goals of PEP 567 why Victor's implementation is incorrect?

In PEP 567 the *only* point of the Context is to provide a means of implementing the consideration described in the Introduction: "The notion of "current value" deserves special consideration: different asynchronous tasks that exist and execute concurrently may have different values for the same key".

Having said this, I don't really see the need for Victor's re-implementation.
The wording of the PEP needs some work, as it mixes implementation details with design in a way that makes it confusing, but unlike Victor I don't see this as unfixable. I'd be perfectly happy with Yury's implementation, but with better documentation that focuses on the use of the feature and separates the implementation aspects out clearly.

This seems to be a general problem with async - usage-focused documentation that ignores implementation details is very hard to find, and the experts seem to struggle to keep implementation details "behind the scenes". IMO, we need to do more to help push for user-friendly documentation, which is why I'd rather concentrate on helping Yury document context variables in a way that doesn't expose implementation details, rather than trying to understand and critique those implementation details (people like Victor are far better at doing that than I am :-))

> Maybe it would help a little to rename get_context() to something like snapshot_context()?

Maybe? But at the moment, the PEP says "the context is immutable", so it shouldn't make a difference (and in fact get_ makes more sense than snapshot_ as long as the context is immutable). I'd prefer we work on clarifying whether (conceptually - not in terms of implementation!) the context should be described as immutable, and once we understand that, I suspect answers to questions like this will be obvious.

Paul

From guido at python.org  Thu Jan  4 10:56:23 2018
From: guido at python.org (Guido van Rossum)
Date: Thu, 4 Jan 2018 08:56:23 -0700
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References: <5A4D7284.9000706@canterbury.ac.nz>
Message-ID:

On Thu, Jan 4, 2018 at 3:25 AM, Paul Moore wrote:
> On 4 January 2018 at 00:17, Greg Ewing wrote:
> > Guido van Rossum wrote:
> >> contextvars.copy_context().run(func, )
> >
> > If contexts are immutable, why is there something called copy_context?
>
> Agreed. This was something that bothered me, too.
> I mentioned it in my review, but that seemed to get lost in the other comments in this thread...
>
> I get the impression that the logic is that the context is immutable, but the ContextVars that it contains aren't, and the copy is deep (at least 1 level deep), so you copy and then change the value of a ContextVar. But rereading that sentence, it sounds confused even to me, so it's either not right or the implementation falls foul of "If the implementation is hard to explain, it's a bad idea." :-)

It was get_context() in an earlier version of PEP 567. We changed it to copy_context() believing that that would clarify that you get a clone that is unaffected by subsequent ContextVar.set() operations (which affect the *current* context rather than the copy you just got).

[The discussion is fragmentary because Yury is on vacation until the 15th and I am scrambling for time myself. But your long post is saved, and not forgotten.]

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com  Thu Jan  4 11:27:11 2018
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 4 Jan 2018 16:27:11 +0000
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References: <5A4D7284.9000706@canterbury.ac.nz>
Message-ID:

On 4 January 2018 at 15:56, Guido van Rossum wrote:
> It was get_context() in an earlier version of PEP 567. We changed it to copy_context() believing that that would clarify that you get a clone that is unaffected by subsequent ContextVar.set() operations (which affect the *current* context rather than the copy you just got).

Ah thanks. In which case, simply changing the emphasis to avoid the implication that Context objects are immutable (while that may be true in a technical/implementation sense, it's not really true in a design sense if ContextVar.set modifies the value of a variable in a context) is probably sufficient.
> [The discussion is fragmentary because Yury is on vacation until the 15th and I am scrambling for time myself. But your long post is saved, and not forgotten.]

OK. I knew you were short on time, but hadn't realised Yury was on vacation too. Sorry if I sounded like I was nagging.

Paul

From guido at python.org  Thu Jan  4 11:30:55 2018
From: guido at python.org (Guido van Rossum)
Date: Thu, 4 Jan 2018 09:30:55 -0700
Subject: [Python-Dev] PEP 567 v2
In-Reply-To:
References:
Message-ID:

On Wed, Jan 3, 2018 at 6:35 PM, Nathaniel Smith wrote:
> On Wed, Jan 3, 2018 at 3:44 PM, Victor Stinner wrote:
> > Ok, I finally got access to a computer and I was able to test the PEP 567 implementation: see my code snippet below.
> >
> > The behaviour is more tricky than what I expected. While running context.run(), the context object is out of sync with the "current context". It's only synchronized again at run() exit. So ContextVar.set() doesn't immediately modify the "current context" object (set by Context.run()).
>
> To me this sounds like an oversight (= bug), not intended behavior. At the conceptual level, I think what we want is:
>
> - Context is a mutable object representing a mapping
> - BUT it doesn't allow mutation through the MutableMapping interface; instead, the only way to mutate it is by calling Context.run and then ContextVar.set(). Funneling all 'set' operations through a single place makes it easier to do clever caching tricks, and it lets us avoid dealing with operations that we don't want here (like 'del') just because they happen to be in the MutableMapping interface.
> - OTOH we do implement the (read-only) Mapping interface because there's no harm in it and it's probably useful for debuggers.
If you look at the implementation section in the PEP, the ContextVar.set() operation mutates _ContextData, which is a private (truly) immutable data structure that stands in for the HAMT, and the threadstate contains one of these (not a Context). When you call copy_context() you get a fresh Context that wraps the current _ContextData. Because the latter is effectively immutable this is a true clone. ctx.run() manipulates the threadstate to make the current _ContextData the one from ctx, then calls the function. If the function calls var.set(), this will create a new _ContextData that is stored in the threadstate, but it doesn't update the ctx. This is where the current state and ctx go out of sync. Once the function returns or raises, run() takes the _ContextData from the threadstate and stuffs it into ctx, resolving the inconsistency. (It then also restores the previous _ContextData that it had saved before any of this started.) So all in all Context is mutable but the only time it is mutated is when run() returns. I think Yury's POV is that you rarely if ever want to introspect a Context object that's not freshly obtained from copy_context(). I'm not sure if that's really true; it means that introspecting the context stored in an asyncio.Task may give incorrect results if it's the currently running task. Should we declare it a bug? The fix would be complex given the current implementation (either the PEP's pseudo-code or Yury's actual HAMT-based implementation). I think it would involve keeping track of the current Context in the threadstate rather than just the _ContextData, and updating the Context object on each var.set() call. And this is something that Yury wants to avoid, so that he can do more caching for var.get() (IIUC). We could also add extra words to the PEP's spec for run() explaining this temporary inconsistency. 
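A minimal sketch of the part that is observable either way, using only the PEP's public API. It checks state only *after* run() has returned, which is the point where the pseudo-code and the HAMT implementation agree:

```python
import contextvars

var = contextvars.ContextVar("var", default=0)

def work():
    var.set(42)          # mutates the *current* context installed by run()
    return var.get()

snapshot = contextvars.copy_context()   # clone taken before any set()
ctx = contextvars.copy_context()
result = ctx.run(work)

print(result)           # 42
print(ctx[var])         # 42 -- the change is visible in ctx once run() returns
print(var in snapshot)  # False -- the earlier clone is unaffected
print(var.get())        # 0 -- and so is the caller's current context
```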
I think changing the introspection method from Mapping to something custom won't fix the basic issue (which is that there's a difference between the Context and the _ContextData, and ContextVar actually only manipulates the latter, always accessing it via the threadstate). However there's another problem with the Mapping interface, which is: what should it do with variables that are not set and have no default value? Should they be considered to have a value equal to _NO_DEFAULT or Token.MISSING? Or should they be left out of the keys altogether? The PEP hand-waves on this issue (we didn't think of missing values when we made the design). Should it be possible to introspect a Context that's not the current context? > (Note that I didn't say anything about HAMTs here, because that's > orthogonal implementation detail. It would make perfect sense to have > Context be an opaque wrapper around a regular dict; it would just give > different performance trade-offs.) > Agreed, that's how the PEP pseudo-code does it. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Jan 4 18:18:24 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 4 Jan 2018 15:18:24 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 8:30 AM, Guido van Rossum wrote: > On Wed, Jan 3, 2018 at 6:35 PM, Nathaniel Smith wrote: >> - Context is a mutable object representing a mapping >> - BUT it doesn't allow mutation through the MutableMapping interface; >> instead, the only way to mutate it is by calling Context.run and then >> ContextVar.set(). Funneling all 'set' operations through a single >> place makes it easier to do clever caching tricks, and it lets us >> avoid dealing with operations that we don't want here (like 'del') >> just because they happen to be in the MutableMapping interface. 
>> - OTOH we do implement the (read-only) Mapping interface because >> there's no harm in it and it's probably useful for debuggers. > > > I think that in essence what Victor saw is a cache consistency issue. Yeah, that's a good way to think about it. > If you > look at the implementation section in the PEP, the ContextVar.set() > operation mutates _ContextData, which is a private (truly) immutable data > structure that stands in for the HAMT, and the threadstate contains one of > these (not a Context). When you call copy_context() you get a fresh Context > that wraps the current _ContextData. Because the latter is effectively > immutable this is a true clone. ctx.run() manipulates the threadstate to > make the current _ContextData the one from ctx, then calls the function. If > the function calls var.set(), this will create a new _ContextData that is > stored in the threadstate, but it doesn't update the ctx. This is where the > current state and ctx go out of sync. Once the function returns or raises, > run() takes the _ContextData from the threadstate and stuffs it into ctx, > resolving the inconsistency. (It then also restores the previous > _ContextData that it had saved before any of this started.) > > So all in all Context is mutable but the only time it is mutated is when > run() returns. > > I think Yury's POV is that you rarely if ever want to introspect a Context > object that's not freshly obtained from copy_context(). I'm not sure if > that's really true; it means that introspecting the context stored in an > asyncio.Task may give incorrect results if it's the currently running task. > > Should we declare it a bug? The fix would be complex given the current > implementation (either the PEP's pseudo-code or Yury's actual HAMT-based > implementation). I think it would involve keeping track of the current > Context in the threadstate rather than just the _ContextData, and updating > the Context object on each var.set() call. 
And this is something that Yury > wants to avoid, so that he can do more caching for var.get() (IIUC). I think the fix is a little bit cumbersome, but straightforward, and actually *simplifies* caching. If we track both the _ContextData and the Context in the threadstate, then set() becomes something like: def set(self, value): # These two lines are like the current implementation new_context_data = tstate->current_context_data->hamt_clone_with_new_item(key=self, value=value) tstate->current_context_data = new_context_data # Update the Context to have the new _ContextData tstate->current_context->data = new_context_data # New caching: instead of tracking integer ids, we just need to track the Context object # This relies on the assumption that self->last_value is updated every time any Context is mutated self->last_value = value self->last_context = tstate->current_context And then the caching in get() becomes: def get(self): if tstate->current_context != self->last_context: # Update cache self->last_value = tstate->current_context_data->hamt_lookup(self) self->last_context = tstate->current_context return self->last_value (I think the current cache invalidation logic is necessary for a PEP 550 implementation, but until/unless we implement that we can get away with something simpler.) So I'd say yeah, let's declare it a bug. If it turns out that I'm wrong and there's some reason this is really difficult, then we could consider making introspection on a currently-in-use Context raise an error, instead of returning stale data. This should be pretty easy, since Contexts already track whether they're currently in use (so they can raise an error if you try to use the same Context in two threads simultaneously). > We could also add extra words to the PEP's spec for run() explaining this > temporary inconsistency. 
> > I think changing the introspection method from Mapping to something custom > won't fix the basic issue (which is that there's a difference between the > Context and the _ContextData, and ContextVar actually only manipulates the > latter, always accessing it via the threadstate). > > However there's another problem with the Mapping interface, which is: what > should it do with variables that are not set and have no default value? > Should they be considered to have a value equal to _NO_DEFAULT or > Token.MISSING? Or should they be left out of the keys altogether? The PEP > hand-waves on this issue (we didn't think of missing values when we made the > design). I've been thinking this over, and I don't *think* there are any design constraints that force us towards one approach or another, so it's just about what's most convenient for users. My preference for how missing values / defaults / etc. should be handled is, Context acts just like a dict that's missing its mutation methods, and ContextVar does: class ContextVar: # Note: default=None instead of default=_MAGIC_SENTINEL_VALUE # If users want to distinguish between unassigned and None, then they can # pass their own sentinel value. IME this is very rare though. def __init__(self, name, *, default=None): self.name = name self.default = default # Note: no default= argument here, because merging conflicting default= values # is inherently confusing, and not really needed. def get(self): return current_context().get(self, self.default) Rationale: I've never seen a thread local use case where you wanted *different* default values at different calls to getattr. I've seen lots of thread local use cases that jumped through hoops to make sure they used the same default everywhere, either by defining a wrapper around getattr() or by subclassing local to define fallback values. 
Likewise, I've seen lots of cases where having to check for whether a thread local attribute was actually defined or not was a nuisance, and never any where it was important to distinguish between missing and None. But, just in case someone does find such a case, we should make it possible to distinguish. Allowing users to override the default= is enough to do this. And the default= argument is also useful on those occasions where someone wants a default value other than None, which does happen occasionally. For example, django.urls.resolvers.RegexURLResolver._local.populating is semantically a bool with a default value of False. Currently, it's always accessed by writing getattr(_local, "populating", False). With this approach it could instead use ContextVar("populating", default=False) and then just call get(). Everything I just said is about the ergonomics for ContextVar users, so it makes sense to handle all this inside ContextVar. OTOH, Context is a low-level interface for people writing task schedulers and debuggers, so it makes sense to keep it as simple and transparent as possible, and "it's just like a dict" is about as simple and transparent as it gets. Also, this way the pseudocode is really really short. > Should it be possible to introspect a Context that's not the > current context? I think debuggers will definitely want to be able to do things like print Context values from arbitrary tasks. -n -- Nathaniel J. Smith -- https://vorpus.org From greg.ewing at canterbury.ac.nz Thu Jan 4 18:56:52 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 05 Jan 2018 12:56:52 +1300 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: <5A4EBF44.5090909@canterbury.ac.nz> Guido van Rossum wrote: > It was get_context() in an earlier version of PEP 567.
We changed it to > copy_context() believing that that would clarify that you get a clone > that is unaffected by subsequent ContextVar.set() operations (which > affect the *current* context rather than the copy you just got). In that case it seems clear to me that "the context" is conceptually a mutable mapping. The fact that it happens to be built out of immutable components is an implementation detail that user-level docs should not be talking about. -- Greg From guido at python.org Thu Jan 4 18:58:04 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 16:58:04 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: On Thu, Jan 4, 2018 at 9:27 AM, Paul Moore wrote: > On 4 January 2018 at 15:56, Guido van Rossum wrote: > > It was get_context() in an earlier version of PEP 567. We changed it to > > copy_context() believing that that would clarify that you get a clone > that > > is unaffected by subsequent ContextVar.set() operations (which affect the > > *current* context rather than the copy you just got). > > Ah thanks. In which case, simply changing the emphasis to avoid the > implication that Context objects are immutable (while that may be true > in a technical/implementation sense, it's not really true in a design > sense if ContextVar.set modifies the value of a variable in a context) > is probably sufficient. Do you have a specific proposal for a wording change? PEP 567 describes Context as "a read-only mapping, implemented using an immutable dictionary." This sounds all right to me -- "read-only" is weaker than "immutable". Maybe the implementation should not be mentioned here? (The crux here is that a given Context acts as a variable referencing an immutable dict -- but it may reference different immutable dicts at different times.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
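The behaviour under discussion -- a Context whose Mapping interface is read-only, yet which changes via run() plus ContextVar.set() -- in a minimal sketch (assuming the API as specified in PEP 567):

```python
import contextvars

var = contextvars.ContextVar("var")

ctx = contextvars.copy_context()
ctx.run(var.set, "hello")      # the sanctioned way to mutate ctx

# The Mapping protocol works for introspection...
print(ctx[var], var in ctx)    # hello True

# ...but none of the MutableMapping methods exist:
try:
    ctx[var] = "other"
except TypeError:
    print("item assignment is rejected")
```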
URL: From guido at python.org Thu Jan 4 19:52:09 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 17:52:09 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: <5A4EBF44.5090909@canterbury.ac.nz> References: <5A4D7284.9000706@canterbury.ac.nz> <5A4EBF44.5090909@canterbury.ac.nz> Message-ID: On Thu, Jan 4, 2018 at 4:56 PM, Greg Ewing wrote: > Guido van Rossum wrote: > >> It was get_context() in an earlier version of PEP 567. We changed it to >> copy_context() believing that that would clarify that you get a clone that >> is unaffected by subsequent ContextVar.set() operations (which affect the >> *current* context rather than the copy you just got). >> > > In that case it seems clear to me that "the context" is > conceptually a mutable mapping. The fact that it happens > to be built out of immutable components is an implementation > detail that user-level docs should not be talking about. > Well, it's not *immutable* (it shouldn't support hash()), but it doesn't follow the MutableMapping protocol -- it only follows the Mapping protocol. Note that the latter carefully doesn't call itself ImmutableMapping. Context is a mutable object that implements the Mapping protocol. The only way to mutate a Context is to use var.set() when that Context is the current context. (Modulo the caching bug discussed in the subthread with Nathaniel.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Jan 4 19:48:51 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 17:48:51 -0700 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Your suggestions sound reasonable, but we are now running into a logistical problem -- I don't want to decide this unilaterally but Yury is on vacation until Jan 15. 
That gives us at most 2 weeks for approval of the PEP and review + commit of the implementation ( https://github.com/python/cpython/pull/5027) before the 3.7.0 feature freeze / beta (Jan 29). Your approach to defaults *may* have another advantage: perhaps we could get rid of reset() and Token. Code to set and restore a context var could just obtain the old value with get() and later restore it with set(). OTOH this would not be entirely transparent, since it would permanently add the var to the keys of the Context if the var was previously not set. [Later] Oh, drat! Writing that paragraph made me realize there was a bug in my own reasoning that led to this question about variables that have no value: they aren't present in the keys of _ContextData, so my concern about keys and values being inconsistent was unfounded. So the only reason to change the situation with defaults would be that it's more convenient. Though I personally kind of like that you can cause var.get() to raise LookupError if the value is not set -- just like all other variables. [Even later] Re: your other suggestion, why couldn't the threadstate contain just the Context? It seems one would just write tstate->current_context->_data everywhere instead of tstate->current_context_data. On Thu, Jan 4, 2018 at 4:18 PM, Nathaniel Smith wrote: > On Thu, Jan 4, 2018 at 8:30 AM, Guido van Rossum wrote: > > On Wed, Jan 3, 2018 at 6:35 PM, Nathaniel Smith wrote: > >> - Context is a mutable object representing a mapping > >> - BUT it doesn't allow mutation through the MutableMapping interface; > >> instead, the only way to mutate it is by calling Context.run and then > >> ContextVar.set(). Funneling all 'set' operations through a single > >> place makes it easier to do clever caching tricks, and it lets us > >> avoid dealing with operations that we don't want here (like 'del') > >> just because they happen to be in the MutableMapping interface. 
> >> - OTOH we do implement the (read-only) Mapping interface because > >> there's no harm in it and it's probably useful for debuggers. > > > > > > I think that in essence what Victor saw is a cache consistency issue. > > Yeah, that's a good way to think about it. > > > If you > > look at the implementation section in the PEP, the ContextVar.set() > > operation mutates _ContextData, which is a private (truly) immutable data > > structure that stands in for the HAMT, and the threadstate contains one > of > > these (not a Context). When you call copy_context() you get a fresh > Context > > that wraps the current _ContextData. Because the latter is effectively > > immutable this is a true clone. ctx.run() manipulates the threadstate to > > make the current _ContextData the one from ctx, then calls the function. > If > > the function calls var.set(), this will create a new _ContextData that is > > stored in the threadstate, but it doesn't update the ctx. This is where > the > > current state and ctx go out of sync. Once the function returns or > raises, > > run() takes the _ContextData from the threadstate and stuffs it into ctx, > > resolving the inconsistency. (It then also restores the previous > > _ContextData that it had saved before any of this started.) > > > > So all in all Context is mutable but the only time it is mutated is when > > run() returns. > > > > I think Yury's POV is that you rarely if ever want to introspect a > Context > > object that's not freshly obtained from copy_context(). I'm not sure if > > that's really true; it means that introspecting the context stored in an > > asyncio.Task may give incorrect results if it's the currently running > task. > > > > Should we declare it a bug? The fix would be complex given the current > > implementation (either the PEP's pseudo-code or Yury's actual HAMT-based > > implementation). 
I think it would involve keeping track of the current > > Context in the threadstate rather than just the _ContextData, and > updating > > the Context object on each var.set() call. And this is something that > Yury > > wants to avoid, so that he can do more caching for var.get() (IIUC). > > I think the fix is a little bit cumbersome, but straightforward, and > actually *simplifies* caching. If we track both the _ContextData and > the Context in the threadstate, then set() becomes something like: > > def set(self, value): > # These two lines are like the current implementation > new_context_data = > tstate->current_context_data->hamt_clone_with_new_item(key=self, > value=value) > tstate->current_context_data = new_context_data > # Update the Context to have the new _ContextData > tstate->current_context->data = new_context_data > # New caching: instead of tracking integer ids, we just need to > track the Context object > # This relies on the assumption that self->last_value is updated > every time any Context is mutated > self->last_value = value > self->last_context = tstate->current_context > > And then the caching in get() becomes: > > def get(self): > if tstate->current_context != self->last_context: > # Update cache > self->last_value = tstate->current_context_data->hamt_lookup(self) > self->last_context = tstate->current_context > return self->last_value > > (I think the current cache invalidation logic is necessary for a PEP > 550 implementation, but until/unless we implement that we can get away > with something simpler.) So I'd say yeah, let's declare it a bug. > > If it turns out that I'm wrong and there's some reason this is really > difficult, then we could consider making introspection on a > currently-in-use Context raise an error, instead of returning stale > data. This should be pretty easy, since Contexts already track whether > they're currently in use (so they can raise an error if you try to use > the same Context in two threads simultaneously). 
> > > We could also add extra words to the PEP's spec for run() explaining this > > temporary inconsistency. > > > > I think changing the introspection method from Mapping to something > custom > > won't fix the basic issue (which is that there's a difference between the > > Context and the _ContextData, and ContextVar actually only manipulates > the > > latter, always accessing it via the threadstate). > > > > However there's another problem with the Mapping interface, which is: > what > > should it do with variables that are not set and have no default value? > > Should they be considered to have a value equal to _NO_DEFAULT or > > Token.MISSING? Or should they be left out of the keys altogether? The PEP > > hand-waves on this issue (we didn't think of missing values when we made > the > > design). > > I've been thinking this over, and I don't *think* there are any design > constraints that force us towards one approach or another, so it's > just about what's most convenient for users. > > My preference for how missing values / defaults / etc. should be > handled is, Context acts just like a dict that's missing its mutation > methods, and ContextVar does: > > class ContextVar: > # Note: default=None instead of default=_MAGIC_SENTINEL_VALUE > # If users want to distinguish between unassigned and None, then they > can > # pass their own sentinel value. IME this is very rare though. > def __init__(self, name, *, default=None): > self.name = name > self.default = default > > # Note: no default= argument here, because merging conflicting > default= values > # is inherently confusing, and not really needed. > def get(self): > return current_context().get(self, self.default) > > Rationale: > > I've never seen a thread local use case where you wanted *different* > default values at different calls to getattr. 
I've seen lots of thread > local use cases that jumped through hoops to make sure they used the > same default everywhere, either by defining a wrapper around getattr() > or by subclassing local to define fallback values. > > Likewise, I've seen lots of cases where having to check for whether a > thread local attribute was actually defined or not was a nuisance, and > never any where it was important to distinguish between missing and > None. > > But, just in case someone does fine such a case, we should make it > possible to distinguish. Allowing users to override the default= is > enough to do this. And the default= argument is also useful on those > occasions where someone wants a default value other than None, which > does happen occasionally. For example, > django.urls.resolvers.RegexURLResolver._local.populating is > semantically a bool with a default value of False. Currently, it's > always accessed by writing getattr(_local, "populating", False). With > this approach it could instead use ContextVar("populating", > default=False) and then just call get(). > > Everything I just said is about the ergonomics for ContextVar users, > so it makes sense to handle all this inside ContextVar. > > OTOH, Context is a low-level interface for people writing task > schedulers and debuggers, so it makes sense to keep it as simple and > transparent as possible, and "it's just like a dict" is about as > simple and transparent as it gets. > > Also, this way the pseudocode is really really short. > > > Should it be possible to introspect a Context that's not the > > current context? > > I think debuggers will definitely want to be able to do things like > print Context values from arbitrary tasks. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guido at python.org Thu Jan 4 20:24:56 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 18:24:56 -0700 Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause In-Reply-To: <20180104010527.fpfdxplya52xcewg@python.ca> References: <20180102203149.vkepsh5da5pon67q@python.ca> <20180103213045.eq7mwhzjcv3refui@python.ca> <20180104010527.fpfdxplya52xcewg@python.ca> Message-ID: We should interview you for the paper we may be writing for HOPL. On Wed, Jan 3, 2018 at 6:05 PM, Neil Schemenauer wrote: > On 2018-01-03, Guido van Rossum wrote: > > I'm sorry, I don't think more research can convince me either way. > > I want all three of return/break/continue to work inside finally > > clauses, despite there being few use cases. > > That's fine. The history of 'continue' inside 'finally' is > interesting. The restriction dates back to at least when Jeremy > committed the AST-based compiler (I have fond memories of hacking on > it with Armin Rigo and others at a Python core sprint). Going > further back, I looked at 1.5.2 and there is the comment in > compile.c: > > TO DO: > ... > XXX Allow 'continue' inside try-finally > > So if we allow 'continue' we will be knocking off a nearly 20 year > old todo item. ;-) > > For giggles, I unpacked a Python 0.9.1 tarball. The source code is > all under 'src' in that version. There doesn't seem to be a > restriction on 'continue' but only because the grammar doesn't > include it! Without doing more research, I think the restriction > could be as old as the 'continue' keyword. > > BTW, the bytecode structure for try/except shown in the compile.c > comments is very similar to what is currently generated. It is quite > remarkable how well your initial design and implementation have stood > the test of time. Thank you for making it open source. > > Regards, > > Neil > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From guido at python.org Thu Jan 4 20:44:34 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 17:44:34 -0800 Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 4:11 AM, Paul Moore wrote: > On 4 January 2018 at 02:15, Nathaniel Smith wrote: > > On Wed, Jan 3, 2018 at 5:42 PM, Victor Stinner > wrote: > >> It seems like many people, including myself, are confused by the lack > >> of concrete current context in the PEP 567 (contextvars). But it isn't > >> difficult to implement the current context (I implemented it, see > >> below). > > > [Nathaniel] > > The problem with such an API is that it doesn't work (or at the very > > least creates a lot of complications) in a potential future PEP 550 > > world, where the "current context" becomes something like a > > ChainMap-of-Contexts instead of just the last Context that had run() > > called on it. This isn't a big problem for contextvars.get_context(), > > which returns a snapshot of the current context -- in a PEP 550 world > > it would return a snapshot of the current "effective" (flattened) > > context. > > [Paul] > But PEP 567 is specifically a restricted version of PEP 550 that > doesn't try to solve the case of generators and async generators - > precisely because it was proving impossible to gain consensus on how > to handle those cases. We can't expect PEP 567 to satisfy an unstated > requirement that "it must be possible to extend it to provide full PEP > 550 functionality later". Is there a reason within the stated design > goals of PEP 567 why Victor's implementation is incorrect? In PEP 567 > the *only* point of the Context is to provide a means of implementing > the consideration described in the Introduction: "The notion of > "current value" deserves special consideration: different asynchronous > tasks that exist and execute concurrently may have different values > for the same key". 
> > Having said this, I don't really see the need for Victor's > re-implementation. The wording of the PEP needs some work, as it mixes > implementation details with design in a way that makes it confusing, > but unlike Victor I don't see this as unfixable. I'd be perfectly > happy with Yury's implementation, but with better documentation that > focuses on the use of the feature and separates the implementation > aspects out clearly. This seems to be a general problem with async - > usage-focused documentation that ignores implementation details is > very hard to find, and the experts seem to struggle to keep > implementation details "behind the scenes". IMO, we need to do more to > help push for user-friendly documentation, which is why I'd rather > concentrate on helping Yury document context variables in a way that > doesn't expose implementation details, rather than trying to > understand and critique those implementation details (people like > Victor are far better at doing that than I am :-)) > > > Maybe it would help a little to rename get_context() to something like > > snapshot_context()? > > Maybe? But at the moment, the PEP says "the context is immutable" so > it shouldn't make a difference (and in fact get_ makes more sense than > snapshot_ as long as the context is immutable). I'd prefer we work on > clarifying whether (conceptually - not in terms of implementation!) > the context should be described as immutable, and once we understand > that I suspect answers to questions like this will be obvious. > In the other thread I think we've clarified what the (im)mutability status of Context is. I don't think we need Victor's implementation either and I agree with Paul that the PEP just needs a better specification / description of the API it offers. Also let's not tweak the name of the function that retrieves a copy of the current context just yet. The PEP has it as copy_context(). Let's keep that. 
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Jan 4 20:50:27 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 17:50:27 -0800 Subject: [Python-Dev] PEP 567 (contextvars) idea: really implement the current context In-Reply-To: References: Message-ID: On Wed, Jan 3, 2018 at 6:45 PM, Victor Stinner wrote: > Is it possible to run a generator in a context explicitly using PEP > 567 Context.run() API? > Yes. You execute 'ctx = copy_context()' once, and then wrap all .next() calls in ctx.run(). This is what the PEP proposes to do for asyncio.Task -- constructing the task calls copy_context() and _step() is always wrapped in a run() call for that context. (This was a good question because it teases out more of the semantics of Context.run().) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Thu Jan 4 21:09:45 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 18:09:45 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 4:30 PM, Victor Stinner wrote: > Hum, it seems like the specification (API) part of the PEP is polluted by > its implementation. The PEP just require a few minor changes to better > describe the behaviour/API instead of insisting on the read only internal > thing which is specific to the proposed implementation which is just one > arbitrary implementation (designed for best performances). > > IMHO the PEP shouldn't state that a context is read only. From my point of > view, it's mutable and it's the mapping holding variable values. There is a > current context which holds the current values. Context.run() switches > temporarily the current context with another context. The fact that there > is no concrete context instance by default doesn't really matter in term of > API.
> You've convinced me that Context is neither immutable nor read-only, and the PEP should admit this. Its *Mapping* interface doesn't allow mutations, but its run() method does. E.g. here's a function that mutates a Context, effectively doing ctx[var] = value:

    def mutate_context(ctx, var, value):
        ctx.run(lambda: var.set(value))

However you've not convinced me that it would be better to make Context implement the full MutableMapping interface (with `__delitem__` always raising). There are use cases for inspecting Context, e.g. a debugger that wants to print the Context for some or all suspended tasks. But I don't see a use case for mutating a Context object that's not the current context, and when it is the current context, ContextVar.set() is more direct. I also don't see use cases for other MutableMapping methods like pop() or update(). (And clear() falls under the same prohibition as __delitem__().) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Jan 4 22:58:49 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 4 Jan 2018 19:58:49 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 6:09 PM, Guido van Rossum wrote: > However you've not convinced me that it would be better to make Context > implement the full MutableMapping interface (with `__delitem__` always > raising). There are use cases for inspecting Context, e.g. a debugger that > wants to print the Context for some or all suspended tasks. But I don't see > a use case for mutating a Context object that's not the current context, and > when it is the current context, ContextVar.set() is more direct. I also > don't see use cases for other MutableMapping methods like pop() or update(). > (And clear() falls under the same prohibition as __delitem__().) I was looking at this again, and I realized there's some confusion. 
I've been repeating the thing about not wanting to implement __delitem__ too, but actually, I think __delitem__ is not the problem :-). The problem is that we don't want to provide ContextVar.unset() -- that's the operation that adds complication in a PEP 550 world. If you have a stack of Contexts that ContextVar.get() searches, and set/unset are only allowed to mutate the top entry in the stack, then the only way to implement unset() is to do something like context_stack[-1][self] = _MISSING, so it can hide any entries below it in the stack. This is extra complication for a feature that it's not clear anyone cares about. (And if it turns out people do care, we could add it later.) Deleting entries from individual Context objects shouldn't create conceptual problems. OTOH I don't see how it's useful either. I don't think implementing MutableMapping would actually cause problems, but it's a bunch of extra code to write/test/maintain without any clear use cases. This does make me think that I should write up a short PEP for extending PEP 567 to add context lookup, PEP 550 style: it can start out in Status: deferred and then we can debate it properly before 3.8, but at least having the roadmap written down now would make it easier to catch these details. (And it might also help address Paul's reasonable complaint about "unstated requirements".) -n -- Nathaniel J. Smith -- https://vorpus.org From greg.ewing at canterbury.ac.nz Thu Jan 4 23:15:11 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 05 Jan 2018 17:15:11 +1300 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> <5A4EBF44.5090909@canterbury.ac.nz> Message-ID: <5A4EFBCF.3090408@canterbury.ac.nz> Guido van Rossum wrote: > Well, it's not *immutable* (it shouldn't support hash()), but it doesn't > follow the MutableMapping protocol -- it only follows the Mapping > protocol. That's why I wrote "mapping" with a small "m". 
If there's a better term for "collection of associations between keys and values that can be changed somehow" then by all means use it. -- Greg From nas-python at arctrix.com Fri Jan 5 00:22:56 2018 From: nas-python at arctrix.com (Neil Schemenauer) Date: Thu, 4 Jan 2018 23:22:56 -0600 Subject: [Python-Dev] 'continue'/'break'/'return' inside 'finally' clause In-Reply-To: References: <20180102203149.vkepsh5da5pon67q@python.ca> <20180103213045.eq7mwhzjcv3refui@python.ca> <20180104010527.fpfdxplya52xcewg@python.ca> Message-ID: <20180105052256.yckp47h6aq4rrfme@python.ca> On 2018-01-04, Guido van Rossum wrote: > We should interview you for the paper we may be writing for HOPL. History of Programming Languages? I did some more digging this afternoon, trying to find source code between versions 1.0.1 and 0.9.1. No luck though. It looks like 0.9.1 might have been the last one you uploaded to alt.sources. Later 0.9.X releases were uploaded to ftp.cwi.nl and wuarchive.wustle.edu. No one seems to have an archive of those. I think all my old PCs have been sent to the scrapyard. I might have some old hard disk images somewhere. Maybe on a writable DVD or CDR. Probably unreadable at this point. I don't know exactly which version of Python I first downloaded. No earlier than the fall of 1992 and maybe 1993 but it could have been pre-1.0. I do recall running a DOS port at some point. Here is the announcement of 0.9.4alpha: http://legacy.python.org/search/hypermail/python-1992/0270.html The Misc/HISTORY file has quite a lot of details. It shows that 'continue' was added in 0.9.2. Back on topic, it looks like allowing 'continue' will be trivial once Serhiy's unwind stack PR lands. Just a few lines of code and I think everything works. If Mark implements his alternative "wordcode for finally blocks gets copied" thing, it will make things more complicated but not much more so than handling 'break' and 'return'. So, those three should probably be all allowed or all forbidden. 
Regards, Neil From guido at python.org Fri Jan 5 00:42:17 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Jan 2018 21:42:17 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 7:58 PM, Nathaniel Smith wrote: > On Thu, Jan 4, 2018 at 6:09 PM, Guido van Rossum wrote: > > However you've not convinced me that it would be better to make Context > > implement the full MutableMapping interface (with `__delitem__` always > > raising). There are use cases for inspecting Context, e.g. a debugger > that > > wants to print the Context for some or all suspended tasks. But I don't > see > > a use case for mutating a Context object that's not the current context, > and > > when it is the current context, ContextVar.set() is more direct. I also > > don't see use cases for other MutableMapping methods like pop() or > update(). > > (And clear() falls under the same prohibition as __delitem__().) > > I was looking at this again, and I realized there's some confusion. > I've been repeating the thing about not wanting to implement > __delitem__ too, but actually, I think __delitem__ is not the problem > :-). > > The problem is that we don't want to provide ContextVar.unset() -- > that's the operation that adds complication in a PEP 550 world. If you > have a stack of Contexts that ContextVar.get() searches, and set/unset > are only allowed to mutate the top entry in the stack, then the only > way to implement unset() is to do something like > context_stack[-1][self] = _MISSING, so it can hide any entries below > it in the stack. This is extra complication for a feature that it's > not clear anyone cares about. (And if it turns out people do care, we > could add it later.) > Ah yes, that's it. A stack of Contexts could have the same semantics as ChainMap, in which __delitem__ deletes the key from the first mapping if present and otherwise raises KeyError even if it is present in a later mapping. 
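[The ChainMap deletion semantics described above can be checked directly against the stdlib; a quick illustration, not part of the original mail:]

```python
from collections import ChainMap

cm = ChainMap({'a': 1}, {'a': 2, 'b': 3})

del cm['a']        # only the first mapping is touched...
print(cm['a'])     # 2 -- the entry in the later mapping shows through

try:
    del cm['b']    # ...and a key present only in a later mapping raises
except KeyError:
    print('KeyError')
```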
That's enough to implement var.reset(), but not enough to implement arbitrary var.unset(). I'm fine with only having var.reset() and not var.unset(). > Deleting entries from individual Context objects shouldn't create > conceptual problems. OTOH I don't see how it's useful either. I don't > think implementing MutableMapping would actually cause problems, but > it's a bunch of extra code to write/test/maintain without any clear > use cases. > The implementation would have to maintain cache consistency, but that's not necessarily a big deal, and it only needs to be done in a few places. But I agree that a use case hasn't been indicated yet. (I like being able to specify ContextVar's behavior using Context as a MutableMapping, but that's not a real use case.) > This does make me think that I should write up a short PEP for > extending PEP 567 to add context lookup, PEP 550 style: it can start > out in Status: deferred and then we can debate it properly before 3.8, > but at least having the roadmap written down now would make it easier > to catch these details. (And it might also help address Paul's > reasonable complaint about "unstated requirements".) > Anything that will help us kill a 550-pound gorilla sounds good to me. :-) It might indeed be pretty short if we follow the lead of ChainMap (even using a different API than MutableMapping to mutate it). Maybe copy_context() would map to new_child()? Using ChainMap as a model we might even avoid the confusion between Lo[gi]calContext and ExecutionContext which was the nail in PEP 550's coffin. The LC associated with a generator in PEP 550 would be akin to a loose dict which can be pushed on top of a ChainMap using cm = cm.new_child(). (Always taking for granted that instead of an actual dict we'd use some specialized mutable object implementing the Mapping protocol and a custom mutation protocol so it can maintain ContextVar cache consistency.) 
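[A toy model of the stack-of-contexts idea above, using ChainMap.new_child() from the stdlib; this assumes nothing about the real contextvars implementation, and the key name is invented:]

```python
from collections import ChainMap

outer = ChainMap({'request_id': 1})

# Entering a new scope (think: starting a generator/task) pushes a layer.
inner = outer.new_child()
inner['request_id'] = 2        # writes land in the top layer only

print(inner['request_id'])     # 2
print(outer['request_id'])     # 1 -- the outer layer is untouched

# Dropping the top layer (leaving the scope) restores the old value.
restored = inner.parents
print(restored['request_id'])  # 1
```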
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jan 5 02:12:44 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 5 Jan 2018 17:12:44 +1000 Subject: [Python-Dev] Unique loader per module In-Reply-To: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> References: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> Message-ID: On 3 January 2018 at 06:35, Barry Warsaw wrote: > Brett doesn?t like this, for several reasons (quoting): > > 1. redundant API in all cases where the loader is unique to the module > 2. the memory savings of sharing a loader is small > 3. it's implementation complexity/overhead for an optimization case. > > The second solution, and the one Brett prefers, is to reimplement zip importer to not use a shared loader. This may not be that difficult, if for example we were to use a delegate loader wrapping a shared loader. > > The bigger problem IMHO is two-fold: > > 1. It would be backward incompatible. If there?s any code out there expecting a shared loader in zipimport, it would break > 2. More problematic is that we?d have to impose an additional requirement on loaders - that they always be unique per module, contradicting the advice in PEP 302 We added module.__spec__.loader_state as part of PEP 451 precisely so shared loaders had a place to store per-module state without have to switch to a unique-loader-per-module model. I think the main reason you're seeing a problem here is because ResourceReader has currently been designed to be implemented directly by loaders, rather than being a subcomponent that you can request *from* a loader. 
If you instead had an indirection API (that could optionally return self in the case of non-shared loaders), you'd keep the current resource reader method signatures, but the way you'd access the reader itself would be:

    resources = module.__spec__.loader.get_resource_reader(module)
    # resources implements the ResourceReader ABC

For actual use, the loader protocol could be hidden behind a helper function:

    resources = importlib_resources.get_resource_reader(module)

For a shared loader, get_resource_reader(module) would return a new *non*-shared resource reader (perhaps caching it in __spec__.loader_state). For a non-shared loader, get_resource_reader(module) would just return self. In both cases, we'd recommend that loaders ensure "self is module.__spec__.loader" as part of their get_resource_reader() implementation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Fri Jan 5 04:57:02 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Jan 2018 01:57:02 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? Message-ID: PEP 3104 says:

"""
A shorthand form is also permitted, in which nonlocal is prepended to
an assignment or augmented assignment:

    nonlocal x = 3

The above has exactly the same meaning as nonlocal x; x = 3. (Guido
supports a similar form of the global statement [24].)
"""

The PEP metadata says it was approved and implemented in 3.0, yet this part never seems to have been implemented:

Python 3.7.0a3+ (heads/master:53f9135667, Dec 29 2017, 19:08:19)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
...     x = 1
...     def g():
...         nonlocal x = 2
  File "", line 4
    nonlocal x = 2
               ^
SyntaxError: invalid syntax

Was this just an oversight, or did it get rejected at some point and no-one remembered to update that PEP? -n -- Nathaniel J. 
Smith -- https://vorpus.org From victor.stinner at gmail.com Fri Jan 5 05:05:18 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 5 Jan 2018 11:05:18 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Currently, Context.get(var) returns None when "var in context" is false. That's surprising and different than var.get(), especially when var has a default value.

Code:
---
import contextvars

name = contextvars.ContextVar('name', default='victor')
context = contextvars.copy_context()
print(name in context)
print(context.get(name))
print(name.get())
---

Output:
---
False
None
victor
---

Context.get() must raise a lookup error by default if var is not in context. It should return the default argument if it's set; it's just that the default parameter must not have a default value (None). I'm fine that Context.get(default=None) and var.get() behave differently (return None vs victor in my example) when var isn't set and var has a default value.

Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Jan 5 05:41:01 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 5 Jan 2018 10:41:01 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: On 4 January 2018 at 23:58, Guido van Rossum wrote: > On Thu, Jan 4, 2018 at 9:27 AM, Paul Moore wrote: >> >> On 4 January 2018 at 15:56, Guido van Rossum wrote: >> > It was get_context() in an earlier version of PEP 567. We changed it to >> > copy_context() believing that that would clarify that you get a clone >> > that >> > is unaffected by subsequent ContextVar.set() operations (which affect >> > the >> > *current* context rather than the copy you just got). >> >> Ah thanks. 
In which case, simply changing the emphasis to avoid the >> implication that Context objects are immutable (while that may be true >> in a technical/implementation sense, it's not really true in a design >> sense if ContextVar.set modifies the value of a variable in a context) >> is probably sufficient. > > > Do you have a specific proposal for a wording change? PEP 567 describes > Context as "a read-only mapping, implemented using an immutable dictionary." > This sounds all right to me -- "read-only" is weaker than "immutable". Maybe > the implementation should not be mentioned here? (The crux here is that a > given Context acts as a variable referencing an immutable dict -- but it may > reference different immutable dicts at different times.) I've been struggling to think of good alternative wordings (it's a case of "I'm not sure what you're trying to say, so I can't work out how you should say it", unfortunately). The best I can come up with is """ A Context is a mapping from ContextVar objects to their values. The Context itself exposes the Mapping interface, so cannot be modified directly - to modify the value associated with a variable you need to use the ContextVar.set() method. """ Does that explain things correctly? One thing I am sure of is that we should remove "implemented using an immutable dictionary" - it's an implementation detail, and adds nothing but confusion to mention it here. Paul From victor.stinner at gmail.com Fri Jan 5 06:06:22 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 5 Jan 2018 12:06:22 +0100 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: You can only modify a context when it's the current context, so using ContextVar.set() in Context.run(). Victor Le 5 janv. 
2018 11:42 AM, "Paul Moore" a ?crit : On 4 January 2018 at 23:58, Guido van Rossum wrote: > On Thu, Jan 4, 2018 at 9:27 AM, Paul Moore wrote: >> >> On 4 January 2018 at 15:56, Guido van Rossum wrote: >> > It was get_context() in an earlier version of PEP 567. We changed it to >> > copy_context() believing that that would clarify that you get a clone >> > that >> > is unaffected by subsequent ContextVar.set() operations (which affect >> > the >> > *current* context rather than the copy you just got). >> >> Ah thanks. In which case, simply changing the emphasis to avoid the >> implication that Context objects are immutable (while that may be true >> in a technical/implementation sense, it's not really true in a design >> sense if ContextVar.set modifies the value of a variable in a context) >> is probably sufficient. > > > Do you have a specific proposal for a wording change? PEP 567 describes > Context as "a read-only mapping, implemented using an immutable dictionary." > This sounds all right to me -- "read-only" is weaker than "immutable". Maybe > the implementation should not be mentioned here? (The crux here is that a > given Context acts as a variable referencing an immutable dict -- but it may > reference different immutable dicts at different times.) I've been struggling to think of good alternative wordings (it's a case of "I'm not sure what you're trying to say, so I can't work out how you should say it", unfortunately). The best I can come up with is """ A Context is a mapping from ContextVar objects to their values. The Context itself exposes the Mapping interface, so cannot be modified directly - to modify the value associated with a variable you need to use the ContextVar.set() method. """ Does that explain things correctly? One thing I am sure of is that we should remove "implemented using an immutable dictionary" - it's an implementation detail, and adds nothing but confusion to mention it here. 
Paul _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ victor.stinner%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Jan 5 06:10:07 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 5 Jan 2018 11:10:07 +0000 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: On 5 January 2018 at 11:06, Victor Stinner wrote: >> Le 5 janv. 2018 11:42 AM, "Paul Moore" a ?crit : >> """ >> A Context is a mapping from ContextVar objects to their values. The >> Context itself exposes the Mapping interface, so cannot be modified >> directly - to modify the value associated with a variable you need to >> use the ContextVar.set() method. >> """ >> >> Does that explain things correctly? One thing I am sure of is that we >> should remove "implemented using an immutable dictionary" - it's an >> implementation detail, and adds nothing but confusion to mention it >> here. > You can only modify a context when it's the current context, so using > ContextVar.set() in Context.run(). Thanks. That's a useful qualification. But it may be too detailed for the summary - it's certainly something that should be covered in the specification section, though. Maybe "... you need to use the ContextVar.set() method from Context.run()" would be OK, although I don't want the summary to get too long. Paul From eric at trueblade.com Fri Jan 5 08:08:36 2018 From: eric at trueblade.com (Eric V. 
Smith) Date: Fri, 5 Jan 2018 08:08:36 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On 1/2/2018 12:01 AM, Guido van Rossum wrote: > On Mon, Jan 1, 2018 at 8:50 PM, Ethan Smith > wrote: > > > > On Mon, Jan 1, 2018 at 5:03 PM, Chris Barker > wrote: > > On Sat, Dec 30, 2017 at 7:27 AM, Stephen J. Turnbull > > wrote: > > ? Just use the simple rule that a new > __repr__ is generated unless provided in the dataclass. > > > are we only talking about __repr__ here ??? > > I interpreted Guido's proposal as being about all methods -- we > _may_ want something special for __repr__, but I hope not. > > [...] > > > I interpreted this to be for all methods as well, which makes sense. > Special casing just __repr__ doesn't make sense to me, but I will > wait for Guido to clarify. > > > Indeed, I just wrote __repr__ for simplicity. This should apply to all > special methods. (Though there may be some complications for > __eq__/__ne__ and for the ordering operators.) > > On Mon, Jan 1, 2018 at 9:44 PM, Chris Barker > wrote: > > On Mon, Jan 1, 2018 at 7:50 PM, Ethan Smith > wrote: > > > Will you get the "right" __repr__ now if you derive a > dataclass from a dataclass? That would be a nice feature. > > > The __repr__ will be generated by the child dataclass unless the > user overrides it. So I believe this is the "right" __repr__. > > > what I was wondering is if the child will know about all the fields > in the parent -- so it could make a full __repr__. > > > Yes, there's a class variable (__dataclass_fields__) that identifies the > parent fields. The PEP doesn't mention this or the fact that special > methods (like __repr__ and __init__) can tell whether a base class is a > dataclass. It probably should though. 
(@Eric) I think that's covered in this section: https://www.python.org/dev/peps/pep-0557/#inheritance Eric. From guido at python.org Fri Jan 5 10:47:21 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 07:47:21 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 1:57 AM, Nathaniel Smith wrote: > PEP 3104 says: > > """ > A shorthand form is also permitted, in which nonlocal is prepended to > an assignment or augmented assignment: > > nonlocal x = 3 > > The above has exactly the same meaning as nonlocal x; x = 3. (Guido > supports a similar form of the global statement [24].) > """ > > The PEP metadata says it was approved and implemented in 3.0, yet this > part never seems to have been implemented: > > Python 3.7.0a3+ (heads/master:53f9135667, Dec 29 2017, 19:08:19) > [GCC 7.2.0] on linux > Type "help", "copyright", "credits" or "license" for more information. > >>> def f(): > ... x = 1 > ... def g(): > ... nonlocal x = 2 > File "", line 4 > nonlocal x = 2 > ^ > SyntaxError: invalid syntax > > Was this just an oversight, or did it get rejected at some point and > no-one remembered to update that PEP? > I don't recall (though someone with more time might find the discussion in the archives or on the tracker). It was never implemented and I think it shouldn't be. So we might as well update the PEP. It wouldn't be particularly useful, since (by definition) the function that declares the nonlocal variable is not its owner, and hence it's unlikely to make sense to initialize it here. The same reasoning applies to global BTW. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guido at python.org Fri Jan 5 11:24:21 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 08:24:21 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith wrote: > On 1/2/2018 12:01 AM, Guido van Rossum wrote: > >> Yes, there's a class variable (__dataclass_fields__) that identifies the >> parent fields. The PEP doesn't mention this or the fact that special >> methods (like __repr__ and __init__) can tell whether a base class is a >> dataclass. It probably should though. (@Eric) >> > > I think that's covered in this section: https://www.python.org/dev/peps/pep-0557/#inheritance > I was specifically talking about the name and contents of __dataclass_fields__, which are not documented by the PEP. I expect it's inevitable that people will be looking at this (since they can see it in the source code). Or do you recommend that people use dataclasses.fields() and catch ValueError? I notice that _isdataclass() exists but is private and I don't recall why. (Also now I'm curious what the "pseudo-fields" are that fields() ignores, but that's OT.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Jan 5 11:29:26 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 08:29:26 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 2:05 AM, Victor Stinner wrote: > Currently, Context.get(var) returns None when "var in context" is false. > That's surprising and different than var.get(), especially when var has a > default value. > I don't see the problem. 
Context.get() is inherited from Mapping.get(); if you want it to raise use Context.__getitem__() (i.e. ctx[var]). Lots of classes define get() methods with various behaviors. Context.get() and ContextVar.get() are just different -- ContextVar is not a Mapping. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Jan 5 11:42:36 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 08:42:36 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: On Fri, Jan 5, 2018 at 2:41 AM, Paul Moore wrote: > On 4 January 2018 at 23:58, Guido van Rossum wrote: > > Do you have a specific proposal for a wording change? PEP 567 describes > > Context as "a read-only mapping, implemented using an immutable > dictionary." > > This sounds all right to me -- "read-only" is weaker than "immutable". > Maybe > > the implementation should not be mentioned here? (The crux here is that a > > given Context acts as a variable referencing an immutable dict -- but it > may > > reference different immutable dicts at different times.) > > I've been struggling to think of good alternative wordings (it's a > case of "I'm not sure what you're trying to say, so I can't work out > how you should say it", unfortunately). The best I can come up with is > > """ > A Context is a mapping from ContextVar objects to their values. The > Context itself exposes the Mapping interface, so cannot be modified > directly - to modify the value associated with a variable you need to > use the ContextVar.set() method. > """ > This is clear, but IMO there's one important detail missing: using ContextVar.set() you can only modify the current Context. This part of the PEP (in particular the definition of ContextVar.set() on line 90) also lies about what ContextVar.set() does -- it implies that Context is immutable. 
If it was, the recommended use of Context.run() in asyncio would make no sense, since run() clearly modifies the Context object in place. This is not an implementation detail -- it is an API detail that affects how frameworks should use Context objects. Re-reading, there's a lot of language in this part of the PEP that needs updating... :-( > Does that explain things correctly? One thing I am sure of is that we > should remove "implemented using an immutable dictionary" - it's an > implementation detail, and adds nothing but confusion to mention it > here. Agreed. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Fri Jan 5 11:43:41 2018 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 5 Jan 2018 11:43:41 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> Message-ID: <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> On 1/5/2018 11:24 AM, Guido van Rossum wrote: > On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith > wrote: > > On 1/2/2018 12:01 AM, Guido van Rossum wrote: > > Yes, there's a class variable (__dataclass_fields__) that > identifies the parent fields. The PEP doesn't mention this or > the fact that special methods (like __repr__ and __init__) can > tell whether a base class is a dataclass. It probably should > though. (@Eric) > > > I think that's covered in this section: > https://www.python.org/dev/peps/pep-0557/#inheritance > > > > I was specifically talking about the name and contents of > __dataclass_fields__, which are not documented by the PEP. I expect it's > inevitable that people will be looking at this (since they can see it in > the source code). Or do you recommend that people use > dataclasses.fields() and catch ValueError? 
The expectation is to use dataclasses.fields(). Both it and __dataclass_fields__ contain the fields for this class and the parents. The only difference is the pseudo-fields. I can add some words describing .fields() returning which fields are present.

> I notice that _isdataclass()
> exists but is private and I don't recall why.

I think the argument was that it's an anti-pattern, and if you really want to know, just call dataclasses.fields() and catch the TypeError. I have this in a helper file:

def isdataclass(obj):
    """Returns True for dataclass classes and instances."""
    try:
        dataclasses.fields(obj)
        return True
    except TypeError:
        return False

> (Also now I'm curious what
> the "pseudo-fields" are that fields() ignores, but that's OT.)

ClassVar and InitVar "fields". dataclasses.fields() doesn't return them. Eric. From guido at python.org Fri Jan 5 11:48:07 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 08:48:07 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: <5A4D7284.9000706@canterbury.ac.nz> Message-ID: On Fri, Jan 5, 2018 at 3:10 AM, Paul Moore wrote: > On 5 January 2018 at 11:06, Victor Stinner > wrote: > >> Le 5 janv. 2018 11:42 AM, "Paul Moore" a écrit : > >> """ > >> A Context is a mapping from ContextVar objects to their values. The > >> Context itself exposes the Mapping interface, so cannot be modified > >> directly - to modify the value associated with a variable you need to > >> use the ContextVar.set() method. > >> """ > >> > >> Does that explain things correctly? One thing I am sure of is that we > >> should remove "implemented using an immutable dictionary" - it's an > >> implementation detail, and adds nothing but confusion to mention it > >> here. > > > You can only modify a context when it's the current context, so using > > ContextVar.set() in Context.run(). > > Thanks. That's a useful qualification. 
But it may be too detailed for > the summary - it's certainly something that should be covered in the > specification section, though. Maybe "... you need to use the > ContextVar.set() method from Context.run()" would be OK, although I > don't want the summary to get too long. > (Sorry, I wrote my previous response before seeing this part of the exchange.) Maybe "... you need to use the ContextVar.set() method, which modifies the current context." Then we also need to update the summary for ContextVar.set(), which currently IMO is plain wrong. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Jan 5 12:09:53 2018 From: status at bugs.python.org (Python tracker) Date: Fri, 5 Jan 2018 18:09:53 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20180105170953.25AC819337C@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-12-29 - 2018-01-05) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 6377 (+22) closed 37871 (+28) total 44248 (+50) Open issues with patches: 2479 Issues opened (37) ================== #15982: asyncore.dispatcher does not handle windows socket error code https://bugs.python.org/issue15982 reopened by vstinner #32448: subscriptable https://bugs.python.org/issue32448 opened by thehesiod #32449: MappingView must inherit from Collection instead of Sized https://bugs.python.org/issue32449 opened by yahya-abou-imran #32450: non-descriptive variable name https://bugs.python.org/issue32450 opened by Yuri Kanivetsky #32451: python -m venv activation issue when using cygwin on windows https://bugs.python.org/issue32451 opened by Kevin #32453: shutil.rmtree can have O(n^2) performance on large dirs https://bugs.python.org/issue32453 opened by nh2 #32454: Add socket.close(fd) function https://bugs.python.org/issue32454 opened by christian.heimes #32455: PyCompile_OpcodeStackEffect() and dis.stack_effect() are not p https://bugs.python.org/issue32455 opened by serhiy.storchaka #32456: PYTHONIOENCODING=undefined doesn't work in Python 3 https://bugs.python.org/issue32456 opened by serhiy.storchaka #32457: Windows Python cannot handle an early PATH entry containing ". 
https://bugs.python.org/issue32457 opened by Ray Donnelly #32458: test_asyncio failures on Windows https://bugs.python.org/issue32458 opened by pitrou #32459: Capsule API usage docs are incompatible with module reloading https://bugs.python.org/issue32459 opened by ncoghlan #32461: the first build after a change to Makefile.pre.in uses the old https://bugs.python.org/issue32461 opened by xdegaye #32462: mimetypes.guess_type() returns incorrectly formatted type https://bugs.python.org/issue32462 opened by csabella #32463: problems with shutil.py and os.get_terminal_size https://bugs.python.org/issue32463 opened by Dhruve #32464: raise NotImplemented vs return NotImplemented https://bugs.python.org/issue32464 opened by thatiparthy #32465: [urllib] proxy_bypass_registry - extra error handling required https://bugs.python.org/issue32465 opened by chansol kim #32466: Fix missing test coverage for fractions.Fraction.__new__ https://bugs.python.org/issue32466 opened by gphemsley #32467: dict_values isn't considered a Collection nor a Container https://bugs.python.org/issue32467 opened by yahya-abou-imran #32469: Generator and coroutine repr could be more helpful https://bugs.python.org/issue32469 opened by pitrou #32471: Add an UML class diagram to the collections.abc module documen https://bugs.python.org/issue32471 opened by yahya-abou-imran #32473: Readibility of ABCMeta._dump_registry() https://bugs.python.org/issue32473 opened by yahya-abou-imran #32475: Add ability to query number of buffered bytes available on buf https://bugs.python.org/issue32475 opened by kata198 #32476: Add concat functionality to ElementTree xpath find https://bugs.python.org/issue32476 opened by jjolly #32477: Move jumps optimization from the peepholer to the compiler https://bugs.python.org/issue32477 opened by serhiy.storchaka #32479: inconsistent ImportError message executing same import stateme https://bugs.python.org/issue32479 opened by xiang.zhang #32485: Multiprocessing dict sharing 
between forked processes https://bugs.python.org/issue32485 opened by André Neto #32486: tail optimization for 'yield from' https://bugs.python.org/issue32486 opened by Robert Smart #32489: Allow 'continue' in 'finally' clause https://bugs.python.org/issue32489 opened by serhiy.storchaka #32490: subprocess: duplicate filename in exception message https://bugs.python.org/issue32490 opened by jwilk #32491: base64.decode: linebreaks are not ignored https://bugs.python.org/issue32491 opened by gregory.p.smith #32492: C Fast path for namedtuple's property/itemgetter pair https://bugs.python.org/issue32492 opened by rhettinger #32493: UUID Module - FreeBSD build failure https://bugs.python.org/issue32493 opened by David Carlier #32494: interface to gdbm_count https://bugs.python.org/issue32494 opened by sam-s #32495: Adding Timer to multiprocessing https://bugs.python.org/issue32495 opened by jcrotts #32496: lib2to3 fails to parse a ** of a conditional expression https://bugs.python.org/issue32496 opened by gregory.p.smith #32497: datetime.strptime creates tz naive object from value containin https://bugs.python.org/issue32497 opened by akeeman Most recent 15 issues with no replies (15) ========================================== #32496: lib2to3 fails to parse a ** of a conditional expression https://bugs.python.org/issue32496 #32494: interface to gdbm_count https://bugs.python.org/issue32494 #32490: subprocess: duplicate filename in exception message https://bugs.python.org/issue32490 #32486: tail optimization for 'yield from' https://bugs.python.org/issue32486 #32476: Add concat functionality to ElementTree xpath find https://bugs.python.org/issue32476 #32473: Readibility of ABCMeta._dump_registry() https://bugs.python.org/issue32473 #32467: dict_values isn't considered a Collection nor a Container https://bugs.python.org/issue32467 #32465: [urllib] proxy_bypass_registry - extra error handling required https://bugs.python.org/issue32465 #32459: Capsule API usage docs
are incompatible with module reloading https://bugs.python.org/issue32459 #32454: Add socket.close(fd) function https://bugs.python.org/issue32454 #32450: non-descriptive variable name https://bugs.python.org/issue32450 #32446: ResourceLoader.get_data() should accept a PathLike https://bugs.python.org/issue32446 #32436: Implement PEP 567 https://bugs.python.org/issue32436 #32433: Provide optimized HMAC digest https://bugs.python.org/issue32433 #32423: The Windows SDK version 10.0.15063.0 was not found https://bugs.python.org/issue32423 Most recent 15 issues waiting for review (15) ============================================= #32497: datetime.strptime creates tz naive object from value containin https://bugs.python.org/issue32497 #32492: C Fast path for namedtuple's property/itemgetter pair https://bugs.python.org/issue32492 #32477: Move jumps optimization from the peepholer to the compiler https://bugs.python.org/issue32477 #32476: Add concat functionality to ElementTree xpath find https://bugs.python.org/issue32476 #32475: Add ability to query number of buffered bytes available on buf https://bugs.python.org/issue32475 #32473: Readibility of ABCMeta._dump_registry() https://bugs.python.org/issue32473 #32469: Generator and coroutine repr could be more helpful https://bugs.python.org/issue32469 #32462: mimetypes.guess_type() returns incorrectly formatted type https://bugs.python.org/issue32462 #32461: the first build after a change to Makefile.pre.in uses the old https://bugs.python.org/issue32461 #32458: test_asyncio failures on Windows https://bugs.python.org/issue32458 #32454: Add socket.close(fd) function https://bugs.python.org/issue32454 #32449: MappingView must inherit from Collection instead of Sized https://bugs.python.org/issue32449 #32436: Implement PEP 567 https://bugs.python.org/issue32436 #32433: Provide optimized HMAC digest https://bugs.python.org/issue32433 #32431: Two bytes objects of zero length don't compare equal 
https://bugs.python.org/issue32431 Top 10 most discussed issues (10) ================================= #17611: Move unwinding of stack for "pseudo exceptions" from interpret https://bugs.python.org/issue17611 16 msgs #32453: shutil.rmtree can have O(n^2) performance on large dirs https://bugs.python.org/issue32453 10 msgs #32466: Fix missing test coverage for fractions.Fraction.__new__ https://bugs.python.org/issue32466 10 msgs #32449: MappingView must inherit from Collection instead of Sized https://bugs.python.org/issue32449 9 msgs #32346: Speed up slot lookup for class creation https://bugs.python.org/issue32346 7 msgs #32430: Simplify Modules/Setup{,.dist,.local} https://bugs.python.org/issue32430 7 msgs #25095: test_httpservers hangs since Python 3.5 https://bugs.python.org/issue25095 6 msgs #32428: dataclasses: make it an error to have initialized non-fields i https://bugs.python.org/issue32428 5 msgs #32455: PyCompile_OpcodeStackEffect() and dis.stack_effect() are not p https://bugs.python.org/issue32455 5 msgs #32477: Move jumps optimization from the peepholer to the compiler https://bugs.python.org/issue32477 5 msgs Issues closed (28) ================== #18035: telnetlib incorrectly assumes that select.error has an errno a https://bugs.python.org/issue18035 closed by gregory.p.smith #23749: asyncio missing wrap_socket (starttls) https://bugs.python.org/issue23749 closed by yselivanov #31699: Deadlocks in `concurrent.futures.ProcessPoolExecutor` with pic https://bugs.python.org/issue31699 closed by pitrou #31778: ast.literal_eval supports non-literals in Python 3 https://bugs.python.org/issue31778 closed by serhiy.storchaka #32211: Document the bug in re.findall() and re.finditer() in 2.7 and https://bugs.python.org/issue32211 closed by serhiy.storchaka #32308: Replace empty matches adjacent to a previous non-empty match i https://bugs.python.org/issue32308 closed by serhiy.storchaka #32390: AIX compile error with Modules/posixmodule.c: Function argumen 
https://bugs.python.org/issue32390 closed by xdegaye #32399: _uuidmodule.c cannot build on AIX - different typedefs of uuid https://bugs.python.org/issue32399 closed by pitrou #32418: Implement Server.get_loop() method https://bugs.python.org/issue32418 closed by asvetlov #32420: LookupError : unknown encoding : [0x7FF092395AD0] ANOMALY https://bugs.python.org/issue32420 closed by terry.reedy #32439: Clean up the code for compiling comparison expressions https://bugs.python.org/issue32439 closed by serhiy.storchaka #32441: os.dup2 should return the new fd https://bugs.python.org/issue32441 closed by benjamin.peterson #32445: Skip creating redundant wrapper functions in ExitStack.callbac https://bugs.python.org/issue32445 closed by ncoghlan #32447: IDLE shell won't open on Mac OS 10.13.1 https://bugs.python.org/issue32447 closed by ned.deily #32452: Brackets and Parentheses used in an ambiguous way https://bugs.python.org/issue32452 closed by r.david.murray #32460: don't use tentative declarations https://bugs.python.org/issue32460 closed by benjamin.peterson #32468: Frame repr should be more helpful https://bugs.python.org/issue32468 closed by pitrou #32470: Unexpected behavior of struct.pack https://bugs.python.org/issue32470 closed by benjamin.peterson #32472: Mention of __await__ missing in Coroutine Abstract Methods https://bugs.python.org/issue32472 closed by asvetlov #32474: argparse nargs should support string wrapped integers too https://bugs.python.org/issue32474 closed by r.david.murray #32478: Add tests for 'break' and 'return' inside 'finally' clause https://bugs.python.org/issue32478 closed by serhiy.storchaka #32480: czipfile installation failure https://bugs.python.org/issue32480 closed by ronaldoussoren #32481: Hitting the acute accent ´
button on a Danish keyboard causes https://bugs.python.org/issue32481 closed by ned.deily #32482: Improve tests for syntax and grammar in 2.7 https://bugs.python.org/issue32482 closed by serhiy.storchaka #32483: Misleading reflective behaviour due to PEP 3131 NFKC identifie https://bugs.python.org/issue32483 closed by benjamin.peterson #32484: ImportError for gdbm 1.14 https://bugs.python.org/issue32484 closed by benjamin.peterson #32487: assertRaises should return the "captured" exception https://bugs.python.org/issue32487 closed by r.david.murray #32488: Fatal error using pydoc https://bugs.python.org/issue32488 closed by ned.deily From guido at python.org Fri Jan 5 12:54:37 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 09:54:37 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: Some inline responses to Paul (I snipped everything I agree with). On Wed, Jan 3, 2018 at 3:34 AM, Paul Moore wrote: > On 28 December 2017 at 06:08, Yury Selivanov > wrote: > > This is a second version of PEP 567. > [...] > > > The notion of "current value" deserves special consideration: > > different asynchronous tasks that exist and execute concurrently > > may have different values for the same key. This idea is well-known > > from thread-local storage but in this case the locality of the value is > > not necessarily bound to a thread. Instead, there is the notion of the > > "current ``Context``" which is stored in thread-local storage, and > > is accessed via ``contextvars.copy_context()`` function. > > Accessed by copying it? That seems weird to me. I'd expect either that > you'd be > able to access the current Context directly, *or* that you'd say that the > current Context is not directly accessible by the user, but that a copy > can be > obtained using copy_context. But given that the Context is immutable, why > the > need to copy it? > Because it's not immutable. (I think by now people following this thread understand that.) 
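The point that the thread's current context is mutable — while copies obtained from copy_context() are immutable snapshots — is easy to observe (a minimal sketch using only documented contextvars calls):

```python
import contextvars

v = contextvars.ContextVar("v")

before = contextvars.copy_context()  # snapshot of the current context
v.set("new")                         # mutates the *current* context
after = contextvars.copy_context()   # a later snapshot sees the change

assert v not in before    # the earlier copy is unaffected
assert after[v] == "new"  # but the current context itself did change
```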
The claims or implications in the PEP that Context is immutable are wrong (and contradict the recommended use of run() for asyncio, for example). > Also, the references to threads in the above are confusing. It says that > this > is a well-known concept in terms of thread-local storage, but this case is > different. It then goes on to say that the current Context is stored in > thread > local storage, which gives me the impression that the new idea *is* > related to > thread local storage... > The PEP's language does seem confused. This is because it doesn't come out and define the concept of "task". The correspondence is roughly that thread-locals are to threads what Contexts and ContextVars are to tasks. (This still requires quite a bit of squinting due to the API differences but it's better than what the PEP says.) > I think that the fact that a Context is held in thread-local storage is an > implementation detail. Assuming I'm right, don't bother mentioning it - > simply > say that there's a notion of a current Context and leave it at that. > No, actually it's important that each thread has its own current context. It is even possible to pass Contexts between threads, e.g. if you have a ThreadExecutor e, you can call e.submit(ctx.run, some_function, args...). However, run() is not thread-safe! @Yury: There's an example that does almost exactly this in the PEP, but I think it could result in a race condition if you called run() concurrently on that same context in a different thread -- whichever run() finishes last will overwrite the other's state. I think we should just document this and recommend always using a fresh copy in such scenarios. Hm, does this mean it would be good to have an explicit Context.copy() method? Or should we show how to create a copy of an arbitrary Context using ctx.run(copy_context)? > > Manipulation of the current ``Context`` is the responsibility of the > > task framework, e.g. asyncio. 
> > > > A ``Context`` is conceptually a read-only mapping, implemented using > > an immutable dictionary. The ``ContextVar.get()`` method does a > > lookup in the current ``Context`` with ``self`` as a key, raising a > > ``LookupError`` or returning a default value specified in > > the constructor. > > > > The ``ContextVar.set(value)`` method clones the current ``Context``, > > assigns the ``value`` to it with ``self`` as a key, and sets the > > new ``Context`` as the new current ``Context``. > > > > On first reading, this confused me because I didn't spot that you're > saying a > *Context* is read-only, but a *ContextVar* has get and set methods. > > Maybe reword this to say that a Context is a read-only mapping from > ContextVars > to values. A ContextVar has a get method that looks up its value in the > current > Context, and a set method that replaces the current Context with a new one > that > associates the specified value with this ContextVar. > > (The current version feels confusing to me because it goes into too much > detail > on how the implementation does this, rather than sticking to the high-level > specification) > We went over this passage in another subthread. IMO what it says about ContextVar.set() is incorrect. > > Specification > > ============= > > > > A new standard library module ``contextvars`` is added with the > > following APIs: > > > > 1. ``copy_context() -> Context`` function is used to get a copy of > > the current ``Context`` object for the current OS thread. > > > > 2. ``ContextVar`` class to declare and access context variables. > > > > 3. ``Context`` class encapsulates context state. Every OS thread > > stores a reference to its current ``Context`` instance. > > It is not possible to control that reference manually. > > Instead, the ``Context.run(callable, *args, **kwargs)`` method is > > used to run Python code in another context. > > Context.run() came a bit out of nowhere here. Maybe the part from "It > is not possible..." 
should be in the introduction above? Something > like the following, covering this and copy_context: > > The current Context cannot be accessed directly by user code. If the > framework wants to run some code in a different Context, the > Context.run(callable, *args, **kwargs) method is used to do that. To > construct a new context for this purpose, the current context can be > copied > via the copy_context function, and manipulated prior to the call to > run(). > I agree that run() comes out of nowhere but I'd suggest a simpler fix -- just say "Instead, Context.run() must be used, see below." > > > > contextvars.ContextVar > > ---------------------- > > > > The ``ContextVar`` class has the following constructor signature: > > ``ContextVar(name, *, default=_NO_DEFAULT)``. The ``name`` parameter > > is used only for introspection and debug purposes, and is exposed > > as a read-only ``ContextVar.name`` attribute. The ``default`` > > parameter is optional. Example:: > > > > # Declare a context variable 'var' with the default value 42. > > var = ContextVar('var', default=42) > > > > (The ``_NO_DEFAULT`` is an internal sentinel object used to > > detect if the default value was provided.) > > My first thought was that default was the context variable's initial > value. But > if that's what it is, why not call it that? If the default has another > effect > as well as being the initial value, maybe clarify here what that is? > IMO it's more and different than the "initial value". The ContextVar never gets set directly to the default -- you can verify this by checking "var in ctx" for a variable that has a default but isn't set -- it's not present. It really is used as the "default default" by ContextVar.get(). That's not an implementation detail. > > ``ContextVar.get()`` returns a value for context variable from the > > current ``Context``:: > > > > # Get the value of `var`. 
> > var.get() > > > > ``ContextVar.set(value) -> Token`` is used to set a new value for > > the context variable in the current ``Context``:: > > > > # Set the variable 'var' to 1 in the current context. > > var.set(1) > > > > ``ContextVar.reset(token)`` is used to reset the variable in the > > current context to the value it had before the ``set()`` operation > > that created the ``token``:: > > > > assert var.get(None) is None > > get doesn't take an argument. Typo? > Actually it does, the argument specifies a default (to override the "default default" set in the constructor). However this hasn't been mentioned yet at this point (the description of ContextVar.get() earlier doesn't mention it, only the implementation below). It would be good to update the earlier description of ContextVar.get() to mention the optional default (and how it interacts with the "default default"). > asyncio > ------- > [...] > > > > C API > > ----- > > > [...] > > I haven't commented on these as they aren't my area of expertise. > (Too bad, since there's an important clue about the mutability of Context hidden in this section! :-) > > Implementation > > ============== > > > > This section explains high-level implementation details in > > pseudo-code. Some optimizations are omitted to keep this section > > short and clear. > > Again, I'm ignoring this as I don't really have an interest in how the > facility > is implemented. > (Again, too bad, since despite the section heading this acts as a pseudo-code specification that is much more exact than the "specification" section above.) > > Implementation Notes > > ==================== > > > > * The internal immutable dictionary for ``Context`` is implemented > > using Hash Array Mapped Tries (HAMT). They allow for O(log N) > > ``set`` operation, and for O(1) ``copy_context()`` function, where > > *N* is the number of items in the dictionary. For a detailed > > analysis of HAMT performance please refer to :pep:`550` [1]_. 
> > Would it be worth exposing this data structure elsewhere, in case > other uses for it exist? > I've asked Yury this several times, but he's shy about exposing it. Maybe it's better to wait until 3.8 so the implementation and its API can stabilize a bit before the API is constrained by backwards compatibility. > > * ``ContextVar.get()`` has an internal cache for the most recent > > value, which allows to bypass a hash lookup. This is similar > > to the optimization the ``decimal`` module implements to > > retrieve its context from ``PyThreadState_GetDict()``. > > See :pep:`550` which explains the implementation of the cache > > in a great detail. > > > > Should the cache (or at least the performance guarantees it implies) be > part of > the spec? Do we care if other implementations fail to implement a cache? > IMO it's a quality-of-implementation issue, but the speed of the CPython implementation plays an important role in acceptance of the PEP (since we don't want to slow down e.g. asyncio task creation). -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Jan 5 12:58:33 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 09:58:33 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> Message-ID: Hm. I don't know that people will conclude that checking for a dataclass is an anti-pattern. They'll probably just invent a myriad of different hacks like the one you showed. I recommend making it public. I still worry a bit about ClassVar and InitVar being potentially useful but I concede I have no use case so I'll drop it. On Fri, Jan 5, 2018 at 8:43 AM, Eric V. 
Smith wrote: > On 1/5/2018 11:24 AM, Guido van Rossum wrote: > >> On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith > > wrote: >> >> On 1/2/2018 12:01 AM, Guido van Rossum wrote: >> >> Yes, there's a class variable (__dataclass_fields__) that >> identifies the parent fields. The PEP doesn't mention this or >> the fact that special methods (like __repr__ and __init__) can >> tell whether a base class is a dataclass. It probably should >> though. (@Eric) >> >> >> I think that's covered in this section: >> https://www.python.org/dev/peps/pep-0557/#inheritance >> >> >> >> I was specifically talking about the name and contents of >> __dataclass_fields__, which are not documented by the PEP. I expect it's >> inevitable that people will be looking at this (since they can see it in >> the source code). Or do you recommend that people use dataclasses.fields() >> and catch ValueError? >> > > The expectation is to use dataclasses.fields(). Both it and > __dataclass_fields__ contain the fields for this class and the parents. The > only difference is the pseudo-fields. > > I can add some words describing .fields() returning which fields are > present. > > I notice that _isdataclass() exists but is private and I don't recall why. >> > > I think the argument was that it's an anti-pattern, and if you really want > to know, just call dataclasses.fields() and catch the TypeError. I have > this in a helper file: > > def isdataclass(obj): > """Returns True for dataclass classes and instances.""" > try: > dataclasses.fields(obj) > return True > except TypeError: > return False > > > (Also now I'm curious what > >> the "pseudo-fields" are that fields() ignores, but that's OT.) >> > > ClassVar and InitVar "fields". dataclasses.fields() doesn't return them. > > Eric. 
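The behavior Eric describes can be demonstrated directly — fields() accepts both classes and instances, raises TypeError for anything else (which is exactly what the isdataclass() helper relies on), and skips ClassVar/InitVar pseudo-fields (this sketch assumes the dataclasses module as it landed in Python 3.7; the Point class is illustrative):

```python
import dataclasses
from typing import ClassVar

@dataclasses.dataclass
class Point:
    x: int
    y: int = 0
    # A pseudo-field: present in __dataclass_fields__, but not
    # returned by dataclasses.fields().
    label: ClassVar[str] = "point"

# fields() accepts both the class and an instance.
assert [f.name for f in dataclasses.fields(Point)] == ["x", "y"]
assert [f.name for f in dataclasses.fields(Point(1, 2))] == ["x", "y"]

# Non-dataclasses raise TypeError.
try:
    dataclasses.fields(42)
    raised = False
except TypeError:
    raised = True
assert raised

# The pseudo-field is still visible in the raw class variable.
assert "label" in Point.__dataclass_fields__
```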
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Fri Jan 5 13:48:32 2018 From: random832 at fastmail.com (Random832) Date: Fri, 05 Jan 2018 13:48:32 -0500 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: References: Message-ID: <1515178112.587444.1225595544.053ABAE8@webmail.messagingengine.com> On Fri, Jan 5, 2018, at 10:47, Guido van Rossum wrote: > I don't recall (though someone with more time might find the discussion in > the archives or on the tracker). It was never implemented and I think it > shouldn't be. So we might as well update the PEP. It wouldn't be > particularly useful, since (by definition) the function that declares the > nonlocal variable is not its owner, and hence it's unlikely to make sense > to initialize it here. The same reasoning applies to global BTW. I'm not so sure... The only situation in which you're *required* to declare a nonlocal/global variable, after all, is if you intend to assign it - a name that you never assign is presumed to be non-local. The description in the PEP also applies to augmented assignments, and "global some_counter; some_counter += 1" is certainly a pattern I've needed in the past. The PEP also quotes you as endorsing this for global. https://mail.python.org/pipermail/python-3000/2006-November/004166.html From eric at trueblade.com Fri Jan 5 14:06:38 2018 From: eric at trueblade.com (Eric V. 
Smith) Date: Fri, 5 Jan 2018 14:06:38 -0500
Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses
In-Reply-To: 
References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com>
Message-ID: <64610599-a65f-2709-7abd-e97f5698df5f@trueblade.com>

On 1/5/2018 12:58 PM, Guido van Rossum wrote:
> Hm. I don't know that people will conclude that checking for a dataclass
> is an anti-pattern. They'll probably just invent a myriad of different
> hacks like the one you showed. I recommend making it public.

I'm trying to track down the original discussion. We got bogged down on
whether it worked for classes or instances or both, then we got tied up
in naming it (surprise!), then it looks like we decided to just not
include it since you could make those decisions for yourself.

I think the discussion is buried in this thread:
https://mail.python.org/pipermail/python-dev/2017-November/150966.html

Which references:
https://github.com/ericvsmith/dataclasses/issues/99

So, ignoring the naming issue, I think if we want to revive it, the
question is: should isdataclass() return True on just instances, just
classes, or both? And should it ever raise an exception, or just return
False?

> I still worry a bit about ClassVar and InitVar being potentially useful
> but I concede I have no use case so I'll drop it.

IIRC, we decided that we could add a parameter to dataclasses.fields()
if we ever wanted to return pseudo-fields. But no one came up with a
use case.

Eric.

> On Fri, Jan 5, 2018 at 8:43 AM, Eric V. Smith wrote:
>
>     On 1/5/2018 11:24 AM, Guido van Rossum wrote:
>
>         On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith wrote:
>
>             On 1/2/2018 12:01 AM, Guido van Rossum wrote:
>
>                 Yes, there's a class variable (__dataclass_fields__) that
>                 identifies the parent fields. The PEP doesn't mention this or
>                 the fact that special methods (like __repr__ and __init__) can
>                 tell whether a base class is a dataclass. It probably should
>                 though. (@Eric)
>
>             I think that's covered in this section:
>             https://www.python.org/dev/peps/pep-0557/#inheritance
>
>         I was specifically talking about the name and contents of
>         __dataclass_fields__, which are not documented by the PEP. I
>         expect it's inevitable that people will be looking at this
>         (since they can see it in the source code). Or do you recommend
>         that people use dataclasses.fields() and catch ValueError?
>
>     The expectation is to use dataclasses.fields(). Both it and
>     __dataclass_fields__ contain the fields for this class and the
>     parents. The only difference is the pseudo-fields.
>
>     I can add some words describing which fields .fields() returns.
>
>         I notice that _isdataclass() exists but is private and I don't
>         recall why.
>
>     I think the argument was that it's an anti-pattern, and if you
>     really want to know, just call dataclasses.fields() and catch the
>     TypeError. I have this in a helper file:
>
>     def isdataclass(obj):
>         """Returns True for dataclass classes and instances."""
>         try:
>             dataclasses.fields(obj)
>             return True
>         except TypeError:
>             return False
>
>         (Also now I'm curious what the "pseudo-fields" are that
>         fields() ignores, but that's OT.)
>
>     ClassVar and InitVar "fields". dataclasses.fields() doesn't return
>     them.
>
>     Eric.
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > -- > --Guido van Rossum (python.org/~guido ) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > From guido at python.org Fri Jan 5 14:07:17 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 11:07:17 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: <1515178112.587444.1225595544.053ABAE8@webmail.messagingengine.com> References: <1515178112.587444.1225595544.053ABAE8@webmail.messagingengine.com> Message-ID: Yeah, but I've changed my mind on this -- I think it's needless added complexity that helps save one line of code in very few use cases. And you don't really think the PEP endorses `nonlocal foo += 1` do you? On Fri, Jan 5, 2018 at 10:48 AM, Random832 wrote: > On Fri, Jan 5, 2018, at 10:47, Guido van Rossum wrote: > > I don't recall (though someone with more time might find the discussion > in > > the archives or on the tracker). It was never implemented and I think it > > shouldn't be. So we might as well update the PEP. It wouldn't be > > particularly useful, since (by definition) the function that declares the > > nonlocal variable is not its owner, and hence it's unlikely to make sense > > to initialize it here. The same reasoning applies to global BTW. > > I'm not so sure... > > The only situation in which you're *required* to declare a nonlocal/global > variable, after all, is if you intend to assign it - a name that you never > assign is presumed to be non-local. 
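The rule Random832 appeals to — that a declaration is required precisely because the name is assigned — is the familiar closure-counter pattern (a minimal sketch):

```python
def make_counter():
    count = 0

    def bump():
        # Merely reading 'count' would need no declaration; assigning
        # to it without 'nonlocal' would make it local to bump(), and
        # the augmented assignment would raise UnboundLocalError.
        nonlocal count
        count += 1
        return count

    return bump

bump = make_counter()
assert bump() == 1
assert bump() == 2
```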
> The description in the PEP also applies to augmented assignments, and
> "global some_counter; some_counter += 1" is certainly a pattern I've
> needed in the past.
>
> The PEP also quotes you as endorsing this for global.
> https://mail.python.org/pipermail/python-3000/2006-November/004166.html
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org Fri Jan 5 14:09:21 2018
From: guido at python.org (Guido van Rossum)
Date: Fri, 5 Jan 2018 11:09:21 -0800
Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses
In-Reply-To: <64610599-a65f-2709-7abd-e97f5698df5f@trueblade.com>
References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> <64610599-a65f-2709-7abd-e97f5698df5f@trueblade.com>
Message-ID: 

I'm normally no big fan of things that take either a class or an
instance, but since fields() does this, I think is_dataclass() should
too. And that's the name I'd choose. OK on the pseudo-fields.

On Fri, Jan 5, 2018 at 11:06 AM, Eric V. Smith wrote:
> On 1/5/2018 12:58 PM, Guido van Rossum wrote:
>> Hm. I don't know that people will conclude that checking for a dataclass
>> is an anti-pattern. They'll probably just invent a myriad of different
>> hacks like the one you showed. I recommend making it public.
>
> I'm trying to track down the original discussion. We got bogged down on
> whether it worked for classes or instances or both, then we got tied up in
> naming it (surprise!), then it looks like we decided to just not include it
> since you could make those decisions for yourself.
> > I think the discussion is buried in this thread: > https://mail.python.org/pipermail/python-dev/2017-November/150966.html > > Which references: > https://github.com/ericvsmith/dataclasses/issues/99 > > So, ignoring the naming issue, I think if we want to revive it, the > question is: should isdataclass() return True on just instances, just > classes, or both? And should it ever raise an exception, or just return > False? > > I still worry a bit about ClassVar and InitVar being potentially useful >> but I concede I have no use case so I'll drop it. >> > > IIRC, we decided that we could add a parameter to dataclasses.fields() if > we ever wanted to return pseudo-fields. But no one came up with a use case. > > Eric. > > >> On Fri, Jan 5, 2018 at 8:43 AM, Eric V. Smith > > wrote: >> >> On 1/5/2018 11:24 AM, Guido van Rossum wrote: >> >> On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith >> >> >> wrote: >> >> On 1/2/2018 12:01 AM, Guido van Rossum wrote: >> >> Yes, there's a class variable (__dataclass_fields__) that >> identifies the parent fields. The PEP doesn't mention >> this or >> the fact that special methods (like __repr__ and >> __init__) can >> tell whether a base class is a dataclass. It probably >> should >> though. (@Eric) >> >> >> I think that's covered in this section: >> https://www.python.org/dev/peps/pep-0557/#inheritance >> >> > > >> >> >> I was specifically talking about the name and contents of >> __dataclass_fields__, which are not documented by the PEP. I >> expect it's inevitable that people will be looking at this >> (since they can see it in the source code). Or do you recommend >> that people use dataclasses.fields() and catch ValueError? >> >> >> The expectation is to use dataclasses.fields(). Both it and >> __dataclass_fields__ contain the fields for this class and the >> parents. The only difference is the pseudo-fields. >> >> I can add some words describing .fields() returning which fields are >> present. 
>> >> I notice that _isdataclass() exists but is private and I don't >> recall why. >> >> >> I think the argument was that it's an anti-pattern, and if you >> really want to know, just call dataclasses.fields() and catch the >> TypeError. I have this in a helper file: >> >> def isdataclass(obj): >> """Returns True for dataclass classes and instances.""" >> try: >> dataclasses.fields(obj) >> return True >> except TypeError: >> return False >> >> >> (Also now I'm curious what >> >> the "pseudo-fields" are that fields() ignores, but that's OT.) >> >> >> ClassVar and InitVar "fields". dataclasses.fields() doesn't return >> them. >> >> Eric. >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> >> >> -- >> --Guido van Rossum (python.org/~guido ) >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba- >> python-dev%40trueblade.com >> >> > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Fri Jan 5 14:11:03 2018 From: eric at trueblade.com (Eric V. 
Smith) Date: Fri, 5 Jan 2018 14:11:03 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> <64610599-a65f-2709-7abd-e97f5698df5f@trueblade.com> Message-ID: <07766052-accb-7a9a-6f4d-704567e81834@trueblade.com> On 1/5/2018 2:09 PM, Guido van Rossum wrote: > I'm normally no big fan of things that take either a class or an > instance, but since fields() does this, I think is_dataclass() should > to. And that's the name I'd choose. OK on the pseudo-fields. Sounds good. I'll open a bpo issue. Eric. > On Fri, Jan 5, 2018 at 11:06 AM, Eric V. Smith > wrote: > > On 1/5/2018 12:58 PM, Guido van Rossum wrote: > > Hm. I don't know that people will conclude that checking for a > dataclass is an anti-pattern. They'll probably just invent a > myriad of different hacks like the one you showed. I recommend > making it public. > > > I'm trying to track down the original discussion. We got bogged down > on whether it worked for classes or instances or both, then we got > tied up in naming it (surprise!), then it looks like we decided to > just not include it since you could make those decisions for yourself. > > I think the discussion is buried in this thread: > https://mail.python.org/pipermail/python-dev/2017-November/150966.html > > > Which references: > https://github.com/ericvsmith/dataclasses/issues/99 > > > So, ignoring the naming issue, I think if we want to revive it, the > question is: should isdataclass() return True on just instances, > just classes, or both? And should it ever raise an exception, or > just return False? > > I still worry a bit about ClassVar and InitVar being potentially > useful but I concede I have no use case so I'll drop it. 
> > IIRC, we decided that we could add a parameter to
> > dataclasses.fields() if we ever wanted to return pseudo-fields. But
> > no one came up with a use case.
> >
> > Eric.
> >
> > On Fri, Jan 5, 2018 at 8:43 AM, Eric V. Smith wrote:
> >
> >     On 1/5/2018 11:24 AM, Guido van Rossum wrote:
> >
> >         On Fri, Jan 5, 2018 at 5:08 AM, Eric V. Smith wrote:
> >
> >             On 1/2/2018 12:01 AM, Guido van Rossum wrote:
> >
> >                 Yes, there's a class variable (__dataclass_fields__) that
> >                 identifies the parent fields. The PEP doesn't mention this or
> >                 the fact that special methods (like __repr__ and __init__) can
> >                 tell whether a base class is a dataclass. It probably should
> >                 though. (@Eric)
> >
> >             I think that's covered in this section:
> >             https://www.python.org/dev/peps/pep-0557/#inheritance
> >
> >         I was specifically talking about the name and contents of
> >         __dataclass_fields__, which are not documented by the PEP. I
> >         expect it's inevitable that people will be looking at this
> >         (since they can see it in the source code). Or do you recommend
> >         that people use dataclasses.fields() and catch ValueError?
> >
> >     The expectation is to use dataclasses.fields(). Both it and
> >     __dataclass_fields__ contain the fields for this class and the
> >     parents. The only difference is the pseudo-fields.
> >
> >     I can add some words describing .fields() returning which fields
> >     are present.
> >
> >         I notice that _isdataclass() exists but is private and I don't
> >         recall why.
> >
> >     I think the argument was that it's an anti-pattern, and if you
> >     really want to know, just call dataclasses.fields() and catch the
> >     TypeError. I have this in a helper file:
> >
> >     def isdataclass(obj):
> >         """Returns True for dataclass classes and instances."""
> >         try:
> >             dataclasses.fields(obj)
> >             return True
> >         except TypeError:
> >             return False
> >
> >         (Also now I'm curious what the "pseudo-fields" are that
> >         fields() ignores, but that's OT.)
> >
> >     ClassVar and InitVar "fields". dataclasses.fields() doesn't
> >     return them.
> >
> >     Eric.
> >
> >     _______________________________________________
> >     Python-Dev mailing list
> >     Python-Dev at python.org
> >     https://mail.python.org/mailman/listinfo/python-dev
> >     Unsubscribe:
> >     https://mail.python.org/mailman/options/python-dev/guido%40python.org
> >
> > --
> > --Guido van Rossum (python.org/~guido)
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com
>
> --
> --Guido van Rossum (python.org/~guido)

From brett at python.org Fri Jan 5 14:34:54 2018 From: brett at python.org (Brett Cannon) Date: Fri, 05 Jan 2018 19:34:54 +0000 Subject: [Python-Dev] Unique loader per module In-Reply-To: References: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> Message-ID: Barry and I had a meeting at work today and we decided to go with Nick's idea of using a get_resource_reader(fullname) method on loaders. We aren't going to go with an ABC and simply depend on the method existing as implementing the API (and then returning None if the loader can't handle the specified module). I'll have a PR to update the docs out hopefully today for those that care. On Thu, 4 Jan 2018 at 23:14 Nick Coghlan wrote: > On 3 January 2018 at 06:35, Barry Warsaw wrote: > > Brett doesn't like this, for several reasons (quoting): > > > > 1. redundant API in all cases where the loader is unique to the module > > > > 2.
the memory savings of sharing a loader is small > > 3. it's implementation complexity/overhead for an optimization case. > > > > The second solution, and the one Brett prefers, is to reimplement zip > importer to not use a shared loader. This may not be that difficult, if > for example we were to use a delegate loader wrapping a shared loader. > > > > The bigger problem IMHO is two-fold: > > > > 1. It would be backward incompatible. If there?s any code out there > expecting a shared loader in zipimport, it would break > > 2. More problematic is that we?d have to impose an additional > requirement on loaders - that they always be unique per module, > contradicting the advice in PEP 302 > > We added module.__spec__.loader_state as part of PEP 451 precisely so > shared loaders had a place to store per-module state without have to > switch to a unique-loader-per-module model. > > I think the main reason you're seeing a problem here is because > ResourceReader has currently been designed to be implemented directly > by loaders, rather than being a subcomponent that you can request > *from* a loader. > > If you instead had an indirection API (that could optionally return > self in the case of non-shared loaders), you'd keep the current > resource reader method signatures, but the way you'd access the itself > would be: > > resources = module.__spec__.loader.get_resource_reader(module) > # resources implements the ResourceReader ABC > > For actual use, the loader protocol could be hidden behind a helper > function: > > resources = importlib_resources.get_resource_reader(module) > > For a shared loader, get_resource_reader(module) would return a new > *non*-shared resource reader (perhaps caching it in > __spec__.loader_state). > > For a non-shared loader, get_resource_reader(module) would just return > self. > > In both cases, we'd recommend that loaders ensure "self is > module.__spec__.loader" as part of their get_resource_reader() > implementation. > > Cheers, > Nick. 
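Spelled out as a runnable sketch, the indirection works like this: a shared loader hands back a per-module reader and caches it in __spec__.loader_state, while a loader that is already unique to its module simply returns itself. (The class names, and the use of SimpleNamespace as a stand-in module and spec, are illustrative only -- they are not the final API.)

```python
import types

class PerModuleReader:
    """Illustrative stand-in for an object implementing the ResourceReader API."""
    def __init__(self, fullname):
        self.fullname = fullname

class SharedLoader:
    """A zipimport-style loader shared by many modules."""
    def get_resource_reader(self, module):
        # Cache a non-shared reader in the module spec's loader_state,
        # so the shared loader itself stays stateless.
        state = module.__spec__.loader_state
        if state is None:
            state = module.__spec__.loader_state = PerModuleReader(module.__name__)
        return state

class UniqueLoader:
    """A loader unique to one module can just return itself."""
    def get_resource_reader(self, module):
        return self

# Demo with a faked-out module and spec:
shared = SharedLoader()
spec = types.SimpleNamespace(loader=shared, loader_state=None)
mod = types.SimpleNamespace(__spec__=spec, __name__='pkg.mod')
reader = shared.get_resource_reader(mod)
assert reader is shared.get_resource_reader(mod)  # cached per module
```

A real implementation would also want the recommended "self is module.__spec__.loader" sanity check; it is omitted here for brevity.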
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Fri Jan 5 15:51:11 2018 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 5 Jan 2018 15:51:11 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: <07766052-accb-7a9a-6f4d-704567e81834@trueblade.com> References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <6b821c9a-c5a5-14c7-8085-e9ea15bd88bd@trueblade.com> <64610599-a65f-2709-7abd-e97f5698df5f@trueblade.com> <07766052-accb-7a9a-6f4d-704567e81834@trueblade.com> Message-ID: <23696f90-cd89-7214-c536-67a5c939fc83@trueblade.com> On 1/5/2018 2:11 PM, Eric V. Smith wrote: > On 1/5/2018 2:09 PM, Guido van Rossum wrote: >> I'm normally no big fan of things that take either a class or an >> instance, but since fields() does this, I think is_dataclass() should >> to. And that's the name I'd choose. OK on the pseudo-fields. > > Sounds good. I'll open a bpo issue. https://bugs.python.org/issue32499 I'm slowly reading through the rest of this thread and will respond this weekend. Eric. From chris.jerdonek at gmail.com Fri Jan 5 17:43:24 2018 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Fri, 5 Jan 2018 14:43:24 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 8:29 AM, Guido van Rossum wrote: > On Fri, Jan 5, 2018 at 2:05 AM, Victor Stinner > wrote: >> >> Currently, Context.get(var) returns None when "var in context" is false. >> That's surprising and different than var.get(), especially when var has a >> default value. 
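Concretely, the asymmetry Victor describes looks like this with the contextvars API as it eventually shipped in Python 3.7 (at the time of this thread only the PEP 567 reference implementation existed):

```python
from contextvars import ContextVar, copy_context

var = ContextVar('var', default=42)
ctx = copy_context()

# The var was never set, so it has no entry in the Context mapping,
# and the Mapping-style Context.get() falls back to None...
print(ctx.get(var))  # None

# ...while ContextVar.get() falls back to the variable's own default.
print(var.get())     # 42
```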
> > I don't see the problem. Context.get() is inherited from Mapping.get(); if > you want it to raise use Context.__getitem__() (i.e. ctx[var]). Lots of > classes define get() methods with various behaviors. Context.get() and > ContextVar.get() are just different -- ContextVar is not a Mapping. One thing that I think could be contributing to confusion around the proposed API is that there is a circular relationship between Context and ContextVar, e.g. ContextVar.get() does a lookup in the current Context with "self" (the ContextVar object) as a key.... Also, it's the "keys" (the ContextVar objects) that have the get() method that should be used rather than the container object (the Context). This gives the confusing *feeling* of a mapping of mappings. This is different from how the containers people are most familiar with work -- like dict. Is there a reason ContextVar needs to be exposed publicly at all? For example, the API could use string keys like contextvars.get(name) or Context.get(name) (class method). There could be separate functions to initialize keys with desired default values, etc (internally creating ContextVars as needed). If the issue is key collisions, it seems like this could be handled by namespacing or using objects (namespaced by modules) instead of strings. Maybe this approach was ruled out early on in discussions, but I don't see it mentioned in the PEP. --Chris > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com > From njs at pobox.com Fri Jan 5 17:59:00 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Jan 2018 14:59:00 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? 
In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 7:47 AM, Guido van Rossum wrote: > I don't recall (though someone with more time might find the discussion in > the archives or on the tracker). It was never implemented and I think it > shouldn't be. So we might as well update the PEP. It wouldn't be > particularly useful, since (by definition) the function that declares the > nonlocal variable is not its owner, and hence it's unlikely to make sense to > initialize it here. The same reasoning applies to global BTW. The reason I got curious and looked into it is that recently I've been finding myself using it a lot for passing values back out of concurrent functions (examples below). So it does have use cases, but I agree that it's not clear how much value is really added by saving a line here. Maybe in a year or two if this style catches on as idiomatic then it'd be worth revisiting.

#######

Example: run several functions, return the value of the one that finishes first (or non-deterministic if several finish at ~the same time):

async def race(*async_fns):
    async with trio.open_nursery() as nursery:
        winning_value = None

        async def driver(async_fn):
            nonlocal winning_value
            winning_value = await async_fn()
            # we're done, so cancel competitors
            nursery.cancel_scope.cancel()

        for async_fn in async_fns:
            nursery.start_soon(driver, async_fn)

    return winning_value

#######

Example: an async iterator version of zip, with concurrent evaluation of the different iterators (based on an idea from github user @matham: https://github.com/python-trio/trio/issues/393):

async def async_zip(*aiterables):
    aiterators = [aiterable.__aiter__() for aiterable in aiterables]
    done = False
    while True:
        items = [None] * len(aiterators)

        async def fill_in(i):
            try:
                items[i] = await aiterators[i].__anext__()
            except StopAsyncIteration:
                nonlocal done
                done = True

        async with trio.open_nursery() as nursery:
            for i in range(len(aiterators)):
                nursery.start_soon(fill_in, i)

        if done:
            break
        yield tuple(items)

-n

--
Nathaniel J. Smith -- https://vorpus.org From guido at python.org Fri Jan 5 18:02:15 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 15:02:15 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: References: Message-ID: I don't like those examples -- "nonlocal foo = bar" sounds like bar is used as the *initializer*, but it actually is just an assignment that overwrites the actual initial value. IMO those shouldn't be combined. On Fri, Jan 5, 2018 at 2:59 PM, Nathaniel Smith wrote: > On Fri, Jan 5, 2018 at 7:47 AM, Guido van Rossum wrote: > > I don't recall (though someone with more time might find the discussion > in > > the archives or on the tracker). It was never implemented and I think it > > shouldn't be. So we might as well update the PEP. It wouldn't be > > particularly useful, since (by definition) the function that declares the > > nonlocal variable is not its owner, and hence it's unlikely to make > sense to > > initialize it here. The same reasoning applies to global BTW. > > The reason I got curious and looked into it is that recently I've been > finding myself using it a lot for passing values back out of > concurrent functions (examples below). So it does have use cases, but > I agree that it's not clear how much value is really added by saving a > line here. Maybe in a year or two if this style catches on as > idiomatic then it'd be worth revisiting. 
> > ####### > > Example: run several functions, return the value of the one that > finishes first (or non-deterministic if several finish at ~the same > time): > > async def race(*async_fns): > async with trio.open_nursery() as nursery: > winning_value = None > > async def driver(async_fn): > nonlocal winning_value > winning_value = await async_fn() > # we're done, so cancel competitors > nursery.cancel_scope.cancel() > > for async_fn in async_fns: > nursery.start_soon(driver, async_fn) > > return winner > > ####### > > Example: an async iterator version of zip, with concurrent evaluation > of the different iterators (based on an idea from github user @matham: > https://github.com/python-trio/trio/issues/393): > > async def async_zip(*aiterables): > aiterators = [aiterable.__aiter__() for aiterable in aiterables] > done = False > while True: > items = [None] * len(aiterators) > > async def fill_in(i): > try: > items[i] = await aiterators[i].__anext__() > except StopAsyncIteration: > nonlocal done > done = True > > async with trio.open_nursery() as nursery: > for i in range(len(aiterators)): > nursery.start_soon(fill_in, i) > > if done: > break > > yield tuple(items) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Jan 5 18:00:11 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Jan 2018 15:00:11 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 2:43 PM, Chris Jerdonek wrote: > On Fri, Jan 5, 2018 at 8:29 AM, Guido van Rossum wrote: > > On Fri, Jan 5, 2018 at 2:05 AM, Victor Stinner > > > wrote: > >> > >> Currently, Context.get(var) returns None when "var in context" is false. > >> That's surprising and different than var.get(), especially when var has > a > >> default value. > > > > I don't see the problem. 
Context.get() is inherited from Mapping.get(); > if > > you want it to raise use Context.__getitem__() (i.e. ctx[var]). Lots of > > classes define get() methods with various behaviors. Context.get() and > > ContextVar.get() are just different -- ContextVar is not a Mapping. > > One thing that I think could be contributing to confusion around the > proposed API is that there is a circular relationship between Context > and ContextVar, e.g. ContextVar.get() does a lookup in the current > Context with "self" (the ContextVar object) as a key.... > > Also, it's the "keys" (the ContextVar objects) that have the get() > method that should be used rather than the container object (the > Context). This gives the confusing *feeling* of a mapping of mappings. > This is different from how the containers people are most familiar > with work -- like dict. > Only if you think of ContextVar as a mapping, which is not at all how it works. (If anything, its get() method is more like that on weak references.) > Is there a reason ContextVar needs to be exposed publicly at all? For > example, the API could use string keys like contextvars.get(name) or > Context.get(name) (class method). There could be separate functions to > initialize keys with desired default values, etc (internally creating > ContextVars as needed). > > If the issue is key collisions, it seems like this could be handled by > namespacing or using objects (namespaced by modules) instead of > strings. > > Maybe this approach was ruled out early on in discussions, but I don't > see it mentioned in the PEP. > Yes this was litigated repeatedly. Maybe the PEP 550 discussion or Rejected Ideas section have more. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Jan 5 18:14:16 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 5 Jan 2018 23:14:16 +0000 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? 
In-Reply-To: References: Message-ID: On 5 January 2018 at 23:02, Guido van Rossum wrote: > I don't like those examples -- "nonlocal foo = bar" sounds like bar is used > as the *initializer*, but it actually is just an assignment that overwrites > the actual initial value. IMO those shouldn't be combined. That was my immediate reaction too. I would find the "nonlocal x = value" version more confusing for this reason. Paul From njs at pobox.com Fri Jan 5 18:26:42 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Jan 2018 15:26:42 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 3:18 PM, Nathaniel Smith wrote: > I think the fix is a little bit cumbersome, but straightforward, and > actually *simplifies* caching. [...] > And then the caching in get() becomes: > > def get(self): > if tstate->current_context != self->last_context: > # Update cache > self->last_value = tstate->current_context_data->hamt_lookup(self) > self->last_context = tstate->current_context > return self->last_value Actually, this trick of using the Context* as the cache validation key doesn't work :-/. The problem is that the cache doesn't hold a strong reference to the Context, so there's an ABA problem [1] if the Context is deallocated, and then later the same memory location gets used for a new Context object. It could be fixed by using weakref callbacks, but that's not really simpler than the current counter-based cache validation logic. -n [1] https://en.wikipedia.org/wiki/ABA_problem -- Nathaniel J. Smith -- https://vorpus.org From elprans at gmail.com Fri Jan 5 18:23:34 2018 From: elprans at gmail.com (Elvis Pranskevichus) Date: Fri, 05 Jan 2018 18:23:34 -0500 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: <2012561.z8avARLVRT@hammer.magicstack.net> On Friday, January 5, 2018 5:43:24 PM EST Chris Jerdonek wrote: > Is there a reason ContextVar needs to be exposed publicly at all? 
For > example, the API could use string keys like contextvars.get(name) or > Context.get(name) (class method). There could be separate functions > to initialize keys with desired default values, etc (internally > creating ContextVars as needed). Mainly because the design of contextvars follows the spirit of threading.local() (but see [1]). A ContextVar is not unlike a global variable, and we don't normally write globals()['foo']. Elvis [1] https://www.python.org/dev/peps/pep-0550/#replication-of-threading-local-interface From njs at pobox.com Fri Jan 5 18:31:06 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Jan 2018 15:31:06 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 4:48 PM, Guido van Rossum wrote: > Your suggestions sound reasonable, but we are now running into a logistical > problem -- I don't want to decide this unilaterally but Yury is on vacation > until Jan 15. That gives us at most 2 weeks for approval of the PEP and > review + commit of the implementation > (https://github.com/python/cpython/pull/5027) before the 3.7.0 feature > freeze / beta (Jan 29). I don't think there are any fundamental disagreements here, so I guess we can sort something out pretty quickly once Yury gets back. > [Even later] Re: your other suggestion, why couldn't the threadstate contain > just the Context? It seems one would just write > tstate->current_context->_data everywhere instead of > tstate->current_context_data. Yeah, that's probably better :-). -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Fri Jan 5 18:45:00 2018 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Jan 2018 15:45:00 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 2:43 PM, Chris Jerdonek wrote: > One thing that I think could be contributing to confusion around the > proposed API is that there is a circular relationship between Context > and ContextVar, e.g. 
ContextVar.get() does a lookup in the current > Context with "self" (the ContextVar object) as a key.... It's maybe helpful to remember that while Context is an important part of the overall API, it's not intended to be "user facing" at all. For folks writing libraries that need context-local state, they just use ContextVar and ignore the rest of the API. > Also, it's the "keys" (the ContextVar objects) that have the get() > method that should be used rather than the container object (the > Context). This gives the confusing *feeling* of a mapping of mappings. > This is different from how the containers people are most familiar > with work -- like dict. > > Is there a reason ContextVar needs to be exposed publicly at all? For > example, the API could use string keys like contextvars.get(name) or > Context.get(name) (class method). There could be separate functions to > initialize keys with desired default values, etc (internally creating > ContextVars as needed). > > If the issue is key collisions, it seems like this could be handled by > namespacing or using objects (namespaced by modules) instead of > strings. With strings you have the problem of collisions, you lose the ability to cache values to make ContextVar.get() really fast (which is a critical requirement for decimal and numpy), and you lose the ability to detect when a ContextVar has been garbage collected and clean up its values. (We're not bothering to implement that kind of GC right now, but it's nice to know that we have the option later if it becomes a problem.) Allowing arbitrary objects as keys somewhat alleviates the problem with collisions, but not the others. What we could do is make the API look like: import contextvars key = contextvars.ContextKey(...) contextvars.set(key, value) value = contextvars.get(key) That's isomorphic to the current proposal -- it's just renaming ContextVar and making the existing get/set methods be globals instead of methods. 
So then it's just a question of aesthetics. To me, the current version looks nicer: from contextvars import ContextVar var = ContextVar(...) var.set(value) value = var.get() In particular, you can't reasonably do 'from contextvars import get, set'; that'd be gross :-). But 'from contextvars import ContextVar' is fine and gives you the whole user-oriented API in one piece. -n -- Nathaniel J. Smith -- https://vorpus.org From benjamin at python.org Sat Jan 6 01:52:00 2018 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 05 Jan 2018 22:52:00 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: References: Message-ID: <1515221520.1164095.1226073416.4858200A@webmail.messagingengine.com> On Fri, Jan 5, 2018, at 01:57, Nathaniel Smith wrote: > Was this just an oversight, or did it get rejected at some point and > no-one remembered to update that PEP? There was an implementation https://bugs.python.org/issue4199. But several years ago, we again reached the conclusion that the feature shouldn't be added. Normally, I think final PEPs shouldn't be updated. But maybe in this case, it's worth deleting those lines to avoid future confusion. From guido at python.org Sat Jan 6 11:43:35 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Jan 2018 08:43:35 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: <1515221520.1164095.1226073416.4858200A@webmail.messagingengine.com> References: <1515221520.1164095.1226073416.4858200A@webmail.messagingengine.com> Message-ID: Maybe we should not delete them outright but add something like "(UPDATE: during later discussions it was decided that this feature shouldn't be added.)" On Fri, Jan 5, 2018 at 10:52 PM, Benjamin Peterson wrote: > > > On Fri, Jan 5, 2018, at 01:57, Nathaniel Smith wrote: > > Was this just an oversight, or did it get rejected at some point and > > no-one remembered to update that PEP? 
> > There was an implementation https://bugs.python.org/issue4199. But > several years ago, we again reached the conclusion that the feature > shouldn't be added. > > Normally, I think final PEPs shouldn't be updated. But maybe in this case, > it's worth deleting those lines to avoid future confusion. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sat Jan 6 11:56:38 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Jan 2018 11:56:38 -0500 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: References: <1515221520.1164095.1226073416.4858200A@webmail.messagingengine.com> Message-ID: <07965BFE-1EC4-4D92-893E-C22D7C6B0598@python.org> On Jan 6, 2018, at 11:43, Guido van Rossum wrote: > > Maybe we should not delete them outright but add something like "(UPDATE: during later discussions it was decided that this feature shouldn't be added.)" +1 -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From random832 at fastmail.com Sat Jan 6 15:59:41 2018 From: random832 at fastmail.com (Random832) Date: Sat, 06 Jan 2018 15:59:41 -0500 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? Message-ID: <1515272381.1913194.1226511928.50009766@webmail.messagingengine.com> On Fri, Jan 5, 2018, at 14:07, Guido van Rossum wrote: > Yeah, but I've changed my mind on this -- I think it's needless added > complexity that helps save one line of code in very few use cases. And you > don't really think the PEP endorses `nonlocal foo += 1` do you? 
The PEP itself is very clear about supporting "augmented assignment", even defining the grammar differently for this case (such that you can't do it with multiple names in a single statement). The grammar shown also supports "nonlocal foo = bar = baz", though the text doesn't mention this form. From benjamin at python.org Sat Jan 6 16:03:59 2018 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 06 Jan 2018 13:03:59 -0800 Subject: [Python-Dev] Whatever happened to 'nonlocal x = y'? In-Reply-To: <07965BFE-1EC4-4D92-893E-C22D7C6B0598@python.org> References: <1515221520.1164095.1226073416.4858200A@webmail.messagingengine.com> <07965BFE-1EC4-4D92-893E-C22D7C6B0598@python.org> Message-ID: <1515272639.1914172.1226518920.306CC3A1@webmail.messagingengine.com> https://github.com/python/peps/commit/2d2ac2d2b66d4e37e8b930f5963735616bddbbe8 On Sat, Jan 6, 2018, at 08:56, Barry Warsaw wrote: > On Jan 6, 2018, at 11:43, Guido van Rossum wrote: > > > > Maybe we should not delete them outright but add something like "(UPDATE: during later discussions it was decided that this feature shouldn't be added.)" > > +1 > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/benjamin%40python.org > Email had 1 attachment: > + signature.asc > 1k (application/pgp-signature) From eric at trueblade.com Sat Jan 6 17:13:49 2018 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 6 Jan 2018 17:13:49 -0500 Subject: [Python-Dev] Is static typing still optional? 
In-Reply-To: <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> Message-ID: <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> On 12/10/2017 5:00 PM, Raymond Hettinger wrote: > > >> On Dec 10, 2017, at 1:37 PM, Eric V. Smith wrote: >> >> On 12/10/2017 4:29 PM, Ivan Levkivskyi wrote: >>> On 10 December 2017 at 22:24, Raymond Hettinger > wrote: >>> Without typing (only the first currently works): >>> Point = namedtuple('Point', ['x', 'y', 'z']) # >>> underlying store is a tuple >>> Point = make_dataclass('Point', ['x', 'y', 'z']) # >>> underlying store is an instance dict >>> Hm, I think this is a bug in implementation. The second form should also work. >> >> Agreed. I've checked this under bpo-32278. >> >> I have a bunch of pending changes for dataclasses. I'll add this. >> >> Eric. > > Thanks Eric and Ivan. You're both very responsive. I appreciate the enormous efforts you're putting in to getting this right. > > I suggest two other fix-ups: > > 1) Let make_dataclass() pass through keyword arguments to _process_class(), so that this will work: > > Point = make_dataclass('Point', ['x', 'y', 'z'], order=True) And I've checked this in under bpo-32279. > 2) Change the default value for "hash" from "None" to "False". This might take a little effort because there is currently an oddity where setting hash=False causes it to be hashable. I'm pretty sure this wasn't intended ;-) I haven't looked at this yet. Eric. From tismer at stackless.com Sun Jan 7 11:17:23 2018 From: tismer at stackless.com (Christian Tismer) Date: Sun, 7 Jan 2018 17:17:23 +0100 Subject: [Python-Dev] subprocess not escaping "^" on Windows Message-ID: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Hi Guys, yes I know there was a lengthy thread on python-dev in 2014 called "subprocess shell=True on Windows doesn't escape ^ character". 
But in the end, I still don't understand why subprocess does escape the double quote when shell=True but not other special characters like "^"? Yes I know that certain characters are escaped under certain Windows versions and others are not. And it is not trivial to make that work correctly in all cases. But I think if we support some escaping at all, then we should also support all special cases. Or what sense should an escape make if it works sometimes and sometimes not? The user would have to know which cases work and which not. But I thought we want to remove exactly that burden from him? ----- As a side note: In most cases where shell=True is found, people seem to need evaluation of the PATH variable. To my understanding, >>> from subprocess import call >>> call(("ls",)) works in Linux, but (with dir) not in Windows. But that is misleading because "dir" is a builtin command but "ls" is not. The same holds for "del" (Windows) and "rm" (Linux). So I thought that using shell=True was a good Thing on windows, but actually it is the start of all evil. Using regular commands like "git" works fine on Windows and Linux without the shell=True parameter. Perhaps it would be a good thing to emulate the builtin programs in python by some shell=True replacement (emulate_shell=True?) to match the normal user expectations without using the shell? Cheers - Chris -- Christian Tismer-Sperling :^) tismer at stackless.com Software Consulting : http://www.stackless.com/ Karl-Liebknecht-Str. 121 : https://github.com/PySide 14482 Potsdam : GPG key -> 0xFB7BEE0E phone +49 173 24 18 776 fax +49 (30) 700143-0023 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: OpenPGP digital signature URL: From eric at trueblade.com Sun Jan 7 12:09:30 2018 From: eric at trueblade.com (Eric V. 
Smith) Date: Sun, 7 Jan 2018 12:09:30 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp> Message-ID: <359d2272-6351-323a-12c0-3023404b6d06@trueblade.com> On 1/3/2018 1:17 PM, Eric V. Smith wrote: > I'll open an issue after I have time to read this thread and comment on it. https://bugs.python.org/issue32513 I need to think through how __eq__ and __ne__ work, as well as the ordering operators. My specific concern with __ne__ is that there's one flag to control their generation, but python will use "not __eq__" if you don't provide __ne__. I need to think through what happens if the user only provides __eq__: does dataclasses do nothing, does it add __ne__, and how does this interact with a base class that does provide __ne__. Eric. From guido at python.org Sun Jan 7 12:20:18 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Jan 2018 09:20:18 -0800 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: I assume you're talking about list2cmdline()? That seems to be used to construct a string that can be passed to `cmd /c "{}"` -- it gets substituted instead of the {}, i.e. surrounded by ". I honestly can't say I follow that code completely, but I see that it escapes double quotes. Why is there a need to escape other characters? Is there a definitive list of special characters somewhere? On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer wrote: > Hi Guys, > > yes I know there was a lengthy thread on python-dev in 2014 > called "subprocess shell=True on Windows doesn't escape ^ character".
> > But in the end, I still don't understand why subprocess does > escape the double quote when shell=True but not other special > characters like "^"? > > Yes I know that certain characters are escaped under certain > Windows versions and others are not. And it is not trivial to make > that work correctly in all cases. But I think if we support > some escaping at all, then we should also support all special > cases. Or what sense should an escape make if it works sometimes > and sometimes not? > > The user would have to know which cases work and which not. But > I thought we want to remove exactly that burden from him? > > ----- > > As a side note: In most cases where shell=True is found, people > seem to need evaluation of the PATH variable. To my understanding, > > >>> from subprocess import call > >>> call(("ls",)) > > works in Linux, but (with dir) not in Windows. But that is misleading > because "dir" is a builtin command but "ls" is not. The same holds for > "del" (Windows) and "rm" (Linux). > > So I thought that using shell=True was a good Thing on windows, > but actually it is the start of all evil. > Using regular commands like "git" works fine on Windows and Linux > without the shell=True parameter. > > Perhaps it would be a good thing to emulate the builtin programs > in python by some shell=True replacement (emulate_shell=True?) > to match the normal user expectations without using the shell? > > Cheers - Chris > > -- > Christian Tismer-Sperling :^) tismer at stackless.com > Software Consulting : http://www.stackless.com/ > Karl-Liebknecht-Str. 
121 : https://github.com/PySide > 14482 Potsdam : GPG key -> 0xFB7BEE0E > phone +49 173 24 18 776 fax +49 (30) 700143-0023 > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Jan 7 12:22:14 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Jan 2018 09:22:14 -0800 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer wrote: > As a side note: In most cases where shell=True is found, people > seem to need evaluation of the PATH variable. To my understanding, > > >>> from subprocess import call > >>> call(("ls",)) > > works in Linux, but (with dir) not in Windows. But that is misleading > because "dir" is a builtin command but "ls" is not. The same holds for > "del" (Windows) and "rm" (Linux). > > So I thought that using shell=True was a good Thing on windows, > but actually it is the start of all evil. > Using regular commands like "git" works fine on Windows and Linux > without the shell=True parameter. > > Perhaps it would be a good thing to emulate the builtin programs > in python by some shell=True replacement (emulate_shell=True?) > to match the normal user expectations without using the shell? > That feels like a terrible idea to me. How do you define "normal user expectations" here? If people want shell builtins they should just use shell=True. (Also note IIUC there are several quite different shells commonly used on Windows, e.g. PowerShell.) 
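The safe pattern both replies circle around can be sketched like this (an illustration only, not code from the thread; the child command here is the Python interpreter itself, purely to keep the example cross-platform):

```python
import subprocess
import sys

# With the default shell=False and a list of arguments, no shell ever
# parses the command line, so metacharacters like ^, |, and " reach the
# child process verbatim -- there is nothing to escape.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "a^b|c"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # -> a^b|c
```

Only when shell=True is requested does the argument string get handed to cmd.exe (or /bin/sh), and only then do that shell's quoting rules come into play.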
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Jan 7 12:25:44 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Jan 2018 09:25:44 -0800 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: <359d2272-6351-323a-12c0-3023404b6d06@trueblade.com> References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp> <359d2272-6351-323a-12c0-3023404b6d06@trueblade.com> Message-ID: On Sun, Jan 7, 2018 at 9:09 AM, Eric V. Smith wrote: > On 1/3/2018 1:17 PM, Eric V. Smith wrote: > >> I?ll open an issue after I have time to read this thread and comment on >> it. >> > > https://bugs.python.org/issue32513 > I need to think though how __eq__ and __ne__ work, as well as the ordering > operators. > > My specific concern with __ne__ is that there's one flag to control their > generation, but python will use "not __eq__" if you don't provide __ne__. I > need to think through what happens if the user only provides __eq__: does > dataclasses do nothing, does it add __ne__, and how does this interact with > a base class that does provide __ne__. Maybe dataclasses should only ever provide __eq__ and always assume Python's default for __ne__ kicks in? If that's not acceptable (maybe there are cases where a user did write an explicit __ne__ that needs to be overridden) I would recommend the following rule: - If there's an __eq__, don't do anything (regardless of whether there's an __ne__) - If there no __eq__ but there is an __ne__, generate __eq__ but don't generate __ne__ - If neither exists, generate both -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tismer at stackless.com Sun Jan 7 13:48:14 2018 From: tismer at stackless.com (Christian Tismer) Date: Sun, 7 Jan 2018 19:48:14 +0100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: That is true. list2cmdline escapes partially, but on NT and Windows10, the "^" must also be escaped, but is not. The "|" pipe symbol must also be escaped by "^", as many others as well. The effect was that passing a rexexp as parameter to a windows program gave me strange effects, and I recognized that "^" was missing. So I was asking for a coherent solution: Escape things completely or omit "shell=True". Yes, there is a list of chars to escape, and it is Windows version dependent. I can provide it if it makes sense. Cheers -- Chris On 07.01.18 18:20, Guido van Rossum wrote: > I assume you're talking about list2cmdline()? That seems to be used to > construct a string that can be passed to `cmd /c "{}"` -- it gets > substituted instead of the {}, i.e. surrounded by ". I honestly can't > say I follow that code completely, but I see that it escapes double > quotes. Why is there a need to escape other characters? Is there a > definitive list of special characters somewhere? > > On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer > wrote: > > Hi Guys, > > yes I know there was a lengthy thread on python-dev in 2014 > called "subprocess shell=True on Windows doesn't escape ^ character". > > But in the end, I still don't understand why subprocess does > escape the double quote when shell=True but not other special > characters like "^"? > > Yes I know that certain characters are escaped under certain > Windows versions and others are not. And it is not trivial to make > that work correctly in all cases. But I think if we support > some escaping at all, then we should also support all special > cases. Or what sense should an escape make if it works sometimes > and sometimes not? 
> > The user would have to know which cases work and which not. But > I thought we want to remove exactly that burden from him? > > ----- > > As a side note: In most cases where shell=True is found, people > seem to need evaluation of the PATH variable. To my understanding, > > >>> from subprocess import call > >>> call(("ls",)) > > works in Linux, but (with dir) not in Windows. But that is misleading > because "dir" is a builtin command but "ls" is not. The same holds for > "del" (Windows) and "rm" (Linux). > > So I thought that using shell=True was a good Thing on windows, > but actually it is the start of all evil. > Using regular commands like "git" works fine on Windows and Linux > without the shell=True parameter. > > Perhaps it would be a good thing to emulate the builtin programs > in python by some shell=True replacement (emulate_shell=True?) > to match the normal user expectations without using the shell? > > Cheers - Chris > > -- > Christian Tismer-Sperling :^) tismer at stackless.com > > Software Consulting : http://www.stackless.com/ > Karl-Liebknecht-Str. > 121 : https://github.com/PySide > 14482 Potsdam : GPG key -> 0xFB7BEE0E > phone +49 173 24 18 776 fax +49 > (30) 700143-0023 > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > -- > --Guido van Rossum (python.org/~guido) -- Christian Tismer-Sperling :^) tismer at stackless.com Software Consulting : http://www.stackless.com/ Karl-Liebknecht-Str. 121 : https://github.com/PySide 14482 Potsdam : GPG key -> 0xFB7BEE0E phone +49 173 24 18 776 fax +49 (30) 700143-0023 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: OpenPGP digital signature URL: From tismer at stackless.com Sun Jan 7 13:54:16 2018 From: tismer at stackless.com (Christian Tismer) Date: Sun, 7 Jan 2018 19:54:16 +0100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: By "normal user expectations" I meant the behavior when the builtin commands were normal programs. Using "shell=True" is everywhere recommended to avoid, and I believe we could avoid it by giving them replacements for build-ins. But I don't care if the shell escaping is correct. And that is not trivial, either. On 07.01.18 18:22, Guido van Rossum wrote: > On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer > wrote: > > As a side note: In most cases where shell=True is found, people > seem to need evaluation of the PATH variable. To my understanding, > > >>> from subprocess import call > >>> call(("ls",)) > > works in Linux, but (with dir) not in Windows. But that is misleading > because "dir" is a builtin command but "ls" is not. The same holds for > "del" (Windows) and "rm" (Linux). > > So I thought that using shell=True was a good Thing on windows, > but actually it is the start of all evil. > Using regular commands like "git" works fine on Windows and Linux > without the shell=True parameter. > > Perhaps it would be a good thing to emulate the builtin programs > in python by some shell=True replacement (emulate_shell=True?) > to match the normal user expectations without using the shell? > > > That feels like a terrible idea to me. How do you define "normal user > expectations" here? If people want shell builtins they should just use > shell=True. (Also note IIUC there are several quite different shells > commonly used on Windows, e.g. PowerShell.) 
> > -- > --Guido van Rossum (python.org/~guido ) -- Christian Tismer-Sperling :^) tismer at stackless.com Software Consulting : http://www.stackless.com/ Karl-Liebknecht-Str. 121 : https://github.com/PySide 14482 Potsdam : GPG key -> 0xFB7BEE0E phone +49 173 24 18 776 fax +49 (30) 700143-0023 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: OpenPGP digital signature URL: From tismer at stackless.com Sun Jan 7 14:38:25 2018 From: tismer at stackless.com (Christian Tismer) Date: Sun, 7 Jan 2018 20:38:25 +0100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> Ok, I thought only about Windows where people often use shell=True. I did not see that as a Linux problem, too. Not meant as a proposal, just loud thinking... :-) But as said, the incomplete escaping is a complete mess. Ciao -- Chris On 07.01.18 19:54, Christian Tismer wrote: > By "normal user expectations" I meant the behavior when the builtin commands > were normal programs. > > Using "shell=True" is everywhere recommended to avoid, and I believe > we could avoid it by giving them replacements for build-ins. > > But I don't care if the shell escaping is correct. And that is not > trivial, either. > > On 07.01.18 18:22, Guido van Rossum wrote: >> On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer > > wrote: >> >> As a side note: In most cases where shell=True is found, people >> seem to need evaluation of the PATH variable. To my understanding, >> >> >>> from subprocess import call >> >>> call(("ls",)) >> >> works in Linux, but (with dir) not in Windows. But that is misleading >> because "dir" is a builtin command but "ls" is not. The same holds for >> "del" (Windows) and "rm" (Linux). 
>> >> So I thought that using shell=True was a good Thing on windows, >> but actually it is the start of all evil. >> Using regular commands like "git" works fine on Windows and Linux >> without the shell=True parameter. >> >> Perhaps it would be a good thing to emulate the builtin programs >> in python by some shell=True replacement (emulate_shell=True?) >> to match the normal user expectations without using the shell? >> >> >> That feels like a terrible idea to me. How do you define "normal user >> expectations" here? If people want shell builtins they should just use >> shell=True. (Also note IIUC there are several quite different shells >> commonly used on Windows, e.g. PowerShell.) >> >> -- >> --Guido van Rossum (python.org/~guido ) > > -- Christian Tismer-Sperling :^) tismer at stackless.com Software Consulting : http://www.stackless.com/ Karl-Liebknecht-Str. 121 : https://github.com/PySide 14482 Potsdam : GPG key -> 0xFB7BEE0E phone +49 173 24 18 776 fax +49 (30) 700143-0023 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: OpenPGP digital signature URL: From greg at krypto.org Sun Jan 7 15:30:32 2018 From: greg at krypto.org (Gregory P. Smith) Date: Sun, 07 Jan 2018 20:30:32 +0000 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> Message-ID: the best way to improve shell escaping on windows is to send a PR against the list2cmdline code that escapes everything you believe it should when running on windows. With hyperlinks to the relevant msdn info about what might need escaping. On Sun, Jan 7, 2018 at 11:38 AM Christian Tismer wrote: > Ok, I thought only about Windows where people often use shell=True. > I did not see that as a Linux problem, too. 
> > Not meant as a proposal, just loud thinking... :-) > > But as said, the incomplete escaping is a complete mess. > > Ciao -- Chris > > On 07.01.18 19:54, Christian Tismer wrote: > > By "normal user expectations" I meant the behavior when the builtin > commands > > were normal programs. > > > > Using "shell=True" is everywhere recommended to avoid, and I believe > > we could avoid it by giving them replacements for build-ins. > > > > But I don't care if the shell escaping is correct. And that is not > > trivial, either. > > > > On 07.01.18 18:22, Guido van Rossum wrote: > >> On Sun, Jan 7, 2018 at 8:17 AM, Christian Tismer >> > wrote: > >> > >> As a side note: In most cases where shell=True is found, people > >> seem to need evaluation of the PATH variable. To my understanding, > >> > >> >>> from subprocess import call > >> >>> call(("ls",)) > >> > >> works in Linux, but (with dir) not in Windows. But that is > misleading > >> because "dir" is a builtin command but "ls" is not. The same holds > for > >> "del" (Windows) and "rm" (Linux). > >> > >> So I thought that using shell=True was a good Thing on windows, > >> but actually it is the start of all evil. > >> Using regular commands like "git" works fine on Windows and Linux > >> without the shell=True parameter. > >> > >> Perhaps it would be a good thing to emulate the builtin programs > >> in python by some shell=True replacement (emulate_shell=True?) > >> to match the normal user expectations without using the shell? > >> > >> > >> That feels like a terrible idea to me. How do you define "normal user > >> expectations" here? If people want shell builtins they should just use > >> shell=True. (Also note IIUC there are several quite different shells > >> commonly used on Windows, e.g. PowerShell.) > >> > >> -- > >> --Guido van Rossum (python.org/~guido ) > > > > > > > -- > Christian Tismer-Sperling :^) tismer at stackless.com > Software Consulting : http://www.stackless.com/ > Karl-Liebknecht-Str. 
121 : https://github.com/PySide > 14482 Potsdam : GPG key -> 0xFB7BEE0E > phone +49 173 24 18 776 fax +49 (30) 700143-0023 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Jan 7 15:59:26 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Jan 2018 12:59:26 -0800 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> Message-ID: On Sun, Jan 7, 2018 at 12:30 PM, Gregory P. Smith wrote: > the best way to improve shell escaping on windows is to send a PR against > the list2cmdline code that escapes everything you believe it should when > running on windows. With hyperlinks to the relevant msdn info about what > might need escaping. > Agreed. FWIW the call to list2cmdline seems to compound the problem, since it just takes args and puts double quotes around it, mostly undoing the work of list2cmdline. For example if I use (args=['a', 'b c'], shell=True) I think list2cmdline turns that to args='a "b c"', and then the format() expression constructs the command:

    cmd.exe /c "a "b c""

I really have no idea what that means on Windows (and no quick access to a Windows box to try it) but on Windows that would create *two* arguments, the first one being 'a b' and the second one 'c'. At this point I can understand that Christian recommends against shell=True -- it's totally messed up! But the fix should really be to fix this, not inventing a new feature. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tismer at stackless.com Sun Jan 7 16:19:20 2018 From: tismer at stackless.com (Christian Tismer) Date: Sun, 7 Jan 2018 22:19:20 +0100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> Message-ID: <90a14f7e-930c-1eb3-502c-d49f3a3346a6@stackless.com> Ok, then I'm happy to improve the escaping! I was confused because I could not understand that nobody than me should have run into this problem before. There are many special cases. I'll try my very best :) Cheers -- Chris On 07.01.18 21:59, Guido van Rossum wrote: > On Sun, Jan 7, 2018 at 12:30 PM, Gregory P. Smith > wrote: > > the best way to improve shell escaping on windows is to send a PR > against the list2cmdline code that escapes everything you believe it > should when running on windows. With hyperlinks to the relevant msdn > info about what might need escaping. > > > Agreed. FWIW the call to list2cmdline seems to compound the problem, > since it just takes args and puts double quotes around it, mostly > undoing the work of list2cmdline. For example if I use (args=['a', 'b > c'], shell=True) I think list2cmdline turns that to args='a "b c"', and > then the format() expression constructs the command: > >     cmd.exe /c "a "b c"" > > I really have no idea what that means on Windows (and no quick access to > a Windows box to try it) but on Windows that would create *two* > arguments, the first one being 'a b' and the second one 'c'. > > At this point I can understand that Christian recommends against > shell=True -- it's totally messed up! But the fix should really be to > fix this, not inventing a new feature.
> > -- > --Guido van Rossum (python.org/~guido) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/tismer%40stackless.com > -- Christian Tismer :^) tismer at stackless.com Software Consulting : http://www.stackless.com/ Karl-Liebknecht-Str. 121 : https://github.com/PySide 14482 Potsdam : GPG key -> 0xFB7BEE0E phone +49 173 24 18 776 fax +49 (30) 700143-0023 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 522 bytes Desc: OpenPGP digital signature URL: From steve.dower at python.org Sun Jan 7 19:50:55 2018 From: steve.dower at python.org (Steve Dower) Date: Mon, 8 Jan 2018 11:50:55 +1100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> Message-ID: Quoting the /c and /k values to cmd.exe is... complicated, to say the least. I struggle to get it right for a single example, let alone generalising it. The /s option also has an impact -- sometimes it helps you avoid double-escaping everything, but not always. Writing complex shell=True commands to a temporary batch file and executing that is more reliable wrt quoting, though now you'd need somewhere writable and executable on disk, which is becoming hard to come by. Considering there is no cross-platform compatibility here anyway, I don't think it's that bad an option to let users do their own escaping, especially since those who are successfully using this feature already do. Cheers, Steve Top-posted from my Windows phone From: Guido van Rossum Sent: Monday, January 8, 2018 8:02 To: Gregory P.
Smith Cc: Christian Tismer; Python-Dev Subject: Re: [Python-Dev] subprocess not escaping "^" on Windows On Sun, Jan 7, 2018 at 12:30 PM, Gregory P. Smith wrote: the best way to improve shell escaping on windows is to send a PR against the list2cmdline code that escapes everything you believe it should when running on windows. With hyperlinks to the relevant msdn info about what might need escaping. Agreed. FWIW the call to list2cmdline seems to compound the problem, since it just takes args and puts double quotes around it, mostly undoing the work of list2cmdline. For example if I use (args=['a', 'b c'], shell=True) I think list2cmdline turns that to args='a "b c"', and then the format() expression constructs the command:

    cmd.exe /c "a "b c""

I really have no idea what that means on Windows (and no quick access to a Windows box to try it) but on Windows that would create *two* arguments, the first one being 'a b' and the second one 'c'. At this point I can understand that Christian recommends against shell=True -- it's totally messed up! But the fix should really be to fix this, not inventing a new feature. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From random832 at fastmail.com Sun Jan 7 23:47:56 2018 From: random832 at fastmail.com (Random832) Date: Sun, 07 Jan 2018 23:47:56 -0500 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: <3zFGw42btdzFrHR@mail.python.org> References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> <674c1c31-d95f-5e64-d8ae-eacc0e3a90d2@stackless.com> <3zFGw42btdzFrHR@mail.python.org> Message-ID: <1515386876.3107144.1227512296.36522324@webmail.messagingengine.com> On Sun, Jan 7, 2018, at 19:50, Steve Dower wrote: > Considering there is no cross-platform compatibility here anyway, I > don't think it's that bad an option to let users do their own escaping, > especially since those who are successfully using this feature already > do. I don't really think we should give up on cross-platform compatibility that easily. There are a number of constructs supported with the same syntax by both cmd and unix shells (pipes and redirection, mainly) that people may want to use. From njs at pobox.com Mon Jan 8 02:28:56 2018 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 7 Jan 2018 23:28:56 -0800 Subject: [Python-Dev] PEP 568: how we could extend PEP 567 to handle generator contexts Message-ID: See below for a first pass version of PEP 568, which builds on PEP 567 to add context isolation for generators. Basically, as far as features go: PEP 567 + PEP 568 = PEP 550 The actual API details are all different though. To reiterate, *this is not currently a proposal for adding to Python*. It probably will become one for 3.8, because Yury and I still think this is a good idea. But for *right now*, it's just a hypothetical "what if?" to help guide the PEP 567 discussion, because if we want to avoid accidentally ruling out the option of making an extension like this later, it helps to know how such an extension would work.
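As a rough mental model for the design that follows, the "stack of mappings where writes land in the topmost one" behavior can be sketched with a plain ``collections.ChainMap`` of dicts. This toy is purely illustrative, not the proposed implementation:

```python
from collections import ChainMap

# Toy model: the caller's context is one mapping, and a running generator
# pushes a fresh mapping on top of it.  ChainMap sends all writes to the
# first mapping, so the generator's changes shadow -- rather than
# mutate -- the caller's values.
caller_ctx = {"request_id": "outer"}
generator_ctx = {}
stack = ChainMap(generator_ctx, caller_ctx)

print(stack["request_id"])       # -> outer (lookup falls through to caller)
stack["request_id"] = "inner"    # write lands in the generator's own mapping
print(stack["request_id"])       # -> inner
print(caller_ctx["request_id"])  # -> outer (caller's context untouched)
```

When the generator's mapping is popped off the stack, the caller automatically sees its own values again, which is the isolation property the PEP is after.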
There's a rendered version at: https://www.python.org/dev/peps/pep-0568 (very slightly out of date compared to the version pasted below)

-n

------

PEP: 568
Title: Generator-sensitivity for Context Variables
Author: Nathaniel J. Smith
Status: Deferred
Type: Standards Track
Content-Type: text/x-rst
Created: 04-Jan-2018
Python-Version: 3.8
Post-History: 2018-01-07

Abstract
========

Context variables provide a generic mechanism for tracking dynamic, context-local state, similar to thread-local storage but generalized to cope with other kinds of thread-like contexts, such as asyncio Tasks. PEP 550 proposed a mechanism for context-local state that was also sensitive to generator context, but this was pretty complicated, so the BDFL requested it be simplified. The result was PEP 567, which is targeted for inclusion in 3.7. This PEP then extends PEP 567's machinery to add generator context sensitivity.

This PEP is starting out in the "deferred" status, because there isn't enough time to give it proper consideration before the 3.7 feature freeze. The only goal *right now* is to understand what would be required to add generator context sensitivity in 3.8, so that we can avoid shipping something in 3.7 that would rule it out by accident. (Ruling it out on purpose can wait until 3.8 ;-).)

Rationale
=========

[Currently the point of this PEP is just to understand *how* this would work, with discussion of *whether* it's a good idea deferred until after the 3.7 feature freeze. So rationale is TBD.]

High-level summary
==================

Instead of holding a single ``Context``, the threadstate now holds a ``ChainMap`` of ``Context``\s. ``ContextVar.get`` and ``ContextVar.set`` are backed by the ``ChainMap``.
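The stdlib ``collections.ChainMap`` already has exactly the lookup/mutation split this summary relies on -- reads search the maps in order, writes always land in the first map:

```python
from collections import ChainMap

caller = {"var": "caller value"}
generator_local = {}
stack = ChainMap(generator_local, caller)  # top of the stack comes first

print(stack["var"])        # lookup falls through to the caller's map

stack["var"] = "shadowed"  # writes go only to the top (generator) map
print(stack["var"])        # the top map now shadows the caller's entry
print(caller["var"])       # the caller's value is untouched
```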
Generators and async generators each have an associated ``Context`` that they push onto the ``ChainMap`` while they're running to isolate their context-local changes from their callers, though this can be overridden in cases like ``@contextlib.contextmanager`` where "leaking" context changes from the generator into its caller is desirable.

Specification
=============

Review of PEP 567
-----------------

Let's start by reviewing how PEP 567 works, and then in the next section we'll describe the differences.

In PEP 567, a ``Context`` is a ``Mapping`` from ``ContextVar`` objects to arbitrary values. In our pseudo-code here we'll pretend that it uses a ``dict`` for backing storage. (The real implementation uses a HAMT, which is semantically equivalent to a ``dict`` but with different performance trade-offs.)::

    class Context(collections.abc.Mapping):
        def __init__(self):
            self._data = {}
            self._in_use = False

        def __getitem__(self, key):
            return self._data[key]

        def __iter__(self):
            return iter(self._data)

        def __len__(self):
            return len(self._data)

At any given moment, the threadstate holds a current ``Context`` (initialized to an empty ``Context`` when the threadstate is created); we can use ``Context.run`` to temporarily switch the current ``Context``::

    # Context.run
    def run(self, fn, *args, **kwargs):
        if self._in_use:
            raise RuntimeError("Context already in use")
        tstate = get_thread_state()
        old_context = tstate.current_context
        tstate.current_context = self
        self._in_use = True
        try:
            return fn(*args, **kwargs)
        finally:
            tstate.current_context = old_context
            self._in_use = False

We can fetch a shallow copy of the current ``Context`` by calling ``copy_context``; this is commonly used when spawning a new task, so that the child task can inherit context from its parent::

    def copy_context():
        tstate = get_thread_state()
        new_context = Context()
        new_context._data = dict(tstate.current_context)
        return new_context

In practice, what end users generally work with is ``ContextVar`` objects,
which also provide the only way to mutate a ``Context``. They work with a utility class ``Token``, which can be used to restore a ``ContextVar`` to its previous value::

    class Token:
        MISSING = sentinel_value()

        # Note: constructor is private
        def __init__(self, context, var, old_value):
            self._context = context
            self.var = var
            self.old_value = old_value

        # XX: PEP 567 currently makes this a method on ContextVar, but
        # I'm going to propose it switch to this API because it's simpler.
        def reset(self):
            # XX: should we allow token reuse?
            # XX: should we allow tokens to be used if the saved
            # context is no longer active?
            if self.old_value is self.MISSING:
                del self._context._data[self.var]
            else:
                self._context._data[self.var] = self.old_value

    # XX: the handling of defaults here uses the simplified proposal from
    # https://mail.python.org/pipermail/python-dev/2018-January/151596.html
    # This can be updated to whatever we settle on, it was just less
    # typing this way :-)
    class ContextVar:
        def __init__(self, name, *, default=None):
            self.name = name
            self.default = default

        def get(self):
            context = get_thread_state().current_context
            return context.get(self, self.default)

        def set(self, new_value):
            context = get_thread_state().current_context
            token = Token(context, self, context.get(self, Token.MISSING))
            context._data[self] = new_value
            return token

Changes from PEP 567 to this PEP
--------------------------------

In general, ``Context`` remains the same. However, now instead of holding a single ``Context`` object, the threadstate stores a stack of them. This stack acts just like a ``collections.ChainMap``, so we'll use that in our pseudocode.
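Before describing the changes: the PEP 567 machinery reviewed above shipped as the ``contextvars`` module in Python 3.7, so its baseline behaviour can be checked directly:

```python
import contextvars

var = contextvars.ContextVar("var", default="outer")

def child():
    # set() mutates the Context that run() installed, returning a Token
    # that could be used to restore the previous value
    var.set("inner")
    return var.get()

ctx = contextvars.copy_context()   # shallow copy of the current Context
print(ctx.run(child))  # inner -- the change landed in the copy
print(var.get())       # outer -- the caller's context is untouched
```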
``Context.run`` then becomes::

    # Context.run
    def run(self, fn, *args, **kwargs):
        if self._in_use:
            raise RuntimeError("Context already in use")
        tstate = get_thread_state()
        old_context_stack = tstate.current_context_stack
        tstate.current_context_stack = ChainMap([self])   # changed
        self._in_use = True
        try:
            return fn(*args, **kwargs)
        finally:
            tstate.current_context_stack = old_context_stack
            self._in_use = False

Aside from some updated variable names (e.g., ``tstate.current_context`` → ``tstate.current_context_stack``), the only change here is on the marked line, which now wraps the context in a ``ChainMap`` before stashing it in the threadstate.

We also add a ``Context.push`` method, which is almost exactly like ``Context.run``, except that it temporarily pushes the ``Context`` onto the existing stack, instead of temporarily replacing the whole stack::

    # Context.push
    def push(self, fn, *args, **kwargs):
        if self._in_use:
            raise RuntimeError("Context already in use")
        tstate = get_thread_state()
        tstate.current_context_stack.maps.insert(0, self)   # different from run
        self._in_use = True
        try:
            return fn(*args, **kwargs)
        finally:
            tstate.current_context_stack.maps.pop(0)   # different from run
            self._in_use = False

In most cases, we don't expect ``push`` to be used directly; instead, it will be used implicitly by generators. Specifically, every generator object and async generator object gains a new attribute ``.context``. When an (async) generator object is created, this attribute is initialized to an empty ``Context``, like: ``self.context = Context()``. This is a mutable attribute; it can be changed by user code. But it has type ``Optional[Context]`` and this is enforced: trying to set it to anything that isn't a ``Context`` object or ``None`` will raise an error.
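For contrast, under PEP 567 alone (the behaviour that shipped in 3.7), a generator has no context of its own, so a ``set()`` inside it is immediately visible to the caller -- exactly what the automatic push described here would prevent by default:

```python
import contextvars

prec = contextvars.ContextVar("prec", default=28)

def adjusting_gen():
    # With PEP 567 alone there is no isolation: this set() mutates the
    # caller's context, not a generator-local one.
    prec.set(2)
    yield

g = adjusting_gen()
next(g)
print(prec.get())  # 2 -- the generator's change leaked out to the caller
```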
Whenever we enter a generator via ``__next__``, ``send``, ``throw``, or ``close``, or enter an async generator by calling one of those methods on its ``__anext__``, ``asend``, ``athrow``, or ``aclose`` coroutines, then its ``.context`` attribute is checked, and if non-``None``, is automatically pushed::

    # GeneratorType.__next__
    def __next__(self):
        if self.context is not None:
            return self.context.push(self.__real_next__)
        else:
            return self.__real_next__()

While we don't expect people to use ``Context.push`` often, making it a public API preserves the principle that a generator can always be rewritten as an explicit iterator class with equivalent semantics.

Also, we modify ``contextlib.(async)contextmanager`` to always set its (async) generator objects' ``.context`` attribute to ``None``::

    # contextlib._GeneratorContextManagerBase.__init__
    def __init__(self, func, args, kwds):
        self.gen = func(*args, **kwds)
        self.gen.context = None   # added
        ...

This makes sure that code like this continues to work as expected::

    @contextmanager
    def decimal_precision(prec):
        with decimal.localcontext() as ctx:
            ctx.prec = prec
            yield

    with decimal_precision(2):
        ...

The general idea here is that by default, every generator object gets its own local context, but if users want to explicitly get some other behavior then they can do that. Otherwise, things mostly work as before, except that we go through and swap everything to use the threadstate ``ChainMap`` instead of the threadstate ``Context``. In full detail:

The ``copy_context`` function now returns a flattened copy of the "effective" context. (As an optimization, the implementation might choose to do this flattening lazily, but if so this will be made invisible to the user.)
Compared to our previous implementation above, the only change here is that ``tstate.current_context`` has been replaced with ``tstate.current_context_stack``::

    def copy_context() -> Context:
        tstate = get_thread_state()
        new_context = Context()
        new_context._data = dict(tstate.current_context_stack)
        return new_context

``Token`` is unchanged, and the changes to ``ContextVar.get`` are trivial::

    # ContextVar.get
    def get(self):
        context_stack = get_thread_state().current_context_stack
        return context_stack.get(self, self.default)

``ContextVar.set`` is a little more interesting: instead of going through the ``ChainMap`` machinery like everything else, it always mutates the top ``Context`` in the stack, and -- crucially! -- sets up the returned ``Token`` to restore *its* state later. This allows us to avoid accidentally "promoting" values between different levels in the stack, as would happen if we did ``old = var.get(); ...; var.set(old)``::

    # ContextVar.set
    def set(self, new_value):
        top_context = get_thread_state().current_context_stack.maps[0]
        # Note: top_context.get, NOT current_context_stack.get
        token = Token(top_context, self, top_context.get(self, Token.MISSING))
        top_context._data[self] = new_value
        return token

And finally, to allow for introspection of the full context stack, we provide a new function ``contextvars.get_context_stack``::

    def get_context_stack() -> List[Context]:
        return list(get_thread_state().current_context_stack.maps)

That's all.

Comparison to PEP 550
=====================

The main difference from PEP 550 is that it reified what we're calling "contexts" and "context stacks" as two different concrete types (``LocalContext`` and ``ExecutionContext`` respectively). This led to lots of confusion about what the differences were, and which object should be used in which places. This proposal simplifies things by only reifying the ``Context``, which is "just a dict", and makes the "context stack" an unnamed feature of the interpreter's runtime state --
though it is still possible to introspect it using ``get_context_stack``, for debugging and other purposes.

Implementation notes
====================

``Context`` will continue to use a HAMT-based mapping structure under the hood instead of ``dict``, since we expect that calls to ``copy_context`` are much more common than ``ContextVar.set``. In almost all cases, ``copy_context`` will find that there's only one ``Context`` in the stack (because it's rare for generators to spawn new tasks), and can simply re-use its HAMT directly, copy-on-write style; in other cases HAMTs are cheap to merge and if necessary this can be done lazily.

Rather than using an actual ``ChainMap`` object, we'll represent the context stack using some appropriate structure -- the most appropriate options are probably either a bare ``list`` with the "top" of the stack being the end of the list so we can use ``push``\/``pop``, or else an intrusive linked list (``PyThreadState`` → ``Context`` → ``Context`` → ...), with the "top" of the stack at the beginning of the list to allow efficient push/pop.

A critical optimization in PEP 567 is the caching of values inside ``ContextVar``. Switching from a single context to a context stack makes this a little bit more complicated, but not too much. Currently, we invalidate the cache whenever the threadstate's current ``Context`` changes (on thread switch, and when entering/exiting ``Context.run``). The simplest approach here would be to invalidate the cache whenever the stack changes (on thread switch, when entering/exiting ``Context.run``, and when entering/leaving ``Context.push``). The main effect of this is that iterating a generator will invalidate the cache. It seems unlikely that this will cause serious problems, but if it does, then I think it can be avoided with a cleverer cache key that recognizes that pushing and then popping a ``Context`` returns the threadstate to its previous state.
(Idea: store the cache key for a particular stack configuration in the topmost ``Context``.)

It seems unavoidable in this design that uncached ``get`` will be O(n), where n is the size of the context stack. However, n will generally be very small -- it's roughly the number of nested generators, so usually n=1, and it will be extremely rare to see n greater than, say, 5. At worst, n is bounded by the recursion limit. In addition, we can expect that in most cases of deep generator recursion, most of the ``Context``\s in the stack will be empty, and thus can be skipped extremely quickly during lookup. And for repeated lookups the caching mechanism will kick in. So it's probably possible to construct some extreme case where this causes performance problems, but ordinary code should be essentially unaffected.

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   indent-tabs-mode: nil
   coding: utf-8
   End:

-- 
Nathaniel J. Smith -- https://vorpus.org

From pablogsal at gmail.com Mon Jan 8 04:11:38 2018
From: pablogsal at gmail.com (Pablo Galindo Salgado)
Date: Mon, 08 Jan 2018 09:11:38 +0000
Subject: [Python-Dev] Best Python API for exposing posix_spawn
Message-ID:

Hi,

I'm currently working on exposing posix_spawn in the posix module (and by extension in the os module). You can find the initial implementation in this PR:

https://github.com/python/cpython/pull/5109

As pointed out by Gregory P. Smith, some changes are needed in the way the file_actions argument is passed from Python. For context, posix_spawn has the following declaration:

    int posix_spawn(pid_t *pid, const char *path,
                    const posix_spawn_file_actions_t *file_actions,
                    const posix_spawnattr_t *attrp,
                    char *const argv[], char *const envp[]);

Here, file_actions is an object that represents a list of file actions (open, close or dup2) that is populated using helper functions on the C API.

The question is: what is the best way to deal with this argument?
Following Gregory's comment on the PR I understand that he is proposing to have three objects in the os module representing each action and pass a sequence of these objects to the Python API. What I am not sure about this is that there is no previous example of such classes in the os module for other similar APIs and therefore I am not sure if there is a better approach.

Thank you very much for your time!

Pablo Galindo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yselivanov.ml at gmail.com Mon Jan 8 14:34:57 2018
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 8 Jan 2018 23:34:57 +0400
Subject: [Python-Dev] PEP 567 pre v3
Message-ID:

Hi,

Thanks to everybody participating in the PEP 567 discussion! I want to summarize a few topics to make sure that we are all on the same page (and maybe provoke more discussion).

1. Proposal: ContextVar has default set to None.

From the typing point of view that would mean that if a context variable is declared without an explicit default, its type would be Optional. E.g. say we have a hypothetical web framework that allows to access the current request object through a context variable:

    request_var: ContextVar[Optional[Request]] = \
        ContextVar('current_request')

When we need to get the current request object, we would write:

    request: Optional[Request] = request_var.get()

And we'd also need to explicitly handle when 'request' is set to None. Of course we could create request_var with its default set to some "InvalidRequest" object, but that would complicate things. It would be easier to just state that the framework always sets the current request and it's a bug if it's not set.

Therefore, in my opinion, it's better to keep the current behaviour: if a context variable was created without a default value, ContextVar.get() can raise a LookupError.

2. Context.__contains__, Context.__getitem__ and ContextVar.default

So if we keep the current PEP 567 behaviour w.r.t.
defaults, ContextVar.get() might return a different value from Context.get():

    v = ContextVar('v', default=42)
    ctx = contextvars.copy_context()

    ctx.get(v)  # returns None
    v.get()     # returns 42
    v in ctx    # returns False

I think this discrepancy is OK. Context is a mapping-like object and it reflects the contents of the underlying _ContextData mapping object. ContextVar.default is meant to be used only by ContextVar.get(). Context objects should not use it.

Maybe we can rename ContextVar.get() to ContextVar.lookup()? This would help to avoid potential confusion between Context.get() and ContextVar.get().

3. Proposal: Context.get() and __getitem__() should always return up to date values.

The issue with the current PEP 567 design is that PyThreadState points to a _ContextData object, and not to the current Context. The following code illustrates how this manifests in Python code:

    v = ContextVar('v')

    def foo():
        v.set(42)
        print(v.get(), ctx.get(v, 'missing'))

    ctx = Context()
    ctx.run(foo)

The above code will print "42 missing", because 'ctx' points to an outdated _ContextData.

This is easily fixable if we make PyThreadState to point to the current Context object (instead of it pointing to a _ContextData). This change will also make "contextvars.copy_context()" easier to understand--it will actually return a copy of the current context that the thread state points to.

Adding a private Context._in_use attribute would allow us to make sure that Context.run() cannot be simultaneously called in two OS threads. As Nathaniel points out, this will also simplify cache implementation in ContextVar.get(). So let's do this.

4. Add Context.copy().

I was actually going to suggest this addition myself. With the current PEP 567 design, Context.copy() can be implemented with "ctx.run(contextvars.copy_context)", but this is very cumbersome.

An example of when a copy() method could be useful is capturing the current context and executing a few functions with it using ThreadPoolExecutor.map().
Copying the Context object will ensure that every mapped function executes in its own context copy (i.e. isolated). So I'm +1 for this one. 5. PEP language. I agree that PEP is vague about some details and is incorrect in some places (like calling Context objects immutable, which is not really true, because .run() can modify them). I'll fix the language in v3 once I'm back home. Yury From brett at python.org Mon Jan 8 14:35:26 2018 From: brett at python.org (Brett Cannon) Date: Mon, 08 Jan 2018 19:35:26 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: On Mon, 8 Jan 2018 at 07:57 Pablo Galindo Salgado wrote: > Hi, > > I'm currently working on exposing posix_spawn in the posix module (and by > extension in the os module). You can find the initial implementation in > this PR: > > https://github.com/python/cpython/pull/5109 > > As pointed out by Gregory P. Smith, some changes are needed in the way the > file_actions arguments is passed from Python. For context, posix_spawn has > the following declaration: > > int posix_spawn(pid_t *pid, const char *path, > const posix_spawn_file_actions_t *file_actions, > const posix_spawnattr_t *attrp, > char *const argv[], char *const envp[]); > > Here, file_actions is an object that represents a list of file actions > (open, close or dup2) that is populated using helper functions on the C API. > > The question is: what is the best way to deal with this argument? > > Following Gregory's comment on the PR I understand that he is proposing to > have three objects in the os module representing each action and pass a > sequence of these objects to the Python API. What I am not sure about this > is that there is no previous example of such classes in the os module for > other similar APIs and therefore I am not sure if there is a better > approach. > > Thanks you very much for your time! > Any chance of using an enum? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Mon Jan 8 15:36:00 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 8 Jan 2018 22:36:00 +0200 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: 08.01.18 11:11, Pablo Galindo Salgado ????: > Following Gregory's comment on the PR I understand that he is proposing > to have three objects in the os module representing each action and pass > a sequence of these objects to the Python API. What I am not sure about > this is that there is no previous example of such classes in the os > module for other similar APIs and therefore I am not sure if there is a > better approach. I would pass a sequence like: [(os.close, 0), (os.open, 1, '/tmp/mylog', os.O_WRONLY, 0o700), (os.dup2, 1, 2), ] From eryksun at gmail.com Mon Jan 8 15:44:51 2018 From: eryksun at gmail.com (eryk sun) Date: Mon, 8 Jan 2018 20:44:51 +0000 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: On Sun, Jan 7, 2018 at 6:48 PM, Christian Tismer wrote: > That is true. > list2cmdline escapes partially, but on NT and Windows10, the "^" must > also be escaped, but is not. The "|" pipe symbol must also be escaped > by "^", as many others as well. > > The effect was that passing a rexexp as parameter to a windows program > gave me strange effects, and I recognized that "^" was missing. > > So I was asking for a coherent solution: > Escape things completely or omit "shell=True". > > Yes, there is a list of chars to escape, and it is Windows version > dependent. I can provide it if it makes sense. subprocess.list2cmdline is meant to help support cross-platform code, since Windows uses a command-line instead of an argv array. The command-line parsing rules used by VC++ (and CommandLineToArgvW) are the most common in practice. 
list2cmdline is intended for this set of applications. Otherwise pass args as a string instead of a list. In CMD we can quote part of a command line in double quotes to escape special characters. The quotes are preserved in the application command line. This can get complicated when we need to preserve literal quotes in the command line of an application that uses VC++ backslash escaping. CMD doesn't recognize backslash as an escape character, which gives rise to a quoting conflict between CMD and the application. Some applications support translating single quotes to double quotes in this case (e.g. schtasks.exe). Single quotes generally aren't used in CMD, except in a `for /f` loop, but this can be forced to use backquotes instead via `usebackq`. Quoting doesn't escape the percent character that's used for environment variables. In batch scripts percent can be escaped by doubling it, but not in /c commands. Some applications can translate a substitute character in this case, such as "~" (e.g. setx.exe). Otherwise, we can usually disrupt matching an existing variable by adding a "^" character after the first percent character. The "^" escape character gets consumed later on in parsing -- as long as it's not quoted (see the previous paragraph for complications). Nonetheless, "^" is a valid name character, so there's still a possibility of matching an environment variable (perhaps a malicious one). For example: C:\>python -c "print('"%^"time%')" %time% C:\>set "^"time=spam" C:\>python -c "print('"%^"time%')" spam Anyway, we're supposed to pass args as a string when using the shell in POSIX, so we may as well stay consistent with this in Windows. Practically no one wants the resulting behavior when passing a shell command as a list in POSIX. For example: >>> subprocess.call(['echo \\$0=$0 \\$1=$1', 'spam', 'eggs'], shell=True) $0=spam $1=eggs It's common to discourage using `shell=True` because it's considered insecure. 
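Passing the command as a single string behaves predictably on both platforms, and -- as noted earlier in the thread -- simple pipelines use the same syntax under POSIX sh and cmd.exe. A minimal sketch:

```python
import subprocess

# One string, one pipeline; both POSIX sh and cmd.exe accept this
# syntax, so the same shell=True call works on either platform.
result = subprocess.run("echo alpha | sort", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())  # alpha
```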
One of the reasons to use CMD in Windows is that it tries ShellExecuteEx if CreateProcess fails. ShellExecuteEx supports "App Paths" commands, file actions (open, edit, print), UAC elevation (via "runas" or if requested by the manifest), protocols (including "shell:"), and opening folders in Explorer. It isn't a scripting language, however, so it doesn't pose the same risk as using CMD. Calling ShellExecuteEx could be integrated in subprocess as a new Popen parameter, such as `winshell` or `shellex`. From steve.dower at python.org Mon Jan 8 16:26:26 2018 From: steve.dower at python.org (Steve Dower) Date: Tue, 9 Jan 2018 08:26:26 +1100 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: On 09Jan2018 0744, eryk sun wrote: > It's common to discourage using `shell=True` because it's considered > insecure. One of the reasons to use CMD in Windows is that it tries > ShellExecuteEx if CreateProcess fails. ShellExecuteEx supports "App > Paths" commands, file actions (open, edit, print), UAC elevation (via > "runas" or if requested by the manifest), protocols (including > "shell:"), and opening folders in Explorer. It isn't a scripting > language, however, so it doesn't pose the same risk as using CMD. > Calling ShellExecuteEx could be integrated in subprocess as a new > Popen parameter, such as `winshell` or `shellex`. This can also be used directly as os.startfile, the only downside being that you can't wait for the process to complete (but that's due to the underlying API, which may not end up starting a process but rather sending a message to an existing long-running one such as explorer.exe). I'd certainly recommend it for actions like "open this file with its default editor" or "browse to this web page with the default browser". 
Cheers, Steve From victor.stinner at gmail.com Mon Jan 8 17:35:40 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 8 Jan 2018 23:35:40 +0100 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: Le 8 janv. 2018 8:36 PM, "Yury Selivanov" a ?crit : 2. Context.__contains__, Context.__getitem__ and ContexVar.default So if we keep the current PEP 567 behaviour w.r.t. defaults, ContextVar.get() might return a different value from Context.get(): v = ContextVar('v', default=42) ctx = contextvars.copy_context() ctx.get(v) # returns None v.get() # returns 42 v in ctx # returns False I think this discrepancy is OK. Context is a mapping-like object and it reflects the contents of the underlying _ContextData mapping object. ctx[var] raises an exception but ctx.get(var) returns None in such case. My point is just that Context.get() behaves differently than dict.get(). If dict[key] raises, I expect that dict.get() raises too and that I have to write explicitely dict.get(default=None). I suggest to modify Context.get() to raise an exception or require to explicitely write ctx.get(var, default=None). ContextVar.default is meant to be used only by ContextVar.get(). Context objects should not use it. I now agree. The difference between ContextVar.get() and Context.get() is fine and can be explained. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Jan 8 18:02:50 2018 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 8 Jan 2018 15:02:50 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Mon, Jan 8, 2018 at 2:35 PM, Victor Stinner wrote: > ctx[var] raises an exception but ctx.get(var) returns None in such case. My > point is just that Context.get() behaves differently than dict.get(). If > dict[key] raises, I expect that dict.get() raises too and that I have to > write explicitely dict.get(default=None). But that's not how dict.get works? 
In [1]: d = {} In [2]: print(d.get(1)) None -n -- Nathaniel J. Smith -- https://vorpus.org From greg at krypto.org Mon Jan 8 18:05:50 2018 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 08 Jan 2018 23:05:50 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: On Mon, Jan 8, 2018 at 12:36 PM Serhiy Storchaka wrote: > 08.01.18 11:11, Pablo Galindo Salgado ????: > > Following Gregory's comment on the PR I understand that he is proposing > > to have three objects in the os module representing each action and pass > > a sequence of these objects to the Python API. What I am not sure about > > this is that there is no previous example of such classes in the os > > module for other similar APIs and therefore I am not sure if there is a > > better approach. > > I would pass a sequence like: > > [(os.close, 0), > (os.open, 1, '/tmp/mylog', os.O_WRONLY, 0o700), > (os.dup2, 1, 2), > ] > i agree with just a list of tuples, but i suggest creating namedtuple instances in the posix module for the purpose (one each for close, dup2, open) . Don't put a reference to a function in the tuple as Serhiy suggested as, while obvious what it means, it gives the wrong impression to the user: nothing is calling the Python functions. This is a posix API that takes a list of arguments for a specific set of system calls for _it_ to make for us in a specific order. -gps > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Jan 8 18:15:17 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 9 Jan 2018 00:15:17 +0100 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: Hum, now I'm confused. 
I was probably confused by ContextVar.get() differences with Context.get(). It's fine if it behaves with a dict. Victor Le 9 janv. 2018 12:02 AM, "Nathaniel Smith" a ?crit : > On Mon, Jan 8, 2018 at 2:35 PM, Victor Stinner > wrote: > > ctx[var] raises an exception but ctx.get(var) returns None in such case. > My > > point is just that Context.get() behaves differently than dict.get(). If > > dict[key] raises, I expect that dict.get() raises too and that I have to > > write explicitely dict.get(default=None). > > But that's not how dict.get works? > > In [1]: d = {} > > In [2]: print(d.get(1)) > None > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Jan 8 18:47:33 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 8 Jan 2018 15:47:33 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: I am +1 on everything Yury says here. On Mon, Jan 8, 2018 at 11:34 AM, Yury Selivanov wrote: > Hi, > > Thanks to everybody participating in the PEP 567 discussion! I want > to summarize a few topics to make sure that we are all on the same > page (and maybe provoke more discussion). > > > 1. Proposal: ContextVar has default set to None. > > From the typing point of view that would mean that if a context > variable is declared without an explicit default, its type would be > Optional. E.g. say we have a hypothetical web framework that allows > to access the current request object through a context variable: > > request_var: ContextVar[Optional[Request]] = \ > ContextVar('current_request') > > When we need to get the current request object, we would write: > > request: Optional[Request] = request_var.get() > > And we'd also need to explicitly handle when 'request' is set to None. > Of course we could create request_var with its default set to some > "InvalidRequest" object, but that would complicate things. 
It would > be easier to just state that the framework always sets the current > request and it's a bug if it's not set. > > Therefore, in my opinion, it's better to keep the current behaviour: > if a context variable was created without a default value, > ContextVar.get() can raise a LookupError. > > > 2. Context.__contains__, Context.__getitem__ and ContextVar.default > > So if we keep the current PEP 567 behaviour w.r.t. defaults, > ContextVar.get() might return a different value from Context.get(): > > v = ContextVar('v', default=42) > ctx = contextvars.copy_context() > > ctx.get(v) # returns None > v.get() # returns 42 > v in ctx # returns False > > I think this discrepancy is OK. Context is a mapping-like object and > it reflects the contents of the underlying _ContextData mapping > object. > > ContextVar.default is meant to be used only by ContextVar.get(). > Context objects should not use it. > > Maybe we can rename ContextVar.get() to ContextVar.lookup()? This > would help to avoid potential confusion between Context.get() and > ContextVar.get(). > > > 3. Proposal: Context.get() and __getitem__() should always return up > to date values. > > The issue with the current PEP 567 design is that PyThreadState points > to a _ContextData object, and not to the current Context. The > following code illustrates how this manifests in Python code: > > v = ContextVar('v') > > def foo(): > v.set(42) > print(v.get(), ctx.get(v, 'missing')) > > ctx = Context() > ctx.run(foo) > > The above code will print "42 missing", because 'ctx' points to an > outdated _ContextData. > > This is easily fixable if we make PyThreadState point to the > current Context object (instead of it pointing to a _ContextData). > This change will also make "contextvars.copy_context()" easier to > understand--it will actually return a copy of the current context that > the thread state points to.
> > Adding a private Context._in_use attribute would allow us to make sure > that Context.run() cannot be simultaneously called in two OS threads. > As Nathaniel points out, this will also simplify cache implementation > in ContextVar.get(). So let's do this. > > > 4. Add Context.copy(). > > I was actually going to suggest this addition myself. With the > current PEP 567 design, Context.copy() can be implemented with > "ctx.run(contextvars.copy_context)", but this is very cumbersome. > > An example of when a copy() method could be useful is capturing the > current context and executing a few functions with it using > ThreadPoolExecutor.map(). Copying the Context object will ensure that > every mapped function executes in its own context copy (i.e. > isolated). So I'm +1 for this one. > > > 5. PEP language. > > I agree that PEP is vague about some details and is incorrect in some > places (like calling Context objects immutable, which is not really > true, because .run() can modify them). I'll fix the language in v3 > once I'm back home. > > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Mon Jan 8 19:00:32 2018 From: random832 at fastmail.com (Random832) Date: Mon, 08 Jan 2018 19:00:32 -0500 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: <1515456032.7757.1228637496.2FF0B603@webmail.messagingengine.com> On Mon, Jan 8, 2018, at 18:05, Gregory P. Smith wrote: > i agree with just a list of tuples, but i suggest creating namedtuple > instances in the posix module for the purpose (one each for close, dup2, > open) . 
Don't put a reference to a function in the tuple as Serhiy > suggested as, while obvious what it means, it gives the wrong impression to > the user: nothing is calling the Python functions. This is a posix API > that takes a list of arguments for a specific set of system calls for _it_ > to make for us in a specific order. Instead of a sequence of functions to call, it'd be nice if a higher-level API could allow just passing in a mapping of file descriptor numbers to what they should point to in the new process, and the implementation figures out what sequence is necessary to get that result. And at that point we could just extend the subprocess API to allow redirection of file descriptors other than 0/1/2, and have an implementation of it in terms of posix_spawn. From greg at krypto.org Mon Jan 8 19:51:30 2018 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 09 Jan 2018 00:51:30 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: <1515456032.7757.1228637496.2FF0B603@webmail.messagingengine.com> References: <1515456032.7757.1228637496.2FF0B603@webmail.messagingengine.com> Message-ID: On Mon, Jan 8, 2018 at 4:03 PM Random832 wrote: > On Mon, Jan 8, 2018, at 18:05, Gregory P. Smith wrote: > > i agree with just a list of tuples, but i suggest creating namedtuple > > instances in the posix module for the purpose (one each for close, dup2, > > open) . Don't put a reference to a function in the tuple as Serhiy > > suggested as, while obvious what it means, it gives the wrong impression > to > > the user: nothing is calling the Python functions. This is a posix API > > that takes a list of arguments for a specific set of system calls for > _it_ > > to make for us in a specific order. 
> > Instead of a sequence of functions to call, it'd be nice if a higher-level > API could allow just passing in a mapping of file descriptor numbers to > what they should point to in the new process, and the implementation > figures out what sequence is necessary to get that result. > > And at that point we could just extend the subprocess API to allow > redirection of file descriptors other than 0/1/2, and have an > implementation of it in terms of posix_spawn. > sure, but high level APIs don't belong in the os/posix module. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Jan 8 20:05:26 2018 From: brett at python.org (Brett Cannon) Date: Tue, 09 Jan 2018 01:05:26 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: On Mon, 8 Jan 2018 at 15:06 Gregory P. Smith wrote: > On Mon, Jan 8, 2018 at 12:36 PM Serhiy Storchaka > wrote: > >> 08.01.18 11:11, Pablo Galindo Salgado wrote: >> > Following Gregory's comment on the PR I understand that he is proposing >> > to have three objects in the os module representing each action and pass >> > a sequence of these objects to the Python API. What I am not sure about >> > this is that there is no previous example of such classes in the os >> > module for other similar APIs and therefore I am not sure if there is a >> > better approach. >> >> I would pass a sequence like: >> >> [(os.close, 0), >> (os.open, 1, '/tmp/mylog', os.O_WRONLY, 0o700), >> (os.dup2, 1, 2), >> ] >> > > i agree with just a list of tuples, but i suggest creating namedtuple > instances in the posix module for the purpose (one each for close, dup2, > open). > Is a namedtuple really necessary for this versus a simple object? There is no backwards-compatibility here with an old tuple-based interface so supporting both tuples and named access doesn't seem necessary to me.
-Brett > Don't put a reference to a function in the tuple as Serhiy suggested as, > while obvious what it means, it gives the wrong impression to the user: > nothing is calling the Python functions. This is a posix API that takes a > list of arguments for a specific set of system calls for _it_ to make for > us in a specific order. > > -gps > > >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jan 8 22:22:15 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 9 Jan 2018 13:22:15 +1000 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On 9 January 2018 at 05:34, Yury Selivanov wrote: > Maybe we can rename ContextVar.get() to ContextVar.lookup()? This > would help to avoid potential confusion between Context.get() and > ContextVar.get(). I think this would also tie in nicely with the PEP 568 draft, where "ContextVar.lookup()" may end up scanning a chain of Context mappings before falling back on the given default value. That said, I do wonder if this may be a case where a dual API might be appropriate (ala dict.__getitem__ vs dict.get), such that you have: ContextVar.get(default=None) -> Optional[T] # Missing -> None ContextVar.lookup() -> T # Missing -> raise LookupError If you set a default on the ContextVar itself, they'd always be identical (since you'll never hit the "Missing" case), but they'd mimic the dict.__getitem__ vs dict.get split if no var level default was specified. 
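A toy model may help make that split concrete. The sketch below is purely illustrative -- it is not the real contextvars implementation, it ignores Context objects entirely, and `ToyContextVar` with its single-slot storage is invented just for this example:

```python
_MISSING = object()  # private sentinel: "no value stored anywhere"

class ToyContextVar:
    """Illustrative model of the proposed get()/lookup() pair."""

    def __init__(self, name, **kwargs):
        self.name = name
        self._has_default = "default" in kwargs
        self._default = kwargs.get("default")
        self._value = _MISSING  # stands in for the per-context storage

    def set(self, value):
        self._value = value

    def lookup(self):
        # dict.__getitem__ analogue: raise when there is no value and
        # no var-level default to fall back on.
        if self._value is not _MISSING:
            return self._value
        if self._has_default:
            return self._default
        raise LookupError(self.name)

    def get(self, default=None):
        # dict.get analogue: swallow the miss, use the call-site default.
        try:
            return self.lookup()
        except LookupError:
            return default

# With a var-level default, the two methods always agree:
v = ToyContextVar("v", default=42)
assert v.lookup() == 42 and v.get() == 42

# Without one, they split like dict.__getitem__ vs dict.get:
w = ToyContextVar("w")
assert w.get() is None
try:
    w.lookup()
except LookupError:
    pass
else:
    raise AssertionError("lookup() should have raised")
w.set("spam")
assert w.lookup() == "spam"
```

With a var-level default the two methods never diverge; without one, lookup() raises exactly where dict.__getitem__ would.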
The conservative option would be to start with only the `ContextVar.lookup` method, and then add `ContextVar.get` later if its absence proved sufficiently irritating. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Mon Jan 8 23:04:24 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 8 Jan 2018 20:04:24 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: When I +1'ed Yury's message I forgot about this issue. I actually prefer the current PEP 567 version -- .get() raises an error if there's no default on the ContextVar, and .get(None) returns None if there's no default. The idea here is that by far the most common use will be .get(), so it should be a short name. (In that sense it's similar to the Queue API.)
On Mon, Jan 8, 2018 at 7:22 PM, Nick Coghlan wrote: > On 9 January 2018 at 05:34, Yury Selivanov > wrote: > > Maybe we can rename ContextVar.get() to ContextVar.lookup()? This > > would help to avoid potential confusion between Context.get() and > > ContextVar.get(). > > I think this would also tie in nicely with the PEP 568 draft, where > "ContextVar.lookup()" may end up scanning a chain of Context mappings > before falling back on the given default value. > > That said, I do wonder if this may be a case where a dual API might be > appropriate (ala dict.__getitem__ vs dict.get), such that you have: > > ContextVar.get(default=None) -> Optional[T] # Missing -> None > ContextVar.lookup() -> T # Missing -> raise LookupError > > If you set a default on the ContextVar itself, they'd always be > identical (since you'll never hit the "Missing" case), but they'd > mimic the dict.__getitem__ vs dict.get split if no var level default > was specified. > > The conservative option would be to start with only the > `ContextVar.lookup` method, and then add `ContextVar.get` later if > it's absence proved sufficiently irritating. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Jan 9 02:02:07 2018 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 8 Jan 2018 23:02:07 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Mon, Jan 8, 2018 at 11:34 AM, Yury Selivanov wrote: > 1. Proposal: ContextVar has default set to None. 
> > From the typing point of view that would mean that if a context > variable is declared without an explicit default, its type would be > Optional. E.g. say we have a hypothetical web framework that allows > to access the current request object through a context variable: > > request_var: ContextVar[Optional[Request]] = \ > ContextVar('current_request') > > When we need to get the current request object, we would write: > > request: Optional[Request] = request_var.get() > > And we'd also need to explicitly handle when 'request' is set to None. > Of course we could create request_var with its default set to some > "InvalidRequest" object, but that would complicate things. It would > be easier to just state that the framework always sets the current > request and it's a bug if it's not set. > > Therefore, in my opinion, it's better to keep the current behaviour: > if a context variable was created without a default value, > ContextVar.get() can raise a LookupError. All the different behaviors here can work, so I don't want to make a huge deal about this. But the current behavior is bugging me, and I don't think anyone has brought up the reason why, so here goes :-). Right now, the set of valid states for a ContextVar are: it can hold any Python object, or it can be undefined. However, the only way it can be in the "undefined" state is in a new Context where it has never had a value; once it leaves the undefined state, it can never return to it. This makes me itch. It's very weird to have a mutable variable with a valid state that you can't reach by mutating it. I see two self-consistent ways to make me stop itching: (a) double-down on undefined as being part of ContextVar's domain, or (b) reduce the domain so that undefined is never a valid state. # Option 1 In the first approach, we conceptualize ContextVar as being a container that either holds a value or is empty (and then there's one of these containers for each context). 
We also want to be able to define an initial value that the container takes on when a new context materializes, because that's really convenient. And then after that we provide ways to get the value (if present), or control the value (either set it to a particular value or unset it). So something like: var1 = ContextVar("var1") # no initial value var2 = ContextVar("var2", initial_value="hello") with assert_raises(SomeError): var1.get() # get's default lets us give a different outcome in cases where it would otherwise raise assert var1.get(None) is None assert var2.get() == "hello" # If get() doesn't raise, then the argument is ignored assert var2.get(None) == "hello" # We can set to arbitrary values for var in [var1, var2]: var.set("new value") assert var.get() == "new value" # We can unset again, so get() will raise for var in [var1, var2]: var.unset() with assert_raises(SomeError): var.get() assert var.get(None) is None To fulfill all that, we need an implementation like: MISSING = make_sentinel() class ContextVar: def __init__(self, name, *, initial_value=MISSING): self.name = name self.initial_value = initial_value def set(self, value): if value is MISSING: raise TypeError current_context()._dict[self] = value # Token handling elided because it's orthogonal to this issue return Token(...) def unset(self): current_context()._dict[self] = MISSING # Token handling elided because it's orthogonal to this issue return Token(...) def get(self, default=_NOT_GIVEN): value = current_context().get(self, self.initial_value) if value is MISSING: if default is _NOT_GIVEN: raise ... else: return default else: return value Note that the implementation here is somewhat tricky and non-obvious. In particular, to preserve the illusion of a simple container with an optional initial value, we have to encode a logically undefined ContextVar as one that has Context[var] set to MISSING, and a missing entry in Context encodes the presence of the inital value. 
If we defined unset() as 'del current_context._dict[self]', then we'd have: var2.unset() assert var2.get() is None which would be very surprising to users who just want to think about ContextVars and ignore all that stuff about Contexts. This, in turn, means that we need to expose the MISSING sentinel in general, because anyone introspecting Context objects directly needs to know how to recognize this magic value to interpret things correctly. AFAICT this is the minimum complexity required to get a complete and internally-consistent set of operations for a ContextVar that's conceptualized as being a container that either holds an arbitrary value or is empty. # Option 2 The other complete and coherent conceptualization I see is to say that a ContextVar always holds a value. If we eliminate the "unset" state entirely, then there's no "missing unset method" -- there just isn't any concept of an unset value in the first place, so there's nothing to miss. This idea shows up in lots of types in Python, actually -- e.g. for any exception object, obj.__context__ is always defined. Its value might be None, but it has a value. In this approach, ContextVar's are similar. To fulfill all that, we need an implementation like: class ContextVar: # Or maybe it'd be better to make initial_value mandatory, like this? # def __init__(self, name, *, initial_value): def __init__(self, name, *, initial_value=None): self.name = name self.initial_value = initial_value def set(self, value): current_context()._dict[self] = value # Token handling elided because it's orthogonal to this issue return Token(...) def get(self): return current_context().get(self, self.initial_value) This is also a complete and internally consistent set of operations, but this time for a somewhat different way of conceptualizing ContextVar. 
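To see Option 2's "a ContextVar always has a value" property in isolation, here is a stripped-down, self-contained toy with a mandatory initial_value, where a single class-level dict stands in for the current Context (a sketch of the idea only, not the PEP's implementation; `OptionTwoVar` is an invented name):

```python
class OptionTwoVar:
    """Toy 'Option 2' variable: no undefined state, get() never raises."""

    _context = {}  # stand-in for current_context()._dict (one context only)

    def __init__(self, name, *, initial_value):
        self.name = name
        self.initial_value = initial_value

    def set(self, value):
        OptionTwoVar._context[self] = value

    def get(self):
        # Total over the variable's domain: a variable that was never
        # set() simply reports its initial value.
        return OptionTwoVar._context.get(self, self.initial_value)

request_var = OptionTwoVar("current_request", initial_value=None)
assert request_var.get() is None         # defined from the start, just None
request_var.set("GET /index.html")
assert request_var.get() == "GET /index.html"
```

There is no LookupError path at all here: the only states a variable can be in are "initial value" and "explicitly set value".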
Actually, the more I think about it, the more I think that if we take this approach and say that every ContextVar always has a value, it makes sense to make initial_value= a mandatory argument instead of defaulting it to None. Then the typing works too, right? Something like: ContextVar(name: str, *, initial_value: T) -> ContextVar[T] ContextVar.get() -> T ContextVar.set(T) -> Token ? And it's hardly a burden on users to type 'ContextVar("myvar", initial_value=None)' if that's what they want. Anyway... between these two options, I like Option 2 better because it's substantially simpler without (AFAICT) any meaningful reduction in usability. But I'd prefer either of them to the current PEP 567, which seems like an internally-contradictory hybrid of these ideas. It makes sense if you know how the code and Contexts work. But if I was talking to someone who wanted to ignore those details and just use a ContextVar, and they asked me for a one sentence summary of how it worked, I wouldn't know what to tell them. -n -- Nathaniel J. Smith -- https://vorpus.org From storchaka at gmail.com Tue Jan 9 02:07:00 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 9 Jan 2018 09:07:00 +0200 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: 09.01.18 05:31, Nick Coghlan wrote: > On 8 January 2018 at 19:11, Pablo Galindo Salgado wrote: >> Following Gregory's comment on the PR I understand that he is proposing to >> have three objects in the os module representing each action and pass a >> sequence of these objects to the Python API. What I am not sure about this >> is that there is no previous example of such classes in the os module for >> other similar APIs and therefore I am not sure if there is a better >> approach.
> > Probably the closest prior art would be the os.DirEntry objects used > for the items yielded from os.scandir - that is the same general idea > (a dedicated Python class to represent a C struct), just in the other > direction. > > As with DirEntry, I don't see any obvious value in making the new > objects iterable though - we should be able to just use named field > access in both the C and Python APIs. Do you suggest adding a class corresponding to posix_spawn_file_actions_t with methods corresponding to posix_spawn_file_* functions? class posix_spawn_file_actions: def __init__(self): ... def addclose(self, fildes): ... def addopen(self, fildes, path, oflag, mode): ... def adddup2(self, fildes, newfildes): ... def destroy(self): pass __del__ = destroy This maximally corresponds to the C API. But doesn't look Pythonic. From njs at pobox.com Tue Jan 9 02:18:01 2018 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 8 Jan 2018 23:18:01 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Thu, Jan 4, 2018 at 9:42 PM, Guido van Rossum wrote: > On Thu, Jan 4, 2018 at 7:58 PM, Nathaniel Smith wrote: >> This does make me think that I should write up a short PEP for >> extending PEP 567 to add context lookup, PEP 550 style: it can start >> out in Status: deferred and then we can debate it properly before 3.8, >> but at least having the roadmap written down now would make it easier >> to catch these details. (And it might also help address Paul's >> reasonable complaint about "unstated requirements".) > > Anything that will help us kill a 550-pound gorilla sounds good to me. :-) > > It might indeed be pretty short if we follow the lead of ChainMap (even > using a different API than MutableMapping to mutate it). Maybe > copy_context() would map to new_child()? Using ChainMap as a model we might > even avoid the confusion between Lo[gi]calContext and ExecutionContext which > was the nail in PEP 550's coffin.
The LC associated with a generator in PEP > 550 would be akin to a loose dict which can be pushed on top of a ChainMap > using cm = cm.new_child(). (Always taking for granted that instead of > an actual dict we'd use some specialized mutable object implementing the > Mapping protocol and a custom mutation protocol so it can maintain > ContextVar cache consistency.) The approach I took in PEP 568 is even simpler, I think. The PEP is a few pages long because I wanted to be exhaustive to make sure we weren't missing any details, but the tl;dr is: The ChainMap lives entirely inside the threadstate, so there's no need to create a LC/EC distinction -- users just see Contexts, or there's the one stack introspection API, get_context_stack(), which returns a List[Context]. Instead of messing with new_child, copy_context is just Context(dict(chain_map)) -- i.e., it creates a flattened copy of the current mapping. (If we used new_child, then we'd have to have a way to return a ChainMap, reintroducing the LC/EC mess. Plus it would allow for changes to the Contexts inside one ChainMap to "leak" into the child or vice-versa.) And Context push/pop is just push/pop on the threadstate's ChainMap's '.maps' attribute. -n -- Nathaniel J. Smith -- https://vorpus.org From storchaka at gmail.com Tue Jan 9 02:24:08 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 9 Jan 2018 09:24:08 +0200 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: 09.01.18 01:05, Gregory P. Smith wrote: > On Mon, Jan 8, 2018 at 12:36 PM Serhiy Storchaka > wrote: > > 08.01.18 11:11, Pablo Galindo Salgado wrote: > > Following Gregory's comment on the PR I understand that he is > proposing > > to have three objects in the os module representing each action > and pass > > a sequence of these objects to the Python API.
What I am not sure > about > > this is that there is no previous example of such classes in the os > > module for other similar APIs and therefore I am not sure if > there is a > > better approach. > > I would pass a sequence like: > > [(os.close, 0), > (os.open, 1, '/tmp/mylog', os.O_WRONLY, 0o700), > (os.dup2, 1, 2), > ] > > > i agree with just a list of tuples, but i suggest creating namedtuple > instances in the posix module for the purpose (one each for close, dup2, > open). Don't put a reference to a function in the tuple as Serhiy > suggested as, while obvious what it means, it gives the wrong impression > to the user: nothing is calling the Python functions. This is a posix > API that takes a list of arguments for a specific set of system calls > for _it_ to make for us in a specific order. Creating three new classes has a higher cost than creating three singletons. There are some advantages to using existing functions as tags. But this is not the only possible interface. If there is a single order of actions (first close, then open, finally dup2), actions can be specified as three keyword-only arguments taking sequences of integers or tuples: posix_spawn(..., close=[0], open=[(1, '/tmp/mylog', os.O_WRONLY, 0o700)], dup2=[(1, 2)]) But this perhaps cannot express all useful cases. From eryksun at gmail.com Tue Jan 9 03:10:56 2018 From: eryksun at gmail.com (eryk sun) Date: Tue, 9 Jan 2018 08:10:56 +0000 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: On Mon, Jan 8, 2018 at 9:26 PM, Steve Dower wrote: > On 09Jan2018 0744, eryk sun wrote: >> >> It's common to discourage using `shell=True` because it's considered >> insecure. One of the reasons to use CMD in Windows is that it tries >> ShellExecuteEx if CreateProcess fails.
>> ShellExecuteEx supports "App >> Paths" commands, file actions (open, edit, print), UAC elevation (via >> "runas" or if requested by the manifest), protocols (including >> "shell:"), and opening folders in Explorer. It isn't a scripting >> language, however, so it doesn't pose the same risk as using CMD. >> Calling ShellExecuteEx could be integrated in subprocess as a new >> Popen parameter, such as `winshell` or `shellex`. > > This can also be used directly as os.startfile, the only downside being that > you can't wait for the process to complete (but that's due to the underlying > API, which may not end up starting a process but rather sending a message to > an existing long-running one such as explorer.exe). I'd certainly recommend > it for actions like "open this file with its default editor" or "browse to > this web page with the default browser". Yes, I forgot to mention that os.startfile can work sometimes. But often one needs to pass command-line parameters. Also, os.startfile can't set a different working directory, nShow SW_* window state, or flags such as SEE_MASK_NO_CONSOLE (prevent allocating a new console). Rather than extend os.startfile, it seems more useful in general to wrap ShellExecuteEx in _winapi and extend subprocess. Then os.startfile can be reimplemented in terms of subprocess.Popen, like os.popen. From ncoghlan at gmail.com Tue Jan 9 03:29:21 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 9 Jan 2018 18:29:21 +1000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: On 9 January 2018 at 17:07, Serhiy Storchaka wrote: > 09.01.18 05:31, Nick Coghlan wrote: >> As with DirEntry, I don't see any obvious value in making the new >> objects iterable though - we should be able to just use named field >> access in both the C and Python APIs. > Do you suggest adding a class corresponding to posix_spawn_file_actions_t > with methods corresponding to posix_spawn_file_* functions?
Sorry, I should have said explicitly that I liked Greg's suggestion of modeling this as an iterable-of-objects at the Python layer - I was just agreeing with Brett that those should be objects with named attributes, rather than relying on tuples with a specific field order. That way a passed in list of actions would look something like one of the following: # Three distinct classes, "FileActions" namespace for disambiguation [os.FileActions.Close(0), os.FileActions.Open(1, '/tmp/mylog', os.O_WRONLY, 0o700), os.FileActions.Dup2(1, 2), ] # Single class, three distinct class constructor methods [os.FileAction.close(0), os.FileAction.open(1, '/tmp/mylog', os.O_WRONLY, 0o700), os.FileAction.dup2(1, 2), ] While my initial comment supported having 3 distinct classes, writing it out that way pushes me more towards having a single type where the constructor names match the relevant module function names. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Tue Jan 9 05:01:01 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 9 Jan 2018 11:01:01 +0100 Subject: [Python-Dev] Best Python API for exposing posix_spawn References: Message-ID: <20180109110101.0ac615eb@fsol> On Mon, 08 Jan 2018 09:11:38 +0000 Pablo Galindo Salgado wrote: > Hi, > > I'm currently working on exposing posix_spawn in the posix module (and by > extension in the os module). You can find the initial implementation in > this PR: > > https://github.com/python/cpython/pull/5109 > > As pointed out by Gregory P. Smith, some changes are needed in the way the > file_actions arguments is passed from Python. 
For context, posix_spawn has > the following declaration: > > int posix_spawn(pid_t *pid, const char *path, > const posix_spawn_file_actions_t *file_actions, > const posix_spawnattr_t *attrp, > char *const argv[], char *const envp[]); > > Here, file_actions is an object that represents a list of file actions > (open, close or dup2) that is populated using helper functions on the C API. > > The question is: what is the best way to deal with this argument? How about a list of tuples like: [(os.SPAWN_OPEN, 4, 'README.txt', os.O_RDONLY, 0), (os.SPAWN_CLOSE, 5), (os.SPAWN_DUP2, 3, 6), ] I don't expect this API to be invoked directly by user code so it doesn't have to be extremely pretty. Regards Antoine. From ncoghlan at gmail.com Tue Jan 9 05:41:48 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 9 Jan 2018 20:41:48 +1000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: <20180109110101.0ac615eb@fsol> References: <20180109110101.0ac615eb@fsol> Message-ID: On 9 January 2018 at 20:01, Antoine Pitrou wrote: > On Mon, 08 Jan 2018 09:11:38 +0000 > Pablo Galindo Salgado wrote: >> Hi, >> >> I'm currently working on exposing posix_spawn in the posix module (and by >> extension in the os module). You can find the initial implementation in >> this PR: >> >> https://github.com/python/cpython/pull/5109 >> >> As pointed out by Gregory P. Smith, some changes are needed in the way the >> file_actions arguments is passed from Python. For context, posix_spawn has >> the following declaration: >> >> int posix_spawn(pid_t *pid, const char *path, >> const posix_spawn_file_actions_t *file_actions, >> const posix_spawnattr_t *attrp, >> char *const argv[], char *const envp[]); >> >> Here, file_actions is an object that represents a list of file actions >> (open, close or dup2) that is populated using helper functions on the C API. >> >> The question is: what is the best way to deal with this argument? 
> > How about a list of tuples like: > [(os.SPAWN_OPEN, 4, 'README.txt', os.O_RDONLY, 0), > (os.SPAWN_CLOSE, 5), > (os.SPAWN_DUP2, 3, 6), > ] > > I don't expect this API to be invoked directly by user code so it > doesn't have to be extremely pretty. I'll note that one advantage of this approach is that it ties in well with how the C API is going to have to deal with it anyway: a switch statement dispatching on the first value, and then passing the remaining arguments to the corresponding posix_file_actions API. Wrapping it all up in a more Pythonic self-validating API would then be the responsibility of the subprocess module (in the standard library), or third party modules. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov at gmail.com Tue Jan 9 05:59:57 2018 From: yselivanov at gmail.com (Yury Selivanov) Date: Tue, 9 Jan 2018 14:59:57 +0400 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: > On Jan 9, 2018, at 11:18 AM, Nathaniel Smith wrote: > >> On Thu, Jan 4, 2018 at 9:42 PM, Guido van Rossum wrote: >>> On Thu, Jan 4, 2018 at 7:58 PM, Nathaniel Smith wrote: >>> This does make me think that I should write up a short PEP for >>> extending PEP 567 to add context lookup, PEP 550 style: it can start >>> out in Status: deferred and then we can debate it properly before 3.8, >>> but at least having the roadmap written down now would make it easier >>> to catch these details. (And it might also help address Paul's >>> reasonable complaint about "unstated requirements".) >> >> Anything that will help us kill a 550-pound gorilla sounds good to me. :-) >> >> It might indeed be pretty short if we follow the lead of ChainMap (even >> using a different API than MutableMapping to mutate it). Maybe >> copy_context() would map to new_child()? Using ChainMap as a model we might >> even avoid the confusion between Lo[gi]calContext and ExecutionContext which >> was the nail in PEP 550's coffin. 
The LC associated with a generator in PEP >> 550 would be akin to a loose dict which can be pushed on top of a ChainMap >> using cm = cm.new_child(). (Always taking for granted that instead of >> an actual dict we'd use some specialized mutable object implementing the >> Mapping protocol and a custom mutation protocol so it can maintain >> ContextVar cache consistency.) > > The approach I took in PEP 568 is even simpler, I think. The PEP is a > few pages long because I wanted to be exhaustive to make sure we > weren't missing any details, but the tl;dr is: The ChainMap lives > entirely inside the threadstate, so there's no need to create a LC/EC > distinction -- users just see Contexts, or there's the one stack > introspection API, get_context_stack(), which returns a List[Context]. > Instead of messing with new_child, copy_context is just > Context(dict(chain_map)) -- i.e., it creates a flattened copy of the > current mapping. (If we used new_child, then we'd have to have a way > to return a ChainMap, reintroducing the LC/EC mess. This sounds reasonable. Although keep in mind that merging hamt is still an expensive operation, so flattening shouldn't always be performed (this is covered in 550). I also wouldn't call LC/EC a "mess". Your pep just names things differently, but otherwise is entirely built on concepts and ideas introduced in pep 550. Yury From yselivanov.ml at gmail.com Tue Jan 9 06:41:07 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 9 Jan 2018 15:41:07 +0400 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Tue, Jan 9, 2018 at 11:02 AM, Nathaniel Smith wrote: > On Mon, Jan 8, 2018 at 11:34 AM, Yury Selivanov wrote: >> 1. Proposal: ContextVar has default set to None. >> >> From the typing point of view that would mean that if a context >> variable is declared without an explicit default, its type would be >> Optional. E.g. 
say we have a hypothetical web framework that allows >> to access the current request object through a context variable: >> >> request_var: ContextVar[Optional[Request]] = \ >> ContextVar('current_request') >> >> When we need to get the current request object, we would write: >> >> request: Optional[Request] = request_var.get() >> >> And we'd also need to explicitly handle when 'request' is set to None. >> Of course we could create request_var with its default set to some >> "InvalidRequest" object, but that would complicate things. It would >> be easier to just state that the framework always sets the current >> request and it's a bug if it's not set. >> >> Therefore, in my opinion, it's better to keep the current behaviour: >> if a context variable was created without a default value, >> ContextVar.get() can raise a LookupError. > > All the different behaviors here can work, so I don't want to make a > huge deal about this. But the current behavior is bugging me, and I > don't think anyone has brought up the reason why, so here goes :-). > > Right now, the set of valid states for a ContextVar are: it can hold > any Python object, or it can be undefined. However, the only way it > can be in the "undefined" state is in a new Context where it has never > had a value; once it leaves the undefined state, it can never return > to it. Is "undefined" a state when a context variable doesn't have a default and isn't yet set? If so, why can't it be returned back to the "undefined" state? That's why we have the 'reset' method: c = ContextVar('c') c.get() # LookupError t = c.set(42) c.get() # 42 c.reset(t) c.get() # LookupError I don't like how context variables are defined in Option 1 and Option 2. I view ContextVars as keys in some global context mapping--akin to Python variables. Similar to how we have a NameError for variables, we have a LookupError for context variables. When we write a variable name, Python looks it up in locals and globals. 
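To make that "context variables as keys in a context mapping" model concrete, here is a rough toy sketch of the semantics being discussed. This is illustrative only -- the real implementation uses an immutable HAMT-backed mapping and an implicit current context rather than an explicit one, and the class and method names below are mine, not the PEP's.

```python
# Toy model (NOT the real contextvars implementation): a variable is
# merely a key into a context mapping, get() raises LookupError when
# the key is absent, and reset() can return the variable to the
# "undefined" state.

_MISSING = object()


class ToyContext:
    """A context is just a mutable mapping from variables to values."""

    def __init__(self):
        self.data = {}


class ToyContextVar:
    def __init__(self, name):
        self.name = name

    def get(self, ctx, default=_MISSING):
        try:
            return ctx.data[self]
        except KeyError:
            if default is _MISSING:
                raise LookupError(self.name) from None
            return default

    def set(self, ctx, value):
        token = ctx.data.get(self, _MISSING)  # remember the old value
        ctx.data[self] = value
        return token

    def reset(self, ctx, token):
        if token is _MISSING:
            del ctx.data[self]  # back to "undefined": get() raises again
        else:
            ctx.data[self] = token


ctx = ToyContext()
c = ToyContextVar('c')
t = c.set(ctx, 42)
assert c.get(ctx) == 42
c.reset(ctx, t)                           # 'c' is undefined again
assert c.get(ctx, 'python') == 'python'   # get()'s default kicks in
```

Note how the token returned by set() is what lets reset() restore even the "undefined" state, which a plain default value could not express.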
When we call ContextVar.get(), Python will look up that context variable in the current Context. I don't think we should try to classify ContextVar objects as containers or something capable of holding a value on their own. Even when you have a "del some_var" statement, you are only guaranteed to remove the "some_var" name from the innermost scope. This is similar to what ContextVar.unset() will do in PEP 568, by removing the variable only from the head of the chain. So the sole purpose of ContextVar.default is to make ContextVar.get() convenient. Context objects don't know about ContextVar.default, and ContextVars don't know about values they are mapped to in some Context object. In any case, at this point I think that the best option is to simply drop the "default" parameter from the ContextVar constructor. This would leave us with only one default, in the ContextVar.get() method: c.get() # Will raise a LookupError if 'c' is not set c.get('python') # Will return 'python' if 'c' is not set I also now see how having two different 'default' values -- one defined when a ContextVar is created, and one passed to ContextVar.get() -- is confusing. But I'd be -1 on making all ContextVars have a None default (effectively have a "ContextVar.get(default=None)" signature). This would be a very loose semantics in my opinion. Yury
From victor.stinner at gmail.com Tue Jan 9 10:14:40 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 9 Jan 2018 16:14:40 +0100 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: 2018-01-09 12:41 GMT+01:00 Yury Selivanov : > But I'd be -1 on making all ContextVars have a None default > (effectively have a "ContextVar.get(default=None)" signature). This > would be a very loose semantics in my opinion. Why do you think that it's a loose semantics? For me ContextVar/Context are similar to Python namespaces and thread local storage.
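The thread-local half of that analogy can be checked directly with the stdlib: a plain threading.local attribute raises AttributeError until it is set in the current thread, and subclassing is the way to supply a default (a small sketch; the attribute names are illustrative):

```python
import threading

tls = threading.local()

# Reading an attribute that was never set in this thread fails, much
# like an unset context variable:
try:
    tls.request
except AttributeError:
    pass  # expected: nothing has been "declared" yet

tls.request = "GET /"     # declaring it *is* setting it
assert tls.request == "GET /"

# Subclassing with a class attribute is how thread-locals get a default:
class LocalWithDefault(threading.local):
    request = None

assert LocalWithDefault().request is None
```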
To "declare" a variable in a Python namespace, you have to set it: "global x" doesn't create a variable, only "x = None". It's not possible to define a thread local variable without specifying a "default" value either. Victor
From yselivanov.ml at gmail.com Tue Jan 9 10:41:28 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 09 Jan 2018 15:41:28 +0000 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: By default, threading.local raises an AttributeError (unless you subclass it). Similar to that and to NameErrors, I think it's a good idea for ContextVars to raise a LookupError if a variable was not explicitly set. Yury On Tue, Jan 9, 2018 at 7:15 PM Victor Stinner wrote: > 2018-01-09 12:41 GMT+01:00 Yury Selivanov : > > But I'd be -1 on making all ContextVars have a None default > > (effectively have a "ContextVar.get(default=None)" signature). This > > would be a very loose semantics in my opinion. > > Why do you think that it's a loose semantics? For me > ContextVar/Context are similar to Python namespaces and thread local > storage. > > To "declare" a variable in a Python namespace, you have to set it: > "global x" doesn't create a variable, only "x = None". > > It's not possible to define a thread local variable without specifying > a "default" value either. > > Victor > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From nad at python.org Tue Jan 9 10:48:01 2018 From: nad at python.org (Ned Deily) Date: Tue, 9 Jan 2018 10:48:01 -0500 Subject: [Python-Dev] [RELEASE] Python 3.7.0a4 is now available for testing Message-ID: Python 3.7.0a4 is the last of four planned alpha releases of Python 3.7, the next feature release of Python. During the alpha phase, Python 3.7 remains under heavy development: additional features will be added and existing features may be modified or deleted. Please keep in mind that this is a preview release and its use is not recommended for production environments.
The next preview release, 3.7.0b1, is planned for 2018-01-29. You can find Python 3.7.0a4 and more information here: https://www.python.org/downloads/release/python-370a4/ -- Ned Deily nad at python.org -- [] From victor.stinner at gmail.com Tue Jan 9 11:34:37 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 9 Jan 2018 17:34:37 +0100 Subject: [Python-Dev] [RELEASE] Python 3.7.0a4 is now available for testing In-Reply-To: References: Message-ID: Hi, Python 3.7.0a4 includes the implementation of the PEP 538 (C locale coercion) and PEP 540 (UTF-8 Mode). Please test this Python with various locales, especially with the POSIX ("C") locale! Note: The UTF-8 Mode has a known issue with the readline module, I see how to fix it (add new encode/decode functions which ignore the UTF-8 mode and really use the current locale encoding), but I didn't have time to fix it yet: https://bugs.python.org/issue29240#msg308217 (I skipped the test to repair the CI, until I can fix the bug.) Victor 2018-01-09 16:48 GMT+01:00 Ned Deily : > Python 3.7.0a4 is the last of four planned alpha releases of Python 3.7, > the next feature release of Python. During the alpha phase, Python 3.7 > remains under heavy development: additional features will be added > and existing features may be modified or deleted. Please keep in mind > that this is a preview release and its use is not recommended for > production environments. The next preview release, 3.7.0b1, is planned > for 2018-01-29. 
You can find Python 3.7.0a4 and more information here: > > https://www.python.org/downloads/release/python-370a4/ > > -- > Ned Deily > nad at python.org -- [] > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From brett at python.org Tue Jan 9 12:56:15 2018 From: brett at python.org (Brett Cannon) Date: Tue, 09 Jan 2018 17:56:15 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: <20180109110101.0ac615eb@fsol> Message-ID: On Tue, 9 Jan 2018 at 02:42 Nick Coghlan wrote: > On 9 January 2018 at 20:01, Antoine Pitrou wrote: > > On Mon, 08 Jan 2018 09:11:38 +0000 > > Pablo Galindo Salgado wrote: > >> Hi, > >> > >> I'm currently working on exposing posix_spawn in the posix module (and > by > >> extension in the os module). You can find the initial implementation in > >> this PR: > >> > >> https://github.com/python/cpython/pull/5109 > >> > >> As pointed out by Gregory P. Smith, some changes are needed in the way > the > >> file_actions arguments is passed from Python. For context, posix_spawn > has > >> the following declaration: > >> > >> int posix_spawn(pid_t *pid, const char *path, > >> const posix_spawn_file_actions_t *file_actions, > >> const posix_spawnattr_t *attrp, > >> char *const argv[], char *const envp[]); > >> > >> Here, file_actions is an object that represents a list of file actions > >> (open, close or dup2) that is populated using helper functions on the C > API. > >> > >> The question is: what is the best way to deal with this argument? > > > > How about a list of tuples like: > > [(os.SPAWN_OPEN, 4, 'README.txt', os.O_RDONLY, 0), > > (os.SPAWN_CLOSE, 5), > > (os.SPAWN_DUP2, 3, 6), > > ] > > > > I don't expect this API to be invoked directly by user code so it > > doesn't have to be extremely pretty. 
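For concreteness, here is a rough Python-level sketch of the validation such a tuple list implies. The SPAWN_* tags and their arities are assumptions for illustration only (the os-level constant names had not been settled at this point); the real dispatch would happen in C.

```python
import os

# Hypothetical tags mirroring the proposed os.SPAWN_* constants.
SPAWN_OPEN, SPAWN_CLOSE, SPAWN_DUP2 = range(3)

# tag -> number of arguments that must follow it in each tuple
_ARITY = {SPAWN_OPEN: 4, SPAWN_CLOSE: 1, SPAWN_DUP2: 2}


def check_file_actions(file_actions):
    """Mimic the checks a C-level switch on the tag would perform."""
    for action in file_actions:
        if not isinstance(action, tuple) or not action:
            raise TypeError("each file action must be a non-empty tuple")
        tag, *args = action
        if tag not in _ARITY:
            raise ValueError("unknown file action tag: %r" % (tag,))
        if len(args) != _ARITY[tag]:
            raise ValueError("tag %r expects %d arguments, got %d"
                             % (tag, _ARITY[tag], len(args)))


check_file_actions([
    (SPAWN_OPEN, 4, 'README.txt', os.O_RDONLY, 0),
    (SPAWN_CLOSE, 5),
    (SPAWN_DUP2, 3, 6),
])
```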
> > I'll note that one advantage of this approach is that it ties in well > with how the C API is going to have to deal with it anyway: a switch > statement dispatching on the first value, and then passing the > remaining arguments to the corresponding posix_file_actions API. > Plus the posix module tends to stick reasonably close to the C API anyway since it's such a thin wrapper. > > Wrapping it all up in a more Pythonic self-validating API would then > be the responsibility of the subprocess module (in the standard > library), or third party modules. > +1 from me on Antoine's suggestion. Might as well keep it simple. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Tue Jan 9 15:37:43 2018 From: steve.dower at python.org (Steve Dower) Date: Wed, 10 Jan 2018 07:37:43 +1100 Subject: [Python-Dev] [RELEASE] Python 3.7.0a4 is now available fortesting In-Reply-To: References: Message-ID: FWIW, the ansi and mbcs encodings on Windows do exactly this (as does the oem encoding with a minor twist). Feel free to reuse either alias if it makes sense, or make sure a new one also works on Windows. Top-posted from my Windows phone From: Victor Stinner Sent: Wednesday, January 10, 2018 3:37 To: Python Dev Subject: Re: [Python-Dev] [RELEASE] Python 3.7.0a4 is now available fortesting Hi, Python 3.7.0a4 includes the implementation of the PEP 538 (C locale coercion) and PEP 540 (UTF-8 Mode). Please test this Python with various locales, especially with the POSIX ("C") locale! Note: The UTF-8 Mode has a known issue with the readline module, I see how to fix it (add new encode/decode functions which ignore the UTF-8 mode and really use the current locale encoding), but I didn't have time to fix it yet: https://bugs.python.org/issue29240#msg308217 (I skipped the test to repair the CI, until I can fix the bug.) 
Victor 2018-01-09 16:48 GMT+01:00 Ned Deily : > Python 3.7.0a4 is the last of four planned alpha releases of Python 3.7, > the next feature release of Python. During the alpha phase, Python 3.7 > remains under heavy development: additional features will be added > and existing features may be modified or deleted. Please keep in mind > that this is a preview release and its use is not recommended for > production environments. The next preview release, 3.7.0b1, is planned > for 2018-01-29. You can find Python 3.7.0a4 and more information here: > > https://www.python.org/downloads/release/python-370a4/ > > -- > Ned Deily > nad at python.org -- [] > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at barrys-emacs.org Tue Jan 9 15:17:02 2018 From: barry at barrys-emacs.org (Barry Scott) Date: Tue, 9 Jan 2018 20:17:02 +0000 Subject: [Python-Dev] subprocess not escaping "^" on Windows In-Reply-To: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> References: <79eabfed-7e8a-b570-485c-fecbe5c94725@stackless.com> Message-ID: My feeling is that the number of uses for calling cmd /c is rather limited on Windows. Certainly calling out to use the CMD builtin is not to be encouraged I'd say. Between shutil and the os module you have most of the file handling commands. Admin tools might want to run special commands, but they are not builtins. 
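A brief sketch of both points: the common cmd builtins have direct stdlib equivalents, and external executables can be run with an argument list so that cmd.exe (and its "^" escaping) never gets involved. The file names here are purely illustrative:

```python
import os
import shutil
import subprocess
import sys
import tempfile

# Instead of shelling out to "cmd /c copy src dst", use the stdlib:
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'src.txt')
with open(src, 'w') as f:
    f.write('hello')
dst = os.path.join(tmpdir, 'dst.txt')
shutil.copy(src, dst)             # no shell, so no quoting rules at all
with open(dst) as f:
    assert f.read() == 'hello'
shutil.rmtree(tmpdir)

# When an external executable must run, pass an argument *list*; the
# process is spawned directly, so "^" is delivered literally:
out = subprocess.run(
    [sys.executable, '-c', 'import sys; print(sys.argv[1])', 'a^b'],
    stdout=subprocess.PIPE, universal_newlines=True)
assert out.stdout.strip() == 'a^b'
```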
In all the cases where you have a command line exe to run you can avoid calling into cmd and the associated quoting problems. I've found that in all my windows python apps I typically end up using CreateProcess and ShellExecute for the useful stuff. (I use ctypes to call them). Is it worth changing the quoting at all? I would say not. Barry From guido at python.org Tue Jan 9 16:22:46 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 9 Jan 2018 13:22:46 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: There are too many words here for me to follow. I'll wait a few days and then hopefully there's a new proposal that you are all in agreement with, or there are two brief alternatives that I have to choose between. On Tue, Jan 9, 2018 at 7:41 AM, Yury Selivanov wrote: > By default, threading.local raises an AttributeError (unless you subclass > it.) Similar to that and to NameErrors, I think it's a good idea for > ContextVars to raise a LookupError if a variable was not explicitly set. > > Yury > > > On Tue, Jan 9, 2018 at 7:15 PM Victor Stinner > wrote: > >> 2018-01-09 12:41 GMT+01:00 Yury Selivanov : >> > But I'd be -1 on making all ContextVars have a None default >> > (effectively have a "ContextVar.get(default=None)" signature. This >> > would be a very loose semantics in my opinion. >> >> Why do you think that it's a loose semantics? For me >> ContextVar/Context are similar to Python namespaces and thread local >> storage. >> >> To "declare" a variable in a Python namespace, you have to set it: >> "global x" doesn't create a variable, only "x = None". >> >> It's not possible to define a thread local variable without specifying >> a "default" value neither. 
>> >> Victor >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Tue Jan 9 19:06:06 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Wed, 10 Jan 2018 02:06:06 +0200 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 Message-ID: Hi all, I feel like I should write some thoughts regarding the "context" discussion, related to the various PEPs. I like PEP 567 (+ 567 ?) better than PEP 550. However, besides providing cvar.set(), I'm not really sure about the gain compared to PEP 555 (which could easily have e.g. a dict-like interface to the context). I'm still not a big fan of "get"/"set" here, but the idea was indeed to provide those on top of a PEP 555 type thing too. "Tokens" in PEP 567, seems to resemble assignment context managers in PEP 555. However, they feel a bit messy to me, because they make it look like one could just set a variable and then revert the change at any point in time after that. PEP 555 is in fact a simplification of my previous sketch that had a .set(..) in it, but was somewhat different from PEP 550. The idea was to always explicitly define the scope of contextvar values. A context manager / with statement determined the scope of .set(..) 
operations inside the with statement: # Version A: cvar.set(1) with context_scope(): cvar.set(2) assert cvar.get() == 2 assert cvar.get() == 1 Then I added the ability to define scopes for different variables separately: # Version B cvar1.set(1) cvar2.set(2) with context_scope(cvar1): cvar1.set(11) cvar2.set(22) assert cvar1.get() == 1 assert cvar2.get() == 22 However, in practice, most libraries would wrap __enter__, set and __exit__ into another context manager. So maybe one might want to allow something like # Version C: assert cvar.get() == something with context_scope(cvar, 2): assert cvar.get() == 2 assert cvar.get() == something But this then led to combining "__enter__" and ".set(..)" into Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: # PEP 555 draft version: assert cvar.value == something with cvar.assign(1): assert cvar.value == 1 assert cvar.value == something Anyway, given the schedule, I'm not really sure about the best thing to do here. In principle, something like in versions A, B and C above could be done (I hope the proposal was roughly self-explanatory based on earlier discussions). However, at this point, I'd probably need a lot of help to make that happen for 3.7. -- Koos -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Jan 10 00:17:28 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 10 Jan 2018 05:17:28 +0000 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: Wasn't PEP 555 rejected by Guido? What's the point of this post? Yury On Wed, Jan 10, 2018 at 4:08 AM Koos Zevenhoven wrote: > Hi all, > > I feel like I should write some thoughts regarding the "context" > discussion, related to the various PEPs. > > I like PEP 567 (+ 567 ?) better than PEP 550. However, besides providing > cvar.set(), I'm not really sure about the gain compared to PEP 555 (which > could easily have e.g. 
a dict-like interface to the context). I'm still not > a big fan of "get"/"set" here, but the idea was indeed to provide those on > top of a PEP 555 type thing too. > > "Tokens" in PEP 567, seems to resemble assignment context managers in PEP > 555. However, they feel a bit messy to me, because they make it look like > one could just set a variable and then revert the change at any point in > time after that. > > PEP 555 is in fact a simplification of my previous sketch that had a > .set(..) in it, but was somewhat different from PEP 550. The idea was to > always explicitly define the scope of contextvar values. A context manager > / with statement determined the scope of .set(..) operations inside the > with statement: > > # Version A: > cvar.set(1) > with context_scope(): > cvar.set(2) > > assert cvar.get() == 2 > > assert cvar.get() == 1 > > Then I added the ability to define scopes for different variables > separately: > > # Version B > cvar1.set(1) > cvar2.set(2) > with context_scope(cvar1): > cvar1.set(11) > cvar2.set(22) > > assert cvar1.get() == 1 > assert cvar2.get() == 22 > > > However, in practice, most libraries would wrap __enter__, set and > __exit__ into another context manager. So maybe one might want to allow > something like > > # Version C: > assert cvar.get() == something > with context_scope(cvar, 2): > assert cvar.get() == 2 > > assert cvar.get() == something > > > But this then led to combining "__enter__" and ".set(..)" into > Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: > > # PEP 555 draft version: > assert cvar.value == something > with cvar.assign(1): > assert cvar.value == 1 > > assert cvar.value == something > > > Anyway, given the schedule, I'm not really sure about the best thing to do > here. In principle, something like in versions A, B and C above could be > done (I hope the proposal was roughly self-explanatory based on earlier > discussions). 
However, at this point, I'd probably need a lot of help to > make that happen for 3.7. > > -- Koos > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Wed Jan 10 01:29:58 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Wed, 10 Jan 2018 08:29:58 +0200 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: On Jan 10, 2018 07:17, "Yury Selivanov" wrote: Wasn't PEP 555 rejected by Guido? What's the point of this post? I sure hope there is a point. I don't think mentioning PEP 555 in the discussions should hurt. A typo in my post btw: should be "PEP 567 (+568 ?)" in the second paragraph of course. -- Koos (mobile) Yury On Wed, Jan 10, 2018 at 4:08 AM Koos Zevenhoven wrote: > Hi all, > > I feel like I should write some thoughts regarding the "context" > discussion, related to the various PEPs. > > I like PEP 567 (+ 567 ?) better than PEP 550. However, besides providing > cvar.set(), I'm not really sure about the gain compared to PEP 555 (which > could easily have e.g. a dict-like interface to the context). I'm still not > a big fan of "get"/"set" here, but the idea was indeed to provide those on > top of a PEP 555 type thing too. > > "Tokens" in PEP 567, seems to resemble assignment context managers in PEP > 555. However, they feel a bit messy to me, because they make it look like > one could just set a variable and then revert the change at any point in > time after that. > > PEP 555 is in fact a simplification of my previous sketch that had a > .set(..) in it, but was somewhat different from PEP 550. The idea was to > always explicitly define the scope of contextvar values. 
A context manager > / with statement determined the scope of .set(..) operations inside the > with statement: > > # Version A: > cvar.set(1) > with context_scope(): > cvar.set(2) > > assert cvar.get() == 2 > > assert cvar.get() == 1 > > Then I added the ability to define scopes for different variables > separately: > > # Version B > cvar1.set(1) > cvar2.set(2) > with context_scope(cvar1): > cvar1.set(11) > cvar2.set(22) > > assert cvar1.get() == 1 > assert cvar2.get() == 22 > > > However, in practice, most libraries would wrap __enter__, set and > __exit__ into another context manager. So maybe one might want to allow > something like > > # Version C: > assert cvar.get() == something > with context_scope(cvar, 2): > assert cvar.get() == 2 > > assert cvar.get() == something > > > But this then led to combining "__enter__" and ".set(..)" into > Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: > > # PEP 555 draft version: > assert cvar.value == something > with cvar.assign(1): > assert cvar.value == 1 > > assert cvar.value == something > > > Anyway, given the schedule, I'm not really sure about the best thing to do > here. In principle, something like in versions A, B and C above could be > done (I hope the proposal was roughly self-explanatory based on earlier > discussions). However, at this point, I'd probably need a lot of help to > make that happen for 3.7. > > -- Koos > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > yselivanov.ml%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pablogsal at gmail.com Wed Jan 10 04:17:02 2018 From: pablogsal at gmail.com (Pablo Galindo Salgado) Date: Wed, 10 Jan 2018 09:17:02 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn Message-ID: I think I really like Antoine's suggestion so I'm going to finish implementing it that way. I think this keeps the API simple, does not bring new dependencies into the os module, keeps the C implementation clean and is consistent with the rest of the posix module. I will post an update when it is ready. Thank you everyone for sharing your view and advice!
From guido at python.org Wed Jan 10 11:21:41 2018 From: guido at python.org (Guido van Rossum) Date: Wed, 10 Jan 2018 08:21:41 -0800 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: The current status of PEP 555 is "Withdrawn". I have no interest in considering it any more, so if you'd rather see a decision from me I'll be happy to change it to "Rejected". On Tue, Jan 9, 2018 at 10:29 PM, Koos Zevenhoven wrote: > On Jan 10, 2018 07:17, "Yury Selivanov" wrote: > > Wasn't PEP 555 rejected by Guido? What's the point of this post? > > > I sure hope there is a point. I don't think mentioning PEP 555 in the > discussions should hurt. > > A typo in my post btw: should be "PEP 567 (+568 ?)" in the second > paragraph of course. > > -- Koos (mobile) > > > Yury > > On Wed, Jan 10, 2018 at 4:08 AM Koos Zevenhoven wrote: > >> Hi all, >> >> I feel like I should write some thoughts regarding the "context" >> discussion, related to the various PEPs. >> >> I like PEP 567 (+ 567 ?) better than PEP 550. However, besides providing >> cvar.set(), I'm not really sure about the gain compared to PEP 555 (which >> could easily have e.g. a dict-like interface to the context). I'm still not >> a big fan of "get"/"set" here, but the idea was indeed to provide those on >> top of a PEP 555 type thing too.
>> >> "Tokens" in PEP 567, seems to resemble assignment context managers in PEP >> 555. However, they feel a bit messy to me, because they make it look like >> one could just set a variable and then revert the change at any point in >> time after that. >> >> PEP 555 is in fact a simplification of my previous sketch that had a >> .set(..) in it, but was somewhat different from PEP 550. The idea was to >> always explicitly define the scope of contextvar values. A context manager >> / with statement determined the scope of .set(..) operations inside the >> with statement: >> >> # Version A: >> cvar.set(1) >> with context_scope(): >> cvar.set(2) >> >> assert cvar.get() == 2 >> >> assert cvar.get() == 1 >> >> Then I added the ability to define scopes for different variables >> separately: >> >> # Version B >> cvar1.set(1) >> cvar2.set(2) >> with context_scope(cvar1): >> cvar1.set(11) >> cvar2.set(22) >> >> assert cvar1.get() == 1 >> assert cvar2.get() == 22 >> >> >> However, in practice, most libraries would wrap __enter__, set and >> __exit__ into another context manager. So maybe one might want to allow >> something like >> >> # Version C: >> assert cvar.get() == something >> with context_scope(cvar, 2): >> assert cvar.get() == 2 >> >> assert cvar.get() == something >> >> >> But this then led to combining "__enter__" and ".set(..)" into >> Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: >> >> # PEP 555 draft version: >> assert cvar.value == something >> with cvar.assign(1): >> assert cvar.value == 1 >> >> assert cvar.value == something >> >> >> Anyway, given the schedule, I'm not really sure about the best thing to >> do here. In principle, something like in versions A, B and C above could be >> done (I hope the proposal was roughly self-explanatory based on earlier >> discussions). However, at this point, I'd probably need a lot of help to >> make that happen for 3.7. 
>> >> -- Koos >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivano >> v.ml%40gmail.com >> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Wed Jan 10 11:58:25 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Wed, 10 Jan 2018 18:58:25 +0200 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: The status of PEP 555 is just a side track. Here, I took a step back compared to what went into PEP 555. ?Koos On Wed, Jan 10, 2018 at 6:21 PM, Guido van Rossum wrote: > The current status of PEP 555 is "Withdrawn". I have no interest in > considering it any more, so if you'd rather see a decision from me I'll be > happy to change it to "Rejected". > > On Tue, Jan 9, 2018 at 10:29 PM, Koos Zevenhoven > wrote: > >> On Jan 10, 2018 07:17, "Yury Selivanov" wrote: >> >> Wasn't PEP 555 rejected by Guido? What's the point of this post? >> >> >> I sure hope there is a point. I don't think mentioning PEP 555 in the >> discussions should hurt. >> >> A typo in my post btw: should be "PEP 567 (+568 ?)" in the second >> paragraph of course. >> >> -- Koos (mobile) >> >> >> Yury >> >> On Wed, Jan 10, 2018 at 4:08 AM Koos Zevenhoven >> wrote: >> >>> Hi all, >>> >>> I feel like I should write some thoughts regarding the "context" >>> discussion, related to the various PEPs. >>> >>> I like PEP 567 (+ 567 ?) better than PEP 550. 
However, besides providing >>> cvar.set(), I'm not really sure about the gain compared to PEP 555 (which >>> could easily have e.g. a dict-like interface to the context). I'm still not >>> a big fan of "get"/"set" here, but the idea was indeed to provide those on >>> top of a PEP 555 type thing too. >>> >>> [..] >>> >>> -- Koos >> > -- > --Guido van Rossum (python.org/~guido) -- + Koos Zevenhoven + http://twitter.com/k7hoven + -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jan 10 12:17:07 2018 From: guido at python.org (Guido van Rossum) Date: Wed, 10 Jan 2018 09:17:07 -0800 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: I'm sorry, Koos, but based on your past contributions I am not interested in discussing this topic with you.
On Wed, Jan 10, 2018 at 8:58 AM, Koos Zevenhoven wrote: > The status of PEP 555 is just a side track. Here, I took a step back > compared to what went into PEP 555. > > -- Koos > > On Wed, Jan 10, 2018 at 6:21 PM, Guido van Rossum > wrote: > [..] -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mosesbobadilla at gmail.com Wed Jan 10 15:37:27 2018 From: mosesbobadilla at gmail.com (Moses Egypt) Date: Wed, 10 Jan 2018 15:37:27 -0500 Subject: [Python-Dev] Bug report in audioop module. Message-ID: I was told to post this here when I asked what to do on the python reddit. This is the issue: https://bugs.python.org/issue32004 It has received no response since I posted it two months ago, so I figured I didn't fill something out correctly to get it put on someone's tracker. This is my first bug report, so please let me know if there is anything I need to do get it appointed to the correct person next time. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Jan 10 18:52:21 2018 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 10 Jan 2018 15:52:21 -0800 Subject: [Python-Dev] PEP 567 v2 In-Reply-To: References: Message-ID: On Tue, Jan 9, 2018 at 2:59 AM, Yury Selivanov wrote: > > >> On Jan 9, 2018, at 11:18 AM, Nathaniel Smith wrote: >> The approach I took in PEP 568 is even simpler, I think.
The PEP is a
>> few pages long because I wanted to be exhaustive to make sure we weren't missing any details, but the tl;dr is: The ChainMap lives entirely inside the threadstate, so there's no need to create a LC/EC distinction -- users just see Contexts, or there's the one stack introspection API, get_context_stack(), which returns a List[Context]. Instead of messing with new_child, copy_context is just Context(dict(chain_map)) -- i.e., it creates a flattened copy of the current mapping. (If we used new_child, then we'd have to have a way to return a ChainMap, reintroducing the LC/EC mess.)
>
> This sounds reasonable. Although keep in mind that merging hamt is still an expensive operation, so flattening shouldn't always be performed (this is covered in 550).

Right, the PEP mostly focuses on the semantics rather than the implementation and this is an implementation detail (the user can't tell whether a Context internally holds a stack of HAMTs or just one). But there is a note that we might choose to perform the actual flattening lazily if it turns out to be worthwhile.

> I also wouldn't call LC/EC a "mess". Your pep just names things differently, but otherwise is entirely built on concepts and ideas introduced in pep 550.

Sorry for phrasing it like that -- I just meant that at the API level, the LC/EC split caused a lot of confusion (and apparently this was its "nail in the coffin"!). In the PEP 567/568 version, the underlying concepts are the same, but the API ends up being simpler.

-n

--
Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Wed Jan 10 19:06:33 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Jan 2018 10:06:33 +1000 Subject: [Python-Dev] Bug report in audioop module. In-Reply-To: References: Message-ID: On 11 January 2018 at 06:37, Moses Egypt wrote: > I was told to post this here when I asked what to do on the python reddit.
> This is the issue: > https://bugs.python.org/issue32004 Thank you for taking the time to file that! > It has received no response since I posted it two months ago, so I figured I > didn't fill something out correctly to get it put on someone's tracker. This > is my first bug report, so please let me know if there is anything I need to > do get it appointed to the correct person next time. Thanks. You've actually done everything right - it's just hit or miss as to whether or not issues will attract a core developer's attention. (Unfortunately there aren't currently any jobs in the world that have "Help limit the growth of the CPython issue count in general" as a requirement, so there aren't any kind of consistent response time assurance for issues or pull requests). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python at mrabarnett.plus.com Wed Jan 10 19:14:39 2018 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 11 Jan 2018 00:14:39 +0000 Subject: [Python-Dev] Bug report in audioop module. In-Reply-To: References: Message-ID: <5ce88a29-0bcf-29fa-1b2a-31ef0f2b103a@mrabarnett.plus.com> On 2018-01-10 20:37, Moses Egypt wrote: > I was told to post this here when I asked what to do on the python > reddit. This is the issue: > https://bugs.python.org/issue32004 > > It has received no response since I posted it two months ago, so I > figured I didn't fill something out correctly to get it put on someone's > tracker. This is my first bug report, so please let me know if there is > anything I need to do get it appointed to the correct person next time. > Thanks. > This is open source; nothing gets done unless someone chooses to do it, usually because it "scratches their itch" or they have a problem and no-one else is interested in doing it. If you don't get any response, you could always try fixing it yourself. 
From njs at pobox.com Wed Jan 10 19:44:10 2018 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 10 Jan 2018 16:44:10 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Tue, Jan 9, 2018 at 3:41 AM, Yury Selivanov wrote: > On Tue, Jan 9, 2018 at 11:02 AM, Nathaniel Smith wrote: >> Right now, the set of valid states for a ContextVar are: it can hold >> any Python object, or it can be undefined. However, the only way it >> can be in the "undefined" state is in a new Context where it has never >> had a value; once it leaves the undefined state, it can never return >> to it. > > Is "undefined" a state when a context variable doesn't have a default > and isn't yet set? If so, why can't it be returned back to the > "undefined" state? That's why we have the 'reset' method: Sorry, yes, you can return to the "undefined" state if you have a valid token that hasn't been used yet. But my point is that it's weird to have a variable that needs so many words to describe which kinds of state transitions are possible. > I don't like how context variables are defined in Option 1 and Option > 2. I view ContextVars as keys in some global context mapping--akin to > Python variables. This is one totally reasonable option but, I mean... it's software, there are lots of options for how to view things that we could potentially make true, and we get to pick the one that works best :-). And I think it's easier to explain to users how ContextVar works if we can do it without talking about Context mappings etc. Thread-local storage is also implemented using some per-thread maps and various clever tricks, but I don't think I've ever seen documentation that described it that way. > In any case, at this point I think that the best option is to simply > drop the "default" parameter from the ContextVar constructor. 
This
> would leave us with only one default in ContextVar.get() method:
>
> c.get()          # Will raise a LookupError if 'c' is not set
> c.get('python')  # Will return 'python' if 'c' is not set
>
> I also now see how having two different 'default' values: one defined
> when a ContextVar is created, and one can be passed to
> ContextVar.get() is confusing.

But the constructor default is way more important for usability than any of the other features we're talking about! Every time I use threading.local, I get annoyed that there isn't a simpler way to specify a default value. OTOH I've never found a case where I actually wanted undefined values.

To find out whether my experience is typical, I did a quick grep of the stdlib, and found 5 thread local variables:

- asyncio.events._BaseEventLoopPolicy._local.{_loop, _set_called}: These two use the 'subclass threading.local' trick to make it seem like these are initialized to None.

- asyncio.events._running_loop: This uses the subclass trick to make it seem like it's initialized to (None, None).

- multiprocessing.context._tls.spawning_popen: Here the code defines two accessors (get_spawning_popen, set_spawning_popen) that make it seem like it's initialized to None. Of course you could do this with ContextVar's too, but if ContextVar had native support for specifying an initial value then they'd be unnecessary, because spawning_popen.get()/set() would already do the right thing.

- _pydecimal.local.__decimal_context__: This is a little trickier. It has a default value, but it's mutable, so access is hidden behind two accessors (getcontext, setcontext) and it's initialized on first access. Currently this is done with a try: except:, but if thread locals had the ability to set the default to None, then using that would make the implementation shorter and faster (exceptions are expensive).
So that's 5 out of 5 cases where the code would get simpler if thread-locals had the ability to specify a default initial value, and 0 out of 5 cases where anyone would miss support for undefined values or wants to be able to control the default get() value on a call-by-call basis.

The argument for supporting undefined values in ContextVar is mostly by analogy with regular Python variables. I like analogies, but I don't think we should sacrifice actual use cases to preserve the analogy.

> But I'd be -1 on making all ContextVars have a None default
> (effectively have a "ContextVar.get(default=None)" signature). This
> would be a very loose semantics in my opinion.

It may have gotten lost in that email, but my actual favorite approach is that we make the signatures:

ContextVar(name, *, initial_value)  # or even (*, name, initial_value)
ContextVar.get()
ContextVar.set(value)

so that when you create a ContextVar you always state the initial value, whatever makes sense in a particular case. (Obviously None will be a very popular choice, but this way it won't be implicit, and no-one will be surprised to see it returned from get().)

-n

--
Nathaniel J. Smith -- https://vorpus.org

From ncoghlan at gmail.com Wed Jan 10 20:18:30 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Jan 2018 11:18:30 +1000 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On 11 January 2018 at 10:44, Nathaniel Smith wrote:
> It may have gotten lost in that email, but my actual favorite approach
> is that we make the signatures:
>
> ContextVar(name, *, initial_value)  # or even (*, name, initial_value)
> ContextVar.get()
> ContextVar.set(value)
>
> so that when you create a ContextVar you always state the initial
> value, whatever makes sense in a particular case. (Obviously None will
> be a very popular choice, but this way it won't be implicit, and
> no-one will be surprised to see it returned from get().)
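The "subclass threading.local" trick that the survey above keeps referring to looks like this in isolation. The names here (`_Local`, `loop`) are illustrative stand-ins, not the actual asyncio internals; the mechanism, though, is exactly the one the stdlib uses: class attributes on the subclass act as per-thread initial values.

```python
# The "subclass threading.local" trick: class attributes supply a
# per-thread default, so a fresh thread sees the default instead of
# raising AttributeError.
import threading

class _Local(threading.local):
    loop = None  # initial value seen by every thread

local = _Local()
assert local.loop is None  # no AttributeError, even before any assignment

seen = []

def worker():
    seen.append(local.loop)   # a brand-new thread still sees the default
    local.loop = 'worker-loop'
    seen.append(local.loop)   # instance attribute now shadows the default

t = threading.Thread(target=worker)
t.start()
t.join()
assert seen == [None, 'worker-loop']
assert local.loop is None  # the main thread's view is unaffected
```

This is the behaviour that a `default`/`initial_value` parameter on the ContextVar constructor would give directly, without the subclass boilerplate.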
I also like this idea, and I think it aligns better with "dict.get" than it may first appear: the trick is that in this simplified API "ContextVar.get()" would be akin to "get_thread_state().current_context.get(cv, cv.initial_value)" (and PEP 568 wouldn't really change that). So both the key and the default value are specified at ContextVar initialisation time, and if you want to supply a non-standard default, then you need to break the ContextVar abstraction, and access "contextvars.copy_context().get(cv, custom_default)" instead. If we later decide that we do want to support raising an exception for unitialised access after all, then we could introduce a "missing" callback to the ContextVar constructor that would be akin to defaultdict's "default_factory" callback (with the requirement becoming that you have to specify either an initial_value, or a missing callback, but not both). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Thu Jan 11 01:23:43 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 11 Jan 2018 10:23:43 +0400 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 4:44 AM, Nathaniel Smith wrote: [..] > It may have gotten lost in that email, but my actual favorite approach > is that we make the signatures: > > ContextVar(name, *, initial_value) # or even (*, name, initial_value) > ContextVar.get() > ContextVar.set(value) > > so that when you create a ContextVar you always state the initial > value, whatever makes sense in a particular case. (Obviously None will > be a very popular choice, but this way it won't be implicit, and > no-one will be surprised to see it returned from get().) Alright, you've shown that most of the time when we use threading.local in the standard library we subclass it in order to provide a default value (and avoid AttributeError being thrown). 
This is a solid argument in favour of keeping the 'default' parameter for the ContextVar constructor. Let's keep it.

However, I still don't like the idea of making defaults mandatory. I have at least one illustrative use case (I can come up with more such examples, btw) which shows that it's not always desirable to have a None default: getting the current request object in a web application. With the current PEP 567 semantics:

request_var: ContextVar[Request] = ContextVar('current_request')

and later:

request: Request = request_var.get()

'request_var.get()' will throw a LookupError, which will indicate that something went wrong in the framework layer. The user should never see this error, and they can just rely on the fact that the current request is always available (cannot be None).

With mandatory defaults, the type of the 'request' variable will be 'Optional[Request]', and the user will be forced to add an 'if' statement to guard against None values. Otherwise the user risks occasional AttributeErrors that don't really explain what actually happened. I would prefer them to see a LookupError('cannot lookup current_request context variable') instead.

I think that when you have an int stored in a context variable it would usually make sense to give it a 0 default (or some other number). However, for a complex object (like the current request object) there is sometimes *no* sensible default value. Forcing the user to set it to None feels like a badly designed API that forces the user to work around it.

Therefore I'm still in favour of keeping the current PEP 567 behaviour. It feels very consistent with how variable lookups and threading.local objects work in Python now.
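For reference, this is the behaviour PEP 567 ended up shipping as the `contextvars` module in Python 3.7: a ContextVar created without a default raises LookupError from `.get()` until someone sets it. A plain dict stands in for the `Request` object here.

```python
# PEP 567 semantics as shipped in Python 3.7's contextvars module.
from contextvars import ContextVar

request_var = ContextVar('current_request')  # note: no default given

# Before the framework sets it, .get() raises instead of returning None:
try:
    request_var.get()
except LookupError as exc:
    error = type(exc).__name__

# The framework layer sets the variable before dispatching to user code:
request_var.set({'method': 'GET', 'path': '/'})
request = request_var.get()  # user code can rely on this being set

assert error == 'LookupError'
assert request['path'] == '/'
```

So the "no default" state is a loud framework-level bug signal, while `ContextVar('x', default=...)` remains available when a sensible default exists.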
Yury From chris.jerdonek at gmail.com Thu Jan 11 01:35:14 2018 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Wed, 10 Jan 2018 22:35:14 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Mon, Jan 8, 2018 at 11:02 PM, Nathaniel Smith wrote: > Right now, the set of valid states for a ContextVar are: it can hold > any Python object, or it can be undefined. However, the only way it > can be in the "undefined" state is in a new Context where it has never > had a value; once it leaves the undefined state, it can never return > to it. I know Yury responded to one aspect of this point later on in the thread. However, in terms of describing the possible states without reference to the internal Context mappings, IIUC, wouldn't it be more accurate to view a ContextVar as a stack of values rather than just the binary "holding an object or not"? This is to reflect the number of times set() has been called (and so the number of times reset() would need to be called to "empty" the ContextVar). --Chris > > This makes me itch. It's very weird to have a mutable variable with a > valid state that you can't reach by mutating it. I see two > self-consistent ways to make me stop itching: (a) double-down on > undefined as being part of ContextVar's domain, or (b) reduce the > domain so that undefined is never a valid state. > > # Option 1 > > In the first approach, we conceptualize ContextVar as being a > container that either holds a value or is empty (and then there's one > of these containers for each context). We also want to be able to > define an initial value that the container takes on when a new context > materializes, because that's really convenient. And then after that we > provide ways to get the value (if present), or control the value > (either set it to a particular value or unset it). 
So something like:
>
> var1 = ContextVar("var1")  # no initial value
> var2 = ContextVar("var2", initial_value="hello")
>
> with assert_raises(SomeError):
>     var1.get()
> # get's default lets us give a different outcome in cases where it
> # would otherwise raise
> assert var1.get(None) is None
> assert var2.get() == "hello"
> # If get() doesn't raise, then the argument is ignored
> assert var2.get(None) == "hello"
>
> # We can set to arbitrary values
> for var in [var1, var2]:
>     var.set("new value")
>     assert var.get() == "new value"
>
> # We can unset again, so get() will raise
> for var in [var1, var2]:
>     var.unset()
>     with assert_raises(SomeError):
>         var.get()
>     assert var.get(None) is None
>
> To fulfill all that, we need an implementation like:
>
> MISSING = make_sentinel()
>
> class ContextVar:
>     def __init__(self, name, *, initial_value=MISSING):
>         self.name = name
>         self.initial_value = initial_value
>
>     def set(self, value):
>         if value is MISSING: raise TypeError
>         current_context()._dict[self] = value
>         # Token handling elided because it's orthogonal to this issue
>         return Token(...)
>
>     def unset(self):
>         current_context()._dict[self] = MISSING
>         # Token handling elided because it's orthogonal to this issue
>         return Token(...)
>
>     def get(self, default=_NOT_GIVEN):
>         value = current_context().get(self, self.initial_value)
>         if value is MISSING:
>             if default is _NOT_GIVEN:
>                 raise ...
>             else:
>                 return default
>         else:
>             return value
>
> Note that the implementation here is somewhat tricky and non-obvious. In particular, to preserve the illusion of a simple container with an optional initial value, we have to encode a logically undefined ContextVar as one that has Context[var] set to MISSING, and a missing entry in Context encodes the presence of the initial value.
If we
> defined unset() as 'del current_context._dict[self]', then we'd have:
>
> var2.unset()
> assert var2.get() is None
>
> which would be very surprising to users who just want to think about ContextVars and ignore all that stuff about Contexts. This, in turn, means that we need to expose the MISSING sentinel in general, because anyone introspecting Context objects directly needs to know how to recognize this magic value to interpret things correctly.
>
> AFAICT this is the minimum complexity required to get a complete and internally-consistent set of operations for a ContextVar that's conceptualized as being a container that either holds an arbitrary value or is empty.
>
> # Option 2
>
> The other complete and coherent conceptualization I see is to say that a ContextVar always holds a value. If we eliminate the "unset" state entirely, then there's no "missing unset method" -- there just isn't any concept of an unset value in the first place, so there's nothing to miss. This idea shows up in lots of types in Python, actually -- e.g. for any exception object, obj.__context__ is always defined. Its value might be None, but it has a value. In this approach, ContextVar's are similar.
>
> To fulfill all that, we need an implementation like:
>
> class ContextVar:
>     # Or maybe it'd be better to make initial_value mandatory, like this?
>     # def __init__(self, name, *, initial_value):
>     def __init__(self, name, *, initial_value=None):
>         self.name = name
>         self.initial_value = initial_value
>
>     def set(self, value):
>         current_context()._dict[self] = value
>         # Token handling elided because it's orthogonal to this issue
>         return Token(...)
>
>     def get(self):
>         return current_context().get(self, self.initial_value)
>
> This is also a complete and internally consistent set of operations, but this time for a somewhat different way of conceptualizing ContextVar.
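For readers who want to run the quoted "Option 2" sketch, one possible self-contained rendition follows. Here `current_context()` is a stand-in built on a per-thread dict, and Tokens are omitted entirely, so this only demonstrates the "a ContextVar always holds a value" semantics, not the real PEP 567 machinery.

```python
# Runnable rendition of the "Option 2" sketch: every ContextVar always
# has a value, falling back to its initial_value when never set.
import threading

_state = threading.local()

def current_context():
    # Stand-in for the real context machinery: one plain dict per thread.
    if not hasattr(_state, 'ctx'):
        _state.ctx = {}
    return _state.ctx

class ContextVar:
    def __init__(self, name, *, initial_value=None):
        self.name = name
        self.initial_value = initial_value

    def set(self, value):
        current_context()[self] = value  # the var itself is the dict key

    def get(self):
        # Never raises: missing entries fall back to the initial value.
        return current_context().get(self, self.initial_value)

var = ContextVar('var', initial_value='hello')
assert var.get() == 'hello'   # defined from the start
var.set('new value')
assert var.get() == 'new value'
```

Note how the "undefined" state simply cannot be expressed here, which is exactly the simplification the quoted email is arguing for.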
>
> Actually, the more I think about it, the more I think that if we take this approach and say that every ContextVar always has a value, it makes sense to make initial_value= a mandatory argument instead of defaulting it to None. Then the typing works too, right? Something like:
>
> ContextVar(name: str, *, initial_value: T) -> ContextVar[T]
> ContextVar.get() -> T
> ContextVar.set(T) -> Token
>
> ? And it's hardly a burden on users to type 'ContextVar("myvar", initial_value=None)' if that's what they want.
>
> Anyway... between these two options, I like Option 2 better because it's substantially simpler without (AFAICT) any meaningful reduction in usability. But I'd prefer either of them to the current PEP 567, which seems like an internally-contradictory hybrid of these ideas. It makes sense if you know how the code and Contexts work. But if I was talking to someone who wanted to ignore those details and just use a ContextVar, and they asked me for a one sentence summary of how it worked, I wouldn't know what to tell them.
>
> -n
>
> --
> Nathaniel J.
Smith -- https://vorpus.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com From ethan at stoneleaf.us Thu Jan 11 01:39:05 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 10 Jan 2018 22:39:05 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: <5A570689.6040805@stoneleaf.us> On 01/10/2018 10:23 PM, Yury Selivanov wrote: > On Thu, Jan 11, 2018 at 4:44 AM, Nathaniel Smith wrote: >> It may have gotten lost in that email, but my actual favorite approach >> is that we make the signatures: >> >> ContextVar(name, *, initial_value) # or even (*, name, initial_value) >> ContextVar.get() >> ContextVar.set(value) >> >> so that when you create a ContextVar you always state the initial >> value, whatever makes sense in a particular case. (Obviously None will >> be a very popular choice, but this way it won't be implicit, and >> no-one will be surprised to see it returned from get().) > > Alright, you've shown that most of the time when we use > threading.local in the standard library we subclass it in order to > provide a default value (and avoid AttributeError being thrown). This > is a solid argument in favour of keeping the 'default' parameter for > the ContextVar constructor. Let's keep it. [...] > I think that when you have an int stored in a context variable it > would usually make sense to give it a 0 default (or some other > number). However, for a complex object (like current request object) > there is *no* sensible default value sometimes. Forcing the user to > set it to None feels like a badly designed API that forces the user to > work around it. > > Therefore I'm still in favour of keeping the current PEP 567 > behaviour. 
To be clear: We'll now be able to specify a default when we create the variable, but we can also leave it out so a LookupError can be raised later? -- ~Ethan~ From yselivanov.ml at gmail.com Thu Jan 11 01:45:12 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 11 Jan 2018 10:45:12 +0400 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: <5A570689.6040805@stoneleaf.us> References: <5A570689.6040805@stoneleaf.us> Message-ID: On Thu, Jan 11, 2018 at 10:39 AM, Ethan Furman wrote: > On 01/10/2018 10:23 PM, Yury Selivanov wrote: [..] >> Therefore I'm still in favour of keeping the current PEP 567 >> behaviour. > > > To be clear: We'll now be able to specify a default when we create the > variable, but we can also leave it out so a LookupError can be raised later? Correct. Yury From yselivanov.ml at gmail.com Thu Jan 11 01:58:42 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 11 Jan 2018 10:58:42 +0400 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 10:35 AM, Chris Jerdonek wrote: > On Mon, Jan 8, 2018 at 11:02 PM, Nathaniel Smith wrote: >> Right now, the set of valid states for a ContextVar are: it can hold >> any Python object, or it can be undefined. However, the only way it >> can be in the "undefined" state is in a new Context where it has never >> had a value; once it leaves the undefined state, it can never return >> to it. > > I know Yury responded to one aspect of this point later on in the > thread. However, in terms of describing the possible states without > reference to the internal Context mappings, IIUC, wouldn't it be more > accurate to view a ContextVar as a stack of values rather than just > the binary "holding an object or not"? This is to reflect the number > of times set() has been called (and so the number of times reset() > would need to be called to "empty" the ContextVar). But why do you want to think of ContextVar as a stack of values? 
Or as something that is holding even one value? Do Python variables hold/envelop objects they reference? No, they don't. They are simple names and are used to look up objects in globals/locals dicts. ContextVars are very similar! They are *keys* in Context objects -- that is it. ContextVar.default is returned by ContextVar.get() when it cannot find the value for the context variable in the current Context object. If ContextVar.default was not provided, a LookupError is raised. The reason why this is simpler for regular variables is because they have a dedicated syntax. Instead of writing

    print(globals()['some_variable'])

we simply write

    print(some_variable)

Similarly for context variables, we could have written:

    print(copy_context()[var])

But instead we use ContextVar.get():

    print(var.get())

If we had syntax support for context variables, it would be like this:

    context var
    print(var)  # Looks up 'var' in the current context

Although I very much doubt that we would *ever* want to have a dedicated syntax for context variables (they are very niche and are only needed in some very special cases), I hope that this line of thinking would help to clear the waters. Yury From chris.jerdonek at gmail.com Thu Jan 11 02:39:36 2018 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Wed, 10 Jan 2018 23:39:36 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On Wed, Jan 10, 2018 at 10:58 PM, Yury Selivanov wrote: > On Thu, Jan 11, 2018 at 10:35 AM, Chris Jerdonek > wrote: >> On Mon, Jan 8, 2018 at 11:02 PM, Nathaniel Smith wrote: >>> Right now, the set of valid states for a ContextVar are: it can hold >>> any Python object, or it can be undefined. However, the only way it >>> can be in the "undefined" state is in a new Context where it has never >>> had a value; once it leaves the undefined state, it can never return >>> to it. >> >> I know Yury responded to one aspect of this point later on in the >> thread.
However, in terms of describing the possible states without >> reference to the internal Context mappings, IIUC, wouldn't it be more >> accurate to view a ContextVar as a stack of values rather than just >> the binary "holding an object or not"? This is to reflect the number >> of times set() has been called (and so the number of times reset() >> would need to be called to "empty" the ContextVar). > > > But why do you want to think of ContextVar as a stack of values? Or > as something that is holding even one value? I was primarily responding to Nathaniel's comment about how to describe or talk about the state and not necessarily advocating that view. But to your question, like it or not, I think the API encourages this way of thinking because the get() method is on the ContextVar itself, and so it's the ContextVar which is doing the looking up rather than just fulfilling the role of a key name. The API brings to mind other containers and things holding values like dict.get(), queue.get(), BytesIO.getvalue(), and container type's object.__getitem__(), etc. So I think one will need to be prepared for many or most users having this conception with the current API. (I think renaming to something like ContextVar.lookup() or even ContextVar.value() would go a long way towards dispelling that, but Guido said earlier in the thread that he likes the shorter name.) > Do Python variables hold/envelope objects they reference? No, they > don't. They are simple names and are used to lookup objects in > globals/locals dicts. ContextVars are very similar! They are *keys* > in Context objects?that is it. Python variables don't hold the objects. But the analogy also doesn't quite match because variables also don't have get() methods. It's Python which is doing the looking up in that case rather than the variable itself. With ContextVars, it's serving both roles of name and thing doing the looking up. 
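For readers following along, the behaviour under discussion can be sketched with the draft PEP 567 API itself (the variable names here are illustrative):

```python
from contextvars import ContextVar, copy_context

# With a default, get() never raises:
timeout = ContextVar("timeout", default=30)
assert timeout.get() == 30

# Without a default, get() raises LookupError until set() is called:
request = ContextVar("request")
try:
    request.get()
except LookupError:
    print("request is undefined in the current context")

token = request.set("GET /index")
assert request.get() == "GET /index"

# The var itself is only a key; the current Context maps it to the value:
ctx = copy_context()
assert ctx[request] == "GET /index"

# reset() returns the var to the undefined state:
request.reset(token)
```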
This is one reason why I suggested several days ago that I thought something like contextvars.get(key) (where key is a ContextVar) would be a less confusing API. That way the ContextVar / ContextKey(?) would only be acting as a key and not also be responsible for doing the lookup and knowing about what is containing it. --Chris > > ContextVar.default is returned by ContextVar.get() when it cannot find > the value for the context variable in the current Context object. If > ContextVar.default was not provided, a LookupError is raised. > > The reason why this is simpler for regular variables is because they > have a dedicated syntax. Instead of writing > > print(globals()['some_variable']) > > we simply write > > print(some_variable) > > Similarly for context variables, we could have written: > > print(copy_context()[var]) > > But instead we use a ContextVar.get(): > > print(var.get()) > > If we had a syntax support for context variables, it would be like this: > > context var > print(var) # Lookups 'var' in the current context > > Although I very much doubt that we would *ever* want to have a > dedicated syntax for context variables (they are very niche and are > only needed in some very special cases), I hope that this line of > thinking would help to clear the waters. > > Yury From p.f.moore at gmail.com Thu Jan 11 03:55:14 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Jan 2018 08:55:14 +0000 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: On 11 January 2018 at 07:39, Chris Jerdonek wrote: > But to your question, like it or not, I think the API encourages this > way of thinking because the get() method is on the ContextVar itself, > and so it's the ContextVar which is doing the looking up rather than > just fulfilling the role of a key name. The API brings to mind other > containers and things holding values like dict.get(), queue.get(), > BytesIO.getvalue(), and container type's object.__getitem__(), etc. 
So > I think one will need to be prepared for many or most users having > this conception with the current API. (I think renaming to something > like ContextVar.lookup() or even ContextVar.value() would go a long > way towards dispelling that, but Guido said earlier in the thread that > he likes the shorter name.) I can only offer anecdotal evidence, but I am perfectly comfortable with seeing ContextVars as names (variables) that refer to values, and not as containers of values. The "Var" part of the class name is what makes that link in my head, I think, and it is a stronger association for me than the idea that get() implies a container. So I'm 100% fine, personally, with ContextVars as names that refer to values (that you access via the get() method), and with the Context as a hidden lookup table for those values (corresponding to globals()/locals()). I'm also OK on that same basis with ContextVars having an "unset" state, and with it being unusual/difficult to get back to the unset state once you've set a value. tl;dr If you think of a ContextVar as a "variable" or "name", the current design makes sense (at least to me). Paul From status at bugs.python.org Fri Jan 12 12:09:54 2018 From: status at bugs.python.org (Python tracker) Date: Fri, 12 Jan 2018 18:09:54 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20180112170954.17D5211A93D@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2018-01-05 - 2018-01-12) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 6369 ( -8) closed 37921 (+50) total 44290 (+42) Open issues with patches: 2470 Issues opened (38) ================== #20104: expose posix_spawn(p) https://bugs.python.org/issue20104 reopened by gregory.p.smith #23749: asyncio missing wrap_socket (starttls) https://bugs.python.org/issue23749 reopened by vstinner #29137: Fix fpectl-induced ABI breakage https://bugs.python.org/issue29137 reopened by doko #31804: multiprocessing calls flush on sys.stdout at exit even if it i https://bugs.python.org/issue31804 reopened by Pox TheGreat #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 reopened by serhiy.storchaka #32206: Run modules with pdb https://bugs.python.org/issue32206 reopened by ncoghlan #32498: urllib.parse.unquote raises incorrect errormessage when string https://bugs.python.org/issue32498 opened by stein-k #32500: PySequence_Length() raises TypeError on dict type https://bugs.python.org/issue32500 opened by mgorny #32501: Documentation for dir([object]) https://bugs.python.org/issue32501 opened by Vladislavs Burakovs #32502: uuid1() broken on macos high sierra https://bugs.python.org/issue32502 opened by anpetral #32503: Avoid creating small frames in pickle protocol 4 https://bugs.python.org/issue32503 opened by serhiy.storchaka #32505: dataclasses: make field() with no annotation an error https://bugs.python.org/issue32505 opened by eric.smith #32506: dataclasses: no need for OrderedDict now that dict guarantees https://bugs.python.org/issue32506 opened by eric.smith #32509: doctest syntax ambiguity between continuation line and ellipsi https://bugs.python.org/issue32509 opened by jason.coombs #32511: Thread primitives do not report the OS-level error on failure https://bugs.python.org/issue32511 opened by zwol #32512: Add an option to profile to run library module as a script https://bugs.python.org/issue32512 opened by mariocj89 #32513: dataclasses: make it easier to use 
user-supplied special metho https://bugs.python.org/issue32513 opened by eric.smith #32514: 0x80070002 - The system cannot find the file specified https://bugs.python.org/issue32514 opened by Beatty0111 #32515: Add an option to trace to run module as a script https://bugs.python.org/issue32515 opened by mariocj89 #32516: Add a shared library mechanism for win32 https://bugs.python.org/issue32516 opened by xoviat #32517: test_read_pty_output() of test_asyncio hangs on macOS 10.13.2 https://bugs.python.org/issue32517 opened by vstinner #32519: venv API docs - symlinks default incorrect https://bugs.python.org/issue32519 opened by jason.coombs #32521: NIS module fails to build due to the removal of interfaces rel https://bugs.python.org/issue32521 opened by cstratak #32522: Add precision argument to datetime.now https://bugs.python.org/issue32522 opened by p-ganssle #32523: inconsistent spacing in changelog.html https://bugs.python.org/issue32523 opened by ned.deily #32524: Python 2.7 leaks a packages __init__.py module object on Synta https://bugs.python.org/issue32524 opened by Segev Finer #32526: Closing async generator while it is running does not raise an https://bugs.python.org/issue32526 opened by achimnol #32528: Change base class for futures.CancelledError https://bugs.python.org/issue32528 opened by socketpair #32529: Call readinto in shutil.copyfileobj https://bugs.python.org/issue32529 opened by YoSTEALTH #32530: How ro fix the chm encoding in Non western european codepage(c https://bugs.python.org/issue32530 opened by Nim #32531: gdb.execute can not put string value. 
https://bugs.python.org/issue32531 opened by callmekohei #32532: improve sys.settrace and sys.setprofile documentation https://bugs.python.org/issue32532 opened by xiang.zhang #32533: SSLSocket read/write thread-unsafety https://bugs.python.org/issue32533 opened by Alexey Baldin #32534: Speed-up list.insert https://bugs.python.org/issue32534 opened by jeethu #32536: ast and tokenize disagree about line number https://bugs.python.org/issue32536 opened by Mark.Shannon #32537: multiprocessing.pool.Pool.starmap_async - wrong parameter name https://bugs.python.org/issue32537 opened by devnull #32538: Multiprocessing Manager on 3D list - no change of the list pos https://bugs.python.org/issue32538 opened by John_81 #32539: os.listdir(...) on deep path on windows in python2.7 fails wit https://bugs.python.org/issue32539 opened by Anthony Sottile Most recent 15 issues with no replies (15) ========================================== #32539: os.listdir(...) on deep path on windows in python2.7 fails wit https://bugs.python.org/issue32539 #32538: Multiprocessing Manager on 3D list - no change of the list pos https://bugs.python.org/issue32538 #32537: multiprocessing.pool.Pool.starmap_async - wrong parameter name https://bugs.python.org/issue32537 #32536: ast and tokenize disagree about line number https://bugs.python.org/issue32536 #32532: improve sys.settrace and sys.setprofile documentation https://bugs.python.org/issue32532 #32531: gdb.execute can not put string value. 
https://bugs.python.org/issue32531 #32530: How ro fix the chm encoding in Non western european codepage(c https://bugs.python.org/issue32530 #32524: Python 2.7 leaks a packages __init__.py module object on Synta https://bugs.python.org/issue32524 #32519: venv API docs - symlinks default incorrect https://bugs.python.org/issue32519 #32515: Add an option to trace to run module as a script https://bugs.python.org/issue32515 #32513: dataclasses: make it easier to use user-supplied special metho https://bugs.python.org/issue32513 #32511: Thread primitives do not report the OS-level error on failure https://bugs.python.org/issue32511 #32505: dataclasses: make field() with no annotation an error https://bugs.python.org/issue32505 #32496: lib2to3 fails to parse a ** of a conditional expression https://bugs.python.org/issue32496 #32494: interface to gdbm_count https://bugs.python.org/issue32494 Most recent 15 issues waiting for review (15) ============================================= #32534: Speed-up list.insert https://bugs.python.org/issue32534 #32529: Call readinto in shutil.copyfileobj https://bugs.python.org/issue32529 #32524: Python 2.7 leaks a packages __init__.py module object on Synta https://bugs.python.org/issue32524 #32521: NIS module fails to build due to the removal of interfaces rel https://bugs.python.org/issue32521 #32515: Add an option to trace to run module as a script https://bugs.python.org/issue32515 #32512: Add an option to profile to run library module as a script https://bugs.python.org/issue32512 #32506: dataclasses: no need for OrderedDict now that dict guarantees https://bugs.python.org/issue32506 #32503: Avoid creating small frames in pickle protocol 4 https://bugs.python.org/issue32503 #32497: datetime.strptime creates tz naive object from value containin https://bugs.python.org/issue32497 #32492: C Fast path for namedtuple's property/itemgetter pair https://bugs.python.org/issue32492 #32477: Move jumps optimization from the peepholer to the 
compiler https://bugs.python.org/issue32477 #32476: Add concat functionality to ElementTree xpath find https://bugs.python.org/issue32476 #32475: Add ability to query number of buffered bytes available on buf https://bugs.python.org/issue32475 #32471: Add an UML class diagram to the collections.abc module documen https://bugs.python.org/issue32471 #32469: Generator and coroutine repr could be more helpful https://bugs.python.org/issue32469 Top 10 most discussed issues (10) ================================= #32522: Add precision argument to datetime.now https://bugs.python.org/issue32522 19 msgs #32509: doctest syntax ambiguity between continuation line and ellipsi https://bugs.python.org/issue32509 11 msgs #31993: pickle.dump allocates unnecessary temporary bytes / str https://bugs.python.org/issue31993 10 msgs #32521: NIS module fails to build due to the removal of interfaces rel https://bugs.python.org/issue32521 10 msgs #32500: PySequence_Length() raises TypeError on dict type https://bugs.python.org/issue32500 8 msgs #9325: Add an option to pdb/trace/profile to run library module as a https://bugs.python.org/issue9325 7 msgs #32346: Speed up slot lookup for class creation https://bugs.python.org/issue32346 7 msgs #32528: Change base class for futures.CancelledError https://bugs.python.org/issue32528 7 msgs #31804: multiprocessing calls flush on sys.stdout at exit even if it i https://bugs.python.org/issue31804 6 msgs #32248: Port importlib_resources (module and ABC) to Python 3.7 https://bugs.python.org/issue32248 6 msgs Issues closed (51) ================== #2518: smtpd.py to handle huge email https://bugs.python.org/issue2518 closed by barry #3802: smtpd.py __getaddr insufficient handling https://bugs.python.org/issue3802 closed by barry #8503: smtpd SMTPServer does not allow domain filtering https://bugs.python.org/issue8503 closed by barry #11260: smtpd-as-a-script feature should be documented and should use https://bugs.python.org/issue11260 closed by 
barry #12815: Coverage of smtpd.py https://bugs.python.org/issue12815 closed by barry #12816: smtpd uses library outside of the standard libraries https://bugs.python.org/issue12816 closed by barry #14261: Cleanup in smtpd module https://bugs.python.org/issue14261 closed by barry #16462: smtpd should return greeting https://bugs.python.org/issue16462 closed by barry #17607: missed peephole optimization (unnecessary jump at end of funct https://bugs.python.org/issue17607 closed by serhiy.storchaka #19678: smtpd.py: channel should be passed to process_message https://bugs.python.org/issue19678 closed by barry #19679: smtpd.py (SMTPChannel): implement enhanced status codes https://bugs.python.org/issue19679 closed by barry #19806: smtpd crashes when a multi-byte UTF-8 sequence is split betwee https://bugs.python.org/issue19806 closed by barry #22071: Remove long-time deprecated attributes from smtpd https://bugs.python.org/issue22071 closed by barry #22158: RFC 6531 (SMTPUTF8) support in smtpd.PureProxy https://bugs.python.org/issue22158 closed by barry #22159: smtpd.PureProxy and smtpd.DebuggingServer do not work with dec https://bugs.python.org/issue22159 closed by barry #24340: co_stacksize estimate can be highly off https://bugs.python.org/issue24340 closed by serhiy.storchaka #25546: python 3.5 installation problem; Error 0x80240017: Failed to e https://bugs.python.org/issue25546 closed by steve.dower #25954: Python 3.5.1 installer fails on Windows 7 https://bugs.python.org/issue25954 closed by steve.dower #26036: Unnecessary arguments on smtpd.SMTPServer https://bugs.python.org/issue26036 closed by barry #28416: defining persistent_id in _pickle.Pickler subclass causes refe https://bugs.python.org/issue28416 closed by serhiy.storchaka #28747: Expose SSL_CTX_set_cert_verify_callback https://bugs.python.org/issue28747 closed by steve.dower #28888: Installer fails when newer version of CRT is pending installat https://bugs.python.org/issue28888 closed by 
steve.dower #29409: Implement PEP 529 for io.FileIO https://bugs.python.org/issue29409 closed by steve.dower #30121: Windows: subprocess debug assertion on failure to execute the https://bugs.python.org/issue30121 closed by Segev Finer #30579: Allow traceback objects to be instantiated/mutated/annotated https://bugs.python.org/issue30579 closed by ncoghlan #31113: Stack overflow with large program https://bugs.python.org/issue31113 closed by serhiy.storchaka #31145: PriorityQueue.put() fails with TypeError if priority_number in https://bugs.python.org/issue31145 closed by rhettinger #31975: Add a default filter for DeprecationWarning in __main__ https://bugs.python.org/issue31975 closed by ncoghlan #32267: strptime misparses offsets with microsecond format https://bugs.python.org/issue32267 closed by belopolsky #32278: Allow dataclasses.make_dataclass() to omit type information https://bugs.python.org/issue32278 closed by eric.smith #32279: Pass keyword arguments from dataclasses.make_dataclass() to @d https://bugs.python.org/issue32279 closed by eric.smith #32320: Add default value support to collections.namedtuple() https://bugs.python.org/issue32320 closed by rhettinger #32427: Rename and expose dataclasses._MISSING https://bugs.python.org/issue32427 closed by eric.smith #32428: dataclasses: make it an error to have initialized non-fields i https://bugs.python.org/issue32428 closed by eric.smith #32448: subscriptable https://bugs.python.org/issue32448 closed by terry.reedy #32449: MappingView must inherit from Collection instead of Sized https://bugs.python.org/issue32449 closed by rhettinger #32450: non-descriptive variable name https://bugs.python.org/issue32450 closed by benjamin.peterson #32467: dict_values isn't considered a Collection nor a Container https://bugs.python.org/issue32467 closed by rhettinger #32473: Readibility of ABCMeta._dump_registry() https://bugs.python.org/issue32473 closed by inada.naoki #32486: tail optimization for 'yield from' 
https://bugs.python.org/issue32486 closed by benjamin.peterson #32493: UUID Module - FreeBSD build failure https://bugs.python.org/issue32493 closed by pitrou #32499: Add dataclasses.is_dataclass(obj) https://bugs.python.org/issue32499 closed by eric.smith #32504: Wheel failed include data_files https://bugs.python.org/issue32504 closed by benjamin.peterson #32507: Change Windows install to applocal UCRT https://bugs.python.org/issue32507 closed by steve.dower #32508: Problem while reading back from a list of lists https://bugs.python.org/issue32508 closed by steven.daprano #32510: Broken comparisons (probably caused by wrong caching of values https://bugs.python.org/issue32510 closed by steven.daprano #32518: HTTPServer can't deal with persistent connection properly https://bugs.python.org/issue32518 closed by r.david.murray #32520: error writing to file in binary mode - python 3.6.3 https://bugs.python.org/issue32520 closed by serhiy.storchaka #32525: Empty tuples are not optimized as constant expressions https://bugs.python.org/issue32525 closed by brett.cannon #32527: windows 7 python 3.6 : after some security updates, import ibm https://bugs.python.org/issue32527 closed by christian.heimes #32535: msvcr140.dll has been replaced with vcruntime140.dll https://bugs.python.org/issue32535 closed by steve.dower From guido at python.org Fri Jan 12 14:38:55 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 12 Jan 2018 11:38:55 -0800 Subject: [Python-Dev] PEP 567 pre v3 In-Reply-To: References: Message-ID: I think we've debated the design of ContextVar and default values enough. Regardless of the philosophical questions around "what is a variable", I agree that Yury's current design is right, so let's move on to other topics (if there are any). 
On Thu, Jan 11, 2018 at 12:55 AM, Paul Moore wrote: > On 11 January 2018 at 07:39, Chris Jerdonek > wrote: > > But to your question, like it or not, I think the API encourages this > > way of thinking because the get() method is on the ContextVar itself, > > and so it's the ContextVar which is doing the looking up rather than > > just fulfilling the role of a key name. The API brings to mind other > > containers and things holding values like dict.get(), queue.get(), > > BytesIO.getvalue(), and container type's object.__getitem__(), etc. So > > I think one will need to be prepared for many or most users having > > this conception with the current API. (I think renaming to something > > like ContextVar.lookup() or even ContextVar.value() would go a long > > way towards dispelling that, but Guido said earlier in the thread that > > he likes the shorter name.) > > I can only offer anecdotal evidence, but I am perfectly comfortable > with seeing ContextVars as names (variables) that refer to values, and > not as containers of values. The "Var" part of the class name is what > makes that link in my head, I think, and it is a stronger association > for me than the idea that get() implies a container. > > So I'm 100% fine, personally, with ContextVars as names that refer to > values (that you access via the get() method), and with the Context as > a hidden lookup table for those values (corresponding to > globals()/locals()). I'm also OK on that same basis with ContextVars > having an "unset" state, and with it being unusual/difficult to get > back to the unset state once you've set a value. > > tl;dr If you think of a ContextVar as a "variable" or "name", the > current design makes sense (at least to me). 
> > Paul -- --Guido van Rossum (python.org/~guido) From christian at python.org Sat Jan 13 07:54:33 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 13:54:33 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 Message-ID: Hi, I'm still working on an ssl module PEP for 3.7 [1], but it's probably not going to be finished before the beta 1 deadline. I have a bunch of fixes and improvements for the ssl module in queue, most of them require OpenSSL 1.0.2 features. The features are also present and working properly since LibreSSL 2.5.3. If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land a bunch of useful goodies like proper hostname verification [2], a proper fix for IP addresses in the SNI TLS header [3], PEP 543 compatible Certificate and PrivateKey types (support loading certs and keys from file and memory) [4], and simplified cipher suite configuration [5]. I can finally clean up _ssl.c during the beta phase, too. OpenSSL 1.0.1 has been out of support since December 2016, 0.9.8 since 2015. These versions haven't received any security updates for more than a year! All major Linux and BSD distributions have at least 1.0.2 [6]. The only relevant exception is Ubuntu 14.04 LTS, because Travis CI is running 14.04. PR 3562 [7] contains a PoC to compile a custom build of OpenSSL on Travis. Builds are cached.
Regards, Christian [1] https://github.com/tiran/peps/blob/sslmodule37/pep-9999.txt [2] https://bugs.python.org/issue31399 [3] https://bugs.python.org/issue32185 [4] https://bugs.python.org/issue18369 [5] https://bugs.python.org/issue31429 [6] https://gist.github.com/tiran/c5409bbd60a5f082f654d967add8cc79 [7] https://github.com/python/cpython/pull/3462 From solipsis at pitrou.net Sat Jan 13 08:23:19 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Jan 2018 14:23:19 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 References: Message-ID: <20180113142319.38f77f39@fsol> On Sat, 13 Jan 2018 13:54:33 +0100 Christian Heimes wrote: > > If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land > bunch of useful goodies like proper hostname verification [2], proper > fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate > and PrivateKey types (support loading certs and keys from file and > memory) [4], and simplified cipher suite configuration [5]. I can > finally clean up _ssl.c during the beta phase, too. Given the annoyance of supporting old OpenSSL versions, I'd say +1 to this. We'll have to deal with the complaints of users of Debian oldstable, CentOS 6 and RHEL 6, though. Regards Antoine. 
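For anyone checking where a given interpreter build stands relative to the proposed floor, the ssl module already exposes the linked OpenSSL version (a quick sketch; note that LibreSSL builds report LibreSSL's own numbering here):

```python
import ssl

# Human-readable string and 5-tuple forms of the linked OpenSSL version:
print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.0.2n  7 Dec 2017"
print(ssl.OPENSSL_VERSION_INFO)  # e.g. (1, 0, 2, 14, 15)

# The proposed minimum is OpenSSL 1.0.2; plain tuple comparison works:
if ssl.OPENSSL_VERSION_INFO < (1, 0, 2):
    print("this build is older than the proposed minimum")
```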
From christian at python.org Sat Jan 13 09:49:21 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 15:49:21 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180113142319.38f77f39@fsol> References: <20180113142319.38f77f39@fsol> Message-ID: On 2018-01-13 14:23, Antoine Pitrou wrote: > On Sat, 13 Jan 2018 13:54:33 +0100 > Christian Heimes wrote: >> >> If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land >> bunch of useful goodies like proper hostname verification [2], proper >> fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate >> and PrivateKey types (support loading certs and keys from file and >> memory) [4], and simplified cipher suite configuration [5]. I can >> finally clean up _ssl.c during the beta phase, too. > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > this. > > We'll have to deal with the complaints of users of Debian oldstable, > CentOS 6 and RHEL 6, though. It's more of an issue for Travis CI. The Python 3.7-dev target won't have a functional ssl module. Travis either has to update their build base to 16.04, provide a custom build of OpenSSL, or all packages have to use a container. 
[1] Christian [1] https://github.com/travis-ci/travis-ci/issues/5821#issuecomment-214452987 From solipsis at pitrou.net Sat Jan 13 10:15:37 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Jan 2018 16:15:37 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 References: <20180113142319.38f77f39@fsol> Message-ID: <20180113161537.15ad0fd2@fsol> On Sat, 13 Jan 2018 15:49:21 +0100 Christian Heimes wrote: > On 2018-01-13 14:23, Antoine Pitrou wrote: > > On Sat, 13 Jan 2018 13:54:33 +0100 > > Christian Heimes wrote: > >> > >> If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land > >> bunch of useful goodies like proper hostname verification [2], proper > >> fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate > >> and PrivateKey types (support loading certs and keys from file and > >> memory) [4], and simplified cipher suite configuration [5]. I can > >> finally clean up _ssl.c during the beta phase, too. > > > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > > this. > > > > We'll have to deal with the complaints of users of Debian oldstable, > > CentOS 6 and RHEL 6, though. > > It's more of an issue for Travis CI. The Python 3.7-dev target won't > have a functional ssl module. Travis either has to update their build > base to 16.04, provide a custom build of OpenSSL, or all packages have > to use a container. [1] That's Travis-CI's problem. And hopefully they'll migrate to Ubuntu 16.04 soon (it's almost 2 years old...). Regards Antoine. From christian at python.org Sat Jan 13 12:06:16 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 18:06:16 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? Message-ID: Hi, PEP 370 [1] was my first PEP that got accepted. I created it exactly one decade and two days ago for Python 2.6 and 3.0. Back then we didn't have virtual environment support in Python. 
Ian Bicking had just started to create the virtualenv project a couple of months earlier. Fast forward 10 years... Nowadays Python has venv in the standard library. The user-specific site-packages directory is no longer that useful. I would even say it's causing more trouble than it's worth. For example, it's common for system scripts to use a "#!/usr/bin/python3" shebang without the -s or -I option. I propose to deprecate the feature and remove it in Python 4.0. Regards, Christian [1] https://www.python.org/dev/peps/pep-0370/ From random832 at fastmail.com Sat Jan 13 13:04:47 2018 From: random832 at fastmail.com (Random832) Date: Sat, 13 Jan 2018 13:04:47 -0500 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> On Sat, Jan 13, 2018, at 12:06, Christian Heimes wrote: > Hi, > > PEP 370 [1] was my first PEP that got accepted. I created it exactly one > decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > virtual environment support in Python. Ian Bicking had just started to > create the virtualenv project a couple of months earlier. > > Fast forward 10 years... > > Nowadays Python has venv in the standard library. The user-specific > site-packages directory is no longer that useful. I would even say it's > causing more trouble than it's worth. For example it's common for system > script to use "#!/usr/bin/python3" shebang without -s or -I option. > > I propose to deprecate the feature and remove it in Python 4.0. > > Where would pip install --user put packages, and how would one run scripts that require those packages? Right now these things Just Work; I've never had to learn how to use virtual environments. From christian at python.org Sat Jan 13 13:18:41 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 19:18:41 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory?
In-Reply-To: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> Message-ID: On 2018-01-13 19:04, Random832 wrote: > On Sat, Jan 13, 2018, at 12:06, Christian Heimes wrote: >> Hi, >> >> PEP 370 [1] was my first PEP that got accepted. I created it exactly one >> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have >> virtual environment support in Python. Ian Bicking had just started to >> create the virtualenv project a couple of months earlier. >> >> Fast forward 10 years... >> >> Nowadays Python has venv in the standard library. The user-specific >> site-packages directory is no longer that useful. I would even say it's >> causing more trouble than it's worth. For example it's common for system >> script to use "#!/usr/bin/python3" shebang without -s or -I option. >> >> I propose to deprecate the feature and remove it in Python 4.0. > > Where would pip install --user put packages, and how would one run scripts that require those packages? Right now these things Just Work; I've never had to learn how to use virtual environments. I see two options: 1) "pip install --user" is no longer supported. You have to learn how to use virtual envs. It's really easy: "python3 -m venv path; path/bin/pip install package". 2) "pip install --user" automatically creates or uses a custom virtual env (~/.pip/virtualenv-$VERSION/) and links entry points to ~/.local/bin. Christian From solipsis at pitrou.net Sat Jan 13 13:57:38 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Jan 2018 19:57:38 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? 
References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> Message-ID: <20180113195738.657c446c@fsol> On Sat, 13 Jan 2018 19:18:41 +0100 Christian Heimes wrote: > On 2018-01-13 19:04, Random832 wrote: > > On Sat, Jan 13, 2018, at 12:06, Christian Heimes wrote: > >> Hi, > >> > >> PEP 370 [1] was my first PEP that got accepted. I created it exactly one > >> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > >> virtual environment support in Python. Ian Bicking had just started to > >> create the virtualenv project a couple of months earlier. > >> > >> Fast forward 10 years... > >> > >> Nowadays Python has venv in the standard library. The user-specific > >> site-packages directory is no longer that useful. I would even say it's > >> causing more trouble than it's worth. For example it's common for system > >> script to use "#!/usr/bin/python3" shebang without -s or -I option. > >> > >> I propose to deprecate the feature and remove it in Python 4.0. > > > > Where would pip install --user put packages, and how would one run scripts that require those packages? Right now these things Just Work; I've never had to learn how to use virtual environments. > > I see two option: > > 1) "pip install --user" is no longer supported. You have to learn how to > use virtual envs. It's really easy: "python3 -m venv path; path/bin/pip > install package". > 2) "pip install --user" automatically creates or uses a custom virtual > (~/.pip/virtualenv-$VERSION/) and links entry points to ~/.local/bin. Option 2 doesn't work, since the installed package then isn't known to the system Python. I'm not sure user site-packages adds a lot of complexity to Python, so I don't think it's worth breaking some people's usage. Regards Antoine. 
From mhroncok at redhat.com Sat Jan 13 14:12:26 2018 From: mhroncok at redhat.com (=?UTF-8?Q?Miro_Hron=c4=8dok?=) Date: Sat, 13 Jan 2018 20:12:26 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: On 13.1.2018 18:06, Christian Heimes wrote: > Nowadays Python has venv in the standard library. The user-specific > site-packages directory is no longer that useful. I would even say it's > causing more trouble than it's worth. For example it's common for system > script to use "#!/usr/bin/python3" shebang without -s or -I option. While I consider venvs easy and cool, this just moves the barrier for the users a little bit higher. We (Fedora Python SIG) are fighting users that run `sudo pip install` all the time (because the Interwebz are full of such instructions). The users might be willing to listen to "please, don't use pip with sudo, use --user instead". However, if you tell them "learn how to use a venv", they'll just stick with sudo. > I propose to deprecate the feature and remove it in Python 4.0. We would very much like to see --user the default rather than having it removed. -- Miro Hron?ok -- Phone: +420777974800 IRC: mhroncok From pmiscml at gmail.com Sat Jan 13 14:34:00 2018 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Sat, 13 Jan 2018 21:34:00 +0200 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> Message-ID: <20180113213400.7bc07f2b@x230> Hello, On Sat, 13 Jan 2018 19:18:41 +0100 Christian Heimes wrote: [] > >> Nowadays Python has venv in the standard library. The user-specific > >> site-packages directory is no longer that useful. I would even say > >> it's causing more trouble than it's worth. For example it's common > >> for system script to use "#!/usr/bin/python3" shebang without -s > >> or -I option. 
> >> > >> I propose to deprecate the feature and remove it in Python 4.0. > > > > Where would pip install --user put packages, and how would one run > > scripts that require those packages? Right now these things Just > > Work; I've never had to learn how to use virtual environments. > > I see two option: > > 1) "pip install --user" is no longer supported. You have to learn how > to use virtual envs. It's really easy: "python3 -m venv path; > path/bin/pip install package". Easy for whom? C, Ruby, JavaScript users, random grandmas and grandpas? Please don't make innocent hate Python, and don't make developers who chose Python to develop software hate it for impossibility to provide decent user support for their software. -- Best regards, Paul mailto:pmiscml at gmail.com From phd at phdru.name Sat Jan 13 14:08:09 2018 From: phd at phdru.name (Oleg Broytman) Date: Sat, 13 Jan 2018 20:08:09 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: <20180113190809.GA9475@phdru.name> Hi! On Sat, Jan 13, 2018 at 06:06:16PM +0100, Christian Heimes wrote: > Hi, > > PEP 370 [1] was my first PEP that got accepted. I created it exactly one > decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > virtual environment support in Python. Ian Bicking had just started to > create the virtualenv project a couple of months earlier. > > Fast forward 10 years... > > Nowadays Python has venv in the standard library. The user-specific > site-packages directory is no longer that useful. Can I disagree? > I would even say it's > causing more trouble than it's worth. For example it's common for system > script to use "#!/usr/bin/python3" shebang without -s or -I option. System scripts are run under user root which seldom has user-specific site-packages so why worry? > I propose to deprecate the feature and remove it in Python 4.0. > > Regards, > Christian > > [1] https://www.python.org/dev/peps/pep-0370/ Oleg. 
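[For reference, the per-user directory being debated here is managed by the stdlib site module; the exact paths printed below vary by platform, user and Python version:]

```python
import site
import sys

# True when user site-packages may be used; False under -s or -I,
# None when it is disabled for security reasons.
print(site.ENABLE_USER_SITE)

# The PEP 370 locations, e.g. ~/.local and
# ~/.local/lib/python3.X/site-packages on Linux.
print(site.getuserbase())
print(site.getusersitepackages())

# The directory only ends up on sys.path when enabled and existing.
print(site.getusersitepackages() in sys.path)
```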
-- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From phd at phdru.name Sat Jan 13 14:57:48 2018 From: phd at phdru.name (Oleg Broytman) Date: Sat, 13 Jan 2018 20:57:48 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> Message-ID: <20180113195748.GA12656@phdru.name> On Sat, Jan 13, 2018 at 07:18:41PM +0100, Christian Heimes wrote: > On 2018-01-13 19:04, Random832 wrote: > > On Sat, Jan 13, 2018, at 12:06, Christian Heimes wrote: > >> Hi, > >> > >> PEP 370 [1] was my first PEP that got accepted. I created it exactly one > >> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > >> virtual environment support in Python. Ian Bicking had just started to > >> create the virtualenv project a couple of months earlier. > >> > >> Fast forward 10 years... > >> > >> Nowadays Python has venv in the standard library. The user-specific > >> site-packages directory is no longer that useful. I would even say it's > >> causing more trouble than it's worth. For example it's common for system > >> script to use "#!/usr/bin/python3" shebang without -s or -I option. > >> > >> I propose to deprecate the feature and remove it in Python 4.0. > > > > Where would pip install --user put packages, and how would one run scripts that require those packages? Right now these things Just Work; I've never had to learn how to use virtual environments. > > I see two option: > > 1) "pip install --user" is no longer supported. You have to learn how to > use virtual envs. It's really easy: "python3 -m venv path; path/bin/pip > install package". I've learned virtual envs and use them every day. I also use ``pip install --user``. Different use cases. Virtual envs are for development, ``pip install --user`` for deployment. > Christian Oleg. 
-- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From christian at python.org Sat Jan 13 15:00:07 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 21:00:07 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: <20180113190809.GA9475@phdru.name> References: <20180113190809.GA9475@phdru.name> Message-ID: On 2018-01-13 20:08, Oleg Broytman wrote: > Hi! > > On Sat, Jan 13, 2018 at 06:06:16PM +0100, Christian Heimes wrote: >> Hi, >> >> PEP 370 [1] was my first PEP that got accepted. I created it exactly one >> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have >> virtual environment support in Python. Ian Bicking had just started to >> create the virtualenv project a couple of months earlier. >> >> Fast forward 10 years... >> >> Nowadays Python has venv in the standard library. The user-specific >> site-packages directory is no longer that useful. > > Can I disagree? > >> I would even say it's >> causing more trouble than it's worth. For example it's common for system >> script to use "#!/usr/bin/python3" shebang without -s or -I option. > > System scripts are run under user root which seldom has user-specific > site-packages so why worry? You'd be surprised how many tools and programs are using Python these days. You can easily break important user programs by installing a package with --user. Christian From brett at python.org Sat Jan 13 15:01:02 2018 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jan 2018 20:01:02 +0000 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: On Sat, Jan 13, 2018, 11:13 Miro Hron?ok, wrote: > On 13.1.2018 18:06, Christian Heimes wrote: > > Nowadays Python has venv in the standard library. The user-specific > > site-packages directory is no longer that useful. I would even say it's > > causing more trouble than it's worth. 
For example it's common for system > > script to use "#!/usr/bin/python3" shebang without -s or -I option. > > While I consider venvs easy and cool, this just moves the barrier for > the users a little bit higher. We (Fedora Python SIG) are fighting users > that run `sudo pip install` all the time (because the Interwebz are full > of such instructions). The users might be willing to listen to "please, > don't use pip with sudo, use --user instead". However, if you tell them > "learn how to use a venv", they'll just stick with sudo. > > > I propose to deprecate the feature and remove it in Python 4.0. > > We would very much like to see --user the default rather than having it > removed. > I concur with Miro. On VS Code we rely on people installing the e.g. linter of their choosing. We have been moving people away from sudo installs to user installs to minimize polluting the system Python (and we are doing what we can to promote venvs by using them automatically when present, but we can only do so much). Basically the only way I see it being reasonable to drop user installs is if we move entirely to venvs for installs and I think that's probably too radical to work. -Brett > > -- > Miro Hron?ok > -- > Phone: +420777974800 > IRC: mhroncok > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Sat Jan 13 15:02:59 2018 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jan 2018 20:02:59 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180113142319.38f77f39@fsol> References: <20180113142319.38f77f39@fsol> Message-ID: On Sat, Jan 13, 2018, 05:24 Antoine Pitrou, wrote: > On Sat, 13 Jan 2018 13:54:33 +0100 > Christian Heimes wrote: > > > > If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land > > bunch of useful goodies like proper hostname verification [2], proper > > fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate > > and PrivateKey types (support loading certs and keys from file and > > memory) [4], and simplified cipher suite configuration [5]. I can > > finally clean up _ssl.c during the beta phase, too. > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > this. > +1 from me as well for the improved security. -Brett > We'll have to deal with the complaints of users of Debian oldstable, > CentOS 6 and RHEL 6, though. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Sat Jan 13 15:06:19 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 21:06:19 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? 
In-Reply-To: <20180113195738.657c446c@fsol> References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> <20180113195738.657c446c@fsol> Message-ID: On 2018-01-13 19:57, Antoine Pitrou wrote: > On Sat, 13 Jan 2018 19:18:41 +0100 > Christian Heimes wrote: >> On 2018-01-13 19:04, Random832 wrote: >>> On Sat, Jan 13, 2018, at 12:06, Christian Heimes wrote: >>>> Hi, >>>> >>>> PEP 370 [1] was my first PEP that got accepted. I created it exactly one >>>> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have >>>> virtual environment support in Python. Ian Bicking had just started to >>>> create the virtualenv project a couple of months earlier. >>>> >>>> Fast forward 10 years... >>>> >>>> Nowadays Python has venv in the standard library. The user-specific >>>> site-packages directory is no longer that useful. I would even say it's >>>> causing more trouble than it's worth. For example it's common for system >>>> script to use "#!/usr/bin/python3" shebang without -s or -I option. >>>> >>>> I propose to deprecate the feature and remove it in Python 4.0. >>> >>> Where would pip install --user put packages, and how would one run scripts that require those packages? Right now these things Just Work; I've never had to learn how to use virtual environments. >> >> I see two option: >> >> 1) "pip install --user" is no longer supported. You have to learn how to >> use virtual envs. It's really easy: "python3 -m venv path; path/bin/pip >> install package". >> 2) "pip install --user" automatically creates or uses a custom virtual >> (~/.pip/virtualenv-$VERSION/) and links entry points to ~/.local/bin. > > Option 2 doesn't work, since the installed package then isn't known to > the system Python. I see that as a benefit. User installed packages will no longer be able to break system-wide programs. These days a lot of packages are using setuptools' entry points to create console scripts. 
Entry point have no option to create a console script with -s or -I flag. On my system, only 40 out of 360 scripts in /usr/bin have -s or -I. From phd at phdru.name Sat Jan 13 15:07:47 2018 From: phd at phdru.name (Oleg Broytman) Date: Sat, 13 Jan 2018 21:07:47 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <20180113190809.GA9475@phdru.name> Message-ID: <20180113200747.GA13965@phdru.name> On Sat, Jan 13, 2018 at 09:00:07PM +0100, Christian Heimes wrote: > On 2018-01-13 20:08, Oleg Broytman wrote: > > Hi! > > > > On Sat, Jan 13, 2018 at 06:06:16PM +0100, Christian Heimes wrote: > >> Hi, > >> > >> PEP 370 [1] was my first PEP that got accepted. I created it exactly one > >> decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > >> virtual environment support in Python. Ian Bicking had just started to > >> create the virtualenv project a couple of months earlier. > >> > >> Fast forward 10 years... > >> > >> Nowadays Python has venv in the standard library. The user-specific > >> site-packages directory is no longer that useful. > > > > Can I disagree? > > > >> I would even say it's > >> causing more trouble than it's worth. For example it's common for system > >> script to use "#!/usr/bin/python3" shebang without -s or -I option. > > > > System scripts are run under user root which seldom has user-specific > > site-packages so why worry? > > You'd be surprised how many tools and programs are using Python these > days. Certainly not. I wrote or helped to write a lot of them myself. :-) > You can easily break important user programs by installing a > package with --user. Under root? Probably. Then don't do that -- or do not allow system Python to import user-specific site-packages (i.e., distinguish system Python from normal Python running under user root). But for a non-root user user-specific site-packages is quite a convenient thing. Please don't remove it. > Christian Oleg. 
-- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From solipsis at pitrou.net Sat Jan 13 15:12:43 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Jan 2018 21:12:43 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> <20180113195738.657c446c@fsol> Message-ID: <20180113211243.05b11ee1@fsol> On Sat, 13 Jan 2018 21:06:19 +0100 Christian Heimes wrote: > >> > >> I see two option: > >> > >> 1) "pip install --user" is no longer supported. You have to learn how to > >> use virtual envs. It's really easy: "python3 -m venv path; path/bin/pip > >> install package". > >> 2) "pip install --user" automatically creates or uses a custom virtual > >> (~/.pip/virtualenv-$VERSION/) and links entry points to ~/.local/bin. > > > > Option 2 doesn't work, since the installed package then isn't known to > > the system Python. > > I see that as a benefit. User installed packages will no longer be able > to break system-wide programs. I don't know if it's better or worse. I'm just saying it's not a migration path from user site-packages since it doesn't have the same semantics. > These days a lot of packages are using setuptools' entry points to > create console scripts. Entry point have no option to create a console > script with -s or -I flag. Perhaps that should be fixed. Regards Antoine. From steve.dower at python.org Sat Jan 13 15:55:24 2018 From: steve.dower at python.org (Steve Dower) Date: Sun, 14 Jan 2018 07:55:24 +1100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: I?m generally +1, though I don?t see an easy migration path. Moving to a node.js project model might be feasible, but I suspect our real solution will end up having to be ensuring use of -s where it?s needed. 
Top-posted from my Windows phone From: Christian Heimes Sent: Sunday, January 14, 2018 4:09 To: python-dev at python.org Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? Hi, PEP 370 [1] was my first PEP that got accepted. I created it exactly one decade and two days ago for Python 2.6 and 3.0. Back then we didn't have virtual environment support in Python. Ian Bicking had just started to create the virtualenv project a couple of months earlier. Fast forward 10 years... Nowadays Python has venv in the standard library. The user-specific site-packages directory is no longer that useful. I would even say it's causing more trouble than it's worth. For example it's common for system script to use "#!/usr/bin/python3" shebang without -s or -I option. I propose to deprecate the feature and remove it in Python 4.0. Regards, Christian [1] https://www.python.org/dev/peps/pep-0370/ _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruno.m.voss at gmail.com Sat Jan 13 15:58:40 2018 From: bruno.m.voss at gmail.com (=?UTF-8?Q?Bruno_Maximilian_Vo=C3=9F?=) Date: Sat, 13 Jan 2018 21:58:40 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? Message-ID: Hi all, I'm a user who decided to read the mailing list and respond to argue against and maybe stop things I don't think will help users as much as you think. I think deprecating user site-packages is such a change. That venvs exist doesn't mean most or even many people use them, even though I'm sure you and everyone you know does. I couldn't find usage statistics on short notice, do you have any? 
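[Hard adoption numbers are indeed scarce; what a program can do is detect at runtime whether it is running inside a venv. A minimal check using the PEP 405 semantics (the older, pre-PEP 405 virtualenv tool instead set a sys.real_prefix attribute):]

```python
import sys

def in_virtualenv() -> bool:
    # Inside a PEP 405 venv, sys.prefix points at the environment,
    # while sys.base_prefix still points at the base installation.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```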
As far as I understand it site-packages is the default location for all packages that are installed and as long as the packages that are installed aren't causing a conflict, there is no problem. I've never had a problem in six years of using python. So I'd really like a more detailed break down of the troubles the existence of site packages causes and of the up- and downsides removing it would bring, before anything is decided. Another point I'd like to raise is that even though it's a good idea to isolate programs with venvs for stability, development and sometimes maybe security, idk, the idea of having a shared pool of packages has benefits too, namely code reuse and their availability, for example when you're offline. Also I don't really know why you would think it's necessary to force every user into venvs. The applications and users that do need their environments are free to use one after all. Regards, Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Sat Jan 13 17:45:07 2018 From: christian at python.org (Christian Heimes) Date: Sat, 13 Jan 2018 23:45:07 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> Message-ID: <1ff41667-efc2-db94-5476-27e6e326c673@python.org> On 2018-01-13 21:02, Brett Cannon wrote: > +1 from me as well for the improved security. Thanks, Brett! How should we handle CPython's Travis CI tests? The 14.04 boxes have OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer 16.04. We could either move to container-based testing with a 16.04 container, which would give us 1.0.2 Or we could compile our own copy of OpenSSL with my multissl builder and use some rpath magic. In order to test all new features, Ubuntu doesn't cut it. Even current snapshot of Ubuntu doesn't contain OpenSSL 1.1. Debian Stretch or Fedora would do the trick, though. 
Maybe Barry's work on official test container could leveraged testing? Regards, Christian From steve at pearwood.info Sat Jan 13 19:18:38 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 14 Jan 2018 11:18:38 +1100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <20180113190809.GA9475@phdru.name> Message-ID: <20180114001837.GB1982@ando.pearwood.info> On Sat, Jan 13, 2018 at 09:00:07PM +0100, Christian Heimes wrote: > You'd be surprised how many tools and programs are using Python these > days. You can easily break important user programs by installing a > package with --user. Or by writing a Python script and innocently giving it the same name as a system module. On the tutor@ and python-list@ mailing lists, it is very frequent for users to accidently break Python by accidently shadowing an installed module, e.g. installing xlrd and then calling their own script "xlrd.py". But I've never come across somebody breaking anything by installing a package with --user. I presume it must happen, but I would be surprised if it happens often enough to justify removing what is otherwise a useful piece of functionality. 
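[The shadowing trap Steven describes is easy to reproduce in a sandbox; in this sketch json stands in for xlrd, and everything runs inside a temporary directory:]

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # An innocent user file that happens to share a stdlib module's name.
    with open(os.path.join(d, "json.py"), "w") as f:
        f.write("value = 'shadowed'\n")
    script = os.path.join(d, "main.py")
    with open(script, "w") as f:
        f.write("import json; print(getattr(json, 'value', 'stdlib'))\n")
    # sys.path[0] is the script's own directory, so the local json.py
    # wins over the standard library's json package.
    result = subprocess.run([sys.executable, script],
                            capture_output=True, text=True, check=True)

print(result.stdout.strip())  # shadowed
```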
-- Steve From steve at pearwood.info Sat Jan 13 19:03:22 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 14 Jan 2018 11:03:22 +1100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180113142319.38f77f39@fsol> References: <20180113142319.38f77f39@fsol> Message-ID: <20180114000321.GA1982@ando.pearwood.info> On Sat, Jan 13, 2018 at 02:23:19PM +0100, Antoine Pitrou wrote: > On Sat, 13 Jan 2018 13:54:33 +0100 > Christian Heimes wrote: > > > > If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land > > bunch of useful goodies like proper hostname verification [2], proper > > fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate > > and PrivateKey types (support loading certs and keys from file and > > memory) [4], and simplified cipher suite configuration [5]. I can > > finally clean up _ssl.c during the beta phase, too. > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > this. > > We'll have to deal with the complaints of users of Debian oldstable, > CentOS 6 and RHEL 6, though. It will probably be more work for Christian, but is it reasonable to keep support for the older versions of OpenSSL, but make the useful goodies conditional on a newer version? -- Steve From tritium-list at sdamon.com Sat Jan 13 20:27:09 2018 From: tritium-list at sdamon.com (Alex Walters) Date: Sat, 13 Jan 2018 20:27:09 -0500 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: <018901d38cd6$cc75ab70$65610250$@sdamon.com> I would suggest throwing this to -ideas, rather than just keeping it in -dev as there is a much wider community of users and usecases in -ideas. ... and -ideas will shoot it down because user installs are too useful. It is also my understanding that it is the desire of PyPA to eventually have pip default to --user when run outside of a virtualenv (to mitigate people running sudo pip). 
Eliminating user installs would change the situation from one where, with pip install --user, one can recoverably break their system to one, with sudo pip install, one can un-recoverably break their system. > -----Original Message----- > From: Python-Dev [mailto:python-dev-bounces+tritium- > list=sdamon.com at python.org] On Behalf Of Christian Heimes > Sent: Saturday, January 13, 2018 12:06 PM > To: python-dev at python.org > Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? > > Hi, > > PEP 370 [1] was my first PEP that got accepted. I created it exactly one > decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > virtual environment support in Python. Ian Bicking had just started to > create the virtualenv project a couple of months earlier. > > Fast forward 10 years... > > Nowadays Python has venv in the standard library. The user-specific > site-packages directory is no longer that useful. I would even say it's > causing more trouble than it's worth. For example it's common for system > script to use "#!/usr/bin/python3" shebang without -s or -I option. > > I propose to deprecate the feature and remove it in Python 4.0. > > Regards, > Christian > > [1] https://www.python.org/dev/peps/pep-0370/ > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/tritium- > list%40sdamon.com From greg at krypto.org Sat Jan 13 21:02:59 2018 From: greg at krypto.org (Gregory P. 
Smith) Date: Sun, 14 Jan 2018 02:02:59 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180114000321.GA1982@ando.pearwood.info> References: <20180113142319.38f77f39@fsol> <20180114000321.GA1982@ando.pearwood.info> Message-ID: On Sat, Jan 13, 2018 at 4:34 PM Steven D'Aprano wrote: > On Sat, Jan 13, 2018 at 02:23:19PM +0100, Antoine Pitrou wrote: > > On Sat, 13 Jan 2018 13:54:33 +0100 > > Christian Heimes wrote: > > > > > > If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can > land > > > bunch of useful goodies like proper hostname verification [2], proper > > > fix for IP address in SNI TLS header [3], PEP 543 compatible > Certificate > > > and PrivateKey types (support loading certs and keys from file and > > > memory) [4], and simplified cipher suite configuration [5]. I can > > > finally clean up _ssl.c during the beta phase, too. > > > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > > this. > > > > We'll have to deal with the complaints of users of Debian oldstable, > > CentOS 6 and RHEL 6, though. > > It will probably be more work for Christian, but is it reasonable to > keep support for the older versions of OpenSSL, but make the useful > goodies conditional on a newer version? > I don't think it is worth spending our limited engineering time supporting an unsupported library version. Leave that burden to stale distro maintainers who continue to choose dangerously stale software versions if they ironically want to use something as modern as 3.7 on top of an ancient set of libraries. +1 from me for requiring OpenSSL >= 1.0.2 in Python 3.7. -gps -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Sat Jan 13 21:16:53 2018 From: brett at python.org (Brett Cannon) Date: Sun, 14 Jan 2018 02:16:53 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <1ff41667-efc2-db94-5476-27e6e326c673@python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: On Sat, Jan 13, 2018, 14:45 Christian Heimes, wrote: > On 2018-01-13 21:02, Brett Cannon wrote: > > +1 from me as well for the improved security. > > Thanks, Brett! > > How should we handle CPython's Travis CI tests? The 14.04 boxes have > OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer 16.04. > We could either move to container-based testing with a 16.04 container, > which would give us 1.0.2 Or we could compile our own copy of OpenSSL > with my multissl builder and use some rpath magic. > > In order to test all new features, Ubuntu doesn't cut it. Even current > snapshot of Ubuntu doesn't contain OpenSSL 1.1. Debian Stretch or Fedora > would do the trick, though. > > Maybe Barry's work on official test container could leveraged testing? > My guess is we either move to containers on Travis, see if we can manually install -- through apt or something -- a newer version of OpenSSL, or we look at alternative CI options. -Brett > Regards, > Christian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jan 13 21:28:16 2018 From: brett at python.org (Brett Cannon) Date: Sun, 14 Jan 2018 02:28:16 +0000 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: <018901d38cd6$cc75ab70$65610250$@sdamon.com> References: <018901d38cd6$cc75ab70$65610250$@sdamon.com> Message-ID: On Sat, Jan 13, 2018, 17:27 Alex Walters, wrote: > I would suggest throwing this to -ideas, rather than just keeping it in > -dev > as there is a much wider community of users and usecases in -ideas. > > ... 
and -ideas will shoot it down because user installs are too useful. It > is also my understanding that it is the desire of PyPA to eventually have > pip default to --user when run outside of a virtualenv (to mitigate people > running sudo pip). Eliminating user installs would change the situation > from one where, with pip install --user, one can recoverably break their > system to one where, with sudo pip install, one can unrecoverably break their > system. > Which suggests that doing away with user installs would mean making it so interpreter installs don't break your system, which would require Linux distros and macOS to not publicly expose their system Python installs so people can't accidentally break them. I think if that were to occur then you could consider dropping user installs as people would be installing into their own interpreter. But until then I think the attraction/lack of knowledge for people will be high enough that user installs will be the most pragmatic solution we have. -Brett > > -----Original Message----- > > From: Python-Dev [mailto:python-dev-bounces+tritium- > > list=sdamon.com at python.org] On Behalf Of Christian Heimes > > Sent: Saturday, January 13, 2018 12:06 PM > > To: python-dev at python.org > > Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? > > > > Hi, > > > > PEP 370 [1] was my first PEP that got accepted. I created it exactly one > > decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > > virtual environment support in Python. Ian Bicking had just started to > > create the virtualenv project a couple of months earlier. > > > > Fast forward 10 years... > > > > Nowadays Python has venv in the standard library. The user-specific > > site-packages directory is no longer that useful. I would even say it's > > causing more trouble than it's worth. For example it's common for system > > scripts to use "#!/usr/bin/python3" shebang without -s or -I option.
> > > > I propose to deprecate the feature and remove it in Python 4.0. > > > > Regards, > > Christian > > > > [1] https://www.python.org/dev/peps/pep-0370/ > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/tritium- > > list%40sdamon.com > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sat Jan 13 22:14:07 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 13 Jan 2018 19:14:07 -0800 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: <459E75C3-5034-466F-8A37-DF5CF5D25CC0@python.org> On Jan 13, 2018, at 11:12, Miro Hrončok wrote: > > We would very much like to see --user the default rather than having it removed. Very much +1. In Debian/Ubuntu we've carried patches to do exactly that for years, and I think our users have been very happy about it. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From barry at python.org Sat Jan 13 22:16:47 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 13 Jan 2018 19:16:47 -0800 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> <20180113195738.657c446c@fsol> Message-ID: On Jan 13, 2018, at 12:06, Christian Heimes wrote: > These days a lot of packages are using setuptools' entry points to > create console scripts.
Entry points have no option to create a console > script with -s or -I flag. On my system, only 40 out of 360 scripts in > /usr/bin have -s or -I. -I should be the default for system scripts; it's not on Debian/Ubuntu though unfortunately. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From paul at ganssle.io Sat Jan 13 21:48:49 2018 From: paul at ganssle.io (Paul G) Date: Sun, 14 Jan 2018 02:48:49 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: One thing to note is that if getting Travis working with Python 3.7 is a pain, a huge number of libraries on PyPI probably just won't test against Python 3.7, which is not a great situation to be in. It's probably worth contacting Travis to give them a heads-up and see how likely it is that they'll be able to support Python 3.7 if it requires a newer version of these libraries. On January 14, 2018 2:16:53 AM UTC, Brett Cannon wrote: >On Sat, Jan 13, 2018, 14:45 Christian Heimes, >wrote: > >> On 2018-01-13 21:02, Brett Cannon wrote: >> > +1 from me as well for the improved security. >> >> Thanks, Brett! >> >> How should we handle CPython's Travis CI tests? The 14.04 boxes have >> OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer >16.04. >> We could either move to container-based testing with a 16.04 >container, >> which would give us 1.0.2. Or we could compile our own copy of OpenSSL >> with my multissl builder and use some rpath magic. >> >> In order to test all new features, Ubuntu doesn't cut it. Even >current >> snapshot of Ubuntu doesn't contain OpenSSL 1.1. Debian Stretch or >Fedora >> would do the trick, though. >> >> Maybe Barry's work on official test container could be leveraged for testing?
>> > >My guess is we either move to containers on Travis, see if we can >manually >install -- through apt or something -- a newer version of OpenSSL, or >we >look at alternative CI options. > >-Brett > > >> Regards, >> Christian >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Sat Jan 13 22:57:57 2018 From: phd at phdru.name (Oleg Broytman) Date: Sun, 14 Jan 2018 04:57:57 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: <20180114035757.GA11694@phdru.name> On Sun, Jan 14, 2018 at 02:16:53AM +0000, Brett Cannon wrote: > My guess is we either move to containers on Travis, see if we can manually > install -- through apt or something -- a newer version of OpenSSL OpenSSL 1.0.2 cannot be installed with apt on Trusty but I think it can be compiled from sources. > -Brett Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From a.badger at gmail.com Sat Jan 13 23:00:36 2018 From: a.badger at gmail.com (Toshio Kuratomi) Date: Sat, 13 Jan 2018 20:00:36 -0800 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: On Jan 13, 2018 9:08 AM, "Christian Heimes" wrote: Hi, PEP 370 [1] was my first PEP that got accepted. I created it exactly one decade and two days ago for Python 2.6 and 3.0. I didn't know I had you to thank for this! Thanks Christian! This is one of the best features of the Python software packaging ecosystem! I almost exclusively install into user site packages these days. It lets me pull in the latest version of software when I want it for everyday use and revert to what my system shipped with if the updates break something. It's let me install libraries ported to python3 before my distro got around to packaging the updates.
It's let me perform an install when I want to test my packages as my users might be using it without touching the system dirs. It's been a godsend! Fast forward 10 years... Nowadays Python has venv in the standard library. The user-specific site-packages directory is no longer that useful. I would even say it's causing more trouble than it's worth. For example it's common for system scripts to use "#!/usr/bin/python3" shebang without -s or -I option. With great power comes great responsibility... Sure, installing something into user site packages can break system scripts. But it can also fix them. I can recall breaking system scripts twice by installing something into user site packages (both times, the tracebacks rapidly led me to the reason that the scripts were failing). As a counterpoint, I can recall *fixing* problems in system scripts by installing newer libraries into site packages twice in the last two months. (I've also fixed system software by installing into user and then modifying that version but I do that less frequently... Perhaps only a couple times a year...) Removing the user site packages also doesn't prevent people from making local changes that break system scripts (removing the pre-configuration of user site packages does not stop honoring usage of PYTHONPATH); it only makes people work a little harder to place their overridden packages into a location that Python will find, and leads to nonstandard locations for these overrides. This will make it harder for people to troubleshoot the problems other people may be having: instead of asking "do you have any libraries in .local in your tracebacks?" as an easy first troubleshooting step, we'll be back to trying to determine which directories are official for the user's install and then finding any local directories that their site may have defined for overrides.... I propose to deprecate the feature and remove it in Python 4.0.
Although I don't like the idea of system scripts adding -s and -I because it prevents me from fixing them for my own use by installing just a newer or modified library into user site packages (similar to how C programs can use overridden libraries via LD_LIBRARY_PATH), it seems that if you want to prevent users from choosing to use their own libraries with system scripts, the right thing to do is to get changes to allow adding those flags to setuptools and distutils. Those flags will do a much more thorough job of preventing this usage than removing user site packages can. -Toshio -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanjekyejoannah at gmail.com Sun Jan 14 03:10:33 2018 From: nanjekyejoannah at gmail.com (joannah nanjekye) Date: Sun, 14 Jan 2018 11:10:33 +0300 Subject: [Python-Dev] Why wont duplicate methods be flagged as error (syntax or anything suitable error) Message-ID: Hello, Apparently when you implement two methods with the same name: def sub(x, y): print(x - y) def sub(x, y): print(x - y) Even with type hints: def sub(x: int, y: int) -> int: return x - y def sub(x: float, y: float) -> float: return 8 If you are from another background, you will expect the syntax with type hints to act like method overloading, but instead the last implementation is always called. If this is the required behavior, then just flag any duplicate method implementations as syntax errors. Is this sort of method name duplication important in any cases? Not aimed at criticism, just to understand. -- Joannah Nanjekye +256776468213 F : Nanjekye Captain Joannah S : joannah.nanjekye T : @Captain_Joannah SO : joannah *"You think you know when you learn, are more sure when you can write, even more when you can teach, but certain when you can program." Alan J. Perlis* -------------- next part -------------- An HTML attachment was scrubbed...
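[Editorial note: the behaviour asked about here is easy to reproduce in a few lines; each `def` simply rebinds the name, with or without annotations, so the last definition wins:]

```python
def sub(x: int, y: int) -> int:
    return x - y

def sub(x: float, y: float) -> float:  # rebinds the name `sub`
    return 8

# Annotations do not create overloads; the second definition replaced the first.
print(sub(10, 3))  # prints 8, not 7
```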
URL: From rosuav at gmail.com Sun Jan 14 03:20:57 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 14 Jan 2018 19:20:57 +1100 Subject: [Python-Dev] Why wont duplicate methods be flagged as error (syntax or anything suitable error) In-Reply-To: References: Message-ID: On Sun, Jan 14, 2018 at 7:10 PM, joannah nanjekye wrote: > Hello, > > Apparently when you implement two methods with the same name: > > def sub(x, y): > print(x -y) > > def sub(x, y): > print(x -y) > > Even with type hints. > > def sub(x: int, y:int) -> int: > return x - y > > def sub(x: float, y:float) -> float: > return 8 > > If you are from another background, you will expect the syntax with type > hints to act as though method overloading but instead last implementation is > always called. If this is the required behavior,then just flag any duplicate > method implementations as syntax errors. > > Is this sort of method name duplication important in any cases? > > Not aimed at criticism, just to understand. This is not an error in the language for the same reason that any other assignment isn't an error: x = 5 x = 6 But you will find that a number of linters will flag this as a warning. You can configure your editor to constantly run a linter and show you when something's wrong. 
ChrisA From tjreedy at udel.edu Sun Jan 14 03:23:01 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 14 Jan 2018 03:23:01 -0500 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> Message-ID: On 1/13/2018 3:02 PM, Brett Cannon wrote: > > > On Sat, Jan 13, 2018, 05:24 Antoine Pitrou, > wrote: > > On Sat, 13 Jan 2018 13:54:33 +0100 > Christian Heimes > wrote: > > > > If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I > can land > > bunch of useful goodies like proper hostname verification [2], proper > > fix for IP address in SNI TLS header [3], PEP 543 compatible > Certificate > > and PrivateKey types (support loading certs and keys from file and > > memory) [4], and simplified cipher suite configuration [5]. I can > > finally clean up _ssl.c during the beta phase, too. > > Given the annoyance of supporting old OpenSSL versions, I'd say +1 to > this. > > > +1 from me as well for the improved security. FWIW, given that I will not be doing any of the work, +1 from me also. 
-- Terry Jan Reedy From christian at python.org Sun Jan 14 03:57:51 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 09:57:51 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180114000321.GA1982@ando.pearwood.info> References: <20180113142319.38f77f39@fsol> <20180114000321.GA1982@ando.pearwood.info> Message-ID: On 2018-01-14 01:03, Steven D'Aprano wrote: > On Sat, Jan 13, 2018 at 02:23:19PM +0100, Antoine Pitrou wrote: >> On Sat, 13 Jan 2018 13:54:33 +0100 >> Christian Heimes wrote: >>> >>> If we agree to drop support for OpenSSL 0.9.8 and 1.0.1, then I can land >>> bunch of useful goodies like proper hostname verification [2], proper >>> fix for IP address in SNI TLS header [3], PEP 543 compatible Certificate >>> and PrivateKey types (support loading certs and keys from file and >>> memory) [4], and simplified cipher suite configuration [5]. I can >>> finally clean up _ssl.c during the beta phase, too. >> >> Given the annoyance of supporting old OpenSSL versions, I'd say +1 to >> this. >> >> We'll have to deal with the complaints of users of Debian oldstable, >> CentOS 6 and RHEL 6, though. > > It will probably be more work for Christian, but is it reasonable to > keep support for the older versions of OpenSSL, but make the useful > goodies conditional on a newer version? It's much more than just goodies. For example the X509_VERIFY_PARAM_set1_host() API fixes a whole lot of issues with ssl.match_hostname(). The feature is OpenSSL 1.0.2+ and baked into the certificate validation system. I don't see a realistic way to perform the same task with 1.0.1. 
Christian From christian at python.org Sun Jan 14 04:03:14 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 10:03:14 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: On 2018-01-14 03:48, Paul G wrote: > One thing to note is that if getting Travis working with Python 3.7 is a > pain, a huge number of libraries on PyPI probably just won't test > against Python 3.7, which is not a great situation to be in. > > It's probably worth contacting Travis to give them a heads-up and see > how likely it is that they'll be able to support Python 3.7 if it > requires a newer version of these libraries. Unless my proposal is rejected, I'll contact Travis CI tomorrow. Christian From steve at pearwood.info Sun Jan 14 04:09:16 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 14 Jan 2018 20:09:16 +1100 Subject: [Python-Dev] Why wont duplicate methods be flagged as error (syntax or anything suitable error) In-Reply-To: References: Message-ID: <20180114090916.GD1982@ando.pearwood.info> On Sun, Jan 14, 2018 at 11:10:33AM +0300, joannah nanjekye wrote: [...] > Is this sort of method name duplication important in any cases? Yes. For example, inside a class: class MyClass(object): @property def something(self): pass @something.setter def something(self, value): pass > Not aimed at criticism, just to understand. This mailing list is not really for general discussions about Python, this is for the development of the Python interpreter. For questions about how Python works and the reasons for design choices such as allowing duplicate function or method definitions, please try Python-List at python.org, or a forum such as Reddit /r/python. Thank you.
-- Steve From levkivskyi at gmail.com Sun Jan 14 04:04:06 2018 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Sun, 14 Jan 2018 09:04:06 +0000 Subject: [Python-Dev] Why wont duplicate methods be flagged as error (syntax or anything suitable error) In-Reply-To: References: Message-ID: On 14 January 2018 at 08:20, Chris Angelico wrote: > On Sun, Jan 14, 2018 at 7:10 PM, joannah nanjekye > wrote: > > Hello, > > > > Apparently when you implement two methods with the same name: > > > > def sub(x, y): > > print(x -y) > > > > def sub(x, y): > > print(x -y) > > > > Even with type hints. > > > > def sub(x: int, y:int) -> int: > > return x - y > > > > def sub(x: float, y:float) -> float: > > return 8 > > > > If you are from another background, you will expect the syntax with type > > hints to act as though method overloading but instead last > implementation is > > always called. If this is the required behavior,then just flag any > duplicate > > method implementations as syntax errors. > > > > Is this sort of method name duplication important in any cases? > > > > Not aimed at criticism, just to understand. > > This is not an error in the language for the same reason that any > other assignment isn't an error: > > x = 5 > x = 6 > > But you will find that a number of linters will flag this as a > warning. You can configure your editor to constantly run a linter and > show you when something's wrong. > For example mypy (and probably also PyCharm) warn about variable/function/class re-definition. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... 
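[Editorial note: as the replies point out, rebinding is silent at runtime and only linters or mypy flag it. Overloading has to be requested explicitly; a minimal sketch using `functools.singledispatch` from the standard library (`typing.overload` also exists but is honoured only by static checkers, not at runtime):]

```python
import functools

# Runtime dispatch on the first argument's type is opt-in via singledispatch:
@functools.singledispatch
def describe(value):
    return "something else"

@describe.register(int)
def _(value):
    return "an int"

@describe.register(float)
def _(value):
    return "a float"

print(describe(3))     # an int
print(describe(2.5))   # a float
print(describe("hi"))  # something else
```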
URL: From solipsis at pitrou.net Sun Jan 14 05:17:14 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 14 Jan 2018 11:17:14 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <1ff41667-efc2-db94-5476-27e6e326c673@python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: <20180114111714.0f23ff54@fsol> On Sat, 13 Jan 2018 23:45:07 +0100 Christian Heimes wrote: > On 2018-01-13 21:02, Brett Cannon wrote: > > +1 from me as well for the improved security. > > Thanks, Brett! > > How should we handle CPython's Travis CI tests? The 14.04 boxes have > OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer 16.04. > We could either move to container-based testing with a 16.04 container, > which would give us 1.0.2 Or we could compile our own copy of OpenSSL > with my multissl builder and use some rpath magic. I don't think you need some rpath magic, just set LD_LIBRARY_PATH to the right value. Regards Antoine. From christian at python.org Sun Jan 14 07:01:02 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 13:01:02 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <20180114111714.0f23ff54@fsol> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <20180114111714.0f23ff54@fsol> Message-ID: On 2018-01-14 11:17, Antoine Pitrou wrote: > On Sat, 13 Jan 2018 23:45:07 +0100 > Christian Heimes wrote: >> On 2018-01-13 21:02, Brett Cannon wrote: >>> +1 from me as well for the improved security. >> >> Thanks, Brett! >> >> How should we handle CPython's Travis CI tests? The 14.04 boxes have >> OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer 16.04. 
>> We could either move to container-based testing with a 16.04 container, >> which would give us 1.0.2 Or we could compile our own copy of OpenSSL >> with my multissl builder and use some rpath magic. > > I don't think you need some rpath magic, just set LD_LIBRARY_PATH to > the right value. I prefer LD_RUN_PATH because it adds rpath to the ELF header of shared libraries and binaries. https://github.com/python/cpython/pull/5180 Christian From matt at vazor.com Sun Jan 14 03:24:46 2018 From: matt at vazor.com (Matt Billenstein) Date: Sun, 14 Jan 2018 08:24:46 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> Message-ID: <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple has deprecated (and no longer ships the header files for) the version shipped with recent versions of osx. Perhaps this is an option to support the various flavors of Linux as well? m On Sun, Jan 14, 2018 at 02:48:49AM +0000, Paul G wrote: > One thing to note is that if getting Travis working with Python 3.7 is a > pain, a huge number of libraries on PyPI probably just won't test against > Python 3.7, which is not a great situation to be in. > > It's probably worth contacting Travis to give them a head's up and see how > likely it is that they'll be able to support Python 3.7 if it requires a > newer version of these libraries. > > On January 14, 2018 2:16:53 AM UTC, Brett Cannon wrote: > > On Sat, Jan 13, 2018, 14:45 Christian Heimes, > wrote: > > On 2018-01-13 21:02, Brett Cannon wrote: > > +1 from me as well for the improved security. > > Thanks, Brett! > > How should we handle CPython's Travis CI tests? The 14.04 boxes have > OpenSSL 1.0.1. To the best of my knowledge, Travis doesn't offer > 16.04. 
> We could either move to container-based testing with a 16.04 > container, > which would give us 1.0.2 Or we could compile our own copy of OpenSSL > with my multissl builder and use some rpath magic. > > In order to test all new features, Ubuntu doesn't cut it. Even current > snapshot of Ubuntu doesn't contain OpenSSL 1.1. Debian Stretch or > Fedora > would do the trick, though. > > Maybe Barry's work on official test container could leveraged testing? > > My guess is we either move to containers on Travis, see if we can > manually install -- through apt or something -- a newer version of > OpenSSL, or we look at alternative CI options. > -Brett > > Regards, > Christian > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/matt%40vazor.com -- Matt Billenstein matt at vazor.com http://www.vazor.com/ From christian at python.org Sun Jan 14 08:39:54 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 14:39:54 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> Message-ID: On 2018-01-14 09:24, Matt Billenstein wrote: > Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple has > deprecated (and no longer ships the header files for) the version shipped with > recent versions of osx. > > Perhaps this is an option to support the various flavors of Linux as well? AFAK Apple has decided to compile and statically link CPython's ssl with an ancient, customized LibreSSL version. 
Cory posted [1] a couple of months ago: "Can confirm: macOS 10.13 will ship a Python linked against LibreSSL 2.2.7. A downside: this continues to use the TEA, meaning you cannot choose to distrust the system roots with it." For TEA, see Hynek's blog post [2]. I'm not going to add OpenSSL sources or builds to CPython. We just got rid of copies of libffi and other 3rd party dependencies. Crypto and TLS libraries are much, MUCH more complicated to handle than libffi. It's a constantly moving target for attacks. Vendors and distributions also have different opinions about trust stores and policies. Let's keep build dependencies a downstream and vendor problem. Christian [1] https://twitter.com/lukasaoz/status/872085966579802112 [2] https://hynek.me/articles/apple-openssl-verification-surprises/ From christian at python.org Sun Jan 14 08:42:53 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 14:42:53 +0100 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: <1515866687.2802555.1234289904.3207C4EF@webmail.messagingengine.com> <20180113195738.657c446c@fsol> Message-ID: On 2018-01-14 04:16, Barry Warsaw wrote: > On Jan 13, 2018, at 12:06, Christian Heimes wrote: > >> These days a lot of packages are using setuptools' entry points to >> create console scripts. Entry points have no option to create a console >> script with -s or -I flag. On my system, only 40 out of 360 scripts in >> /usr/bin have -s or -I. > > -I should be the default for system scripts; it's not on Debian/Ubuntu though unfortunately. Same for most Fedora scripts. :/ Christian From ncoghlan at gmail.com Sun Jan 14 09:39:08 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 15 Jan 2018 00:39:08 +1000 Subject: [Python-Dev] Deprecate PEP 370 Per user site-packages directory? In-Reply-To: References: Message-ID: On 14 January 2018 at 03:06, Christian Heimes wrote: > Hi, > > PEP 370 [1] was my first PEP that got accepted.
I created it exactly one > decade and two days ago for Python 2.6 and 3.0. Back then we didn't have > virtual environment support in Python. Ian Bicking had just started to > create the virtualenv project a couple of months earlier. > > Fast forward 10 years... > > Nowadays Python has venv in the standard library. The user-specific > site-packages directory is no longer that useful. I would even say it's > causing more trouble than it's worth. For example it's common for system > script to use "#!/usr/bin/python3" shebang without -s or -I option. > > I propose to deprecate the feature and remove it in Python 4.0. Given that we're working towards making user site-packages the default install location in pip, removing that feature at the interpreter level would be rather counterproductive :) Virtual environments are a useful tool if you're a professional developer, but for a lot of folks just doing ad hoc personal scripting, they're more complexity than is needed, and the simple "my packages" vs "the system's package" split is a better option. (It's also rather useful for bootstrapping tools like "pipsi" - "pip install --user pipsi", then "pipsi install" the other commands you want access to). Cheers, Nick. 
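[Editorial note: the per-user directory that PEP 370 added can be inspected through the `site` module; a small sketch — the commented paths are typical Linux defaults, not guaranteed:]

```python
import site
import sys

# PEP 370 locations for this interpreter:
print(site.getuserbase())          # typically ~/.local on Linux
print(site.getusersitepackages())  # typically ~/.local/lib/pythonX.Y/site-packages

# False when the interpreter runs with -s or -I (or in some virtual
# environments), i.e. when the user site-packages directory is ignored:
print(site.ENABLE_USER_SITE)
print(sys.flags.no_user_site)      # nonzero under -s / -I
```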
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From nad at python.org Sun Jan 14 10:54:57 2018 From: nad at python.org (Ned Deily) Date: Sun, 14 Jan 2018 10:54:57 -0500 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> Message-ID: <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> On Jan 14, 2018, at 08:39, Christian Heimes wrote: > On 2018-01-14 09:24, Matt Billenstein wrote: >> Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple has >> deprecated (and no longer ships the header files for) the version shipped with >> recent versions of osx. >> >> Perhaps this is an option to support the various flavors of Linux as well? > > AFAK Apple has decided to compile and statically link CPython's ssl with > an ancient, customized LibreSSL version. Cory posted [1] a couple of > months ago I think you're conflating some things here. Apple has not yet shipped a version of Python 3 with macOS so the fact that Apple now links their version of Python2.7 with a "private" copy of LibreSSL is irrelevant. (It's private in the sense that they don't ship the header files for it; the shared libs are there just for the use of the open source products they ship with macOS that don't yet use the macOS native crypto APIs, products like Python and Perl.) What Matt is likely thinking of is the Python 3 versions provided by the python.org macOS binary installers where we do build and link with our own 1.0.2 (and soon 1.1.0 for 3.7) versions of OpenSSL. Currently, the OpenSSL (and several other third-party libs such as libxz which is not shipped by Apple) are built as part of the installer build script in the Mac section of the source repo. 
I would like to refactor and generalize that so those third-party libs could optionally be used for non-installer builds as well. But, in any case, we don't have much choice for the installer builds until such time as cPython has support for the Apple-provided crypto APIs. > I'm not going to add OpenSSL sources or builds to CPython. We just got > rid of copies of libffi and other 3rd party dependencies. Crypto and TLS > libraries are much, MUCH more complicated to handle than libffi. It's a > constant moving targets of attacks. Vendors and distributions also have > different opinions about trust store and policies. > > Let's keep build dependencies a downstream and vendor problem. That's not always an option, unfortunately. -- Ned Deily nad at python.org -- [] From wes.turner at gmail.com Sun Jan 14 13:06:20 2018 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 14 Jan 2018 13:06:20 -0500 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> Message-ID: FWIW, anaconda and conda-forge currently have 1.0.2 X https://anaconda.org/anaconda/openssl https://anaconda.org/conda-forge/openssl On Sunday, January 14, 2018, Ned Deily wrote: > On Jan 14, 2018, at 08:39, Christian Heimes wrote: > > On 2018-01-14 09:24, Matt Billenstein wrote: > >> Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple > has > >> deprecated (and no longer ships the header files for) the version > shipped with > >> recent versions of osx. > >> > >> Perhaps this is an option to support the various flavors of Linux as > well? > > > > AFAK Apple has decided to compile and statically link CPython's ssl with > > an ancient, customized LibreSSL version. 
Cory posted [1] a couple of > > months ago > > I think you're conflating some things here. Apple has not yet shipped a > version of Python 3 with macOS so the fact that Apple now links their > version of Python2.7 with a "private" copy of LibreSSL is irrelevant. > (It's private in the sense that they don't ship the header files for it; > the shared libs are there just for the use of the open source products > they ship with macOS that don't yet use the macOS native crypto APIs, > products like Python and Perl.) > > What Matt is likely thinking of is the Python 3 versions provided by the > python.org macOS binary installers where we do build and link with our > own 1.0.2 (and soon 1.1.0 for 3.7) versions of OpenSSL. Currently, > the OpenSSL (and several other third-party libs such as libxz which > is not shipped by Apple) are built as part of the installer build > script in the Mac section of the source repo. I would like to > refactor and generalize that so those third-party libs > could optionally be used for non-installer builds as well. But, in > any case, we don't have much choice for the installer builds until > such time as cPython has support for the Apple-provided crypto APIs. Support for Apple SecureTransport is part of the TLS module. IDK how far along that work is (whether it'll be ready for 3.7 beta 1)? https://github.com/python/peps/blob/master/pep-0543.rst https://www.python.org/dev/peps/pep-0543/ http://markmail.org/search/?q=list%3Aorg.python+PEP+543+TLS > > > I'm not going to add OpenSSL sources or builds to CPython. We just got > > rid of copies of libffi and other 3rd party dependencies. Crypto and TLS > > libraries are much, MUCH more complicated to handle than libffi. It's a > > constant moving targets of attacks. Vendors and distributions also have > > different opinions about trust store and policies. > > > > Let's keep build dependencies a downstream and vendor problem. > > That's not always an option, unfortunately. 
> > -- > Ned Deily > nad at python.org -- [] > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Sun Jan 14 14:49:48 2018 From: christian at python.org (Christian Heimes) Date: Sun, 14 Jan 2018 20:49:48 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> Message-ID: <1f0d6ec3-f20f-df5d-3056-96ff62601a37@python.org> On 2018-01-14 16:54, Ned Deily wrote: > On Jan 14, 2018, at 08:39, Christian Heimes wrote: >> On 2018-01-14 09:24, Matt Billenstein wrote: >>> Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple has >>> deprecated (and no longer ships the header files for) the version shipped with >>> recent versions of osx. >>> >>> Perhaps this is an option to support the various flavors of Linux as well? >> >> AFAK Apple has decided to compile and statically link CPython's ssl with >> an ancient, customized LibreSSL version. Cory posted [1] a couple of >> months ago > > I think you're conflating some things here. Apple has not yet shipped a > version of Python 3 with macOS so the fact that Apple now links their > version of Python2.7 with a "private" copy of LibreSSL is irrelevant. 
> (It's private in the sense that they don't ship the header files for it; > the shared libs are there just for the use of the open source products > they ship with macOS that don't yet use the macOS native crypto APIs, > products like Python and Perl.) > > What Matt is likely thinking of is the Python 3 versions provided by the > python.org macOS binary installers where we do build and link with our > own 1.0.2 (and soon 1.1.0 for 3.7) versions of OpenSSL. Currently, > the OpenSSL (and several other third-party libs such as libxz which > is not shipped by Apple) are built as part of the installer build > script in the Mac section of the source repo. I would like to > refactor and generalize that so those third-party libs > could optionally be used for non-installer builds as well. But, in > any case, we don't have much choice for the installer builds until > such time as cPython has support for the Apple-provided crypto APIs. Yeah, that sounds useful for macOS and Windows development. Back when I was doing more Windows stuff, I used our buildbot scripts to provide local builds of dependencies such as expat and OpenSSL. >> I'm not going to add OpenSSL sources or builds to CPython. We just got >> rid of copies of libffi and other 3rd party dependencies. Crypto and TLS >> libraries are much, MUCH more complicated to handle than libffi. It's a >> constant moving targets of attacks. Vendors and distributions also have >> different opinions about trust store and policies. >> >> Let's keep build dependencies a downstream and vendor problem. > > That's not always an option, unfortunately. For Python.org macOS and Windows installers, I'm considering us as our own downstream vendors. I should rather say "Steve and you" instead of us. You are both doing the heavy lifting. Thanks for your hard work.
:) Christian From matt at vazor.com Sun Jan 14 16:28:35 2018 From: matt at vazor.com (Matt Billenstein) Date: Sun, 14 Jan 2018 21:28:35 +0000 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> Message-ID: <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> On Sun, Jan 14, 2018 at 10:54:57AM -0500, Ned Deily wrote: > On Jan 14, 2018, at 08:39, Christian Heimes wrote: > > On 2018-01-14 09:24, Matt Billenstein wrote: > >> Correct me if I'm wrong, but Python3 on osx bundles openssl since Apple has > >> deprecated (and no longer ships the header files for) the version shipped with > >> recent versions of osx. > >> > >> Perhaps this is an option to support the various flavors of Linux as well? > > > > AFAK Apple has decided to compile and statically link CPython's ssl with > > an ancient, customized LibreSSL version. Cory posted [1] a couple of > > months ago > > What Matt is likely thinking of is the Python 3 versions provided by the > python.org macOS binary installers where we do build and link with our > own 1.0.2 (and soon 1.1.0 for 3.7) versions of OpenSSL. Yes, referring to the Python3 python.org installers -- I'm seeing this practice of bundling libs (particularly ssl) become more common as operating system support lags behind. In my mind it becomes easier to bundle deps in a binary installer across the board (Linux, OSX, Windows) rather than rely on whatever version the operating system provides. 
m -- Matt Billenstein matt at vazor.com http://www.vazor.com/ From k7hoven at gmail.com Sun Jan 14 17:06:29 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Mon, 15 Jan 2018 00:06:29 +0200 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: The timing of all of this is unfortunate. I'm sorry that my participation in the discussion has been a bit "on-off" lately. But my recent contributions have involved studying things like the interaction of threading/concurrency aspects of signal handling, as well as investigating subtleties of various proposals for context variables, including my own. Those are not exactly low-hanging fruit, and I'm sorry about not being able to eat them. It is also unfortunate that I haven't written down this proposal ("versions" A-C) to anywhere near the amount of precision than I did for PEP 555, which wasn't 100% specified in the first draft either. For consideration, I just thought it's better to at least mention it, so that those that now have a good understanding of the issues involved could perhaps understand it. I can add more detail, but to make it a full proposal now, I would probably need to join forces with a coauthor (with a good understanding of these issues) to figure out missing parts. I could tune in later to finish the PEP and write docs in case the approach gets implemented. -- Koos On Wed, Jan 10, 2018 at 7:17 PM, Guido van Rossum wrote: > I'm sorry, Koos, but based on your past contributions I am not interested > in discussing this topic with you. > > On Wed, Jan 10, 2018 at 8:58 AM, Koos Zevenhoven > wrote: > >> The status of PEP 555 is just a side track. Here, I took a step back >> compared to what went into PEP 555. >> >> ?Koos >> >> >> On Wed, Jan 10, 2018 at 6:21 PM, Guido van Rossum >> wrote: >> >>> The current status of PEP 555 is "Withdrawn". 
I have no interest in >>> considering it any more, so if you'd rather see a decision from me I'll be >>> happy to change it to "Rejected". >>> >>> On Tue, Jan 9, 2018 at 10:29 PM, Koos Zevenhoven >>> wrote: >>> >>>> On Jan 10, 2018 07:17, "Yury Selivanov" >>>> wrote: >>>> >>>> Wasn't PEP 555 rejected by Guido? What's the point of this post? >>>> >>>> >>>> I sure hope there is a point. I don't think mentioning PEP 555 in the >>>> discussions should hurt. >>>> >>>> A typo in my post btw: should be "PEP 567 (+568 ?)" in the second >>>> paragraph of course. >>>> >>>> -- Koos (mobile) >>>> >>>> >>>> Yury >>>> >>>> On Wed, Jan 10, 2018 at 4:08 AM Koos Zevenhoven >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> I feel like I should write some thoughts regarding the "context" >>>>> discussion, related to the various PEPs. >>>>> >>>>> I like PEP 567 (+ 567 ?) better than PEP 550. However, besides >>>>> providing cvar.set(), I'm not really sure about the gain compared to PEP >>>>> 555 (which could easily have e.g. a dict-like interface to the context). >>>>> I'm still not a big fan of "get"/"set" here, but the idea was indeed to >>>>> provide those on top of a PEP 555 type thing too. >>>>> >>>>> "Tokens" in PEP 567, seems to resemble assignment context managers in >>>>> PEP 555. However, they feel a bit messy to me, because they make it look >>>>> like one could just set a variable and then revert the change at any point >>>>> in time after that. >>>>> >>>>> PEP 555 is in fact a simplification of my previous sketch that had a >>>>> .set(..) in it, but was somewhat different from PEP 550. The idea was to >>>>> always explicitly define the scope of contextvar values. A context manager >>>>> / with statement determined the scope of .set(..) 
operations inside the >>>>> with statement: >>>>> >>>>> # Version A: >>>>> cvar.set(1) >>>>> with context_scope(): >>>>> cvar.set(2) >>>>> >>>>> assert cvar.get() == 2 >>>>> >>>>> assert cvar.get() == 1 >>>>> >>>>> Then I added the ability to define scopes for different variables >>>>> separately: >>>>> >>>>> # Version B >>>>> cvar1.set(1) >>>>> cvar2.set(2) >>>>> with context_scope(cvar1): >>>>> cvar1.set(11) >>>>> cvar2.set(22) >>>>> >>>>> assert cvar1.get() == 1 >>>>> assert cvar2.get() == 22 >>>>> >>>>> >>>>> However, in practice, most libraries would wrap __enter__, set and >>>>> __exit__ into another context manager. So maybe one might want to allow >>>>> something like >>>>> >>>>> # Version C: >>>>> assert cvar.get() == something >>>>> with context_scope(cvar, 2): >>>>> assert cvar.get() == 2 >>>>> >>>>> assert cvar.get() == something >>>>> >>>>> >>>>> But this then led to combining "__enter__" and ".set(..)" into >>>>> Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: >>>>> >>>>> # PEP 555 draft version: >>>>> assert cvar.value == something >>>>> with cvar.assign(1): >>>>> assert cvar.value == 1 >>>>> >>>>> assert cvar.value == something >>>>> >>>>> >>>>> Anyway, given the schedule, I'm not really sure about the best thing >>>>> to do here. In principle, something like in versions A, B and C above could >>>>> be done (I hope the proposal was roughly self-explanatory based on earlier >>>>> discussions). However, at this point, I'd probably need a lot of help to >>>>> make that happen for 3.7. 
>>>>> >>>>> -- Koos >>>>> >>>>> _______________________________________________ >>>>> Python-Dev mailing list >>>>> Python-Dev at python.org >>>>> https://mail.python.org/mailman/listinfo/python-dev >>>>> Unsubscribe: https://mail.python.org/mailma >>>>> n/options/python-dev/yselivanov.ml%40gmail.com >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: https://mail.python.org/mailma >>>> n/options/python-dev/guido%40python.org >>>> >>>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido) >>> >> >> >> >> -- >> + Koos Zevenhoven + http://twitter.com/k7hoven + >> > > > > -- > --Guido van Rossum (python.org/~guido) > -- + Koos Zevenhoven + http://twitter.com/k7hoven + -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Sun Jan 14 17:47:22 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Mon, 15 Jan 2018 00:47:22 +0200 Subject: [Python-Dev] Thoughts on "contexts". PEPs 550, 555, 567, 568 In-Reply-To: References: Message-ID: I'll quickly add a few things below just in case there's anyone that cares. On Wed, Jan 10, 2018 at 2:06 AM, Koos Zevenhoven wrote: > > The idea was to always explicitly define the scope of contextvar values. A > context manager / with statement determined the scope of .set(..) > operations inside the with statement: > > # Version A: > cvar.set(1) > with context_scope(): > cvar.set(2) > > assert cvar.get() == 2 > > assert cvar.get() == 1 > > Then I added the ability to define scopes for different variables > separately: > > # Version B > cvar1.set(1) > cvar2.set(2) > with context_scope(cvar1): > cvar1.set(11) > cvar2.set(22) > > assert cvar1.get() == 1 > assert cvar2.get() == 22 > > > However, in practice, most libraries would wrap __enter__, set and > __exit__ into another context manager. 
So maybe one might want to allow > something like > > # Version C: > assert cvar.get() == something > with context_scope(cvar, 2): > assert cvar.get() == 2 > > assert cvar.get() == something > > Note here that the point is to get a natural way to "undo" changes made to variables when exiting the scope. Undoing everything that is done within the defined scope is a very natural way to do it. Undoing individual .set(..) operations is more problematic. Features B+C could be essentially implemented as described in PEP 555, except with context_scope(cvar) being essentially the same as pushing and popping an empty Assignment object onto the reverse-linked stack. By empty, I mean a "key-value pair with a missing value". Then any set operations would replace the topmost assignment object for that variable with a new key-value pair (or push a new Assignment if there isn't one). However, to also get feature A, the stack may have to contain full mappings instead of assignment objects with just one key-value pair. I hope that clarifies some parts. Otherwise, in terms of semantics, the same things apply as for PEP 555 when it comes to generator function calls and next(..) etc., so we'd need to make sure it works well enough for all use cases. For instance, I'm not quite sure if I have a good enough understanding of the timeout example that Nathaniel wrote in the PEP 550 discussion to tell what would be required in terms of semantics, but I suppose it should be fine. -- Koos > But this then led to combining "__enter__" and ".set(..)" into > Assignment.__enter__ -- and "__exit__" into Assignment.__exit__ like this: > > # PEP 555 draft version: > assert cvar.value == something > with cvar.assign(1): > assert cvar.value == 1 > > assert cvar.value == something > > > Anyway, given the schedule, I'm not really sure about the best thing to do
In principle, something like in versions A, B and C above could be > done (I hope the proposal was roughly self-explanatory based on earlier > discussions). However, at this point, I'd probably need a lot of help to > make that happen for 3.7. > > -- Koos > > -- + Koos Zevenhoven + http://twitter.com/k7hoven + -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Tue Jan 16 01:42:50 2018 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 16 Jan 2018 15:42:50 +0900 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> Message-ID: <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> Matt Billenstein writes: > In my mind it becomes easier to bundle deps in a binary installer > across the board (Linux, OSX, Windows) rather than rely on whatever > version the operating system provides. Thing is, as Christian points out, TLS is a rapidly moving target. Every Mac OS or iOS update seems to link to a dozen CVEs for TLS support. We can go there if we have to, but it's often hard to go back when vendor support catches up to something reasonable. I think this is something for Ned and Christian and Steve to negotiate, since they're the ones who are most aware of the tradeoffs and bear the costs. 
From steve.dower at python.org Tue Jan 16 02:08:54 2018 From: steve.dower at python.org (Steve Dower) Date: Tue, 16 Jan 2018 18:08:54 +1100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >=2.5.3 In-Reply-To: <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> Message-ID: >From my perspective, we can?t keep an OpenSSL-like API and use Windows platform libraries (we could do a requests-like API easily enough, but even urllib3 is painfully low-level). We have to continue shipping our own copy of OpenSSL on Windows. Nothing to negotiate here except whether OpenSSL releases should trigger a Python release, and I think that decision can stay with the RM. Good luck solving macOS :o) Cheers, Steve Top-posted from my Windows phone From: Stephen J. Turnbull Sent: Tuesday, January 16, 2018 17:45 To: Matt Billenstein Cc: Christian Heimes; python-dev at python.org Subject: Re: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >=2.5.3 Matt Billenstein writes: > In my mind it becomes easier to bundle deps in a binary installer > across the board (Linux, OSX, Windows) rather than rely on whatever > version the operating system provides. Thing is, as Christian points out, TLS is a rapidly moving target. Every Mac OS or iOS update seems to link to a dozen CVEs for TLS support. We can go there if we have to, but it's often hard to go back when vendor support catches up to something reasonable. I think this is something for Ned and Christian and Steve to negotiate, since they're the ones who are most aware of the tradeoffs and bear the costs. 
_______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablogsal at gmail.com Tue Jan 16 04:12:06 2018 From: pablogsal at gmail.com (Pablo Galindo Salgado) Date: Tue, 16 Jan 2018 09:12:06 +0000 Subject: [Python-Dev] Best Python API for exposing posix_spawn In-Reply-To: References: Message-ID: Thank you to everyone who commented in this thread about the best interface for posix_spawn. I have finished implementing Antoine's suggestion in the PR: https://github.com/python/cpython/pull/5109 I think it would be good if we can have this merged before the feature lock at the end of the month if possible. Thank you very much, everyone, for your time and suggestions! On Wed, 10 Jan 2018, 09:17 Pablo Galindo Salgado, wrote: > I think I really like Antoine's suggestion so I'm going to finish > implementing it that way. I think this keeps the API simple, does not bring > in the os module new dependencies, keeps the C implementation clean and is > consistent with the rest of the posix module. I will post an update when is > ready. > > Thank you everyone for sharing your view and advice! > -------------- next part -------------- An HTML attachment was scrubbed...
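For context, the interface discussed in this thread eventually shipped as os.posix_spawn (in Python 3.8, POSIX platforms only). A minimal sketch of calling it, assuming that final signature; the exact API in the PR under review may have differed:

```python
import os
import sys

# os.posix_spawn(path, argv, env) -> pid of the spawned child.
# Consistent with the rest of the posix module: a thin wrapper over
# the C posix_spawn(3) call, returning the raw pid.
program = sys.executable
argv = [program, "-c", "import sys; sys.exit(7)"]

pid = os.posix_spawn(program, argv, os.environ)
_, status = os.waitpid(pid, 0)
assert os.WIFEXITED(status)
assert os.WEXITSTATUS(status) == 7  # the exit code we asked the child for
```

The later API also grew a keyword-only file_actions parameter for dup2/open/close actions in the child; that part is omitted here.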
URL: From christian at python.org Tue Jan 16 05:39:41 2018 From: christian at python.org (Christian Heimes) Date: Tue, 16 Jan 2018 11:39:41 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >=2.5.3 In-Reply-To: <3zLLxd54RFzFqv1@mail.python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> <3zLLxd54RFzFqv1@mail.python.org> Message-ID: On 2018-01-16 08:08, Steve Dower wrote: > From my perspective, we can?t keep an OpenSSL-like API and use Windows > platform libraries (we *could* do a requests-like API easily enough, but > even urllib3 is painfully low-level). > > ? > > We have to continue shipping our own copy of OpenSSL on Windows. Nothing > to negotiate here except whether OpenSSL releases should trigger a > Python release, and I think that decision can stay with the RM. 3.7 will no longer use static linking. We can offer out-of-bounds updates of the OpenSSL DLLs. And by "we", I'm talking about you. :) Christian From wes.turner at gmail.com Tue Jan 16 06:28:20 2018 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 16 Jan 2018 06:28:20 -0500 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 Message-ID: On Tuesday, January 16, 2018, Steve Dower wrote: > From my perspective, we can?t keep an OpenSSL-like API and use Windows > platform libraries (we *could* do a requests-like API easily enough, but > even urllib3 is painfully low-level). > > Support for Windows SChannel and Apple SecureTransport is part of the TLS module. IDK how far along that work is (whether it'll be ready for 3.7 beta 1)? Or where those volunteering to help with the TLS module can send PRs? 
https://github.com/python/peps/blob/master/pep-0543.rst https://www.python.org/dev/peps/pep-0543/ http://markmail.org/search/?q=list%3Aorg.python+PEP+543+TLS https://www.python.org/dev/peps/pep-0543/#interfaces > > > We have to continue shipping our own copy of OpenSSL on Windows. Nothing > to negotiate here except whether OpenSSL releases should trigger a Python > release, and I think that decision can stay with the RM. > > > > Good luck solving macOS :o) > > > > Cheers, > > Steve > > > > Top-posted from my Windows phone > > > > *From: *Stephen J. Turnbull > *Sent: *Tuesday, January 16, 2018 17:45 > *To: *Matt Billenstein > *Cc: *Christian Heimes ; python-dev at python.org > *Subject: *Re: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / > LibreSSL >=2.5.3 > > > > Matt Billenstein writes: > > > > > In my mind it becomes easier to bundle deps in a binary installer > > > across the board (Linux, OSX, Windows) rather than rely on whatever > > > version the operating system provides. > > > > Thing is, as Christian points out, TLS is a rapidly moving target. > > Every Mac OS or iOS update seems to link to a dozen CVEs for TLS > > support. We can go there if we have to, but it's often hard to go > > back when vendor support catches up to something reasonable. I think > > this is something for Ned and Christian and Steve to negotiate, since > > they're the ones who are most aware of the tradeoffs and bear the > > costs. > > > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > steve.dower%40python.org > > > -------------- next part -------------- An HTML attachment was scrubbed... 
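For reference, the OpenSSL-bound stdlib API that PEP 543 sets out to abstract looks like this in the 3.7-era ssl module; a PEP 543 backend (SChannel, SecureTransport, ...) would sit behind an equivalent interface. The connection part is commented out since it needs network access:

```python
import ssl

# The stdlib's current TLS entry point, bound to OpenSSL: a default
# client-side context with certificate and hostname verification on.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Wrapping a socket is where the context is tied to OpenSSL's engine:
# import socket
# with socket.create_connection(("example.org", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#         print(tls.version())
```

It is exactly this context/wrap split that PEP 543 generalizes, so that the backing engine need not be OpenSSL at all.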
URL: From christian at python.org Tue Jan 16 06:49:47 2018 From: christian at python.org (Christian Heimes) Date: Tue, 16 Jan 2018 12:49:47 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: Message-ID: On 2018-01-16 12:28, Wes Turner wrote: > > > On Tuesday, January 16, 2018, Steve Dower > wrote: > > From my perspective, we can't keep an OpenSSL-like API and use > Windows platform libraries (we *could* do a requests-like API easily > enough, but even urllib3 is painfully low-level).____ > > Support for Windows SChannel and Apple SecureTransport is part of the > TLS module. > > IDK how far along that work is (whether it'll be ready for 3.7 beta 1)? > Or where those volunteering to help with the TLS module can send PRs? You are misunderstanding the goal of PEP 543. It's not about providing implementations of various backends. The PEP merely defines a minimal abstraction layer. Neither the PEP nor the API is finalized or complete yet. Some parts of the PEP must be changed before it can be finalized. Cory and I are discussing the matter. Python 3.7's ssl module won't be compatible with PEP 543. For 3.8 it *might* be possible to provide a 543-compatible implementation on top of the ssl module. I will not work on SChannel or SecureTransport, since I have neither the expertise, knowledge, interest, nor resources to work on other implementations. AFAIK Steve would rather plug in Windows' cert validation API into OpenSSL than to provide another TLS implementation. For Apple ... no clue. How about you contact Apple support?
Regards, Christian From wes.turner at gmail.com Tue Jan 16 10:12:52 2018 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 16 Jan 2018 10:12:52 -0500 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: Message-ID: On Tuesday, January 16, 2018, Christian Heimes wrote: > On 2018-01-16 12:28, Wes Turner wrote: > > > > > > On Tuesday, January 16, 2018, Steve Dower > > wrote: > > > > From my perspective, we can?t keep an OpenSSL-like API and use > > Windows platform libraries (we *could* do a requests-like API easily > > enough, but even urllib3 is painfully low-level).____ > > > > Support for Windows SChannel and Apple SecureTransport is part of the > > TLS module. > > > > IDK how far along that work is (whether it'll be ready for 3.7 beta 1)? > > Or where those volunteering to help with the TLS module can send PRs? > > You are misunderstanding the goal of PEP 543. It's not about providing > implementations of various backends. The PEP merely defines an minimal > abstraction layer. Neither the PEP nor the API are finalized or complete > yet, too Some parts of the PEP must be changed before it can be > finalized. Cory and I are discussion the matter. > > Python 3.7's ssl module won't be compatible with PEP 543. For 3.8 it > *might* be possible to provide a 543 compatible implementation on top of > the ssl module. Got it. Thanks! > > I will not work on SChannel or SecureTransport, since I have neither > expertise, knowledge, interest or resources to work on other > implementations. AFAIK Steve would rather plug in Windows' cert > validation API into OpenSSL than to provide another TLS implementation. > For Apple ... no clue. How about you contact Apple support? A HUP to their seclist about this work awhile back doesn't seem to have upgraded OpenSSL. Presumably there's another mailing list thread or GitHub issue for PEP 543 interface and implementation development. 
> > Regards, > Christian > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Tue Jan 16 15:17:54 2018 From: christian at python.org (Christian Heimes) Date: Tue, 16 Jan 2018 21:17:54 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >= 2.5.3 In-Reply-To: References: Message-ID: FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have created a daily cronjob to populate Travis' cache with OpenSSL builds. Until the cache is filled, Linux CI will take an extra 5 minutes. Christian [1] https://github.com/python/cpython/pull/5180 From steve.dower at python.org Tue Jan 16 16:44:59 2018 From: steve.dower at python.org (Steve Dower) Date: Wed, 17 Jan 2018 08:44:59 +1100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >=2.5.3 In-Reply-To: References: Message-ID: Honestly, I'd rather plug into the WinHTTP API and just not even bother with sockets :) Certificate validation is about the only thing broken in OpenSSL on Windows (as far as not working well with system config), and it's relatively easy to replace with a couple of API calls. Now that we don't statically link OpenSSL anymore, it can be done easily with ctypes, so I'll probably put out a package for it sometime soon.
Top-posted from my Windows phone From: Christian Heimes Sent: Tuesday, January 16, 2018 22:52 To: python-dev at python.org Subject: Re: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL >=2.5.3 On 2018-01-16 12:28, Wes Turner wrote: > > > On Tuesday, January 16, 2018, Steve Dower > wrote: > > From my perspective, we can?t keep an OpenSSL-like API and use > Windows platform libraries (we *could* do a requests-like API easily > enough, but even urllib3 is painfully low-level).____ > > Support for Windows SChannel and Apple SecureTransport is part of the > TLS module. > > IDK how far along that work is (whether it'll be ready for 3.7 beta 1)? > Or where those volunteering to help with the TLS module can send PRs? You are misunderstanding the goal of PEP 543. It's not about providing implementations of various backends. The PEP merely defines an minimal abstraction layer. Neither the PEP nor the API are finalized or complete yet, too Some parts of the PEP must be changed before it can be finalized. Cory and I are discussion the matter. Python 3.7's ssl module won't be compatible with PEP 543. For 3.8 it *might* be possible to provide a 543 compatible implementation on top of the ssl module. I will not work on SChannel or SecureTransport, since I have neither expertise, knowledge, interest or resources to work on other implementations. AFAIK Steve would rather plug in Windows' cert validation API into OpenSSL than to provide another TLS implementation. For Apple ... no clue. How about you contact Apple support? Regards, Christian _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From steve.dower at python.org Tue Jan 16 16:47:14 2018
From: steve.dower at python.org (Steve Dower)
Date: Wed, 17 Jan 2018 08:47:14 +1100
Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL>=2.5.3
In-Reply-To: References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> <3zLLxd54RFzFqv1@mail.python.org> Message-ID:

I think you mean out-of-band updates, and by "you" I'm going to pretend you mean PyCA ;)

Top-posted from my Windows phone

From: Christian Heimes
Sent: Tuesday, January 16, 2018 21:42
To: python-dev at python.org
Subject: Re: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL>=2.5.3

On 2018-01-16 08:08, Steve Dower wrote:
> From my perspective, we can't keep an OpenSSL-like API and use Windows
> platform libraries (we *could* do a requests-like API easily enough, but
> even urllib3 is painfully low-level).
>
> [...]
>
> We have to continue shipping our own copy of OpenSSL on Windows. Nothing
> to negotiate here except whether OpenSSL releases should trigger a
> Python release, and I think that decision can stay with the RM.

3.7 will no longer use static linking. We can offer out-of-bounds updates of the OpenSSL DLLs. And by "we", I'm talking about you. :)

Christian

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From yselivanov.ml at gmail.com Tue Jan 16 17:44:14 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 16 Jan 2018 17:44:14 -0500 Subject: [Python-Dev] PEP 567 v3 Message-ID: Hi, This is a third version of PEP 567. Changes from v2: 1. PyThreadState now references Context objects directly (instead of referencing _ContextData). This fixes out of sync Context.get() and ContextVar.get(). 2. Added a new Context.copy() method. 3. Renamed Token.old_val property to Token.old_value 4. ContextVar.reset(token) now raises a ValueError if the token was created in a different Context. 5. All areas of the PEP were updated to be more precise. Context is *no longer* defined as a read-only or an immutable mapping; ContextVar.get() behaviour is fully defined; the immutability is only mentioned in the Implementation section to avoid confusion; etc. 6. Added a new Examples section. The reference implementation has been updated to include all these changes. The only open question I personally have is whether ContextVar.reset() should be idempotent or not. Maybe we should be strict and raise an error if a user tries to reset a variable more than once with the same token object? Other than that, I'm pretty happy with this version. Big thanks to everybody helping with the PEP! PEP: 567 Title: Context Variables Version: $Revision$ Last-Modified: $Date$ Author: Yury Selivanov Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 12-Dec-2017 Python-Version: 3.7 Post-History: 12-Dec-2017, 28-Dec-2017, 16-Jan-2018 Abstract ======== This PEP proposes a new ``contextvars`` module and a set of new CPython C APIs to support context variables. This concept is similar to thread-local storage (TLS), but, unlike TLS, it also allows correctly keeping track of values per asynchronous task, e.g. ``asyncio.Task``. This proposal is a simplified version of :pep:`550`. 
The key difference is that this PEP is concerned only with solving the case for asynchronous tasks, not for generators. There are no proposed modifications to any built-in types or to the interpreter.

This proposal is not strictly related to Python Context Managers, although it does provide a mechanism that can be used by Context Managers to store their state.

Rationale
=========

Thread-local variables are insufficient for asynchronous tasks that execute concurrently in the same OS thread. Any context manager that saves and restores a context value using ``threading.local()`` will have its context values bleed to other code unexpectedly when used in async/await code.

A few examples where having working context-local storage for asynchronous code is desirable:

* Context managers like ``decimal`` contexts and ``numpy.errstate``.

* Request-related data, such as security tokens and request data in web applications, language context for ``gettext``, etc.

* Profiling, tracing, and logging in large code bases.

Introduction
============

The PEP proposes a new mechanism for managing context variables. The key classes involved in this mechanism are ``contextvars.Context`` and ``contextvars.ContextVar``. The PEP also proposes some policies for using the mechanism around asynchronous tasks.

The proposed mechanism for accessing context variables uses the ``ContextVar`` class. A module (such as ``decimal``) that wishes to use the new mechanism should:

* declare a module-global variable holding a ``ContextVar`` to serve as a key;

* access the current value via the ``get()`` method on the key variable;

* modify the current value via the ``set()`` method on the key variable.

The notion of "current value" deserves special consideration: different asynchronous tasks that exist and execute concurrently may have different values for the same key. This idea is well known from thread-local storage, but in this case the locality of the value is not necessarily bound to a thread.
Instead, there is the notion of the "current ``Context``" which is stored in thread-local storage. Manipulation of the current context is the responsibility of the task framework, e.g. asyncio. A ``Context`` is a mapping of ``ContextVar`` objects to their values. The ``Context`` itself exposes the ``abc.Mapping`` interface (not ``abc.MutableMapping``!), so it cannot be modified directly. To set a new value for a context variable in a ``Context`` object, the user needs to: * make the ``Context`` object "current" using the ``Context.run()`` method; * use ``ContextVar.set()`` to set a new value for the context variable. The ``ContextVar.get()`` method looks for the variable in the current ``Context`` object using ``self`` as a key. It is not possible to get a direct reference to the current ``Context`` object, but it is possible to obtain a shallow copy of it using the ``contextvars.copy_context()`` function. This ensures that the caller of ``Context.run()`` is the sole owner of its ``Context`` object. Specification ============= A new standard library module ``contextvars`` is added with the following APIs: 1. ``copy_context() -> Context`` function is used to get a copy of the current ``Context`` object for the current OS thread. 2. ``ContextVar`` class to declare and access context variables. 3. ``Context`` class encapsulates context state. Every OS thread stores a reference to its current ``Context`` instance. It is not possible to control that reference directly. Instead, the ``Context.run(callable, *args, **kwargs)`` method is used to run Python code in another context. contextvars.ContextVar ---------------------- The ``ContextVar`` class has the following constructor signature: ``ContextVar(name, *, default=_NO_DEFAULT)``. The ``name`` parameter is used for introspection and debug purposes, and is exposed as a read-only ``ContextVar.name`` attribute. The ``default`` parameter is optional. Example:: # Declare a context variable 'var' with the default value 42. 
var = ContextVar('var', default=42) (The ``_NO_DEFAULT`` is an internal sentinel object used to detect if the default value was provided.) ``ContextVar.get(default=_NO_DEFAULT)`` returns a value for the context variable for the current ``Context``:: # Get the value of `var`. var.get() If there is no value for the variable in the current context, ``ContextVar.get()`` will: * return the value of the *default* argument of the ``get()`` method, if provided; or * return the default value for the context variable, if provided; or * raise a ``LookupError``. ``ContextVar.set(value) -> Token`` is used to set a new value for the context variable in the current ``Context``:: # Set the variable 'var' to 1 in the current context. var.set(1) ``ContextVar.reset(token)`` is used to reset the variable in the current context to the value it had before the ``set()`` operation that created the ``token`` (or to remove the variable if it was not set):: assert var.get(None) is None token = var.set(1) try: ... finally: var.reset(token) assert var.get(None) is None ``ContextVar.reset()`` method is idempotent and can be called multiple times on the same Token object: second and later calls will be no-ops. The method raises a ``ValueError`` if: * called with a token object created by another variable; or * the current ``Context`` object does not match the one where the token object was created. contextvars.Token ----------------- ``contextvars.Token`` is an opaque object that should be used to restore the ``ContextVar`` to its previous value, or to remove it from the context if the variable was not set before. It can be created only by calling ``ContextVar.set()``. For debug and introspection purposes it has: * a read-only attribute ``Token.var`` pointing to the variable that created the token; * a read-only attribute ``Token.old_value`` set to the value the variable had before the ``set()`` call, or to ``Token.MISSING`` if the variable wasn't set before. 
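Since ``set()`` returns a ``Token`` and ``reset()`` restores the previous state, the natural consumer of this pair is a context manager. The following is an illustrative sketch only (the ``precision`` variable and the ``localcontext()`` helper are hypothetical names, not part of the PEP), showing how a library such as ``decimal`` might scope a value:

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Hypothetical module-global variable, as a real library would declare it.
precision = ContextVar('precision', default=42)

@contextmanager
def localcontext(prec):
    # set() records the previous state in the returned Token...
    token = precision.set(prec)
    try:
        yield
    finally:
        # ...and reset() restores it, removing the variable entirely
        # if it was not set before.
        precision.reset(token)

assert precision.get() == 42
with localcontext(7):
    assert precision.get() == 7
assert precision.get() == 42
```

Note that the `finally` clause guarantees the restore even if the body raises, which is exactly the pattern the Token API is designed for.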
contextvars.Context ------------------- ``Context`` object is a mapping of context variables to values. ``Context()`` creates an empty context. To get a copy of the current ``Context`` for the current OS thread, use the ``contextvars.copy_context()`` method:: ctx = contextvars.copy_context() To run Python code in some ``Context``, use ``Context.run()`` method:: ctx.run(function) Any changes to any context variables that ``function`` causes will be contained in the ``ctx`` context:: var = ContextVar('var') var.set('spam') def function(): assert var.get() == 'spam' assert ctx[var] == 'spam' var.set('ham') assert var.get() == 'ham' assert ctx[var] == 'ham' ctx = copy_context() # Any changes that 'function' makes to 'var' will stay # isolated in the 'ctx'. ctx.run(function) assert var.get() == 'spam' assert ctx[var] == 'ham' ``Context.run()`` raises a ``RuntimeError`` when called on the same context object from more than one OS thread, or when called recursively. ``Context.copy()`` returns a shallow copy of the context object. ``Context`` objects implement the ``collections.abc.Mapping`` ABC. This can be used to introspect contexts:: ctx = contextvars.copy_context() # Print all context variables and their values in 'ctx': print(ctx.items()) # Print the value of 'some_variable' in context 'ctx': print(ctx[some_variable]) Note that all Mapping methods, including ``Context.__getitem__`` and ``Context.get``, ignore default values for context variables (i.e. ``ContextVar.default``). This means that for a variable *var* that was created with a default value and was not set in the *context*: * ``context[var]`` raises a ``KeyError``, * ``var in context`` returns ``False``, * the variable isn't included in ``context.items()``, etc. asyncio ------- ``asyncio`` uses ``Loop.call_soon()``, ``Loop.call_later()``, and ``Loop.call_at()`` to schedule the asynchronous execution of a function. ``asyncio.Task`` uses ``call_soon()`` to run the wrapped coroutine. 
We modify ``Loop.call_{at,later,soon}`` and ``Future.add_done_callback()`` to accept the new optional *context* keyword-only argument, which defaults to the current context:: def call_soon(self, callback, *args, context=None): if context is None: context = contextvars.copy_context() # ... some time later context.run(callback, *args) Tasks in asyncio need to maintain their own context that they inherit from the point they were created at. ``asyncio.Task`` is modified as follows:: class Task: def __init__(self, coro): ... # Get the current context snapshot. self._context = contextvars.copy_context() self._loop.call_soon(self._step, context=self._context) def _step(self, exc=None): ... # Every advance of the wrapped coroutine is done in # the task's context. self._loop.call_soon(self._step, context=self._context) ... Implementation ============== This section explains high-level implementation details in pseudo-code. Some optimizations are omitted to keep this section short and clear. The ``Context`` mapping is implemented using an immutable dictionary. This allows for a O(1) implementation of the ``copy_context()`` function. The reference implementation implements the immutable dictionary using Hash Array Mapped Tries (HAMT); see :pep:`550` for analysis of HAMT performance [1]_. 
For the purposes of this section, we implement an immutable dictionary using a copy-on-write approach and built-in dict type:: class _ContextData: def __init__(self): self._mapping = dict() def get(self, key): return self._mapping[key] def set(self, key, value): copy = _ContextData() copy._mapping = self._mapping.copy() copy._mapping[key] = value return copy def delete(self, key): copy = _ContextData() copy._mapping = self._mapping.copy() del copy._mapping[key] return copy Every OS thread has a reference to the current ``Context`` object:: class PyThreadState: context: Context ``contextvars.Context`` is a wrapper around ``_ContextData``:: class Context(collections.abc.Mapping): _data: _ContextData _prev_context: Optional[Context] def __init__(self): self._data = _ContextData() self._prev_context = None def run(self, callable, *args, **kwargs): if self._prev_context is not None: raise RuntimeError( f'cannot enter context: {self} is already entered') ts: PyThreadState = PyThreadState_Get() if ts.context is None: ts.context = Context() self._prev_context = ts.context try: ts.context = self return callable(*args, **kwargs) finally: ts.context = self._prev_context self._prev_context = None def copy(self): new = Context() new._data = self._data return new # Mapping API methods are implemented by delegating # `get()` and other Mapping methods to `self._data`. 
``contextvars.copy_context()`` is implemented as follows::

    def copy_context():
        ts: PyThreadState = PyThreadState_Get()

        if ts.context is None:
            ts.context = Context()

        return ts.context.copy()

``contextvars.ContextVar`` interacts with ``PyThreadState.context`` directly::

    class ContextVar:

        def __init__(self, name, *, default=_NO_DEFAULT):
            self._name = name
            self._default = default

        @property
        def name(self):
            return self._name

        def get(self, default=_NO_DEFAULT):
            ts: PyThreadState = PyThreadState_Get()
            if ts.context is not None:
                try:
                    return ts.context[self]
                except KeyError:
                    pass

            if default is not _NO_DEFAULT:
                return default

            if self._default is not _NO_DEFAULT:
                return self._default

            raise LookupError

        def set(self, value):
            ts: PyThreadState = PyThreadState_Get()

            if ts.context is None:
                ts.context = Context()

            data: _ContextData = ts.context._data
            try:
                old_value = data.get(self)
            except KeyError:
                old_value = Token.MISSING

            updated_data = data.set(self, value)
            ts.context._data = updated_data
            return Token(ts.context, self, old_value)

        def reset(self, token):
            if token._var is not self:
                raise ValueError(
                    "Token was created by a different ContextVar")

            ts: PyThreadState = PyThreadState_Get()

            if token._ctx is not ts.context:
                raise ValueError(
                    "Token was created in a different Context")

            if token._used:
                return

            data: _ContextData = ts.context._data
            if token._old_value is Token.MISSING:
                ts.context._data = data.delete(token._var)
            else:
                ts.context._data = data.set(token._var, token._old_value)

            token._used = True

Note that in the reference implementation, ``ContextVar.get()`` has an internal cache for the most recent value, which allows it to bypass a hash lookup. This is similar to the optimization the ``decimal`` module implements to retrieve its context from ``PyThreadState_GetDict()``. See :pep:`550`, which explains the implementation of the cache in great detail.
The ``Token`` class is implemented as follows:: class Token: MISSING = object() def __init__(self, ctx, var, old_value): self._ctx = ctx self._var = var self._old_value = old_value self._used = False @property def var(self): return self._var @property def old_value(self): return self._old_value Summary of the New APIs ======================= Python API ---------- 1. A new ``contextvars`` module with ``ContextVar``, ``Context``, and ``Token`` classes, and a ``copy_context()`` function. 2. ``asyncio.Loop.call_at()``, ``asyncio.Loop.call_later()``, ``asyncio.Loop.call_soon()``, and ``asyncio.Future.add_done_callback()`` run callback functions in the context they were called in. A new *context* keyword-only parameter can be used to specify a custom context. 3. ``asyncio.Task`` is modified internally to maintain its own context. C API ----- 1. ``PyContextVar * PyContextVar_New(char *name, PyObject *default)``: create a ``ContextVar`` object. The *default* argument can be ``NULL``, which means that the variable has no default value. 2. ``int PyContextVar_Get(PyContextVar *, PyObject *default_value, PyObject **value)``: return ``-1`` if an error occurs during the lookup, ``0`` otherwise. If a value for the context variable is found, it will be set to the ``value`` pointer. Otherwise, ``value`` will be set to ``default_value`` when it is not ``NULL``. If ``default_value`` is ``NULL``, ``value`` will be set to the default value of the variable, which can be ``NULL`` too. ``value`` is always a new reference. 3. ``PyContextToken * PyContextVar_Set(PyContextVar *, PyObject *)``: set the value of the variable in the current context. 4. ``PyContextVar_Reset(PyContextVar *, PyContextToken *)``: reset the value of the context variable. 5. ``PyContext * PyContext_New()``: create a new empty context. 6. ``PyContext * PyContext_Copy()``: get a copy of the current context. 7. 
``int PyContext_Enter(PyContext *)`` and ``int PyContext_Exit(PyContext *)`` allow setting and restoring the context for the current OS thread. It is required to always restore the previous context::

    PyContext *old_ctx = PyContext_Copy();
    if (old_ctx == NULL) goto error;

    if (PyContext_Enter(new_ctx)) goto error;

    // run some code

    if (PyContext_Exit(old_ctx)) goto error;

Design Considerations
=====================

Why contextvars.Token and not ContextVar.unset()?
-------------------------------------------------

The Token API makes it possible to avoid having a ``ContextVar.unset()`` method, which would be incompatible with the chained-contexts design of :pep:`550`. Future compatibility with :pep:`550` is desired (at least for Python 3.7) in case there is demand to support context variables in generators and asynchronous generators.

The Token API also offers better usability: the user does not have to special-case the absence of a value. Compare::

    token = cv.set(blah)
    try:
        # code
    finally:
        cv.reset(token)

with::

    _deleted = object()
    old = cv.get(default=_deleted)
    try:
        cv.set(blah)
        # code
    finally:
        if old is _deleted:
            cv.unset()
        else:
            cv.set(old)

Rejected Ideas
==============

Replication of threading.local() interface
------------------------------------------

Please refer to :pep:`550`, where this topic is covered in detail: [2]_.

Backwards Compatibility
=======================

This proposal preserves 100% backwards compatibility.

Libraries that use ``threading.local()`` to store context-related values currently work correctly only for synchronous code. Switching them to use the proposed API will keep their behavior for synchronous code unmodified, but will automatically enable support for asynchronous code.
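As a quick illustration of that backwards-compatibility claim, here is a hedged sketch (the ``request_id`` variable and ``handler`` coroutine are made-up names) showing that two concurrently running asyncio tasks each see only their own value, while the surrounding synchronous code is unaffected:

```python
import asyncio
from contextvars import ContextVar

# Hypothetical request-scoped variable, as a web framework might declare it.
request_id = ContextVar('request_id', default=None)

async def handler(rid):
    request_id.set(rid)
    await asyncio.sleep(0)       # yield control to the other task
    return request_id.get()      # each task still sees its own value

async def main():
    return await asyncio.gather(handler('a'), handler('b'))

results = asyncio.run(main())
assert results == ['a', 'b']     # no bleeding between the two tasks
assert request_id.get() is None  # the outer context is untouched
```

The same code written with ``threading.local()`` would see the two handlers overwrite each other's value, since they share one OS thread.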
Examples
========

Converting code that uses threading.local()
-------------------------------------------

Typical code that uses ``threading.local()`` looks like the following snippet::

    class mylocal(threading.local):
        # Subclass threading.local to specify a default value.
        value = 'spam'

    mylocal = mylocal()

    # To set a new value:
    mylocal.value = 'new value'

    # To read the current value:
    mylocal.value

Such code can be converted to use the ``contextvars`` module::

    mylocal = contextvars.ContextVar('mylocal', default='spam')

    # To set a new value:
    mylocal.set('new value')

    # To read the current value:
    mylocal.get()

Offloading execution to other threads
-------------------------------------

It is possible to run code in a separate OS thread using a copy of the current thread context::

    executor = ThreadPoolExecutor()
    current_context = contextvars.copy_context()

    executor.submit(
        lambda: current_context.run(some_function))

Reference Implementation
========================

The reference implementation can be found here: [3]_.

References
==========

.. [1] https://www.python.org/dev/peps/pep-0550/#appendix-hamt-performance-analysis

.. [2] https://www.python.org/dev/peps/pep-0550/#replication-of-threading-local-interface

.. [3] https://github.com/python/cpython/pull/5027

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From victor.stinner at gmail.com Tue Jan 16 18:26:58 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 17 Jan 2018 00:26:58 +0100
Subject: [Python-Dev] PEP 567 v3
In-Reply-To: References: Message-ID:

Hi Yury,

Thanks for the updated PEP v3, it's now much better than PEP v2!

I have no more complaints about your PEP. I vote +1 for PEP 567 contextvars!

> The only open question I personally have is whether ContextVar.reset()
> should be idempotent or not. Maybe we should be strict and raise an
> error if a user tries to reset a variable more than once with the same
> token object?

I don't think that it's worth it to prevent misuse of reset(). IMHO it's fine if calling reset() twice reverts the variable state twice.

Victor

From guido at python.org Tue Jan 16 18:53:10 2018
From: guido at python.org (Guido van Rossum)
Date: Tue, 16 Jan 2018 15:53:10 -0800
Subject: [Python-Dev] PEP 567 v3
In-Reply-To: References: Message-ID:

On Tue, Jan 16, 2018 at 3:26 PM, Victor Stinner wrote:

> Thanks for the updated PEP v3, it's now much better than PEP v2!
>
> I have no more complaints about your PEP. I vote +1 for PEP 567
> contextvars!

Yeah!

> > The only open question I personally have is whether ContextVar.reset()
> > should be idempotent or not. Maybe we should be strict and raise an
> > error if a user tries to reset a variable more than once with the same
> > token object?
>
> I don't think that it's worth it to prevent misuse of reset(). IMHO
> it's fine if calling reset() twice reverts the variable state twice.

Maybe the effect of calling it twice should be specified as undefined -- the implementation can try to raise in simple cases.

Unless Yury has a use case for the idempotency? (But with __enter__/__exit__ as the main use case for reset() I wouldn't know what the use case for idempotency would be.)

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Tue Jan 16 19:37:34 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 17 Jan 2018 01:37:34 +0100 Subject: [Python-Dev] PEP 567 v3 References: Message-ID: <20180117013734.7d6e5a72@fsol> On Tue, 16 Jan 2018 17:44:14 -0500 Yury Selivanov wrote: > Offloading execution to other threads > ------------------------------------- > > It is possible to run code in a separate OS thread using a copy > of the current thread context:: > > executor = ThreadPoolExecutor() > current_context = contextvars.copy_context() > > executor.submit( > lambda: current_context.run(some_function)) Does it also support offloading to a separate process (using ProcessPoolExecutor in the example above)? This would require the Context to support pickling. Regards Antoine. From guido at python.org Tue Jan 16 19:45:42 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Jan 2018 16:45:42 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: <20180117013734.7d6e5a72@fsol> References: <20180117013734.7d6e5a72@fsol> Message-ID: On Tue, Jan 16, 2018 at 4:37 PM, Antoine Pitrou wrote: > On Tue, 16 Jan 2018 17:44:14 -0500 > Yury Selivanov wrote: > > > Offloading execution to other threads > > ------------------------------------- > > > > It is possible to run code in a separate OS thread using a copy > > of the current thread context:: > > > > executor = ThreadPoolExecutor() > > current_context = contextvars.copy_context() > > > > executor.submit( > > lambda: current_context.run(some_function)) > > Does it also support offloading to a separate process (using > ProcessPoolExecutor in the example above)? This would require the > Context to support pickling. > I don't think that's a requirement. The transparency between the two different types of executor is mostly misleading anyway -- it's like the old RPC transparency problem, which was never solved IIRC. 
There are just too many things you need to be aware of before you can successfully offload something to a different process. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Tue Jan 16 20:00:40 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 16 Jan 2018 20:00:40 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: On Tue, Jan 16, 2018 at 6:53 PM, Guido van Rossum wrote: > On Tue, Jan 16, 2018 at 3:26 PM, Victor Stinner [..] >> I don't think that it's worth it to prevent misuage of reset(). IMHO >> it's fine if calling reset() twice reverts the variable state twice. > > > Maybe the effect of calling it twice should be specified as undefined -- the > implementation can try to raise in simple cases. > > Unless Yury has a use case for the idempotency? (But with __enter__/__exit__ > as the main use case for reset() I wouldn't know what the use case for > idempotency would be.) I don't have any use case for idempotent reset, so I'd change it to raise an error on second call. We can always relax this in 3.8 if people request it to be idempotent. Yury From yselivanov.ml at gmail.com Tue Jan 16 20:01:03 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 16 Jan 2018 20:01:03 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: Thanks, Victor! 
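In terms of the PEP's pseudo-code, the stricter behaviour Yury agrees to above is a small change to ``reset()``: instead of returning silently when ``token._used`` is true, it raises. The following toy model is purely illustrative -- it ignores Contexts and threads entirely and only demonstrates the token-reuse check:

```python
class Token:
    MISSING = object()

    def __init__(self, var, old_value):
        self._var = var
        self._old_value = old_value
        self._used = False

class ContextVar:
    def __init__(self, name):
        self._name = name
        self._value = Token.MISSING

    def set(self, value):
        # Remember the previous state in the token.
        token = Token(self, self._value)
        self._value = value
        return token

    def reset(self, token):
        if token._var is not self:
            raise ValueError("Token was created by a different ContextVar")
        if token._used:
            # The strict variant: a second reset() with the same token
            # is an error instead of a no-op.
            raise RuntimeError("Token has already been used once")
        self._value = token._old_value
        token._used = True

var = ContextVar('var')
token = var.set(1)
var.reset(token)
try:
    var.reset(token)   # the second call now raises
except RuntimeError:
    pass
```

Relaxing this back to a no-op in 3.8, as Yury notes, would only require replacing the ``raise`` with ``return``.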
From yselivanov.ml at gmail.com Tue Jan 16 20:06:41 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 16 Jan 2018 20:06:41 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> Message-ID: On Tue, Jan 16, 2018 at 7:45 PM, Guido van Rossum wrote: > On Tue, Jan 16, 2018 at 4:37 PM, Antoine Pitrou wrote: >> >> On Tue, 16 Jan 2018 17:44:14 -0500 >> Yury Selivanov wrote: >> >> > Offloading execution to other threads >> > ------------------------------------- >> > >> > It is possible to run code in a separate OS thread using a copy >> > of the current thread context:: >> > >> > executor = ThreadPoolExecutor() >> > current_context = contextvars.copy_context() >> > >> > executor.submit( >> > lambda: current_context.run(some_function)) >> >> Does it also support offloading to a separate process (using >> ProcessPoolExecutor in the example above)? This would require the >> Context to support pickling. > > > I don't think that's a requirement. The transparency between the two > different types of executor is mostly misleading anyway -- it's like the old > RPC transparency problem, which was never solved IIRC. There are just too > many things you need to be aware of before you can successfully offload > something to a different process. I agree. I think it would be a very fragile thing In practice: if you have even one variable in the context that isn't pickleable, your code that uses a ProcessPool would stop working. I would defer Context pickleability to 3.8+. 
Yury From njs at pobox.com Tue Jan 16 20:18:06 2018 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Jan 2018 17:18:06 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> Message-ID: On Tue, Jan 16, 2018 at 5:06 PM, Yury Selivanov wrote: > On Tue, Jan 16, 2018 at 7:45 PM, Guido van Rossum wrote: >> On Tue, Jan 16, 2018 at 4:37 PM, Antoine Pitrou wrote: >>> >>> On Tue, 16 Jan 2018 17:44:14 -0500 >>> Yury Selivanov wrote: >>> >>> > Offloading execution to other threads >>> > ------------------------------------- >>> > >>> > It is possible to run code in a separate OS thread using a copy >>> > of the current thread context:: >>> > >>> > executor = ThreadPoolExecutor() >>> > current_context = contextvars.copy_context() >>> > >>> > executor.submit( >>> > lambda: current_context.run(some_function)) >>> >>> Does it also support offloading to a separate process (using >>> ProcessPoolExecutor in the example above)? This would require the >>> Context to support pickling. >> >> >> I don't think that's a requirement. The transparency between the two >> different types of executor is mostly misleading anyway -- it's like the old >> RPC transparency problem, which was never solved IIRC. There are just too >> many things you need to be aware of before you can successfully offload >> something to a different process. > > I agree. > > I think it would be a very fragile thing In practice: if you have even > one variable in the context that isn't pickleable, your code that uses > a ProcessPool would stop working. I would defer Context pickleability > to 3.8+. There's also a more fundamental problem: you need some way to match up the ContextVar objects across the two processes, and right now they don't have an attached __module__ or __qualname__. 
I guess we could do like namedtuple and (a) capture the module where the ContextVar was instantiated, on the assumption that that's where it will be stored, (b) require that users pass in the name of variable where it will be stored as the 'name' argument to ContextVar.__init__. I tend to agree that this is something to worry about for 3.8 though. (If we need to retrofit pickle support, we could add a pickleable=False argument to ContextVar, and require people to pass pickleable=True to signal that they've done the appropriate setup to make the ContextVar identifiable across processes, and that its contents are safe to pickle.) -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Tue Jan 16 20:27:53 2018 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Jan 2018 17:27:53 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: On Tue, Jan 16, 2018 at 2:44 PM, Yury Selivanov wrote: > 4. ContextVar.reset(token) now raises a ValueError if the token was > created in a different Context. A minor bit of polish: given that Token objects have to track the associated ContextVar anyway, I think it'd be cleaner if instead of doing: token = cvar.set(...) cvar.reset(token) we made the API be: token = cvar.set(...) token.reset() In the first version, we use 'cvar' twice, and it's a mandatory invariant that the same ContextVar object gets used in both places; you had to add extra code to check this and raise an error if that's violated. It's level 5 on Rusty's scale (http://sweng.the-davies.net/Home/rustys-api-design-manifesto) In the second version, the ContextVar is only mentioned once, so the invariant is automatically enforced by the API -- you can't even express the broken version. That's level 10 on Rusty's scale, and gives a simpler implementation too. -n -- Nathaniel J. 
Smith -- https://vorpus.org From yselivanov.ml at gmail.com Tue Jan 16 20:33:53 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 16 Jan 2018 20:33:53 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: On Tue, Jan 16, 2018 at 8:27 PM, Nathaniel Smith wrote: [..] > token = cvar.set(...) > token.reset() I see the point, but I think that having the 'reset' method defined on the ContextVar class is easier to grasp. It also feels natural that a pair of set/reset methods is defined on the same class. This is highly subjective though, so let's see which option Guido likes more. Yury From guido at python.org Wed Jan 17 00:33:16 2018 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Jan 2018 21:33:16 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: On Tue, Jan 16, 2018 at 5:33 PM, Yury Selivanov wrote: > On Tue, Jan 16, 2018 at 8:27 PM, Nathaniel Smith wrote: > [..] > > token = cvar.set(...) > > token.reset() > > I see the point, but I think that having the 'reset' method defined on > the ContextVar class is easier to grasp. It also feels natural that a > pair of set/reset methods is defined on the same class. This is > highly subjective though, so let's see which option Guido likes more. > I think this came up in one of the previous reviews of the PEP. I like Yury's (redundant) version -- it makes it clear to the human reader of the code which variable is being reset. And it's not like it's going to be used that much -- it'll be likely hidden inside a context manager. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Wed Jan 17 00:45:03 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jan 2018 15:45:03 +1000 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: Message-ID: On 17 January 2018 at 11:27, Nathaniel Smith wrote: > On Tue, Jan 16, 2018 at 2:44 PM, Yury Selivanov wrote: >> 4. ContextVar.reset(token) now raises a ValueError if the token was >> created in a different Context. > > A minor bit of polish: given that Token objects have to track the > associated ContextVar anyway, I think it'd be cleaner if instead of > doing: > > token = cvar.set(...) > cvar.reset(token) > > we made the API be: > > token = cvar.set(...) > token.reset() As a counterpoint to this, consider the case where you're working with *two* cvars: token1 = cvar1.set(...) token2 = cvar2.set(...) ... cvar1.reset(token1) ... cvar2.reset(token2) At the point where the resets happen, you know exactly which cvar is being reset, even if you don't know where the token was created. With reset-on-the-token, you're entirely reliant on variable naming to know which ContextVar is going to be affected: token1 = cvar1.set(...) token2 = cvar2.set(...) ... token1.reset() # Resets cvar1 ... token2.reset() # Resets cvar2 If someone really does want an auto-reset API, it's also fairly easy to build atop the more explicit one: def set_cvar(cvar, value): token = cvar.set(value) return functools.partial(cvar.reset, token) reset_cvar1 = set_cvar(cvar1, ...) ... reset_cvar1() Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From christian at python.org Wed Jan 17 03:55:50 2018 From: christian at python.org (Christian Heimes) Date: Wed, 17 Jan 2018 09:55:50 +0100 Subject: [Python-Dev] Python 3.7: Require OpenSSL >=1.0.2 / LibreSSL>=2.5.3 In-Reply-To: <3zLkQz2PyyzFqyc@mail.python.org> References: <20180113142319.38f77f39@fsol> <1ff41667-efc2-db94-5476-27e6e326c673@python.org> <01010160f3c55ec9-df65f3b3-4cad-4ae7-9176-891851bd97ee-000000@us-west-2.amazonses.com> <378BD3B5-36E1-4598-8C93-BBDFA271B21B@python.org> <01010160f692f82b-01bceac7-ce63-4bec-be3f-205b20657f44-000000@us-west-2.amazonses.com> <23133.40682.283754.157854@turnbull.sk.tsukuba.ac.jp> <3zLLxd54RFzFqv1@mail.python.org> <3zLkQz2PyyzFqyc@mail.python.org> Message-ID: On 2018-01-16 22:47, Steve Dower wrote: > I think you mean out-of-band updates, and by "you" I'm going to pretend > you mean PyCA ;) Err, yes :) From solipsis at pitrou.net Wed Jan 17 06:03:31 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 17 Jan 2018 12:03:31 +0100 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> Message-ID: <20180117120331.17ba3b64@fsol> On Tue, 16 Jan 2018 17:18:06 -0800 Nathaniel Smith wrote: > On Tue, Jan 16, 2018 at 5:06 PM, Yury Selivanov wrote: > > > > I think it would be a very fragile thing in practice: if you have even > > one variable in the context that isn't pickleable, your code that uses > > a ProcessPool would stop working. I would defer Context pickleability > > to 3.8+. > > There's also a more fundamental problem: you need some way to match up > the ContextVar objects across the two processes, and right now they > don't have an attached __module__ or __qualname__. They have a name, though. So perhaps the name could serve as a unique identifier? Instead of being serialized as a bunch of ContextVars, the Context would then be serialized as a {name: value} dict. Regards Antoine.
From victor.stinner at gmail.com Wed Jan 17 09:34:59 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 17 Jan 2018 15:34:59 +0100 Subject: [Python-Dev] Positional-only parameters in Python Message-ID: Hi, In February 2017, I proposed on python-ideas to change the Python syntax to allow declaring positional-only parameters in Python: https://mail.python.org/pipermail/python-ideas/2017-February/044879.html https://mail.python.org/pipermail/python-ideas/2017-March/044956.html They are already supported at the C level, but not at the Python level. Our BDFL approved the idea: https://mail.python.org/pipermail/python-ideas/2017-March/044959.html But I didn't find time to implement it. Does someone want to work on an implementation of the idea? March 2, 2017 7:16 PM, "Brett Cannon" wrote: > It seems all the core devs who have commented on this are in the positive > (Victor, Yury, Ethan, Yury, Guido, Terry, and Steven; MAL didn't explicitly > vote). So to me that suggests there's enough support to warrant writing a > PEP. Are you up for writing it, Victor, or is someone else going to write > it? It seems like a PEP is needed. Victor From yselivanov.ml at gmail.com Wed Jan 17 10:21:34 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 17 Jan 2018 10:21:34 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: <20180117120331.17ba3b64@fsol> References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: On Wed, Jan 17, 2018 at 6:03 AM, Antoine Pitrou wrote: > On Tue, 16 Jan 2018 17:18:06 -0800 > Nathaniel Smith wrote: >> On Tue, Jan 16, 2018 at 5:06 PM, Yury Selivanov wrote: >> > >> > I think it would be a very fragile thing in practice: if you have even >> > one variable in the context that isn't pickleable, your code that uses >> > a ProcessPool would stop working. I would defer Context pickleability >> > to 3.8+.
>> >> There's also a more fundamental problem: you need some way to match up >> the ContextVar objects across the two processes, and right now they >> don't have an attached __module__ or __qualname__. > > They have a name, though. So perhaps the name could serve as a unique > identifier? Instead of being serialized as a bunch of ContextVars, the > Context would then be serialized as a {name: value} dict. One of the points of the ContextVar design is to avoid requiring unique identifiers. Names can clash, which leads to data being lost. If you prohibit them from clashing, then if libraries A and B happen to use the same context variable name, you can't use them both in your projects. And without enforcing name uniqueness, your approach to serialize context as a dict with string keys won't work. I like Nathaniel's idea to explicitly enable ContextVars pickling support on a per-var basis. Unfortunately we don't have time to seriously consider and debate (and implement!) this idea in time before the 3.7 freeze. In the meanwhile, given that Context objects are fully introspectable, users can implement their own ad-hoc solutions for serializers or cross-process execution. Yury From mariocj89 at gmail.com Wed Jan 17 10:23:22 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Wed, 17 Jan 2018 15:23:22 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: Hi Victor, I'd like to work on it if you accept "a random person" to work on it (saying it in case the mail was directed to core developers).
Regards, Mario On 17 January 2018 at 14:34, Victor Stinner wrote: > Hi, > > In February 2017, I proposed on python-ideas to change the Python > syntax to allow declaring positional-only parameters in Python: > > https://mail.python.org/pipermail/python-ideas/2017-February/044879.html > https://mail.python.org/pipermail/python-ideas/2017-March/044956.html > > They are already supported at the C level, but not at the Python level. > > Our BDFL approved the idea: > > https://mail.python.org/pipermail/python-ideas/2017-March/044959.html > > But I didn't find time to implement it. Does someone want to work on > an implementation of the idea? > > March 2, 2017 7:16 PM, "Brett Cannon" wrote: > > It seems all the core devs who have commented on this are in the positive > > (Victor, Yury, Ethan, Yury, Guido, Terry, and Steven; MAL didn't > explicitly > > vote). So to me that suggests there's enough support to warrant writing a > > PEP. Are you up for writing it, Victor, or is someone else going to write > > it? > > It seems like a PEP is needed. > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariocj89%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yjoshua59 at gmail.com Wed Jan 17 06:31:13 2018 From: yjoshua59 at gmail.com (Joshua Yeow) Date: Wed, 17 Jan 2018 19:31:13 +0800 Subject: [Python-Dev] python exe installer is broken Message-ID: Dear Python Developers I hope to be able to use the latest version of Python on my Windows 7 PC soon. The installer is very buggy and, as I have discovered, the new exe installer since version 3.5.0 introduces more and more bugs that are yet to be resolved.
Python erases all its files, except the installer and state.rsm within the Package Cache folder, if I change the installation directory and/or select "Install for all users" when undergoing the custom installation. After upgrading to version 3.6.4, Python erased all its files, except the installer and state.rsm within the Package Cache folder, because I have Python in a directory other than "..\AppData\Local\Programs\Python\Python36". The Python 3.6.4 installer won't install pip, and pip.exe won't be present in the Package Cache folder, even if it is selected in the custom installation, and modifying the installation to do so will cause Python to erase all its files, except the installer and state.rsm within the Package Cache folder. Trying to install pip from a pip.exe of a version previous to 3.6.4, such as 3.6.3, will be unsuccessful. Repairing Python 3.6.4 will instead install Python into "..\AppData\Local\Programs\Python\Python36" with all optional features except for pip. My only option is to keep the default directory and install Python from 3.6.3 or below. Ever since the first version of the new exe installer, installing to a directory other than "..\AppData\Local\Programs\Python\Python36" has proven impossible. The old msi installer had none of these issues, so I hope you developers actually try out the exe installer and see the problems for yourselves. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From storchaka at gmail.com Wed Jan 17 11:14:16 2018 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 17 Jan 2018 18:14:16 +0200 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: 17.01.18 16:34, Victor Stinner wrote: > In February 2017, I proposed on python-ideas to change the Python > syntax to allow declaring positional-only parameters in Python: > > https://mail.python.org/pipermail/python-ideas/2017-February/044879.html > https://mail.python.org/pipermail/python-ideas/2017-March/044956.html > > They are already supported at the C level, but not at the Python level. > > Our BDFL approved the idea: > > https://mail.python.org/pipermail/python-ideas/2017-March/044959.html > > But I didn't find time to implement it. Does someone want to work on > an implementation of the idea? The main problem is designing a syntax that does not look ugly. I think too little time is left before the feature freeze for experimenting with it. It would be better to make such changes at the early stage of development. From ethan at stoneleaf.us Wed Jan 17 11:29:16 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 17 Jan 2018 08:29:16 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: <5A5F79DC.7080407@stoneleaf.us> On 01/17/2018 08:14 AM, Serhiy Storchaka wrote: > 17.01.18 16:34, Victor Stinner wrote: >> In February 2017, I proposed on python-ideas to change the Python >> syntax to allow declaring positional-only parameters in Python: >> >> https://mail.python.org/pipermail/python-ideas/2017-February/044879.html >> https://mail.python.org/pipermail/python-ideas/2017-March/044956.html > > The main problem is designing a syntax that does not look ugly. I > think too little time is left before the feature freeze for > experimenting with it. It would be better to make such changes at the > early stage of development.
The syntax question is already solved: def some_func(a, b, /, this, that, *, the_other): # some stuff Everything before the slash is positional-only, between the slash and star is positional-or-keyword, and after the star is keyword-only. This is what is in our generated help(), and there is a nice symmetry between '/' and '*' being opposites, and positional/keyword being opposites. And slash is certainly no uglier than star. ;) -- ~Ethan~ From phd at phdru.name Wed Jan 17 11:44:21 2018 From: phd at phdru.name (Oleg Broytman) Date: Wed, 17 Jan 2018 17:44:21 +0100 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <5A5F79DC.7080407@stoneleaf.us> References: <5A5F79DC.7080407@stoneleaf.us> Message-ID: <20180117164421.GA30371@phdru.name> On Wed, Jan 17, 2018 at 08:29:16AM -0800, Ethan Furman wrote: > def some_func(a, b, /, this, that, *, the_other): > # some stuff > > Everything before the slash is positional-only, between the slash and star > is positional-or-keyword, and after the star is keyword-only. Is there syntax to combine def some_func(a, b, /, *, the_other): ??? Maybe def some_func(a, b, /*, the_other): ??? > And slash is > certainly no uglier than star. ;) I tend to agree. Both are absolutely but equally ugly. :-( > -- > ~Ethan~ Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From barry at python.org Wed Jan 17 11:52:21 2018 From: barry at python.org (Barry Warsaw) Date: Wed, 17 Jan 2018 08:52:21 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: <4AD0CE69-E3C9-4445-A001-096A7D55A911@python.org> On Jan 17, 2018, at 08:14, Serhiy Storchaka wrote: > > The main problem is designing a syntax that does not look ugly. I think too little time is left before the feature freeze for experimenting with it. It would be better to make such changes at the early stage of development. A PEP is definitely needed.
Can someone get a PEP written, approved, with the feature implemented in 12 days? As much as I like the idea, I think it's unlikely. But hey, you never know, so if someone is really motivated, go for it! The effort won't be wasted in any case, since 3.8's right around the corner. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From guido at python.org Wed Jan 17 12:04:26 2018 From: guido at python.org (Guido van Rossum) Date: Wed, 17 Jan 2018 09:04:26 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <4AD0CE69-E3C9-4445-A001-096A7D55A911@python.org> References: <4AD0CE69-E3C9-4445-A001-096A7D55A911@python.org> Message-ID: Let's aim for 3.8 here. We're already cramming in a lot of stuff for the feature freeze. On Wed, Jan 17, 2018 at 8:52 AM, Barry Warsaw wrote: > On Jan 17, 2018, at 08:14, Serhiy Storchaka wrote: > > > > The main problem is designing a syntax that does not look ugly. I think > too little time is left before the feature freeze for experimenting > with it. It would be better to make such changes at the early stage of > development. > > A PEP is definitely needed. Can someone get a PEP written, approved, with > the feature implemented in 12 days? As much as I like the idea, I think > it's unlikely. But hey, you never know, so if someone is really motivated, > go for it! The effort won't be wasted in any case, since 3.8's right > around the corner. > > -Barry > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sanyam.khurana01 at gmail.com Wed Jan 17 12:16:32 2018 From: sanyam.khurana01 at gmail.com (Sanyam Khurana) Date: Wed, 17 Jan 2018 22:46:32 +0530 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: Hi, On Wed, Jan 17, 2018 at 8:04 PM, Victor Stinner wrote: > Hi, > > In February 2017, I proposed on python-ideas to change the Python > syntax to allow declaring positional-only parameters in Python: > > https://mail.python.org/pipermail/python-ideas/2017-February/044879.html > https://mail.python.org/pipermail/python-ideas/2017-March/044956.html > > They are already supported at the C level, but not at the Python level. > > Our BDFL approved the idea: > > https://mail.python.org/pipermail/python-ideas/2017-March/044959.html > > But I didn't find time to implement it. Does someone want to work on > an implementation of the idea? > > March 2, 2017 7:16 PM, "Brett Cannon" wrote: >> It seems all the core devs who have commented on this are in the positive >> (Victor, Yury, Ethan, Yury, Guido, Terry, and Steven; MAL didn't explicitly >> vote). So to me that suggests there's enough support to warrant writing a >> PEP. Are you up for writing it, Victor, or is someone else going to write >> it? > > It seems like a PEP is needed. I followed the threads mentioned above, which led me to PEP 457: https://www.python.org/dev/peps/pep-0457/ I didn't find a clear indication if it was still to be modified, approved or rejected. Can anyone help? Also, on a second note, I can help with writing a PEP (would prefer co-authoring with someone), but this would be my first time and I would really appreciate guidance from core developers.
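As a historical footnote, the `/` marker discussed in this thread did eventually land, via PEP 570, in Python 3.8. A small sketch of the semantics Ethan describes (the functions are hypothetical; requires Python 3.8+):

```python
def some_func(a, b, /, this, *, the_other):
    # a, b: positional-only; this: positional-or-keyword;
    # the_other: keyword-only.
    return (a, b, this, the_other)

assert some_func(1, 2, 3, the_other=4) == (1, 2, 3, 4)
assert some_func(1, 2, this=3, the_other=4) == (1, 2, 3, 4)

# Passing a positional-only parameter by keyword raises TypeError:
try:
    some_func(a=1, b=2, this=3, the_other=4)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")

# Oleg's combined case is written with '/' and '*' as adjacent markers:
def combined(a, b, /, *, the_other):
    return (a, b, the_other)

assert combined(1, 2, the_other=3) == (1, 2, 3)
```

No fused `/*` token was added; the two markers simply appear back to back when there are no positional-or-keyword parameters between them.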
-- Mozilla Rep http://www.SanyamKhurana.com Github: CuriousLearner From gvanrossum at gmail.com Wed Jan 17 14:24:34 2018 From: gvanrossum at gmail.com (Guido van Rossum) Date: Wed, 17 Jan 2018 11:24:34 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: Perhaps you can update the PEP with a summary of the rejected ideas from this thread? On Jan 17, 2018 7:23 AM, "Yury Selivanov" wrote: > On Wed, Jan 17, 2018 at 6:03 AM, Antoine Pitrou > wrote: > > On Tue, 16 Jan 2018 17:18:06 -0800 > > Nathaniel Smith wrote: > >> On Tue, Jan 16, 2018 at 5:06 PM, Yury Selivanov < > yselivanov.ml at gmail.com> wrote: > >> > > >> > I think it would be a very fragile thing In practice: if you have even > >> > one variable in the context that isn't pickleable, your code that uses > >> > a ProcessPool would stop working. I would defer Context pickleability > >> > to 3.8+. > >> > >> There's also a more fundamental problem: you need some way to match up > >> the ContextVar objects across the two processes, and right now they > >> don't have an attached __module__ or __qualname__. > > > > They have a name, though. So perhaps the name could serve as a unique > > identifier? Instead of being serialized as a bunch of ContextVars, the > > Context would then be serialized as a {name: value} dict. > > One of the points of the ContextVar design is to avoid having unique > identifiers requirement. Names can clash which leads to data being > lost. If you prohibit them from clashing, then if libraries A and B > happen to use the same context variable name?you can't use them both > in your projects. And without enforcing name uniqueness, your > approach to serialize context as a dict with string keys won't work. > > I like Nathaniel's idea to explicitly enable ContextVars pickling > support on a per-var basis. Unfortunately we don't have time to > seriously consider and debate (and implement!) 
this idea in time > before the 3.7 freeze. > > In the meanwhile, given that Context objects are fully introspectable, > users can implement their own ad-hoc solutions for serializers or > cross-process execution. > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Jan 17 16:02:18 2018 From: brett at python.org (Brett Cannon) Date: Wed, 17 Jan 2018 21:02:18 +0000 Subject: [Python-Dev] python exe installer is broken In-Reply-To: References: Message-ID: This seems like a bug report and is best reported on bugs.python.org. On Wed, 17 Jan 2018 at 08:05 Joshua Yeow wrote: > Dear Python Developers > I hope to be able to use the latest version Python on my Windows 7 PC > soon. The installer is very buggy and as I have discovered that the new exe > installer since version 3.5.0 introduces more and more bugs that are yet to > be resolved. Python erases all it's files, except the installer and > state.rsm within the Package Cache folder, if I change the installation > directory and or select "Install for all users" when undergoing the custom > installation. After upgrading to version 3.6.4, Python erased all it's > files, except the installer and state.rsm within the Package Cache folder, > because I have Python in a directory other than > "..\AppData\Local\Programs\Python\Python36". The Python 3.6.4 installer > won't install pip, and pip.exe won't be present in the Package Cache > folder, even if it is selected in the custom installation and modifying the > installation to do so will cause Python to erases all it's files, > except the installer and state.rsm within the Package Cache folder. 
Trying > to install pip from a pip.exe of a previous version to 3.6.4, such as > 3.6.3, will be unsuccessful. Repairing Python 3.6.4 will instead install > Python into "..\AppData\Local\Programs\Python\Python36" with all optional > features except for pip. My only option is to keep the default directory > and install Python from 3.6.3 or below. Ever since the first version of the > new exe installer, installing to a directory other than > "..\AppData\Local\Programs\Python\Python36" has proven impossible. The old > msi installer had none of the issues so I hope you developers actually try > out the exe installer and see the problems for yourselves. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Wed Jan 17 19:13:23 2018 From: steve.dower at python.org (Steve Dower) Date: Thu, 18 Jan 2018 11:13:23 +1100 Subject: [Python-Dev] python exe installer is broken In-Reply-To: References: Message-ID: <26b7570b-207c-6ad9-ce4f-4aeb50388036@python.org> And please include all the Python log files from your %TEMP% directory. On 18Jan2018 0802, Brett Cannon wrote: > This seems like a bug report and is best reported on bugs.python.org > . > > On Wed, 17 Jan 2018 at 08:05 Joshua Yeow > wrote: > > Dear Python Developers > I hope to be able to use the latest version of Python on my Windows 7 > PC soon. The installer is very buggy and, as I have discovered, > the new exe installer since version 3.5.0 introduces more and more > bugs that are yet to be resolved. Python erases all its files, > except the installer and state.rsm within the Package Cache folder, > if I change the installation directory and/or select "Install for > all users" when undergoing the custom installation.
After upgrading > to version 3.6.4, Python erased all its > files, except the installer > and state.rsm within the Package Cache folder, because I have Python > in a directory other than > "..\AppData\Local\Programs\Python\Python36". The Python 3.6.4 > installer won't install pip, and pip.exe won't be present in the > Package Cache folder, even if it is selected in the custom > installation and modifying the installation to do so will cause > Python to erase all its files, except the installer and > state.rsm within the Package Cache folder. Trying to install pip > from a pip.exe of a previous version to 3.6.4, such as 3.6.3, will > be unsuccessful. Repairing Python 3.6.4 will instead install Python > into "..\AppData\Local\Programs\Python\Python36" with all optional > features except for pip. My only option is to keep the default > directory and install Python from 3.6.3 or below. Ever since the > first version of the new exe installer, installing to a directory > other than "..\AppData\Local\Programs\Python\Python36" has proven > impossible. The old msi installer had none of the issues so I hope > you developers actually try out the exe installer and see the > problems for yourselves. From victor.stinner at gmail.com Wed Jan 17 19:23:39 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Jan 2018 01:23:39 +0100 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: FYI In the PEP 540, I didn't try to elaborate on each design change, but I wrote a very short version history at the end: https://www.python.org/dev/peps/pep-0540/#version-history Maybe something like that would help for the PEP 567? Victor On 17 Jan 2018 8:26 PM, "Guido van Rossum" wrote: > Perhaps you can update the PEP with a summary of the rejected ideas from > this thread?
> > On Jan 17, 2018 7:23 AM, "Yury Selivanov" wrote: > >> On Wed, Jan 17, 2018 at 6:03 AM, Antoine Pitrou >> wrote: >> > On Tue, 16 Jan 2018 17:18:06 -0800 >> > Nathaniel Smith wrote: >> >> On Tue, Jan 16, 2018 at 5:06 PM, Yury Selivanov < >> yselivanov.ml at gmail.com> wrote: >> >> > >> >> > I think it would be a very fragile thing In practice: if you have >> even >> >> > one variable in the context that isn't pickleable, your code that >> uses >> >> > a ProcessPool would stop working. I would defer Context >> pickleability >> >> > to 3.8+. >> >> >> >> There's also a more fundamental problem: you need some way to match up >> >> the ContextVar objects across the two processes, and right now they >> >> don't have an attached __module__ or __qualname__. >> > >> > They have a name, though. So perhaps the name could serve as a unique >> > identifier? Instead of being serialized as a bunch of ContextVars, the >> > Context would then be serialized as a {name: value} dict. >> >> One of the points of the ContextVar design is to avoid having unique >> identifiers requirement. Names can clash which leads to data being >> lost. If you prohibit them from clashing, then if libraries A and B >> happen to use the same context variable name?you can't use them both >> in your projects. And without enforcing name uniqueness, your >> approach to serialize context as a dict with string keys won't work. >> >> I like Nathaniel's idea to explicitly enable ContextVars pickling >> support on a per-var basis. Unfortunately we don't have time to >> seriously consider and debate (and implement!) this idea in time >> before the 3.7 freeze. >> >> In the meanwhile, given that Context objects are fully introspectable, >> users can implement their own ad-hoc solutions for serializers or >> cross-process execution. 
>> >> Yury >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% >> 40python.org >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Jan 17 20:53:42 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 17 Jan 2018 20:53:42 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: On Wed, Jan 17, 2018 at 2:24 PM, Guido van Rossum wrote: > Perhaps you can update the PEP with a summary of the rejected ideas from > this thread? The Rejected Ideas section of the PEP is now updated with the below: Token.reset() instead of ContextVar.reset() ------------------------------------------- Nathaniel Smith suggested to implement the ``ContextVar.reset()`` method directly on the ``Token`` class, so instead of:: token = var.set(value) # ... var.reset(token) we would write:: token = var.set(value) # ... token.reset() Having ``Token.reset()`` would make it impossible for a user to attempt to reset a variable with a token object created by another variable. This proposal was rejected for the reason of ``ContextVar.reset()`` being clearer to the human reader of the code which variable is being reset. Make Context objects picklable ------------------------------ Proposed by Antoine Pitrou, this could enable transparent cross-process use of ``Context`` objects, so the `Offloading execution to other threads`_ example would work with a ``ProcessPoolExecutor`` too. 
Enabling this is problematic because of the following reasons: 1. ``ContextVar`` objects do not have ``__module__`` and ``__qualname__`` attributes, making straightforward pickling of ``Context`` objects impossible. This is solvable by modifying the API to either auto detect the module where a context variable is defined, or by adding a new keyword-only "module" parameter to ``ContextVar`` constructor. 2. Not all context variables refer to picklable objects. Making a ``ContextVar`` picklable must be an opt-in. Given the time frame of the Python 3.7 release schedule it was decided to defer this proposal to Python 3.8. Yury From ncoghlan at gmail.com Thu Jan 18 00:14:55 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jan 2018 15:14:55 +1000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: Message-ID: On 18 January 2018 at 03:16, Sanyam Khurana wrote: > On Wed, Jan 17, 2018 at 8:04 PM, Victor Stinner > wrote: >> It seems like a PEP is needed. > > I followed the threads mentioned above, which led me to PEP 457: > https://www.python.org/dev/peps/pep-0457/ > > I didn't find a clear indication if it was still to be modified, > approved or rejected. Can anyone help? Effectively deferred, since Guido decided we didn't need a PEP for the __text_signature__ syntax in the inspect module: >>> import inspect >>> inspect.signature(ord) >>> ord.__text_signature__ '($module, c, /)' (The motivation was to give Argument Clinic a way to communicate C level signatures up to Python code) A PEP for Python level positional-only argument syntax would be able to rely on Signature.__repr__ and __text_signature__ as precedent for using "/" to indicate that the preceding parameters are positional-only, though. Cheers, Nick. 
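Nick's point can be checked interactively: `inspect` already parses `__text_signature__`, so Python code can detect positional-only parameters of C builtins. For example, in CPython:

```python
import inspect

# ord() is a C builtin whose Argument Clinic metadata gives it a
# __text_signature__ of '($module, c, /)'; inspect parses that into
# a Signature whose lone parameter is positional-only.
sig = inspect.signature(ord)
param = sig.parameters["c"]
assert param.kind is inspect.Parameter.POSITIONAL_ONLY
assert str(sig) == "(c, /)"
```

The trailing `/` in the rendered signature is the same convention later reused for Python-level syntax.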
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Thu Jan 18 00:18:05 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 18 Jan 2018 00:18:05 -0500 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: On Wed, Jan 17, 2018 at 8:53 PM, Yury Selivanov wrote: > On Wed, Jan 17, 2018 at 2:24 PM, Guido van Rossum wrote: >> Perhaps you can update the PEP with a summary of the rejected ideas from >> this thread? > > The Rejected Ideas section of the PEP is now updated with the below: I've added two more subsections to Rejected Ideas: Make Context a MutableMapping ----------------------------- Making the ``Context`` class implement the ``abc.MutableMapping`` interface would mean that it is possible to set and unset variables using ``Context[var] = value`` and ``del Context[var]`` operations. This proposal was deferred to Python 3.8+ because of the following: 1. If in Python 3.8 it is decided that generators should support context variables (see :pep:`550` and :pep:`568`), then ``Context`` would be transformed into a chain-map of context variables mappings (as every generator would have its own mapping). That would make mutation operations like ``Context.__delitem__`` confusing, as they would operate only on the topmost mapping of the chain. 2. Having a single way of mutating the context (``ContextVar.set()`` and ``ContextVar.reset()`` methods) makes the API more straightforward. For example, it would be non-obvious why the below code fragment does not work as expected:: var = ContextVar('var') ctx = copy_context() ctx[var] = 'value' print(ctx[var]) # Prints 'value' print(var.get()) # Raises a LookupError While the following code would work:: ctx = copy_context() def func(): ctx[var] = 'value' # Contrary to the previous example, this would work # because 'func()' is running within 'ctx'. 
print(ctx[var]) print(var.get()) ctx.run(func) Have initial values for ContextVars ----------------------------------- Nathaniel Smith proposed to have a required ``initial_value`` keyword-only argument for the ``ContextVar`` constructor. The main argument against this proposal is that for some types there is simply no sensible "initial value" except ``None``. E.g. consider a web framework that stores the current HTTP request object in a context variable. With the current semantics it is possible to create a context variable without a default value:: # Framework: current_request: ContextVar[Request] = \ ContextVar('current_request') # Later, while handling an HTTP request: request: Request = current_request.get() # Work with the 'request' object: return request.method Note that in the above example there is no need to check if ``request`` is ``None``. It is simply expected that the framework always sets the ``current_request`` variable, or it is a bug (in which case ``current_request.get()`` would raise a ``LookupError``). If, however, we had a required initial value, we would have to guard against ``None`` values explicitly:: # Framework: current_request: ContextVar[Optional[Request]] = \ ContextVar('current_request', initial_value=None) # Later, while handling an HTTP request: request: Optional[Request] = current_request.get() # Check if the current request object was set: if request is None: raise RuntimeError # Work with the 'request' object: return request.method Moreover, we can loosely compare context variables to regular Python variables and to ``threading.local()`` objects. Both of them raise errors on failed lookups (``NameError`` and ``AttributeError`` respectively). 
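The lookup semantics argued for above can be exercised directly with the Python 3.7 ``contextvars`` module; a minimal sketch (the ``request_id`` variable name is illustrative):

```python
from contextvars import ContextVar

# No initial value is required, and a failed lookup raises LookupError
# rather than silently returning None.
request_id: ContextVar[str] = ContextVar('request_id')

try:
    request_id.get()               # unset and no default: LookupError
except LookupError:
    print('unset')

token = request_id.set('req-42')
print(request_id.get())            # req-42

request_id.reset(token)            # back to the unset state
print(request_id.get('fallback'))  # a per-call default needs no declaration
```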
Yury From solipsis at pitrou.net Thu Jan 18 03:03:52 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 18 Jan 2018 09:03:52 +0100 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: <20180118090352.03d183a5@fsol> On Wed, 17 Jan 2018 20:53:42 -0500 Yury Selivanov wrote: > > Proposed by Antoine Pitrou, this could enable transparent > cross-process use of ``Context`` objects, so the > `Offloading execution to other threads`_ example would work with > a ``ProcessPoolExecutor`` too. > > Enabling this is problematic because of the following reasons: > > 1. ``ContextVar`` objects do not have ``__module__`` and > ``__qualname__`` attributes, making straightforward pickling > of ``Context`` objects impossible. This is solvable by modifying > the API to either auto detect the module where a context variable > is defined, or by adding a new keyword-only "module" parameter > to ``ContextVar`` constructor. > > 2. Not all context variables refer to picklable objects. Making a > ``ContextVar`` picklable must be an opt-in. This is a red herring. If a value isn't picklable, pickle will simply raise as it does in other contexts. You shouldn't need to opt in for anything here. Regards Antoine. From njs at pobox.com Thu Jan 18 03:40:39 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 18 Jan 2018 00:40:39 -0800 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: <20180118090352.03d183a5@fsol> References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> <20180118090352.03d183a5@fsol> Message-ID: On Thu, Jan 18, 2018 at 12:03 AM, Antoine Pitrou wrote: > On Wed, 17 Jan 2018 20:53:42 -0500 > Yury Selivanov wrote: >> >> Proposed by Antoine Pitrou, this could enable transparent >> cross-process use of ``Context`` objects, so the >> `Offloading execution to other threads`_ example would work with >> a ``ProcessPoolExecutor`` too.
>> >> Enabling this is problematic because of the following reasons: >> >> 1. ``ContextVar`` objects do not have ``__module__`` and >> ``__qualname__`` attributes, making straightforward pickling >> of ``Context`` objects impossible. This is solvable by modifying >> the API to either auto detect the module where a context variable >> is defined, or by adding a new keyword-only "module" parameter >> to ``ContextVar`` constructor. >> >> 2. Not all context variables refer to picklable objects. Making a >> ``ContextVar`` picklable must be an opt-in. > > This is a red herring. If a value isn't picklable, pickle will simply > raise as it does in other contexts. You should't need to opt in for > anything here. The complication is that Contexts collect ContextVars from all over the process. So if people are going to pickle Contexts, we need some mechanism to make sure that we don't end up in a situation where it seems to work and users depend on it, and then they import a new library and suddenly pickling raises an error (because the new library internally uses a ContextVar that happens not to be pickleable). -n -- Nathaniel J. 
Smith -- https://vorpus.org From larry at hastings.org Thu Jan 18 04:26:44 2018 From: larry at hastings.org (Larry Hastings) Date: Thu, 18 Jan 2018 01:26:44 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <5A5F79DC.7080407@stoneleaf.us> References: <5A5F79DC.7080407@stoneleaf.us> Message-ID: <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> On 01/17/2018 08:29 AM, Ethan Furman wrote: > On 01/17/2018 08:14 AM, Serhiy Storchaka wrote: >> 17.01.18 16:34, Victor Stinner wrote: >>> In February 2017, I proposed on python-ideas to change the Python >>> syntax to allow to declare positional-only parameters in Python: >>> >>> https://mail.python.org/pipermail/python-ideas/2017-February/044879.html >>> >>> https://mail.python.org/pipermail/python-ideas/2017-March/044956.html >> >> The main problem -- designing a syntax that does not look ugly. > > The syntax question is already solved: > > def some_func(a, b, /, this, that, *, the_other): > # some stuff > > Everything before the slash is positional-only, between the slash and > star is positional-or-keyword, and after the star is keyword-only. > This is what is in our generated help(), and there is a nice symmetry > between '/' and '*' being opposites, and positional/keyword being > opposites. And slash is certainly no uglier than star. ;) To clarify: this is the syntax used by "Argument Clinic", both as its input language, and as part of its output, exposed via the __text_signature__ attribute on builtins. Why did Argument Clinic choose that syntax? It was suggested by one Guido van Rossum in March 2012: https://mail.python.org/pipermail/python-ideas/2012-March/014364.html https://mail.python.org/pipermail/python-ideas/2012-March/014378.html https://mail.python.org/pipermail/python-ideas/2012-March/014417.html I'm not wading into the debate over what syntax Python should use if it adds positional-only parameters, except to say that I think "/" is reasonable.
If Python winds up using a different syntax, I'd look into modifying Argument Clinic so that it accepts both this hypothetical new syntax and the existing syntax using "/". Would we be adding yet a third argument-parsing function, PyArg_ParseTupleAndKeywordsWithPositionalOnly()? I would actually propose a different approach: modify Argument Clinic so it generates custom argument-parsing code for each function, adding a new call type (which I propose calling "METH_RAW" or "METH_STACK") where the stack is passed in directly. I spent some time on this in the past, though I got distracted and now haven't touched it in years. Cheers, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Thu Jan 18 06:12:40 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Jan 2018 12:12:40 +0100 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: 2018-01-18 10:26 GMT+01:00 Larry Hastings : > Why did Argument Clinic choose that syntax? It was suggested by one Guido > van Rossum in March 2012: (...) > I'm not wading into the debate over what syntax Python should use if it adds > positional-only parameters, except to say that I think "/" is reasonable. > If Python winds up using a different syntax, I'd look into modifying > Argument Clinic so that it accepts both this hypothetical new syntax and the > existing syntax using "/". The "/" syntax has been used since Python 3.5, at least in some function docstrings: --- $ python3.5 Python 3.5.4+ (heads/3.5:fd8614c5c5, Dec 18 2017, 12:53:10) >>> help(abs) Help on built-in function abs in module builtins: abs(x, /) Return the absolute value of the argument.
--- inspect.signature() is able to parse this syntax, but currently, it is only used to parse the __text_signature__ attribute of builtin functions. Victor From ncoghlan at gmail.com Thu Jan 18 09:40:15 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jan 2018 00:40:15 +1000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 18 January 2018 at 19:26, Larry Hastings wrote: > Would we be adding yet a third argument-parsing function, > PyArg_ParseTupleAndKeywordsWithPositionalOnly()? Checking the docs, it turns out PyArg_ParseTupleAndKeywords already gained positional-only argument support in 3.6 by way of empty strings in the keyword array. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From christian at python.org Thu Jan 18 10:32:16 2018 From: christian at python.org (Christian Heimes) Date: Thu, 18 Jan 2018 16:32:16 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-16 21:17, Christian Heimes wrote: > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have > created a daily cronjob to populate Travis' cache with OpenSSL builds. > Until the cache is filled, Linux CI will take an extra 5 minute. I have messed up my initial research. :( When I was checking LibreSSL and OpenSSL for features, I drew the wrong conclusion. LibreSSL is *not* OpenSSL 1.0.2 compatible. It only implements some of the required features from 1.0.2 (e.g. X509_check_hostname) but not X509_VERIFY_PARAM_set1_host. X509_VERIFY_PARAM_set1_host() is required to perform hostname verification during the TLS handshake. Without the function, I'm unable to fix Python's hostname matching code [1]. LibreSSL upstream has known about the issue since 2016 [2]. I have opened another bug report [3].
We have two options until LibreSSL has addressed the issue: 1) Make the SSL module more secure, simpler and standards-conforming 2) Support LibreSSL I started a vote on Twitter [4]. So far most people prefer security. Christian [1] https://bugs.python.org/issue31399 [2] https://github.com/pyca/cryptography/issues/3247 [3] https://github.com/libressl-portable/portable/issues/381 [4] https://twitter.com/reaperhulk/status/953991843565490176 From wes.turner at gmail.com Thu Jan 18 13:42:08 2018 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 18 Jan 2018 13:42:08 -0500 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: Is there a build flag or a ./configure-time autodetection that would allow for supporting LibreSSL while they port X509_VERIFY_PARAM_set1_host? On Thursday, January 18, 2018, Christian Heimes wrote: > On 2018-01-16 21:17, Christian Heimes wrote: > > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have > > created a daily cronjob to populate Travis' cache with OpenSSL builds. > > Until the cache is filled, Linux CI will take an extra 5 minute. > > I have messed up my initial research. :( When I was checking LibreSSL > and OpenSSL for features, I draw a wrong conclusion. LibreSSL is *not* > OpenSSL 1.0.2 compatible. It only implements some of the required > features from 1.0.2 (e.g. X509_check_hostname) but not > X509_VERIFY_PARAM_set1_host. > > X509_VERIFY_PARAM_set1_host() is required to perform hostname > verification during the TLS handshake. Without the function, I'm unable > to fix Python's hostname matching code [1]. LibreSSL upstream knows > about the issue since 2016 [2]. I have opened another bug report [3]. > > We have two options until LibreSSL has addressed the issue: > > 1) Make the SSL module more secure, simpler and standard conform > 2) Support LibreSSL > > I started a vote on Twitter [4]. So far most people prefer security.
> > Christian > > [1] https://bugs.python.org/issue31399 > [2] https://github.com/pyca/cryptography/issues/3247 > [3] https://github.com/libressl-portable/portable/issues/381 > [4] https://twitter.com/reaperhulk/status/953991843565490176 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Thu Jan 18 14:15:03 2018 From: christian at python.org (Christian Heimes) Date: Thu, 18 Jan 2018 20:15:03 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-18 19:42, Wes Turner wrote: > Is there a build flag or a ./configure-time autodetection that would > allow for supporting LibreSSL while they port X509_VERIFY_PARAM_set1_host? X509_VERIFY_PARAM_set1_host() is a fundamental and essential piece in the new hostname verification code. I cannot replace ssl.match_hostname() easily without the API. There might be a way to add a callback, but it would take a couple of days of R&D to implement it. It won't be finished for beta1 feature freeze. Christian From wes.turner at gmail.com Thu Jan 18 14:54:08 2018 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 18 Jan 2018 14:54:08 -0500 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: LibreSSL is not a pressing need for me; but fallback to the existing insecure check if LibreSSL is present shouldn't be too difficult? On Thursday, January 18, 2018, Christian Heimes wrote: > On 2018-01-18 19:42, Wes Turner wrote: > > Is there a build flag or a ./configure-time autodetection that would > > allow for supporting LibreSSL while they port > X509_VERIFY_PARAM_set1_host? 
> > X509_VERIFY_PARAM_set1_host() is a fundamental and essential piece in > the new hostname verification code. I cannot replace > ssl.match_hostname() easily without the API. There might be a way to add > a callback, but it would take a couple of days of R&D to implement it. > It won't be finished for beta1 feature freeze. > > Christian > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Thu Jan 18 15:10:30 2018 From: christian at python.org (Christian Heimes) Date: Thu, 18 Jan 2018 21:10:30 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-18 20:54, Wes Turner wrote: > LibreSSL is not a pressing need for me; but fallback to the existing > insecure check if LibreSSL is present shouldn't be too difficult? Please give it a try and report back. Patches welcome :) Christian From victor.stinner at gmail.com Thu Jan 18 15:27:29 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Jan 2018 21:27:29 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? Message-ID: Hi, I'm working on an exhaustive list of platforms supported by Python: http://vstinner.readthedocs.io/cpython.html#supported-platforms I noticed that the extended support phase of Windows Vista has expired, so I proposed to drop Vista support: "Drop support of Windows Vista in Python 3.7" https://bugs.python.org/issue32592 https://github.com/python/cpython/pull/5231 Python has an explicit policy for Windows support, extract of the PEP 11: "CPython's Windows support now follows [Microsoft product support lifecycle].
A new feature release X.Y.0 will support all Windows releases whose extended support phase is not yet expired. Subsequent bug fix releases will support the same Windows releases as the original feature release (even if the extended support phase has ended)." For Linux and FreeBSD, we have no explicit rule. The CPython code base still contains code for FreeBSD 4... but FreeBSD 4 support ended more than 10 years ago (January 31, 2007). Maybe it's time to drop support of these old platforms to clean up the CPython code base to ease its maintenance. I proposed: "Drop FreeBSD 9 and older support:" https://bugs.python.org/issue32593 https://github.com/python/cpython/pull/5232 FreeBSD 9 support ended 1 year ago (December 2016). FreeBSD support: https://www.freebsd.org/security/ https://www.freebsd.org/security/unsupported.html CPython still has compatibility code for Linux 2.6, whereas the support of Linux 2.6.x ended in August 2011, more than 6 years ago. Should we also drop support for old Linux kernels? If yes, which ones? The Linux kernel has LTS versions; the oldest is Linux 3.2 (support will end in May, 2018). Linux kernel support: https://www.kernel.org/category/releases.html Note: I'm only talking about changing the future Python 3.7. We should have the same support policy as for Windows. If Python 3.x.0 supports a platform, this support should be kept in the whole lifetime of the 3.x cycle (until its end-of-life). Victor From chris.jerdonek at gmail.com Thu Jan 18 15:49:50 2018 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Thu, 18 Jan 2018 20:49:50 +0000 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 7:34 AM Christian Heimes wrote: > On 2018-01-16 21:17, Christian Heimes wrote: > We have two options until LibreSSL has addressed the issue: > > 1) Make the SSL module more secure, simpler and standard conform > 2) Support LibreSSL > > I started a vote on Twitter [4].
So far most people prefer security. It's not exactly the most balanced (neutral) presentation of a ballot question though. :) --Chris > > Christian > [1] https://bugs.python.org/issue31399 > [2] https://github.com/pyca/cryptography/issues/3247 > [3] https://github.com/libressl-portable/portable/issues/381 > [4] https://twitter.com/reaperhulk/status/953991843565490176 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Thu Jan 18 16:09:24 2018 From: christian at python.org (Christian Heimes) Date: Thu, 18 Jan 2018 22:09:24 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-18 21:49, Chris Jerdonek wrote: > > On Thu, Jan 18, 2018 at 7:34 AM Christian Heimes > wrote: > > On 2018-01-16 21:17, Christian Heimes wrote: > We have two options until LibreSSL has addressed the issue: > > 1) Make the SSL module more secure, simpler and standard conform > 2) Support LibreSSL > > I started a vote on Twitter [4]. So far most people prefer security. > > > It's not exactly the most balanced (neutral) presentation of a ballot > question though. :) It's more venting than voting :) From njs at pobox.com Thu Jan 18 19:09:28 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 18 Jan 2018 16:09:28 -0800 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On Jan 18, 2018 07:34, "Christian Heimes" wrote: On 2018-01-16 21:17, Christian Heimes wrote: > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have > created a daily cronjob to populate Travis' cache with OpenSSL builds. > Until the cache is filled, Linux CI will take an extra 5 minute. I have messed up my initial research.
:( When I was checking LibreSSL and OpenSSL for features, I draw a wrong conclusion. LibreSSL is *not* OpenSSL 1.0.2 compatible. It only implements some of the required features from 1.0.2 (e.g. X509_check_hostname) but not X509_VERIFY_PARAM_set1_host. X509_VERIFY_PARAM_set1_host() is required to perform hostname verification during the TLS handshake. Without the function, I'm unable to fix Python's hostname matching code [1]. LibreSSL upstream knows about the issue since 2016 [2]. I have opened another bug report [3]. We have two options until LibreSSL has addressed the issue: 1) Make the SSL module more secure, simpler and standard conform 2) Support LibreSSL There are tons of different SSL libraries out there that we could theoretically support, but don't. IIUC, the reasons we started supporting LibreSSL in the first place were: - it claimed to be OpenSSL compatible, so supporting it was supposed to be "free" - when it started (just after heartbleed), there was a lot of uncertainty about whether OpenSSL would remain a viable option, and LibreSSL looked like it might become the new de facto standard. Now it's a few years later, and things have turned out differently: they aren't compatible in practice, and OpenSSL has turned things around so that it's clearly ahead of LibreSSL technically (cleaner API, new features like TLS 1.3, ...), and in terms of developer momentum. We have *very* few people qualified to maintain the ssl module, so given the new landscape I think we should focus on keeping our core OpenSSL support solid and not worry about LibreSSL. If LibreSSL wants to be supported as well then -- like any other 2nd tier platform -- they need to find someone to do the work. And if people are worried about supporting more diversity in SSL implementations, then PEP 543 is probably the thing to focus on. -n -------------- next part -------------- An HTML attachment was scrubbed...
URL: From benjamin at python.org Fri Jan 19 00:36:58 2018 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 18 Jan 2018 21:36:58 -0800 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: References: Message-ID: <1516340218.2647377.1240674648.28C9CA42@webmail.messagingengine.com> +1 to both of your specific proposals. More generally, I think it makes good sense to allow dropping support for a platform in the next major Python release after vendor support for the platform stops. Even if we say we support something, it will break quickly without buildbot validation. On Thu, Jan 18, 2018, at 12:27, Victor Stinner wrote: > Hi, > > I'm working on an exhaustive list of platforms supported by Python: > > http://vstinner.readthedocs.io/cpython.html#supported-platforms > > > I noticed that the extended support phase of Windows Vista is expired, > so I proposed to drop Vista support: > > "Drop support of Windows Vista in Python 3.7" > https://bugs.python.org/issue32592 > https://github.com/python/cpython/pull/5231 > > Python has an explicit policy for Windows support, extract of the PEP 11: > > "CPython's Windows support now follows [Microsoft product support > lifecycle]. A new feature release X.Y.0 will support all Windows > releases whose extended support phase is not yet expired. Subsequent > bug fix releases will support the same Windows releases as the > original feature release (even if the extended support phase has > ended)." > > > For Linux and FreeBSD, we have no explicit rule. CPython code base > still contains code for FreeBSD 4... but FreeBSD 4 support ended > longer than 10 years ago (January 31, 2007). Maybe it's time to drop > support of these old platforms to cleanup the CPython code base to > ease its maintenance. > > I proposed: "Drop FreeBSD 9 and older support:" > > https://bugs.python.org/issue32593 > https://github.com/python/cpython/pull/5232 > > FreeBSD 9 supported ended 1 year ago (December 2016).
> > FreeBSD support: > > https://www.freebsd.org/security/ > https://www.freebsd.org/security/unsupported.html > > > CPython still has compatibility code for Linux 2.6, whereas the > support of Linux 2.6.x ended in August 2011, longer than 6 years ago. > Should we also drop support for old Linux kernels? If yes, which ones? > The Linux kernel has LTS version, the oldest is Linux 3.2 (support > will end in May, 2018). > > Linux kernel support: > > https://www.kernel.org/category/releases.html > > > Note: I'm only talking about changing the future Python 3.7. We should > have the same support policy than for Windows. If Python 3.x.0 > supports a platform, this support should be kept in the whole lifetime > of the 3.x cycle (until it's end-of-line). > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/benjamin%40python.org From njs at pobox.com Fri Jan 19 01:04:34 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 18 Jan 2018 22:04:34 -0800 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 12:27 PM, Victor Stinner wrote: > CPython still has compatibility code for Linux 2.6, whereas the > support of Linux 2.6.x ended in August 2011, longer than 6 years ago. > Should we also drop support for old Linux kernels? If yes, which ones? > The Linux kernel has LTS version, the oldest is Linux 3.2 (support > will end in May, 2018). > > Linux kernel support: > > https://www.kernel.org/category/releases.html RHEL 5 uses 2.6.28, and still has "extended life cycle support" until 2020, but I guess no-one should be running Python 3.7 on that. CentOS 6 and RHEL 6 use 2.6.32, and their EOL is also 2020 (or 2024 for RHEL 6 with extended life cycle support). 
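To make the kernel-version boundaries in this sub-thread concrete, here is a small, purely illustrative helper (not CPython code; the function name is made up) that turns a Linux release string like the ones quoted into a comparable tuple:

```python
# Hypothetical helper, for illustration only: compatibility fallbacks
# such as the Linux 2.6 code discussed above could be gated on version
# boundaries like 2.6 or 3.2.
def parse_kernel_release(release: str) -> tuple:
    base = release.split('-')[0]      # '2.6.32-431.el6.x86_64' -> '2.6.32'
    return tuple(int(part) for part in base.split('.'))

print(parse_kernel_release('2.6.32-431.el6.x86_64'))   # (2, 6, 32)
print(parse_kernel_release('3.2.0') >= (3, 2))         # True
```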
Redhat does ship and support 3.6 on CentOS/RHEL 6 through their "software collections" product, and presumably is planning to do the same for 3.7. It is a little surprising to see a Redhat employee suggest dropping support for RHEL 6. Hopefully you know what you're doing :-) -n -- Nathaniel J. Smith -- https://vorpus.org From christian at python.org Fri Jan 19 03:34:35 2018 From: christian at python.org (Christian Heimes) Date: Fri, 19 Jan 2018 09:34:35 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: <1516340218.2647377.1240674648.28C9CA42@webmail.messagingengine.com> References: <1516340218.2647377.1240674648.28C9CA42@webmail.messagingengine.com> Message-ID: On 2018-01-19 06:36, Benjamin Peterson wrote: > +1 to both of your specific proposals. > > More generally, I think it makes good sense to allow dropping support for a platform in the next major Python release after vendor support for the platform stops. Even we say we support something, it will break quickly without buildbot validation. Do you mean minor release? We haven't done a major release in about a decade. :) I was going to suggest a similar policy for OpenSSL. Christian From tjreedy at udel.edu Fri Jan 19 04:10:51 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 19 Jan 2018 04:10:51 -0500 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: References: Message-ID: On 1/19/2018 1:04 AM, Nathaniel Smith wrote: > On Thu, Jan 18, 2018 at 12:27 PM, Victor Stinner > wrote: >> CPython still has compatibility code for Linux 2.6, whereas the >> support of Linux 2.6.x ended in August 2011, longer than 6 years ago. >> Should we also drop support for old Linux kernels? If yes, which ones? >> The Linux kernel has LTS version, the oldest is Linux 3.2 (support >> will end in May, 2018). 
>> >> Linux kernel support: >> >> https://www.kernel.org/category/releases.html > > RHEL 5 uses 2.6.28, and still has "extended life cycle support" until > 2020, but I guess no-one should be running Python 3.7 on that. CentOS > 6 and RHEL 6 use 2.6.32, and their EOL is also 2020 (or 2024 for RHEL > 6 with extended life cycle support). Redhat does ship and support 3.6 > on CentOS/RHEL 6 through their "software collections" product, and > presumably is planning to do the same for 3.7. > > It is a little surprising to see a Redhat employee suggest dropping > support for RHEL 6. Hopefully you know what you're doing :-) Microsoft offers *paid* private support of some sort to corporations for publicly unsupported versions of Windows, such as xp and, we may expect, Vista. But we dropped support of xp and should, by policy, do so for Vista. I am not familiar with what Redhat does, but if it is similar, I think we should apply the same policy. -- Terry Jan Reedy From solipsis at pitrou.net Fri Jan 19 04:26:33 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 19 Jan 2018 10:26:33 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? References: Message-ID: <20180119102633.1358b54d@fsol> On Thu, 18 Jan 2018 22:04:34 -0800 Nathaniel Smith wrote: > On Thu, Jan 18, 2018 at 12:27 PM, Victor Stinner > wrote: > > CPython still has compatibility code for Linux 2.6, whereas the > > support of Linux 2.6.x ended in August 2011, longer than 6 years ago. > > Should we also drop support for old Linux kernels? If yes, which ones? > > The Linux kernel has LTS version, the oldest is Linux 3.2 (support > > will end in May, 2018). > > > > Linux kernel support: > > > > https://www.kernel.org/category/releases.html > > RHEL 5 uses 2.6.28, and still has "extended life cycle support" until > 2020, but I guess no-one should be running Python 3.7 on that. 
CentOS > 6 and RHEL 6 use 2.6.32, and their EOL is also 2020 (or 2024 for RHEL > 6 with extended life cycle support). What is the problem with supporting Linux 2.6? Do we need to rely on newer features? (which ones?) I'm sure some people will want to be running Python 3.7 on RHEL 6 or CentOS 6. After all, there's a reason RedHat provides 3.6 builds for it. Of course, you may say it's RedHat's (or other vendors') responsibility to apply compatibility patches. But if the compatibility code is already there, maybe it doesn't cost us much to keep it? Regards Antoine. From pablogsal at gmail.com Fri Jan 19 04:28:01 2018 From: pablogsal at gmail.com (Pablo Galindo Salgado) Date: Fri, 19 Jan 2018 09:28:01 +0000 Subject: [Python-Dev] Exposing different versions of a system call in Python Message-ID: Hello everyone, In today's episode of exposing useful Linux system calls I am exposing preadv2 in this PR: https://github.com/python/cpython/pull/5239 as requested in this issue: https://bugs.python.org/issue31368 As njsmith has commented in the PR, preadv2 only exists because regular preadv was missing a flags argument, and in C the only way to add an argument is to make a new function. In Python we have already exposed preadv2 and a possible solution would be to add an optional argument that passes the new parameter to preadv and calls preadv2 if this happens. On the other hand, we have pipe and pipe2 as an example of exposing two versions when this situation happens. The question is: What is preferable, exposing both functions or augmenting the old one? Thank you everyone for your time! -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri Jan 19 04:29:58 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 19 Jan 2018 10:29:58 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels?
References: Message-ID: <20180119102958.6b51602c@fsol> On Thu, 18 Jan 2018 22:04:34 -0800 Nathaniel Smith wrote: > On Thu, Jan 18, 2018 at 12:27 PM, Victor Stinner > wrote: > > CPython still has compatibility code for Linux 2.6, whereas the > > support of Linux 2.6.x ended in August 2011, longer than 6 years ago. > > Should we also drop support for old Linux kernels? If yes, which ones? > > The Linux kernel has LTS version, the oldest is Linux 3.2 (support > > will end in May, 2018). > > > > Linux kernel support: > > > > https://www.kernel.org/category/releases.html > > RHEL 5 uses 2.6.28, and still has "extended life cycle support" until > 2020, but I guess no-one should be running Python 3.7 on that. CentOS > 6 and RHEL 6 use 2.6.32, and their EOL is also 2020 (or 2024 for RHEL > 6 with extended life cycle support). As another data point, 2.6 is often run on cheap OpenVZ-based virtual servers. For example, I have this OVH entry-level VPS (2014 range) with an up-to-date Debian stable: $ cat /etc/debian_version 9.3 $ uname -rv 2.6.32-042stab127.2 #1 SMP Thu Jan 4 16:41:44 MSK 2018 Regards Antoine. From steve at holdenweb.com Fri Jan 19 04:43:35 2018 From: steve at holdenweb.com (Steve Holden) Date: Fri, 19 Jan 2018 09:43:35 +0000 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 12:09 AM, Nathaniel Smith wrote: > On Jan 18, 2018 07:34, "Christian Heimes" wrote: > > On 2018-01-16 21:17, Christian Heimes wrote: > > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have > > created a daily cronjob to populate Travis' cache with OpenSSL builds. > > Until the cache is filled, Linux CI will take an extra 5 minute. > > I have messed up my initial research. :( When I was checking LibreSSL > and OpenSSL for features, I draw a wrong conclusion. LibreSSL is *not* > OpenSSL 1.0.2 compatible. It only implements some of the required > features from 1.0.2 (e.g. 
X509_check_hostname) but not > X509_VERIFY_PARAM_set1_host. > > X509_VERIFY_PARAM_set1_host() is required to perform hostname > verification during the TLS handshake. Without the function, I'm unable > to fix Python's hostname matching code [1]. LibreSSL upstream has known > about the issue since 2016 [2]. I have opened another bug report [3]. > > We have two options until LibreSSL has addressed the issue: > > 1) Make the SSL module more secure, simpler and standards-conformant > 2) Support LibreSSL > > > [...] > > We have *very* few people qualified to maintain the ssl module, so given > the new landscape I think we should focus on keeping our core OpenSSL > support solid and not worry about LibreSSL. If LibreSSL wants to be > supported as well then -- like any other 2nd tier platform -- they need to > find someone to do the work. And if people are worried about supporting > more diversity in SSL implementations, then PEP 543 is probably the thing > to focus on. > > Given the hard limit on resources it seems only sensible to focus on the "industry standard" library. I'm rather disappointed that LibreSSL isn't a choice, but given the lack of compatibility that's hardly Python's problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Fri Jan 19 09:42:52 2018 From: christian at python.org (Christian Heimes) Date: Fri, 19 Jan 2018 15:42:52 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-19 10:43, Steve Holden wrote: > On Fri, Jan 19, 2018 at 12:09 AM, Nathaniel Smith > wrote: > > On Jan 18, 2018 07:34, "Christian Heimes" > wrote: > > On 2018-01-16 21:17, Christian Heimes wrote: > > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have > > created a daily cronjob to populate Travis' cache with OpenSSL builds. > > Until the cache is filled, Linux CI will take an extra 5 minutes. > > I have messed up my initial research. 
:( When I was checking > LibreSSL > and OpenSSL for features, I drew a wrong conclusion. LibreSSL is > *not* > OpenSSL 1.0.2 compatible. It only implements some of the required > features from 1.0.2 (e.g. X509_check_hostname) but not > X509_VERIFY_PARAM_set1_host. > > X509_VERIFY_PARAM_set1_host() is required to perform hostname > verification during the TLS handshake. Without the function, I'm > unable > to fix Python's hostname matching code [1]. LibreSSL upstream has known > about the issue since 2016 [2]. I have opened another bug report > [3]. > > We have two options until LibreSSL has addressed the issue: > > 1) Make the SSL module more secure, simpler and standards-conformant > 2) Support LibreSSL > > > [...] > > We have *very* few people qualified to maintain the ssl module, so > given the new landscape I think we should focus on keeping our core > OpenSSL support solid and not worry about LibreSSL. If LibreSSL > wants to be supported as well then -- like any other 2nd tier > platform -- they need to find someone to do the work. And if people > are worried about supporting more diversity in SSL implementations, > then PEP 543 is probably the thing to focus on. > > Given the hard limit on resources it seems only sensible to focus on > the "industry standard" library. I'm rather disappointed that LibreSSL > isn't a choice, but given the lack of compatibility that's hardly > Python's problem. Thanks! I'd prefer to support LibreSSL, too. Paul Kehrer from PyCA summed up the issue with LibreSSL nicely: > It was marketed as an API compatible drop-in replacement and it is failing in that capacity. Additionally, it is missing features needed to improve the security and ease the maintenance burden of CPython's dev team. Since I haven't given up on LibreSSL, I spent some time and implemented some autoconf magic in https://github.com/python/cpython/pull/5242. It checks for the presence of libssl and the X509_VERIFY_PARAM_set1_host() function family: ... 
checking whether compiling and linking against OpenSSL works... yes checking for X509_VERIFY_PARAM_set1_host in libssl... yes ... The ssl module will regain compatibility with LibreSSL as soon as a new release provides the necessary functions. Christian From random832 at fastmail.com Fri Jan 19 11:32:19 2018 From: random832 at fastmail.com (Random832) Date: Fri, 19 Jan 2018 11:32:19 -0500 Subject: [Python-Dev] Exposing different versions of a system call in Python In-Reply-To: References: Message-ID: <1516379539.3409900.1241239696.7DE2F89C@webmail.messagingengine.com> On Fri, Jan 19, 2018, at 04:28, Pablo Galindo Salgado wrote: > On the other side, we have pipe and pipe2 as an example of exposing two > versions when this situation happens. > > The question is: > > What is preferable, exposing both functions or augment the old one? A large number, possibly a majority, of system calls in the os module have "dir_fd" arguments that cause them to call the *at counterpart of the underlying system call. And chdir can also be called with a file descriptor, which will call fchdir (though there is also an fchdir function). I don't know why pipe2 was implemented as a separate call. From status at bugs.python.org Fri Jan 19 12:09:50 2018 From: status at bugs.python.org (Python tracker) Date: Fri, 19 Jan 2018 18:09:50 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20180119170950.C3FAB11A8FB@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2018-01-12 - 2018-01-19) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 6400 (+31) closed 37949 (+28) total 44349 (+59) Open issues with patches: 2489 Issues opened (46) ================== #32493: UUID Module - FreeBSD build failure https://bugs.python.org/issue32493 reopened by vstinner #32540: venv docs - doesn't match behavior https://bugs.python.org/issue32540 opened by jason.coombs #32541: cgi.FieldStorage constructor assumes all lines terminate with https://bugs.python.org/issue32541 opened by Ian Craggs #32542: memory not freed, aka memory leak continues... https://bugs.python.org/issue32542 opened by Michael.Felt #32545: Unable to install Python 3.7.0a4 on Windows 10 - Error 0x80070 https://bugs.python.org/issue32545 opened by mwr256 #32546: Unusual TypeError with dataclass decorator https://bugs.python.org/issue32546 opened by rhettinger #32547: csv.DictWriter emits strange errors if fieldnames is an iterat https://bugs.python.org/issue32547 opened by bendotc #32548: IDLE: Add links for email and docs to help_about https://bugs.python.org/issue32548 opened by csabella #32549: Travis: Test with OpenSSL 1.1.0 https://bugs.python.org/issue32549 opened by christian.heimes #32550: STORE_ANNOTATION bytecode is unnecessary and can be removed. 
https://bugs.python.org/issue32550 opened by Mark.Shannon #32551: Zipfile & directory execution in 3.5.4 also adds the parent di https://bugs.python.org/issue32551 opened by nedbat #32552: Improve text for file arguments in argparse.ArgumentDefaultsHe https://bugs.python.org/issue32552 opened by elypter #32553: venv says to use python3 which does not exist in 3.6.4 https://bugs.python.org/issue32553 opened by Paul Watson #32554: random.seed(tuple) uses the randomized hash function and so is https://bugs.python.org/issue32554 opened by johnnyd #32555: Encoding issues with the locale encoding https://bugs.python.org/issue32555 opened by vstinner #32556: support bytes paths in nt _getdiskusage, _getvolumepathname, a https://bugs.python.org/issue32556 opened by eryksun #32557: allow shutil.disk_usage to take a file path https://bugs.python.org/issue32557 opened by eryksun #32560: inherit the py launcher's STARTUPINFO https://bugs.python.org/issue32560 opened by eryksun #32561: Add API to io objects for non-blocking reads/writes https://bugs.python.org/issue32561 opened by njs #32562: Support fspath protocol in AF_UNIX sockaddr resolution https://bugs.python.org/issue32562 opened by njs #32563: -Werror=declaration-after-statement expat build failure on Pyt https://bugs.python.org/issue32563 opened by ncoghlan #32565: Document the version of adding opcodes https://bugs.python.org/issue32565 opened by serhiy.storchaka #32566: Not able to open Python IDLE https://bugs.python.org/issue32566 opened by Kiran #32567: Venv's config file (pyvenv.cfg) should be compatible with Co https://bugs.python.org/issue32567 opened by uranusjr #32568: Fix handling of sizehint=-1 in select.epoll() https://bugs.python.org/issue32568 opened by taleinat #32571: Speed up and clean up getting optional attributes in C code https://bugs.python.org/issue32571 opened by serhiy.storchaka #32572: Add the ftplib option, overrides the IP address. 
https://bugs.python.org/issue32572 opened by studioes #32573: sys.argv documentation should include caveat for embedded envi https://bugs.python.org/issue32573 opened by pgacv2 #32574: asyncio.Queue, put() leaks memory if the queue is full https://bugs.python.org/issue32574 opened by Mordis #32579: UUID module fix, uuid1 python module function https://bugs.python.org/issue32579 opened by David CARLIER2 #32580: Fallback to dev_urandom doesn't work when py_getrandom returns https://bugs.python.org/issue32580 opened by jernejs #32581: A bug of the write funtion of ConfigParser.py https://bugs.python.org/issue32581 opened by jiangjinhu666 #32582: chr raises OverflowError https://bugs.python.org/issue32582 opened by ukl #32583: Crash during decoding using UTF-16/32 and custom error handler https://bugs.python.org/issue32583 opened by sibiryakov #32584: Uninitialized free_extra in code_dealloc https://bugs.python.org/issue32584 opened by jeethu #32585: Add ttk::spinbox to tkinter.ttk https://bugs.python.org/issue32585 opened by Alan Moore #32587: Make REG_MULTI_SZ support PendingFileRenameOperations https://bugs.python.org/issue32587 opened by nanonyme #32589: Statistics as a result from timeit https://bugs.python.org/issue32589 opened by MGilch #32590: Proposal: add an "ensure(arg)" builtin for parameter validatio https://bugs.python.org/issue32590 opened by ncoghlan #32591: Deprecate sys.set_coroutine_wrapper and replace it with more f https://bugs.python.org/issue32591 opened by njs #32592: Drop support of Windows Vista in Python 3.7 https://bugs.python.org/issue32592 opened by vstinner #32593: Drop support of FreeBSD 9 and older in Python 3.7 https://bugs.python.org/issue32593 opened by vstinner #32594: File object 'name' attribute inconsistent type and not obvious https://bugs.python.org/issue32594 opened by skip.montanaro #32596: Lazy import concurrent.futures.process and thread https://bugs.python.org/issue32596 opened by inada.naoki #32597: Bad detection of clang 
https://bugs.python.org/issue32597 opened by bapt #32598: Use autoconf to detect OpenSSL and libssl features https://bugs.python.org/issue32598 opened by christian.heimes Most recent 15 issues with no replies (15) ========================================== #32598: Use autoconf to detect OpenSSL and libssl features https://bugs.python.org/issue32598 #32597: Bad detection of clang https://bugs.python.org/issue32597 #32596: Lazy import concurrent.futures.process and thread https://bugs.python.org/issue32596 #32593: Drop support of FreeBSD 9 and older in Python 3.7 https://bugs.python.org/issue32593 #32585: Add ttk::spinbox to tkinter.ttk https://bugs.python.org/issue32585 #32584: Uninitialized free_extra in code_dealloc https://bugs.python.org/issue32584 #32583: Crash during decoding using UTF-16/32 and custom error handler https://bugs.python.org/issue32583 #32579: UUID module fix, uuid1 python module function https://bugs.python.org/issue32579 #32574: asyncio.Queue, put() leaks memory if the queue is full https://bugs.python.org/issue32574 #32572: Add the ftplib option, overrides the IP address. 
https://bugs.python.org/issue32572 #32567: Venv's config file (pyvenv.cfg) should be compatible with Co https://bugs.python.org/issue32567 #32565: Document the version of adding opcodes https://bugs.python.org/issue32565 #32562: Support fspath protocol in AF_UNIX sockaddr resolution https://bugs.python.org/issue32562 #32557: allow shutil.disk_usage to take a file path https://bugs.python.org/issue32557 #32556: support bytes paths in nt _getdiskusage, _getvolumepathname, a https://bugs.python.org/issue32556 Most recent 15 issues waiting for review (15) ============================================= #32598: Use autoconf to detect OpenSSL and libssl features https://bugs.python.org/issue32598 #32596: Lazy import concurrent.futures.process and thread https://bugs.python.org/issue32596 #32593: Drop support of FreeBSD 9 and older in Python 3.7 https://bugs.python.org/issue32593 #32592: Drop support of Windows Vista in Python 3.7 https://bugs.python.org/issue32592 #32589: Statistics as a result from timeit https://bugs.python.org/issue32589 #32585: Add ttk::spinbox to tkinter.ttk https://bugs.python.org/issue32585 #32582: chr raises OverflowError https://bugs.python.org/issue32582 #32580: Fallback to dev_urandom doesn't work when py_getrandom returns https://bugs.python.org/issue32580 #32574: asyncio.Queue, put() leaks memory if the queue is full https://bugs.python.org/issue32574 #32572: Add the ftplib option, overrides the IP address. https://bugs.python.org/issue32572 #32571: Speed up and clean up getting optional attributes in C code https://bugs.python.org/issue32571 #32565: Document the version of adding opcodes https://bugs.python.org/issue32565 #32563: -Werror=declaration-after-statement expat build failure on Pyt https://bugs.python.org/issue32563 #32555: Encoding issues with the locale encoding https://bugs.python.org/issue32555 #32550: STORE_ANNOTATION bytecode is unnecessary and can be removed. 
https://bugs.python.org/issue32550 Top 10 most discussed issues (10) ================================= #32534: Speed-up list.insert: use memmove() https://bugs.python.org/issue32534 22 msgs #31900: localeconv() should decode numeric fields from LC_NUMERIC enco https://bugs.python.org/issue31900 19 msgs #32551: Zipfile & directory execution in 3.5.4 also adds the parent di https://bugs.python.org/issue32551 9 msgs #32553: venv says to use python3 which does not exist in 3.6.4 https://bugs.python.org/issue32553 9 msgs #30693: tarfile add uses random order https://bugs.python.org/issue30693 8 msgs #32561: Add API to io objects for non-blocking reads/writes https://bugs.python.org/issue32561 8 msgs #32550: STORE_ANNOTATION bytecode is unnecessary and can be removed. https://bugs.python.org/issue32550 7 msgs #32594: File object 'name' attribute inconsistent type and not obvious https://bugs.python.org/issue32594 7 msgs #29708: support reproducible Python builds https://bugs.python.org/issue29708 6 msgs #32493: UUID Module - FreeBSD build failure https://bugs.python.org/issue32493 6 msgs Issues closed (26) ================== #15221: os.path.is*() may return False if path can't be accessed https://bugs.python.org/issue15221 closed by Mariatta #26330: shutil.disk_usage() on Windows can't properly handle unicode https://bugs.python.org/issue26330 closed by Mariatta #29240: PEP 540: Add a new UTF-8 mode https://bugs.python.org/issue29240 closed by vstinner #29476: Simplify set_add_entry() https://bugs.python.org/issue29476 closed by rhettinger #29911: Uninstall command line in Windows registry does not uninstall https://bugs.python.org/issue29911 closed by steve.dower #32346: Speed up slot lookup for class creation https://bugs.python.org/issue32346 closed by pitrou #32403: date, time and datetime alternate constructors should take fas https://bugs.python.org/issue32403 closed by belopolsky #32404: fromtimestamp does not call __new__ in datetime subclasses 
https://bugs.python.org/issue32404 closed by belopolsky #32516: Add a shared library mechanism for win32 https://bugs.python.org/issue32516 closed by steve.dower #32529: Call readinto in shutil.copyfileobj https://bugs.python.org/issue32529 closed by YoSTEALTH #32537: multiprocessing.pool.Pool.starmap_async - wrong parameter name https://bugs.python.org/issue32537 closed by pablogsal #32539: os.listdir(...) on deep path on windows in python2.7 fails wit https://bugs.python.org/issue32539 closed by serhiy.storchaka #32543: odd floor division behavior https://bugs.python.org/issue32543 closed by serhiy.storchaka #32544: Speed up hasattr(o, name) and getattr(o, name, default) https://bugs.python.org/issue32544 closed by inada.naoki #32558: Socketserver documentation : error in server example https://bugs.python.org/issue32558 closed by tcolombo #32559: logging - public function to get level from name https://bugs.python.org/issue32559 closed by vinay.sajip #32564: Syntax error on using variable async https://bugs.python.org/issue32564 closed by Andrew Olefira #32569: Blake2 module, memory clearance update https://bugs.python.org/issue32569 closed by David Carlier #32570: Python crash 0xc00000fd Windows10 x64 - "fail without words https://bugs.python.org/issue32570 closed by eryksun #32575: IDLE cannot locate certain SyntaxErrors raised by f-string exp https://bugs.python.org/issue32575 closed by terry.reedy #32576: concurrent.futures.thread deadlock due to Queue in weakref cal https://bugs.python.org/issue32576 closed by pitrou #32577: Pip creates entry point commands in the wrong place when invok https://bugs.python.org/issue32577 closed by uranusjr #32578: x86-64 Sierra 3.6: test_asyncio fails with timeout after 15 mi https://bugs.python.org/issue32578 closed by vstinner #32586: urllib2 HOWTO URLError example minor error https://bugs.python.org/issue32586 closed by Mariatta #32588: Move _findvs into its own extension module https://bugs.python.org/issue32588 closed 
by steve.dower #32595: Deque with iterable object as one object https://bugs.python.org/issue32595 closed by rhettinger From guido at python.org Fri Jan 19 12:30:54 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Jan 2018 09:30:54 -0800 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) Message-ID: There has been useful and effective discussion on several of the finer points of PEP 567. I think we've arrived at a solid specification, where every part of the design is well motivated. I plan to accept it on Monday, unless someone brings up something significant that we've overlooked before then. Please don't rehash issues that have already been debated -- we're unlikely to reach a different conclusion upon revisiting the same issue (read the Rejected Ideas section first). -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Jan 19 15:10:29 2018 From: brett at python.org (Brett Cannon) Date: Fri, 19 Jan 2018 20:10:29 +0000 Subject: [Python-Dev] Exposing different versions of a system call in Python In-Reply-To: References: Message-ID: On Fri, 19 Jan 2018 at 01:30 Pablo Galindo Salgado wrote: > Hello everyone, > > In today's episode of exposing useful Linux system calls I am exposing > preadv2 in this PR: > > https://github.com/python/cpython/pull/5239 > > as requested in this issue: > > https://bugs.python.org/issue31368 > > As njsmith has commented in the PR, preadv2 only exists because regular > preadv was missing a flags argument, and in C the only way to add an > argument is to make a new function. In Python we have already exposed > preadv, and a possible solution would be to add an optional argument that > passes the new parameters to preadv and calls preadv2 when it is given. > > On the other side, we have pipe and pipe2 as an example of exposing two > versions when this situation happens. 
> > The question is: > > What is preferable, exposing both functions or augment the old one? > I guess the question is whether discoverability would be hurt by combining. My guess is no, in which case if the only difference between the two system calls is a single argument then I would make a single function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Jan 19 15:14:29 2018 From: brett at python.org (Brett Cannon) Date: Fri, 19 Jan 2018 20:14:29 +0000 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: <1516340218.2647377.1240674648.28C9CA42@webmail.messagingengine.com> References: <1516340218.2647377.1240674648.28C9CA42@webmail.messagingengine.com> Message-ID: On Thu, 18 Jan 2018 at 21:39 Benjamin Peterson wrote: > +1 to both of your specific proposals. > > More generally, I think it makes good sense to allow dropping support for > a platform in the next major Python release after vendor support for the > platform stops. Even if we say we support something, it will break quickly > without buildbot validation. > +1 from me as well. We all only have so much bandwidth and if someone wants extended support there are plenty of contractors who could be hired to extend it. -Brett > > On Thu, Jan 18, 2018, at 12:27, Victor Stinner wrote: > > Hi, > > > > I'm working on an exhaustive list of platforms supported by Python: > > > > http://vstinner.readthedocs.io/cpython.html#supported-platforms > > > > > > I noticed that the extended support phase of Windows Vista has expired, > > so I proposed to drop Vista support: > > > > "Drop support of Windows Vista in Python 3.7" > > https://bugs.python.org/issue32592 > > https://github.com/python/cpython/pull/5231 > > > > Python has an explicit policy for Windows support; an extract from PEP 11: > > > > "CPython's Windows support now follows [Microsoft product support > > lifecycle]. 
A new feature release X.Y.0 will support all Windows > > releases whose extended support phase is not yet expired. Subsequent > > bug fix releases will support the same Windows releases as the > > original feature release (even if the extended support phase has > > ended)." > > > > For Linux and FreeBSD, we have no explicit rule. CPython code base > > still contains code for FreeBSD 4... but FreeBSD 4 support ended > > more than 10 years ago (January 31, 2007). Maybe it's time to drop > > support of these old platforms to clean up the CPython code base to > > ease its maintenance. > > > > I proposed: "Drop FreeBSD 9 and older support:" > > > > https://bugs.python.org/issue32593 > > https://github.com/python/cpython/pull/5232 > > > > FreeBSD 9 support ended 1 year ago (December 2016). > > > > FreeBSD support: > > > > https://www.freebsd.org/security/ > > https://www.freebsd.org/security/unsupported.html > > > > > > CPython still has compatibility code for Linux 2.6, whereas the > > support of Linux 2.6.x ended in August 2011, more than 6 years ago. > > Should we also drop support for old Linux kernels? If yes, which ones? > > The Linux kernel has LTS versions, the oldest is Linux 3.2 (support > > will end in May, 2018). > > > > Linux kernel support: > > > > https://www.kernel.org/category/releases.html > > > > > > Note: I'm only talking about changing the future Python 3.7. We should > > have the same support policy as for Windows. If Python 3.x.0 > > supports a platform, this support should be kept in the whole lifetime > > of the 3.x cycle (until its end-of-life). 
> > > > Victor > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > https://mail.python.org/mailman/options/python-dev/benjamin%40python.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariocj89 at gmail.com Fri Jan 19 16:49:08 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Fri, 19 Jan 2018 21:49:08 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: I am happy to put some work into this (and Pablo Galindo in CC offered to pair on it) but it is not clear for me whether the next step is drafting a new PEP or this is just blocked on "re-evaluating" the current one. If someone can clarify we can put something together. Thanks! On 18 January 2018 at 14:40, Nick Coghlan wrote: > On 18 January 2018 at 19:26, Larry Hastings wrote: > > Would we be adding yet a third argument-parsing function, > > PyArg_ParseTupleAndKeywordsWithPositionalOnly()? > > Checking the docs, it turns out PyArg_ParseTupleAndKeywords already > gained positional-only argument support in 3.6 by way of empty strings > in the keyword array. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariocj89%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.belopolsky at gmail.com Fri Jan 19 18:46:58 2018 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 19 Jan 2018 18:46:58 -0500 Subject: [Python-Dev] Unexpected bytecode difference Message-ID: I have encountered the following difference between Python 3 and 2: (py3) >>> compile('xxx', '<>', 'eval').co_code b'e\x00S\x00' (py2) >>> compile('xxx', '<>', 'eval').co_code 'e\x00\x00S' Note that 'S' (the code for RETURN_VALUE) and a zero byte are swapped in Python 2 compared to Python 3. Is this change documented somewhere? From victor.stinner at gmail.com Fri Jan 19 18:54:30 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 20 Jan 2018 00:54:30 +0100 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: Python bytecode format changed deeply in Python 3.6. It now uses regular units of 2 bytes, instead of 1 or 3 bytes depending if the instruction has an argument. See for example https://bugs.python.org/issue26647 "wordcode". But CALL_FUNCTION bytecode also evolved. Victor 2018-01-20 0:46 GMT+01:00 Alexander Belopolsky : > I have encountered the following difference between Python 3 and 2: > > (py3) >>>> compile('xxx', '<>', 'eval').co_code > b'e\x00S\x00' > > (py2) >>>> compile('xxx', '<>', 'eval').co_code > 'e\x00\x00S' > > Note that 'S' (the code for RETURN_VALUE) and a zero byte are swapped > in Python 2 compared to Python 3. Is this change documented > somewhere? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From guido at python.org Fri Jan 19 19:01:38 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Jan 2018 16:01:38 -0800 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: Presumably because Python 3 switched to wordcode. 
Applying dis.dis() to these code objects results in the same output. >>> dis.dis(c) 0 LOAD_NAME 0 (0) 3 RETURN_VALUE On Fri, Jan 19, 2018 at 3:46 PM, Alexander Belopolsky < alexander.belopolsky at gmail.com> wrote: > I have encountered the following difference between Python 3 and 2: > > (py3) > >>> compile('xxx', '<>', 'eval').co_code > b'e\x00S\x00' > > (py2) > >>> compile('xxx', '<>', 'eval').co_code > 'e\x00\x00S' > > Note that 'S' (the code for RETURN_VALUE) and a zero byte are swapped > in Python 2 compared to Python 3. Is this change documented > somewhere? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Fri Jan 19 19:07:45 2018 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 19 Jan 2018 19:07:45 -0500 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 7:01 PM, Guido van Rossum wrote: > Presumably because Python 3 switched to wordcode. Applying dis.dis() to > these code objects results in the same output. > >>>> dis.dis(c) > 0 LOAD_NAME 0 (0) > 3 RETURN_VALUE I expected these changes to be documented at , but the EXTENDED_ARG section, for example, is the same in the 2 and 3 versions and says that the default argument is two bytes. From victor.stinner at gmail.com Fri Jan 19 19:18:59 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 20 Jan 2018 01:18:59 +0100 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: It seems like the EXTENDED_ARG doc wasn't updated. 
Victor 2018-01-20 1:07 GMT+01:00 Alexander Belopolsky : > On Fri, Jan 19, 2018 at 7:01 PM, Guido van Rossum wrote: >> Presumably because Python 3 switched to wordcode. Applying dis.dis() to >> these code objects results in the same output. >> >>>>> dis.dis(c) >> 0 LOAD_NAME 0 (0) >> 3 RETURN_VALUE > > I expected these changes to be documented at > , but the EXTENDED_ARG > section, for example, is the same in the 2 and 3 versions and says > that the default argument is two bytes. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From tjreedy at udel.edu Fri Jan 19 19:57:48 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 19 Jan 2018 19:57:48 -0500 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 1/19/2018 4:49 PM, Mario Corchero wrote: > I am happy to put some work into this (and Pablo Galindo in CC offered > to pair on it) but it is not clear for me whether the next step is > drafting a new PEP or this is just blocked on "re-evaluating" the > current one. > > If someone can clarify we can put something together. My understanding is that extending the current use of '/' has already been approved in principle. I personally think that this just needs an issue, if there is not one already, and a PR. I think we need that, or the effort to produce one, to reveal any remaining issues. -- Terry Jan Reedy From gvanrossum at gmail.com Fri Jan 19 20:24:17 2018 From: gvanrossum at gmail.com (Guido van Rossum) Date: Fri, 19 Jan 2018 17:24:17 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: Not so fast. 
I think a PEP is still needed. This change has more repercussions than argument clinic, e.g. it affects 3rd party tooling and bytecode. On Jan 19, 2018 17:00, "Terry Reedy" wrote: > On 1/19/2018 4:49 PM, Mario Corchero wrote: > >> I am happy to put some work into this (and Pablo Galindo in CC offered to >> pair on it) but it is not clear for me whether the next step is drafting a >> new PEP or this is just blocked on "re-evaluating" the current one. >> >> If someone can clarify we can put something together. >> > > My understanding is that extending the current use of '/' has already been > approved in principle. I personally think that this just needs an issue, if > there is not one already, and a PR. I think we need that, or the effort to > produce one, to reveal any remaining issues. > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjevnik at quantopian.com Fri Jan 19 19:05:43 2018 From: jjevnik at quantopian.com (Joe Jevnik) Date: Fri, 19 Jan 2018 19:05:43 -0500 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: As a general rule, you should not expect the bytecode to be the same between different versions of CPython, including minor version changes. For example, the instructions for dictionary literals are different in 3.4, 3.5, and 3.6. On Fri, Jan 19, 2018 at 6:54 PM, Victor Stinner wrote: > Python bytecode format changed deeply in Python 3.6. It now uses > regular units of 2 bytes, instead of 1 or 3 bytes depending if the > instruction has an argument. > > See for example https://bugs.python.org/issue26647 "wordcode". > > But CALL_FUNCTION bytecode also evolved. 
> > Victor > > 2018-01-20 0:46 GMT+01:00 Alexander Belopolsky < > alexander.belopolsky at gmail.com>: > > I have encountered the following difference between Python 3 and 2: > > > > (py3) > >>>> compile('xxx', '<>', 'eval').co_code > > b'e\x00S\x00' > > > > (py2) > >>>> compile('xxx', '<>', 'eval').co_code > > 'e\x00\x00S' > > > > Note that 'S' (the code for RETURN_VALUE) and a zero byte are swapped > > in Python 2 compared to Python 3. Is this change documented > > somewhere? > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > joe%40quantopian.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jan 19 23:06:23 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jan 2018 14:06:23 +1000 Subject: [Python-Dev] Exposing different versions of a system call in Python In-Reply-To: <1516379539.3409900.1241239696.7DE2F89C@webmail.messagingengine.com> References: <1516379539.3409900.1241239696.7DE2F89C@webmail.messagingengine.com> Message-ID: On 20 January 2018 at 02:32, Random832 wrote: > On Fri, Jan 19, 2018, at 04:28, Pablo Galindo Salgado wrote: >> On the other side, we have pipe and pipe2 as an example of exposing two >> versions when this situation happens. >> >> The question is: >> >> What is preferable, exposing both functions or augment the old one? > > A large number, possibly a majority, of system calls in the os module, have "dir_fd" arguments that cause them to call the *at counterpart of the underlying system call. 
And chdir can also be called with a file descriptor, which will call fchdir (though there is also a fchdir function) I don't know why pipe2 was implemented as a separate call. I think it's mainly just that our design philosophy on this front has changed over time, and os.pipe2 was added back in 2011. These days, the preference is more strongly in favour of amending the existing API if Python offers relevant features that let us hide the existence of multiple distinct APIs at the OS level. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Jan 19 23:16:43 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jan 2018 14:16:43 +1000 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: References: Message-ID: On 19 January 2018 at 16:04, Nathaniel Smith wrote: > On Thu, Jan 18, 2018 at 12:27 PM, Victor Stinner > wrote: >> CPython still has compatibility code for Linux 2.6, whereas the >> support of Linux 2.6.x ended in August 2011, longer than 6 years ago. >> Should we also drop support for old Linux kernels? If yes, which ones? >> The Linux kernel has LTS version, the oldest is Linux 3.2 (support >> will end in May, 2018). >> >> Linux kernel support: >> >> https://www.kernel.org/category/releases.html > > RHEL 5 uses 2.6.28, and still has "extended life cycle support" until > 2020, but I guess no-one should be running Python 3.7 on that. CentOS > 6 and RHEL 6 use 2.6.32, and their EOL is also 2020 (or 2024 for RHEL > 6 with extended life cycle support). Redhat does ship and support 3.6 > on CentOS/RHEL 6 through their "software collections" product, and > presumably is planning to do the same for 3.7. > > It is a little surprising to see a Redhat employee suggest dropping > support for RHEL 6. Hopefully you know what you're doing :-) Red Hat's kernel version numbers describe the *oldest* code in that kernel, rather than the newest. 
So if Red Hat customers want to run a new version of CPython on an older version of RHEL, and the new version of CPython needs a particular kernel feature, then that turns into a backport request for that kernel capability (e.g. while there were multiple drivers for Red Hat backporting the getrandom() syscall to RHEL 7's 3.10 kernel, CPython was one of them: https://bugzilla.redhat.com/show_bug.cgi?id=1330000) So I think it makes sense to base python-dev's Linux kernel support policy on the community LTS streams for the Linux kernel - if a commercial Python vendor chooses to go beyond those dates they can, just as they may go beyond python-dev's nominal support dates for CPython itself. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tjreedy at udel.edu Fri Jan 19 23:24:23 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 19 Jan 2018 23:24:23 -0500 Subject: [Python-Dev] Exposing different versions of a system call in Python In-Reply-To: References: <1516379539.3409900.1241239696.7DE2F89C@webmail.messagingengine.com> Message-ID: On 1/19/2018 11:06 PM, Nick Coghlan wrote: > On 20 January 2018 at 02:32, Random832 wrote: >> On Fri, Jan 19, 2018, at 04:28, Pablo Galindo Salgado wrote: >>> On the other side, we have pipe and pipe2 as an example of exposing two >>> versions when this situation happens. >>> >>> The question is: >>> >>> What is preferable, exposing both functions or augment the old one? I personally find a new parameter easier to remember than a new function, which will also have the new parameter.
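For readers following along, the os module already ships one example of each style discussed in this thread. A minimal sketch, assuming a Linux build where both os.pipe2() and the dir_fd parameters are available:

```python
import os
import tempfile

# Style 1: a separate function for the newer syscall, as with pipe()/pipe2().
r, w = os.pipe2(os.O_CLOEXEC)
os.close(r)
os.close(w)

# Style 2: an extra parameter on the existing function, as with the dir_fd
# arguments that make os.open() use openat() under the hood.
with tempfile.TemporaryDirectory() as tmp:
    dfd = os.open(tmp, os.O_RDONLY)  # descriptor for the directory itself
    try:
        fd = os.open("note.txt", os.O_CREAT | os.O_WRONLY, 0o600, dir_fd=dfd)
        os.write(fd, b"hi")
        os.close(fd)
        created = os.path.exists(os.path.join(tmp, "note.txt"))
    finally:
        os.close(dfd)

print(created)
```

Both calls reach the same family of system calls; the difference is purely in how the Python-level API surfaces them.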
-- Terry Jan Reedy From ncoghlan at gmail.com Fri Jan 19 23:47:40 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jan 2018 14:47:40 +1000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 20 January 2018 at 07:49, Mario Corchero wrote: > I am happy to put some work into this (and Pablo Galindo in CC offered to > pair on it) but it is not clear for me whether the next step is drafting a > new PEP or this is just blocked on "re-evaluating" the current one. I think that would be a question for Larry, since there are two main options here: - proposing just the "/" part of PEP 457 (which allows positional-only arguments, but doesn't allow the expression of all builtin and standard library signatures) - proposing the full PEP 547, including the "argument groups" feature (which is a bigger change, but allows the expression of signatures like "range([start,] stop, [step,] /)") One key benefit I'd see to a new subset-of-457 PEP is that it would allow a decision to be made on the basic "/" proposal without deciding one way or the other on whether or not to provide a native way to express signatures like the one for range(). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Sat Jan 20 00:00:16 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Jan 2018 21:00:16 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On Fri, Jan 19, 2018 at 8:47 PM, Nick Coghlan wrote: > On 20 January 2018 at 07:49, Mario Corchero wrote: > > I am happy to put some work into this (and Pablo Galindo in CC offered to > > pair on it) but it is not clear for me whether the next step is drafting > a > > new PEP or this is just blocked on "re-evaluating" the current one. > > I think that would be a question for Larry, > I think you meant for Guido. It's not Larry's language (yet :-). > since there are two main options here: > > - proposing just the "/" part of PEP 457 (which allows positional-only > arguments, but doesn't allow the expression of all builtin and > standard library signatures) > - proposing the full PEP 547, > I assume you meant PEP 457 again. :-) > including the "argument groups" feature > (which is a bigger change, but allows the expression of signatures > like "range([start,] stop, [step,] /)") > > One key benefit I'd see to a new subset-of-457 PEP is that it would > allow a decision to be made on the basic "/" proposal without deciding > one way or the other on whether or not to provide a native way to > express signatures like the one for range(). I personally don't think such signatures are common enough to warrant special syntax, and I don't want to encourage them. The few we have (basically range(), slice() and a few functions in the curses module) don't inspire a lot of copy-cat APIs. OTOH the more plain positional-only arguments are a pretty common need -- for example, for methods that are conventionally used that way, and overridden with disregard for argument names. (IOW I agree with you here. 
;-) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Sat Jan 20 02:17:40 2018 From: larry at hastings.org (Larry Hastings) Date: Fri, 19 Jan 2018 23:17:40 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 01/19/2018 08:47 PM, Nick Coghlan wrote: > - proposing the full PEP 547, including the "argument groups" feature > (which is a bigger change, but allows the expression of signatures > like "range([start,] stop, [step,] /)") I hope we don't go down that route. I added support for "argument groups" to Argument Clinic in an attempt to support legacy functions with crazy argument signatures that count their arguments. (For example, curses.window.overlay() takes either one or seven arguments exactly--not two!, and not six!.) I have deeply mixed feelings about the result, and I would hate to see support for it added to the language. If I had my way I'd rewrite or replace those functions and have only modern Pythonic signatures in the standard library. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariocj89 at gmail.com Sat Jan 20 04:25:36 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Sat, 20 Jan 2018 09:25:36 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: OK, if no one has anything against, Pablo and I can start a PEP just for the '/' simple syntax (without the argument group part).
On Sat, 20 Jan 2018 at 07:17, Larry Hastings wrote: > > > On 01/19/2018 08:47 PM, Nick Coghlan wrote: > > - proposing the full PEP 547, including the "argument groups" feature > (which is a bigger change, but allows the expression of signatures > like "range([start,] stop, [step,] /)") > > > I hope we don't go down that route. > > I added support for "argument groups" to Argument Clinic in an attempt to > support legacy functions with crazy argument signatures that count their > arguments. (For example, curses.window.overlay() takes either one or seven > arguments exactly--not two!, and not six!.) I have deeply mixed feelings > about the result, and I would hate to see support for it added to the > language. If I had my way I'd rewrite or replace those functions and have > only modern Pythonic signatures in the standard library. > > > */arry* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Sat Jan 20 07:17:43 2018 From: christian at python.org (Christian Heimes) Date: Sat, 20 Jan 2018 13:17:43 +0100 Subject: [Python-Dev] LibreSSL support In-Reply-To: References: Message-ID: On 2018-01-19 15:42, Christian Heimes wrote: > On 2018-01-19 10:43, Steve Holden wrote: >> On Fri, Jan 19, 2018 at 12:09 AM, Nathaniel Smith > > wrote: >> >> On Jan 18, 2018 07:34, "Christian Heimes" > > wrote: >> >> On 2018-01-16 21:17, Christian Heimes wrote: >> > FYI, master on Travis CI now builds and uses OpenSSL 1.1.0g [1]. I have >> > created a daily cronjob to populate Travis' cache with OpenSSL builds. >> > Until the cache is filled, Linux CI will take an extra 5 minute. >> >> I have messed up my initial research. :( When I was checking >> LibreSSL >> and OpenSSL for features, I draw a wrong conclusion. LibreSSL is >> *not* >> OpenSSL 1.0.2 compatible. It only implements some of the required >> features from 1.0.2 (e.g. X509_check_hostname) but not >> X509_VERIFY_PARAM_set1_host. 
>> >> X509_VERIFY_PARAM_set1_host() is required to perform hostname >> verification during the TLS handshake. Without the function, I'm >> unable >> to fix Python's hostname matching code [1]. LibreSSL upstream knows >> about the issue since 2016 [2]. I have opened another bug report >> [3]. >> >> We have two options until LibreSSL has addressed the issue: >> >> 1) Make the SSL module more secure, simpler and standard conform >> 2) Support LibreSSL >> >> >> [...] >> >> >> We have *very* few people qualified to maintain the ssl module, so >> given the new landscape I think we should focus on keeping our core >> OpenSSL support solid and not worry about LibreSSL. If LibreSSL >> wants to be supported as well then -- like any other 2nd tier >> platform -- they need to find someone to do the work. And if people >> are worried about supporting more diversity in SSL implementations, >> then PEP 543 is probably the thing to focus on. >> >> Given the hard limit on resources it seems only sensible to focus on >> the "industry standard" library. I'm rather disappointed that LibreSSL >> isn't a choice, but given the lack of compatibility that's hardly >> Python's problem. > > Thanks! > > I'd prefer to support LibreSSL, too. Paul Kehrer from PyCA summed up the > issue with LibreSSL nicely: > >> It was marketed as an API compatible drop-in replacement and it is > failing in that capacity. Additionally, it is missing features needed to > improve the security and ease the maintenance burden of CPython's dev team. > > > Since I haven given up on LibreSSL, I spent some time and implemented > some autoconf magic in https://github.com/python/cpython/pull/5242. It > checks for the presence of libssl and X509_VERIFY_PARAM_set1_host() > function family: > > ... > checking whether compiling and linking against OpenSSL works... yes > checking for X509_VERIFY_PARAM_set1_host in libssl... yes > ...
> > The ssl module will regain compatibility with LibreSSL as soon as a new > release provides the necessary functions. No core developer has vetoed my proposal. I also spoke to several members of Python Cryptographic Authority and Python Packaging Authority. They are all in favor of my proposal, too. Therefore, I have decided to move forward and require the OpenSSL 1.0.2 API. This means Python 3.7 temporarily suspends support for LibreSSL until https://github.com/libressl-portable/portable/issues/381 is resolved. I have appended a lengthy explanation to my LibreSSL ticket, too. I also informed LibreSSL developers that Python 3.8 will most likely require an OpenSSL 1.1 compatible API. With OpenSSL 1.0.2 support, I can drop a considerable amount of legacy code, e.g. custom thread locking, initialization and a bunch of shim functions. Regards, Christian From barry at python.org Sat Jan 20 10:56:14 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 20 Jan 2018 10:56:14 -0500 Subject: [Python-Dev] Unique loader per module In-Reply-To: References: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> Message-ID: <20180120105614.23f77488@presto.wooz.org> On Jan 05, 2018, at 05:12 PM, Nick Coghlan wrote: >I think the main reason you're seeing a problem here is because >ResourceReader has currently been designed to be implemented directly >by loaders, rather than being a subcomponent that you can request >*from* a loader. > >If you instead had an indirection API (that could optionally return >self in the case of non-shared loaders), you'd keep the current >resource reader method signatures, but the way you'd access the itself >would be: > > resources = module.__spec__.loader.get_resource_reader(module) > # resources implements the ResourceReader ABC BTW, just as a quick followup, this API suggestion was brilliant, Nick. It solved the problem nicely, and let me add support for ResourceReader to zipimport while touching only the bare minimum of zipimport.c.
:) https://github.com/python/cpython/pull/5248 Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From guido at python.org Sat Jan 20 10:56:49 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 20 Jan 2018 07:56:49 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On Sat, Jan 20, 2018 at 1:25 AM, Mario Corchero wrote: > OK, if no one has anything against, Pablo and I can start a PEP just for > the ?/? simple syntax (without the argument group part). > Go for it! Note that your target will be Python 3.8. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fw at deneb.enyo.de Sun Jan 21 00:21:14 2018 From: fw at deneb.enyo.de (Florian Weimer) Date: Sun, 21 Jan 2018 06:21:14 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: (Victor Stinner's message of "Thu, 18 Jan 2018 21:27:29 +0100") References: Message-ID: <87k1wbdbxx.fsf@mid.deneb.enyo.de> * Victor Stinner: > CPython still has compatibility code for Linux 2.6, whereas the > support of Linux 2.6.x ended in August 2011, longer than 6 years ago. There are still reasonably widely used 2.6 kernels under support, but they have lots of (feature) backports, so maybe they do not need the 2.6.32 workarounds you plan to remove. (glibc upstream nowadays requires Linux 3.2 (stable branch) as the minimum, but then people are less likely to update glibc on really old systems.) > Should we also drop support for old Linux kernels? If yes, which ones? > The Linux kernel has LTS version, the oldest is Linux 3.2 (support > will end in May, 2018). What exactly do you plan to change? 
Is it about unconditionally assuming accept4 support? accept4 support was added to Linux 3.3 on ia64. This is not uncommon: The first version in which a particular (generic) system call is available varies a lot between architectures. You'll have to investigate each case separately. From mariocj89 at gmail.com Sun Jan 21 08:59:49 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Sun, 21 Jan 2018 13:59:49 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: Here is the proposal we have worked on (Pablo and I). I've added Larry as co-author given that a good chunk of the content and the idea comes from his PEP. Larry, if you don't want to appear as such please let us know. Thanks! Rendered content: https://github.com/mariocj89/peps/blob/pep-pos-only/pep-9999.rst PEP: 9999 Title: Python Positional-Only Parameters Version: $Revision$ Last-Modified: $Date$ Author: Larry Hastings , Pablo Galindo , Mario Corchero Discussions-To: Python-Dev Status: Type: Content-Type: text/x-rst Created: 20-Jan-2018 ======== Overview ======== This PEP proposes a syntax for positional-only parameters in Python. Positional-only parameters are parameters without an externally-usable name; when a function accepting positional-only parameters is called, positional arguments are mapped to these parameters based solely on their position. ========= Rationale ========= Python has always supported positional-only parameters. Early versions of Python lacked the concept of specifying parameters by name, so naturally all parameters were positional-only. This changed around Python 1.0, when all parameters suddenly became positional-or-keyword. This allowed users to provide arguments to a function either positionally or by referencing the keyword name used in its definition.
But this is not always desired, nor even always available: even in current versions of Python, many CPython "builtin" functions still accept only positional-only arguments. Even though positional-only arguments can be achieved in a function by using ``*args`` parameters and extracting them one by one, that solution is far from ideal and not as expressive as the one proposed in this PEP, which aims to provide syntax for accepting a specific number of positional-only parameters. Additionally, this will bridge the gap we currently find between builtin functions that today allow positional-only parameters and pure Python implementations that lack the syntax for them. ----------------------------------------------------- Positional-Only Parameter Semantics In Current Python ----------------------------------------------------- There are many, many examples of builtins that only accept positional-only parameters. The resulting semantics are easily experienced by the Python programmer--just try calling one, specifying its arguments by name:: >>> help(pow) ... pow(x, y, z=None, /) ... >>> pow(x=5, y=3) Traceback (most recent call last): File "", line 1, in TypeError: pow() takes no keyword arguments ``pow()`` clearly expresses that its arguments are only positional via the ``/`` marker, but this is at the moment only documentation; Python developers cannot write such syntax. In addition, there are some functions with particularly interesting semantics: * ``range()``, which accepts an optional parameter to the *left* of its required parameter. [#RANGE]_ * ``dict()``, whose mapping/iterator parameter is optional and semantically must be positional-only. Any externally visible name for this parameter would occlude that name going into the ``**kwarg`` keyword variadic parameter dict! [#DICT]_ Obviously one can simulate any of these in pure Python code by accepting ``(*args, **kwargs)`` and parsing the arguments by hand.
But this results in a disconnect between the Python function signature and what it actually accepts, not to mention the work of implementing said argument parsing. ========== Motivation ========== The new syntax will allow developers to further control how their API can be consumed. It will allow them to restrict the usage of keyword arguments by adding a new type of parameter: positional-only ones. A similar PEP with a broader scope (PEP 457) was proposed to define the syntax. This PEP builds on top of part of it to define and provide an implementation for the ``/`` syntax on function signatures. ================================================================= The Current State Of Documentation For Positional-Only Parameters ================================================================= The documentation for positional-only parameters is incomplete and inconsistent: * Some functions denote optional groups of positional-only arguments by enclosing them in nested square brackets. [#BORDER]_ * Some functions denote optional groups of positional-only arguments by presenting multiple prototypes with varying numbers of arguments. [#SENDFILE]_ * Some functions use *both* of the above approaches. [#RANGE]_ [#ADDCH]_ One more important idea to consider: currently in the documentation there's no way to tell whether a function takes positional-only parameters. ``open()`` accepts keyword arguments, ``ord()`` does not, but there is no way of telling just by reading the documentation that this is true.
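The hand-rolled ``(*args, **kwargs)`` parsing described in the rationale can be sketched as follows; ``emulated_pow`` is a hypothetical stand-in for the builtin ``pow()``:

```python
# Hand-rolled emulation of the builtin pow(x, y, z=None, /) signature,
# parsing *args manually -- the boilerplate this proposal aims to eliminate.
def emulated_pow(*args):
    if not 2 <= len(args) <= 3:
        raise TypeError("emulated_pow expected 2 or 3 arguments, got %d"
                        % len(args))
    x, y = args[:2]
    z = args[2] if len(args) == 3 else None
    result = x ** y
    return result if z is None else result % z

print(emulated_pow(2, 10))       # 1024
print(emulated_pow(2, 10, 10))   # 1024 % 10 == 4

try:
    emulated_pow(x=2, y=10)      # keywords are rejected, as with pow()
except TypeError as exc:
    print("TypeError:", exc)
```

Note how the introspectable signature of ``emulated_pow`` is just ``(*args)``, which says nothing about the two-or-three arguments it really accepts -- exactly the disconnect described above.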
==================== Syntax And Semantics ==================== From the "ten-thousand foot view", and ignoring ``*args`` and ``**kwargs`` for now, the grammar for a function definition currently looks like this:: def name(positional_or_keyword_parameters, *, keyword_only_parameters): Building on that perspective, the new syntax for functions would look like this:: def name(positional_only_parameters, /, positional_or_keyword_parameters, *, keyword_only_parameters): All parameters before the ``/`` are positional-only. If ``/`` is not specified in a function signature, that function does not accept any positional-only parameters. The logic around optional values for positional-only arguments remains the same as the one for positional-or-keyword arguments. Once a positional-only argument is provided with a default, the following positional-only and positional-or-keyword arguments need to have defaults as well. Positional-only parameters that don't have a default value are "required" positional-only parameters.
Therefore the following are valid signatures:: def name(p1, p2, /, p_or_kw, *, kw): def name(p1, p2=None, /, p_or_kw=None, *, kw): def name(p1, p2=None, /, *, kw): def name(p1, p2=None, /): def name(p1, p2, /, p_or_kw): def name(p1, p2, /): Whilst the following are not:: def name(p1, p2=None, /, p_or_kw, *, kw): def name(p1=None, p2, /, p_or_kw=None, *, kw): def name(p1=None, p2, /): ========================== Full grammar specification ========================== A draft of the proposed grammar specification is:: new_typedargslist: tfpdef (',' tfpdef)* ',' '/' [',' [typedargslist]] | typedargslist new_varargslist: vfpdef (',' vfpdef)* ',' '/' [',' [varargslist]] | varargslist The changes will be folded into the actual typedargslist and varargslist, but for easier discussion they are presented here as new_typedargslist and new_varargslist. ========================= Possible implementations ========================= ---------------------------------- Full grammar change as in PEP 3102 ---------------------------------- This implementation will involve a full change of the Grammar. This will involve following the steps outlined in PEP 306 [#PEP306]_. In addition, other steps are needed, including: - Modifying the code object and the function object to be aware of positional-only arguments. - Modifying `ceval.c` (`PyEval_EvalCodeEx`, `PyEval_EvalFrameEx`...) to correctly handle positional-only arguments. - Modifying `marshal.c` to account for the modifications of the code object. This is not intended to be a guide or a comprehensive recipe for implementing the feature, but a rough outline of the changes it will make to the codebase. The advantages of this implementation include speed, consistency with the implementation of keyword-only parameters as in PEP 3102, and a simpler implementation of all the tools and modules that will be impacted by this change.
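To illustrate the behavior the grammar above is meant to produce, here is a minimal sketch (``divmod_like`` is a hypothetical example; it only runs on an interpreter that implements the proposed ``/`` marker):

```python
# x and y are positional-only under the proposed syntax.
def divmod_like(x, y, /):
    return (x // y, x % y)

print(divmod_like(7, 3))   # (2, 1)

try:
    divmod_like(x=7, y=3)  # positional-only parameters passed by keyword
except TypeError as exc:
    print("TypeError:", exc)
```

This mirrors what builtins such as ``pow()`` already do today, but expressed in pure Python.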
============ Alternatives ============ The following alternatives were discarded during the discussion of this PEP. ---------- Do Nothing ---------- Always an option: just don't add it. It was considered, though, that the benefits of adding it are worth the complexity it adds to the language. --------------------- After marker proposal --------------------- A complaint about this approach is that the modifier of the signature impacts the "already passed" tokens. This might make it confusing for "human parsers" to read functions with many arguments. Example:: def really_bad_example_of_a_python_function(first_long_argument, second_long_argument, third_long_argument, /): It is not until you reach the end of the signature that the reader notices the ``/``, and therefore the fact that the arguments are positional-only. This deviates from how the keyword-only marker works. That said, we could not find an implementation that would modify only the arguments after the marker, as that would force the ones before the marker to be positional-only as well. Example:: def (x, y, /, z): If we define that ``/`` makes only z positional-only, it won't be possible to pass x and y via keyword arguments. Finding a way to work around this would add confusion, given that at the moment keyword arguments cannot be followed by positional arguments. ``/`` will therefore make both the preceding and the following parameters positional-only.
---------------- Using decorators ---------------- It has been suggested in python-ideas [#python-ideas-decorator-based]_ to provide a decorator written in Python as an implementation for this feature. This approach has the advantage of keeping parameter declarations easier to read, but also introduces an asymmetry on how parameter behavior is declared. Also, as the `\` syntax is already introduced for C functions, this inconsistency will make it more difficult to implement all tools and modules that deal with this syntax, including but not limited to Argument Clinic, the inspect module and the ast module. Another disadvantage of this approach is that calling the decorated functions will be slower than the functions generated if the feature were implemented directly in C. ====== Thanks ====== Credit for most of the content of this PEP is contained in Larry Hastings's PEP 457. Credit for the use of '/' as the separator between positional-only and positional-or-keyword parameters goes to Guido van Rossum, in a proposal from 2012. [#GUIDO]_ Credit for making left option groups higher precedence goes to Nick Coghlan. (Conversation in person at PyCon US 2013.) Credit for discussion about the simplification of the grammar goes to Braulio Valdivieso. .. [#DICT] http://docs.python.org/3/library/stdtypes.html#dict .. [#RANGE] http://docs.python.org/3/library/functions.html#func-range .. [#BORDER] http://docs.python.org/3/library/curses.html#curses.window.border .. [#SENDFILE] http://docs.python.org/3/library/os.html#os.sendfile .. [#ADDCH] http://docs.python.org/3/library/curses.html#curses.window.addch .. [#GUIDO] Guido van Rossum, posting to python-ideas, March 2012: https://mail.python.org/pipermail/python-ideas/2012-March/014364.html and https://mail.python.org/pipermail/python-ideas/2012-March/014378.html and https://mail.python.org/pipermail/python-ideas/2012-March/014417.html .. [#PEP306] https://www.python.org/dev/peps/pep-0306/ ..
[#python-ideas-decorator-based] https://mail.python.org/pipermail/python-ideas/2017-February/044888.html ========= Copyright ========= This document has been placed in the public domain. On 20 January 2018 at 15:56, Guido van Rossum wrote: > On Sat, Jan 20, 2018 at 1:25 AM, Mario Corchero > wrote: > >> OK, if no one has anything against, Pablo and I can start a PEP just for >> the '/' simple syntax (without the argument group part). >> > > Go for it! > > Note that your target will be Python 3.8. > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Sun Jan 21 09:16:55 2018 From: phd at phdru.name (Oleg Broytman) Date: Sun, 21 Jan 2018 15:16:55 +0100 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: <20180121141655.GA15928@phdru.name> Hi! A few minor corrections below. On Sun, Jan 21, 2018 at 01:59:49PM +0000, Mario Corchero wrote: > Author: Larry Hastings , Pablo Galindo > , Mario Corchero ^ Add a space or a few here - this is the way for line wrapping in long headers. > introduces an asymetry on how parameter behavior is declared. Also, as the `\` \ -> / Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From mariocj89 at gmail.com Sun Jan 21 10:00:23 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Sun, 21 Jan 2018 15:00:23 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: <20180121141655.GA15928@phdru.name> References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> <20180121141655.GA15928@phdru.name> Message-ID: Thanks, Oleg! Fixed that and a bunch more typos in the GitHub document. https://github.com/mariocj89/peps/blob/pep-pos-only/pep-9999.rst On 21 January 2018 at 14:16, Oleg Broytman wrote: > Hi!
A few minor corrections below. > > On Sun, Jan 21, 2018 at 01:59:49PM +0000, Mario Corchero < > mariocj89 at gmail.com> wrote: > > Author: Larry Hastings , Pablo Galindo > > , Mario Corchero > ^ > Add a space or a few here - this is the way for line wrapping in long > headers. > > > introduces an asymetry on how parameter behavior is declared. Also, as > the `\` > > \ -> / > > Oleg. > -- > Oleg Broytman http://phdru.name/ phd at phdru.name > Programmers don't die, they just GOSUB without RETURN. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariocj89%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablogsal at gmail.com Sun Jan 21 14:40:05 2018 From: pablogsal at gmail.com (Pablo Galindo Salgado) Date: Sun, 21 Jan 2018 19:40:05 +0000 Subject: [Python-Dev] sys.settrace does not produce events for C functions Message-ID: The docs for sys.settrace mention that: >> event is a string: 'call', 'line', 'return', 'exception', >> 'c_call', 'c_return', or 'c_exception' But in the code for ceval.c the only point where call_trace is invoked with PyTrace_C_CALL or PyTrace_C_RETURN is under the C_TRACE macro. In this macro this line prevents any function set up using sys.settrace from calling call_trace with the mentioned arguments: if (tstate->use_tracing && tstate->c_profilefunc) Notice that from the code of PyEval_SetTrace and PyEval_SetProfile, only the latter sets tstate->c_profilefunc and therefore only functions installed using sys.setprofile will receive a c_call for the event. Xiang Zhan has suggested that I ask here what is the best course of action: 1) Document this behavior. 2) Fix the code. This question is related to this issue: https://bugs.python.org/issue17799 Thanks everyone for your time!
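The asymmetry described here is easy to observe from pure Python. The following is a minimal, self-contained sketch (plain CPython, illustrative names only) showing that a function installed with sys.settrace never sees 'c_call' events, while the same function installed with sys.setprofile does:

```python
import sys

trace_events = []
profile_events = []

def tracer(frame, event, arg):
    # Trace functions see 'call', 'line', 'return', 'exception' --
    # but, as discussed in this thread, never the C-level events.
    trace_events.append(event)
    return tracer

def profiler(frame, event, arg):
    # Profile functions additionally see 'c_call'/'c_return'.
    profile_events.append(event)

def do_c_call():
    len("abc")  # len() is implemented in C

sys.settrace(tracer)
do_c_call()
sys.settrace(None)

sys.setprofile(profiler)
do_c_call()
sys.setprofile(None)

print('c_call' in trace_events)    # settrace never reports C calls
print('c_call' in profile_events)  # setprofile does
```

Running this prints False and then True, which is exactly the documentation mismatch being reported.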
Pablo -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Jan 21 16:02:10 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 21 Jan 2018 13:02:10 -0800 Subject: [Python-Dev] sys.settrace does not produce events for C functions In-Reply-To: References: Message-ID: As I posted to the issue: The reason not to pass C calls to the tracing function is that tracing exists to support pdb and other debuggers, and pdb only cares about tracing through Python code. So the docs should be updated. On Sun, Jan 21, 2018 at 11:40 AM, Pablo Galindo Salgado wrote: > The docs for sys.settrace mention that: > > >> event is a string: 'call', 'line', 'return', 'exception', >> 'c_call', > 'c_return', or 'c_exception' > > But in the code for ceval.c the only point where call_trace is invoked > with PyTrace_C_CALL or PyTrace_C_RETURN is under the C_TRACE macro. In this > macro this line prevents any function set up using sys.settrace from calling > call_trace with the mentioned arguments: > > if (tstate->use_tracing && tstate->c_profilefunc) > > Notice that from the code of PyEval_SetTrace and PyEval_SetProfile, only > the latter sets tstate->c_profilefunc and therefore only functions installed > using sys.setprofile will receive a c_call for the event. > > Xiang Zhan has suggested that I ask here what is the best course of action: > > 1) Document this behavior. > > 2) Fix the code. > > This question is related to this issue: > https://bugs.python.org/issue17799 > > Thanks everyone for your time! > > Pablo > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/pipermail/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From larry at hastings.org Sun Jan 21 16:44:41 2018 From: larry at hastings.org (Larry Hastings) Date: Sun, 21 Jan 2018 13:44:41 -0800 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 01/21/2018 05:59 AM, Mario Corchero wrote: > Credit for making left option groups higher precedence goes to > Nick Coghlan. (Conversation in person at PyCon US 2013.) Actually Argument Clinic has always given left option groups higher precedence. This theoretically allows Argument Clinic to elegantly support the range builtin as "range([start,] stop, [step])", although nobody has bothered to actually convert range() to Clinic. (Which is reasonable--I don't think there's any reason to bother.) Anyway, this acknowledgement is the only mention of "option groups" in the document. Perhaps this was in reference to the now-abandoned idea of adding "option groups" to the language? If so, this acknowledgement should probably be removed too. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariocj89 at gmail.com Sun Jan 21 17:17:14 2018 From: mariocj89 at gmail.com (Mario Corchero) Date: Sun, 21 Jan 2018 22:17:14 +0000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: Ups, indeed, totally missed it. Removed it from https://github.com/mariocj89/peps/blob/pep-pos-only/pep-9999.rst On 21 January 2018 at 21:44, Larry Hastings wrote: > > > On 01/21/2018 05:59 AM, Mario Corchero wrote: > > Credit for making left option groups higher precedence goes to > Nick Coghlan. (Conversation in person at PyCon US 2013.) > > > Actually Argument Clinic has always given left option groups higher > precedence.
This theoretically allows > the range builtin as "range([start,] stop, [step])", although nobody has > bothered to actually convert range() to Clinic. (Which is reasonable--I > don't think there's any reason to bother.) > > Anyway, this acknowledgement is the only mention of "option groups" in the > document. Perhaps this was in reference to the now-abandoned idea of > adding "option groups" to the language? If so, this acknowledgement should > probably be removed too. > > > */arry* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Sun Jan 21 19:05:15 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Mon, 22 Jan 2018 02:05:15 +0200 Subject: [Python-Dev] PEP 567 v3 In-Reply-To: References: <20180117013734.7d6e5a72@fsol> <20180117120331.17ba3b64@fsol> Message-ID: On Thu, Jan 18, 2018 at 3:53 AM, Yury Selivanov wrote: [....] > > Given the time frame of the Python 3.7 release schedule it was decided > to defer this proposal to Python 3.8. > It occurs to me that I had misread this to refer to the whole PEP. Although I thought it's kind of sad that after all this, contextvars still would not make it into 3.7, I also thought that it might be the right decision. As you may already know, I think there are several problems with this PEP. Would it be worth it to write down some thoughts on this PEP in the morning? -- Koos -- + Koos Zevenhoven + http://twitter.com/k7hoven + -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sun Jan 21 21:50:52 2018 From: eric at trueblade.com (Eric V.
Smith) Date: Sun, 21 Jan 2018 21:50:52 -0500 Subject: [Python-Dev] Concerns about method overriding and subclassing with dataclasses In-Reply-To: References: <5A469982.5040205@stoneleaf.us> <5A46A5FC.8050407@stoneleaf.us> <23111.45146.116335.667080@turnbull.sk.tsukuba.ac.jp> <23116.15323.23376.304340@turnbull.sk.tsukuba.ac.jp> <359d2272-6351-323a-12c0-3023404b6d06@trueblade.com> Message-ID: <0a93efcf-690b-c1c0-9a2d-b0e9aa67b973@trueblade.com> On 1/7/2018 12:25 PM, Guido van Rossum wrote: > On Sun, Jan 7, 2018 at 9:09 AM, Eric V. Smith > wrote: > > On 1/3/2018 1:17 PM, Eric V. Smith wrote: > > I'll open an issue after I have time to read this thread and > comment on it. > > > https://bugs.python.org/issue32513 > I need to think through how __eq__ and __ne__ work, as well as the > ordering operators. > > My specific concern with __ne__ is that there's one flag to control > their generation, but python will use "not __eq__" if you don't > provide __ne__. I need to think through what happens if the user > only provides __eq__: does dataclasses do nothing, does it add > __ne__, and how does this interact with a base class that does > provide __ne__. > > > Maybe dataclasses should only ever provide __eq__ and always assume > Python's default for __ne__ kicks in? If that's not acceptable (maybe > there are cases where a user did write an explicit __ne__ that needs to > be overridden) I would recommend the following rule: > > - If there's an __eq__, don't do anything (regardless of whether there's > an __ne__) > - If there's no __eq__ but there is an __ne__, generate __eq__ but don't > generate __ne__ > - If neither exists, generate both I've added my proposal on issue 32513: https://bugs.python.org/issue32513#msg310392 It's long, so I won't repeat it here. The only really confusing part is __hash__ and its interaction with __eq__. Eric.
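The two behaviors under discussion — Python deriving __ne__ from __eq__ when no __ne__ is written, and the __hash__/__eq__ interaction — can be seen with a plain class; the Point class below is just an illustrative sketch, not dataclass-generated code:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

p, q, r = Point(1, 2), Point(1, 2), Point(3, 4)

# Python 3 derives __ne__ from __eq__ by default, so != works
# even though only __eq__ was written:
print(p == q, p != q)  # True False
print(p == r, p != r)  # False True

# The confusing interaction: defining __eq__ without __hash__
# implicitly sets __hash__ to None, making instances unhashable.
print(Point.__hash__ is None)  # True
```

This is why a decorator that generates __eq__ but not __ne__ still gives sensible != behavior, and why any rule about generating __eq__ has to decide what to do about __hash__ at the same time.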
From larry at hastings.org Mon Jan 22 05:33:01 2018 From: larry at hastings.org (Larry Hastings) Date: Mon, 22 Jan 2018 02:33:01 -0800 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? Message-ID: I have three PRs for Python 3.5.5rc1: https://github.com/python/cpython/pull/4656 https://github.com/python/cpython/pull/5197 https://github.com/python/cpython/pull/5201 I can't merge them because Travis CI is unhappy. All three CI tests fail in the same way, reporting this error: The command "pyenv global system 3.5" failed and exited with 1 during . Since Travis CI is a "required check", Github won't let me merge the PR. Yes I could manually merge the patches by hand but I'm hoping it doesn't come to that. I'm slipping 3.4.8rc1 because I prefer to do both releases at once. I'm hoping this problem will be resolved quickly; if we only slip the RCs by a day or two I won't slip the final releases (in about two weeks). PLS SND HALP, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Jan 22 05:57:11 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 22 Jan 2018 11:57:11 +0100 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: References: Message-ID: I created an issue with more information: https://bugs.python.org/issue32620 Victor 2018-01-22 11:33 GMT+01:00 Larry Hastings : > > > I have three PRs for Python 3.5.5rc1: > > https://github.com/python/cpython/pull/4656 > https://github.com/python/cpython/pull/5197 > https://github.com/python/cpython/pull/5201 > > I can't merge them because Travis CI is unhappy. All three CI tests fail in > the same way, reporting this error: > > The command "pyenv global system 3.5" failed and exited with 1 during . > > Since Travis CI is a "required check", Github won't let me merge the PR.
> Yes I could manually merge the patches by hand but I'm hoping it doesn't > come to that. > > I'm slipping 3.4.8rc1 because I prefer to do both releases at once. > > I'm hoping this problem will be resolved quickly; if we only slip the RCs by > a day or two I won't slip the final releases (in about two weeks). > > > PLS SND HALP, > > > /arry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From phd at phdru.name Mon Jan 22 05:59:36 2018 From: phd at phdru.name (Oleg Broytman) Date: Mon, 22 Jan 2018 11:59:36 +0100 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: References: Message-ID: <20180122105936.GA1249@phdru.name> On Mon, Jan 22, 2018 at 02:33:01AM -0800, Larry Hastings wrote: > All ... CI tests fail in > the same way, reporting this error: > > The command "pyenv global system 3.5" failed and exited with 1 during . Seems there is a slow workaround (install python 3.5): https://github.com/travis-ci/travis-ci/issues/8363#issuecomment-354857845 which python3.5 || (pyenv install 3.5.4 && pyenv use system 3.5.4) > //arry/ Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From sf at fermigier.com Mon Jan 22 05:53:20 2018 From: sf at fermigier.com (=?UTF-8?Q?St=C3=A9fane_Fermigier?=) Date: Mon, 22 Jan 2018 11:53:20 +0100 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? 
In-Reply-To: References: Message-ID: On Mon, Jan 22, 2018 at 11:33 AM, Larry Hastings wrote: > > > I have three PRs for Python 3.5.5rc1: > > https://github.com/python/cpython/pull/4656 > https://github.com/python/cpython/pull/5197 > https://github.com/python/cpython/pull/5201 > > I can't merge them because Travis CI is unhappy. All three CI tests fail > in the same way, reporting this error: > > The command "pyenv global system 3.5" failed and exited with 1 during . > > This seems to be related to https://github.com/travis-ci/travis-ci/issues/8363 S. -- Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/ Chairman, Free&OSS Group / Systematic Cluster - http://www.gt-logiciel-libre.org/ Co-Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/ Founder & Organiser, PyData Paris - http://pydata.fr/ --- "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." -- R. Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.orponen at 4teamwork.ch Mon Jan 22 07:48:26 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Mon, 22 Jan 2018 13:48:26 +0100 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: <20180122105936.GA1249@phdru.name> References: <20180122105936.GA1249@phdru.name> Message-ID: On Mon, Jan 22, 2018 at 11:59 AM, Oleg Broytman wrote: > On Mon, Jan 22, 2018 at 02:33:01AM -0800, Larry Hastings < > larry at hastings.org> wrote: > > All ... CI tests fail in > > the same way, reporting this error: > > > > The command "pyenv global system 3.5" failed and exited with 1 during > .
> > Seems there is a slow workaround (install python 3.5): > > https://github.com/travis-ci/travis-ci/issues/8363#issuecomment-354857845 > > which python3.5 || (pyenv install 3.5.4 && pyenv use system 3.5.4) > There is also https://github.com/praekeltfoundation/travis-pyenv I've found useful when one needs exactness and also to decouple oneself from the Travis side rolling releases of Python. Also caches the Python version for you. See https://github.com/plone/plone.intelligenttext/blob/a71bdc5b485b1562b2e320f5c41a15286f205f98/.travis.yml for a usage example. -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Jan 22 10:51:37 2018 From: brett at python.org (Brett Cannon) Date: Mon, 22 Jan 2018 15:51:37 +0000 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: References: Message-ID: I can switch off the requirement that holds admins to having to pass the same status checks as everyone else (there's still a big warning when you exercise this power), that way you can override the merge if you want. Not sure if you want to ignore the CI in that case as well. On Mon, 22 Jan 2018 at 02:33 Larry Hastings wrote: > > > I have three PRs for Python 3.5.5rc1: > > https://github.com/python/cpython/pull/4656 > https://github.com/python/cpython/pull/5197 > https://github.com/python/cpython/pull/5201 > > I can't merge them because Travis CI is unhappy. All three CI tests fail > in the same way, reporting this error: > > The command "pyenv global system 3.5" failed and exited with 1 during . > > Since Travis CI is a "required check", Github won't let me merge the PR. > Yes I could manually merge the patches by hand but I'm hoping it doesn't > come to that. > > I'm slipping 3.4.8rc1 because I prefer to do both releases at once.
> > I'm hoping this problem will be resolved quickly; if we only slip the RCs > by a day or two I won't slip the final releases (in about two weeks). > > > PLS SND HALP, > > > */arry* > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Jan 22 12:28:22 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 22 Jan 2018 18:28:22 +0100 Subject: [Python-Dev] Support of the Android platform In-Reply-To: References: Message-ID: Hi, I'm still talking with Paul Peny (pmpp on IRC) who is trying to build the master branch of Python on Android, using cross-compilation or directly on an Android device. I started to take notes, since Android is a complex platform and it's not easy for me to remember everything. http://vstinner.readthedocs.io/python_android.html Paul would like to support Android 4.4 Kitkat (API 19) just because it's possible to find cheap devices running Android, but usually only with old Android versions. Technically, it doesn't seem difficult to support API 19+ (instead of 21+), a few tiny patches are needed. But I don't want to have a "full support" of API 19+, only basic support like "make sure that the compilation doesn't fail", not "all tests must pass". It seems like sys.platform == 'android' would be more appropriate since Android is not Linux: different libc, different filesystems, etc. While Xavier promotes cross-compilation, Paul would like to build Python directly on Android to get pip and wheels. Honestly, I have no strong opinion, since I don't know Android well. I'm trying to help everybody working on the Android support. IMHO it's fine to support multiple ways to build Python for Android.
It's not like there is very obvious option which has no drawback... Cross compilation is complex, getting a C compiler on Android also seems to be complex. From my point of view, compared to a common Fedora Linux, Android doesn't seem easy to use to develop on CPython... Victor 2017-12-10 15:19 GMT+01:00 Xavier de Gaye : > The following note is a proposal to add the support of the Android platform. > > The note is easier to read with clickable links at > https://github.com/xdegaye/cagibi/blob/master/doc/android_support.rst > > Motivations > =========== > > * Android is ubiquitous. > * This would be the first platform supported by Python that is > cross-compiled, > thanks to many contributors. > * Although the Android operating system is linux, it is different from most > linux platforms, for example it does not use GNU libc and runs SELinux in > enforcing mode. Therefore supporting this platform would make Python more > robust and also would allow testing it on arm 64-bit processors. > * Python running on Android is also a handheld calculator, a successor of > the > slide rule and the `HP 41`_. > > Current status > ============== > > * The Python test suite succeeds when run on Android emulators using > buildbot > strenuous settings with the following architectures on API 24: x86, > x86_64, > armv7 and arm64. > * The `Android build system`_ is described in another section. > * The `buildmaster-config PR 26`_ proposes to update ``master.cfg`` to > enable > buildbots to run a given Android API and architecture on the emulators. > * The Android emulator is actually ``qemu``, so the test suites for x86 and > x86_64 last about the same time as the test suite run natively when the > processor of the build system is of the x86 family. The test suites for > the > arm architectures last much longer: about 8 hours for arm64 and 10 hours > for > armv7 on a four years old laptop. 
> * The changes that have been made to achieve this status are listed in > `bpo-26865`_, the Android meta-issue. > * Given the cpu resources required to run the test suite on the arm > emulators, > it may be difficult to find a contributed buildbot worker. So it remains > to > find the hardware to run these buildbots. > > Proposal > ======== > > Support the Android platform on API 24 [1]_ for the x86_64, armv7 and arm64 > architectures built with NDK 14b. > > *API 24* > * API 21 is the first version to provide usable support for wide > characters > and where SELinux is run in enforcing mode. > > * API 22 introduces an annoying bug on the linker that prints something > like > this when python is started:: > > ``WARNING: linker: libpython3.6m.so.1.0: unused DT entry: type > 0x6ffffffe arg 0x14554``. > > The `termux`_ Android terminal emulator describes this problem at the > end > of its `termux-packages`_ gitlab page and has implemented a > ``termux-elf-cleaner`` tool to strip the useless entries from the ELF > header of executables. > > * API 24 is the first version where the `adb`_ shell is run on the > emulator > as a ``shell`` user instead of the ``root`` user previously, and the > first > version that supports arm64. > > *x86_64* > It seems that no handheld device exists using that architecture. It is > supported because the x86_64 Android emulator runs fast and therefore is a > good candidate as a buildbot worker. > > *NDK 14b* > This release of the NDK is the first one to use `Unified headers`_ fixing > numerous problems that had been fixed by updating the Python configure > script > until now (those changes have been reverted by now). > > Android idiosyncrasies > ====================== > > * The default shell is ``/system/bin/sh``. > * The file system layout is not a traditional unix layout, there is no > ``/tmp`` for example. Most directories have user restricted access, > ``/sdcard`` is mounted as ``noexec`` for example. 
> * The (java) applications are allocated a unix user id and a subdirectory on > ``/data/data``. > * SELinux is run in enforcing mode. > * Shared memory and semaphores are not supported. > * The default encoding is UTF-8. > > Android build system > ==================== > > The Android build system is implemented at `bpo-30386`_ with `PR 1629`_ and > is documented by its `README`_. It provides the following features: > > * To build a distribution for a device or an emulator with a given API level > and a given architecture. > * To start the emulator and > + install the distribution > + start a remote interactive shell > + or run remotely a python command > + or run remotely the buildbottest > * Run gdb on the python process that is running on the emulator with python > pretty-printing. > > The build system adds the ``Android/`` directory and the > ``configure-android`` > script to the root of the Python source directory on the master branch > without > modifying any other file. The build system can be installed, upgraded (i.e. > the > SDK and NDK) and run remotely, through ssh for example. > > The following external libraries, when they are configured in the build > system, > are downloaded from the internet and cross-compiled (only once, on the first > run of the build system) before the cross-compilation of the extension > modules: > > * ``ncurses`` > * ``readline`` > * ``sqlite`` > * ``libffi`` > * ``openssl``, the cross-compilation of openssl fails on x86_64 and arm64 > and > this step is skipped on those architectures. > > The following extension modules are disabled by adding them to the > ``*disabled*`` section of ``Modules/Setup``: > > * ``_uuid``, Android has no uuid/uuid.h header. > * ``grp`` some grp.h functions are not declared. > * ``_crypt``, Android does not have crypt.h. > * ``_ctypes`` on x86_64 where all long double tests fail (`bpo-32202`_) and > on > arm64 (see `bpo-32203`_). > > .. 
[1] On Wikipedia `Android version history`_ lists the correspondence > between > API level, commercial name and version for each release. It also provides > information on the global Android version distribution, see the two > charts > on top. > > .. _`README`: > https://github.com/xdegaye/cpython/blob/bpo-30386/Android/README.rst > .. _`bpo-26865`: https://bugs.python.org/issue26865 > .. _`bpo-30386`: https://bugs.python.org/issue30386 > .. _`bpo-32202`: https://bugs.python.org/issue32202 > .. _`bpo-32203`: https://bugs.python.org/issue32203 > .. _`PR 1629`: https://github.com/python/cpython/pull/1629 > .. _`buildmaster-config PR 26`: > https://github.com/python/buildmaster-config/pull/26 > .. _`Android version history`: > https://en.wikipedia.org/wiki/Android_version_history > .. _`termux`: https://termux.com/ > .. _`termux-packages`: https://gitlab.com/jbwhips883/termux-packages > .. _`adb`: https://developer.android.com/studio/command-line/adb.html > .. _`Unified headers`: > https://android.googlesource.com/platform/ndk.git/+/ndk-r14-release/docs/UnifiedHeaders.md > .. _`HP 41`: https://en.wikipedia.org/wiki/HP-41C > .. vim:filetype=rst:tw=78:ts=8:sts=2:sw=2:et: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From victor.stinner at gmail.com Mon Jan 22 12:38:21 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 22 Jan 2018 18:38:21 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: References: Message-ID: 2018-01-18 21:27 GMT+01:00 Victor Stinner : > I proposed: "Drop FreeBSD 9 and older support:" > > https://bugs.python.org/issue32593 > https://github.com/python/cpython/pull/5232 > > FreeBSD 9 supported ended 1 year ago (December 2016). 
> > FreeBSD support: > > https://www.freebsd.org/security/ > https://www.freebsd.org/security/unsupported.html FYI I merged this PR. I'm open to discussing reverting this change if anyone complains. Victor From victor.stinner at gmail.com Mon Jan 22 12:50:41 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 22 Jan 2018 18:50:41 +0100 Subject: [Python-Dev] Drop support for old unsupported FreeBSD and Linux kernels? In-Reply-To: <20180119102633.1358b54d@fsol> References: <20180119102633.1358b54d@fsol> Message-ID: I asked if we should drop support for Linux kernel 2.6. I now consider that no, we should not. It's not worth it. A colleague proposed to set up a RHEL 6 buildbot which would test Python on Linux 2.6. 2018-01-19 10:26 GMT+01:00 Antoine Pitrou : > What is the problem with supporting Linux 2.6? It increases the size of the code base. This compatibility code has to be maintained. It would even be better to make sure that it's tested ;-) > Do we need to rely on newer features? (which ones?) My pull request which removed support for FreeBSD 9 and older was quite large and so it was interesting to do it: https://github.com/python/cpython/commit/13ff24582c99dfb439b1af7295b401415e7eb05b According to reactions on this thread, I'm not sure anymore that removing a few lines of C code is worth it compared to losing support for Linux 2.6, which seems to be important for many users. Python has some fallback code for "recent" Linux features like SOCK_CLOEXEC, accept4(), getrandom(), epoll_create1(), open() and O_CLOEXEC, etc. The worst part is that Python has to check once per process that open() doesn't ignore the O_CLOEXEC flag. It requires one extra syscall.
But well, compared to the total number of syscalls just for "python3 -c pass", this syscall is likely negligible :-) Victor From alexander.belopolsky at gmail.com Mon Jan 22 13:09:36 2018 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 22 Jan 2018 13:09:36 -0500 Subject: [Python-Dev] Unexpected bytecode difference In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 7:18 PM, Victor Stinner wrote: > It seems like the EXTENDED_ARG doc wasn't updated. I've opened to update the dis module documentation. I have also found a patch (mkfu4.patch) attached to issue 27095 where EXTENDED_ARG is described as .. opcode:: EXTENDED_ARG (ext) EXTENDED_ARG adds ``*ext* * 256`` to the next instruction's argument. This is used for arguments exceeding a byte in size, and can be chained to create 4-byte arguments. I am not sure this is correct. First, multiple EXTENDED_ARG codes seem to add ext * 256 ** i or bit-append ext to arg. Second, it looks like this mechanism allows forming arguments of arbitrary bit length, not just 4-byte arguments. From guido at python.org Mon Jan 22 18:52:15 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 22 Jan 2018 15:52:15 -0800 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: Yury, I am hereby *accepting* the latest version of PEP 567[1]. Congrats! --Guido [1] https://github.com/python/peps/commit/a459539920b9b8c8394ef61058e88a076ef8b133#diff-9d0ccdec754459da5f665cc6c6b2cc06 On Fri, Jan 19, 2018 at 9:30 AM, Guido van Rossum wrote: > There has been useful and effective discussion on several of the finer > points of PEP 567. I think we've arrived at a solid specification, where > every part of the design is well motivated. I plan to accept it on Monday, > unless someone brings up something significant that we've overlooked before > then. 
Please don't rehash issues that have already been debated -- we're > unlikely to reach a different conclusion upon revisiting the same issue > (read the Rejected Ideas section first). > > -- > --Guido van Rossum (python.org/~guido) > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Mon Jan 22 18:55:29 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 22 Jan 2018 18:55:29 -0500 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: Yay! Thank you, Guido! Yury On Mon, Jan 22, 2018 at 6:52 PM, Guido van Rossum wrote: > Yury, > > I am hereby *accepting* the latest version of PEP 567[1]. Congrats! > > --Guido > > [1] > https://github.com/python/peps/commit/a459539920b9b8c8394ef61058e88a076ef8b133#diff-9d0ccdec754459da5f665cc6c6b2cc06 > > On Fri, Jan 19, 2018 at 9:30 AM, Guido van Rossum wrote: >> >> There has been useful and effective discussion on several of the finer >> points of PEP 567. I think we've arrived at a solid specification, where >> every part of the design is well motivated. I plan to accept it on Monday, >> unless someone brings up something significant that we've overlooked before >> then. Please don't rehash issues that have already been debated -- we're >> unlikely to reach a different conclusion upon revisiting the same issue >> (read the Rejected Ideas section first). 
>> >> -- >> --Guido van Rossum (python.org/~guido) > > > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com > From victor.stinner at gmail.com Mon Jan 22 19:23:36 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 23 Jan 2018 01:23:36 +0100 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: PEP 555 looks like a competitor PEP to PEP 567. Since Yury's PEP 567 was approved, I understand that Koos's PEP 555 should be rejected, no? Victor 2018-01-23 0:52 GMT+01:00 Guido van Rossum : > Yury, > > I am hereby *accepting* the latest version of PEP 567[1]. Congrats! > > --Guido > > [1] > https://github.com/python/peps/commit/a459539920b9b8c8394ef61058e88a076ef8b133#diff-9d0ccdec754459da5f665cc6c6b2cc06 > > On Fri, Jan 19, 2018 at 9:30 AM, Guido van Rossum wrote: >> >> There has been useful and effective discussion on several of the finer >> points of PEP 567. I think we've arrived at a solid specification, where >> every part of the design is well motivated. I plan to accept it on Monday,
>> >> -- >> --Guido van Rossum (python.org/~guido) > > > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From ethan at stoneleaf.us Mon Jan 22 19:31:42 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 22 Jan 2018 16:31:42 -0800 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: <5A66826E.4030706@stoneleaf.us> On 01/22/2018 03:52 PM, Guido van Rossum wrote: > I am hereby *accepting* the latest version of PEP 567[1]. Congrats! Congratulations, Yury! -- ~Ethan~ From larry at hastings.org Mon Jan 22 20:33:47 2018 From: larry at hastings.org (Larry Hastings) Date: Mon, 22 Jan 2018 17:33:47 -0800 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: References: Message-ID: <471aaabb-37f5-4c55-119d-c2574bc1ad64@hastings.org> On 01/22/2018 07:51 AM, Brett Cannon wrote: > I can switch off the requirement that holds admins to having to pass > the same status checks as everyone else (there's still a big warning > when you exercise this power), that way you can override the merge if > you want. Not sure if you want to ignore the CI in that case as well. Yes, please. I'll make you a deal: I'll download and apply the patches manually and run the test suite. I'll only merge if the patch doesn't cause test failures. It'd be swell if we could actually fix the builds on Travis CI naturally. I assume that will happen eventually, but I don't want to hold up the rc's for that. Thanks, //arry/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Mon Jan 22 21:09:30 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Jan 2018 12:09:30 +1000 Subject: [Python-Dev] Slipping Python 3.5.5rc1 and 3.4.8rc1 because of a Travis CI issue--can someone make Travis CI happy? In-Reply-To: References: Message-ID: On 22 January 2018 at 20:57, Victor Stinner wrote: > I created an issue with more information: > https://bugs.python.org/issue32620 We shouldn't be requiring a pre-existing Python to build CPython anyway, so it would be nice if we could just delete that step entirely. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Jan 22 21:19:36 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Jan 2018 12:19:36 +1000 Subject: [Python-Dev] Unique loader per module In-Reply-To: <20180120105614.23f77488@presto.wooz.org> References: <1CBF1AB4-A327-4B40-A3ED-BF3F793C5EFA@python.org> <20180120105614.23f77488@presto.wooz.org> Message-ID: On 21 January 2018 at 01:56, Barry Warsaw wrote: > On Jan 05, 2018, at 05:12 PM, Nick Coghlan wrote: > >>I think the main reason you're seeing a problem here is because >>ResourceReader has currently been designed to be implemented directly >>by loaders, rather than being a subcomponent that you can request >>*from* a loader. >> >>If you instead had an indirection API (that could optionally return >>self in the case of non-shared loaders), you'd keep the current >>resource reader method signatures, but the way you'd access the itself >>would be: >> >> resources = module.__spec__.loader.get_resource_reader(module) >> # resources implements the ResourceReader ABC > > BTW, just as a quick followup, this API suggestion was brilliant, Nick. It > solved the problem nicely, and let me add support for ResourceReader to > zipimport with only touching the bare minimum of zipimport.c. 
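The indirection Barry describes can be sketched as follows (an illustration of the pattern only, with hypothetical class names; this is not the actual importlib or zipimport code):

```python
# Sketch of the get_resource_reader() indirection: rather than the
# loader implementing the ResourceReader methods directly, callers ask
# the loader for a reader object.  A loader shared between many modules
# (as zipimport's is) can hand out one reader per module.
import io


class ModuleResourceReader:
    """ResourceReader-style methods for a single module's resources."""

    def __init__(self, resources):
        self._resources = resources  # mapping: resource name -> bytes

    def open_resource(self, resource):
        return io.BytesIO(self._resources[resource])

    def contents(self):
        return iter(self._resources)


class SharedLoader:
    """One loader instance serving many modules, like zipimport's."""

    def __init__(self, data_by_module):
        self._data = data_by_module  # module name -> {resource: bytes}

    # The indirection point: callers do
    #     reader = module.__spec__.loader.get_resource_reader(module_name)
    # instead of treating the loader itself as the reader.
    def get_resource_reader(self, fullname):
        return ModuleResourceReader(self._data[fullname])
```

A loader dedicated to a single module could instead simply `return self` from `get_resource_reader()`, which is the "optionally return self" case Nick mentions above.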
:) As API design rules of thumb go, "Prefer composition to inheritance" is one I've come to respect a *lot* :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Jan 22 21:25:49 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Jan 2018 12:25:49 +1000 Subject: [Python-Dev] Positional-only parameters in Python In-Reply-To: References: <5A5F79DC.7080407@stoneleaf.us> <4861b1a2-0ebb-5f83-bc66-2e9f9a195685@hastings.org> Message-ID: On 20 January 2018 at 15:00, Guido van Rossum wrote: > On Fri, Jan 19, 2018 at 8:47 PM, Nick Coghlan wrote: >> >> On 20 January 2018 at 07:49, Mario Corchero wrote: >> > I am happy to put some work into this (and Pablo Galindo in CC offered >> > to >> > pair on it) but it is not clear for me whether the next step is drafting >> > a >> > new PEP or this is just blocked on "re-evaluating" the current one. >> >> I think that would be a question for Larry, > > I think you meant for Guido. It's not Larry's language (yet :-). I did mean Larry, but I was unduly vague about the specific question I was referring to (I wasn't sure if Larry might want to repurpose PEP 457 itself for this proposal). It sounds like we're going to go with the option of a new PEP though, crediting Larry with the design, which seems like a good option to me. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From larry at hastings.org Tue Jan 23 09:52:12 2018 From: larry at hastings.org (Larry Hastings) Date: Tue, 23 Jan 2018 06:52:12 -0800 Subject: [Python-Dev] [RELEASED] Python 3.4.8rc1 and Python 3.5.5rc1 are now available Message-ID: <91bc5411-3a98-ef11-fc25-ee3fa75941ba@hastings.org> On behalf of the Python development community, I'm pleased to announce the availability of Python 3.4.8rc1 and Python 3.5.5rc1. Both Python 3.4 and 3.5 are in "security fixes only" mode. Both versions only accept security fixes, not conventional bug fixes, and both releases are source-only. 
You can find Python 3.4.8rc1 here: https://www.python.org/downloads/release/python-348rc1/ And you can find Python 3.5.5rc1 here: https://www.python.org/downloads/release/python-355rc1/ Happy Pythoning, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Jan 23 12:01:16 2018 From: brett at python.org (Brett Cannon) Date: Tue, 23 Jan 2018 17:01:16 +0000 Subject: [Python-Dev] Support of the Android platform In-Reply-To: References: Message-ID: On Mon, 22 Jan 2018 at 09:29 Victor Stinner wrote: > Hi, > > I'm still talking with Paul Peny (pmpp on IRC) who is trying to build > the master branch of Python on Android, using cross-compilation or > directly on an Android device. I started to take notes since Android > is a complex platform and it's not easy for me to remember > everything. > > http://vstinner.readthedocs.io/python_android.html > > Paul would like to support Android 4.4 Kitkat (API 19) just because > it's possible to find cheap devices running Android, but usually only > with old Android versions. Technically, it doesn't seem difficult to > support API 19+ (instead of 21+), a few tiny patches are needed. But I > don't want to have a "full support" of API 19+, only basic support > like "make sure that the compilation doesn't fail", not "all tests > must pass". > > It seems like sys.platform == 'android' would be more appropriate > since Android is not Linux: different libc, different filesystems, > etc. > I've had a similar thought myself. -Brett > > While Xavier promotes cross-compilation, Paul would like to build > Python directly on Android to get pip and wheels. > > Honestly, I have no strong opinion, since I don't know Android well. > I'm trying to help everybody working on the Android support. IMHO it's > fine to support multiple ways to build Python for Android. It's not > like there is an obvious option which has no drawback...
Cross > compilation is complex, getting a C compiler on Android also seems to > be complex. From my point of view, compared to a common Fedora Linux, > Android doesn't seem easy to use to develop on CPython... > > Victor > > 2017-12-10 15:19 GMT+01:00 Xavier de Gaye : > > The following note is a proposal to add the support of the Android > platform. > > > > The note is easier to read with clickable links at > > https://github.com/xdegaye/cagibi/blob/master/doc/android_support.rst > > > > Motivations > > =========== > > > > * Android is ubiquitous. > > * This would be the first platform supported by Python that is > > cross-compiled, > > thanks to many contributors. > > * Although the Android operating system is linux, it is different from > most > > linux platforms, for example it does not use GNU libc and runs SELinux > in > > enforcing mode. Therefore supporting this platform would make Python > more > > robust and also would allow testing it on arm 64-bit processors. > > * Python running on Android is also a handheld calculator, a successor of > > the > > slide rule and the `HP 41`_. > > > > Current status > > ============== > > > > * The Python test suite succeeds when run on Android emulators using > > buildbot > > strenuous settings with the following architectures on API 24: x86, > > x86_64, > > armv7 and arm64. > > * The `Android build system`_ is described in another section. > > * The `buildmaster-config PR 26`_ proposes to update ``master.cfg`` to > > enable > > buildbots to run a given Android API and architecture on the emulators. > > * The Android emulator is actually ``qemu``, so the test suites for x86 > and > > x86_64 last about the same time as the test suite run natively when the > > processor of the build system is of the x86 family. The test suites for > > the > > arm architectures last much longer: about 8 hours for arm64 and 10 > hours > > for > > armv7 on a four years old laptop. 
> > * The changes that have been made to achieve this status are listed in > > `bpo-26865`_, the Android meta-issue. > > * Given the cpu resources required to run the test suite on the arm > > emulators, > > it may be difficult to find a contributed buildbot worker. So it > remains > > to > > find the hardware to run these buildbots. > > > > Proposal > > ======== > > > > Support the Android platform on API 24 [1]_ for the x86_64, armv7 and > arm64 > > architectures built with NDK 14b. > > > > *API 24* > > * API 21 is the first version to provide usable support for wide > > characters > > and where SELinux is run in enforcing mode. > > > > * API 22 introduces an annoying bug on the linker that prints something > > like > > this when python is started:: > > > > ``WARNING: linker: libpython3.6m.so.1.0: unused DT entry: type > > 0x6ffffffe arg 0x14554``. > > > > The `termux`_ Android terminal emulator describes this problem at the > > end > > of its `termux-packages`_ gitlab page and has implemented a > > ``termux-elf-cleaner`` tool to strip the useless entries from the ELF > > header of executables. > > > > * API 24 is the first version where the `adb`_ shell is run on the > > emulator > > as a ``shell`` user instead of the ``root`` user previously, and the > > first > > version that supports arm64. > > > > *x86_64* > > It seems that no handheld device exists using that architecture. It is > > supported because the x86_64 Android emulator runs fast and therefore > is a > > good candidate as a buildbot worker. > > > > *NDK 14b* > > This release of the NDK is the first one to use `Unified headers`_ > fixing > > numerous problems that had been fixed by updating the Python configure > > script > > until now (those changes have been reverted by now). > > > > Android idiosyncrasies > > ====================== > > > > * The default shell is ``/system/bin/sh``. > > * The file system layout is not a traditional unix layout, there is no > > ``/tmp`` for example. 
Most directories have user restricted access, > > ``/sdcard`` is mounted as ``noexec`` for example. > > * The (java) applications are allocated a unix user id and a > subdirectory on > > ``/data/data``. > > * SELinux is run in enforcing mode. > > * Shared memory and semaphores are not supported. > > * The default encoding is UTF-8. > > > > Android build system > > ==================== > > > > The Android build system is implemented at `bpo-30386`_ with `PR 1629`_ > and > > is documented by its `README`_. It provides the following features: > > > > * To build a distribution for a device or an emulator with a given API > level > > and a given architecture. > > * To start the emulator and > > + install the distribution > > + start a remote interactive shell > > + or run remotely a python command > > + or run remotely the buildbottest > > * Run gdb on the python process that is running on the emulator with > python > > pretty-printing. > > > > The build system adds the ``Android/`` directory and the > > ``configure-android`` > > script to the root of the Python source directory on the master branch > > without > > modifying any other file. The build system can be installed, upgraded > (i.e. > > the > > SDK and NDK) and run remotely, through ssh for example. > > > > The following external libraries, when they are configured in the build > > system, > > are downloaded from the internet and cross-compiled (only once, on the > first > > run of the build system) before the cross-compilation of the extension > > modules: > > > > * ``ncurses`` > > * ``readline`` > > * ``sqlite`` > > * ``libffi`` > > * ``openssl``, the cross-compilation of openssl fails on x86_64 and arm64 > > and > > this step is skipped on those architectures. > > > > The following extension modules are disabled by adding them to the > > ``*disabled*`` section of ``Modules/Setup``: > > > > * ``_uuid``, Android has no uuid/uuid.h header. > > * ``grp`` some grp.h functions are not declared. 
> > * ``_crypt``, Android does not have crypt.h. > > * ``_ctypes`` on x86_64 where all long double tests fail (`bpo-32202`_) > and > > on > > arm64 (see `bpo-32203`_). > > > > .. [1] On Wikipedia `Android version history`_ lists the correspondence > > between > > API level, commercial name and version for each release. It also > provides > > information on the global Android version distribution, see the two > > charts > > on top. > > > > .. _`README`: > > https://github.com/xdegaye/cpython/blob/bpo-30386/Android/README.rst > > .. _`bpo-26865`: https://bugs.python.org/issue26865 > > .. _`bpo-30386`: https://bugs.python.org/issue30386 > > .. _`bpo-32202`: https://bugs.python.org/issue32202 > > .. _`bpo-32203`: https://bugs.python.org/issue32203 > > .. _`PR 1629`: https://github.com/python/cpython/pull/1629 > > .. _`buildmaster-config PR 26`: > > https://github.com/python/buildmaster-config/pull/26 > > .. _`Android version history`: > > https://en.wikipedia.org/wiki/Android_version_history > > .. _`termux`: https://termux.com/ > > .. _`termux-packages`: https://gitlab.com/jbwhips883/termux-packages > > .. _`adb`: https://developer.android.com/studio/command-line/adb.html > > .. _`Unified headers`: > > > https://android.googlesource.com/platform/ndk.git/+/ndk-r14-release/docs/UnifiedHeaders.md > > .. _`HP 41`: https://en.wikipedia.org/wiki/HP-41C > > .. 
vim:filetype=rst:tw=78:ts=8:sts=2:sw=2:et: From victor.stinner at gmail.com Tue Jan 23 12:17:05 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 23 Jan 2018 18:17:05 +0100 Subject: [Python-Dev] Support of the Android platform In-Reply-To: References: Message-ID: Ok, I created https://bugs.python.org/issue32637 "Android: set sys.platform and os.name to android" Victor 2018-01-23 18:01 GMT+01:00 Brett Cannon : > On Mon, 22 Jan 2018 at 09:29 Victor Stinner wrote: >> [...] >> It seems like sys.platform == 'android' would be more appropriate >> since Android is not Linux: different libc, different filesystems, >> etc. > > I've had a similar thought myself. > > -Brett >> [...] From steve at holdenweb.com Tue Jan 23 12:34:41 2018 From: steve at holdenweb.com (Steve Holden) Date: Tue, 23 Jan 2018 17:34:41 +0000 Subject: [Python-Dev] Support of the Android platform In-Reply-To: References: Message-ID: For this to move forward more rapidly it would really help if there were a utility VM appliance available with ready-installed support and an SDK. Or at least that would lower the impedance to joining the development effort. Does any such beast by chance exist?
S Steve Holden On Tue, Jan 23, 2018 at 5:17 PM, Victor Stinner wrote: > Ok, I created https://bugs.python.org/issue32637 > "Android: set sys.platform and os.name to android" > > Victor > > 2018-01-23 18:01 GMT+01:00 Brett Cannon : > > > > > > On Mon, 22 Jan 2018 at 09:29 Victor Stinner > > wrote: > >> > >> Hi, > >> > >> I'm still talking with Paul Peny (pmpp on IRC) who is trying to build > >> the master branch of Python on Android, using cross-compilation or > >> directly on an Android device. I started to took notes since Android > >> is a complex platforms and it's not easy for me to remember > >> everything. > >> > >> http://vstinner.readthedocs.io/python_android.html > >> > >> Paul would like to support Android 4.4 Kitkat (API 19) just because > >> it's possible to find cheap devices running Android, but usually only > >> with old Android versions. Technically, it doesn't see difficult to > >> support API 19+ (instead of 21+), a few tiny patches are needed. But I > >> don't want to have a "full support" of API 19+, only basic support > >> like "make sure that the compilation doesn't fail", not "all tests > >> must pass". > >> > >> It seems like sys.platform == 'android' would be more appropriate > >> since Android is not Linux: different libc, different filesystems, > >> etc. > > > > > > I've had a similar thought myself. > > > > -Brett > > > >> > >> > >> While Xavier promotes cross-compilation, Paul would like to build > >> Python directly on Android to get pip and wheels. > >> > >> Honestly, I have no strong opinion, since I don't know well Android. > >> I'm trying to help everybody working on the Android support. IMHO it's > >> fine to support multiple ways to build Python for Android. It's not > >> like there is very obvious option which has no drawback... Cross > >> compilation is complex, getting a C compiler on Android also seems to > >> be complex. 
From my point of view, compared to a common Fedora Linux, > >> Android doesn't seem easy to use to develop on CPython... > >> > >> Victor > >> > >> 2017-12-10 15:19 GMT+01:00 Xavier de Gaye : > >> > The following note is a proposal to add the support of the Android > >> > platform. > >> > > >> > The note is easier to read with clickable links at > >> > https://github.com/xdegaye/cagibi/blob/master/doc/android_support.rst > >> > > >> > Motivations > >> > =========== > >> > > >> > * Android is ubiquitous. > >> > * This would be the first platform supported by Python that is > >> > cross-compiled, > >> > thanks to many contributors. > >> > * Although the Android operating system is linux, it is different from > >> > most > >> > linux platforms, for example it does not use GNU libc and runs > SELinux > >> > in > >> > enforcing mode. Therefore supporting this platform would make Python > >> > more > >> > robust and also would allow testing it on arm 64-bit processors. > >> > * Python running on Android is also a handheld calculator, a successor > >> > of > >> > the > >> > slide rule and the `HP 41`_. > >> > > >> > Current status > >> > ============== > >> > > >> > * The Python test suite succeeds when run on Android emulators using > >> > buildbot > >> > strenuous settings with the following architectures on API 24: x86, > >> > x86_64, > >> > armv7 and arm64. > >> > * The `Android build system`_ is described in another section. > >> > * The `buildmaster-config PR 26`_ proposes to update ``master.cfg`` to > >> > enable > >> > buildbots to run a given Android API and architecture on the > >> > emulators. > >> > * The Android emulator is actually ``qemu``, so the test suites for > x86 > >> > and > >> > x86_64 last about the same time as the test suite run natively when > >> > the > >> > processor of the build system is of the x86 family. 
The test suites > >> > for > >> > the > >> > arm architectures last much longer: about 8 hours for arm64 and 10 > >> > hours > >> > for > >> > armv7 on a four years old laptop. > >> > * The changes that have been made to achieve this status are listed in > >> > `bpo-26865`_, the Android meta-issue. > >> > * Given the cpu resources required to run the test suite on the arm > >> > emulators, > >> > it may be difficult to find a contributed buildbot worker. So it > >> > remains > >> > to > >> > find the hardware to run these buildbots. > >> > > >> > Proposal > >> > ======== > >> > > >> > Support the Android platform on API 24 [1]_ for the x86_64, armv7 and > >> > arm64 > >> > architectures built with NDK 14b. > >> > > >> > *API 24* > >> > * API 21 is the first version to provide usable support for wide > >> > characters > >> > and where SELinux is run in enforcing mode. > >> > > >> > * API 22 introduces an annoying bug on the linker that prints > >> > something > >> > like > >> > this when python is started:: > >> > > >> > ``WARNING: linker: libpython3.6m.so.1.0: unused DT entry: type > >> > 0x6ffffffe arg 0x14554``. > >> > > >> > The `termux`_ Android terminal emulator describes this problem at > >> > the > >> > end > >> > of its `termux-packages`_ gitlab page and has implemented a > >> > ``termux-elf-cleaner`` tool to strip the useless entries from the > >> > ELF > >> > header of executables. > >> > > >> > * API 24 is the first version where the `adb`_ shell is run on the > >> > emulator > >> > as a ``shell`` user instead of the ``root`` user previously, and > the > >> > first > >> > version that supports arm64. > >> > > >> > *x86_64* > >> > It seems that no handheld device exists using that architecture. It > is > >> > supported because the x86_64 Android emulator runs fast and > therefore > >> > is a > >> > good candidate as a buildbot worker. 
> >> > > >> > *NDK 14b* > >> > This release of the NDK is the first one to use `Unified headers`_ > >> > fixing > >> > numerous problems that had been fixed by updating the Python > configure > >> > script > >> > until now (those changes have been reverted by now). > >> > > >> > Android idiosyncrasies > >> > ====================== > >> > > >> > * The default shell is ``/system/bin/sh``. > >> > * The file system layout is not a traditional unix layout, there is no > >> > ``/tmp`` for example. Most directories have user restricted access, > >> > ``/sdcard`` is mounted as ``noexec`` for example. > >> > * The (java) applications are allocated a unix user id and a > >> > subdirectory on > >> > ``/data/data``. > >> > * SELinux is run in enforcing mode. > >> > * Shared memory and semaphores are not supported. > >> > * The default encoding is UTF-8. > >> > > >> > Android build system > >> > ==================== > >> > > >> > The Android build system is implemented at `bpo-30386`_ with `PR > 1629`_ > >> > and > >> > is documented by its `README`_. It provides the following features: > >> > > >> > * To build a distribution for a device or an emulator with a given API > >> > level > >> > and a given architecture. > >> > * To start the emulator and > >> > + install the distribution > >> > + start a remote interactive shell > >> > + or run remotely a python command > >> > + or run remotely the buildbottest > >> > * Run gdb on the python process that is running on the emulator with > >> > python > >> > pretty-printing. > >> > > >> > The build system adds the ``Android/`` directory and the > >> > ``configure-android`` > >> > script to the root of the Python source directory on the master branch > >> > without > >> > modifying any other file. The build system can be installed, upgraded > >> > (i.e. > >> > the > >> > SDK and NDK) and run remotely, through ssh for example. 
> >> > > >> > The following external libraries, when they are configured in the > build > >> > system, > >> > are downloaded from the internet and cross-compiled (only once, on the > >> > first > >> > run of the build system) before the cross-compilation of the extension > >> > modules: > >> > > >> > * ``ncurses`` > >> > * ``readline`` > >> > * ``sqlite`` > >> > * ``libffi`` > >> > * ``openssl``, the cross-compilation of openssl fails on x86_64 and > >> > arm64 > >> > and > >> > this step is skipped on those architectures. > >> > > >> > The following extension modules are disabled by adding them to the > >> > ``*disabled*`` section of ``Modules/Setup``: > >> > > >> > * ``_uuid``, Android has no uuid/uuid.h header. > >> > * ``grp`` some grp.h functions are not declared. > >> > * ``_crypt``, Android does not have crypt.h. > >> > * ``_ctypes`` on x86_64 where all long double tests fail > (`bpo-32202`_) > >> > and > >> > on > >> > arm64 (see `bpo-32203`_). > >> > > >> > .. [1] On Wikipedia `Android version history`_ lists the > correspondence > >> > between > >> > API level, commercial name and version for each release. It also > >> > provides > >> > information on the global Android version distribution, see the two > >> > charts > >> > on top. > >> > > >> > .. _`README`: > >> > https://github.com/xdegaye/cpython/blob/bpo-30386/Android/README.rst > >> > .. _`bpo-26865`: https://bugs.python.org/issue26865 > >> > .. _`bpo-30386`: https://bugs.python.org/issue30386 > >> > .. _`bpo-32202`: https://bugs.python.org/issue32202 > >> > .. _`bpo-32203`: https://bugs.python.org/issue32203 > >> > .. _`PR 1629`: https://github.com/python/cpython/pull/1629 > >> > .. _`buildmaster-config PR 26`: > >> > https://github.com/python/buildmaster-config/pull/26 > >> > .. _`Android version history`: > >> > https://en.wikipedia.org/wiki/Android_version_history > >> > .. _`termux`: https://termux.com/ > >> > .. _`termux-packages`: https://gitlab.com/jbwhips883/termux-packages > >> > .. 
_`adb`: https://developer.android.com/studio/command-line/adb.html > >> > .. _`Unified headers`: > >> > > >> > https://android.googlesource.com/platform/ndk.git/+/ndk- > r14-release/docs/UnifiedHeaders.md > >> > .. _`HP 41`: https://en.wikipedia.org/wiki/HP-41C > >> > .. vim:filetype=rst:tw=78:ts=8:sts=2:sw=2:et: > >> > _______________________________________________ > >> > Python-Dev mailing list > >> > Python-Dev at python.org > >> > https://mail.python.org/mailman/listinfo/python-dev > >> > Unsubscribe: > >> > > >> > https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> https://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: > >> https://mail.python.org/mailman/options/python-dev/brett%40python.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > steve%40holdenweb.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Tue Jan 23 12:56:48 2018 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Tue, 23 Jan 2018 19:56:48 +0200 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: On Tue, Jan 23, 2018 at 2:23 AM, Victor Stinner wrote: > The PEP 555 looks a competitor PEP of the PEP 567. Since the Yury's > PEP 567 was approved, I understand that Koos's PEP 555 should be > rejected, no? > > If Guido prefers to reject it?, I assume he'll say so. Anyway, it's still waiting for me to add references to earlier discussions and perhaps summaries of some discussions. 
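For anyone catching up on PEP 567 alongside this thread, the accepted API is small; here is a minimal sketch (the "request_id" variable name is illustrative, not from the PEP):

```python
# Minimal sketch of the PEP 567 contextvars API (available since Python 3.7).
from contextvars import ContextVar, copy_context

# A context variable with a default value.
request_id = ContextVar("request_id", default=None)

def handler():
    token = request_id.set("abc123")   # set() returns a Token...
    try:
        assert request_id.get() == "abc123"
    finally:
        request_id.reset(token)        # ...which restores the previous value

handler()
assert request_id.get() is None        # back to the default

# copy_context() snapshots the current Context; run() executes inside it,
# so mutations made by handler() do not leak into the caller's context.
copy_context().run(handler)
```

The Token-based reset(), rather than PEP 555's with-block scoping, is exactly the design point being weighed here.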
Personally, I need to find some time to properly catch up with the latest discussion to figure out why PEP 567 is better than PEP 555 (or similar with .set(..), or PEP 550), despite problems of reasoning about the scopes of variables and unset tokens. In any case, congrats, Yury! This hasn't been an easy one for any of us, and it seems like the implementation required quite a beastly patch too in the end. -- Koos -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Tue Jan 23 19:48:21 2018 From: larry at hastings.org (Larry Hastings) Date: Tue, 23 Jan 2018 16:48:21 -0800 Subject: [Python-Dev] Intention to accept PEP 567 (Context Variables) In-Reply-To: References: Message-ID: <3dedf6a0-ce63-7380-bff2-4035f997174f@hastings.org> On 01/23/2018 09:56 AM, Koos Zevenhoven wrote: > On Tue, Jan 23, 2018 at 2:23 AM, Victor Stinner > >wrote: > > The PEP 555 looks a competitor PEP of the PEP 567. Since the Yury's > PEP 567 was approved, I understand that Koos's PEP 555 should be > rejected, no? > > > If Guido prefers to reject it, I assume he'll say so. Guido has already made his position clear: On 01/10/2018 08:21 AM, Guido van Rossum wrote: > The current status of PEP 555 is "Withdrawn". I have no interest in > considering it any more, so if you'd rather see a decision from me > I'll be happy to change it to "Rejected". Yes, there is no future for PEP 555. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at holdenweb.com Wed Jan 24 09:43:17 2018 From: steve at holdenweb.com (Steve Holden) Date: Wed, 24 Jan 2018 14:43:17 +0000 Subject: [Python-Dev] CLion IDE Message-ID: I've just started using CLion from JetBrains, and I wondered if anyone on the list is using this product in CPython development. Links to any guidance would be useful. regards Steve Holden -------------- next part -------------- An HTML attachment was scrubbed...
URL: From victor.stinner at gmail.com Wed Jan 24 11:18:36 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 24 Jan 2018 17:18:36 +0100 Subject: [Python-Dev] PEP 432 progress: Python initialization In-Reply-To: References: Message-ID: Hi, FYI I pushed a new change related to PEP 432: it is now possible to completely skip the calculation of paths, especially sys.path. If you fill in the "Path configuration outputs" fields of PyCoreConfig (see below), _PyPathConfig_Init() should not be called. It should be helpful for some users when Python is embedded. Sadly, this feature will not be exposed before Python 3.8; the PEP 432 APIs are currently private. The _PyCoreConfig structure contains most of the parameters (not all yet) needed by Py_Initialize():

    typedef struct {
        int install_signal_handlers;  /* Install signal handlers? -1 means unset */
        int ignore_environment;       /* -E, Py_IgnoreEnvironmentFlag */
        int use_hash_seed;            /* PYTHONHASHSEED=x */
        unsigned long hash_seed;
        const char *allocator;        /* Memory allocator: _PyMem_SetupAllocators() */
        int dev_mode;                 /* PYTHONDEVMODE, -X dev */
        int faulthandler;             /* PYTHONFAULTHANDLER, -X faulthandler */
        int tracemalloc;              /* PYTHONTRACEMALLOC, -X tracemalloc=N */
        int import_time;              /* PYTHONPROFILEIMPORTTIME, -X importtime */
        int show_ref_count;           /* -X showrefcount */
        int show_alloc_count;         /* -X showalloccount */
        int dump_refs;                /* PYTHONDUMPREFS */
        int malloc_stats;             /* PYTHONMALLOCSTATS */
        int coerce_c_locale;          /* PYTHONCOERCECLOCALE, -1 means unknown */
        int coerce_c_locale_warn;     /* PYTHONCOERCECLOCALE=warn */
        int utf8_mode;                /* PYTHONUTF8, -X utf8; -1 means unknown */

        wchar_t *program_name;        /* Program name, see also Py_GetProgramName() */
        int argc;                     /* Number of command line arguments, -1 means unset */
        wchar_t **argv;               /* Command line arguments */
        wchar_t *program;             /* argv[0] or "" */
        int nxoption;                 /* Number of -X options */
        wchar_t **xoptions;           /* -X options */
        int nwarnoption;              /* Number of warnings options */
        wchar_t **warnoptions;        /* Warnings options */

        /* Path configuration inputs */
        wchar_t *module_search_path_env;  /* PYTHONPATH environment variable */
        wchar_t *home;                    /* PYTHONHOME environment variable,
                                             see also Py_SetPythonHome(). */

        /* Path configuration outputs */
        int nmodule_search_path;          /* Number of sys.path paths, -1 means unset */
        wchar_t **module_search_paths;    /* sys.path paths */
        wchar_t *executable;              /* sys.executable */
        wchar_t *prefix;                  /* sys.prefix */
        wchar_t *base_prefix;             /* sys.base_prefix */
        wchar_t *exec_prefix;             /* sys.exec_prefix */
        wchar_t *base_exec_prefix;        /* sys.base_exec_prefix */

        /* Private fields */
        int _disable_importlib;           /* Needed by freeze_importlib */
    } _PyCoreConfig;

and:

    typedef struct {
        int install_signal_handlers;  /* Install signal handlers? -1 means unset */
        PyObject *argv;               /* sys.argv list, can be NULL */
        PyObject *executable;         /* sys.executable str */
        PyObject *prefix;             /* sys.prefix str */
        PyObject *base_prefix;        /* sys.base_prefix str, can be NULL */
        PyObject *exec_prefix;        /* sys.exec_prefix str */
        PyObject *base_exec_prefix;   /* sys.base_exec_prefix str, can be NULL */
        PyObject *warnoptions;        /* sys.warnoptions list, can be NULL */
        PyObject *xoptions;           /* sys._xoptions dict, can be NULL */
        PyObject *module_search_path; /* sys.path list */
    } _PyMainInterpreterConfig;

Victor From songofacandy at gmail.com Thu Jan 25 05:42:07 2018 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 25 Jan 2018 19:42:07 +0900 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit Message-ID: Hi. Devguide says: """ Replace the reference to GitHub pull request #NNNN with GH-NNNN. If the title is too long, the pull request number can be added to the message body. """ https://devguide.python.org/gitbootcamp/#accepting-and-merging-a-pull-request But there are more #NNNN than GH-NNNN in commit log. https://github.com/python/cpython/commits/master Where should we go? Encourage GH-NNNN? or abandon it and use default #NNNN?
Regards, -- INADA Naoki From victor.stinner at gmail.com Thu Jan 25 06:26:10 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 25 Jan 2018 12:26:10 +0100 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: The GH prefix avoids confusion between bugs.python.org and GitHub, but GitHub puts #xxx in the generated commit message on merge... Each commit message has to be manually edited. It would be better to get GH- automatically. Victor Le 25 janv. 2018 11:44, "INADA Naoki" a écrit : > Hi. > > Devguide says: > > """ > Replace the reference to GitHub pull request #NNNN with GH-NNNN. If > the title is too long, the pull request number can be added to the > message body. > """ > > https://devguide.python.org/gitbootcamp/#accepting-and- > merging-a-pull-request > > But there are more #NNNN than GH-NNNN in commit log. > https://github.com/python/cpython/commits/master > > Where should we go? > Encourage GH-NNNN? or abandon it and use default #NNNN? > > Regards, > -- > INADA Naoki > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berker.peksag at gmail.com Thu Jan 25 07:20:52 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Thu, 25 Jan 2018 15:20:52 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 1:42 PM, INADA Naoki wrote: > Hi. > > Devguide says: > > """ > Replace the reference to GitHub pull request #NNNN with GH-NNNN. If > the title is too long, the pull request number can be added to the > message body.
> """ > > https://devguide.python.org/gitbootcamp/#accepting-and-merging-a-pull-request > > But there are more #NNNN than GH-NNNN in commit log. > https://github.com/python/cpython/commits/master > > Where should we go? > Encourage GH-NNNN? or abandon it and use default #NNNN? I'd personally drop both GH-NNNN and #NNNN markers. The number of the PR is already linked to the commit on GitHub: https://www.dropbox.com/s/zzm9f56485pbl1v/Screenshot%20from%202018-01-25%2015%3A14%3A28.png?dl=0 You can even see both styles in the same commit (especially in backport PRs) bpo-42: Fix spam eggs (GH-2341) (#2211) --Berker From mariatta.wijaya at gmail.com Thu Jan 25 08:58:53 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 05:58:53 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: It has to be manually edited right before you commit/merge on GitHub. I don't think it can be automatically changed? Unless we have some kind of post commit hook to amend the commit message. I've been changing it to GH- , so does miss-islington when she backports. If you see the mixed GH- and # in the backports PR, it's because miss-islington converted the first one. On Jan 25, 2018 4:21 AM, "Berker Peksa?" wrote: > On Thu, Jan 25, 2018 at 1:42 PM, INADA Naoki > wrote: > > Hi. > > > > Devguide says: > > > > """ > > Replace the reference to GitHub pull request #NNNN with GH-NNNN. If > > the title is too long, the pull request number can be added to the > > message body. > > """ > > > > https://devguide.python.org/gitbootcamp/#accepting-and- > merging-a-pull-request > > > > But there are more #NNNN than GH-NNNN in commit log. > > https://github.com/python/cpython/commits/master > > > > Where should we go? > > Encourage GH-NNNN? or abandon it and use default #NNNN? > > I'd personally drop both GH-NNNN and #NNNN markers. 
The number of the > PR is already linked to the commit on GitHub: > https://www.dropbox.com/s/zzm9f56485pbl1v/Screenshot% > 20from%202018-01-25%2015%3A14%3A28.png?dl=0 > > You can even see both styles in the same commit (especially in backport > PRs) > > bpo-42: Fix spam eggs (GH-2341) (#2211) > > --Berker > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariatta.wijaya%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berker.peksag at gmail.com Thu Jan 25 10:29:33 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Thu, 25 Jan 2018 18:29:33 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 4:58 PM, Mariatta Wijaya wrote: > It has to be manually edited right before you commit/merge on GitHub. > I don't think it can be automatically changed? Unless we have some kind of > post commit hook to amend the commit message. Perhaps it's possible to edit both title and body fields [1] automatically before merging via a browser extension. I can try to add it to https://github.com/berkerpeksag/cpython-bpo-linkify Of course, we would still need to convince people to install it :) --Berker [1] https://www.dropbox.com/s/tbf7j8jm66t707r/Screenshot%20from%202018-01-25%2018%3A26%3A05.png?dl=0 From mariatta.wijaya at gmail.com Thu Jan 25 13:03:07 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 10:03:07 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: > > Of course, we would still need to convince people to install it :) Right, that's the challenge :) I personally use Chrome (!) and I've been using your Chrome extension, so thank you! 
However, I don't feel comfortable making this available only for a specific browser user, feels exclusionary to me. Also, sometimes I merge from my phone where there's no chrome extension, (maybe I really shouldn't be doing that?). I think the solution should be something not webbrowser specific. One idea is maybe have a bot to do the squash commit, for example by commenting on GitHub: @merge-bot merge So core devs can do the above instead of pressing the commit button. Any thoughts on this? In the meantime, committers, please try to remember and change the # into GH- :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.ware+pydev at gmail.com Thu Jan 25 13:12:58 2018 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Thu, 25 Jan 2018 12:12:58 -0600 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 12:03 PM, Mariatta Wijaya wrote: >> Of course, we would still need to convince people to install it :) > > > Right, that's the challenge :) > I personally use Chrome (!) and I've been using your Chrome extension, so > thank you! > However, I don't feel comfortable making this available only for a specific > browser user, feels exclusionary to me. > Also, sometimes I merge from my phone where there's no chrome extension, > (maybe I really shouldn't be doing that?). A large part of Brett's push for moving to a PR workflow was to be able to merge patches from a tablet on the beach, so I see no reason not to merge from a phone if you can :) > I think the solution should be something not webbrowser specific. > > One idea is maybe have a bot to do the squash commit, for example by > commenting on GitHub: > @merge-bot merge > > So core devs can do the above instead of pressing the commit button. Any > thoughts on this? > > In the meantime, committers, please try to remember and change the # into > GH- :) +1 to everything here. 
-- Zach From berker.peksag at gmail.com Thu Jan 25 13:31:21 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Thu, 25 Jan 2018 21:31:21 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 9:03 PM, Mariatta Wijaya wrote: >> Of course, we would still need to convince people to install it :) > > > Right, that's the challenge :) > I personally use Chrome (!) and I've been using your Chrome extension, so > thank you! > However, I don't feel comfortable making this available only for a specific > browser user, feels exclusionary to me. Thanks to Milan Oberkirch, the version in the master branch uses the WebExtension API, so it should be easy to make it run on Firefox (I don't know if it works on other browsers though) > Also, sometimes I merge from my phone where there's no chrome extension, > (maybe I really shouldn't be doing that?). > > I think the solution should be something not webbrowser specific. > > One idea is maybe have a bot to do the squash commit, for example by > commenting on GitHub: > @merge-bot merge > > So core devs can do the above instead of pressing the commit button. Any > thoughts on this? That would be the best solution (I think it would solve https://github.com/python/miss-islington/issues/16 too) but it's more complicated than the extension idea :) I have some time to work on it if you'd like to implement the mergebot idea. --Berker From mariatta.wijaya at gmail.com Thu Jan 25 13:38:39 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 10:38:39 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: > > That would be best solution (I think it would solve > https://github.com/python/miss-islington/issues/16 too) but it's more > complicated than the extension idea :) I have some time work on it if > you'd like to implement the mergebot idea. +1 for the mergebot!
:) New bot or miss-islington's new job? Still +1 either way, as long as other core devs are fine with it too :) Mariatta Wijaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Jan 25 13:40:41 2018 From: brett at python.org (Brett Cannon) Date: Thu, 25 Jan 2018 18:40:41 +0000 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, 25 Jan 2018 at 10:14 Zachary Ware wrote: > On Thu, Jan 25, 2018 at 12:03 PM, Mariatta Wijaya > wrote: > >> Of course, we would still need to convince people to install it :) > > > > > > Right, that's the challenge :) > > I personally use Chrome (!) and I've been using your Chrome extension, so > > thank you! > > However, I don't feel comfortable making this available only for a > specific > > browser user, feels exclusionary to me. > I personally use Firefox, so browser-specific, while better than nothing, won't cover all cases. > > Also, sometimes I merge from my phone where there's no chrome extension, > > (maybe I really shouldn't be doing that?). > > A large part of Brett's push for moving to a PR workflow was to be > able to merge patches from a tablet on the beach, so I see no reason > not to merge from a phone if you can :) > There is https://github.com/python/bedevere/issues/14 which is tracking the idea of leaving a follow-up comment if a commit occurs w/o changing the `#-` prefix to `GH-` We definitely don't want to leave `#-` prefixes because those are ambiguous and Python will outlast GitHub and thus what `#-` means would have to represent GitHub forever. So either we namespace the PR numbers w/ `GH-` which GitHub will still automatically link to, or we drop them entirely and rely on the issue number being the point of reference entirely. Those are the two options I see for us going forward. > > > > I think the solution should be something not webbrowser specific. 
> > > > One idea is maybe have a bot to do the squash commit, for example by > > commenting on GitHub: > > @merge-bot merge > > > > So core devs can do the above instead of pressing the commit button. Any > > thoughts on this? > > > > In the meantime, committers, please try to remember and change the # into > > GH- :) > > +1 to everything here. > I'm fine with a bot handling the merge if people in general are okay with the idea and someone is up for coding it up. -------------- next part -------------- An HTML attachment was scrubbed... URL: From berker.peksag at gmail.com Thu Jan 25 13:49:32 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Thu, 25 Jan 2018 21:49:32 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 9:38 PM, Mariatta Wijaya wrote: >> That would be best solution (I think it would solve >> https://github.com/python/miss-islington/issues/16 too) but it's more >> complicated than the extension idea :) I have some time work on it if >> you'd like to implement the mergebot idea. > > > +1 for the mergebot! :) > > New bot or miss-islington's new job? > > Still +1 either way, as long as other core devs are fine with it too :) I'm not familiar with miss-islington's codebase. Can we reuse some parts of miss-islington in the new bot? If we can, let's implement it in miss-islington (perhaps as a new mode?) --Berker From brett at python.org Thu Jan 25 13:53:17 2018 From: brett at python.org (Brett Cannon) Date: Thu, 25 Jan 2018 18:53:17 +0000 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, 25 Jan 2018 at 10:51 Berker Peksa? 
wrote: > On Thu, Jan 25, 2018 at 9:38 PM, Mariatta Wijaya > wrote: > >> That would be best solution (I think it would solve > >> https://github.com/python/miss-islington/issues/16 too) but it's more > >> complicated than the extension idea :) I have some time work on it if > >> you'd like to implement the mergebot idea. > > > > > > +1 for the mergebot! :) > > > > New bot or miss-islington's new job? > > > > Still +1 either way, as long as other core devs are fine with it too :) > > I'm not familiar with miss-islington's codebase. Can we reuse some > parts of miss-islington in the new bot? If we can, let's implement it > in miss-islington (perhaps as a new mode?) > I would assume it would just go into miss-islington, but before we get ahead of ourselves and design this we need to get consensus that people like the overall idea of using a bot to do the main commits as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Thu Jan 25 15:06:43 2018 From: steve.dower at python.org (Steve Dower) Date: Fri, 26 Jan 2018 07:06:43 +1100 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: -1 on using magic words in comments rather than the normal UI. Perhaps one of the bots could post a reminder at a time when it makes sense? All checks passed, maybe? Top-posted from my Windows phone From: Brett Cannon Sent: Friday, January 26, 2018 6:01 To: Berker Peksağ Cc: Python Dev Subject: Re: [Python-Dev] GH-NNNN vs #NNNN in merge commit On Thu, 25 Jan 2018 at 10:51 Berker Peksağ wrote: On Thu, Jan 25, 2018 at 9:38 PM, Mariatta Wijaya wrote: >> That would be best solution (I think it would solve >> https://github.com/python/miss-islington/issues/16 too) but it's more >> complicated than the extension idea :) I have some time work on it if >> you'd like to implement the mergebot idea. > > > +1 for the mergebot! :) > > New bot or miss-islington's new job?
> > Still +1 either way, as long as other core devs are fine with it too :) I'm not familiar with miss-islington's codebase. Can we reuse some parts of miss-islington in the new bot? If we can, let's implement it in miss-islington (perhaps as a new mode?) I would assume it would just go into miss-islington, but before we get ahead of ourselves and design this we need to get consensus that people like the overall idea of using a bot to do the main commits as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephane at wirtel.be Thu Jan 25 15:26:21 2018 From: stephane at wirtel.be (Stephane Wirtel) Date: Thu, 25 Jan 2018 21:26:21 +0100 Subject: [Python-Dev] CLion IDE In-Reply-To: References: Message-ID: <20180125202621.GA28176@xps> On 01/24, Steve Holden wrote: >I've just start using CLion from JetBrains, and I wondered if anyone on the >list is using this product in CPython development. Links to any guidance >would be useful. > >regards >Steve Holden Hi Steve, I tried to use it for CPython, but it uses CMake and not the autotools. I have found this repo https://github.com/python-cmake-buildsystem/python-cmake-buildsystem but I have not tested it yet. Stephane -- Stéphane Wirtel - http://wirtel.be - @matrixise From tjreedy at udel.edu Thu Jan 25 15:50:57 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 25 Jan 2018 15:50:57 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On 1/25/2018 1:03 PM, Mariatta Wijaya wrote: > One idea is maybe have a bot to do the squash commit, for example by > commenting on GitHub: > @merge-bot merge > So core devs can do the above instead of pressing the commit button. Any > thoughts on this? I can hardly believe that you are seriously proposing that I should replace a click with a 16 char prefix and then retype the title and message. Did I misunderstand?
-- Terry Jan Reedy From lukasz at langa.pl Thu Jan 25 16:07:42 2018 From: lukasz at langa.pl (Lukasz Langa) Date: Thu, 25 Jan 2018 13:07:42 -0800 Subject: [Python-Dev] Merging the implementation of PEP 563 Message-ID: Hi all, Serhiy looks busy these days. I'd appreciate somebody looking at and hopefully merging https://github.com/python/cpython/pull/4390 . Everything there was reviewed by Serhiy except for the latest commit. This should be ready to merge and maybe tweak in the beta stage. I'd like to avoid merging it myself but I'd really hate missing the deadline. - Ł -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From tjreedy at udel.edu Thu Jan 25 16:09:18 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 25 Jan 2018 16:09:18 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On 1/25/2018 1:53 PM, Brett Cannon wrote: > I would assume it would just go into miss-islington, but before we get > ahead of ourselves and design this we need to get consensus that people > like the overall idea of using a bot to do a main commits as well. I strongly dislike any idea of making me do more error-prone work when merging. Also, pressing the big green button is a distinct action from anything else one does on the page. I am sure this is intentional from a UI design viewpoint, to minimize the possibility of merging by accident. I am personally not so concerned about changing '#' to 'GH-'. The bpo number is already distinguished by the 'bpo-' prefix. But since you are: Suppose that when a PR is created, a bot appended '(GH-nnnn)' to the title. Would GitHub's bot still append '(#nnnn)'? What if it appended '(GH#nnnn)'? Can titles be edited by a bot after a merge?
(You might want this anyway to 'correct' existing merges.) Can we ask GH to make the number prefix a configuration option? -- Terry Jan Reedy From berker.peksag at gmail.com Thu Jan 25 16:22:51 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Fri, 26 Jan 2018 00:22:51 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Thu, Jan 25, 2018 at 11:50 PM, Terry Reedy wrote: > On 1/25/2018 1:03 PM, Mariatta Wijaya wrote: > >> One idea is maybe have a bot to do the squash commit, for example by >> commenting on GitHub: >> @merge-bot merge > > >> So core devs can do the above instead of pressing the commit button. Any >> thoughts on this? > > > I can hardly believe that you are seriously proposing that I should replace > a click with a 16 char prefix and then retype the title and message. Did I > misunderstand? If I understand Mariatta correctly, you can just leave a "@merge-bot merge" comment if you're happy with the commit message. Then the bot itself can replace #NNNN with GH-NNNN, strip noise like "* fix typo" entries from the commit message body, squash the commits, and merge. --Berker From ethan at stoneleaf.us Thu Jan 25 16:30:23 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 25 Jan 2018 13:30:23 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: <5A6A4C6F.5050807@stoneleaf.us> On 01/25/2018 10:53 AM, Brett Cannon wrote: > [...] before we get ahead of ourselves and design this we need to get > consensus that people like the overall idea of using a bot to do a main > commits as well. -1 on using comments for the main commit. -- ~Ethan~ From berker.peksag at gmail.com Thu Jan 25 16:34:50 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Fri, 26 Jan 2018 00:34:50 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Fri, Jan 26, 2018 at 12:09 AM, Terry Reedy wrote: > On 1/25/2018 1:53 PM, Brett Cannon wrote: > >> I would assume it would just go into miss-islington, but before we get >> ahead of ourselves and design this we need to get consensus that people like >> the overall idea of using a bot to do a main commits as well. > > > I strongly dislike any idea of making me do more error-prone work when > merging. The whole point of writing a bot is to automate the steps listed at https://devguide.python.org/gitbootcamp/#accepting-and-merging-a-pull-request and https://devguide.python.org/gitbootcamp/#backporting-merged-changes so you won't have to do a bunch of manual edits before pressing the "Confirm squash and merge" button. --Berker From victor.stinner at gmail.com Thu Jan 25 16:39:10 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 25 Jan 2018 22:39:10 +0100 Subject: [Python-Dev] Merging the implementation of PEP 563 In-Reply-To: References: Message-ID: Hi, If nobody is available to review your PR, I suggest pushing it anyway, to get it merged before the feature freeze. The code can be reviewed later. Merging it sooner gives more time to test it and spot bugs. It also gives more time to fix bugs ;-) Well, in the end, it's up to you. Victor 2018-01-25 22:07 GMT+01:00 Lukasz Langa : > Hi all, > Serhiy looks busy these days. I'd appreciate somebody looking at and > hopefully merging https://github.com/python/cpython/pull/4390. Everything > there was reviewed by Serhiy except for the latest commit. > > This should be ready to merge and maybe tweak in the beta stage. I'd like to > avoid merging it myself but I'd really hate missing the deadline. > > - Ł > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From barry at python.org Thu Jan 25 16:46:38 2018 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Jan 2018 16:46:38 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> On Jan 25, 2018, at 13:38, Mariatta Wijaya wrote: > > +1 for the mergebot! :) Yes, +1 from me too. As you know, GitLab has the option to "merge when CI completes successfully" and it's a great workflow. Once I've reviewed and approved the branch, I can hit this button and... we're done! Assuming of course that no additional commits get pushed (that's actually configurable I think), and that CI doesn't fail. I'd be very happy to see how close we can get to that with GitHub and a mergebot. I don't know that we need a @mergebot mention though. Why not just auto merge if the PR is approved, CI is all green, and no additional commits have been pushed? I suppose the reason would be because in GH, you can't modify the commit message any other way pre-merge. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mariatta.wijaya at gmail.com Thu Jan 25 16:55:08 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 13:55:08 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: > > Why not just auto merge if the PR is approved, CI is all green, and no > additional commits have been pushed? My problem has been that I almost always still need to rewrite the commit message. Especially when someone wrote "fix a typo" or "fix several typos". If it automatically merges, then there's no opportunity to adjust the commit message. So I suggest the option to provide the proper commit message to the mergebot. If not provided, I guess we'll use the GitHub PR title and description. Mariatta Wijaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Jan 25 17:03:34 2018 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Jan 2018 17:03:34 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: <01934ED9-F003-459B-827B-C20C2296879B@python.org> On Jan 25, 2018, at 16:55, Mariatta Wijaya wrote: > My problem has been that I almost always still need to rewrite the commit message. > Especially when someone wrote "fix a typo" or "fix several typos". > > If it automatically merges, then there's no opportunity to adjust the commit message. Right, that's the bit that's different here than the GitLab workflow. In the latter, you can set the commit message when you click on "merge when CI succeeds". > So I suggest the option to provide the proper commit message to the mergebot. > If not provided, I guess we'll use the GitHub PR title and description. I think that'd be fine actually. When I merge things I almost always just delete most of the default commit message anyway. -Barry -------------- next part -------------- A non-text attachment was scrubbed...
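As a footnote to the thread: the rewrite itself, whether done by a bot or a browser extension, is only a few lines of Python. Here is a rough sketch of the #NNNN-to-GH-NNNN normalization under discussion (the helper name and the exact regex are my own assumptions, not code taken from bedevere or miss-islington):

```python
import re

# Hypothetical helper: rewrite bare "#NNNN" PR references to "GH-NNNN".
# The lookbehind leaves existing "GH-NNNN" and "bpo-NNNN" markers untouched
# and avoids matching "#" glued to a preceding word character.
_PR_REF = re.compile(r"(?<![\w-])#(\d+)\b")

def normalize_pr_refs(commit_title: str) -> str:
    """Return the title with GitHub '#NNNN' references namespaced as 'GH-NNNN'."""
    return _PR_REF.sub(r"GH-\1", commit_title)

print(normalize_pr_refs("bpo-42: Fix spam eggs (GH-2341) (#2211)"))
# -> bpo-42: Fix spam eggs (GH-2341) (GH-2211)
```

A bot would still need GitHub API plumbing to apply this to the merge commit title, but the string transformation itself is this small.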
-- ~Ethan~ From berker.peksag at gmail.com Thu Jan 25 16:34:50 2018 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Fri, 26 Jan 2018 00:34:50 +0300 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On Fri, Jan 26, 2018 at 12:09 AM, Terry Reedy wrote: > On 1/25/2018 1:53 PM, Brett Cannon wrote: > >> I would assume it would just go into miss-islington, but before we get >> ahead of ourselves and design this we need to get consensus that people like >> the overall idea of using a bot to do a main commits as well. > > > I strongly dislike any idea of making me do more error-prone work when > merging. The whole point of writing a bot is to automatize the steps listed at https://devguide.python.org/gitbootcamp/#accepting-and-merging-a-pull-request and https://devguide.python.org/gitbootcamp/#backporting-merged-changes so you won't have to do bunch of manual edits before pressing the "Confirm squash and merge" button. --Berker From victor.stinner at gmail.com Thu Jan 25 16:39:10 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 25 Jan 2018 22:39:10 +0100 Subject: [Python-Dev] Merging the implementation of PEP 563 In-Reply-To: References: Message-ID: Hi, If nobody is available to review your PR, I suggest to push it anyway, to get it merged before the feature freeze. The code can be reviewed later. Merging it sooner gives more time to test it and spot bugs. It also gives more time to fix bugs ;-) Well, at the end, it's up to you. Victor 2018-01-25 22:07 GMT+01:00 Lukasz Langa : > Hi all, > Serhiy looks busy these days. I'd appreciate somebody looking at and > hopefully merging https://github.com/python/cpython/pull/4390. Everything > there was reviewed by Serhiy except for the latest commit. > > This should be ready to merge and maybe tweak in the beta stage. I'd like to > avoid merging it myself but I'd really hate missing the deadline. > > - ? 
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From barry at python.org Thu Jan 25 16:46:38 2018 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Jan 2018 16:46:38 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> On Jan 25, 2018, at 13:38, Mariatta Wijaya wrote: > > +1 for the mergebot! :) Yes, +1 from me too. As you know, GitLab has the option to "merge when CI completes successfully" and it's a great workflow. Once I've reviewed and approved the branch, I can hit this button and... we're done! Assuming of course that no additional commits get pushed (that's actually configurable I think), and that CI doesn't fail. I'd be very happy to see how close we can get to that with GitHub and a mergebot. I don't know that we need a @mergebot mention though. Why not just auto merge if the PR is approved, CI is all green, and no additional commits have been pushed? I suppose the reason would be that in GH, you can't modify the commit message any other way pre-merge. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mariatta.wijaya at gmail.com Thu Jan 25 16:55:08 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 13:55:08 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: > > Why not just auto merge if the PR is approved, CI is all green, and no > additional commits have been pushed?
My problem has been that I almost always still need to rewrite the commit message. Especially when someone wrote "fix a typo" or "fix several typos". If it automatically merges, then there's no opportunity to adjust the commit message. So I suggest the option to provide the proper commit message to the mergebot. If not provided, I guess we'll use the GitHub PR title and description. Mariatta Wijaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Jan 25 17:03:34 2018 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Jan 2018 17:03:34 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: <01934ED9-F003-459B-827B-C20C2296879B@python.org> On Jan 25, 2018, at 16:55, Mariatta Wijaya wrote: > My problem has been that I almost always still need to rewrite the commit message. > Especially when someone wrote "fix a typo" or "fix several typos". > > If it automatically merges, then there's no opportunity to adjust the commit message. Right, that's the bit that's different here from the GitLab workflow. In the latter, you can set the commit message when you click on "merge when CI succeeds". > So I suggest the option to provide the proper commit message to the mergebot. > If not provided, I guess we'll use the GitHub PR title and description. I think that'd be fine actually. When I merge things I almost always just delete most of the default commit message anyway. -Barry -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From victor.stinner at gmail.com Thu Jan 25 17:09:30 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 25 Jan 2018 23:09:30 +0100 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: 2018-01-25 22:46 GMT+01:00 Barry Warsaw : > Why not just auto merge if the PR is approved, CI is all green, and no additional commits have been pushed? Merging a PR and approving it should be two different actions. Sometimes, you prefer to wait for 2 approvals before merging. Sometimes, you want to wait until another change is merged. There are many cases. On Gerrit (ex: review.openstack.org), votes and workflow are two different fields. Vote: -2, -1, 0, +1, +2. -2 and +2 votes are reserved for core reviewers. Voting +2 means approving a change. Workflow: -1, 0, +1. -1 means that the change is a work-in-progress. +1 asks to merge the change, once tests pass. Victor From victor.stinner at gmail.com Thu Jan 25 17:19:55 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 25 Jan 2018 23:19:55 +0100 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: 2018-01-25 22:55 GMT+01:00 Mariatta Wijaya : > My problem has been that I almost always still need to rewrite the commit > message. > Especially when someone wrote "fix a typo" or "fix several typos". That's the main drawback of GitHub compared to Gerrit. On Gerrit, the commit message is at the same level as other changes. We can comment on the commit message just as we comment on the change. Note: On Gerrit, it's even possible to edit the commit message online (using the web browser). > If it automatically merges, then there's no opportunity to adjust the commit > message.
Maybe we need a tool to preview the future commit message, and let the developer modify it. For example, a merge request would add a comment to the PR with the future commit message and then ask the developer to validate the commit message, or to override (modify) it. Example: --- Comment 4, dev: "Merge" Comment 5, bot: "bpo-123: Cool fix" Comment 6, dev: "Confirm merge" Comment 7, GitHub: "PR merged, commit xxx" --- > So I suggest the option to provide the proper commit message to the > mergebot. Sometimes, the commit message is just fine and can be used unchanged ;-) > If not provided, I guess we'll use the GitHub PR title and description. Sometimes, I see a typo in the commit message after I push a change. In this case, I quickly amend my commit and push it again. But the PR description keeps my original commit message with the typo. So the PR description is not reliable. It doesn't seem to be used by GitHub to generate the commit message; GitHub combines the commit messages of the PR's commits. Victor From yselivanov.ml at gmail.com Thu Jan 25 17:22:05 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 25 Jan 2018 17:22:05 -0500 Subject: [Python-Dev] Merging the implementation of PEP 563 In-Reply-To: References: Message-ID: I looked at the PR and I think the code is fine. Yury On Thu, Jan 25, 2018 at 4:39 PM, Victor Stinner wrote: > Hi, > > If nobody is available to review your PR, I suggest pushing it anyway, > to get it merged before the feature freeze. The code can be reviewed > later. Merging it sooner gives more time to test it and spot bugs. It > also gives more time to fix bugs ;-) Well, in the end, it's up to you. > > Victor > > 2018-01-25 22:07 GMT+01:00 Lukasz Langa : >> Hi all, >> Serhiy looks busy these days. I'd appreciate somebody looking at and >> hopefully merging https://github.com/python/cpython/pull/4390. Everything >> there was reviewed by Serhiy except for the latest commit.
>> >> This should be ready to merge and maybe tweak in the beta stage. I'd like to >> avoid merging it myself but I'd really hate missing the deadline. >> >> - ? >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com From njs at pobox.com Thu Jan 25 17:50:01 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 25 Jan 2018 14:50:01 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> References: <50A69950-5096-420B-A9E0-030DAD4A6281@python.org> Message-ID: On Thu, Jan 25, 2018 at 1:46 PM, Barry Warsaw wrote: > On Jan 25, 2018, at 13:38, Mariatta Wijaya wrote: >> >> +1 for the mergebot! :) > > Yes, +1 from me too. As you know, GitLab has the option to ?merge when CI completes successfully? and it?s a great workflow. Once I?ve reviewed and approved the branch, I can hit this button and? we?re done! Assuming of course that no additional commits get pushed (that?s actually configurable I think), and that CI doesn?t fail. I?d be very happy to see how close we can get to that with GitHub and a mergebot. You should definitely check out bors-ng: https://bors.tech/ It's designed to support the workflow that projects like Rust and Servo use: you tell the bot that a PR is good to merge, and then it takes over and manages the CI process so as to guarantee that the head of the master branch has always passed all its tests (the "not rocket science rule": https://graydon2.dreamwidth.org/1597.html). 
So not only does this give you the ability to mark a PR as good before the CI has finished for that PR, like in GitLab, it also gives you a few more things: - one problem with the normal way of using Travis is that the tests on a PR branch can get stale: the PR worked when it was submitted, but in the mean time something else got merged to master that breaks it, but no-one notices until after it's merged. With bors-ng, the bot always tests exactly the snapshot that will become the new master, so it catches cases like this before they're merged. - currently, IIUC, the two times we run automatic tests are when a PR is submitted, and after it's merged. And since we can't run non-sandboxed tests like the buildbots automatically on submit, this means we have to run them after merge, which is suboptimal. This adds a new test step between approval and merging, so it gives the option of turning buildbots into a pre-merge check. (Whether this is a good idea or not I don't know, but it seems like a nice option to have?) - if PRs are coming in faster than the CI system can keep up, then it automatically batches up changes to test together, and if the batch fails it uses binary search to automatically figure out which PR or PRs are responsible. So if appveyor gets clogged up, or you enable some slow buildbots, it should still be able to keep things moving. Anyway, I don't know if it's exactly what cpython wants, but it's at the least got some really interesting ideas. -n -- Nathaniel J. Smith -- https://vorpus.org From jjevnik at quantopian.com Thu Jan 25 18:00:56 2018 From: jjevnik at quantopian.com (Joe Jevnik) Date: Thu, 25 Jan 2018 18:00:56 -0500 Subject: [Python-Dev] inconsistency in annotated assigned targets Message-ID: Currently there are many ways to introduce variables in Python; however, only a few allow annotations. I was working on a toy language and chose to base my syntax on Python's when I noticed that I could not annotate a loop iteration variable. 
For example: for x: int in range(5): ... This led me to search for other places where new variables are introduced and I noticed that the `as` target of a context manager cannot have an annotation. In the case of a context manager, it would probably need parenthesis to avoid ambiguity with a single-line with statement, for example: with ctx as (variable: annotation): body Finally, you cannot annotate individual members of a destructuring assignment like: a: int, b: int, c: int = 1, 2, 3 Looking at the grammar, these appear to be `expr` or `exprlist` targets. One change may be to allow arbitrary expressions to have an annotation . This would be a small change to the grammar but would potentially have a large effect on the language or static analysis tools. I am posting on the mailing list to see if this is a real problem, and if so, is it worth investing any time to address it. I would be happy to attempt to fix this, but I don't want to start if people don't want the change. Also, I apologize if this should have gone to python-idea; this feels somewhere between a bug report and implementation question more than a new feature so I wasn't sure which list would be more appropriate. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelle.zijlstra at gmail.com Thu Jan 25 18:17:37 2018 From: jelle.zijlstra at gmail.com (Jelle Zijlstra) Date: Thu, 25 Jan 2018 15:17:37 -0800 Subject: [Python-Dev] inconsistency in annotated assigned targets In-Reply-To: References: Message-ID: 2018-01-25 15:00 GMT-08:00 Joe Jevnik via Python-Dev : > Currently there are many ways to introduce variables in Python; however, > only a few allow annotations. I was working on a toy language and chose to > base my syntax on Python's when I noticed that I could not annotate a loop > iteration variable. For example: > > for x: int in range(5): > ... 
> > This led me to search for other places where new variables are introduced > and I noticed that the `as` target of a context manager cannot have an > annotation. In the case of a context manager, it would probably need > parenthesis to avoid ambiguity with a single-line with statement, for > example: > > with ctx as (variable: annotation): body > > Finally, you cannot annotate individual members of a destructuring > assignment like: > > a: int, b: int, c: int = 1, 2, 3 > > Looking at the grammar, these appear to be `expr` or `exprlist` targets. > One change may be to allow arbitrary expressions to have an annotation . > This would be a small change to the grammar but would potentially have a > large effect on the language or static analysis tools. > > I am posting on the mailing list to see if this is a real problem, and if > so, is it worth investing any time to address it. I would be happy to > attempt to fix this, but I don't want to start if people don't want the > change. Also, I apologize if this should have gone to python-idea; this > feels somewhere between a bug report and implementation question more than > a new feature so I wasn't sure which list would be more appropriate. > I have written a fair amount of code with variable annotations, and I don't remember ever wanting to add annotations in any of the three contexts you mention. In practice, variable annotations are usually needed for class/instance variables and for variables whose type the type checker can't infer. The types of loop iteration variables and context manager assignment targets can almost always be inferred trivially. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > jelle.zijlstra%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tjreedy at udel.edu Thu Jan 25 19:09:12 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 25 Jan 2018 19:09:12 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On 1/25/2018 4:22 PM, Berker Peksağ wrote: > On Thu, Jan 25, 2018 at 11:50 PM, Terry Reedy wrote: >> On 1/25/2018 1:03 PM, Mariatta Wijaya wrote: >> >>> One idea is maybe have a bot to do the squash commit, for example by >>> commenting on GitHub: >>> @merge-bot merge >> >>> So core devs can do the above instead of pressing the commit button. Any >>> thoughts on this? >> >> I can hardly believe that you are seriously proposing that I should replace >> a click with a 16 char prefix and then retype the title and message. Did I >> misunderstand? > > If I understand Mariatta correctly, you can just leave a "@merge-bot > merge" comment if you're happy with the commit message. There is no 'the commit message' until one presses the merge button. GH then cobbles together a proposed squashed merge commit message from the various commit messages. But I am usually not happy with the result. I nearly always do a rewrite that a bot cannot do. This is usually easier than starting from scratch. > Then the bot itself can replace #NNNN with GH-NNNN, in the title. As I said elsewhere, I think we should try to find other ways to do this independent of the commit message. > clean the body of the commit message from commits like "* fix typo", A bot cannot tell which lines should be deleted and which should be incorporated into the message, and if so, how. For instance, the initial PR has no test. Someone adds some, with 'Add tests', resulting in a line '* Add tests'. I typically merge that into a version of the initial commit message as part of producing the final merge message. > squash commits, and merge. As done now.
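A rough sketch of the "cobbling together" described above, assuming GitHub builds the default squash message from the PR title plus one bullet per commit subject. This is an approximation for illustration, not GitHub's actual code, and the PR number and messages are made up:

```python
def default_squash_message(pr_title, pr_number, commit_messages):
    """Approximate GitHub's proposed squash-merge commit message:
    the PR title with "(#NNNN)" appended, then one "* subject"
    bullet per commit (only each commit's first line is used)."""
    head = "%s (#%d)" % (pr_title, pr_number)
    bullets = "\n".join("* " + msg.splitlines()[0] for msg in commit_messages)
    return head + "\n\n" + bullets

print(default_squash_message(
    "Improve the spam module", 1234,
    ["Improve the spam module", "merge from master", "fix typo\n\nlonger body"]))
```

This is why the proposed message so often needs a rewrite: the bullets are raw commit subjects like "* fix typo", with no human judgment applied.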
-- Terry Jan Reedy From mariatta.wijaya at gmail.com Thu Jan 25 19:47:16 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Thu, 25 Jan 2018 16:47:16 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: I think we're starting to deviate from the original topic here which is: please replace # with GH- when you click Squash & Merge button. The idea of the mergebot (by issuing a command) was brought up for a different purpose: to automate the merging of a PR after all CI passes (which can take time) and an approval by a core dev. I still like that idea, if we can figure out a way to supply a commit message we really want, before the bot merges the PR. It might be a separate discussion for core-workflow or python-committers? In my mind, even if we have such a mergebot implemented, core devs can still merge using the UI if they want to. (Remember to replace the # with GH-) Mariatta Wijaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Jan 25 19:59:18 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 25 Jan 2018 19:59:18 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On 1/25/2018 4:34 PM, Berker Peksağ wrote: > On Fri, Jan 26, 2018 at 12:09 AM, Terry Reedy wrote: >> On 1/25/2018 1:53 PM, Brett Cannon wrote: >> >>> I would assume it would just go into miss-islington, but before we get >>> ahead of ourselves and design this we need to get consensus that people like >>> the overall idea of using a bot to do the main commits as well. >> >> >> I strongly dislike any idea of making me do more error-prone work when >> merging. > > The whole point of writing a bot is to automate the steps listed at > https://devguide.python.org/gitbootcamp/#accepting-and-merging-a-pull-request The point is to automate "2. Replace the reference to GitHub pull request #NNNN with GH-NNNN."
which in itself is trivial (and would be welcome). (I am not sure that I ever read this ;-). But it would be harder to automate the next sentence, "If the title is too long, the pull request number can be added to the message body.", and impossible to properly automate "3. Adjust and clean up the commit message." Looking at the example, a bot cannot turn this 'bad' message """ * Improve the spam module * merge from master * adjust code based on review comment * rebased """ into this 'good' message. """ * Add method A to the spam module * Update the documentation of the spam module """ Actually, I think this would be even better with the '* 's deleted and the missing '.'s added. But good message style is another issue. > and https://devguide.python.org/gitbootcamp/#backporting-merged-changes Miss Islington already handles these bad-good differences. > so you won't have to do a bunch of manual edits before pressing the > "Confirm squash and merge" button. Human intelligence is required to write a good commit message, which I try to do. I am aware that some people just commit the concatenated commit messages, as in the bad example above, but I think the best a bot could do is to refuse to merge. An alternate approach would be to ask people to edit the initial commit message into the final version before merging, but I don't think this would be a good idea. -- Terry Jan Reedy From tjreedy at udel.edu Thu Jan 25 20:13:53 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 25 Jan 2018 20:13:53 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: On 1/25/2018 7:47 PM, Mariatta Wijaya wrote: > I think we're starting to deviate from the original topic here which is: > please replace # with GH- when you click Squash & Merge button. I will try to remember to do this, although it seems pointless if most people do not.
> The idea of the mergebot (by issuing a command) was brought up for a > different purpose: to automate the merging of a PR after all CI passes > (which can take time) and an approval by a core dev. > I still like that idea, if we can figure out a way to supply a commit > message we really want, before the bot merges the PR. For backports, the squashed merge commit message is already written, and usually already has the cherry-pick addition. So I suggest starting with backports first. Once a backport mergebot is working and tested, we can work on the original merge message problem. -- Terry Jan Reedy From eric at trueblade.com Thu Jan 25 20:32:41 2018 From: eric at trueblade.com (Eric V. Smith) Date: Thu, 25 Jan 2018 20:32:41 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: Message-ID: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> On 1/25/2018 8:13 PM, Terry Reedy wrote: > On 1/25/2018 7:47 PM, Mariatta Wijaya wrote: >> I think we're starting to deviate from the original topic here which >> is: please replace # with GH- when you click Squash & Merge button. > > I will try to remember to do this, although it seems pointless if most > people do not. I sometimes forget, too. Since changing this in the github workflow seems hard, how about sending an email to the committer after the commit if this mistake is detected? I know that in my case, if I were reminded a few times, I would be better at doing it correctly in the future. Eric. 
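The detection and rewrite step discussed in this sub-thread is purely mechanical; a minimal sketch of what such a check could look like. The regex and function names here are illustrative assumptions, not bedevere's or miss-islington's actual implementation:

```python
import re

# Matches a bare "#1234"-style pull request reference in a commit title,
# which per the devguide should be written "GH-1234" instead.
_BARE_REF = re.compile(r"#(\d+)\b")

def rewrite_refs(title):
    """Rewrite bare #NNNN references in a commit title to GH-NNNN."""
    return _BARE_REF.sub(r"GH-\1", title)

def needs_reminder(title):
    """True if the committer left a bare #NNNN reference in the title."""
    return _BARE_REF.search(title) is not None

print(rewrite_refs("Fix the spam module (#1234)"))
# prints: Fix the spam module (GH-1234)
```

A bot could use `rewrite_refs` when it performs the merge itself, or `needs_reminder` to post the after-the-fact comment Eric suggests.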
From guido at python.org Thu Jan 25 22:14:39 2018 From: guido at python.org (Guido van Rossum) Date: Thu, 25 Jan 2018 19:14:39 -0800 Subject: [Python-Dev] inconsistency in annotated assigned targets In-Reply-To: References: Message-ID: PEP 526 has this in the "Rejected/Postponed Proposals" section: - **Allow annotations in** ``with`` **and** ``for`` **statement:** This was rejected because in ``for`` it would make it hard to spot the actual iterable, and in ``with`` it would confuse the CPython's LL(1) parser. On Thu, Jan 25, 2018 at 3:17 PM, Jelle Zijlstra wrote: > > > 2018-01-25 15:00 GMT-08:00 Joe Jevnik via Python-Dev < > python-dev at python.org>: > >> Currently there are many ways to introduce variables in Python; however, >> only a few allow annotations. I was working on a toy language and chose to >> base my syntax on Python's when I noticed that I could not annotate a loop >> iteration variable. For example: >> >> for x: int in range(5): >> ... >> >> This led me to search for other places where new variables are introduced >> and I noticed that the `as` target of a context manager cannot have an >> annotation. In the case of a context manager, it would probably need >> parenthesis to avoid ambiguity with a single-line with statement, for >> example: >> >> with ctx as (variable: annotation): body >> >> Finally, you cannot annotate individual members of a destructuring >> assignment like: >> >> a: int, b: int, c: int = 1, 2, 3 >> >> Looking at the grammar, these appear to be `expr` or `exprlist` targets. >> One change may be to allow arbitrary expressions to have an annotation . >> This would be a small change to the grammar but would potentially have a >> large effect on the language or static analysis tools. >> >> I am posting on the mailing list to see if this is a real problem, and if >> so, is it worth investing any time to address it. I would be happy to >> attempt to fix this, but I don't want to start if people don't want the >> change. 
Also, I apologize if this should have gone to python-idea; this >> feels somewhere between a bug report and implementation question more than >> a new feature so I wasn't sure which list would be more appropriate. >> > I have written a fair amount of code with variable annotations, and I > don't remember ever wanting to add annotations in any of the three contexts > you mention. In practice, variable annotations are usually needed for > class/instance variables and for variables whose type the type checker > can't infer. The types of loop iteration variables and context manager > assignment targets can almost always be inferred trivially. > > >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle. >> zijlstra%40gmail.com >> >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjevnik at quantopian.com Thu Jan 25 22:20:42 2018 From: jjevnik at quantopian.com (Joe Jevnik) Date: Thu, 25 Jan 2018 22:20:42 -0500 Subject: [Python-Dev] inconsistency in annotated assigned targets In-Reply-To: References: Message-ID: Thank you for the clarification! I should have looked through the PEPs first. On Thu, Jan 25, 2018 at 10:14 PM, Guido van Rossum wrote: > PEP 526 has this in the "Rejected/Postponed Proposals" section: > > - **Allow annotations in** ``with`` **and** ``for`` **statement:** > This was rejected because in ``for`` it would make it hard to spot the > actual > iterable, and in ``with`` it would confuse the CPython's LL(1) parser. 
> > > On Thu, Jan 25, 2018 at 3:17 PM, Jelle Zijlstra > wrote: > >> >> >> 2018-01-25 15:00 GMT-08:00 Joe Jevnik via Python-Dev < >> python-dev at python.org>: >> >>> Currently there are many ways to introduce variables in Python; however, >>> only a few allow annotations. I was working on a toy language and chose to >>> base my syntax on Python's when I noticed that I could not annotate a loop >>> iteration variable. For example: >>> >>> for x: int in range(5): >>> ... >>> >>> This led me to search for other places where new variables are >>> introduced and I noticed that the `as` target of a context manager cannot >>> have an annotation. In the case of a context manager, it would probably >>> need parenthesis to avoid ambiguity with a single-line with statement, for >>> example: >>> >>> with ctx as (variable: annotation): body >>> >>> Finally, you cannot annotate individual members of a destructuring >>> assignment like: >>> >>> a: int, b: int, c: int = 1, 2, 3 >>> >>> Looking at the grammar, these appear to be `expr` or `exprlist` targets. >>> One change may be to allow arbitrary expressions to have an annotation . >>> This would be a small change to the grammar but would potentially have a >>> large effect on the language or static analysis tools. >>> >>> I am posting on the mailing list to see if this is a real problem, and >>> if so, is it worth investing any time to address it. I would be happy to >>> attempt to fix this, but I don't want to start if people don't want the >>> change. Also, I apologize if this should have gone to python-idea; this >>> feels somewhere between a bug report and implementation question more than >>> a new feature so I wasn't sure which list would be more appropriate. >>> >> I have written a fair amount of code with variable annotations, and I >> don't remember ever wanting to add annotations in any of the three contexts >> you mention. 
In practice, variable annotations are usually needed for >> class/instance variables and for variables whose type the type checker >> can't infer. The types of loop iteration variables and context manager >> assignment targets can almost always be inferred trivially. >> >> >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: https://mail.python.org/mailma >>> n/options/python-dev/jelle.zijlstra%40gmail.com >>> >>> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% >> 40python.org >> >> > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Jan 25 22:38:38 2018 From: brett at python.org (Brett Cannon) Date: Fri, 26 Jan 2018 03:38:38 +0000 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> References: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> Message-ID: On Thu, 25 Jan 2018 at 17:33 Eric V. Smith wrote: > On 1/25/2018 8:13 PM, Terry Reedy wrote: > > On 1/25/2018 7:47 PM, Mariatta Wijaya wrote: > >> I think we're starting to deviate from the original topic here which > >> is: please replace # with GH- when you click Squash & Merge button. > > > > I will try to remember to do this, although it seems pointless if most > > people do not. > Just because people are forgetting doesn't mean we shouldn't try to change people's habits for the better. If it continues to be an issue then we can talk about not caring, but for now we should try and spare us future pain like we have had with e.g. 
issue numbers across various issue trackers where you don't know which tracker a number is pointing you towards. And since you have brought it up a couple of times, Terry, we can't automate this reformatting if you press the Merge button, hence the suggestion of a bot to do the final commit. > > I sometimes forget, too. > > Since changing this in the github workflow seems hard, how about sending > an email to the committer after the commit if this mistake is detected? > I know that in my case, if I were reminded a few times, I would be > better at doing it correctly in the future. > https://github.com/python/bedevere/issues/14 Patches welcome :) -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Jan 26 04:00:01 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 26 Jan 2018 04:00:01 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> Message-ID: On 1/25/2018 10:38 PM, Brett Cannon wrote: > > > On Thu, 25 Jan 2018 at 17:33 Eric V. Smith > wrote: > > On 1/25/2018 8:13 PM, Terry Reedy wrote: > > On 1/25/2018 7:47 PM, Mariatta Wijaya wrote: > >> I think we're starting to deviate from the original topic here which > >> is: please replace # with GH- when you click Squash & Merge button. > > > > I will try to remember to do this, although it seems pointless if > most > > people do not. > > > Just because people are forgetting doesn't mean we shouldn't try to > change people's habits for the better. If it continues to be an issue > then we can talk about not caring, but for now we should try and spare > us future pain like we have had with e.g. issue numbers across various > issue trackers where you don't know which tracker a number is pointing > you towards. I have not (yet) suffered such pain so it is not real to me. 
However, I saw Mariatta's patch to issue a reminder, and that convinces me that you all Really Mean It. > And since you have brought it up a couple of times, Terry, we can't > automate this reformatting if you press the Merge button, hence the > suggestion of a bot to do the final commit. An optional bot with clearly defined behavior will be ok. It would be useful when the initial commit message is written to be the merge message and there are either no additional commits or one will be happy with whatever the clearly defined behavior is with regard to additional commit messages. > I sometimes forget, too. > > Since changing this in the github workflow seems hard, how about sending > an email to the committer after the commit if this mistake is detected? > I know that in my case, if I were reminded a few times, I would be > better at doing it correctly in the future. I believe https://github.com/python/bedevere/pull/82 will add a comment, which will get emailed to everyone nosy on the PR. -- Terry Jan Reedy From status at bugs.python.org Fri Jan 26 12:09:53 2018 From: status at bugs.python.org (Python tracker) Date: Fri, 26 Jan 2018 18:09:53 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20180126170953.1AD6611A863@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2018-01-19 - 2018-01-26) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.
Issues counts and deltas: open 6445 (+45) closed 37988 (+39) total 44433 (+84) Open issues with patches: 2512 Issues opened (70) ================== #10381: Add timezone support to datetime C API https://bugs.python.org/issue10381 reopened by vstinner #20767: Some python extensions can't be compiled with clang 3.4 https://bugs.python.org/issue20767 reopened by vstinner #29708: support reproducible Python builds https://bugs.python.org/issue29708 reopened by brett.cannon #32441: os.dup2 should return the new fd https://bugs.python.org/issue32441 reopened by vstinner #32591: Deprecate sys.set_coroutine_wrapper and replace it with more f https://bugs.python.org/issue32591 reopened by yselivanov #32599: Add dtrace hook for PyCFunction_Call https://bugs.python.org/issue32599 opened by fche #32601: PosixPathTest.test_expanduser fails in NixOS build sandbox https://bugs.python.org/issue32601 opened by andersk #32603: Deprecation warning on strings used in re module https://bugs.python.org/issue32603 opened by csabella #32604: Expose the subinterpreters C-API in Python for testing use. https://bugs.python.org/issue32604 opened by eric.snow #32605: Should we really hide unawaited coroutine warnings when an exc https://bugs.python.org/issue32605 opened by njs #32606: Email Header Injection Protection Bypass https://bugs.python.org/issue32606 opened by thedoctorsoup #32608: Incompatibilities with the socketserver and multiprocessing pa https://bugs.python.org/issue32608 opened by rbprogrammer #32609: Add setter and getter for min/max protocol ersion https://bugs.python.org/issue32609 opened by christian.heimes #32610: asyncio.all_tasks() should return only non-finished tasks. 
https://bugs.python.org/issue32610 opened by asvetlov #32611: Tkinter taskbar icon (Windows) https://bugs.python.org/issue32611 opened by Minion Jim #32612: pathlib.(Pure)WindowsPaths can compare equal but refer to diff https://bugs.python.org/issue32612 opened by benrg #32613: Use PEP 397 py launcher in windows faq https://bugs.python.org/issue32613 opened by mdk #32614: Fix documentation examples of using re with escape sequences https://bugs.python.org/issue32614 opened by csabella #32615: Inconsistent behavior if globals is a dict subclass https://bugs.python.org/issue32615 opened by ppperry #32616: Significant performance problems with Python 2.7 built with cl https://bugs.python.org/issue32616 opened by zmwangx #32620: [3.5] Travis CI fails on Python 3.5 with "pyenv: version `3.5' https://bugs.python.org/issue32620 opened by vstinner #32621: Problem of consistence in collection.abc documentation https://bugs.python.org/issue32621 opened by yahya-abou-imran #32622: Implement loop.sendfile https://bugs.python.org/issue32622 opened by asvetlov #32623: Resize dict on del/pop https://bugs.python.org/issue32623 opened by yselivanov #32624: Implement WriteTransport.is_protocol_paused() https://bugs.python.org/issue32624 opened by asvetlov #32625: Update the dis module documentation to reflect switch to wordc https://bugs.python.org/issue32625 opened by belopolsky #32626: Subscript unpacking raises SyntaxError https://bugs.python.org/issue32626 opened by Ben Burrill #32627: Header dependent _uuid build failure on Fedora 27 https://bugs.python.org/issue32627 opened by ncoghlan #32628: Add configurable DirectoryIndex to http.server https://bugs.python.org/issue32628 opened by epaulson #32629: PyImport_ImportModule occasionally cause access violation https://bugs.python.org/issue32629 opened by Jack Branson #32630: Migrate decimal to use PEP 567 context variables https://bugs.python.org/issue32630 opened by yselivanov #32631: IDLE: revise zzdummy.py 
https://bugs.python.org/issue32631 opened by terry.reedy #32637: Android: set sys.platform to android https://bugs.python.org/issue32637 opened by vstinner #32638: distutils test errors with AIX and xlc https://bugs.python.org/issue32638 opened by Michael.Felt #32640: Python 2.7 str.join documentation is incorrect https://bugs.python.org/issue32640 opened by Malcolm Smith #32642: add support for path-like objects in sys.path https://bugs.python.org/issue32642 opened by chris.jerdonek #32644: unittest.mock.call len() error https://bugs.python.org/issue32644 opened by snakevil #32645: test_asyncio: TLS tests fail on "x86 Windows7" buildbot https://bugs.python.org/issue32645 opened by vstinner #32646: test_asyncgen: race condition on test_async_gen_asyncio_gc_acl https://bugs.python.org/issue32646 opened by vstinner #32647: Undefined references when compiling ctypes on binutils 2.29.1 https://bugs.python.org/issue32647 opened by cstratak #32649: complete C API doc debug and profile part with new PyTrace_OPC https://bugs.python.org/issue32649 opened by xiang.zhang #32650: Debug support for native coroutines is broken https://bugs.python.org/issue32650 opened by asvetlov #32652: test_distutils: BuildRpmTestCase tests fail on RHEL buildbots https://bugs.python.org/issue32652 opened by vstinner #32653: AttributeError: 'Task' object has no attribute '_callbacks' https://bugs.python.org/issue32653 opened by Timur Irmatov #32654: Fixes Python for Android API 19 https://bugs.python.org/issue32654 opened by vstinner #32655: File mode should be a constant https://bugs.python.org/issue32655 opened by nagayev #32657: Mutable Objects in SMTP send_message Signature https://bugs.python.org/issue32657 opened by Kenny Trytek #32658: Metacharacter (\) documentation suggestion https://bugs.python.org/issue32658 opened by kdraeder #32659: Solaris "stat" should support "st_fstype" https://bugs.python.org/issue32659 opened by jcea #32660: Solaris should support constants like termios' 
FIONREAD https://bugs.python.org/issue32660 opened by jcea #32661: ProactorEventLoop locks up on close call https://bugs.python.org/issue32661 opened by rt121212121 #32662: Implement Server.serve_forever and corresponding APIs https://bugs.python.org/issue32662 opened by yselivanov #32663: SMTPUTF8SimTests are not actually being run https://bugs.python.org/issue32663 opened by chason.chaffin #32664: Connector "|" missing between ImportError and LookupError https://bugs.python.org/issue32664 opened by Richard Neumann #32665: pathlib.Path._from_parsed_parts should call cls.__new__(cls) https://bugs.python.org/issue32665 opened by qb-cea #32666: Valgrind documentation seems to need updating https://bugs.python.org/issue32666 opened by rrt #32668: deepcopy() fails on ArgumentParser instances https://bugs.python.org/issue32668 opened by mkolman #32669: cgitb file to print OSError exceptions https://bugs.python.org/issue32669 opened by hardkrash #32670: Enforce PEP 479???StopIteration and generators???in Python 3.7 https://bugs.python.org/issue32670 opened by yselivanov #32671: redesign Windows os.getlogin, and add os.getuser https://bugs.python.org/issue32671 opened by eryksun #32672: .then execution of actions following a future's completion https://bugs.python.org/issue32672 opened by dancollins34 #32674: minor documentation fix for '__import__' https://bugs.python.org/issue32674 opened by Qian Yun #32675: dict.__contains__(unhashable) raises TypeError where False was https://bugs.python.org/issue32675 opened by xitop #32676: test_asyncio emits many warnings when run in debug mode https://bugs.python.org/issue32676 opened by vstinner #32677: Add.isascii() to str, bytes and bytearray https://bugs.python.org/issue32677 opened by inada.naoki #32678: Lazy import ast in inspect https://bugs.python.org/issue32678 opened by inada.naoki #32679: concurrent.futures should store full sys.exc_info() https://bugs.python.org/issue32679 opened by jonash #32680: smtplib SMTP 
instances missing a default sock attribute https://bugs.python.org/issue32680 opened by Romuald #32681: Fix uninitialized variable in os_dup2_impl https://bugs.python.org/issue32681 opened by matrixise #32682: test_zlib improve version parsing https://bugs.python.org/issue32682 opened by pmpp Most recent 15 issues with no replies (15) ========================================== #32682: test_zlib improve version parsing https://bugs.python.org/issue32682 #32678: Lazy import ast in inspect https://bugs.python.org/issue32678 #32676: test_asyncio emits many warnings when run in debug mode https://bugs.python.org/issue32676 #32671: redesign Windows os.getlogin, and add os.getuser https://bugs.python.org/issue32671 #32670: Enforce PEP 479???StopIteration and generators???in Python 3.7 https://bugs.python.org/issue32670 #32668: deepcopy() fails on ArgumentParser instances https://bugs.python.org/issue32668 #32666: Valgrind documentation seems to need updating https://bugs.python.org/issue32666 #32664: Connector "|" missing between ImportError and LookupError https://bugs.python.org/issue32664 #32663: SMTPUTF8SimTests are not actually being run https://bugs.python.org/issue32663 #32661: ProactorEventLoop locks up on close call https://bugs.python.org/issue32661 #32659: Solaris "stat" should support "st_fstype" https://bugs.python.org/issue32659 #32658: Metacharacter (\) documentation suggestion https://bugs.python.org/issue32658 #32650: Debug support for native coroutines is broken https://bugs.python.org/issue32650 #32649: complete C API doc debug and profile part with new PyTrace_OPC https://bugs.python.org/issue32649 #32646: test_asyncgen: race condition on test_async_gen_asyncio_gc_acl https://bugs.python.org/issue32646 Most recent 15 issues waiting for review (15) ============================================= #32682: test_zlib improve version parsing https://bugs.python.org/issue32682 #32681: Fix uninitialized variable in os_dup2_impl https://bugs.python.org/issue32681 
#32680: smtplib SMTP instances missing a default sock attribute https://bugs.python.org/issue32680 #32678: Lazy import ast in inspect https://bugs.python.org/issue32678 #32677: Add.isascii() to str, bytes and bytearray https://bugs.python.org/issue32677 #32674: minor documentation fix for '__import__' https://bugs.python.org/issue32674 #32672: .then execution of actions following a future's completion https://bugs.python.org/issue32672 #32670: Enforce PEP 479???StopIteration and generators???in Python 3.7 https://bugs.python.org/issue32670 #32663: SMTPUTF8SimTests are not actually being run https://bugs.python.org/issue32663 #32662: Implement Server.serve_forever and corresponding APIs https://bugs.python.org/issue32662 #32660: Solaris should support constants like termios' FIONREAD https://bugs.python.org/issue32660 #32659: Solaris "stat" should support "st_fstype" https://bugs.python.org/issue32659 #32657: Mutable Objects in SMTP send_message Signature https://bugs.python.org/issue32657 #32654: Fixes Python for Android API 19 https://bugs.python.org/issue32654 #32652: test_distutils: BuildRpmTestCase tests fail on RHEL buildbots https://bugs.python.org/issue32652 Top 10 most discussed issues (10) ================================= #32513: dataclasses: make it easier to use user-supplied special metho https://bugs.python.org/issue32513 16 msgs #32630: Migrate decimal to use PEP 567 context variables https://bugs.python.org/issue32630 11 msgs #27815: Make SSL suppress_ragged_eofs default more secure https://bugs.python.org/issue27815 10 msgs #32436: Implement PEP 567 https://bugs.python.org/issue32436 10 msgs #30491: Add a lightweight mechanism for detecting un-awaited coroutine https://bugs.python.org/issue30491 8 msgs #32591: Deprecate sys.set_coroutine_wrapper and replace it with more f https://bugs.python.org/issue32591 8 msgs #32612: pathlib.(Pure)WindowsPaths can compare equal but refer to diff https://bugs.python.org/issue32612 8 msgs #32623: Resize dict on 
del/pop https://bugs.python.org/issue32623 8 msgs #32654: Fixes Python for Android API 19 https://bugs.python.org/issue32654 8 msgs #32657: Mutable Objects in SMTP send_message Signature https://bugs.python.org/issue32657 8 msgs Issues closed (40) ================== #17799: settrace docs are wrong about "c_call" events https://bugs.python.org/issue17799 closed by xiang.zhang #17882: test_objecttypes fails for 3.2.4 on CentOS 6 https://bugs.python.org/issue17882 closed by bharper #28980: ResourceWarning when imorting antigravity in 3.6 https://bugs.python.org/issue28980 closed by levkivskyi #29302: add contextlib.AsyncExitStack https://bugs.python.org/issue29302 closed by yselivanov #29752: Enum._missing_ not called for __getitem__ failures https://bugs.python.org/issue29752 closed by ethan.furman #31179: Speed-up dict.copy() up to 5.5 times. https://bugs.python.org/issue31179 closed by yselivanov #32028: Syntactically wrong suggestions by the new custom print statem https://bugs.python.org/issue32028 closed by ncoghlan #32030: PEP 432: Rewrite Py_Main() https://bugs.python.org/issue32030 closed by vstinner #32248: Port importlib_resources (module and ABC) to Python 3.7 https://bugs.python.org/issue32248 closed by barry #32391: Add StreamWriter.wait_closed() https://bugs.python.org/issue32391 closed by asvetlov #32410: Implement loop.sock_sendfile method https://bugs.python.org/issue32410 closed by asvetlov #32502: uuid1() fails if only 64-bit interface addresses are available https://bugs.python.org/issue32502 closed by barry #32503: Avoid creating small frames in pickle protocol 4 https://bugs.python.org/issue32503 closed by serhiy.storchaka #32567: Venv???s config file (pyvenv.cfg) should be compatible with Co https://bugs.python.org/issue32567 closed by vinay.sajip #32574: asyncio.Queue, put() leaks memory if the queue is full https://bugs.python.org/issue32574 closed by yselivanov #32589: Statistics as a result from timeit https://bugs.python.org/issue32589 
closed by MGilch #32590: Proposal: add an "ensure(arg)" builtin for parameter validatio https://bugs.python.org/issue32590 closed by ncoghlan #32593: Drop support of FreeBSD 9 and older in Python 3.7 https://bugs.python.org/issue32593 closed by vstinner #32594: File object 'name' attribute inconsistent type and not obvious https://bugs.python.org/issue32594 closed by r.david.murray #32596: Lazy import concurrent.futures.process and thread https://bugs.python.org/issue32596 closed by inada.naoki #32598: Use autoconf to detect OpenSSL and libssl features https://bugs.python.org/issue32598 closed by christian.heimes #32600: SpooledTemporaryFile should implement IOBase https://bugs.python.org/issue32600 closed by martin.panter #32602: Test ECDSA and dual mode context https://bugs.python.org/issue32602 closed by christian.heimes #32607: After Python Installation Error https://bugs.python.org/issue32607 closed by pablogsal #32617: dict.update does not concat/replace if same key https://bugs.python.org/issue32617 closed by eric.smith #32618: fix test_codeccallbacks.test_mutatingdecodehandler https://bugs.python.org/issue32618 closed by xiang.zhang #32619: multiplication error https://bugs.python.org/issue32619 closed by steven.daprano #32632: Mock does not create deepcopy of mutable args https://bugs.python.org/issue32632 closed by michael.foord #32633: Warnings from test_asyncio.test_tasks.SetMethodsTest https://bugs.python.org/issue32633 closed by asvetlov #32634: Message parsing fails where it has incompele headers https://bugs.python.org/issue32634 closed by r.david.murray #32635: test_crypt segfaults when using libxcrypt instead of libcrypt https://bugs.python.org/issue32635 closed by cstratak #32636: test_asyncio fails with PYTHONASYNCIODEBUG=1 https://bugs.python.org/issue32636 closed by yselivanov #32639: Coverity: CID 1428443: Null pointer dereferences (NULL_RETURNS https://bugs.python.org/issue32639 closed by yselivanov #32641: test_context and test_asyncio 
crash on Windows 7 https://bugs.python.org/issue32641 closed by yselivanov #32643: Make Task._step, Task._wakeup and Future._schedule_callback me https://bugs.python.org/issue32643 closed by yselivanov #32648: Wrong byte count with struct https://bugs.python.org/issue32648 closed by christian.heimes #32651: os.getlogin() should recommend getpass.getuser() https://bugs.python.org/issue32651 closed by barry #32656: writing to stdout prints extraneous size character https://bugs.python.org/issue32656 closed by zach.ware #32667: test_subprocess and test_dtrace fails if the last entry of $PA https://bugs.python.org/issue32667 closed by jayyin11043 #32673: update tutorial dict part to reflect dict is ordered https://bugs.python.org/issue32673 closed by xiang.zhang From mariatta.wijaya at gmail.com Fri Jan 26 12:22:11 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Fri, 26 Jan 2018 09:22:11 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> Message-ID: > > I believe https://github.com/python/bedevere/pull/82 will add a comment, > which will get emailed to everyone nosy on the PR. Yes, and I've just updated my PR description: If a PR was merged and the commit message was not changed from #NNNN to GH-NNNN, bedevere-bot will leave a comment to remind the core dev to update it next time. For example: ``` @Mariatta: Please replace # with GH- in the commit message next time. Thanks! ``` Does that work for everyone? Mariatta Wijaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Fri Jan 26 12:48:02 2018 From: eric at trueblade.com (Eric V. 
Smith) Date: Fri, 26 Jan 2018 12:48:02 -0500 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> Message-ID: On 1/26/2018 12:22 PM, Mariatta Wijaya wrote: > I believe https://github.com/python/bedevere/pull/82 > will add a comment, > which will get emailed to everyone nosy on the PR. > > Yes, and I've just updated my PR description: > > If a PR was merged and the commit message was not changed from > #NNNN to GH-NNNN, bedevere-bot will leave a comment to remind the > core dev to update it next time. > For example: > ``` > @Mariatta: Please replace # with GH- in the commit message next time. > Thanks! > ``` > > Does that work for everyone? It works for me: I think this is very helpful. Thanks for coding it up so quickly! Eric From mariatta.wijaya at gmail.com Fri Jan 26 13:00:43 2018 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Fri, 26 Jan 2018 10:00:43 -0800 Subject: [Python-Dev] GH-NNNN vs #NNNN in merge commit In-Reply-To: References: <7c7dbd06-34e3-0836-5636-025c4858baa7@trueblade.com> Message-ID: No problem :) It's been deployed. Mariatta Wijaya On Fri, Jan 26, 2018 at 9:48 AM, Eric V. Smith wrote: > > > It works for me: I think this is very helpful. Thanks for coding it up so > quickly! > > Eric > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From JAyyin11043 at hotmail.com Thu Jan 25 00:47:53 2018 From: JAyyin11043 at hotmail.com (Jay Yin) Date: Thu, 25 Jan 2018 05:47:53 +0000 Subject: [Python-Dev] New Contributor looking for help Message-ID: Hello everyone, I've been trying to build the master branch on Ubuntu 16.04 and it currently fails 2 test, I was wondering if this was normal or if I'm missing dependencies, I also tried apt-get build-dev python3.6 and python3.7 to no avail, the build requirements install worked for python3.5 but I suspect 3.7 has different dependencies but I can't find where the documentation for the requirements are. 2 tests failed: test_dtrace test_subprocess running test_dtrace as verbose gave https://pastebin.com/ZGzzxwjk [https://pastebin.com/i/facebook.png] [Bash] FAILED (errors=4) test test_dtrace failed 1 test failed: test_dtrace R - Pastebin.com pastebin.com and running test_subprocess gives https://pastebin.com/DNjPzpgp [https://pastebin.com/i/facebook.png] [Bash] ----------------------------------------------- Modules/Setup.dist is newer tha - Pastebin.com pastebin.com _______________________________________________________________________________________________________________________ [1488460367159_signiture.jpg] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-1488460367.jpg Type: image/jpeg Size: 31025 bytes Desc: Outlook-1488460367.jpg URL: From victor.stinner at gmail.com Fri Jan 26 16:20:02 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 26 Jan 2018 22:20:02 +0100 Subject: [Python-Dev] New Contributor looking for help In-Reply-To: References: Message-ID: This is https://bugs.python.org/issue32667 which I already fixed with Jay Yin yesterday. Victor Le 26 janv. 
2018 8:27 PM, "Jay Yin" a écrit : > Hello everyone, > > > I've been trying to build the master branch on Ubuntu 16.04 and it > currently fails 2 test, I was wondering if this was normal or if I'm > missing dependencies, I also tried apt-get build-dev python3.6 and > python3.7 to no avail, the build requirements install worked for python3.5 > but I suspect 3.7 has different dependencies but I can't find where the > documentation for the requirements are. > 2 tests failed: > test_dtrace test_subprocess > > > running test_dtrace as verbose gave https://pastebin.com/ZGzzxwjk > > [Bash] FAILED (errors=4) test test_dtrace failed 1 test failed: > test_dtrace R - Pastebin.com > pastebin.com > > > and running test_subprocess gives > > > https://pastebin.com/DNjPzpgp > > [Bash] ----------------------------------------------- Modules/Setup.dist > is newer tha - Pastebin.com > pastebin.com > > > > ____________________________________________________________ > ___________________________________________________________ > [image: 1488460367159_signiture.jpg] > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > victor.stinner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-1488460367.jpg Type: image/jpeg Size: 31025 bytes Desc: not available URL: From senthil at uthcode.com Sat Jan 27 11:58:54 2018 From: senthil at uthcode.com (Senthil Kumaran) Date: Sat, 27 Jan 2018 08:58:54 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 Message-ID: Someone in HackerNews shared the Guido's Python 1.0.0 announcement from 27 Jan 1994. That is, on this day, 20 years ago.
https://groups.google.com/forum/?hl=en#!original/comp.lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ It is very entertaining to read. * Guido was the release manager, which is now taken up by other core-dev volunteers. * Announcement highlighted *readable* syntax. * The announcement takes a dig at Perl and Bash. While Bourne shell is still very relevant and might continue for a long time, we recognize the difference in use cases for Bash and Python. * Documentation was LaTeX and PostScript. * Error-free builds on SGI IRIX 4 and 5, Sun SunOS 4 and Solaris 2, HP-UX, DEC Ultrix and OSF/1, IBM AIX, and SCO ODT 3.0. :-) We no longer have them. * You used WWW viewer to view the documentation and got the files via FTP. Fun times! Cheers to Guido and everyone contributing to Python. Thanks, Senthil -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Sat Jan 27 12:05:46 2018 From: phd at phdru.name (Oleg Broytman) Date: Sat, 27 Jan 2018 18:05:46 +0100 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: Message-ID: <20180127170546.GA28858@phdru.name> On Sat, Jan 27, 2018 at 08:58:54AM -0800, Senthil Kumaran wrote: > Someone in HackerNews shared the Guido's Python 1.0.0 announcement from 27 > Jan 1994. That is, on this day, 20 years ago. 24 years ago, no? (-: > https://groups.google.com/forum/?hl=en#!original/comp.lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ > > > It is very entertaining to read. > > * Guido was the release manager, which is now taken up by other core-dev > volunteers. > > * Announcement highlighted *readable* syntax. > > * The announcement takes a dig at Perl and Bash. While Bourne shell is > still very relevant and might continue for a long time, we recognize the > difference in use cases for Bash and Python. > > * Documentation was LaTeX and PostScript. HTML was not very popular in those times!
:-))) > * Error-free builds on SGI IRIX 4 and 5, Sun SunOS 4 and Solaris 2, HP-UX, > DEC Ultrix and OSF/1, IBM AIX, and SCO ODT 3.0. :-) We no longer have them. We now have Linux, Linux, and Linux. And best of all, Linux! ;-) > * You used WWW viewer to view the documentation and got the files via FTP. > > Fun times! Cheers to Guido and everyone contributing to Python. > > Thanks, > Senthil Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From rosuav at gmail.com Sat Jan 27 12:08:13 2018 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 28 Jan 2018 04:08:13 +1100 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: Message-ID: On Sun, Jan 28, 2018 at 3:58 AM, Senthil Kumaran wrote: > Someone in HackerNews shared the Guido's Python 1.0.0 announcement from 27 > Jan 1994. That is, on this day, 20 years ago. > > https://groups.google.com/forum/?hl=en#!original/comp.lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ > > It is very entertaining to read. Yes, it is. In twenty years, some things have not changed at all: > Python is an interpreted language, and has the usual advantages of > such languages, such as run-time checks (e.g. bounds checking), > execution of dynamically generated code, automatic memory allocation, > high level operations on strings, lists and dictionaries (associative > arrays), and a fast edit-compile-run cycle. Additionally, it features > modules, classes, exceptions, and dynamic linking of extensions > written in C or C++. It has arbitrary precision integers. But some things have: > (Please don't ask me to mail it to you -- at 1.76 Megabytes it is > unwieldy at least...) hehe. Thanks for digging that up! 
ChrisA From mertz at gnosis.cx Sat Jan 27 12:35:41 2018 From: mertz at gnosis.cx (David Mertz) Date: Sat, 27 Jan 2018 09:35:41 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: Message-ID: Does anyone have an archive of the Python 1.0 documentation? Sadly http://www.cwi.nl/~guido/Python.html is not a live URL :-). On Sat, Jan 27, 2018 at 9:08 AM, Chris Angelico wrote: > On Sun, Jan 28, 2018 at 3:58 AM, Senthil Kumaran > wrote: > > Someone in HackerNews shared the Guido's Python 1.0.0 announcement from > 27 > > Jan 1994. That is, on this day, 20 years ago. > > > > https://groups.google.com/forum/?hl=en#!original/comp. > lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ > > > > It is very entertaining to read. > > Yes, it is. In twenty years, some things have not changed at all: > > > Python is an interpreted language, and has the usual advantages of > > such languages, such as run-time checks (e.g. bounds checking), > > execution of dynamically generated code, automatic memory allocation, > > high level operations on strings, lists and dictionaries (associative > > arrays), and a fast edit-compile-run cycle. Additionally, it features > > modules, classes, exceptions, and dynamic linking of extensions > > written in C or C++. It has arbitrary precision integers. > > But some things have: > > > (Please don't ask me to mail it to you -- at 1.76 Megabytes it is > > unwieldy at least...) > > hehe. > > Thanks for digging that up! > > ChrisA > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mertz%40gnosis.cx > -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. 
Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From breamoreboy at gmail.com Sat Jan 27 12:45:43 2018 From: breamoreboy at gmail.com (Mark Lawrence) Date: Sat, 27 Jan 2018 17:45:43 +0000 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: <20180127170546.GA28858@phdru.name> References: <20180127170546.GA28858@phdru.name> Message-ID: On 27/01/18 17:05, Oleg Broytman wrote: > On Sat, Jan 27, 2018 at 08:58:54AM -0800, Senthil Kumaran wrote: >> Someone in HackerNews shared the Guido's Python 1.0.0 announcement from 27 >> Jan 1994. That is, on this day, 20 years ago. > > 24 years ago, no? (-: > Correct so we only have one year to organise the 25th birthday party. The exact time and place for the party will obviously have to be discussed on python-ideas, or do we need a new mailing list? :-) -- My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language. Mark Lawrence From nad at python.org Sat Jan 27 14:04:34 2018 From: nad at python.org (Ned Deily) Date: Sat, 27 Jan 2018 14:04:34 -0500 Subject: [Python-Dev] IMPORTANT: 3.7.0b1 and feature code cutoff 2018-01-29 Message-ID: <44C4C57A-ABBA-4F5F-BB1E-42A46D12FF5E@python.org> Happy mid-winter (northern hemisphere) or -summer (southern)! The time has come to finish feature development for Python 3.7. As previously announced, this coming Monday marks the end of the alpha phase of the release cycle and the beginning of the beta phase. Up through the alpha phase, there has been unrestricted feature development phase; that ends as of beta 1. All feature code for 3.7.0 must be checked in by the b1 cutoff on end-of-day Monday (unless you have contacted me and we have agreed on an extension). As was done during the 3.6 release cycle, we will create the 3.7 branch at b1 time. 
During the beta phase, the emphasis is on fixes for new features, fixes for all categories of bugs and regressions, and documentation fixes/updates. I will send out specific information for core committers next week after the creation of the b1 tag and the 3.7 branch. Beta releases are intended to give the wider community the opportunity to test new features and bug fixes and to prepare their projects to support the new feature release. We strongly encourage maintainers of third-party Python projects to test with 3.7 during the beta phase and report issues found to bugs.python.org as soon as possible. While the release will be feature complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase. Our goal is to have no changes after rc1. To achieve that, it will be extremely important to get as much exposure for 3.7 as possible during the beta phase. Also, during the 3.6.0 release cycle, the question of ABI stability during the final (e.g. beta and release candidate) phases of the release came up. Last-minute changes put a burden on our and our downstream users' testing efforts and add risk to the release. Therefore, as was proposed then, we will strive to have no ABI changes after beta 3. More details forthcoming. To recap: 2018-01-29 ~23:59 Anywhere on Earth (UTC-12:00): code snapshot for 3.7.0 beta 1 (feature code freeze, no new features) 2018-01-30: 3.7 branch opens for 3.7.0; feature development continues on master branch, now for 3.8.0 2018-01-30 to 2018-05-21: 3.7.0 beta phase (bug, regression, and doc fixes, no new features) 2018-03-26: 3.7.0 beta 3 (3.7.0 ABI freeze) 2018-05-21: 3.7.0 release candidate 1 (3.7.0 code freeze) 2018-06-15: 3.7.0 release (3.7.0rc1 plus, if necessary, any dire emergency fixes) ~2019-12 tentative (3.7.0 release + 18 months): 3.8.0 release (details TBD) Thank you all for your great efforts so far on 3.7; it should be another great release!
--Ned https://www.python.org/dev/peps/pep-0537/ -- Ned Deily nad at python.org -- [] From hodgestar+pythondev at gmail.com Sat Jan 27 15:28:52 2018 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 27 Jan 2018 22:28:52 +0200 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> Message-ID: We need a PPP! From phd at phdru.name Sat Jan 27 15:43:56 2018 From: phd at phdru.name (Oleg Broytman) Date: Sat, 27 Jan 2018 21:43:56 +0100 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> Message-ID: <20180127204356.GA6688@phdru.name> On Sat, Jan 27, 2018 at 10:28:52PM +0200, Simon Cross wrote: > We need a PPP! Playful Python Party?! Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From hodgestar+pythondev at gmail.com Sat Jan 27 15:52:00 2018 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 27 Jan 2018 22:52:00 +0200 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: <20180127204356.GA6688@phdru.name> References: <20180127170546.GA28858@phdru.name> <20180127204356.GA6688@phdru.name> Message-ID: Python Party Proposal! From lukasz at langa.pl Sat Jan 27 15:58:01 2018 From: lukasz at langa.pl (Lukasz Langa) Date: Sat, 27 Jan 2018 12:58:01 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> <20180127204356.GA6688@phdru.name> Message-ID: <27103576-D665-48E9-B375-7A796E0696FB@langa.pl> > On 27 Jan, 2018, at 12:52 PM, Simon Cross wrote: > > Python Party Proposal! Oh, that's okay then. For a second there I got reminded of the dreadful days of trying to get dial-up to work on Linux with a winmodem. PPP. Shudder. - Ł -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From barry at python.org Sat Jan 27 16:02:08 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Jan 2018 16:02:08 -0500 Subject: [Python-Dev] =?utf-8?q?Welcome_the_3=2E8_and_3=2E9_Release_Manag?= =?utf-8?q?er_-_=C5=81ukasz_Langa!?= Message-ID: As Ned just announced, Python 3.7 is very soon to enter beta 1 and thus feature freeze. I think we can all give Ned a huge round of applause for his amazing work as Release Manager for Python 3.6 and 3.7. Let's also give him all the support he needs to make 3.7 the best version yet. As is tradition, Python release managers serve for two consecutive releases, and so with the 3.7 release branch about to be made, it's time to announce our release manager for Python 3.8 and 3.9. By unanimous and enthusiastic consent from the Python Secret Underground (PSU, which emphatically does not exist), the Python Cabal of Former and Current Release Managers, Cardinal Ximénez, and of course the BDFL, please welcome your next release manager… Łukasz Langa! And also, happy 24th anniversary to Guido's Python 1.0.0 announcement[1]. It's been a fun and incredible ride, and I firmly believe that Python's best days are ahead of us. Enjoy, -Barry [1] https://groups.google.com/forum/?hl=en#!original/comp.lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ericsnowcurrently at gmail.com Sat Jan 27 16:14:35 2018 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 27 Jan 2018 14:14:35 -0700 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: On Sat, Jan 27, 2018 at 2:02 PM, Barry Warsaw wrote: > please welcome your next release manager…
> > Łukasz Langa! Congrats, Łukasz! (or condolences? ) -eric From guido at python.org Sat Jan 27 16:57:04 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 27 Jan 2018 13:57:04 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> Message-ID: Actually Python was born in December 1989 and first released open source in February 1991. I don't recall what version number that was, perhaps 0.1.0. The 1994 date was just the release of 1.0! On Sat, Jan 27, 2018 at 9:45 AM, Mark Lawrence wrote: > On 27/01/18 17:05, Oleg Broytman wrote: > >> On Sat, Jan 27, 2018 at 08:58:54AM -0800, Senthil Kumaran < >> senthil at uthcode.com> wrote: >> >>> Someone in HackerNews shared the Guido's Python 1.0.0 announcement from >>> 27 >>> Jan 1994. That is, on this day, 20 years ago. >>> >> >> 24 years ago, no? (-: >> >> > Correct so we only have one year to organise the 25th birthday party. The > exact time and place for the party will obviously have to be discussed on > python-ideas, or do we need a new mailing list? :-) > > -- > My fellow Pythonistas, ask not what our language can do for you, ask > what you can do for our language. > > Mark Lawrence > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sat Jan 27 16:07:22 2018 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 27 Jan 2018 16:07:22 -0500 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: That's awesome! A great choice. Congrats, Łukasz. Eric.
On 1/27/2018 4:02 PM, Barry Warsaw wrote: > As Ned just announced, Python 3.7 is very soon to enter beta 1 and thus feature freeze. I think we can all give Ned a huge round of applause for his amazing work as Release Manager for Python 3.6 and 3.7. Let's also give him all the support he needs to make 3.7 the best version yet. > > As is tradition, Python release managers serve for two consecutive releases, and so with the 3.7 release branch about to be made, it's time to announce our release manager for Python 3.8 and 3.9. > > By unanimous and enthusiastic consent from the Python Secret Underground (PSU, which emphatically does not exist), the Python Cabal of Former and Current Release Managers, Cardinal Ximénez, and of course the BDFL, please welcome your next release manager… > > Łukasz Langa! > > And also, happy 24th anniversary to Guido's Python 1.0.0 announcement[1]. It's been a fun and incredible ride, and I firmly believe that Python's best days are ahead of us. > > Enjoy, > -Barry > > [1] https://groups.google.com/forum/?hl=en#!original/comp.lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ > > > > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > Code of Conduct: https://www.python.org/psf/codeofconduct/ > From guido at python.org Sat Jan 27 17:04:54 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 27 Jan 2018 14:04:54 -0800 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: Hardly a surprising choice! Congrats, Łukasz. (And never forget that at every Mac OS X upgrade I have to install the extended keyboard just so I can type that darn ł. :-) On Sat, Jan 27, 2018 at 1:07 PM, Eric V. Smith wrote: > That's awesome! A great choice. Congrats, Łukasz. > > Eric.
> > > On 1/27/2018 4:02 PM, Barry Warsaw wrote: > >> As Ned just announced, Python 3.7 is very soon to enter beta 1 and thus >> feature freeze. I think we can all give Ned a huge round of applause for >> his amazing work as Release Manager for Python 3.6 and 3.7. Let's also >> give him all the support he needs to make 3.7 the best version yet. >> >> As is tradition, Python release managers serve for two consecutive >> releases, and so with the 3.7 release branch about to be made, it's time to >> announce our release manager for Python 3.8 and 3.9. >> >> By unanimous and enthusiastic consent from the Python Secret Underground >> (PSU, which emphatically does not exist), the Python Cabal of Former and >> Current Release Managers, Cardinal Ximénez, and of course the BDFL, please >> welcome your next release manager… >> >> Łukasz Langa! >> >> And also, happy 24th anniversary to Guido's Python 1.0.0 >> announcement[1]. It's been a fun and incredible ride, and I firmly believe >> that Python's best days are ahead of us. >> >> Enjoy, >> -Barry >> >> [1] https://groups.google.com/forum/?hl=en#!original/comp.lang. >> misc/_QUzdEGFwCo/KIFdu0-Dv7sJ >> >> >> >> _______________________________________________ >> python-committers mailing list >> python-committers at python.org >> https://mail.python.org/mailman/listinfo/python-committers >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From barry at python.org Sat Jan 27 17:12:20 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Jan 2018 17:12:20 -0500 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: On Jan 27, 2018, at 17:04, Guido van Rossum > wrote: > > Hardly a surprising choice! Congrats, Łukasz. (And never forget that at every Mac OS X upgrade I have to install the extended keyboard just so I can type that darn ł. :-) Heh, I *just* learned that, at least on macOS High Sierra (and probably going back several releases), on a US keyboard you can press and hold the "L" (cap-L) key. A little popup will appear like the attached image (if this doesn't get stripped by Mailman). Hit "1" and the slashy-L will get entered: ł. Cheers -Barry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-01-27 at 17.09.58.png Type: image/png Size: 19881 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From senthil at uthcode.com Sat Jan 27 17:23:35 2018 From: senthil at uthcode.com (Senthil Kumaran) Date: Sat, 27 Jan 2018 14:23:35 -0800 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: Congrats, Łukasz. And Thank you, Ned, for managing the 3.6 and 3.7 Releases. -- Senthil On Sat, Jan 27, 2018 at 1:02 PM, Barry Warsaw wrote: > As Ned just announced, Python 3.7 is very soon to enter beta 1 and thus > feature freeze. I think we can all give Ned a huge round of applause for > his amazing work as Release Manager for Python 3.6 and 3.7.
Let's also > give him all the support he needs to make 3.7 the best version yet. > > As is tradition, Python release managers serve for two consecutive > releases, and so with the 3.7 release branch about to be made, it's time to > announce our release manager for Python 3.8 and 3.9. > > By unanimous and enthusiastic consent from the Python Secret Underground > (PSU, which emphatically does not exist), the Python Cabal of Former and > Current Release Managers, Cardinal Ximénez, and of course the BDFL, please > welcome your next release manager… > > Łukasz Langa! > > And also, happy 24th anniversary to Guido's Python 1.0.0 announcement[1]. > It's been a fun and incredible ride, and I firmly believe that Python's > best days are ahead of us. > > Enjoy, > -Barry > > [1] https://groups.google.com/forum/?hl=en#!original/comp. > lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ > > > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > Code of Conduct: https://www.python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Jan 27 17:38:47 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 27 Jan 2018 14:38:47 -0800 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: Cool trick! Works on Sierra too. I guess it's all part of Apple's drive to merge iOS and OS X... On Sat, Jan 27, 2018 at 2:12 PM, Barry Warsaw wrote: > On Jan 27, 2018, at 17:04, Guido van Rossum wrote: > > > Hardly a surprising choice! Congrats, Łukasz. (And never forget that at > every Mac OS X upgrade I have to install the extended keyboard just so I > can type that darn ł.
:-) > > Heh, I *just* learned that, at least on macOS High Sierra (and probably > going back several releases), on a US keyboard you can press and hold the > "L" (cap-L) key. A little popup will appear like the attached image (if > this doesn't get stripped by Mailman). Hit "1" and the slashy-L will get > entered: ł. > > Cheers > -Barry > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-01-27 at 17.09.58.png Type: image/png Size: 19881 bytes Desc: not available URL: From elprans at gmail.com Sat Jan 27 18:10:51 2018 From: elprans at gmail.com (Elvis Pranskevichus) Date: Sat, 27 Jan 2018 18:10:51 -0500 Subject: [Python-Dev] =?utf-8?q?=5Bpython-committers=5D_Welcome_the_3=2E8?= =?utf-8?q?_and_3=2E9_Release_Manager_-_=C5=81ukasz_Langa!?= In-Reply-To: References: Message-ID: <1712323.8jBy2cJUTz@hammer.magicstack.net> And on Linux (X11) there's a compose key [1] Compose + / + L = ł You have to map Compose first, as it's not a physical button on modern keyboards: setxkbmap -option compose:ralt [1] https://wiki.archlinux.org/index.php/Keyboard_configuration_in_Xorg#Configuring_compose_key Elvis On Saturday, January 27, 2018 5:38:47 PM EST Guido van Rossum wrote: > Cool trick! Works on Sierra too. I guess it's all part of Apple's > drive to merge iOS and OS X... > > On Sat, Jan 27, 2018 at 2:12 PM, Barry Warsaw wrote: > > On Jan 27, 2018, at 17:04, Guido van Rossum > > wrote: > > > > > > Hardly a surprising choice! Congrats, Łukasz. (And never forget that > > at every Mac OS X upgrade I have to install the extended keyboard > > just so I can type that darn ł. :-) > > > > > > Heh, I *just* learned that, at least on macOS High Sierra (and > > probably going back several releases), on a US keyboard you can > > press and hold the "L" (cap-L) key.
A little popup will appear > > like the attached image (if this doesn't get stripped by Mailman). > > Hit "1" and the slashy-L will get entered: ł. > > > > Cheers > > -Barry From drsalists at gmail.com Sat Jan 27 20:10:31 2018 From: drsalists at gmail.com (Dan Stromberg) Date: Sat, 27 Jan 2018 17:10:31 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> Message-ID: We probably should (if possible) create an archive (with dates) of very old (or all, actually) versions of CPython, analogous to what The Unix Heritage Society does for V5, V7, etc., but for CPython... Or is there one already? I found a bunch of 1.x's, but no 0.x's. What I found was at http://legacy.python.org/download/releases/src/ I realize modern OS's and C compilers won't cope with them anymore, and there'll be some security holes so you wouldn't use them in production, but it'd be an interesting history lesson to set up a matching set for the various releases using virtualboxes or something. I've been getting some mileage, actually, out of: http://stromberg.dnsalias.org/svn/cpythons/trunk/ (build cpythons 2.4 and up, and stash them each in /usr/local/cpython-*) ...and: http://stromberg.dnsalias.org/svn/pythons/trunk/ (run python code on a variety of interpreters to test for compatibility, including a bunch of CPythons, some pypys, jython, micropython, hopefully more someday, like maybe nuitka) It'd be kind of cool to add an authenticated way of running python commands on a remote host to check even older versions. I tried to get "cpythons" to build cpython 2.3 on a modern Linux, but it didn't appear practical. But 2.4 and up have been working well. On Sat, Jan 27, 2018 at 1:57 PM, Guido van Rossum wrote: > Actually Python was born in December 1989 and first released open source in > February 1991. I don't recall what version number that was, perhaps 0.1.0. > The 1994 date was just the release of 1.0!
From lukasz at langa.pl Sat Jan 27 20:19:36 2018 From: lukasz at langa.pl (Lukasz Langa) Date: Sat, 27 Jan 2018 17:19:36 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> Message-ID: <673E14B2-0234-45B9-833B-C3991107FB28@langa.pl> > On 27 Jan, 2018, at 5:10 PM, Dan Stromberg wrote: > > We probably should (if possible) create an archive (with dates) of > very old (or all, actually) versions of CPython, analogous to what The > Unix Heritage Society does for V5, V7, etc., but for CPython... > > Or is there one already? I found a bunch of 1.x's, but no 0.x's. > What I found was at http://legacy.python.org/download/releases/src/ If I remember correctly, Dave Beazley, who went on this particular adventure a few months back, concluded that other releases are lost forever due to FTPs and their mirrors going offline over time. He did find a tarball of 0.9.1 reconstructed by Andrew Dalke from usenet posts. Read on, this is pretty fascinating: https://twitter.com/dabeaz/status/934590421984075776 - Ł -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From guido at python.org Sat Jan 27 21:45:16 2018 From: guido at python.org (Guido van Rossum) Date: Sat, 27 Jan 2018 18:45:16 -0800 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: <673E14B2-0234-45B9-833B-C3991107FB28@langa.pl> References: <20180127170546.GA28858@phdru.name> <673E14B2-0234-45B9-833B-C3991107FB28@langa.pl> Message-ID: David Beazley has also collected various historic releases here: https://github.com/dabeaz/hoppy/tree/master/Ancient -- he's got 0.9.1, 0.9.6, 0.9.7beta1, 0.9.8, 0.9.9, and 1.0.3.
For me personally, the fondest memories are of 1.5.2, which Paul Everitt declared, while we were well into 2.x territory, was still the best Python ever. (I didn't agree, but 1.5.2 did serve us very well for a long time.) On Sat, Jan 27, 2018 at 5:19 PM, Lukasz Langa wrote: > > On 27 Jan, 2018, at 5:10 PM, Dan Stromberg wrote: > > We probably should (if possible) create an archive (with dates) of > very old (or all, actually) versions of CPython, analogous to what The > Unix Heritage Society does for V5, V7, etc., but for CPython... > > Or is there one already? I found a bunch of 1.x's, but no 0.x's. > What I found was at http://legacy.python.org/download/releases/src/ > > > If I remember correctly, Dave Beazley, who went on this particular > adventure a few months back, concluded that other releases are lost forever > due to FTPs and their mirrors going offline over time. He did find a > tarball of 0.9.1 reconstructed by Andrew Dalke from usenet posts. > > Read on, this is pretty fascinating: https://twitter.com/dabeaz/status/ > 934590421984075776 > > - ? > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sat Jan 27 23:20:37 2018 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Jan 2018 23:20:37 -0500 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: References: <20180127170546.GA28858@phdru.name> <673E14B2-0234-45B9-833B-C3991107FB28@langa.pl> Message-ID: <21AD4FCF-AF5F-4E7D-A0CD-6275095EA23F@python.org> On Jan 27, 2018, at 21:45, Guido van Rossum wrote: > > For me personally, the fondest memories are of 1.5.2, which Paul Everitt declared, while we were well into 2.x territory, was still the best Python ever. (I didn't agree, but 1.5.2 did serve us very well for a long time.) What, not the Contractual Obligation release? :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From eric at trueblade.com Sun Jan 28 10:45:49 2018 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 28 Jan 2018 10:45:49 -0500 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: On 1/6/2018 5:13 PM, Eric V. Smith wrote: > On 12/10/2017 5:00 PM, Raymond Hettinger wrote: ... >> 2) Change the default value for "hash" from "None" to "False". This might take a little effort because there is currently an oddity where setting hash=False causes it to be hashable. I'm pretty sure this wasn't intended ;-) >> I haven't looked at this yet.
Eric From sebastian at realpath.org Sun Jan 28 14:16:36 2018 From: sebastian at realpath.org (Sebastian Krause) Date: Sun, 28 Jan 2018 20:16:36 +0100 Subject: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994 In-Reply-To: (Guido van Rossum's message of "Sat, 27 Jan 2018 18:45:16 -0800") References: <20180127170546.GA28858@phdru.name> <673E14B2-0234-45B9-833B-C3991107FB28@langa.pl> Message-ID: Guido van Rossum wrote: > For me personally, the fondest memories are of 1.5.2, which Paul Everitt > declared, while we were well into 2.x territory, was still the best Python > ever. (I didn't agree, but 1.5.2 did serve us very well for a long time.) That makes me feel better about the fact that 1.5.2 was my employer's main Python version until late 2010. :) (We're at 3.5 now.) From victor.stinner at gmail.com Sun Jan 28 18:00:43 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 29 Jan 2018 00:00:43 +0100 Subject: [Python-Dev] Sad buildbots Message-ID: Hi, It seems like the feature freeze is close: while I usually get 2 emails/day at maximum on buildbot-status, I got 14 emails during the weekend: https://mail.python.org/mm3/archives/list/buildbot-status at python.org/ (are all buildbots red? :-p) I will not have the bandwidth to analyze all buildbot failures. Can someone help to investigate all these funny new regressions? http://buildbot.python.org/all/#/builders I would feel safer to cut a release if most buildbots are green again. Victor From nad at python.org Sun Jan 28 18:23:14 2018 From: nad at python.org (Ned Deily) Date: Sun, 28 Jan 2018 18:23:14 -0500 Subject: [Python-Dev] Sad buildbots In-Reply-To: References: Message-ID: On Jan 28, 2018, at 18:00, Victor Stinner wrote: > It seems like the feature freeze is close: while I usually get 2 > emails/day at maximum on buildbot-status, I got 14 emails during the > weekend: > https://mail.python.org/mm3/archives/list/buildbot-status at python.org/ > (are all buildbots red? 
:-p) > > I will not have the bandwidth to analyze all buildbot failures. Can > someone help to investigate all these funny new regressions? > http://buildbot.python.org/all/#/builders > > I would feel safer to cut a release if most buildbots are green again. Never fear, we're *not* going to do a release in such a state. That's one of the reasons we have release managers. :-) Not surprisingly, there has been a *lot* of activity over the last few days as core-developers work on getting features finished prior to the 3.7 feature code freeze coming up at the end of Monday AoE. Some of the intermediate checkins cause some breakages across the board, unfortunately, that have subsequently been addressed. Most of the 3.x stable buildbots are currently green with some builds still going on. But, yeah, please all keep an eye of them especially those of you merging code. Just because the CI tests passed doesn't mean there won't be problems on other platforms and configurations. Thanks for everyone's help so far! We're getting close. -- Ned Deily nad at python.org -- [] From raymond.hettinger at gmail.com Sun Jan 28 20:07:02 2018 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 28 Jan 2018 17:07:02 -0800 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: >>> 2) Change the default value for "hash" from "None" to "False". This might take a little effort because there is currently an oddity where setting hash=False causes it to be hashable. I'm pretty sure this wasn't intended ;-) >> I haven't looked at this yet. > > I think the hashing logic explained in https://bugs.python.org/issue32513#msg310830 is correct. 
It uses hash=None as the default, so that frozen=True objects are hashable, which they would not be if hash=False were the default. Wouldn't it be simpler to make the options orthogonal? Frozen need not imply hashable. I would think if a user wants frozen and hashable, they could just write frozen=True and hashable=True. That would more explicit and clear than just having frozen=True imply that hashability gets turned-on implicitly whether you want it or not. > If there's some case there that you disagree with, I'd be interested in hearing about it. > > That logic is what is currently scheduled to go in to 3.7 beta 1. I have not updated the PEP yet, mostly because it's so difficult to explain. That might be a strong hint that this part of the API needs to be simplified :-) "If the implementation is hard to explain, it's a bad idea." -- Zen If for some reason, dataclasses really do need tri-state logic, it may be better off with enum values (NOT_HASHABLE, VALUE_HASHABLE, IDENTITY_HASHABLE, HASHABLE_IF_FROZEN or some such) rather than with None, True, and False which don't communicate enough information to understand what the decorator is doing. > What's the case where setting hash=False causes it to be hashable? I don't think that was ever the case, and I hope it's not the case now. Python 3.7.0a4+ (heads/master:631fd38dbf, Jan 28 2018, 16:20:11) [GCC 7.2.0] on darwin Type "copyright", "credits" or "license()" for more information. >>> from dataclasses import dataclass >>> @dataclass(hash=False) class A: x: int >>> hash(A(1)) 285969507 I'm hoping that this part of the API gets thought through before it gets set in stone. Since dataclasses code never got a chance to live in the wild (on PyPI or some such), it behooves us to think through all the usability issues. To me at least, the tri-state hashability was entirely unexpected and hard to debug -- I had to do a close reading of the source to figure-out what was happening. 
Raymond From guido at python.org Sun Jan 28 21:08:53 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 28 Jan 2018 18:08:53 -0800 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: I think this is a good candidate for fine-tuning during the beta period. Though honestly Python's own rules for when a class is hashable or not are the root cause for the complexity here -- since we decided to implicitly set __hash__ = None when you define __eq__, it's hardly surprising that dataclasses are having a hard time making natural rules. On Sun, Jan 28, 2018 at 5:07 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > >>> 2) Change the default value for "hash" from "None" to "False". This > might take a little effort because there is currently an oddity where > setting hash=False causes it to be hashable. I'm pretty sure this wasn't > intended ;-) > >> I haven't looked at this yet. > > > > I think the hashing logic explained in https://bugs.python.org/ > issue32513#msg310830 is correct. It uses hash=None as the default, so > that frozen=True objects are hashable, which they would not be if > hash=False were the default. > > Wouldn't it be simpler to make the options orthogonal? Frozen need not > imply hashable. I would think if a user wants frozen and hashable, they > could just write frozen=True and hashable=True. That would more explicit > and clear than just having frozen=True imply that hashability gets > turned-on implicitly whether you want it or not. > > > If there's some case there that you disagree with, I'd be interested in > hearing about it. > > > > That logic is what is currently scheduled to go in to 3.7 beta 1. I have > not updated the PEP yet, mostly because it's so difficult to explain. 
> > That might be a strong hint that this part of the API needs to be > simplified :-) > > "If the implementation is hard to explain, it's a bad idea." -- Zen > > If for some reason, dataclasses really do need tri-state logic, it may be > better off with enum values (NOT_HASHABLE, VALUE_HASHABLE, > IDENTITY_HASHABLE, HASHABLE_IF_FROZEN or some such) rather than with None, > True, and False which don't communicate enough information to understand > what the decorator is doing. > > > What's the case where setting hash=False causes it to be hashable? I > don't think that was ever the case, and I hope it's not the case now. > > Python 3.7.0a4+ (heads/master:631fd38dbf, Jan 28 2018, 16:20:11) > [GCC 7.2.0] on darwin > Type "copyright", "credits" or "license()" for more information. > > >>> from dataclasses import dataclass > >>> @dataclass(hash=False) > class A: > x: int > > >>> hash(A(1)) > 285969507 > > > I'm hoping that this part of the API gets thought through before it gets > set in stone. Since dataclasses code never got a chance to live in the > wild (on PyPI or some such), it behooves us to think through all the > usability issues. To me at least, the tri-state hashability was entirely > unexpected and hard to debug -- I had to do a close reading of the source > to figure-out what was happening. > > > Raymond > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jan 28 23:30:08 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Jan 2018 14:30:08 +1000 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? 
Message-ID: On my current system, "make test" runs in around 3 minutes, while "./python -m test" runs in around 16 minutes. And that's with "make test" actually running more tests (since it enables several of the "-u" options). The difference is that "make test" passes "-j0" and hence not only uses all the available cores in the machines, but will also run other tests while some tests are sleeping. How would folks feel about making "-j 0" the default in the test suite, and then adjusting the handling of "-j 1" to switch back to the current default single process mode? My rationale for that is to improve the default edit-test cycle in local development, while still providing a way to force single-process execution for failure investigation purposes. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guido at python.org Sun Jan 28 23:43:33 2018 From: guido at python.org (Guido van Rossum) Date: Sun, 28 Jan 2018 20:43:33 -0800 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: So why can't you just run "make test" if that's faster? On Sun, Jan 28, 2018 at 8:30 PM, Nick Coghlan wrote: > On my current system, "make test" runs in around 3 minutes, while > "./python -m test" runs in around 16 minutes. And that's with "make > test" actually running more tests (since it enables several of the > "-u" options). > > The difference is that "make test" passes "-j0" and hence not only > uses all the available cores in the machines, but will also run other > tests while some tests are sleeping. > > How would folks feel about making "-j 0" the default in the test > suite, and then adjusted the handling of "-j 1" to switch back to the > current default single process mode? > > My rationale for that is to improve the default edit-test cycle in > local development, while still providing a way to force single-process > execution for failure investigation purposes. > > Cheers, > Nick.
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Mon Jan 29 00:15:21 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 29 Jan 2018 00:15:21 -0500 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: On 1/28/2018 11:43 PM, Guido van Rossum wrote: > So why can't you just run "make test" if that's faster? Not a standard option on Windows ;-). > On Sun, Jan 28, 2018 at 8:30 PM, Nick Coghlan > wrote: > > On my current system, "make test" runs in around 3 minutes, while > "./python -m test" runs in around 16 minutes. And that's with "make > test" actually running more tests (since it enables several of the > "-u" options). > > The difference is that "make test" passes "-j0" and hence not only > uses all the available cores in the machines, but will also run other > tests while some tests are sleeping. > > How would folks feel about making "-j 0" the default in the test > suite, and then adjusted the handling of "-j 1" to switch back to the > current default single process mode? > > My rationale for that is to improve the default edit-test cycle in > local development, while still providing a way to force single-process > execution for failure investigation purposes. I would like this (though I could write a .bat file). I routinely pass -j14 or so on a 6 core x 2 processes/core machine and get about the same times. The speedup would be even better but for the last very long running test. I wish each test file was limited to about 30 seconds, or even a minute. 
-- Terry Jan Reedy From ncoghlan at gmail.com Mon Jan 29 01:03:22 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Jan 2018 16:03:22 +1000 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: On 29 January 2018 at 14:43, Guido van Rossum wrote: > So why can't you just run "make test" if that's faster? I can (and do), but I also run it the other way if I need to pass additional options. I'll then notice that I forgot -j0, ctrl-C out, then run it again with -j0. That's a minor irritation for me, but for folks that don't already know about the -j0 option, they're more likely to just go "CPython's test suite is annoyingly slow". To provide a bit more detail on what I'd suggest we do: * "-j1" would explicitly turn off multiprocessing * "-j0" and "-jN" (N >= 2) would explicitly request multiprocessing and error out if there's a conflicting flag * not setting the flag would be equivalent to "-j0" by default, but "-j1" if a conflicting flag was set The testing options that already explicitly conflict with the multiprocessing option are: * -T (tracing) * -l (leak hunting) "-j1" would likely also be a better default when the verbosity flags are set (since the output is incredibly hard to read if you have multiple verbose tests running in parallel). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From v+python at g.nevcal.com Mon Jan 29 01:04:01 2018 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sun, 28 Jan 2018 22:04:01 -0800 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: On 1/28/2018 9:15 PM, Terry Reedy wrote: > The speedup would be even better but for the last very long running test. Could the last very long running test be started first, instead? (maybe it is, or maybe there are reasons not to) -------------- next part -------------- An HTML attachment was scrubbed... 
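Nick's three bullet points above can be sketched as a tiny helper. Everything here is hypothetical (the function name and flag plumbing are not regrtest's actual API); it just makes the proposed resolution order concrete:

```python
import os

def resolve_jobs(j_flag=None, conflicting_flag=False):
    """Hypothetical sketch of the proposed -j resolution rule.

    j_flag is None when -j was not passed; conflicting_flag stands in
    for options like -T or -l that require single-process mode.
    """
    if j_flag is None:
        # Flag unset: behave like -j0 by default, but fall back to
        # single-process mode if a conflicting flag is set.
        j_flag = 1 if conflicting_flag else 0
    if j_flag == 1:
        return 1  # -j1: explicitly single-process
    if conflicting_flag:
        raise ValueError("-j0/-jN conflicts with -T/-l")
    # -j0 means "one worker per available core"; -jN means exactly N.
    return j_flag if j_flag >= 2 else (os.cpu_count() or 1)
```

Under this rule, plain `./python -m test` would pick up all cores, while `./python -m test -T` would quietly stay single-process rather than erroring out.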
URL: From ncoghlan at gmail.com Mon Jan 29 01:34:37 2018 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Jan 2018 16:34:37 +1000 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: On 29 January 2018 at 12:08, Guido van Rossum wrote: > I think this is a good candidate for fine-tuning during the beta period. > > Though honestly Python's own rules for when a class is hashable or not are > the root cause for the complexity here -- since we decided to implicitly set > __hash__ = None when you define __eq__, it's hardly surprising that > dataclasses are having a hard time making natural rules. In Raymond's example, the problem is the opposite: data classes are currently interpreting "hash=False" as "Don't add a __hash__ implementation" rather than "Make this unhashable". That interpretation isn't equivalent due to object.__hash__ existing by default. (Reviewing Eric's table again, I believe this problem still exists in the 3.7b1 variant as well - I just missed it the first time I read that) I'd say the major argument in favour of Raymond's suggestion (i.e. always requiring an explicit "hash=True" in the dataclass decorator call if you want the result to be hashable) is that even if we *do* come up with a completely consistent derivation rule that the decorator can follow, most *readers* aren't going to know that rule. 
It would become a Python gotcha question for tech interviews: ============= Which of the following class definitions are hashable and what is their hash based on?: @dataclass class A: field: int @dataclass(eq=False) class B: field: int @dataclass(frozen=True) class C: field: int @dataclass(eq=False, frozen=True) class D: field: int @dataclass(eq=True, frozen=True) class E: field: int @dataclass(hash=True) class F: field: int @dataclass(frozen=True, hash=True) class G: field: int @dataclass(eq=True, frozen=True, hash=True) class H: field: int ============= Currently the answers are: - A: not hashable - B: hashable (by identity) # Wat? - C: hashable (by field hash) - D: hashable (by identity) # Wat? - E: hashable (by field hash) - F: hashable (by field hash) - G: hashable (by field hash) - H: hashable (by field hash) If we instead make the default "hash=False" (and interpret that as meaning "Inject __hash__=None"), then you end up with the following much simpler outcome that can be mapped directly to the decorator "hash" parameter: - A: not hashable - B: not hashable - C: not hashable - D: not hashable - E: not hashable - F: hashable (by field hash) - G: hashable (by field hash) - H: hashable (by field hash) Inheritance of __hash__ could then be made explicitly opt-in by way of a "dataclasses.INHERIT" constant. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From yselivanov.ml at gmail.com Mon Jan 29 01:55:06 2018 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 29 Jan 2018 06:55:06 +0000 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: On Mon, Jan 29, 2018 at 1:36 AM Nick Coghlan wrote: > [...] > Currently the answers are: > > - A: not hashable > - B: hashable (by identity) # Wat? 
> - C: hashable (by field hash) > - D: hashable (by identity) # Wat? > - E: hashable (by field hash) > - F: hashable (by field hash) > - G: hashable (by field hash) > - H: hashable (by field hash) This is very convoluted. +1 to make hashability an explicit opt-in. Yury -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Mon Jan 29 02:52:41 2018 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 29 Jan 2018 02:52:41 -0500 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> On 1/29/2018 1:55 AM, Yury Selivanov wrote: > > On Mon, Jan 29, 2018 at 1:36 AM Nick Coghlan > wrote: > > [...] > Currently the answers are: > > - A: not hashable > - B: hashable (by identity) # Wat? > - C: hashable (by field hash) > - D: hashable (by identity) # Wat? > - E: hashable (by field hash) > - F: hashable (by field hash) > - G: hashable (by field hash) > - H: hashable (by field hash) > > > This is very convoluted. > > +1 to make hashability an explicit opt-in. I agree it's complicated. I think it would be a bad design to have to opt-in to hashability if using frozen=True. The point of hash=None (the default) is to try and get the simple cases right with the simplest possible interface. It's the intersection of "have simple defaults, but ways to override them" with "if the user provides some dunder methods, don't make them specify feature=False in order to use them" that complicated things. For example, maybe we no longer need eq=False now that specifying a __eq__ turns off dataclasses's __eq__ generation. Does dataclasses really need a way of using object identity for equality? Eric. 
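The "Wat?" rows trace back to Python's own rule that generating `__eq__` implicitly sets `__hash__ = None`, while `eq=False` leaves `object.__hash__` (identity hashing) in place. On a released Python (3.7+, where the decorator parameter debated here ended up spelled `unsafe_hash` rather than `hash`), the analogous defaults can be checked directly:

```python
from dataclasses import dataclass

@dataclass                # eq=True, frozen=False: __hash__ set to None
class A:
    field: int

@dataclass(eq=False)      # like case B above: no generated __eq__,
class B:                  # so object.__hash__ (identity) survives
    field: int

@dataclass(frozen=True)   # like case C above: eq + frozen ->
class C:                  # __hash__ generated from the fields
    field: int

assert A.__hash__ is None                 # unhashable
assert B.__hash__ is object.__hash__      # hashable by identity ("Wat?")
assert hash(C(1)) == hash(C(1))           # hashable by field values
```

The same three behaviours are exactly what make the gotcha table above so hard to predict from the decorator call alone.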
From ethan at stoneleaf.us Mon Jan 29 03:42:20 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 29 Jan 2018 00:42:20 -0800 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> Message-ID: <5A6EDE6C.4070109@stoneleaf.us> On 01/28/2018 07:45 AM, Eric V. Smith wrote: > On 1/6/2018 5:13 PM, Eric V. Smith wrote: >> On 12/10/2017 5:00 PM, Raymond Hettinger wrote: >>> 2) Change the default value for "hash" from "None" to "False". This might take a little effort because there is >>> currently an oddity where setting hash=False causes it to be hashable. I'm pretty sure this wasn't intended ;-) >> >> I haven't looked at this yet. > > I think the hashing logic explained in https://bugs.python.org/issue32513#msg310830 is correct. It uses hash=None as the > default, so that frozen=True objects are hashable In a class, `__hash__ = None` means the instances are not hashable... but in a dataclass decorator, `hash=None` means they are? -- ~Ethan~ From eric at trueblade.com Mon Jan 29 03:57:04 2018 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 29 Jan 2018 03:57:04 -0500 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: <5A6EDE6C.4070109@stoneleaf.us> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <5A6EDE6C.4070109@stoneleaf.us> Message-ID: On 1/29/2018 3:42 AM, Ethan Furman wrote: > On 01/28/2018 07:45 AM, Eric V. Smith wrote: >> On 1/6/2018 5:13 PM, Eric V. Smith wrote: >>> On 12/10/2017 5:00 PM, Raymond Hettinger wrote: > >>>> 2) Change the default value for "hash" from "None" to "False". 
This >>>> might take a little effort because there is >>>> currently an oddity where setting hash=False causes it to be >>>> hashable.? I'm pretty sure this wasn't intended ;-) >>> >>> I haven't looked at this yet. >> >> I think the hashing logic explained in >> https://bugs.python.org/issue32513#msg310830 is correct. It uses >> hash=None as the >> default, so that frozen=True objects are hashable > > In a class, `__hash__ = None` means the instances are not hashable... > but in a dataclass decorator, `hash=None` means they are? It means "don't add a __hash__ attribute, and rely on the base class value". But maybe it should mean "is not hashable". But in that case, how would we specify the "don't add __hash__" case? Note that "repr=False" means "don't add a __repr__", not "is not repr-able". And "init=False" means "don't add a __init__", not "is not init-able". Eric. From raymond.hettinger at gmail.com Mon Jan 29 04:01:31 2018 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 29 Jan 2018 01:01:31 -0800 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> Message-ID: <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> > On Jan 28, 2018, at 11:52 PM, Eric V. Smith wrote: > > I think it would be a bad design to have to opt-in to hashability if using frozen=True. I respect that you see it that way, but it doesn't make sense to me. You can have either one without the other. It seems to me that it is clearer and more explicit to just say what you want rather than having implicit logic guess at what you meant. Otherwise, when something goes wrong, it is difficult to debug. 
The tooltips for the dataclass decorator are essentially a checklist of features that can be turned on or off. That list of features is mostly easy to use except for hash=None which has three possible values, only one of which is self-evident. We haven't had much in the way of user testing, so it is a significant data point that one of your first users (me) was confounded by this API. I recommend putting various correct and incorrect examples in front of other users (preferably experienced Python programmers) and asking them to predict what the code does based on the source code. Raymond From ethan at stoneleaf.us Mon Jan 29 04:12:21 2018 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 29 Jan 2018 01:12:21 -0800 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <5A6EDE6C.4070109@stoneleaf.us> Message-ID: <5A6EE575.9040304@stoneleaf.us> On 01/29/2018 12:57 AM, Eric V. Smith wrote: > On 1/29/2018 3:42 AM, Ethan Furman wrote: >> On 01/28/2018 07:45 AM, Eric V. Smith wrote: >>> I think the hashing logic explained in https://bugs.python.org/issue32513#msg310830 is correct. It uses hash=None as the >>> default, so that frozen=True objects are hashable >> >> In a class, `__hash__ = None` means the instances are not hashable... but in a dataclass decorator, `hash=None` means >> they are? > > It means "don't add a __hash__ attribute, and rely on the base class value". But maybe it should mean "is not hashable". > But in that case, how would we specify the "don't add __hash__" case? I thought `hash=False` means don't add a __hash__ method. > Note that "repr=False" means "don't add a __repr__", not "is not repr-able". And "init=False" means "don't add a > __init__", not "is not init-able". Yeah, like that.
I get that the default for all (or at least most) of the boring stuff should be "just do it", but I don't think None is the proper place-holder for that. Why not make an `_default = object()` sentinel and use that for the default? At least for __hash__. Then we have:

    hash=False -> don't add one
    hash=None  -> add `__hash__ = None` (is not hashable)
    hash=True  -> add one (the default...

Okay, after writing that down, why don't we have the default value for anything automatically added be True? With True meaning the dataclass should have a custom whatever, and if the programmer did not provide one the decorator will -- it can even be a self-check: if the parameters in the decorator are at odds with the actual class contents (hash=None, but the class has a __hash__ method) then an exception could be raised. -- ~Ethan~ From victor.stinner at gmail.com Mon Jan 29 06:33:14 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 29 Jan 2018 12:33:14 +0100 Subject: [Python-Dev] Sad buildbots In-Reply-To: References: Message-ID: Hi, test_ftplib just failed on my PR, whereas my change couldn't explain the failure. I created https://bugs.python.org/issue32706: "test_check_hostname() of test_ftplib started to fail randomly" Temporary workaround: restart the failed Travis CI job. Victor 2018-01-29 0:23 GMT+01:00 Ned Deily : > On Jan 28, 2018, at 18:00, Victor Stinner wrote: >> It seems like the feature freeze is close: while I usually get 2 >> emails/day at maximum on buildbot-status, I got 14 emails during the >> weekend: >> https://mail.python.org/mm3/archives/list/buildbot-status at python.org/ >> (are all buildbots red? :-p) >> >> I will not have the bandwidth to analyze all buildbot failures. Can >> someone help to investigate all these funny new regressions? >> http://buildbot.python.org/all/#/builders >> >> I would feel safer to cut a release if most buildbots are green again. > > Never fear, we're *not* going to do a release in such a state.
That's one of the reasons we have release managers. :-) > > Not surprisingly, there has been a *lot* of activity over the last few days as core-developers work on getting features finished prior to the 3.7 feature code freeze coming up at the end of Monday AoE. Some of the intermediate checkins caused some breakages across the board, unfortunately, that have subsequently been addressed. Most of the 3.x stable buildbots are currently green with some builds still going on. But, yeah, please all keep an eye on them, especially those of you merging code. Just because the CI tests passed doesn't mean there won't be problems on other platforms and configurations. > > Thanks for everyone's help so far! We're getting close. > > -- > Ned Deily > nad at python.org -- [] > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From eric at trueblade.com Mon Jan 29 08:13:29 2018 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 29 Jan 2018 08:13:29 -0500 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> Message-ID: <312af550-f156-b6c5-57d6-7ccc2f53768d@trueblade.com> On 1/29/2018 4:01 AM, Raymond Hettinger wrote: > > >> On Jan 28, 2018, at 11:52 PM, Eric V. Smith wrote: >> >> I think it would be a bad design to have to opt-in to hashability if using frozen=True. > > I respect that you see it that way, but it doesn't make sense to me. You can have either one without the other.
It seems to me that it is clearer and more explicit to just say what you want rather than having implicit logic guess at what you meant. Otherwise, when something goes wrong, it is difficult to debug. I certainly respect your insights. > The tooltips for the dataclass decorator are essentially of checklist of features that can be turned on or off. That list of features is mostly easy-to-use except for hash=None which has three possible values, only one of which is self-evident. Which is the one that's self-evident? I would think hash=False, correct? The problem is that for repr=, eq=, compare=, you're saying "do or don't add this/these methods, or if true, don't even add it if it's already defined". The same is true for hash=True/False, with the complication of the implicit __hash__ that's added by __eq__. In addition to "do or don't add __hash__", there needs to be a way of setting __hash__=None. The processing of hash=None is trying to guess what sort of __hash__ you want: not set it and just inherit it, generate it based on fields, or set it to None. And if it guesses wrong, based on the fairly simple hash=None rules, you can control it with other values of hash=. Maybe that's the problem. I'm open to ways to express these options. Again, I think losing "do the right thing most of the time without explicitly setting hash=" would be a shame, but not the end of the world. And changing it to "hashable=" isn't quite as simple as it seems, since there's more than one definition of hashable: identity-based or field-based. > We haven't had much in the way of user testing, so it is a significant data point that one of your first users (me) found was confounded by this API. I recommend putting various correct and incorrect examples in front of other users (preferably experienced Python programmers) and asking them to predict what the code does based on the source code. I agree it's sub-optimal, but it's a complex issue. 
What would the interface look like that allowed a programmer to know if an object was hashable based on object identity versus field values? Eric. From guido at python.org Mon Jan 29 11:21:20 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 29 Jan 2018 08:21:20 -0800 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: I was going to argue, but it's not worth it. What you propose is fine. On Sun, Jan 28, 2018 at 10:03 PM, Nick Coghlan wrote: > On 29 January 2018 at 14:43, Guido van Rossum wrote: > > So why can't you just run "make test" if that's faster? > > I can (and do), but I also run it the other way if I need to pass > additional options. I'll then notice that I forgot -j0, ctrl-C out, > then run it again with -j0. > > That's a minor irritation for me, but for folks that don't already > know about the -j0 option, they're more likely to just go "CPython's > test suite is annoyingly slow". > > To provide a bit more detail on what I'd suggest we do: > > * "-j1" would explicitly turn off multiprocessing > * "-j0" and "-jN" (N >= 2) would explicitly request multiprocessing > and error out if there's a conflicting flag > * not setting the flag would be equivalent to "-j0" by default, but > "-j1" if a conflicting flag was set > > The testing options that already explicitly conflict with the > multiprocessing option are: > > * -T (tracing) > * -l (leak hunting) > > "-j1" would likely also be a better default when the verbosity flags > are set (since the output is incredibly hard to read if you have > multiple verbose tests running in parallel). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
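On Eric's question above — how a programmer could tell identity-based hashability apart from field-based — there is no ready-made introspection API, but a rough probe can compare a class's `__hash__` against `object`'s. This is only a sketch with obvious limits (a custom `__hash__` is not necessarily field-based):

```python
def hash_kind(cls):
    """Rough classification of how instances of cls hash (sketch only)."""
    h = getattr(cls, "__hash__", None)
    if h is None:
        return "not hashable"          # __hash__ = None was injected
    if h is object.__hash__:
        return "identity-based"        # default object hashing survives
    return "custom (possibly field-based)"

class Plain:
    pass

class ByValue:
    def __eq__(self, other):
        return isinstance(other, ByValue)
    def __hash__(self):
        return 42
```

Here `hash_kind(Plain)` reports identity-based hashing and `hash_kind(ByValue)` reports a custom hash; the probe cannot distinguish a field-based hash from any other user-defined one.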
URL: From victor.stinner at gmail.com Mon Jan 29 11:39:36 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 29 Jan 2018 17:39:36 +0100 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: > * "-j1" would explicitly turn off multiprocessing Running tests "sequentially" but in one subprocess per test file is interesting for test isolation. Running tests one by one reduces the risk of triggering a race condition (a test only failing when the system load is high). -jN was always documented as "use multiprocessing". Maybe we need a new option to explicitly disable multiprocessing instead? vstinner at apu$ ./python -m test Run tests sequentially vs vstinner at apu$ ./python -m test -j1 Run tests in parallel using 1 child processes By the way, Python 2.7 behaves differently and it's annoying: vstinner at apu$ ./python -m test -j0 Run tests sequentially I'm in favor of modifying Python 2.7 to detect the number of cores for -j0, as Python 3.6 does, and run tests in parallel. Python 3.6: vstinner at apu$ ./python -m test -j0 Run tests in parallel using 10 child processes About the default: running tests sequentially or with -j1 are the two most reliable options. While -j0 is faster, it sometimes triggers race conditions. I'm not sure that it's safe to change that; at least maybe don't do that in stable branches, only in master? Note: Obviously, I'm strongly in favor of fixing all race conditions. I've been doing that for years. We are better today, but we are still not race-condition-free yet. Victor From guido at python.org Mon Jan 29 11:46:43 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 29 Jan 2018 08:46:43 -0800 Subject: [Python-Dev] Is static typing still optional?
In-Reply-To: <312af550-f156-b6c5-57d6-7ccc2f53768d@trueblade.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> <312af550-f156-b6c5-57d6-7ccc2f53768d@trueblade.com> Message-ID: I don't think we're going to reach full agreement here, so I'm going to put my weight behind Eric's rules. I think the benefit of the complicated rules is that they almost always do what you want, so you almost never have to think about it. If it doesn't do what you want, setting hash=False or hash=True is much quicker than trying to understand the rules. But the rules *are* deterministic and reasonable. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.schellart at princeton.edu Mon Jan 29 14:34:42 2018 From: p.schellart at princeton.edu (Pim Schellart) Date: Mon, 29 Jan 2018 19:34:42 +0000 Subject: [Python-Dev] cls for metaclass? Message-ID: <5E3BFAD7-5DB8-4E14-B9E1-3FF4EA74ACA1@princeton.edu> Dear Python developers, PEP 8 says: "Always use self for the first argument to instance methods. Always use cls for the first argument to class methods." But what about metaclasses? PEP 3115 seems to suggest `cls`, and so do many Python books, however tools such as flake8 don't seem to like it. Is there a consensus opinion, and should PEP 8 be updated? Kind regards, Pim Schellart From guido at python.org Mon Jan 29 14:53:50 2018 From: guido at python.org (Guido van Rossum) Date: Mon, 29 Jan 2018 11:53:50 -0800 Subject: [Python-Dev] cls for metaclass? In-Reply-To: <5E3BFAD7-5DB8-4E14-B9E1-3FF4EA74ACA1@princeton.edu> References: <5E3BFAD7-5DB8-4E14-B9E1-3FF4EA74ACA1@princeton.edu> Message-ID: I think it should be `cls` and flake8 etc. should be fixed.
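For reference, the PEP 3115 convention under discussion — naming the metaclass method's first argument `cls` — looks like this in practice:

```python
class Meta(type):
    def __new__(cls, name, bases, namespace):
        # `cls` here is the metaclass itself (Meta), as in PEP 3115's examples
        return super().__new__(cls, name, bases, namespace)

    def describe(cls):
        # In a regular method on a metaclass, the "instance" *is* a class,
        # which is exactly why `cls` (rather than `self`) is the natural name.
        return f"class {cls.__name__}"

class Widget(metaclass=Meta):
    pass
```

Calling `Widget.describe()` binds `cls` to `Widget` itself, the situation the question above is about.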
On Mon, Jan 29, 2018 at 11:34 AM, Pim Schellart wrote: > Dear Python developers, > > PEP 8 says: > > "Always use self for the first argument to instance methods. > > Always use cls for the first argument to class methods." > > But what about metaclasses? > PEP 3115 seems to suggest `cls`, and so do many Python books, however > tools such as flake8 don't seem to like it. > Is there a consensus opinion, and should PEP 8 be updated? > > Kind regards, > > Pim Schellart > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido% > 40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at indelible.org Mon Jan 29 15:32:44 2018 From: jon at indelible.org (Jon Parise) Date: Mon, 29 Jan 2018 12:32:44 -0800 Subject: [Python-Dev] cls for metaclass? In-Reply-To: References: <5E3BFAD7-5DB8-4E14-B9E1-3FF4EA74ACA1@princeton.edu> Message-ID: Coincidentally, I changed this in flake8's pep8-naming plugin about a month ago[1], although the change has not yet made it into a release. [1]: https://github.com/PyCQA/pep8-naming/pull/47 On Mon, Jan 29, 2018 at 11:53 AM, Guido van Rossum wrote: > I think it should be `cls` and flake8 etc. should be fixed. > > On Mon, Jan 29, 2018 at 11:34 AM, Pim Schellart > wrote: > >> Dear Python developers, >> >> PEP 8 says: >> >> "Always use self for the first argument to instance methods. >> >> Always use cls for the first argument to class methods." >> >> But what about metaclasses? >> PEP 3115 seems to suggest `cls`, and so do many Python books, however >> tools such as flake8 don't seem to like it. >> Is there a consensus opinion, and should PEP 8 be updated?
>> >> Kind regards, >> >> Pim Schellart >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40p >> ython.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > jon%40indelible.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Mon Jan 29 18:00:35 2018 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 30 Jan 2018 12:00:35 +1300 Subject: [Python-Dev] Is static typing still optional? In-Reply-To: <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> References: <36710C01-10C0-4B70-8846-C0B0C235C4BC@gmail.com> <460940d5-48cb-4726-7f6f-e6391495f2bd@trueblade.com> <3ECA48D2-90FB-4AED-B87C-251951ABCF7F@gmail.com> <84590026-8321-3661-c63d-6175023c1ec0@trueblade.com> <8a0790e8-4ee3-dc20-6bb1-21314b4d20a0@trueblade.com> <7E2E11C7-861F-4819-A41E-86B213B0862D@gmail.com> Message-ID: <5A6FA793.3010208@canterbury.ac.nz> Raymond Hettinger wrote: > > That list of features is mostly > easy-to-use except for hash=None which has three possible values, only one of > which is self-evident. Maybe the value of the hash option should be an enum with three explicitly-named values. Or maybe there could be a separate "unhashable" boolean flag for the third option. -- Greg From brett at python.org Mon Jan 29 21:16:14 2018 From: brett at python.org (Brett Cannon) Date: Tue, 30 Jan 2018 02:16:14 +0000 Subject: [Python-Dev] Friendly reminder: be kind to one another Message-ID: Over the last 3 days I have had two situations come up where I was asked for my opinion in regards to possible CoC violations. 
I just wanted to take this opportunity to remind everyone that open source does not work if we are not open, considerate, and respectful to one another (which also happens to be the PSF CoC that we are all expected to follow when working on Python). When we stop being kind to each other is when open source falls apart because it drives people away, and for a project that is driven by volunteers like Python that will be what ends this project (not to say people should be rude to corporate open source projects, but they can simply choose to switch to a core dump approach of open source). I gave a talk at PyCascades this past week on setting expectations for open source participation: https://youtu.be/HiWfqMbJ3_8?t=7m24s . I had at least one person who was upset about no one getting to their pull request quickly come up to me afterwards and apologize for ever feeling that way after watching my talk, so do please watch it if you have ever felt angry at an open source maintainer or contributor to help keep things in perspective. I also wanted to say that I think core developers should work extra hard to be kind as we help set the tone for this project which can leak into the broader community. People with commit privileges are not beyond rebuke and so people should never feel they are not justified speaking up when they feel a core developer has been rude to them. Anyway, the key point to remember is that people are what make this project and community work, so please make sure that you do what you can to keep people wanting to participate. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From v+python at g.nevcal.com Mon Jan 29 21:39:31 2018 From: v+python at g.nevcal.com (Glenn Linderman) Date: Mon, 29 Jan 2018 18:39:31 -0800 Subject: [Python-Dev] Friendly reminder: be kind to one another In-Reply-To: References: Message-ID: <96852020-249d-b646-a3f0-1d9353ae0167@g.nevcal.com> On 1/29/2018 6:16 PM, Brett Cannon wrote: > Over the last 3 days I have had two situations come up where I was > asked for my opinion in regards to possible CoC violations. I just > wanted to take this opportunity to remind everyone that open source > does not work if we are not open, considerate, and respectful to one > another (which also happens to be the PSF CoC that we are all expected > to follow when working on Python). When we stop being kind to each > other is when open source falls apart because it drives people away, > and for a project that is driven by volunteers like Python that will > be what ends this project (not to say people should be rude to > corporate open source projects, but they can simply choose to switch > to a core dump approach of open source). > > I gave a talk at PyCascades this past week on setting expectations for > open source participation: https://youtu.be/HiWfqMbJ3_8?t=7m24s . I > had at least one person who was upset about no one getting to their > pull request quickly come up to me afterwards and apologize for ever > feeling that way after watching my talk, so do please watch it if you > have ever felt angry at an open source maintainer or contributor to > help keep things in perspective. > > I also wanted to say that I think core developers should work extra > hard to be kind as we help set the tone for this project which can > leak into the broader community. People with commit privileges are not > beyond rebuke and so people should never feel they are not justified > speaking up when they feel a core developer has been rude to them. 
> > Anyway, the key point to remember is that people are what make this > project and community work, so please make sure that you do what you > can to keep people wanting to participate. Thanks Brett, I'll have to watch that. But even before I do, let me comment that being kind is not something you will have to regret later. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Mon Jan 29 22:06:39 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 29 Jan 2018 22:06:39 -0500 Subject: [Python-Dev] Making "-j0" the default setting for the test suite? In-Reply-To: References: Message-ID: On 1/28/2018 11:30 PM, Nick Coghlan wrote: > On my current system, "make test" runs in around 3 minutes, while > "./python -m test" runs in around 16 minutes. And that's with "make > test" actually running more tests (since it enables several of the > "-u" options). Did you test with the current 3.7.0a+ repository, soon to be 3.7.0b1? For me, recent changes since 3.7.0a4 have greatly reduced the benefit of -j0 (= -j14 on my system). For 3.6.4 (installed), the times are around 13:30 and 2:20, which is about the same ratio as you report. The times for 3.7.0a4 (installed) are about the same for parallel, and would have been for serial if not for a crash. For 3.7.0a4+ recompiled today (debug), the times are around 24:00 and 11:10. Debug slows down by nearly half. The extra slowdown by a factor of more than 2 for parallel is because parallel tests are now blocked (on Windows). Before, a new test was started in one of the 14 processes whenever the test in a process finished. All 12 CPUs were kept busy until fewer than 12 tests remained. Now, 14 new tests are started when the previous 14 finish. Therefore, CPUs wait while the slowest test in a batch of 14 finishes.
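The old-versus-new dispatch behavior described here can be sketched as a toy scheduling model (a hypothetical simulation, not regrtest's actual code; the function names, worker count, and test durations below are invented to mirror the one-straggler-per-batch pattern visible in the logs):

```python
import heapq

def streaming_makespan(durations, workers):
    """Old behaviour: a process pulls the next test as soon as it is free."""
    free = [0.0] * workers           # time at which each worker is next free
    heapq.heapify(free)
    for d in durations:
        start = heapq.heappop(free)  # earliest-free worker takes the test
        heapq.heappush(free, start + d)
    return max(free)

def batched_makespan(durations, workers):
    """New behaviour: start `workers` tests at once, then wait for the
    slowest test in the batch before starting the next batch."""
    total = 0.0
    for i in range(0, len(durations), workers):
        total += max(durations[i:i + workers])
    return total

# Three batches of 14, each containing one 30-second straggler.
durations = ([30.0] + [2.0] * 13) * 3
print(streaming_makespan(durations, 14))  # 34.0
print(batched_makespan(durations, 14))    # 90.0
```

With one straggler per batch, batched dispatch pays the full straggler cost every batch, while streaming dispatch absorbs stragglers by keeping the other workers busy.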
Beginning of old log: Run tests in parallel using 14 child processes 0:00:00 [ 1/413] test__opcode passed 0:00:00 [ 2/413] test__locale passed 0:00:00 [ 3/413] test__osx_support passed 0:00:00 [ 4/413] test_abc passed 0:00:01 [ 5/413] test_abstract_numbers passed 0:00:01 [ 6/413] test_aifc passed 0:00:02 [ 7/413] test_array passed 0:00:02 [ 8/413] test_asdl_parser skipped test_asdl_parser skipped -- test irrelevant for an installed Python 0:00:03 [ 9/413] test_argparse passed 0:00:04 [ 10/413] test_ast passed 0:00:04 [ 11/413] test_asyncgen passed 0:00:05 [ 12/413] test_unittest passed 0:00:06 [ 13/413] test_asynchat passed 0:00:06 [ 14/413] test_atexit passed 0:00:06 [ 15/413] test_audioop passed 0:00:06 [ 16/413] test_augassign passed 0:00:07 [ 17/413] test_asyncore passed 0:00:07 [ 18/413] test_baseexception passed 0:00:07 [ 19/413] test_base64 passed 0:00:07 [ 20/413] test_bigaddrspace passed 0:00:07 [ 21/413] test_bigmem passed 0:00:07 [ 22/413] test_binascii passed 0:00:07 [ 23/413] test_binop passed 0:00:07 [ 24/413] test_binhex passed 0:00:08 [ 25/413] test_bool passed 0:00:08 [ 26/413] test_bisect passed 0:00:10 [ 27/413] test_doctest passed 0:00:10 [ 28/413] test_types passed 0:00:10 [ 29/413] test___future__ passed 0:00:10 [ 30/413] test_dict passed 0:00:10 [ 31/413] test_exceptions passed 0:00:10 [ 32/413] test_support passed 0:00:10 [ 33/413] test_builtin passed 0:00:10 [ 34/413] test_opcodes passed 0:00:10 [ 35/413] test_grammar passed 0:00:10 [ 36/413] test_doctest2 passed 0:00:10 [ 37/413] test___all__ passed 0:00:10 [ 38/413] test_cmath passed 0:00:11 [ 39/413] test_cmd passed 0:00:15 [ 40/413] test_cmd_line passed 0:00:15 [ 41/413] test_buffer passed 0:00:15 [ 42/413] test_code passed 0:00:16 [ 43/413] test_code_module passed 0:00:16 [ 44/413] test_codeccallbacks passed 0:00:16 [ 45/413] test_charmapcodec passed 0:00:16 [ 46/413] test_class passed 0:00:16 [ 47/413] test_capi passed 0:00:16 [ 48/413] test_call passed 0:00:16 [ 49/413] test_bytes 
passed 0:00:16 [ 50/413] test_cgitb passed 0:00:16 [ 51/413] test_calendar passed 0:00:16 [ 52/413] test_c_locale_coercion passed 0:00:16 [ 53/413] test_bz2 passed 0:00:16 [ 54/413] test_cgi passed 0:00:17 [ 55/413] test_codecencodings_cn passed 0:00:17 [ 56/413] test_codecencodings_iso2022 passed 0:00:17 [ 57/413] test_codeop passed 0:00:17 [ 58/413] test_codecencodings_tw passed 0:00:17 [ 59/413] test_codecencodings_hk passed 0:00:17 [ 60/413] test_codecmaps_kr passed 0:00:17 [ 61/413] test_codecmaps_tw passed 0:00:17 [ 62/413] test_codecmaps_cn passed 0:00:17 [ 63/413] test_codecencodings_kr passed 0:00:17 [ 64/413] test_codecmaps_jp passed 0:00:17 [ 65/413] test_codecmaps_hk passed 0:00:17 [ 66/413] test_codecencodings_jp passed 0:00:19 [ 67/413] test_bufio passed 0:00:19 [ 68/413] test_collections passed 0:00:19 [ 69/413] test_contextlib_async passed 0:00:19 [ 70/413] test_copy passed 0:00:19 [ 71/413] test_copyreg passed 0:00:19 [ 72/413] test_coroutines passed 0:00:19 [ 73/413] test_codecs passed 0:00:20 [ 74/413] test_crashers passed 0:00:20 [ 75/413] test_crypt skipped test_crypt skipped -- No module named '_crypt' 0:00:20 [ 76/413] test_cprofile passed 0:00:20 [ 77/413] test_curses skipped (resource denied) test_curses skipped -- Use of the 'curses' resource not enabled 0:00:20 [ 78/413] test_csv passed 0:00:20 [ 79/413] test_dataclasses passed 0:00:21 [ 80/413] test_cmd_line_script passed 0:00:21 [ 81/413] test_dbm passed 0:00:21 [ 82/413] test_dbm_gnu skipped test_dbm_gnu skipped -- No module named '_gdbm' 0:00:21 [ 83/413] test_ctypes passed 0:00:22 [ 84/413] test_dbm_ndbm skipped test_dbm_ndbm skipped -- No module named '_dbm' 0:00:22 [ 85/413] test_decorators passed 0:00:22 [ 86/413] test_datetime passed 0:00:22 [ 87/413] test_defaultdict passed 0:00:23 [ 88/413] test_descr passed 0:00:23 [ 89/413] test_deque passed 0:00:23 [ 90/413] test_descrtut passed 0:00:23 [ 91/413] test_devpoll skipped test_devpoll skipped -- test works only on Solaris OS 
family 0:00:23 [ 92/413] test_dict_version passed 0:00:24 [ 93/413] test_dictcomps passed 0:00:24 [ 94/413] test_dictviews passed 0:00:25 [ 95/413] test_dbm_dumb passed 0:00:25 [ 96/413] test_difflib passed 0:00:25 [ 97/413] test_dis passed 0:00:25 [ 98/413] test_dtrace passed 0:00:25 [ 99/413] test_dummy_thread passed 0:00:26 [100/413] test_dummy_threading passed 0:00:26 [101/413] test_dynamic passed 0:00:26 [102/413] test_dynamicclassattribute passed 0:00:26 [103/413] test_eintr passed 0:00:28 [104/413] test_decimal passed 0:00:28 [105/413] test_embed passed 0:00:29 [106/413] test_ensurepip passed 0:00:29 [107/413] test_enum passed 0:00:30 [108/413] test_enumerate passed 0:00:30 [109/413] test_eof passed 0:00:30 [110/413] test_epoll skipped test_epoll skipped -- test works only on Linux 2.6 0:00:30 [111/413] test_errno passed Beginning of new log: Run tests in parallel using 14 child processes running: test_grammar (30 sec), test_opcodes (30 sec), test_dict (30 sec), test_builtin (30 sec), test_exceptions (30 sec), test_types (30 sec), test_unittest (30 sec), test_doctest (30 sec), test_doctest2 (30 sec), test_support (30 sec), test___all__ (30 sec), test___future__ (30 sec), test__locale (30 sec), test__opcode (30 sec) 0:00:41 [ 1/414] test_support passed -- running: test_grammar (41 sec), test_opcodes (41 sec), test_dict (41 sec), test_builtin (41 sec), test_exceptions (41 sec), test_types (41 sec), test_doctest (41 sec), test___all__ (41 sec), test___future__ (41 sec), test__locale (41 sec), test__opcode (41 sec) 0:00:41 [ 2/414] test_doctest2 passed -- running: test_grammar (41 sec), test___all__ (41 sec), test__locale (41 sec) 0:00:41 [ 3/414] test_unittest passed -- running: test___all__ (41 sec), test__locale (41 sec) 0:00:41 [ 4/414] test__opcode passed 0:00:41 [ 5/414] test_dict passed 0:00:41 [ 6/414] test_types passed 0:00:41 [ 7/414] test___future__ passed 0:00:41 [ 8/414] test_builtin passed 0:00:41 [ 9/414] test_doctest passed 0:00:41 [ 10/414] 
test_opcodes passed 0:00:41 [ 11/414] test_exceptions passed 0:00:41 [ 12/414] test_grammar passed 0:00:41 [ 13/414] test___all__ passed (40 sec) 0:00:41 [ 14/414] test__locale passed Slowest tests took 40 sec, rest waited. [snip list of running tests] 0:01:25 [ 15/414] test_audioop passed 0:01:25 [ 17/414] test_abstract_numbers passed 0:01:25 [ 18/414] test_abc passed 0:01:25 [ 19/414] test_aifc passed 0:01:25 [ 20/414] test_asdl_parser passed 0:01:25 [ 21/414] test_asyncgen passed 0:01:25 [ 22/414] test_atexit passed 0:01:25 [ 23/414] test_asyncio passed (42 sec) [snip output] 0:01:25 [ 25/414] test_ast passed 0:01:25 [ 26/414] test_asynchat passed 0:01:25 [ 27/414] test_array passed 0:01:25 [ 28/414] test__osx_support passed 28 tests done in 85 seconds versus 111 in 30 seconds. I think whatever caused this should be reversed. -- Terry Jan Reedy From solipsis at pitrou.net Tue Jan 30 12:10:34 2018 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jan 2018 18:10:34 +0100 Subject: [Python-Dev] Friendly reminder: be kind to one another References: Message-ID: <20180130181034.49cde406@fsol> This is missing a bit of context. Is this about python-dev? Some other ML? Regards Antoine. On Tue, 30 Jan 2018 02:16:14 +0000 Brett Cannon wrote: > Over the last 3 days I have had two situations come up where I was asked > for my opinion in regards to possible CoC violations. I just wanted to take > this opportunity to remind everyone that open source does not work if we > are not open, considerate, and respectful to one another (which also > happens to be the PSF CoC that we are all expected to follow when working > on Python). 
When we stop being kind to each other is when open source falls > apart because it drives people away, and for a project that is driven by > volunteers like Python that will be what ends this project (not to say > people should be rude to corporate open source projects, but they can > simply choose to switch to a core dump approach of open source). > > I gave a talk at PyCascades this past week on setting expectations for open > source participation: https://youtu.be/HiWfqMbJ3_8?t=7m24s . I had at least > one person who was upset about no one getting to their pull request quickly > come up to me afterwards and apologize for ever feeling that way after > watching my talk, so do please watch it if you have ever felt angry at an > open source maintainer or contributor to help keep things in perspective. > > I also wanted to say that I think core developers should work extra hard to > be kind as we help set the tone for this project which can leak into the > broader community. People with commit privileges are not beyond rebuke and > so people should never feel they are not justified speaking up when they > feel a core developer has been rude to them. > > Anyway, the key point is to remember is that people are what make this > project and community work, so please make sure that you do what you can to > keep people wanting to participate. > From chris.barker at noaa.gov Tue Jan 30 12:42:07 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 30 Jan 2018 09:42:07 -0800 Subject: [Python-Dev] OS-X builds for 3.7.0 Message-ID: Ned, It looks like you're still building OS-X the same way as in the past: Intel 32+64 bit, 10.6 compatibility Is that right? Might it be time for an update? Do we still need to support 32 bit? 
From: https://apple.stackexchange.com/questions/99640/how-old-are-macs-that-cannot-run-64-bit-applications There has not been a 32-bit-only Mac sold since 2006, and an out-of-the-box 32-bit OS since 2006 or 2007 I can't find out what the oldest OS version Apple supports, but I know my IT dept has been making me upgrade, so I'm going to guess 10.8 or newer... And maybe we could even get rid of the "Framework" builds...... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingw.android at gmail.com Tue Jan 30 12:50:32 2018 From: mingw.android at gmail.com (Ray Donnelly) Date: Tue, 30 Jan 2018 17:50:32 +0000 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Tue, Jan 30, 2018 at 5:42 PM, Chris Barker wrote: > Ned, > > It looks like you're still building OS-X the same way as in the past: > > Intel 32+64 bit, 10.6 compatibility > > Is that right? > > Might it be time for an update? > > Do we still need to support 32 bit? From: > > https://apple.stackexchange.com/questions/99640/how-old-are-macs-that-cannot-run-64-bit-applications > > There has not been a 32-bit-only Mac sold since 2006, and an out-of-the-box > 32-bit OS since 2006 or 2007 > > I can't find out what the oldest OS version Apple supports, but I know my IT > dept has been making me upgrade, so I'm going to guess 10.8 or newer... > > And maybe we could even get rid of the "Framework" builds...... While we're making such macOS-build requests, any chance of building a static interpreter too? We've been doing that on the Anaconda Distribution since the 5.0 release in September and it seems to be working well. > > -CHB > > > -- > > Christopher Barker, Ph.D.
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/mingw.android%40gmail.com > From p.f.moore at gmail.com Tue Jan 30 13:00:36 2018 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Jan 2018 18:00:36 +0000 Subject: [Python-Dev] Friendly reminder: be kind to one another In-Reply-To: <20180130181034.49cde406@fsol> References: <20180130181034.49cde406@fsol> Message-ID: On 30 January 2018 at 17:10, Antoine Pitrou wrote: > > This is missing a bit of context. Is this about python-dev? Some > other ML? While I'll admit to being curious about what prompted this, honestly, it's none of my business - and so I think that Brett's posting should be viewed as a general reminder, without any specific context implied. Paul From matt at vazor.com Tue Jan 30 13:08:06 2018 From: matt at vazor.com (Matt Billenstein) Date: Tue, 30 Jan 2018 18:08:06 +0000 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: <0101016148412d02-dcbced7d-149f-4ec8-bbce-26f721c49fda-000000@us-west-2.amazonses.com> On Tue, Jan 30, 2018 at 09:42:07AM -0800, Chris Barker wrote: > IT dept has been making me upgrade, so I"m going to guess 10.8 or newer... OSX is in a sad state linking to system libs on the later releases -- maybe 10.11 and on, not sure of the exact release -- they stopped shipping the headers for things like ssl and ffi since they don't want 3rd parties linking to deprecated versions of those libraries versus, in the case of ssl, their newer security framework. Recommendation is to bundle what you need if you're not using the framework -- something to think about. 
thx m -- Matt Billenstein matt at vazor.com http://www.vazor.com/ From j.orponen at 4teamwork.ch Tue Jan 30 13:21:27 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Tue, 30 Jan 2018 19:21:27 +0100 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Tue, Jan 30, 2018 at 6:42 PM, Chris Barker wrote: > > And maybe we could even get rid of the "Framework" builds...... > Please do not. These make life easier for doing things the Apple way for signed sandboxed applications. Joining the discussion here from a ~cross-post on pythonmac-sig: https://mail.python.org/pipermail/pythonmac-sig/2018-January/024283.html -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.orponen at 4teamwork.ch Tue Jan 30 13:46:19 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Tue, 30 Jan 2018 19:46:19 +0100 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: <0101016148412d02-dcbced7d-149f-4ec8-bbce-26f721c49fda-000000@us-west-2.amazonses.com> References: <0101016148412d02-dcbced7d-149f-4ec8-bbce-26f721c49fda-000000@us-west-2.amazonses.com> Message-ID: On Tue, Jan 30, 2018 at 7:08 PM, Matt Billenstein wrote: > OSX is in a sad state linking to system libs on the later releases -- maybe > 10.11 and on, not sure of the exact release -- they stopped shipping the > headers for things like ssl and ffi since they don't want 3rd parties > linking > to deprecated versions of those libraries versus, in the case of ssl, their > newer security framework. Recommendation is to bundle what you need if > you're > not using the framework -- something to think about. There are also some practical issues with trying to distribute software using some deprecated Cocoa APIs or weak-linked syscalls. The pythonmac-sig thread I linked to earlier has pointers to how to flare those up if one ever needs to distribute Python to a specific macOS version target range while compiling on a newer macOS.
It would be nice to do more things the Apple way, including porting to modern runtime feature availability check cascades of the Cocoa APIs and using the Apple provided system Frameworks. This seems like a rather major workload and should be targeting 3.8. I'm willing to participate in that effort. The availability of syscalls across targets when cross-compiling for an older target is a more generic build system problem and I'm not sure if Python should do anything other than just document it being a thing. I'm personally fine patching pyconfig.h after running the configure script for this special case. As suggested on pythonmac-sig, I'd like to see 10.11 get chosen as the macOS to build on as it provides a decent balance between hardware compatibility and being new(er). -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.orponen at 4teamwork.ch Tue Jan 30 13:47:18 2018 From: j.orponen at 4teamwork.ch (Joni Orponen) Date: Tue, 30 Jan 2018 19:47:18 +0100 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Tue, Jan 30, 2018 at 6:50 PM, Ray Donnelly wrote: > While we're making such macOS-build requests, any chance of building a > static interpreter too? We've been doing that on the Anaconda > Distribution since the 5.0 release in September and it seems to be > working well. > PyPy is also currently eyeing doing their macOS builds better: https://bitbucket.org/pypy/pypy/issues/2734/establish-a-build-and-release-pipeline-for What do the Anaconda static builds get built on? -- Joni Orponen -------------- next part -------------- An HTML attachment was scrubbed... URL: From fw at deneb.enyo.de Tue Jan 30 13:56:40 2018 From: fw at deneb.enyo.de (Florian Weimer) Date: Tue, 30 Jan 2018 19:56:40 +0100 Subject: [Python-Dev] Python 2.7, long double vs allocator alignment, GCC 8 on x86-64 Message-ID: <874ln3w4yf.fsf@mid.deneb.enyo.de> I hope this is the right list for this kind of question. 
We recently tried to build Python 2.6 with GCC 8, and ran into this issue: Also quoting for context: | PyInstance_NewRaw contains this code: | | inst = PyObject_GC_New(PyInstanceObject, &PyInstance_Type); | if (inst == NULL) { | Py_DECREF(dict); | return NULL; | } | inst->in_weakreflist = NULL; | Py_INCREF(klass); | inst->in_class = (PyClassObject *)klass; | inst->in_dict = dict; | _PyObject_GC_TRACK(inst); | | _PyObject_GC_TRACK expands to: | | #define _PyObject_GC_TRACK(o) do { \ | PyGC_Head *g = _Py_AS_GC(o); \ | if (g->gc.gc_refs != _PyGC_REFS_UNTRACKED) \ | Py_FatalError("GC object already tracked"); \ | ... | | Via: | | #define _Py_AS_GC(o) ((PyGC_Head *)(o)-1) | | We get to this: | | /* GC information is stored BEFORE the object structure. */ | typedef union _gc_head { | struct { | union _gc_head *gc_next; | union _gc_head *gc_prev; | Py_ssize_t gc_refs; | } gc; | long double dummy; /* force worst-case alignment */ | } PyGC_Head; | | PyGC_Head has 16-byte alignment. The net result is that | | _PyObject_GC_TRACK(inst); | | promises to the compiler that inst is properly aligned for the | PyGC_Head type, but it is not: PyObject_GC_New returns a pointer which | is only 8-byte-aligned. | | Objects/obmalloc.c contains this: | | /* | * Alignment of addresses returned to the user. 8-bytes alignment works | * on most current architectures (with 32-bit or 64-bit address busses). | * The alignment value is also used for grouping small requests in size | * classes spaced ALIGNMENT bytes apart. | * | * You shouldn't change this unless you know what you are doing. | */ | #define ALIGNMENT 8 /* must be 2^N */ | #define ALIGNMENT_SHIFT 3 | #define ALIGNMENT_MASK (ALIGNMENT - 1) | | So either the allocator alignment needs to be increased, or the | PyGC_Head alignment needs to be decreased. Is this a known issue? As far as I can see, it has not been fixed on the 2.7 branch. (Store merging is a relatively new GCC feature.
Among other things, this means that on x86-64, for sufficiently aligned pointers, vector instructions are used to update multiple struct fields at once. These vector instructions can trigger alignment traps, similar to what happens on some other architectures for scalars.) From brett at python.org Tue Jan 30 14:28:33 2018 From: brett at python.org (Brett Cannon) Date: Tue, 30 Jan 2018 19:28:33 +0000 Subject: [Python-Dev] Friendly reminder: be kind to one another In-Reply-To: References: <20180130181034.49cde406@fsol> Message-ID: On Tue, 30 Jan 2018 at 10:01 Paul Moore wrote: > On 30 January 2018 at 17:10, Antoine Pitrou wrote: > > > > This is missing a bit of context. Is this about python-dev? Some > > other ML? > > While I'll admit to being curious about what prompted this, honestly, > it's none of my business - and so I think that Brett's posting should > be viewed as a general reminder, without any specific context implied. > What Paul said. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Tue Jan 30 16:20:29 2018 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 30 Jan 2018 21:20:29 +0000 Subject: [Python-Dev] Python 2.7, long double vs allocator alignment, GCC 8 on x86-64 In-Reply-To: <874ln3w4yf.fsf@mid.deneb.enyo.de> References: <874ln3w4yf.fsf@mid.deneb.enyo.de> Message-ID: The proper fix for this in the code would likely break ABI compatibility (ie: not possible in python 2.7 or any other stable release). Clang's UBSAN (undefined behavior sanitizer) has been flagging this one for a long time. In Python 3 a double is used instead of long double since 2012 as I did some digging at the time: https://github.com/python/cpython/commit/e348c8d154cf6342c79d627ebfe89dfe9de23817 -gps On Tue, Jan 30, 2018 at 10:59 AM Florian Weimer wrote: > I hope this is the right list for this kind of question. 
We recently > tried to build Python 2.6 with GCC 8, and ran into this issue: > > > > Also quoting for context: > > | PyInstance_NewRaw contains this code: > | > | inst = PyObject_GC_New(PyInstanceObject, &PyInstance_Type); > | if (inst == NULL) { > | Py_DECREF(dict); > | return NULL; > | } > | inst->in_weakreflist = NULL; > | Py_INCREF(klass); > | inst->in_class = (PyClassObject *)klass; > | inst->in_dict = dict; > | _PyObject_GC_TRACK(inst); > | > | _PyObject_GC_TRACK expands to: > | > | #define _PyObject_GC_TRACK(o) do { \ > | PyGC_Head *g = _Py_AS_GC(o); \ > | if (g->gc.gc_refs != _PyGC_REFS_UNTRACKED) \ > | Py_FatalError("GC object already tracked"); \ > | ? > | > | Via: > | > | #define _Py_AS_GC(o) ((PyGC_Head *)(o)-1) > | > | We get to this: > | > | /* GC information is stored BEFORE the object structure. */ > | typedef union _gc_head { > | struct { > | union _gc_head *gc_next; > | union _gc_head *gc_prev; > | Py_ssize_t gc_refs; > | } gc; > | long double dummy; /* force worst-case alignment */ > | } PyGC_Head; > | > | PyGC_Head has 16-byte alignment. The net result is that > | > | _PyObject_GC_TRACK(inst); > | > | promises to the compiler that inst is properly aligned for the > | PyGC_Head type, but it is not: PyObject_GC_New returns a pointer which > | is only 8-byte-aligned. > | > | Objects/obmalloc.c contains this: > | > | /* > | * Alignment of addresses returned to the user. 8-bytes alignment works > | * on most current architectures (with 32-bit or 64-bit address busses). > | * The alignment value is also used for grouping small requests in size > | * classes spaced ALIGNMENT bytes apart. > | * > | * You shouldn't change this unless you know what you are doing. > | */ > | #define ALIGNMENT 8 /* must be 2^N */ > | #define ALIGNMENT_SHIFT 3 > | #define ALIGNMENT_MASK (ALIGNMENT - 1) > | > | So either the allocator alignment needs to be increased, or the > | PyGC_Head alignment needs to be decreased. > > Is this a known issue? 
As far as I can see, it has not been fixed on > the 2.7 branch. > > (Store merging is a relatively new GCC feature. Among other things, > this means that on x86-64, for sufficiently aligned pointers, vector > instructions are used to update multiple struct fields at once. These > vector instructions can trigger alignment traps, similar to what > happens on some other architectures for scalars.) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Tue Jan 30 16:22:17 2018 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 30 Jan 2018 21:22:17 +0000 Subject: [Python-Dev] Python 2.7, long double vs allocator alignment, GCC 8 on x86-64 In-Reply-To: References: <874ln3w4yf.fsf@mid.deneb.enyo.de> Message-ID: I'm curious if changing the obmalloc.c ALIGNMENT and ALIGNMENT_SHIFT defines is sufficient to avoid ABI breakage. -gps On Tue, Jan 30, 2018 at 1:20 PM Gregory P. Smith wrote: > The proper fix for this in the code would likely break ABI compatibility > (ie: not possible in python 2.7 or any other stable release). > > Clang's UBSAN (undefined behavior sanitizer) has been flagging this one > for a long time. > > In Python 3 a double is used instead of long double since 2012 as I did > some digging at the time: > https://github.com/python/cpython/commit/e348c8d154cf6342c79d627ebfe89dfe9de23817 > > -gps > > On Tue, Jan 30, 2018 at 10:59 AM Florian Weimer wrote: > >> I hope this is the right list for this kind of question. 
We recently >> tried to build Python 2.6 with GCC 8, and ran into this issue: >> >> >> >> Also quoting for context: >> >> | PyInstance_NewRaw contains this code: >> | >> | inst = PyObject_GC_New(PyInstanceObject, &PyInstance_Type); >> | if (inst == NULL) { >> | Py_DECREF(dict); >> | return NULL; >> | } >> | inst->in_weakreflist = NULL; >> | Py_INCREF(klass); >> | inst->in_class = (PyClassObject *)klass; >> | inst->in_dict = dict; >> | _PyObject_GC_TRACK(inst); >> | >> | _PyObject_GC_TRACK expands to: >> | >> | #define _PyObject_GC_TRACK(o) do { \ >> | PyGC_Head *g = _Py_AS_GC(o); \ >> | if (g->gc.gc_refs != _PyGC_REFS_UNTRACKED) \ >> | Py_FatalError("GC object already tracked"); \ >> | ? >> | >> | Via: >> | >> | #define _Py_AS_GC(o) ((PyGC_Head *)(o)-1) >> | >> | We get to this: >> | >> | /* GC information is stored BEFORE the object structure. */ >> | typedef union _gc_head { >> | struct { >> | union _gc_head *gc_next; >> | union _gc_head *gc_prev; >> | Py_ssize_t gc_refs; >> | } gc; >> | long double dummy; /* force worst-case alignment */ >> | } PyGC_Head; >> | >> | PyGC_Head has 16-byte alignment. The net result is that >> | >> | _PyObject_GC_TRACK(inst); >> | >> | promises to the compiler that inst is properly aligned for the >> | PyGC_Head type, but it is not: PyObject_GC_New returns a pointer which >> | is only 8-byte-aligned. >> | >> | Objects/obmalloc.c contains this: >> | >> | /* >> | * Alignment of addresses returned to the user. 8-bytes alignment works >> | * on most current architectures (with 32-bit or 64-bit address busses). >> | * The alignment value is also used for grouping small requests in size >> | * classes spaced ALIGNMENT bytes apart. >> | * >> | * You shouldn't change this unless you know what you are doing. 
>> | */ >> | #define ALIGNMENT 8 /* must be 2^N */ >> | #define ALIGNMENT_SHIFT 3 >> | #define ALIGNMENT_MASK (ALIGNMENT - 1) >> | >> | So either the allocator alignment needs to be increased, or the >> | PyGC_Head alignment needs to be decreased. >> >> Is this a known issue? As far as I can see, it has not been fixed on >> the 2.7 branch. >> >> (Store merging is a relatively new GCC feature. Among other things, >> this means that on x86-64, for sufficiently aligned pointers, vector >> instructions are used to update multiple struct fields at once. These >> vector instructions can trigger alignment traps, similar to what >> happens on some other architectures for scalars.) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Jan 30 16:44:36 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 30 Jan 2018 22:44:36 +0100 Subject: [Python-Dev] Python 2.7, long double vs allocator alignment, GCC 8 on x86-64 In-Reply-To: <874ln3w4yf.fsf@mid.deneb.enyo.de> References: <874ln3w4yf.fsf@mid.deneb.enyo.de> Message-ID: See https://bugs.python.org/issue31912 and https://bugs.python.org/issue27987 Victor 2018-01-30 19:56 GMT+01:00 Florian Weimer : > I hope this is the right list for this kind of question. 
We recently
> tried to build Python 2.6 with GCC 8, and ran into this issue:
>
>
>
> Also quoting for context:
>
> | PyInstance_NewRaw contains this code:
> |
> |     inst = PyObject_GC_New(PyInstanceObject, &PyInstance_Type);
> |     if (inst == NULL) {
> |         Py_DECREF(dict);
> |         return NULL;
> |     }
> |     inst->in_weakreflist = NULL;
> |     Py_INCREF(klass);
> |     inst->in_class = (PyClassObject *)klass;
> |     inst->in_dict = dict;
> |     _PyObject_GC_TRACK(inst);
> |
> | _PyObject_GC_TRACK expands to:
> |
> | #define _PyObject_GC_TRACK(o) do { \
> |     PyGC_Head *g = _Py_AS_GC(o); \
> |     if (g->gc.gc_refs != _PyGC_REFS_UNTRACKED) \
> |         Py_FatalError("GC object already tracked"); \
> |     ...
> |
> | Via:
> |
> | #define _Py_AS_GC(o) ((PyGC_Head *)(o)-1)
> |
> | We get to this:
> |
> | /* GC information is stored BEFORE the object structure. */
> | typedef union _gc_head {
> |     struct {
> |         union _gc_head *gc_next;
> |         union _gc_head *gc_prev;
> |         Py_ssize_t gc_refs;
> |     } gc;
> |     long double dummy; /* force worst-case alignment */
> | } PyGC_Head;
> |
> | PyGC_Head has 16-byte alignment. The net result is that
> |
> |     _PyObject_GC_TRACK(inst);
> |
> | promises to the compiler that inst is properly aligned for the
> | PyGC_Head type, but it is not: PyObject_GC_New returns a pointer which
> | is only 8-byte-aligned.
> |
> | Objects/obmalloc.c contains this:
> |
> | /*
> |  * Alignment of addresses returned to the user. 8-bytes alignment works
> |  * on most current architectures (with 32-bit or 64-bit address busses).
> |  * The alignment value is also used for grouping small requests in size
> |  * classes spaced ALIGNMENT bytes apart.
> |  *
> |  * You shouldn't change this unless you know what you are doing.
> |  */
> | #define ALIGNMENT       8   /* must be 2^N */
> | #define ALIGNMENT_SHIFT 3
> | #define ALIGNMENT_MASK  (ALIGNMENT - 1)
> |
> | So either the allocator alignment needs to be increased, or the
> | PyGC_Head alignment needs to be decreased.
>
> Is this a known issue?
As far as I can see, it has not been fixed on
> the 2.7 branch.
>
> (Store merging is a relatively new GCC feature. Among other things,
> this means that on x86-64, for sufficiently aligned pointers, vector
> instructions are used to update multiple struct fields at once. These
> vector instructions can trigger alignment traps, similar to what
> happens on some other architectures for scalars.)
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com

From chris.barker at noaa.gov  Tue Jan 30 18:43:42 2018
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 30 Jan 2018 15:43:42 -0800
Subject: [Python-Dev] OS-X builds for 3.7.0
In-Reply-To: 
References: 
Message-ID: 

And maybe we could even get rid of the "Framework" builds......

> Please do not. These make life easier for doing things the Apple way for
signed sandboxed applications.

Thanks - good to hear there is a good reason for them. I've always thought
that Frameworks were designed with other use-cases, and didn't really help
with Python.

For the record, are you re-distributing the python.org builds, or
re-building yourself?

-CHB

Joining the discussion here from a ~cross-post on pythonmac-sig:
https://mail.python.org/pipermail/pythonmac-sig/2018-January/024283.html

-- 
Joni Orponen
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov  Tue Jan 30 18:48:54 2018
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 30 Jan 2018 15:48:54 -0800
Subject: [Python-Dev] OS-X builds for 3.7.0
In-Reply-To: 
References: <0101016148412d02-dcbced7d-149f-4ec8-bbce-26f721c49fda-000000@us-west-2.amazonses.com>
Message-ID: 

> It would be nice to do more things the Apple way, including porting to
modern runtime feature availability check cascades of the Cocoa APIs and
using the Apple provided system Frameworks. This seems like a rather major
workload and should be targeting 3.8.

Yeah - too much to do for 3.7 at this stage.

But what about dropping 32 bit and maybe bumping the OS up? Maybe 10.11 is
too new, but something newer than 10.6?

Or maybe 10.11 is a good target - looks like it's been around 2+ years, and
Apple now provides free updates.

-CHB

From steve at pearwood.info  Tue Jan 30 20:28:57 2018
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 31 Jan 2018 12:28:57 +1100
Subject: [Python-Dev] Friendly reminder: be kind to one another
In-Reply-To: 
References: <20180130181034.49cde406@fsol>
Message-ID: <20180131012857.GI26553@ando.pearwood.info>

On Tue, Jan 30, 2018 at 06:00:36PM +0000, Paul Moore wrote:
> On 30 January 2018 at 17:10, Antoine Pitrou wrote:
> >
> > This is missing a bit of context. Is this about python-dev? Some
> > other ML?
>
> While I'll admit to being curious about what prompted this, honestly,
> it's none of my business - and so I think that Brett's posting should
> be viewed as a general reminder, without any specific context implied.

But specific context was not just implied but explicitly alluded to:
Brett stated that

"the last 3 days I have had two situations come up where I was asked
for my opinion in regards to possible CoC violations"

so it is our business and every one of us should be wondering what we
personally might have written that could have been a possible CoC
violation, on this or some other list.
As Brett says, none of us are beyond reproach.

-- 
Steve

From brett at python.org  Tue Jan 30 21:02:36 2018
From: brett at python.org (Brett Cannon)
Date: Wed, 31 Jan 2018 02:02:36 +0000
Subject: [Python-Dev] Friendly reminder: be kind to one another
In-Reply-To: <20180131012857.GI26553@ando.pearwood.info>
References: <20180130181034.49cde406@fsol>
 <20180131012857.GI26553@ando.pearwood.info>
Message-ID: 

On Tue, Jan 30, 2018, 17:30 Steven D'Aprano, wrote:

> On Tue, Jan 30, 2018 at 06:00:36PM +0000, Paul Moore wrote:
> > On 30 January 2018 at 17:10, Antoine Pitrou wrote:
> > >
> > > This is missing a bit of context. Is this about python-dev? Some
> > > other ML?
> >
> > While I'll admit to being curious about what prompted this, honestly,
> > it's none of my business - and so I think that Brett's posting should
> > be viewed as a general reminder, without any specific context implied.
>
> But specific context was not just implied but explicitly alluded to:
> Brett stated that
>
> "the last 3 days I have had two situations come up where I was asked
> for my opinion in regards to possible CoC violations"
>
> so it is our business and every one of us should be wondering what we
> personally might have written that could have been a possible CoC
> violation, on this or some other list.
>

In both instances the people who were involved have been spoken to, so if
no one talked with you about how they felt then it wasn't you (i.e. I'm not
being vague to make everyone tread carefully in case it's them). As I said,
people just asked for my opinion and I provided it.

At this point please just ignore my comment about what personally triggered
the email. It's not the focal point of what I was hoping to communicate,
just so the email didn't seem totally random is all.

-Brett

> As Brett says, none of us are beyond reproach.
>
> --
> Steve
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Tue Jan 30 21:23:21 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 31 Jan 2018 12:23:21 +1000
Subject: [Python-Dev] Making "-j0" the default setting for the test suite?
In-Reply-To: 
References: 
Message-ID: 

On 30 January 2018 at 02:39, Victor Stinner wrote:
>> * "-j1" would explicitly turn off multiprocessing
>
> Running tests "sequentially" but run them in one subprocess per test
> file is interesting for test isolation. Running tests one by one
> reduces the risk of triggering a race condition (test only failing
> when the system load is high).
>
> -jN was always documented as "use multiprocessing".
>
> Maybe we need a new option to explicitly disable multiprocessing instead?
>
> vstinner at apu$ ./python -m test
> Run tests sequentially
>
> vs
>
> vstinner at apu$ ./python -m test -j1
> Run tests in parallel using 1 child processes

Hmm, that's a good point. Maybe a less intrusive alternative would be
akin to what we did with the configure script for non-optimised
builds: when we display the total duration at the end, append a note
in the serial execution case.

Something like:

    Total duration: 16 minutes 33 seconds (serial execution, pass
'-j0' for parallel execution)

Such a change would be a safe way to nudge new contributors towards
"./python -m test -j0" for faster local testing, without risking
backwards compatibility issues with existing test suite invocations in
other contexts.

Cheers,
Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From benjamin at python.org Wed Jan 31 02:20:18 2018 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 30 Jan 2018 23:20:18 -0800 Subject: [Python-Dev] Python 2.7, long double vs allocator alignment, GCC 8 on x86-64 In-Reply-To: References: <874ln3w4yf.fsf@mid.deneb.enyo.de> Message-ID: <1517383218.2794976.1254214168.321B1FB4@webmail.messagingengine.com> Yes, changing obmalloc.c's alignment guarantees would definitely be the easiest solution. I think someone just needs to investigate whether it wastes a lot of memory. On Tue, Jan 30, 2018, at 13:22, Gregory P. Smith wrote: > I'm curious if changing the obmalloc.c ALIGNMENT and ALIGNMENT_SHIFT > defines is sufficient to avoid ABI breakage. > > -gps > > On Tue, Jan 30, 2018 at 1:20 PM Gregory P. Smith wrote: > > > The proper fix for this in the code would likely break ABI compatibility > > (ie: not possible in python 2.7 or any other stable release). > > > > Clang's UBSAN (undefined behavior sanitizer) has been flagging this one > > for a long time. > > > > In Python 3 a double is used instead of long double since 2012 as I did > > some digging at the time: > > https://github.com/python/cpython/commit/e348c8d154cf6342c79d627ebfe89dfe9de23817 > > > > -gps > > > > On Tue, Jan 30, 2018 at 10:59 AM Florian Weimer wrote: > > > >> I hope this is the right list for this kind of question. 
We recently
> >> tried to build Python 2.6 with GCC 8, and ran into this issue:
> >>
> >>
> >>
> >> Also quoting for context:
> >>
> >> | PyInstance_NewRaw contains this code:
> >> |
> >> |     inst = PyObject_GC_New(PyInstanceObject, &PyInstance_Type);
> >> |     if (inst == NULL) {
> >> |         Py_DECREF(dict);
> >> |         return NULL;
> >> |     }
> >> |     inst->in_weakreflist = NULL;
> >> |     Py_INCREF(klass);
> >> |     inst->in_class = (PyClassObject *)klass;
> >> |     inst->in_dict = dict;
> >> |     _PyObject_GC_TRACK(inst);
> >> |
> >> | _PyObject_GC_TRACK expands to:
> >> |
> >> | #define _PyObject_GC_TRACK(o) do { \
> >> |     PyGC_Head *g = _Py_AS_GC(o); \
> >> |     if (g->gc.gc_refs != _PyGC_REFS_UNTRACKED) \
> >> |         Py_FatalError("GC object already tracked"); \
> >> |     ...
> >> |
> >> | Via:
> >> |
> >> | #define _Py_AS_GC(o) ((PyGC_Head *)(o)-1)
> >> |
> >> | We get to this:
> >> |
> >> | /* GC information is stored BEFORE the object structure. */
> >> | typedef union _gc_head {
> >> |     struct {
> >> |         union _gc_head *gc_next;
> >> |         union _gc_head *gc_prev;
> >> |         Py_ssize_t gc_refs;
> >> |     } gc;
> >> |     long double dummy; /* force worst-case alignment */
> >> | } PyGC_Head;
> >> |
> >> | PyGC_Head has 16-byte alignment. The net result is that
> >> |
> >> |     _PyObject_GC_TRACK(inst);
> >> |
> >> | promises to the compiler that inst is properly aligned for the
> >> | PyGC_Head type, but it is not: PyObject_GC_New returns a pointer which
> >> | is only 8-byte-aligned.
> >> |
> >> | Objects/obmalloc.c contains this:
> >> |
> >> | /*
> >> |  * Alignment of addresses returned to the user. 8-bytes alignment works
> >> |  * on most current architectures (with 32-bit or 64-bit address busses).
> >> |  * The alignment value is also used for grouping small requests in size
> >> |  * classes spaced ALIGNMENT bytes apart.
> >> |  *
> >> |  * You shouldn't change this unless you know what you are doing.
> >> |  */
> >> | #define ALIGNMENT       8   /* must be 2^N */
> >> | #define ALIGNMENT_SHIFT 3
> >> | #define ALIGNMENT_MASK  (ALIGNMENT - 1)
> >> |
> >> | So either the allocator alignment needs to be increased, or the
> >> | PyGC_Head alignment needs to be decreased.
> >>
> >> Is this a known issue? As far as I can see, it has not been fixed on
> >> the 2.7 branch.
> >>
> >> (Store merging is a relatively new GCC feature. Among other things,
> >> this means that on x86-64, for sufficiently aligned pointers, vector
> >> instructions are used to update multiple struct fields at once. These
> >> vector instructions can trigger alignment traps, similar to what
> >> happens on some other architectures for scalars.)
> >> _______________________________________________
> >> Python-Dev mailing list
> >> Python-Dev at python.org
> >> https://mail.python.org/mailman/listinfo/python-dev
> >> Unsubscribe:
> >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org
> >>
> >
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/benjamin%40python.org

From fw at deneb.enyo.de  Wed Jan 31 02:41:43 2018
From: fw at deneb.enyo.de (Florian Weimer)
Date: Wed, 31 Jan 2018 08:41:43 +0100
Subject: [Python-Dev] Python 2.7, long double vs allocator alignment,
 GCC 8 on x86-64
In-Reply-To: (Gregory P. Smith's message of "Tue, 30 Jan 2018 21:20:29 +0000")
References: <874ln3w4yf.fsf@mid.deneb.enyo.de>
Message-ID: <87sham4gqw.fsf@mid.deneb.enyo.de>

* Gregory P. Smith:

> The proper fix for this in the code would likely break ABI compatibility
> (ie: not possible in python 2.7 or any other stable release).
>
> Clang's UBSAN (undefined behavior sanitizer) has been flagging this one for
> a long time.
>
> In Python 3 a double is used instead of long double since 2012 as I did
> some digging at the time:
> https://github.com/python/cpython/commit/e348c8d154cf6342c79d627ebfe89dfe9de23817

A slightly more ABI-safe version of that change looks like this:

diff --git a/Include/objimpl.h b/Include/objimpl.h
index 55e83eced6..aa906144dc 100644
--- a/Include/objimpl.h
+++ b/Include/objimpl.h
@@ -248,6 +248,18 @@ PyAPI_FUNC(PyVarObject *) _PyObject_GC_Resize(PyVarObject *, Py_ssize_t);
 /* for source compatibility with 2.2 */
 #define _PyObject_GC_Del PyObject_GC_Del
 
+/* Former over-aligned definition of PyGC_Head, used to compute the
+   size of the padding for the new version below. */
+union _gc_head;
+union _gc_head_old {
+    struct {
+        union _gc_head *gc_next;
+        union _gc_head *gc_prev;
+        Py_ssize_t gc_refs;
+    } gc;
+    long double dummy;
+};
+
 /* GC information is stored BEFORE the object structure. */
 typedef union _gc_head {
     struct {
@@ -255,7 +267,8 @@ typedef union _gc_head {
         union _gc_head *gc_prev;
         Py_ssize_t gc_refs;
     } gc;
-    long double dummy; /* force worst-case alignment */
+    double dummy; /* force worst-case alignment */
+    char dummy_padding[sizeof(union _gc_head_old)];
 } PyGC_Head;
 
 extern PyGC_Head *_PyGC_generation0;

This preserves the offset used by _Py_AS_GC in case it has been built
into existing binaries. It may be more appropriate to do it this way
for Python 2.7. I think it's also more conservative than the allocator
changes.

From nad at python.org  Wed Jan 31 03:17:47 2018
From: nad at python.org (Ned Deily)
Date: Wed, 31 Jan 2018 03:17:47 -0500
Subject: [Python-Dev] 3.7.0b1 status
Message-ID: <6D3EC913-65AE-478E-9201-04E7099D7F20@python.org>

Just a quick update: thanks to all of you who worked long hours to get
features completed and merged in for the 3.7 feature code cutoff
yesterday. We release elves have been busy behind the scenes baking
goodies. So far everything looks OK.
But we're taking a little longer than usual: this is, in many ways, the most complicated milestone of the release cycle, since it involves creating a new release branch and other munging, and this is the first time we are doing this since we moved to our new git-based workflow last year and we want to get it right. We will have everything done and announced in not more than 24 hours from now. If you wish, feel free to merge new commits into the master branch for release in 3.8, with the understanding that any also destined for 3.7.0 will need to be cherrypicked after the 3.7 branch is available. Other branches (3.6, 2.7) are unaffected. Thanks for your patience! -- Ned Deily nad at python.org -- [] From mingw.android at gmail.com Wed Jan 31 03:31:27 2018 From: mingw.android at gmail.com (Ray Donnelly) Date: Wed, 31 Jan 2018 08:31:27 +0000 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Jan 30, 2018 6:47 PM, "Joni Orponen" wrote: On Tue, Jan 30, 2018 at 6:50 PM, Ray Donnelly wrote: > While we're making such macOS-build requests, any chance of building a > static interpreter too? We've been doing that on the Anaconda > Distribution since the 5.0 release in September and it seems to be > working well. > PyPy is also currently eyeing doing their macOS builds better: https://bitbucket.org/pypy/pypy/issues/2734/establish-a-build-and-release- pipeline-for What do the Anaconda static builds get built on? We have our own clang pseudo cross-compilers and use a macOS 10.9 SDK for all of our package compilation to achieve compatibility (this means we can compile on newer macOS just fine). We see a 1.1 to 1.2 times performance benefit over official releases as measured using 'python performance'. Apart from a static interpreter we also enable LTO and PGO and only build for 64-bit so I'm not sure how much each bit continues. 
Our recipe for python 3.6 can be found at: https://github.com/AnacondaRecipes/python-feedstock/tree/master/recipe -- Joni Orponen _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ mingw.android%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingw.android at gmail.com Wed Jan 31 03:32:42 2018 From: mingw.android at gmail.com (Ray Donnelly) Date: Wed, 31 Jan 2018 08:32:42 +0000 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Jan 31, 2018 8:31 AM, "Ray Donnelly" wrote: On Jan 30, 2018 6:47 PM, "Joni Orponen" wrote: On Tue, Jan 30, 2018 at 6:50 PM, Ray Donnelly wrote: > While we're making such macOS-build requests, any chance of building a > static interpreter too? We've been doing that on the Anaconda > Distribution since the 5.0 release in September and it seems to be > working well. > PyPy is also currently eyeing doing their macOS builds better: https://bitbucket.org/pypy/pypy/issues/2734/establis h-a-build-and-release-pipeline-for What do the Anaconda static builds get built on? We have our own clang pseudo cross-compilers and use a macOS 10.9 SDK for all of our package compilation to achieve compatibility (this means we can compile on newer macOS just fine). We see a 1.1 to 1.2 times performance benefit over official releases as measured using 'python performance'. Apart from a static interpreter we also enable LTO and PGO and only build for 64-bit so I'm not sure how much each bit continues. 
Our recipe for python 3.6 can be found at:

s/continues/contributes/

https://github.com/AnacondaRecipes/python-feedstock/tree/master/recipe

-- 
Joni Orponen
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/mingw. android%40gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victor.stinner at gmail.com  Wed Jan 31 05:00:38 2018
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 31 Jan 2018 11:00:38 +0100
Subject: [Python-Dev] Making "-j0" the default setting for the test suite?
In-Reply-To: 
References: 
Message-ID: 

2018-01-31 3:23 GMT+01:00 Nick Coghlan :
> Something like:
>
>     Total duration: 16 minutes 33 seconds (serial execution, pass
> '-j0' for parallel execution)
>
> Such a change would be a safe way to nudge new contributors towards
> "./python -m test -j0" for faster local testing, without risking
> backwards compatibility issues with existing test suite invocations in
> other contexts.

I have no strong opinion on using -j0 by default, but repeating
parallel vs serial execution in the summary is an excellent idea :-)

Victor

From j.orponen at 4teamwork.ch  Wed Jan 31 06:13:52 2018
From: j.orponen at 4teamwork.ch (Joni Orponen)
Date: Wed, 31 Jan 2018 12:13:52 +0100
Subject: [Python-Dev] OS-X builds for 3.7.0
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 31, 2018 at 12:43 AM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> And maybe we could even get rid of the "Framework" builds......
>>
>
> Please do not. These make life easier for doing things the Apple way for
> signed sandboxed applications.
>
> Thanks - good to hear there is a good reason for them. I've always thought
> that Frameworks were designed with other use-cases, and didn't really help
> with Python.
>
> For the record, are you re-distributing the python.org builds, or
> re-building yourself?
>

We are re-building ourselves. Seems we've cooked up something not too
dissimilar to what Anaconda is doing, but less generic and covering less
corner cases.

-- 
Joni Orponen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From j.orponen at 4teamwork.ch  Wed Jan 31 06:16:21 2018
From: j.orponen at 4teamwork.ch (Joni Orponen)
Date: Wed, 31 Jan 2018 12:16:21 +0100
Subject: [Python-Dev] OS-X builds for 3.7.0
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 31, 2018 at 9:31 AM, Ray Donnelly wrote:

> We see a 1.1 to 1.2 times performance benefit over official releases as
> measured using 'python performance'.
>
> Apart from a static interpreter we also enable LTO and PGO and only build
> for 64-bit so I'm not sure how much each bit continues. Our recipe for
> python 3.6 can be found at:
>

Do you metrify LTO and PGO independent of each other as well or only the
"enable everything" combo? I've had mixed results with LTO so far, but this
is probably hardware / compiler combination specific.

-- 
Joni Orponen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mingw.android at gmail.com  Wed Jan 31 07:12:18 2018
From: mingw.android at gmail.com (Ray Donnelly)
Date: Wed, 31 Jan 2018 12:12:18 +0000
Subject: [Python-Dev] OS-X builds for 3.7.0
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 31, 2018 at 11:16 AM, Joni Orponen wrote:
> On Wed, Jan 31, 2018 at 9:31 AM, Ray Donnelly
> wrote:
>>
>> We see a 1.1 to 1.2 times performance benefit over official releases as
>> measured using 'python performance'.
>>
>> Apart from a static interpreter we also enable LTO and PGO and only build
>> for 64-bit so I'm not sure how much each bit continues. Our recipe for
>> python 3.6 can be found at:
>
>
> Do you metrify LTO and PGO independent of each other as well or only the
> "enable everything" combo?
I've had mixed results with LTO so far, but this > is probably hardware / compiler combination specific. I've never found enough time to take detailed metrics, sorry. Maybe one day? Looking at my performance graphs again: Against the official CPython 3.6 (probably .3 or .4) release I see: 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms) 5 that are >=1.5x,<1.6x faster. 13 that are >=1.4x,<1.5x faster. 21 that are >=1.3x,<1.4x faster. 14 that are >=1.2x,<1.3x faster. 5 that are >=1.1x,<1.2x faster. 0 that are < 1.1x faster/slower. Pretty good numbers overall I think. > > -- > Joni Orponen > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/mingw.android%40gmail.com > From songofacandy at gmail.com Wed Jan 31 07:20:24 2018 From: songofacandy at gmail.com (INADA Naoki) Date: Wed, 31 Jan 2018 21:20:24 +0900 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: > > Against the official CPython 3.6 (probably .3 or .4) release I see: > 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms) > 5 that are >=1.5x,<1.6x faster. > 13 that are >=1.4x,<1.5x faster. > 21 that are >=1.3x,<1.4x faster. > 14 that are >=1.2x,<1.3x faster. > 5 that are >=1.1x,<1.2x faster. > 0 that are < 1.1x faster/slower. > > Pretty good numbers overall I think. > > Yay!! Congrats for all of us! -- INADA Naoki From victor.stinner at gmail.com Wed Jan 31 08:06:00 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 31 Jan 2018 14:06:00 +0100 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: There is https://speed.python.org/comparison/ to compare Python 2.7, 3.5, 3.6 and master (future 3.7). Victor Le 31 janv. 
2018 13:14, "Ray Donnelly" a écrit :

> On Wed, Jan 31, 2018 at 11:16 AM, Joni Orponen
> wrote:
> > On Wed, Jan 31, 2018 at 9:31 AM, Ray Donnelly
> > wrote:
> >>
> >> We see a 1.1 to 1.2 times performance benefit over official releases as
> >> measured using 'python performance'.
> >>
> >> Apart from a static interpreter we also enable LTO and PGO and only
> build
> >> for 64-bit so I'm not sure how much each bit continues. Our recipe for
> >> python 3.6 can be found at:
> >
> >
> > Do you metrify LTO and PGO independent of each other as well or only the
> > "enable everything" combo? I've had mixed results with LTO so far, but
> this
> > is probably hardware / compiler combination specific.
>
> I've never found enough time to take detailed metrics, sorry. Maybe
> one day? Looking at my performance graphs again:
>
> Against the official CPython 3.6 (probably .3 or .4) release I see:
> 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms)
> 5 that are >=1.5x,<1.6x faster.
> 13 that are >=1.4x,<1.5x faster.
> 21 that are >=1.3x,<1.4x faster.
> 14 that are >=1.2x,<1.3x faster.
> 5 that are >=1.1x,<1.2x faster.
> 0 that are < 1.1x faster/slower.
>
> Pretty good numbers overall I think.
>
>
>
>
> >
> > --
> > Joni Orponen
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> > https://mail.python.org/mailman/options/python-dev/ mingw.android%40gmail.com
> >
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ victor.stinner%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From steve at holdenweb.com Wed Jan 31 10:08:52 2018 From: steve at holdenweb.com (Steve Holden) Date: Wed, 31 Jan 2018 15:08:52 +0000 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: The horizontal axis labelling in that graph is useless with so many tests included! Would a graphic with hover labels over the bars be more useful? Steve Holden On Wed, Jan 31, 2018 at 1:06 PM, Victor Stinner wrote: > There is https://speed.python.org/comparison/ to compare Python 2.7, 3.5, > 3.6 and master (future 3.7). > > Victor > > Le 31 janv. 2018 13:14, "Ray Donnelly" a ?crit : > >> On Wed, Jan 31, 2018 at 11:16 AM, Joni Orponen >> wrote: >> > On Wed, Jan 31, 2018 at 9:31 AM, Ray Donnelly >> > wrote: >> >> >> >> We see a 1.1 to 1.2 times performance benefit over official releases as >> >> measured using 'python performance'. >> >> >> >> Apart from a static interpreter we also enable LTO and PGO and only >> build >> >> for 64-bit so I'm not sure how much each bit continues. Our recipe for >> >> python 3.6 can be found at: >> > >> > >> > Do you metrify LTO and PGO independent of each other as well or only the >> > "enable everything" combo? I've had mixed results with LTO so far, but >> this >> > is probably hardware / compiler combination specific. >> >> I've never found enough time to take detailed metrics, sorry. Maybe >> one day? Looking at my performance graphs again: >> >> Against the official CPython 3.6 (probably .3 or .4) release I see: >> 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms) >> 5 that are >=1.5x,<1.6x faster. >> 13 that are >=1.4x,<1.5x faster. >> 21 that are >=1.3x,<1.4x faster. >> 14 that are >=1.2x,<1.3x faster. >> 5 that are >=1.1x,<1.2x faster. >> 0 that are < 1.1x faster/slower. >> >> Pretty good numbers overall I think. 
>> >> >> >> >> > >> > -- >> > Joni Orponen >> > >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> > https://mail.python.org/mailman/options/python-dev/mingw. >> android%40gmail.com >> > >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor. >> stinner%40gmail.com >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > steve%40holdenweb.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Jan 31 10:35:38 2018 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 31 Jan 2018 16:35:38 +0100 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: Click on "[x] horizontal" to exchange the two axis ;-) Victor 2018-01-31 16:08 GMT+01:00 Steve Holden : > The horizontal axis labelling in that graph is useless with so many tests > included! > > Would a graphic with hover labels over the bars be more useful? > > Steve Holden > > On Wed, Jan 31, 2018 at 1:06 PM, Victor Stinner > wrote: >> >> There is https://speed.python.org/comparison/ to compare Python 2.7, 3.5, >> 3.6 and master (future 3.7). >> >> Victor >> >> Le 31 janv. 2018 13:14, "Ray Donnelly" a ?crit : >>> >>> On Wed, Jan 31, 2018 at 11:16 AM, Joni Orponen >>> wrote: >>> > On Wed, Jan 31, 2018 at 9:31 AM, Ray Donnelly >>> > wrote: >>> >> >>> >> We see a 1.1 to 1.2 times performance benefit over official releases >>> >> as >>> >> measured using 'python performance'. 
>>> >> >>> >> Apart from a static interpreter we also enable LTO and PGO and only >>> >> build >>> >> for 64-bit so I'm not sure how much each bit continues. Our recipe for >>> >> python 3.6 can be found at: >>> > >>> > >>> > Do you metrify LTO and PGO independent of each other as well or only >>> > the >>> > "enable everything" combo? I've had mixed results with LTO so far, but >>> > this >>> > is probably hardware / compiler combination specific. >>> >>> I've never found enough time to take detailed metrics, sorry. Maybe >>> one day? Looking at my performance graphs again: >>> >>> Against the official CPython 3.6 (probably .3 or .4) release I see: >>> 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms) >>> 5 that are >=1.5x,<1.6x faster. >>> 13 that are >=1.4x,<1.5x faster. >>> 21 that are >=1.3x,<1.4x faster. >>> 14 that are >=1.2x,<1.3x faster. >>> 5 that are >=1.1x,<1.2x faster. >>> 0 that are < 1.1x faster/slower. >>> >>> Pretty good numbers overall I think. >>> >>> >>> >>> >>> > >>> > -- >>> > Joni Orponen >>> > >>> > _______________________________________________ >>> > Python-Dev mailing list >>> > Python-Dev at python.org >>> > https://mail.python.org/mailman/listinfo/python-dev >>> > Unsubscribe: >>> > >>> > https://mail.python.org/mailman/options/python-dev/mingw.android%40gmail.com >>> > >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/steve%40holdenweb.com >> > From steve at holdenweb.com Wed Jan 31 11:01:50 2018 From: steve at holdenweb.com (Steve Holden) Date: Wed, 31 Jan 2018 16:01:50 +0000 
Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: Doh! Thank you. Steve Holden -------------- next part -------------- An HTML attachment was scrubbed... URL:
From olegs at traiana.com Wed Jan 31 10:04:04 2018 From: olegs at traiana.com (Oleg Sivokon) Date: Wed, 31 Jan 2018 15:04:04 +0000 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: Hello list. I'll give some background before asking my question in more detail. I've been tasked with writing some infrastructure code that needs to talk to Kubernetes. (Kubernetes is popular software for managing and automating virtualization / containerization of cloud services.) One of the requirements was that the code be written in Python 3.X. The tasks my code was supposed to perform on Kubernetes would be something like cluster creation from specification, deletion of all or parts of the cluster, providing realtime statistics of cluster usage, etc. There were a few prototype scripts written in Bash using kubectl (the official client, written in Go). My first reaction was to look for an official client for Kubernetes written in Python. There is one official client for Kubernetes, with a single maintainer, impossible-to-parse documentation, containing mostly generated code. It is nigh impossible to use. Here I need to explain that for whatever reason the Kubernetes team decided to write their HTTP API in such a way that the server is "dumb" and the client must be "smart" in order to do anything useful. For instance, if you have a description of your cluster, you cannot just send this description to the server and hope that it will know how to create the cluster from the description. You need to make multiple API calls (perhaps hundreds of them) to arrange for the server to create the cluster from the description. Since the official client is no help (it really only mirrors the HTTP API), I searched for other clients. There are two more.
None is in good shape, and none comes even close to being able to do what kubectl can. There is one more client that shells out calls to kubectl. It implements only a small subset of kubectl commands, and... it's a lot of parsing of standard output with regular expressions and magic. Well... I was given a lot of time to investigate other options for dealing with this project, so I decided: what if I can compile kubectl into a shared library and write a Python extension that links against that library? And, indeed, after a few days I came up with such an extension. It worked!.. On Linux... Now all I had to do was to re-create my success on Windows (most of the employees in my company use Windows). At first I thought that I'd cross-compile on Linux using MinGW. I compiled the Go shared library into a DLL, then tried to compile my Python extension and... it didn't work. I downloaded VirtualBox and some Windows images, etc... tried to compile on Windows. It didn't work. I started asking around, and was told that even though for some earlier versions of Python this was kind of possible, for Python 3.5, 3.6 it is not. You must use MSVC to compile Python extensions. No way around it. Now, since Go won't compile with MSVC, I'll have to scrap my project and spend many weeks re-implementing kubectl. Here's my question: Why? Why did you choose to use a non-free compiler, which also makes cross-compilation impossible? There wasn't really a reason not to choose MinGW as a way to compile extensions on Windows (Ruby does that, Go uses MinGW, perhaps some others too). It would've made things like CI and packaging so much easier... What do Python users / developers get from using MSVC instead? Thank you. Oleg This communication and all information contained in or attached to it is confidential, intended solely for the addressee, may be legally privileged and is the intellectual property of one of the companies of NEX Group plc ("NEX") or third parties.
If you are not the intended addressee or receive this message in error, please immediately delete all copies of it and notify the sender. We have taken precautions to minimise the risk of transmitting software viruses, but we advise you to carry out your own virus checks on any attachments. We do not accept liability for any loss or damage caused by software viruses. NEX reserves the right to monitor all communications. We do not accept any legal responsibility for the content of communications, and no communication shall be considered legally binding. Furthermore, if the content of this communication is personal or unconnected with our business, we accept no liability or responsibility for it. NEX Group plc is a public limited company registered in England and Wales under number 10013770 and certain of its affiliates are authorised and regulated by regulatory authorities. For further regulatory information please see www.NEX.com.
From mingw.android at gmail.com Wed Jan 31 14:07:18 2018 From: mingw.android at gmail.com (Ray Donnelly) Date: Wed, 31 Jan 2018 19:07:18 +0000 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 3:04 PM, Oleg Sivokon wrote: > [snip] > > Here's my question: Why? > > Why did you choose to use a non-free compiler, which also makes cross-compilation impossible? There wasn't really a reason not to choose MinGW as a way to compile extensions on Windows (Ruby does that, Go uses MinGW, perhaps some others too). It would've made things like CI and packaging so much easier... What do Python users / developers get from using MSVC instead? You can compile extension modules with mingw-w64 just fine (modulo a few gotchas). In the Anaconda Distribution we do this for a few packages, for example rpy2. You can see the build script used here: https://github.com/AnacondaRecipes/rpy2-feedstock/blob/master/recipe/bld.bat (disclaimer: I work for Anaconda Inc on this stuff). MSYS2 also has mingw-w64 builds of Python that might meet your needs. It is pretty popular in some parts of the open source on Windows world (disclaimer: I did a lot of the work for this stuff on MSYS2).
From tjreedy at udel.edu Wed Jan 31 14:35:13 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 31 Jan 2018 14:35:13 -0500 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: On 1/31/2018 10:04 AM, Oleg Sivokon wrote: > Why did you choose to use a non-free compiler, which also makes cross-compilation impossible? There wasn't really a reason not to choose MinGW as Python was ported to DOS years before the initial 1998 release of the mingw32 predecessor. There has been some work to make Python also compile with MinGW, but AFAIK, no one has volunteered to persist in doing a complete patch for MinGW. You can look at the tracker to get some idea of what is missing. I presume contributions are still allowed and welcome. > a way to compile extensions on Windows (Ruby does that, Go uses MinGW, perhaps some others too). It would've made things like CI and packaging so much easier... What do Python users / developers get from using MSVC instead? Timely access to all features added to Windows, such as 64-bit binaries. Access to everything compiled with the compiler nearly everyone else uses.
Help to make Python work on DOS, now Windows, from people who only know and use Microsoft tools. > communication and all information contained in or attached to it is confidential, intended solely for the addressee, Please omit this noise. Posting to pydev is posting to the world, including multiple mirrors and repositories. -- Terry Jan Reedy
From brett at snarky.ca Tue Jan 30 22:35:40 2018 From: brett at snarky.ca (Brett Cannon) Date: Wed, 31 Jan 2018 03:35:40 +0000 Subject: [Python-Dev] Backfilling 'awaiting' labels Message-ID: I have written a script that will go through and backfill the 'awaiting' label on older pull requests based on the review state as it stands today. A comment will be left if an "awaiting changes" label is set, explaining that we're backfilling, and if you're ready for a change review then leave the magical comment to trigger Bedevere. My plan is to limit this to only 20 total comments within a day so as to not overwhelm any single person with notifications. I will also run this script manually so there's no guarantee this will even occur every day. Assuming that the 20 comments/day limit seems reasonable to people, I will probably do the inaugural run tomorrow, which will add an 'awaiting' label to 158 issues (which should be more than half of the issues lacking an 'awaiting' label). -------------- next part -------------- An HTML attachment was scrubbed... URL:
From python at mrabarnett.plus.com Wed Jan 31 15:31:09 2018 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 31 Jan 2018 20:31:09 +0000 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: <40c62d12-2491-097e-aa15-7dff96acb04a@mrabarnett.plus.com> On 2018-01-31 19:07, Ray Donnelly wrote: > On Wed, Jan 31, 2018 at 3:04 PM, Oleg Sivokon wrote: >> [snip] > > You can compile extension modules with mingw-w64 just fine (modulo a > few gotchas). > [snip] I build the wheels (binaries for Windows) for the regex module using mingw-w64, and they also work just fine. I can also build using Microsoft Visual Studio Community 2017 (which is free) and they work.
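As a concrete illustration of the compiler-matching issue being discussed here — a sketch of mine, not code from the thread, and the helper name `built_with_msvc` is made up — a script can ask the running interpreter which compiler produced it and which filename suffix its compiled extensions must carry, which is the first thing to check before mixing toolchains:

```python
import platform
import sysconfig

# Which compiler built this interpreter?
# e.g. 'MSC v.1900 64 bit (AMD64)' on an official Windows build,
# or 'GCC 7.3.0' on an MSYS2/mingw-w64 build.
compiler = platform.python_compiler()

# The suffix compiled extension modules must use for this interpreter,
# e.g. '.cp36-win_amd64.pyd' on Windows or '.cpython-36m-x86_64-linux-gnu.so'.
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")

def built_with_msvc() -> bool:
    """True when this CPython was compiled with Microsoft Visual C++."""
    return compiler.startswith("MSC")

print(compiler)
print(ext_suffix)
print(built_with_msvc())
```

With distutils, passing `--compiler=mingw32` to `build_ext` selects the MinGW toolchain when it is installed; whether the resulting extension then loads cleanly still depends on matching the interpreter's CRT and ABI, which is the "few gotchas" caveat above.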
[snip]
From steve.dower at python.org Wed Jan 31 18:05:33 2018 From: steve.dower at python.org (Steve Dower) Date: Thu, 1 Feb 2018 10:05:33 +1100 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: Because every other supported platform builds using the native tools, so why shouldn't the one with the most users? I'm likely biased because I work there and I'm the main intermediary with python-dev, but these days Microsoft is one of the strongest supporters of CPython. We employ the most core developers of any private company and we all are allowed work time to contribute, we provide full access to our development tools and platforms to all core developers and some prominent projects, and we've made fixes, enhancements and releases of core products such as the CRT, MSVC, Visual Studio, Visual Studio Code, and Azure SPECIFICALLY to support CPython development and users. As far as I know, ALL the Windows buildbots are running on Azure subscriptions that Microsoft provides (though managed by some awesome volunteers). You'll see us at PyCon US under the biggest banner and we'll have a booth filled with engineers and not recruiters. Crash reports from thousands of opted-in users come into our systems and have directly led to both CPython and Windows bug fixes. Meanwhile, most of the MinGW contributions have been complaints and drive-by patches. We (python-dev) are not opposed to supporting a second compiler for Windows, and honestly I'd love for extensions built with other compilers to be fully compatible with our main binary release, but the sacrifice involved in switching is significant and there's no apparent commitment from the alternative options. (Note that I'm not saying Microsoft's support is conditional on our compiler being used. But our ability to contribute technically would be greatly reduced if we didn't have the inside access that we do.)
And as has been mentioned, MSVC was selected before the other options were feasible. Python is a much older tool than those others, and so uses the tools that were best at the time. So in my opinion at least, the reasoning for selecting MSVC was perfectly sound, and the reasoning for continuing with it is perfectly sound. Unwillingness on the part of package developers even to test on Windows before releasing a wheel for it is not a compelling reason to change anything. Cheers, Steve Top-posted from my Windows phone From: Oleg Sivokon Sent: Thursday, February 1, 2018 5:40 To: python-dev at python.org Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? [snip] -------------- next part -------------- An HTML attachment was scrubbed... URL:
From chris.barker at noaa.gov Wed Jan 31 18:18:39 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 31 Jan 2018 15:18:39 -0800 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 3:13 AM, Joni Orponen wrote: > On Wed, Jan 31, 2018 at 12:43 AM, Chris Barker - NOAA Federal < > chris.barker at noaa.gov> wrote: > >> And maybe we could even get rid of the "Framework" builds...... >>> >> >> Please do not.
These make life easier for doing things the Apple way for >> signed sandboxed applications. >> >> For the record, are you re-distributing the python.org builds, or >> re-building yourself? >> > > We are re-building ourselves. > Then it makes no difference to you if the python.org binaries are Framework builds... though maybe you want the configure target available. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL:
From chris.barker at noaa.gov Wed Jan 31 18:23:17 2018 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 31 Jan 2018 15:23:17 -0800 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 4:20 AM, INADA Naoki wrote: > > Against the official CPython 3.6 (probably .3 or .4) release I see: > > 1 that is 2.01x faster (python-startup, 24.6ms down to 12.2ms) > > 5 that are >=1.5x,<1.6x faster. > > 13 that are >=1.4x,<1.5x faster. > > 21 that are >=1.3x,<1.4x faster. > > 14 that are >=1.2x,<1.3x faster. > > 5 that are >=1.1x,<1.2x faster. > > 0 that are < 1.1x faster/slower. > > > > Pretty good numbers overall I think. > > Yay!! Congrats for all of us! > I'm confused -- I _think_ these are performance improvements of the Anaconda build over the python.org build for OS-X -- so congrats to the Anaconda team :-) But a hint that maybe we should do the python.org builds differently! -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From tjreedy at udel.edu Wed Jan 31 19:27:09 2018 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 31 Jan 2018 19:27:09 -0500 Subject: [Python-Dev] OS-X builds for 3.7.0 In-Reply-To: References: Message-ID: On 1/31/2018 6:23 PM, Chris Barker wrote: > On Wed, Jan 31, 2018 at 4:20 AM, INADA Naoki > wrote: > > [snip] > > Pretty good numbers overall I think. > > Yay!! Congrats for all of us! > > I'm confused -- I _think_ these are performance improvements of the > Anaconda build over the python.org build for OS-X -- so congrats to the > Anaconda team :-) > > But a hint that maybe we should do the python.org builds differently! Ned Deily is in charge of the Mac build (as well as being the current release manager). Within the last week, he revised the official builds (now two, I believe) for 3.7.0b1, due in a day or so. One will be a future-oriented 64-bit build. The PR and What's New have more. He may not be reading this thread, but will read MacOS tracker issues with a specific proposal, data and a patch. Comparisons should be against the current master or an installed 3.7.0b1. -- Terry Jan Reedy
From greg at krypto.org Wed Jan 31 19:42:02 2018 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 01 Feb 2018 00:42:02 +0000 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: <3zWzR85rP2zFrJV@mail.python.org> References: <3zWzR85rP2zFrJV@mail.python.org> Message-ID: TL;DR of Steve's post - MSVC is the compiler of choice for most serious software on Windows. So we use it to best integrate with the world. There is no compelling reason to change that.
The free-as-in-beer MSVC community edition is finally non-sucky (their earlier efforts were crippled, they seem to have learned the lesson) There are other viable Windows compilers. If we want to support those in CPython someone needs to contribute the work to do so, ongoing maintenance, and buildbots. I'd love to see a Clang based Windows build (Google Chrome is built using that). But I have no motivating reason to do the work. I *believe* such a build could be made to integrate and inter-operate fully with MSVC builds and ABIs. We could *probably* even make cross-compilation of extensions from Linux -> Windows work that way. We're highly unlikely to ever stop shipping python.org Windows binaries built with anything other than MSVC unless Microsoft takes a turn toward the dark side again. -gps On Wed, Jan 31, 2018 at 3:07 PM Steve Dower wrote: > Because every other supported platform builds using the native tools, so > why shouldn?t the one with the most users? > > > > I?m likely biased because I work there and I?m the main intermediary with > python-dev, but these days Microsoft is one of the strongest supporters of > CPython. We employ the most core developers of any private company and we > all are allowed work time to contribute, we provide full access to our > development tools and platforms to all core developers and some prominent > projects, we?ve made fixes, enhancements and releases or core products such > as the CRT, MSVC, Visual Studio, Visual Studio Code, and Azure SPECIFICALLY > to support CPython development and users. As far as I know, ALL the Windows > buildbots are running on Azure subscriptions that Microsoft provides > (though managed by some awesome volunteers). You?ll see us at PyCon US > under the biggest banner and we?ll have a booth filled with engineers and > not recruiters. Crash reports from thousands of opted-in users come into > our systems and have directly lead to both CPython and Windows bug fixes. 
> > > > Meanwhile, most of the MinGW contributions have been complaints and > drive-by patches. We (python-dev) are not opposed to supporting a second > compiler for Windows, and honestly I?d love for extensions built with other > compilers to be fully compatible with our main binary release, but the > sacrifice involved in switching is significant and there?s no apparent > commitment from the alternative options. > > > > (Note that I?m not saying Microsoft?s support is conditional on our > compiler being used. But our ability to contribute technically would be > greatly reduced if we didn?t have the inside access that we do.) > > > > And as has been mentioned, MSVC was selected before the other options were > feasible. Python is a much older tool than those others, and so uses the > tools that were best at the time. > > > > So in my opinion at least, the reasoning for selecting MSVC was perfectly > sound, and the reasoning for continuing with it is perfectly sound. > Unwillingness on the part of package developers to not even test on Windows > before releasing a wheel for it is not a compelling reason to change > anything. > > > > Cheers, > > Steve > > > > Top-posted from my Windows phone > > > > *From: *Oleg Sivokon > *Sent: *Thursday, February 1, 2018 5:40 > *To: *python-dev at python.org > *Subject: *[Python-Dev] Why is Python for Windows compiled with MSVC? > > > > Hello list. > > > > I'll give some background before asking my question in more detail. > > > > I've been tasked with writing some infrastructure code that needs to talk > to Kubernetes. (Kubernetes is a popular software for managing and > automating virtualization / containerization of cloud services). One of > the requirements was that the code be written in Python 3.X. > > > > The tasks my code was supposed to perform on Kubernetes would be something > like cluster creation from specification, deletion of all or parts of the > cluster, providing realtime statistics of cluster usage etc. 
There were > few prototype scripts written in Bash using kubectl (official client > written in Go). > > > > My first reaction was to look for an official client for Kubernetes > written in Python. There is one official client for Kubernetes, with a > single maintainer, impossible to parse documentation, containing mostly > generated code. It is nigh impossible to use. Here I need to explain that > for whatever reason Kubernetes team decided to write their HTTP API in such > a way that the server is "dumb" and the client must be "smart" in order to > do anything useful. For instance, if you have a description of your > cluster, you cannot just send this description to the server and hope that > it will know how to create the cluster from description. You need to make > multiple API calls (perhaps hundreds of them) to arrange for the server to > create the cluster from description. > > > > Since the official client is no help (it really only mirrors the HTTP > API), I searched for other clients. There are two more. None is in good > shape, and none comes even close to being able to do what kubectl can. > > > > There is one more client that shells out calls to kubectl. It implements > only a small subset of kubectl commands, and... it's a lot of parsing of > standard output with regular expressions and magic. > > > > Well... I was given a lot of time to investigate other options for dealing > with this project, so I decided, what if I can compile kubectl into a > shared library and write a Python extension that links against that > library. And, indeed, after few days I came up with such an extension. It > worked!.. On Linux... > > > > Now all I had to do was to re-create my success on Windows (most of the > employees in my company use Windows). At first I thought that I'd > cross-compile on Linux using MinGW. I compiled Go shared library into a > DLL, then tried to compile my Python extension and... it didn't work. 
I > downloaded VirtualBox and some Windows images, etc... tried to compile on > Windows. It didn't work. I started asking around, and was told that even > though for some earlier versions of Python this was kind of possible, for > Python 3.5, 3.6 it is not. You must use MSVC to compile Python > extensions. No way around it. > > Now, since Go won't compile with MSVC, I'll have to scrap my project and > spend many weeks re-implementing kubectl. > > > > Here's my question: Why? > > > > Why did you choose to use a non-free compiler, which also makes > cross-compilation impossible? There wasn't really a reason not to choose > MinGW as a way to compile extensions on Windows (Ruby does that, Go uses > MinGW, perhaps some others too). It would've made things like CI and > packaging so much easier... What do Python users / developers get from > using MSVC instead? > > > > Thank you. > > > > Oleg > > This communication and all information contained in or attached to it is > confidential, intended solely for the addressee, may be legally privileged > and is the intellectual property of one of the companies of NEX Group plc > ("NEX") or third parties. If you are not the intended addressee or receive > this message in error, please immediately delete all copies of it and > notify the sender. We have taken precautions to minimise the risk of > transmitting software viruses, but we advise you to carry out your own > virus checks on any attachments. We do not accept liability for any loss or > damage caused by software viruses. NEX reserves the right to monitor all > communications. We do not accept any legal responsibility for the content > of communications, and no communication shall be considered legally > binding. Furthermore, if the content of this communication is personal or > unconnected with our business, we accept no liability or responsibility for > it.
NEX Group plc is a public limited company registered in England and Wales under number 10013770 and certain of its > affiliates are authorised and regulated by regulatory authorities. For > further regulatory information please see www.NEX.com. > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcea at jcea.es Wed Jan 31 19:46:09 2018 From: jcea at jcea.es (Jesus Cea) Date: Thu, 1 Feb 2018 01:46:09 +0100 Subject: [Python-Dev] "threading.Lock().locked()" is not documented Message-ID: <06cf0afb-27ec-793e-61d1-3c0efb8da149@jcea.es> https://docs.python.org/3.6/library/threading.html doesn't document "threading.Lock().locked()", and it is something quite useful. In fact, it is used in "threading.py" itself. For instance, lines 109, 985, 1289. Is there any reason not to document it? (I didn't investigate other objects in the module). -- Jesús Cea Avión _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ Twitter: @jcea _/_/ _/_/ _/_/_/_/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "Love is putting your happiness in the happiness of another" - Leibniz -------------- next part -------------- A non-text attachment was scrubbed...
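[The behaviour Jesus refers to is straightforward to demonstrate: `Lock.locked()` reports, without blocking, whether the lock is currently held. A quick sketch:]

```python
import threading

lock = threading.Lock()
print(lock.locked())   # False: freshly created, not held

lock.acquire()
print(lock.locked())   # True: held (note locked() does not say by whom)

lock.release()
print(lock.locked())   # False again once released
```

This is exactly the sort of non-blocking introspection that threading.py itself uses internally, which is what makes the documentation gap surprising.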
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From nad at python.org Wed Jan 31 20:04:54 2018 From: nad at python.org (Ned Deily) Date: Wed, 31 Jan 2018 20:04:54 -0500 Subject: [Python-Dev] [IMPORTANT] post 3.7.0b1 development now open Message-ID: Here we are: 3.7.0b1 and feature code freeze! Congratulations and thanks to all of you who've contributed to the huge number of PEPs, features, bug fixes, and doc changes that have gone into 3.7 since feature development began back in September 2016, after 3.6.0b1, 3.6's feature freeze. Now that feature development for 3.7 is over, the challenge is to put the finishing touches on the features and documentation, squash bugs, and test test test. In the cpython repo, there is now a 3.7 branch. Starting now, all PRs destined for 3.7.0 should get cherry-picked from master to the 3.7 branch or just pushed to 3.7 if only applicable there. New features should continue to be pushed to the master branch for release in 3.8; no new features are now permitted in 3.7 unless you have contacted me and we have agreed on an extension (and all granted extensions will expire as of 3.7.0b2). As before, bug fixes appropriate for 3.6.x should continue to be cherry-picked to the 3.6 branch. I've updated the Developer's Guide to reflect the now-current workflow. Let me know if you find any bugs in it. Likewise, please contact me if you have any questions about the workflow or about whether a change is appropriate for 3.7 beta. The cpython repo on GitHub has been updated. You should now find that builds on the master branch produce a Python 3.8, rather than 3.7; you may want to clean your build directory. And there is now a 3.7 branch that you will need to use for 3.7 builds and pushes. There were several PRs that were merged to master over the last couple of days since we started 3.7.0b1 release engineering.
All but one of those have been cherry-picked into the new 3.7 branch and you should have seen messages for them. One was for a 3.8 feature and so was not backported. At the moment, the new 3.7 buildbots may not be fully operational but they should be soon. Likewise, docs.python.org may take up to 24 hours to reflect all the changes. Note that this is the first time we've done a feature freeze using our new git-based workflow, so it's likely that there might be a glitch or something overlooked. Please let us know if you suspect something or have a question. I'll be around here and/or on #python-dev. Also, don't forget to direct 3.8-related questions to Łukasz. Welcome on-board! To recap:

2018-01-31: 3.7 branch open for 3.7.0; 3.8.0 feature development begins
2018-01-31 to 2018-05-21: 3.7.0 beta phase (no new 3.7 features)
 - push PRs for new features, bugs, regressions, doc fixes to the master branch for release in 3.8
 - cherry-pick or push PRs for 3.7.0 (bug/regression/doc fixes) to the new 3.7 branch
 - cherry-pick or push select PRs for important bug/regression/doc fixes to 3.6 and 2.7 branches as appropriate
 - propose PRs to 3.5 and 3.4 branches for any identified security issues
2018-02-26: 3.7.0 beta 2 (next beta preview)
2018-03-26: 3.7.0 beta 3 (3.7.0 ABI freeze)
2018-04-30: 3.7.0 beta 4 (only critical and urgent fixes after this point)
2018-05-21: 3.7.0 release candidate 1 (3.7.0 code freeze, only emergency fixes after this point)
2018-06-15: 3.7.0 release
2019-10-20: 3.8.0 release (next planned feature release, see PEP 569)

Thank you all again for your great efforts so far on 3.7!
--Ned From nad at python.org Wed Jan 31 20:34:12 2018 From: nad at python.org (Ned Deily) Date: Wed, 31 Jan 2018 20:34:12 -0500 Subject: [Python-Dev] [RELEASE] Python 3.7.0b1 is now available for testing Message-ID: <9425596C-A92F-4B10-A8B7-98F4E827E8D0@python.org> On behalf of the Python development community and the Python 3.7 release team, I'm happy to announce the availability of Python 3.7.0b1. b1 is the first of four planned beta releases of Python 3.7, the next major release of Python, and marks the end of the feature development phase for 3.7. You can find Python 3.7.0b1 here: https://www.python.org/downloads/release/python-370b1/ Among the major new features in Python 3.7 are:

* PEP 538, Coercing the legacy C locale to a UTF-8 based locale
* PEP 539, A New C-API for Thread-Local Storage in CPython
* PEP 540, UTF-8 mode
* PEP 552, Deterministic pyc
* PEP 553, Built-in breakpoint()
* PEP 557, Data Classes
* PEP 560, Core support for typing module and generic types
* PEP 562, Module __getattr__ and __dir__
* PEP 563, Postponed Evaluation of Annotations
* PEP 564, Time functions with nanosecond resolution
* PEP 565, Show DeprecationWarning in __main__
* PEP 567, Context Variables

Please see "What's New In Python 3.7" for more information. Additional documentation for these features and for other changes will be provided during the beta phase. https://docs.python.org/3.7/whatsnew/3.7.html Beta releases are intended to give you the opportunity to test new features and bug fixes and to prepare your projects to support the new feature release. We strongly encourage you to test your projects with 3.7 during the beta phase and report issues found to https://bugs.python.org as soon as possible. While the release is feature complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase (2018-05-21).
Our goal is to have no ABI changes after beta 3 and no code changes after rc1. To achieve that, it will be extremely important to get as much exposure for 3.7 as possible during the beta phase. Attention macOS users: with 3.7.0b1, we are providing a choice of two binary installers. The new variant provides a 64-bit-only version for macOS 10.9 and later systems; this variant also now includes its own built-in version of Tcl/Tk 8.6. We welcome your feedback. Please keep in mind that this is a preview release and its use is not recommended for production environments. The next planned release of Python 3.7 will be 3.7.0b2, currently scheduled for 2018-02-26. More information about the release schedule can be found here: https://www.python.org/dev/peps/pep-0537/ -- Ned Deily nad at python.org -- [] From steve at pearwood.info Wed Jan 31 21:10:51 2018 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 1 Feb 2018 13:10:51 +1100 Subject: [Python-Dev] Why is Python for Windows compiled with MSVC? In-Reply-To: References: Message-ID: <20180201021051.GJ26553@ando.pearwood.info> On Wed, Jan 31, 2018 at 03:04:04PM +0000, Oleg Sivokon wrote: > Now, since Go won't compile with MSVC [...] Perhaps you should be asking Google why Go doesn't support the most popular C compiler on Windows. -- Steve